Friday, February 13

One of the coolest things about generative AI models, both large language models (LLMs) and diffusion-based image generators, is that they are "non-deterministic." That is, despite their reputation among some critics as "fancy autocorrect," generative AI models actually produce their outputs by sampling from a probability distribution over the most likely next tokens (units of information) as they fill out a response. Asking an LLM "What is the capital of France?" will have it sample its probability distribution for France…
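To make the sampling idea concrete, here is a minimal sketch of temperature-based next-token sampling. The logit values and token names are hypothetical illustrations, not taken from any real model: the point is that the output is drawn from a weighted distribution rather than always being the single most probable token.

```python
import math
import random

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample one token from a softmax distribution over raw logits.

    Higher temperature flattens the distribution (more randomness);
    lower temperature sharpens it toward the most probable token.
    """
    rng = rng or random.Random()
    tokens = list(logits.keys())
    scaled = [logits[t] / temperature for t in tokens]
    # Subtract the max before exponentiating for numerical stability
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice: usually "Paris", occasionally something else
    return rng.choices(tokens, weights=probs, k=1)[0]

# Hypothetical next-token logits after "What is the capital of France?"
logits = {"Paris": 9.0, "Lyon": 2.0, "Berlin": 0.5}
```

At a low temperature the sampler picks "Paris" essentially every time; at a high temperature the less likely tokens start to appear, which is exactly the non-determinism described above.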