
Is the generation of text in an LLM determined by a random seed, just like procedurally generated worlds in video games?

2 Answers


Assuming all other sampling parameters (such as temperature and top_p) stay the same, an LLM's output is mostly determined by the random seed used during token sampling. The main exceptions come from non-determinism outside the sampler, for example non-deterministic GPU floating-point operations or differences in hardware and batching, which can shift the logits slightly and occasionally change which token is chosen even with the same seed.
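To make the role of the seed concrete, here is a minimal sketch (not any particular library's API) of temperature plus top-p (nucleus) sampling from a single logits vector, where the only randomness is a seeded RNG, so the same seed always picks the same token:

```python
import numpy as np

def sample_token(logits, temperature=0.8, top_p=0.9, seed=42):
    rng = np.random.default_rng(seed)            # seeded RNG -> reproducible draw
    logits = logits - logits.max()               # numerical stability
    probs = np.exp(logits / temperature)
    probs /= probs.sum()                         # softmax with temperature
    order = np.argsort(probs)[::-1]              # tokens sorted by probability
    cumulative = np.cumsum(probs[order])
    keep = order[: np.searchsorted(cumulative, top_p) + 1]  # nucleus (top-p) cutoff
    kept_probs = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=kept_probs)        # the random draw the seed controls

logits = np.array([2.0, 1.0, 0.5, 0.1])
print(sample_token(logits, seed=42) == sample_token(logits, seed=42))  # True: same seed, same token
print(sample_token(logits, seed=42) == sample_token(logits, seed=7))   # may differ: different seed
```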


Yes, the generation of text in a Large Language Model (LLM) can be controlled by a random seed, similar to how procedurally generated worlds in video games work. Setting a specific seed makes the sampling deterministic, so the same input produces the same output, which is useful for reproducibility. Without a fixed seed, the output varies from run to run, allowing for more diverse text generation.
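As a hedged usage sketch with the Hugging Face transformers library (assuming it is installed; "gpt2" is just a placeholder checkpoint), fixing the seed before each sampled generation makes the output repeat exactly on the same hardware and software stack:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Once upon a time", return_tensors="pt")

def generate(seed):
    set_seed(seed)                          # seeds Python, NumPy, and torch RNGs
    out = model.generate(**inputs, do_sample=True, temperature=0.8,
                         top_p=0.9, max_new_tokens=20,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

print(generate(0) == generate(0))   # True: same seed, same text (same hardware/software)
print(generate(0) == generate(1))   # usually False: a different seed varies the text
```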
