
In stable diffusion, a negative prompt can be used to specify elements that should not be part of the generated image.
Example:

Prompt: Portrait photo of a man
Negative Prompt: mustache

The negative prompt is often necessary because most models have difficulty interpreting the following prompt correctly:

Prompt: Portrait photo of a man without mustache

and instead generate images of men with mustaches.
With LLMs, it is apparently also possible to specify a negative prompt (for example, in Text generation web UI).

[Screenshot of the parameters menu of the Text generation web UI with a red circle around the negative prompt field]

I would like to know how negative prompts work in this setting.
My current understanding of LLMs is that the prompt, along with the previous chat and other contextual information, forms the context. Based on this context, the model generates the token that is most likely to follow. It then appends that token to the context, generates the next token, and so on.
Where in this process does the negative prompt come in?
If negative prompts are simply inserted into the context, they seem redundant, since we could just include them directly in the prompt.
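The generation loop described above can be sketched in a few lines. The "model" here is a hypothetical stand-in (a lookup table of made-up scores), not a real LLM; the point is only the loop structure: score candidates, pick the best, append, repeat.

```python
def toy_model(context):
    # Hypothetical next-token scores keyed on the last context token.
    # A real LLM would condition on the entire context, not just the
    # final token; this table is purely illustrative.
    table = {
        "<start>": {"Portrait": 2.0, "A": 1.0},
        "Portrait": {"photo": 3.0, "of": 0.5},
        "photo": {"of": 2.5, "<end>": 0.1},
        "of": {"a": 2.0, "<end>": 0.2},
        "a": {"man": 1.8, "<end>": 0.3},
        "man": {"<end>": 2.0},
    }
    return table.get(context[-1], {"<end>": 1.0})

def generate(context, max_tokens=10):
    for _ in range(max_tokens):
        logits = toy_model(context)
        next_token = max(logits, key=logits.get)  # greedy pick
        if next_token == "<end>":
            break
        context = context + [next_token]
    return context

print(generate(["<start>"]))  # → ['<start>', 'Portrait', 'photo', 'of', 'a', 'man']
```

A negative prompt would have to hook into this loop somewhere; the question is where.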

  • Please be more specific about how a negative prompt is specified in the referenced tool.
    – Wicket
    Commented Aug 17, 2023 at 20:44
  • @Wicket do you want me to add a screenshot of the option? I'm not exactly sure what you mean.
    – Turamarth
    Commented Aug 17, 2023 at 21:31
  • Please excuse me if this is obvious to you... I'm thinking of making the question helpful to a broad audience, especially considering that this is the first question about using "negative prompts" on LLMs. Some people familiar with LLMs might know chatbots but not text-to-image models, and might not know Automatic1111's Stable Diffusion Web UI, which is mentioned as the reference in the Text Generation Web UI GitHub repository. A screenshot will probably help a broader audience than a text-only description, but...
    – Wicket
    Commented Aug 17, 2023 at 22:07
  • .... the text description should still be included, both so the content is indexed for search and for users who use screen readers.
    – Wicket
    Commented Aug 17, 2023 at 22:08

3 Answers


What are negative prompts in LLMs?

Same as with text-to-image generation: themes we'd like to avoid in the generated text. The paper {1} describes one example of how negative prompts can be implemented in LLMs.
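One way such a mechanism can work (borrowing the classifier-free-guidance idea from image generation) is to run two forward passes per step, one conditioned on the positive prompt and one on the negative prompt, and combine the resulting logits before sampling. Whether this matches the algorithm in {1} or in the OP's interface is an assumption; the numbers below are invented for illustration.

```python
def guided_logits(logits_pos, logits_neg, gamma=1.5):
    # Combine per-token logits from two forward passes:
    #   logits_pos — model conditioned on the positive prompt
    #   logits_neg — model conditioned on the negative prompt
    # gamma > 1 pushes the distribution away from the negative prompt.
    return {t: logits_neg[t] + gamma * (logits_pos[t] - logits_neg[t])
            for t in logits_pos}

# Made-up logits for three candidate tokens:
pos = {"beard": 1.0, "mustache": 0.9, "glasses": 0.8}
neg = {"beard": 0.2, "mustache": 2.0, "glasses": 0.1}

guided = guided_logits(pos, neg)
```

Tokens the negative prompt favors ("mustache" here) end up penalized, while tokens it disfavors are boosted, without the negative terms ever entering the generated context.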


References:


I have not seen negative prompts implemented, but have been involved in dev discussions about them.

Where they go (in theory)

A System Instruction is sent once at the beginning of the session, but in a long chat the original system instructions can be diluted, if not forgotten entirely.

Additional instructions can be inserted within the Prompt and Response loop to reinforce the instruction as the chat continues.

In GPT4All, an instruction within the loop can look like:

INSTRUCTION: Roleplay as a [...]
PROMPT: %1
RESPONSE: 

The %1 is replaced by the human-user part of the chat, and the LLM will add its response at the end.

The human-user does not see this embedded instruction. Reinforcing character traits and roleplay instructions within the prompt-loop keeps the bot in character.
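A minimal sketch of that prompt-loop substitution: the template string mimics the GPT4All example above, while the substitution mechanism shown is an assumption for illustration, not GPT4All's actual code.

```python
# The instruction is re-sent on every turn, so it is reinforced
# even deep into a long chat; the user only ever types the %1 part.
TEMPLATE = """INSTRUCTION: Roleplay as a detective who never discusses the case files.
PROMPT: %1
RESPONSE: """

def build_prompt(user_message):
    # Replace the %1 placeholder with the human-user part of the chat.
    return TEMPLATE.replace("%1", user_message)

print(build_prompt("What is in the case files?"))
```

Because the template is rebuilt each turn, an external controller can also swap the instruction text between turns, which is what enables the 'dramaturge' idea below.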

In a roleplaying game, these instructions could be updated while the chat is on-going, allowing for an external 'dramaturge' to steer the character (stats in a relationship game, for instance).

Why negative instructions might be used

With your image-generation example, saying 'no mustache' in a positive prompt might trigger a mustache, or focus the render on the area where the absent mustache should be.

A similar thing can happen with an LLM, though more often a homonym will introduce errant topics. A '2-shot' prompt is used to first steer the model to the correct topic, or extra sentences are added to pin down the context.

A negative synonym should block the unintended meanings faster, and without drawing attention to it.
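One simple way a negative synonym could be enforced at decoding time is to push down the scores of the unwanted tokens before picking the next one, so the blocked meaning never surfaces in the output. The token names, scores, and penalty value are invented for illustration; real interfaces (e.g. a logit-bias parameter) differ in detail.

```python
NEGATIVE_TOKENS = {"mustache", "moustache"}

def suppress(logits, negative_tokens, penalty=100.0):
    # Subtract a large penalty from every token on the negative list,
    # effectively removing it from consideration without adding the
    # word to the visible context.
    return {t: (s - penalty if t in negative_tokens else s)
            for t, s in logits.items()}

logits = {"beard": 1.2, "mustache": 1.5, "smile": 0.9}
biased = suppress(logits, NEGATIVE_TOKENS)
best = max(biased, key=biased.get)  # → "beard"
```

Unlike a double-negative in the prompt, this never draws the model's attention to the forbidden topic at all.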

Don't trust a bot with sensitive information

LLMs occasionally respond with entire sentences from their instructions, verbatim.

Consider a mystery game where a suspect should misdirect from certain topics. The "motive to lie" can't be explained to an LLM.

Negative prompts should be safer than relying on a (sometimes) contorted double-negative to get the right idea.

As with negative prompts in image generation, the need may not be apparent until an implementation allows for better results.


It looks to me like the concept of "negative prompts" has not been broadly transposed to the LLM context.

The Wikipedia article about prompt engineering includes a section for Negative prompts. It only mentions their use in text-to-image models.

ChatGPT doesn't have a particular way to add a "negative prompt"; it recently added Custom Instructions, which has two boxes: one to describe the user and another to tell ChatGPT how it should respond.

On Artificial Intelligence, there is a related question, Does Negative Prompting Exist?. It was asked a month ago and has only one answer, but it refers to testing and instruction-tuning.

  • I recently added an answer in that AI stackexchange thread covering a recent paper that allowed for negative prompts in LLMs, although I'm unsure if that's the algorithm used in OP's interface. Commented Sep 4, 2023 at 2:20
