
February 22, 2024 Update

The help center articles have been rolled out network-wide. See revision 15 of the February 7th post for more details.

If you're interested in enabling one of these banners on your site, please see the "How can I enable this on my site?" section near the bottom of this post.


February 7, 2024 Update

I've created a separate post to gather feedback on changes to an existing help center article, as well as on a draft of a separate, new help center article. It will be open for feedback for a week.


January 31, 2024 Update

TL;DR: The Community Team will work on drafting a network-wide Help Center article that explains that AI-generated content is prohibited unless it is posted with appropriate attribution, and then making the “answers must be cited” variant below the default option for all network sites. More details in the coming weeks.

As was pointed out in the comment thread by Joe W (thank you!), our current Code of Conduct (more specifically, its Inauthentic Usage policy) prohibits posting AI-generated content without appropriate attribution (explanation here). With that in mind, it doesn’t make sense for the “answers must be cited” variant, shown below as one of the two possible options for sites to opt in to, to be optional. As such, we’ll be rolling out the “answers must be cited” option as the default for all Stack Exchange sites in the coming weeks. Sorry for the crossed wires on this, making for a slightly messier roll-out than originally planned.

Part of the originally proposed process for sites to request either of the two variants was that they’d need to agree on language for a Help Center article that explains their site’s policy on AI-generated content. So, before making that rollout, I’ll be drafting a Help Center article intended to be available on all network sites, and coming back to Meta Stack Exchange for feedback on it before publishing it. The idea is that this article would serve as the bare minimum for all sites, and sites would then be able to further tweak their policy to suit their needs, either by iterating on the “answers must be cited” policy, or by going through the request process laid out below to request the banner be changed to the “not allowed” variant instead.

I’ll make a separate post to gather feedback on the Help Center article draft, but will update this post once that’s up.


January 10, 2024 Update

The experiment has now been graduated, and the banner has been enabled on SO. Please see the bottom-most section of this question for more details on how to request it be enabled on your site.


tl;dr

We’ve recently run an experiment on Stack Overflow to test a banner highlighting the “AI-generated content” policy. Next week we’ll be graduating that experiment, and adding functionality to allow the "AI-generated content" policy banner to be enabled on Stack Overflow and all other sites in the Stack Exchange network. The variant group showed no significant change in answer rates; however, it did see a reduction in posts flagged as AI-generated content. Please see the post on MSO for more details on the experiment results.

The banner on SO looked something like this:

Image showing the answer field, in focus, with a banner saying "Reminder: Answers generated by Artificial Intelligence tools are not allowed on Stack Overflow. Learn more"

Next steps:

  • All sites in the Stack Exchange network will be able to opt in (the feature is off by default, network-wide)

  • We will initially offer two banner text options that all sites in the Stack Exchange network can opt in to. Those options are the following:

    • **Reminder**: Answers generated by artificial intelligence tools are not allowed on [Site Name]. Learn more

    • **Reminder**: Answers generated by artificial intelligence tools must be cited on [Site Name]. Learn more

  • The banner will display once users select the answer field

  • The “new contributor” banner will no longer be shown in the answer field

  • All users will see this banner when posting an answer with the option to dismiss. Once dismissed, logged-in users will not see this banner again
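As a purely illustrative sketch of the dismissal behavior described in the bullets above (none of the names, keys, or storage choices below are the actual Stack Exchange implementation, which hasn't been published), one way to model "anonymous dismissals are per-browser, logged-in dismissals stick to the account" is:

```typescript
// Hypothetical sketch only: key names and the storage split are assumptions,
// not the real Stack Exchange implementation.

// Minimal storage interface, so the logic works against localStorage in a
// browser or against any in-memory stand-in elsewhere.
interface Store {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

const DISMISS_KEY = "ai-policy-banner-dismissed"; // assumed key name

// Logged-in users: the dismissal is persisted with the account
// (serverDismissed), so it survives switching browsers or devices.
// Anonymous users: only per-browser storage is available, so the banner
// can reappear in a fresh browser or after clearing storage.
function shouldShowBanner(
  isLoggedIn: boolean,
  serverDismissed: boolean,
  store: Store,
): boolean {
  if (isLoggedIn) return !serverDismissed;
  return store.get(DISMISS_KEY) !== "1";
}

// Record a dismissal in browser storage. A real implementation would also
// persist a server-side preference for logged-in users; that call is
// omitted here since its endpoint would be pure invention.
function dismissBanner(store: Store): void {
  store.set(DISMISS_KEY, "1");
}
```

In the browser, `Store` would simply be backed by `window.localStorage`; the interface exists only to keep the visibility logic testable.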

"How can I enable this on my site?"

If this is something you think your community is interested in opting into, please start a discussion in its Meta site to get community consensus on the appropriate course of action.

Both options have a "Learn more" link, which will point to a per-site help center article, whose contents should also be a part of the community discussion — this article should explain what the site's policy on AI-generated content is (here's SO's article, as an example).

Once a consensus is reached, escalate it to the Community Management Team by adding the [status-review] tag, so the team can assess the request.


If you have any questions about the process, please post them as an answer below.

  • 6
    If there's consensus on a per-site meta that we want the banner, what needs to be done to enable it? E.g. on ruSO. Commented Jan 5 at 11:08
  • 6
    Request it on that site's Meta, as per the last paragraph, and escalate it to the CM Team ;)
    – JNat StaffMod
    Commented Jan 5 at 11:29
  • 51
    I'm a bit disappointed that we're stuck with pre-generated options, because I would love to also briefly mention the other rules of the site on such a banner (e.g., be thorough, don't plagiarize).
    – Laurel
    Commented Jan 5 at 12:44
  • 3
    I’m curious as to the reason for requiring a new discussion about these banners on each site, instead of simply enabling them for sites which have already banned AI-generated content. That is, the first option for sites which have banned it, and the second one for the remaining sites. Commented Jan 5 at 14:29
  • 9
    Not all sites have a policy about AI-generated content (as far as I'm aware), @AndreasmovedtoCodidact, so the separate discussions ensure that each community has a policy that suits their own needs, and that proper guidance and documentation of that policy is created as a part of that process. Hopefully my latest edit makes that a bit clearer.
    – JNat StaffMod
    Commented Jan 5 at 16:09
  • Hi, I have an observation regarding the placement of this banner and its effect: the specific placement (inside the textarea) may end up contributing to an accusatory and distressing experience. I have already posted it on the Ask Ubuntu meta, and I request that you look at it and consider it: meta.askubuntu.com/a/20408/1157519
    – Levente
    Commented Jan 5 at 20:16
  • 9
    "All users will see this banner when posting an answer with the option to dismiss. Once dismissed, logged-in users will not see this banner again" probably this should require something like 20 reputation points to be dismissible. But I'm just "thinking out loud".
    – Largato
    Commented Jan 6 at 5:15
  • 6
    Make this mandatory across all SE sites. Please.
    – user314962
    Commented Jan 6 at 10:44
  • 15
    @ElEctric Not all SE sites have agreed to ban AI-generated content, and even among those that have, some implement it more strictly than others. There isn't a unified network policy on this, hence no unified network banner. Commented Jan 6 at 15:09
  • It's not really artificial intelligence (which would be an incredible technological and sociological breakthrough). I do hope we get to have a say in the wording on our site if we go for this.
    – ouflak
    Commented Jan 10 at 11:10
  • 7
    A banner would be great if it popped up only when and if one pastes a chunk of characters. With this implementation, I worry that it would have the same effect as telling people that adblockers are forbidden: it mostly notifies them that such tools are an option. We shouldn't continuously notify people about AI options; rather, we should clarify our stance when we encounter a possible infringement, i.e. when someone copies and pastes their answer.
    – Akixkisu
    Commented Jan 11 at 19:40
  • 2
    I am a little confused about the second option, which requires AI-generated answers to be cited, when another answer from a staff member suggests that answers which do not cite AI content are violating the code of conduct. meta.stackexchange.com/a/393682
    – Joe W
    Commented Jan 17 at 19:47
  • 2
    That is a good point, @JoeW — there might've been some crossed wires with communications here, internally. Gonna determine next steps internally and update once we have 'em. Thanks for pointing that out.
    – JNat StaffMod
    Commented Jan 22 at 18:21
  • 1
    Does this answer answer your question, @Plusjamaisquoiencore? If not, I'd propose creating a separate question to ask for clarification on it, as the comment section here is a bit inadequate to go into any more detail than that answer already provides.
    – JNat StaffMod
    Commented Jan 23 at 11:09
  • @JNat Thank you, I had read this before, so I gather it means "attribution" (to the AI). But it does not seem to include the attribution the AI might provide (accurately or not) for the content it is outputting. Furthermore, consider an X network user quoting content from SE with attribution, and an AI being trained on this. It is not clear how the AI would attribute output based on this, but that's another question altogether. Commented Jan 25 at 0:13

5 Answers

48

You've suggested two options for the banner's text, both written in English. I suspect sites will be able to translate them via Traducir/Transifex, but the translation pipeline is still broken for non-English Stack Exchange sites; see Recently added translations don't reach the site again.

If it doesn't use the normal Traducir/Transifex pipeline, but instead works like the text used for forbidden tags (i.e. through conversation with a community manager), then we need some way (on the local Meta?) to propose translated versions for both options, or only for the one selected.

Anyway, as a first step, I would like to know how this is expected to work on non-English sites.

41

It would be great to also allow for a similar banner on the page where questions are composed. Over on Physics.SE, we often get questions that are of the following form:

Why is it the case that [false statement] is true? I asked ChatGPT about it and here's what it said:

[paragraph that is actively misleading or false]

Now I'm confused.

Such questions are also against our site policy, just like AI-generated answers are. But if there's a "no AI" banner that shows up above the "answers" text entry box and not above the "questions" text entry box, it might lead new users to think that it's fine to ask questions about confusing AI-generated text.

2
  • 24
    Sounds like you should customize the Ask Question popup. (You should of course put more than just the AI policy in here—I suspect there are more common problems that askers are running into.)
    – Laurel
    Commented Jan 5 at 19:36
  • 5
    Indeed, as Laurel pointed out the Ask Question page already has built-in, customizable UI that would address the concerns here — please see the link provided in Laurel's comment for details on how to request said customization.
    – JNat StaffMod
    Commented Jan 10 at 16:04
13

You note

Both options have a "Learn more" link, which will point to a per-site help center article, whose contents should also be a part of the community discussion — this article should explain what the site's policy on AI-generated content is ...

Can sites link to a Meta post containing the policy instead? Like this one for Ask Ubuntu? Or does it need to be finalized into a Help Center article?

2
  • I think that each community can discuss the banner text and link on its Meta site. Commented Jan 6 at 6:41
  • 5
    The link is hard-coded, @cocomac, so a help center article would be preferred.
    – JNat StaffMod
    Commented Jan 8 at 10:19
1

"Generated by artificial intelligence tools" is a broad description. The "learn more"/AI policy page of Stack Overflow also speaks of

any answer crafted in part or in whole using a tool that writes a response automatically based on a prompt it is provided

So, taken literally, this would effectively make it impossible to use contemporary translation services like DeepL or Google Translate, or grammar checkers, for assistance, since those, too, are AI tools that write a response automatically based on a prompt.

However, this Stack Overflow Q&A gives me the impression that translation tools are not prohibited in general, at least not when they are specialized in translations that preserve the original meaning. Moreover, the aforementioned AI policy page only contains examples of LLM-based services, not translation tools or grammar checkers based on other kinds of AI models. That leaves anyone like me, who is not a native English speaker and uses a translation service from time to time, in a very unsatisfying state of uncertainty about whether that is still OK.

I understand the desire not to restrict the AI ban to LLMs alone, since we do not know what fancy new kind of AI tool will hit the market next month, or whether the term LLM will still fit it. But if certain AI tools like translation services are allowed (perhaps under certain restrictions), IMHO the AI policy pages should be clearer about that than the current Stack Overflow AI policy is. And if translation services are to be banned in general, because they all fall under this broad interpretation of "generative AI", then the policy should state that as well.

35
  • 2
    This is one of those rules (like the rule against plagiarism) that get enforced arbitrarily because it is impossible to enforce at scale across all content. It's there so that sites have a reason to delete content they don't want, not because the community is going to try to hunt down all uses of AI and remove them. How would anyone know if you used an AI-based tool if you didn't tell them? If your post reads like it was written by a human, it's good quality, and you have a history of good posts, it won't be flagged. If you're a new user on a site, you'll have to be a little more careful.
    – ColleenV
    Commented Jan 15 at 16:36
  • 1
    @ColleenV: the current policy speaks about AI tools, but leaves translation tools completely out. By not saying a word about them, it creates uncertainty among non-native English speakers like me about what is allowed and what is forbidden. That is IMHO quite unsatisfying.
    – Doc Brown
    Commented Jan 15 at 16:41
  • 1
    I'm not saying the wording is good. I think it's kind of dumb to focus so hard on AI tools and not the problem the rule is actually trying to solve (people posting garbage that's difficult to curate). I'm just commenting that, in general, people who are worried about the quality of their posts and are engaging with the site in good faith probably don't have to worry.
    – ColleenV
    Commented Jan 15 at 16:45
    @ColleenV: if people who are worried about the quality of their posts and are engaging with the site in good faith probably don't have to worry, why not mention this in the policy?
    – Doc Brown
    Commented Jan 15 at 16:47
  • 1
    @DocBrown by the time we're done mentioning every such edge case the policy will be 10 answers long. If the avg reader reads your post and it reads like an ai generated answer, it's likely to get treated like one even if you are using the translation tool and not chatgpt directly.
    – Kevin B
    Commented Jan 15 at 16:47
  • @KevinB: "translation tools" are not an edge case.
    – Doc Brown
    Commented Jan 15 at 16:48
    @DocBrown they are similarly unclear though. One could easily call ChatGPT itself a translation tool and not be entirely wrong. What of translation tools that can be coaxed into outputting answers? Technically... a translation tool was used... What's important is the intent and the outcome, more so than the path that resulted in the answer.
    – Kevin B
    Commented Jan 15 at 16:50
    @KevinB: absolutely, that's why I wrote "tools specialized in translations that preserve the original meaning" in my answer. I think it should be possible to find a wording that gives answerers a little more certainty that it is OK to use translation tools to a certain extent. It should probably not focus on the tool, but on what the result should look like, and on the presented expertise being the answerer's own.
    – Doc Brown
    Commented Jan 15 at 16:55
  • 1
    Is it possible for a translation service based on an LLM to not produce AI generated content? Would a user of said service be able to identify that distinction?
    – Kevin B
    Commented Jan 15 at 16:58
    @KevinB: that's quite unimportant. My question is: is the phrase "AI generated content", without any restrictions, too broad a description, one that overshoots the mark?
    – Doc Brown
    Commented Jan 15 at 17:03
  • 3
    the problem with saying "translation/grammar services are allowed" is that that covers a broad spectrum, as I've already explained. Even the one you demonstrated clearly falls into the "generative" category, because it does far more than just improve grammar.
    – Kevin B
    Commented Jan 15 at 17:19
  • 3
    If the translation service generates content, yes. Not all translation services do. Translation services did exist prior to chatgpt.
    – Kevin B
    Commented Jan 15 at 17:24
  • 1
    Prior to chatgpt we generally frowned upon using translation services because there needs to be an ability for the parties (the asker and the answerers) to be able to understand one another, and one can't accurately communicate if they can't read what they're posting.
    – Kevin B
    Commented Jan 15 at 17:27
  • 1
    I appreciate you calling this out as a reminder for any folks who might be interested in defining policy on this for their site. As a reminder, if you're actually proposing changes to the SO policy itself, though, those would be preferable on MSO rather than as an answer to this question, @DocBrown ^_^
    – JNat StaffMod
    Commented Jan 19 at 12:48
  • 1
    I would imagine a discussion on the policy wouldn't be an issue, @DocBrown — it's standard practice for other policies to be discussed on Meta, and it is common for those discussions to be started by community members rather than mods seeking their feedback.
    – JNat StaffMod
    Commented Jan 19 at 14:31
-2

Poor Colour Choice


I think this is a good concept and, with a bit of tweaking, should be implemented. I agree that the verbiage might need to be altered to allow for translations that use AI. I also agree that the banner should be placed on the question composition panel as well as the answer one.


The only problem is that the light grey/blue colour is all but invisible. It's a neutral, friendly colour, much more suitable for an informational bar.


This is a warning bar and ought to be bold and distinct. Make it RED.
