
There has been discussion about this at the network level, but at this time there is no network-wide policy, and individual sites are encouraged to develop their own policies. It has already cropped up in practice, though it's not (known to be) common; I think it might be helpful to have our policy in place before we need it.

Leaving aside for a moment the question of how we could know this, should we ban questions and answers that are known to be AI-generated?

2 Answers


Moderation is a scarce resource. Given that we want a site with high-quality questions and answers, how can we best apply that resource? Requiring stochastically-parroted posts to be deleted on sight would be just as overwhelming as giving a potential flood of subtly wrong answers the same benefit of the doubt that we extend to humans.

But this, I think, could be a solution: humans deserve empathy and the benefit of the doubt; "AI" does not.

  • When an answer that was likely written by a human has problems, an appropriate response is typically to ask for clarification, for example by asking for sources and links to be added.

  • Such follow-up makes no sense with stochastic parrots. At most, the human who posted the AI-generated text could engage with that feedback, but if they were able to engage with the material at that level, they would probably have done so already, and the post would not have been AI-generated in the first place.

    Instead of expending the effort to engage with the content, it should be permissible to do the minimum needed to remove the problematic content from the site: simply deleting it.

  • If the effort of deciding “is this AI-generated?” and “does this have problems that should be addressed?” becomes a noticeable moderation burden, that burden should be minimized, for example by allowing all AI-generated content to be deleted regardless of whether it has problems. It could then also be appropriate to ban the posting of such content entirely.


There should be a rule (I am not sure whether it already exists) that only humans are entitled to have user accounts. We should not allow any bots on this site to post questions and answers, or to perform other user interactions such as up-voting, down-voting, or accepting answers.

There might be legitimate reasons for a human member to use OpenAI or similar tools to generate a question, answer, or comment, for example to work around a language barrier. We should balance this against the interest in protecting the integrity of this site.

AI-generated questions do not seem to be a huge problem, as long as we can be certain that it is always a human who posts them; the questions still reflect an interest or problem of the person posting them.

AI-generated answers and comments pose a big problem, because they dilute the legitimate content created by human experts with additional text of high word-sequence probability but limited meaning and significance. An increasing number of AI-generated answers will make it difficult for visitors and members to find actual answers to their problems. Therefore we need a rule against AI-generated answers.

But can we assume that members are good citizens who follow the rules and don't use AI here? I am not so optimistic. It is similar to users not reading, or ignoring, 'How do I ask a good question'. People don't research existing questions and answers; it is a tl;dr mentality, and they want quick answers. The anonymity of this site makes it easy to break rules (please note, I do not want to question or change the anonymity).

AI-generated translations of an original human text might show similar characteristics to a fully AI-generated text. So even when tools to identify OpenAI-generated text become available, we might find it difficult to distinguish fully AI-generated posts from AI-generated translations (such as those produced by DeepL).

So how could we cope with this? One option is to add a check box that must be ticked each time a member posts an answer; by ticking it, they confirm that they wrote the answer without the help of an AI tool. And yes, the Terms of Use need to allow moderators to remove AI-generated content, and if a user repeatedly posts it there need to be options for yellow and red cards. Thinking further into the future: we might want to consider a new 'privilege' at a certain reputation level that allows a member to flag a post as suspected AI content, which could then lead to the answer being automatically submitted to an AI-detection tool, as sketched below.
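To make that flagging workflow concrete, here is a minimal Python sketch. Everything in it is hypothetical: the reputation threshold, the flag count, the `detect_ai_probability` stub, and the score cutoff merely stand in for whatever privilege level and detection service the site might actually adopt.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds; the real values would be a policy decision.
MIN_REPUTATION = 500      # reputation needed for the new "flag as AI" privilege
FLAGS_TO_ESCALATE = 3     # independent flags before the detector is consulted
DETECTOR_THRESHOLD = 0.8  # detector score above which moderators are notified


@dataclass
class Post:
    post_id: int
    body: str
    ai_flaggers: set[int] = field(default_factory=set)  # user ids who flagged


def detect_ai_probability(text: str) -> float:
    """Placeholder for an external AI-detection tool.

    A real implementation would call whatever detection service the site
    adopts; here we return a dummy score so the sketch is runnable.
    """
    return 0.9


def flag_as_ai(post: Post, flagger_id: int, flagger_reputation: int,
               moderation_queue: list[int]) -> None:
    """Record an AI-content flag and escalate once enough flags accumulate."""
    if flagger_reputation < MIN_REPUTATION:
        return  # user has not earned the flagging privilege
    post.ai_flaggers.add(flagger_id)  # a set, so repeat flags don't count twice
    if len(post.ai_flaggers) >= FLAGS_TO_ESCALATE:
        score = detect_ai_probability(post.body)
        if score >= DETECTOR_THRESHOLD and post.post_id not in moderation_queue:
            moderation_queue.append(post.post_id)  # hand off to moderators


if __name__ == "__main__":
    queue: list[int] = []
    post = Post(post_id=42, body="An answer of suspiciously fluent prose...")
    for user_id in (101, 102, 103):  # three privileged users flag the post
        flag_as_ai(post, user_id, flagger_reputation=750, moderation_queue=queue)
    print(queue)  # [42] -- the post reached the moderator review queue
```

Keeping the detector behind a single function boundary like this would also make it easy to swap in a different detection service later, since the proposal deliberately leaves that choice open.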

