
AI-generated content (content made using, for example, ChatGPT) has been a hot topic this year. Policies surrounding it, as well as the way they were introduced, instigated the moderator strike last July and August. It is completely banned on Stack Overflow, on MSE itself, and on most other sites.

What is our stance? How does the Arts & Crafts community think this content should be handled?

2 Answers


Today, AI output, such as ChatGPT's, is usually recognizable: the writing style differs from how most people write. Near-future refinements are likely to make that style less recognizable, so a policy targeting AI-generated content might soon be unenforceable. That suggests it might not be the best way to frame the issue.

The real problem with AI-generated content is the content quality, and AI doesn't have a good way to improve on that. AI has the same problem as a visitor who is unfamiliar and inexperienced with the subject of the question, and tries to answer by Googling it.

Much of what can be found in a search was created by people who have no real expertise and is often wrong. The source information isn't curated or rated (voted/commented on) by subject matter experts. So AI answers tend to be low quality: if not outright wrong, they are often inaccurate, too generic, or missing important considerations.

Rather than basing a policy on what the source was, it may be more useful to just focus on answer quality. Users familiar with the subject can identify how well a post answers the question. Voting may be the best solution (the uncurated source information would get curated here).

  • Is the information correct and accurate?
  • Does it identify and focus on the appropriate and important considerations?
  • Does it recognize and address possible variations due to ambiguity in the question (either by the author having requested clarification from the questioner or covering the alternate possible cases within the answer)?
  • Does it reflect actual user experience?
  • Is it actionable information targeted to the conditions in the question, or general truisms and hedging?
  • Does it get directly and definitively to the heart of the answer, or dump a bunch of tangential fluff?

Experienced users who can write well will produce high-quality answers. AI won't. The AI answers will tend to be very polished garbage.

That said, AI can still be useful. It's a tool that can do a better job of searching and summarizing than most people have time for. The AI output can be a good starting point for creating an answer.

People have also raised the issue that posting ChatGPT output without attribution is plagiarism. It could be argued that ChatGPT is an automated and comprehensive collection of writing tools, researching and generating content to your specifications. On that view, the output would be considered your work (only humans have copyright protection), similar to commissioning someone to ghost-write your content.

I don't think we need to engage in that debate. The more relevant aspect is whether an answer represents the user's personal experience and knowledge, or is simply summarizing what other people reported. That is critical to interpreting the answer. If it is just uncritical regurgitation of Internet debris, it doesn't really matter whether ChatGPT created it or an unknowledgeable user did their own search.

Rather than requiring attribution for AI-generated answers, it might be more useful to suggest/require that (all) answers specify whether they are based on the writer's actual experience if that is not obvious from the content of the answer.

  • Thanks for your input, Dolly! A common problem with AI-generated texts is that they can authoritatively present misinformation, which is risky. Also, attribution is required on the network, and not something we can create our own policy for: the 'generation' here is key, and if it's not performed by the user posting it, attribution is necessary.
    – Joachim Mod
    Commented Nov 8, 2023 at 10:45
  • As a sort of summary, you're suggesting we judge answers only on the quality of their content, regardless of whether they're pre-generated?
    – Joachim Mod
    Commented Nov 8, 2023 at 16:24
  • 1
    @Joachim--I agree authoritatively presented misinformation is a problem, but the same issue exists with human-generated content. Identifying whether the answer is based on the author's experience goes a long way there. Attribution is required, but it can be argued that using AI tools to generate an answer is still the poster's work (more delegated). As a practical matter, there's little difference between a ChatGPT answer and a well-written answer from a human with no subject matter knowledge, who just Google's the question and regurgitates the gist of the result. re: judging--what Elmy said.
    – Dolly
    Commented Nov 8, 2023 at 18:31
  • 3
    Attribution is a must. Even if the human has issued the prompt, the answer is not theirs, they are copying it from another source. Nobody should need to be ghost writing their answers on something like StackExchange. If they do not want to put some effort in (and the same goes for questioners) they should find some other way to pass their time. The idea is that we all help each other by sharing our knowledge or even occasionally doing somebody's research for them, its not about collecting clout using automation.
    – rebusB
    Commented Nov 20, 2023 at 15:37

Be much more active in your voting, but ignore the author and only rate the content of an answer.

Ultimately it doesn't matter whether a human or a chatbot wrote an answer; we should treat it the same way.

  • If it's good, up-vote it.
  • If it's bad, down-vote it. Consider leaving a comment pointing out problems or asking for clarification.
  • If it's nonsensical or factually wrong, down-vote it and please leave a comment pointing out the error. That gives other users an aid for voting on the answer as well and the author gets a chance to correct their answer.
  • If it's rude or violates site policy, flag it for mod attention.

Please don't expect mods to curate every single post that "looks like ChatGPT". Our role is supposed to be that of a "human exception handler". Unfortunately, AI-generated answers are no longer exceptions, and we simply cannot manage every single one of them. If more users actively up- and down-vote posts, the quality mechanism embedded in SE will work much better, and authors of bad answers will be deterred automatically.
