19

ChatGPT-generated answers are becoming more and more frequent across Stack Exchange, and unfortunately, as of an hour or so ago, Arqade is no different. (I flagged this answer with a link to proof that it was generated using AI, and it was deleted by a moderator, though it had other problems that led to its deletion, not just that it was AI-generated.)

Stack Overflow has implemented a temporary ban on ChatGPT and has even authored a help center article on the subject. The proposal to ban ChatGPT network-wide was declined by Stack Exchange staff, with the stated reason being that moderation of such content should be handled on a per-site basis.

We do not currently have a policy one way or the other regarding the use of artificial intelligence to generate content for posts here on Arqade, and I think it's about time we had a chat about it. For the sake of remaining unbiased, I'm leaving my own opinion on whether we should ban it out of this question's content; I'll state it in an answer, or comment, below.

So then... what shall we do? Should we ban ChatGPT and other AI-generated answers on Gaming.SE? Should we moderate them in some other way?

7 Answers

10

Moderation should focus on what the user is posting rather than how the user got their information.

Cases such as the one you mentioned, in which the answerer simply copy-pasted ChatGPT's response without attribution, should not be allowed, and such posts should be deleted. These posts are often useless, gibberish/nonsensical, and/or non-factual. For example, the answer you mentioned is mostly nonsense: it does not address the question at all, seems to be answering an entirely different question, and talks about a different game (a mobile game, not SimCity (2013), which is what the question is about).

If a user consistently posts such AI-generated content, the post(s) should be deleted and appropriate mod action taken against the user (warning, suspension, etc.).

Quoting OpenAI, the developer of ChatGPT:

Limitations

  • ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging […]

We should allow it if the answerer specifies which parts of their answer came from an AI tool and states that they verified or tested it themselves, as the answer may contain correct and useful information if verified by a human. Relevant Meta SE post: Is uncited LLM usage considered a CoC violation under the Inauthentic Usage policy?

It would also be helpful if the answerer included the prompt they used so that others could verify the response.

Responses from other AI search engines or chatbots, e.g., Bing's AI chatbot, Bard by Google, you.com's YouChat, etc., should be treated similarly. Unlike ChatGPT, these AI-based search engines and chatbots can (though don't always) include sources in their responses and present information more like traditional search engines do (i.e., they are more likely to present factual content).

I meant this answer to be future-proof. As AI-based tools continue to improve and become more reliable, their usage is likely to become more prevalent, and they may eventually replace traditional search engines. It may not be beneficial to impose blanket bans on their use, especially since they seem to be improving at a fast rate.

3
  • 1
    I've seen an extensive test of ChatGPT on Minecraft datapack creation. Even when constantly telling it that its answers didn't work and what the error messages were, even nudging it in the right direction, it only very slowly approached a somewhat working answer, which was far from optimal. The best use of ChatGPT is IMO to point towards potentially interesting areas/terms that the asker can then research; for example, when I asked it about the relation between a warm winter and a warm spring, it mentioned "El Niño", and old-school googling gave me explanations and a forecast for 2023. Commented Jan 30, 2023 at 21:43
  • 2
    This position is largely how I feel we should handle this. Despite us not sharing the problem that Stack Overflow does (in that they have an absolute avalanche of CGPT posts falling on the site), the topic we cover is unique in that CGPT seems overly likely to produce a wrong answer, and as you mentioned, it'll be dressed up to look professionally written. A polished load of crap is still a load of crap, though.
    – Spevacus
    Commented Jan 31, 2023 at 14:38
  • ... coupled with the idea (mentioned elsewhere here) that the AI-generated content itself also had a source -- and citations will probably not accompany the content. Commented Jun 13, 2023 at 2:17
12

I think that we need to have a flat-out ban on all ChatGPT content.

  • It is not possible to avoid plagiarism. Others have discussed the citation problem, which is important. But if someone is simply copy-pasting ChatGPT content as the entirety of their answer, a citation isn't sufficient to avoid falling afoul of our plagiarism rules.
  • It has an accuracy problem. It generates wrong answers that look good.

Its content is also questionably licensed (if at all), but I'm not a lawyer, so I'm not going to dive into that. Instead I'm going to discuss the second bullet above, because the first is covered by other answers.


After spending some amount of time asking ChatGPT game-related questions (for games I've played; a mix of questions found on this site and some I've come up with myself), I've yet to convince ChatGPT to give me a correct answer to any game-related question beyond something so trivial it should be downvoted as low-effort rather than answered.

The problem with this is that there are a lot of games; it's not feasible for us to have experts in all of them. Many times we figure out the correct answer not by having experience in the game ourselves but by finding other people with similar issues or who mention workarounds, and bringing that information together to come up with a solution. But sometimes we do have people answer who have played the game themselves; often these answers are worth more because they can speak more accurately to the solution(s) or lack thereof.

The problem with ChatGPT is that its answers are stated authoritatively. It doesn't come out and say that it has played the game, but it states its response as fact.

Consider this question:

ChatGPT Q&A

Q: How can I tell the size of my enclosure in Jurassic Park Evolution?

A: In Jurassic Park Evolution, you can view the size of your enclosure by selecting the enclosure in question and then checking the information panel that appears on the right side of the screen. The size of the enclosure will be listed in square meters. Additionally, you can also view the size of the enclosure by hovering over the enclosure with your cursor and a tooltip will appear showing you the size of the enclosure.

This sounds very reasonable. It's definitely how this mechanic works in other "zoo park" type games like Planet Zoo (also made by Frontier). If I had no experience with this game at all and came upon this Q&A pair, I would have upvoted the answer because it's well written, succinct, and provides two solutions; it looks like a genuine, useful answer.

The problem is, of course, that it's completely, 100% wrong. It is a beautifully written, very good, and completely wrong answer.


If we let this stay as an answer (and assume that the poster properly attributed ChatGPT, so it's not plagiarism), we have the problem that we now need to moderate the accuracy of answers across a massive library of games, of which each of us has likely played only a fraction.

People don't like to downvote, so organic moderation via downvotes (especially for well-written answers) isn't something we can rely on. There's a non-negligible number of people who post a question and never come back (or who posted a question a very long time ago and haven't been back, so a new ChatGPT answer doesn't get any reaction from them).

I suspect there are a number of people like me who upvote a well-written answer that describes the solution(s) well even though we're not necessarily experts in the topic ourselves (or haven't played the game at all). We're not voting on the accuracy of the answer (unless it directly contradicts a statement of fact in the OP with no proof, or a "that's wrong/doesn't work" comment has been left).

I don't think it's possible to moderate for accuracy with the tools we have at hand, or the current level of participation we have on this site.


For the curious, the answer to the question I posed to ChatGPT above is "you can't." But there are ways to estimate whether it'll meet the dinosaur's requirements that could be detailed in a real answer by someone who's actually played the game or who searches through the Steam community discussion and pieces together an answer. It's answerable by the sorts of humans who participate in this site, but not by ChatGPT.

7
  • 1
    Are you suggesting we ban ChatGPT answers altogether? Because based on your and others' answers, I'd agree: even if properly attributed to an AI, it would mean only those who actually know the answer can verify it (and they probably want to give their own answer in the first place), and I for one would never upvote a pre-generated answer, simply because it shows very little effort, there is a big chance it's wrong, and it would completely warp the concept of reputation in the long run.
    – Joachim
    Commented Jan 29, 2023 at 11:09
  • 6
    Yes, sorry if that wasn't clear -- that is what I'm suggesting. I don't think we should permit ChatGPT answers at all. I'll reword the beginning a bit. Commented Jan 29, 2023 at 18:15
  • 1
    You should see the answer ChatGPT provides if you ask "How can I tell if a corpse is safe to eat?" Commented Jan 30, 2023 at 17:44
  • 6
    "The problem is, of course, that it's completely, 100% wrong. It is a beautifully written, very good, and completely wrong answer." That is the part that scares me about this AI.
    – Timmy Jim Mod
    Commented Jan 30, 2023 at 18:02
  • Re: NetHack and cannibalism, I did manage to get it to give me a reasonable-sounding response, but I can't vouch for its veracity since I've never played the game. It looks like an OK answer, but what do I know? (Which probably sums up the problems with using this tool in general.) It also took a bit of "but what about...?" questioning to get the AI off of its holier-than-thou "cannibalism is immoral" stance, which was kind of amusing. Commented Feb 1, 2023 at 0:04
  • I usually only upvote answers that actually work for me, personally, as opposed to simply being well-written (on the other hand, questions are worth an upvote primarily for being well-written) Commented May 10, 2023 at 17:55
  • @Wondercricket "Eating a corpse is both ethically and legally unacceptable in most societies, as it is considered cannibalism and a crime. Additionally, consuming human flesh can pose serious health risks due to the potential transmission of diseases. It's essential to respect the ethical and legal boundaries in your society and not engage in such activities." The second paragraph is about the safety of food. The more you know.
    – Joachim
    Commented Nov 7, 2023 at 21:19
8

AI-generated answers should at the very least cite which tool was used to generate them. If a user does not disclose that their post was generated by an AI, it should be flagged like any other plagiarized post.

I'm still uncertain about my feelings on AI-generated posts in general. Galacticninja's suggestion about allowing posts verified by the user seems reasonable, although that doesn't address the issue of the AI not crediting its own sources.

1
  • I tend to dislike answers that state information as though it were factual without any sort of reference anyway. But in this case it is indeed even more important. (Which, tangentially, might mean that in a few years' time any answer without a reference will be frowned upon.)
    – Joachim
    Commented Jan 29, 2023 at 11:03
5

The problem with ChatGPT, and the reason it got banned on the main site, is that ChatGPT does not "know" anything, and because of that it often gives inaccurate or even completely wrong answers. ChatGPT has no notion of what C# is, or what a pointer, a linked list, a sorting algorithm, branch prediction, or a cache miss is. All it does is statistically predict which word is likely to come next in what the right answer could look like. That sort of works for content that's less about facts, like fiction or poetry, but for questions that strictly speaking have a knowable and objective answer, it often fails in grand ways.

That being said, part of the reason it's banned on the main site is also that the problems people face on a programming site are nine times out of ten something they're struggling with at work or in school, and often something that, if it ends up wrong or buggy, can cost money or maybe even lives. (Note: if lives depend on you not making any mistakes in your code, you're probably better off not using Stack Overflow to begin with and sticking to code you personally can verify works, but that aside.) At worst, a bad answer on Arqade can cost a person progress. The scale at which a fuckup can happen is more limited.

I've seen Fredy31 say above that it depends on the quality of the answer. I'd honestly agree with that, but I'd also say we should treat answers that we suspect to be ChatGPT-related with more scrutiny. I'd even say that we should restrict ChatGPT-based answers: any answer written by the service should be marked as community wiki, so that it can be vetted, maintained, and fact-checked by the community AND isn't an easy way to make fast reputation. I'd also restrict them to recent questions, to avoid our massive backlog being flooded with AI answers.

2
  • 4
    If code you produce has lives depending on it... you should probably dry run it first lol. Can't imagine someone making code for like a pacemaker being like 'yep gonna push that to prod, no tests, adgaf'
    – Fredy31 Mod
    Commented Jan 25, 2023 at 21:55
  • "ChatGPT has no notion of what C# is." Yep, and the same can be applied to [insert any game title here].
    – dly
    Commented Jan 28, 2023 at 9:53
5

An outright ban on AI-generated answers may throw the baby out with the bathwater. There could be useful AI-generated answers, especially if someone decides to train an AI specifically for that purpose. For example, it could be feasible to train one to take a request for a Minecraft Command Block command and come up with the command, which is a much more constrained domain than generating arbitrary code.

The site already has rules and tools to deal with low-quality answers and with people posting a lot of low-quality answers at once. If an answer is low quality, incomprehensible, or plain wrong, it can be downvoted or removed.

3

Really (and I have not been part of the ChatGPT conversation on the main SE site), to me a good answer is a good answer.

But I know there are pitfalls to that opinion. For example, ChatGPT could be used to try to farm rep (which I personally never understood... it's not like there is any advantage to an account having high rep, haha).

One of the biggest problems with the answer mentioned here: it's just a wall of text that wasn't given any cleanup, doesn't try to get to the point quickly, and was dropped on a question from 2013 that has already been answered since then.

But on that second point: GPT or not, it was just a bad answer and would have been deleted anyway.

To put a bow on it all: I personally would keep an eye on how users use ChatGPT on this site, but would not delete on sight just because GPT was used. The same rules apply: a good answer will stay and be upvoted, a bad answer will be downvoted and deleted.

EDIT: I should add, as a mod: if our 'bosses' at Stack Exchange decide ChatGPT is out and should be deleted on sight, that is what will be applied.

5
  • 1
    I agree with this. After all, I think fighting against AI-generated content like this will ultimately fail. As the tech improves, the distinction between user-made and bot-made will become minimal. Much better to treat questions and answers the same, regardless of who or what made them. If it's good, great; if it's not, we already have procedures for that.
    – Batophobia
    Commented Jan 25, 2023 at 21:50
  • 1
    I would add that if we see clear work to try to game the system, like rep farming, it will get the same attention/punishment.
    – Fredy31 Mod
    Commented Jan 25, 2023 at 21:52
  • Re: 'bosses', just a note that currently SE has declined to put a blanket ban in place and instead let each community decide its own policy.
    – antimo
    Commented Feb 7, 2023 at 1:00
  • A good answer is a good answer, but the problem is that "good" is relative, and those terms need to be defined for that statement to have any significance.
    – Joachim
    Commented Feb 21, 2023 at 20:41
  • Good or bad is definitely a weird bar because it changes from user to user. That's why this community is built on up and down votes. There are guidelines, but bad answers will be downvoted and good answers will be upvoted. And that doesn't change whether ChatGPT is involved or not.
    – Fredy31 Mod
    Commented Feb 21, 2023 at 21:41
1

My opinion is this:

  • Is it wrong? Don't post it
  • Is it right? Credit it

If there has been no human verification, don't post it; it's like copy-pasting some random answer from the internet without verifying whether it's correct, regardless of whether it came from a person or not.
