
As the title says. I just saw a new user post an answer to my question: https://vi.stackexchange.com/a/39433/10189. While the author did tell me about this, and I did appreciate his honesty, I still think these machine-generated answers are inappropriate. Is this allowed?

Sorry that my content above is almost the same as what I just posted on meta.stackexchange. I'm doing this because some people there suggested posting here. (I have no idea how to migrate a question.)

I can only complain about this here since I have no idea how to check whether an answer is generated by ChatGPT. I hate machine-generated answers because:

  1. I think they will not help this forum to grow.
  2. The author might not understand what they just copy-pasted. This means the author is probably unable to handle the comments below the answer, such as requests for correction or confirmation.
  3. Instead, I appreciate the people (mostly ones with high reputation, like moderators, in my experience on Vi and Vim so far) who spend their time composing carefully considered answers that are more readable and suitable for readers at different levels. This avoids 2 and definitely helps the forum grow.

If ChatGPT answers are everywhere, this forum will have no value from my perspective.


5 Answers


Just a reminder: ChatGPT is temporarily banned on Stack Overflow (this is not technically network-wide yet).


As for what to do… without the ban, I would probably downvote and move on (let the garbage collector have it). With the ban, flag this stuff or vote to delete/close as appropriate. We'll have to be proactive as a community.

  • That could read as if it (currently) applies to more than only Stack Overflow. Perhaps make that clearer? A network-wide ban is still only a proposal (at this time). Commented Dec 6, 2022 at 20:25
  • @PeterMortensen indeed… the SO post is not clear on that point, in particular now that the brand is SO, not SE. – D. Ben Knoble Mod, Commented Dec 7, 2022 at 14:56
  • Why downvote if the ChatGPT answer is correct? Commented Dec 18, 2022 at 7:40

ChatGPT is a free tool (even GPT-4 is free via Bing Chat) – if people want to get answers from ChatGPT then they can use it themselves. This site being a "ChatGPT with extra steps" doesn't really benefit anyone, just as this site merely being a "copy/paste of the Vim documentation" isn't very helpful.

A well-edited ChatGPT answer is indistinguishable from a good human answer. I don't think anyone minds if someone 1) asks ChatGPT, 2) verifies the information is correct, complete, and appropriate for the question at hand, and 3) rewrites the answer ChatGPT gave them. That's no different from looking stuff up in the documentation and rewriting it: it's just one of many possible sources of information you can use.

The problem is when people:

  • don't have enough knowledge to do step 2 well ("verify the information is correct, complete, and appropriate for the question"); or
  • skip step 3 and just copy/paste the ChatGPT answer – or significant parts of it – without any editing, which almost always results in badly written text.

If you find yourself asking ChatGPT and pressing Control+C on entire paragraphs then you're doing it wrong.

To give an impression of how well ChatGPT does:

Humans are wrong all the time as well – I've certainly posted my share of wrong or incomplete answers and I've downvoted plenty of them over the years – but I've rarely been as "confidently incorrect" as ChatGPT. Words like "maybe" or "perhaps" are not in its vocabulary, never mind things like "I need more information to answer your question", "what you're asking for is daft", or "this sounds like an XY problem".

In short: ChatGPT just gives bad answers, even when it's correct. Combine this with the fact that it takes almost zero effort to actually post such an answer, and it's best to just say "ChatGPT is banned", as that's a simple and clear policy, whereas "ChatGPT can be used as an information source if you verify it for correctness and completeness, and rewrite the answers" is vague and unclear.

In the future all of this may significantly change. I don't think it will, but it might. Whatever the case: today is not the future.


P.S. another example: ChatGPT confidently asserts that I contributed to cwm. This is complete nonsense, I never contributed a line of code. I also never worked on NeoVim beyond posting 3 or 4 messages on their issue tracker; some of my patches to Vim did end up in NeoVim. When I asked it last month it confidently told me I was an OpenBSD developer – also complete nonsense. Also note it claimed to not know who "Martin Tournoij" is, but then stated that "arp242 is Martin Tournoij" – this is a pretty good example of how ChatGPT doesn't really "know" anything. It also asserts that GoatCounter was created by "Jan-Lukas Else"; I have no idea who that is, and he never contributed any patches. If I ask it directly it does answer correctly.

This site was launched on September 11, 2011. Oh no, February 19 2013. Or February 18 2013? We're still in private beta. No, we did leave private beta. Ask ChatGPT the same question a few times and you'll get directly opposing answers.

  • Thank you for writing this. (I'm really afraid that fewer people like you will be willing to write answers themselves in the future, to be honest.) Commented Jun 5, 2023 at 16:21
  • I'm a bit less pessimistic about it @NeoZoom.lua; back when you posted this question there was a large "novelty factor" to ChatGPT and people were trying it on everything; that novelty has since worn off. There will no doubt be people using it for spam, blogspam, and other things, but that's probably a minority. You can compare it to email spam: how many people actually send out email spam? Not that many, but the sheer volume of it makes things worse for everyone. I suspect these tools will be similar. Commented Jun 5, 2023 at 20:37

I think that a good answer is a good answer, regardless of its source. Humans can provide wrong answers as well, not only robots, and there is a good chance that the provided answer is good, especially if it was tested by a human. Thus, I feel that banning this bot stems more from our natural hesitation about AI, sometimes called "the uncanny valley", than from anything else.

The problematic thing here is increasing a user's score based on that. However, if the user tested the answer, it would be appropriate to keep it and give it positive feedback.

Below I copy an answer I like on this topic (by https://meta.stackexchange.com/users/244695/machavity):

Answers generated by an AI should be considered as being written by the AI. That means you can quote them like any other source, but you must attribute them to the AI, just like any other source, and not use a bulk-copied AI answer quote as an answer. This way, we're avoiding the thorny issues of people running to the latest AI to get answers so they can copy-paste them as their own. We have plagiarism tools (current and forthcoming) in this wheelhouse so we don't need to reinvent any wheels.

  • Imagine a world where all resources come from robots, and we're just readers... Commented Dec 7, 2022 at 19:19
  • The problem in this specific case is that it's not a good answer. I don't care if people use ChatGPT as a starting point, but when they simply copy/paste the waffling nonsense it produces without spending any effort on cleaning it up a bit (or verifying it's correct; I'm not sure if this answer is even correct), it just becomes very spammy. Commented Dec 10, 2022 at 11:38
  • @MartinTournoij ChatGPT sometimes does give good answers. Commented Dec 18, 2022 at 7:43
  • That's great @FranckDernoncourt, but copy/pasting incorrect waffling nonsense is still not a good idea. Commented Dec 18, 2022 at 15:47

The fundamental problem is that while ChatGPT is very good at writing what looks like a really good answer, that answer might be totally wrong. ChatGPT makes things up (i.e. lies) in order to give the questioners what they wanted.

Without fact checking, the answer will appear to be well written and knowledgeable, but will actually be condescending garbage.

Many SE sites will already have a mechanism for rejecting such answers, as they require references that back up the facts, and ChatGPT never provides its sources.


TL;DR: Current network-wide policy is very murky.

My best (personal) advice is to continue to operate as in my original post:

As for what to do… without the ban, I would probably downvote and move on (let the garbage collector have it). With the ban […]

That is, downvote as you feel appropriate. If something is low-quality or otherwise deserves a flag for reasons unrelated to AI generation (e.g., plagiarism is not allowed; rude or abusive behavior is not allowed; spam is not allowed), continue to flag or vote as necessary.

The Code of Conduct and all the other trappings of community-driven Stack Exchange Q&A still apply.


First, the facts. (Remember that existing policy was to create site-specific policy as needed and to follow attribution rules.)

  1. The post "What is the network policy regarding AI Generated content?" writes:

    Earlier this week, Stack Exchange released guidance to moderators on how to moderate AI Generated content. What does this guidance include?

    Editorially, the word "guidance" is not correct; the information we were given was presented as a fact of not-yet-public policy.

  2. The same post presents the following as policy:

    We recently performed a set of analyses on the current approach to AI-generated content moderation. The conclusions of these analyses strongly indicate to us that AI-generated content is not being properly identified across the network, and that the potential for false-positives is very high. Through no fault of moderators' own, we also suspect that there have been biases for or against residents of specific countries as a potential result of the heuristics being applied to these posts. Finally, internal evidence strongly suggests that the overapplication of suspensions for AI-generated content may be turning away a large number of legitimate contributors to the site.

    In order to help mitigate the issue, we've asked moderators to apply a very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user. This standard of evidence excludes the use of moderators' best guesses based on users' writing styles and behavioral indicators, because we could not validate that these indicators are actually successfully identifying AI-generated posts when they are written. This standard would exclude most suspensions issued to date.

    We've also identified that current GPT detectors have an unacceptably high false positive rate for content on our network and should not be regarded as reliable indicators of GPT authorship. While these aren't the sole tools that moderators rely upon to identify AI-generated content, some of the heuristics used have been developed with their assistance.

    We've reminded moderators that suspensions (and typically mod messages as well) are for real, verifiable malfeasance only, and should not be enacted on the basis of hunches, guesses, intuition, or unverified heuristics. Therefore, we are not confident that either GPT detectors or best-guess heuristics can be used to definitively identify suspicious content for the purposes of suspension.

    As always, moderators who identify that a user has a problematic pattern of low-quality posts should continue to act on such users as they otherwise would. Indicators moderators currently use to determine that a post was authored with the help of AI can in some cases form a reliable set of indicators that the content quality may be poor, and moderators should feel free to review posts as such. If someone is repeatedly contributing low-quality content, we already have policies in place to help handle it, including a suspension reason that can, in those cases, be used.
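
A rough aside on the false-positive point, as a minimal sketch with entirely made-up numbers of my own (nothing here comes from the quoted policy): when genuinely AI-generated posts are a small fraction of all posts, even a modest false-positive rate means most flags land on human-written posts.

    # Hypothetical, purely illustrative numbers -- not taken from the policy above.
    base_rate = 0.05            # assume 5% of posts are actually AI-generated
    sensitivity = 0.90          # P(flagged | post is AI-generated)
    false_positive_rate = 0.10  # P(flagged | post is human-written)

    flagged_ai = base_rate * sensitivity                    # 0.045
    flagged_human = (1 - base_rate) * false_positive_rate   # 0.095
    precision = flagged_ai / (flagged_ai + flagged_human)

    print(f"Share of flagged posts that are actually AI-generated: {precision:.0%}")
    # With these made-up numbers, only about 32% of flagged posts are AI-generated;
    # roughly two out of three flags would hit a human author.

Under assumptions like these, acting on detector flags alone would penalize far more human authors than bots, which may be why such a strict standard of evidence is being required.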


Now for some editorial.

As a result, it is not at all clear how sites, elected diamond moderators, and community members with moderation privileges should behave. I stand by my personal advice at the top of the post, though it is fundamentally you, a community member, who has to choose how you will respond. The Code of Conduct and all the other trappings of community-driven Stack Exchange Q&A still apply.
