
In general, I agree that chatbot content should be forbidden.

I am also aware that detecting chatbot content isn't always easy - there are bound to be false positives and false negatives.

As I have flagged several posts as chatbot content myself, I hope that I haven't gotten any innocent parties suspended. I try to be certain before I flag such content, and I'm sure the moderators do their best to verify it before they hit "suspend," but we are all fallible human beings.

What recourse does an innocent user have if they receive this message in error:

You have recently been detected as posting AI-generated (e.g. ChatGPT) content on Stack Exchange. This is neither good community citizenship, nor is it what we expect of our users.

Considering there have been warnings about the disruptiveness of this behavior all over the Stack Exchange Metas, we feel there's sufficient warnings about the inappropriateness of this.

Also this behavior counts as plagiarism since you did not (and cannot) cite & reference the original sources used by the AI to generate that content.

Do not post AI-generated content again. Your account has been temporarily suspended for 30 days.

I have the impression (from looking around other Meta sites) that there is supposed to be a link in (or with) that message leading to a place where the user can request a review of the suspension, but I have not seen the message itself, nor do I know what it looks like when it appears. I also do not know whether the message is linked into the user profile so that the user can review it and take any needed actions.

I ask this question because I am in personal contact with a user who received the above message. This person swears they have never posted a chatbot-based answer.

One possibility I see is that this person is not a native English speaker. A somewhat stilted translation from their native language (German) into English may have produced text that "looks" somewhat "chatbotish."

While the available chatbot detectors seem to work, I know that they also make mistakes.

I've fed some of my own answers into a few of the chatbot detectors. They usually come back more than 90 percent certain that the text was written by a human, but they have flagged individual passages as "chatbot output" - and I know that I wrote that text. Such "chatbot" passages are usually where I stop the detailed explanation and make a simple blanket statement summarizing it.

  • The chatbot detectors are apparently about as dumb as the AI itself. I pasted in some random posts written by myself, and on several occasions the tool told me they were AI-generated - this from the official ChatGPT detector from OpenAI. These tools seem to be about as blunt as counting the number of paragraphs and checking the grammar.
    – Lundin, Apr 3, 2023 at 14:35

1 Answer


You've asked an important question. Given more time, I could write a more detailed and polished answer, but that would delay things. On the basis that "something is better than nothing", here is a bit of a "brain dump" (this is something I'm already actively thinking about) so you can see that your question is not being ignored:

  • IMHO, ChatGPT and similar tools and technology pose a significant threat to Stack Exchange in various ways.

    One obvious concern is their ability to produce plausible-sounding, apparently authoritative answers containing varying amounts of wrongness that sometimes only a subject matter expert (SME) in that specific area can detect.

    That leads to difficult-to-detect wrong answers being left on the site to mislead people until an SME in that area spots them - sometimes accumulating upvotes in the meantime because of their plausible-but-wrong content, which leads even more people to believe them.

    Stack Exchange overall is (again, IMHO) still catching up on processes and policies in this area. It's not a great situation. We have more work to do (e.g. guess what I had planned for this weekend? :) ), but it will take time.

  • Due to ChatGPT flags and detections, my mod workload has increased significantly, partly because of the amount of background checking that goes into each one. As you have seen from the flags you submit, we do take action.

    From personal experience, I have occasionally seen the site swamped with ChatGPT-generated answers faster than I can review them. That's part of the problem - it takes almost no effort for a "bad actor" to copy a question from this site into ChatGPT, get an "answer", and post it here. The same thing happened on Stack Overflow, only much worse.

  • We are going to get things wrong in moderating this issue: partly because processes and policies are still being developed; partly because, as you said, the detection tools aren't perfect; partly because people try to evade detection when posting ChatGPT content; and partly because, as you also said, we're fallible humans.

    We've been thrown into a new moderation situation that we didn't ask for, but somehow we have to navigate through it as best we can, with the tools we have, while we are also learning.

From some clues you have included, I suspect I know which user is in contact with you. As usual, I have to be cautious about revealing specifics. However (if I guessed correctly), I will say that they are in the queue for a response, as they have used the usual reply mechanism for a suspension - which answers part of your question: yes, there is a way for a user to reply to a suspension. But, as I said, the workload (including all the usual flags), and even writing this, mean that the queue is taking a while to get through.

If you are in contact with that user, please reassure them that they will get a private response, after their reply message and the original flags / detection results have been thoroughly reviewed.

  • Thank you for taking the time to reply. I've sent a message to that particular person, referring to this page.
    – JRE, Apr 1, 2023 at 15:47
  • Good question and good reply. Thanks, both of you.
    – Solar Mike, Apr 2, 2023 at 11:30
  • @JRE It might be helpful if they could reply here with a link to the answer they were suspended for. Partially to have the individual's record cleaned, partially to let the mods analyze why things went wrong.
    – Lundin, Apr 3, 2023 at 14:41
  • The person I asked about has confirmed that the suspension has been lifted. I've seen a new post from that user. I've also gone through many of their earlier posts, and cannot tell which might have been suspected of being chatbot output.
    – JRE, Apr 5, 2023 at 6:19
  • FWIW, it seems to me that people can also produce authoritative, plausible-sounding answers - the ability to BS or make mistakes does not belong to AI alone. The existing "peer review" system handles this well enough, keeping the number and effect of incorrect answers negligible. The second point is the real problem (or is what makes the first point a real problem): these correct-seeming answers can be generated quickly, by anyone, with no personal effort, quickly exceeding the capacity of the system to cope.
    – Commented Apr 9, 2023 at 1:40
