15

Higher question-rate sites have already addressed chatbot-generated posting in an essentially universally negative way, for example:

  • Stack Overflow
  • Politics SE
  • Math SE

Recently a bit of ChatGPT output was added to a question in Astronomy SE by the author AFTER the question had already been answered by the same user. They properly cited the generating website and made it clear what was being done, but it seems almost gratuitous ex post facto chatbotting. I assume it's only a matter of (a short) time before it's introduced here as well:

Question: While this is being sorted out in the greater Stack Exchange ecosystem of communities, and some guidelines and procedures are worked out, should we at least temporarily ban chatbot-generated content in Space SE?

  • this was also posted in Astronomy SE
    – uhoh
    Commented Dec 12, 2022 at 21:59
  • I think that it’s harmful to have dozens of individual stacks ban or allow chatGPT, rather than having a site-wide policy for or against it.
    – Topcode
    Commented Dec 13, 2022 at 13:15
  • 3
    @Topcode where exactly are you seeing "allow"? Anyway, 1) a lot of folks in the lower question-rate sites don't necessarily see the big debates in the main meta so the links in the question help get the word out, and 2) where exactly is the harm; can you be specific? There are plenty of characteristics, "rules", customs and best practices that vary from site to site.
    – uhoh
    Commented Dec 13, 2022 at 14:03
  • 2
    “where exactly are you seeing "allow"?” Allow is the inverse of ban, so if we don’t ban, that’s called allowing.
    – Topcode
    Commented Dec 13, 2022 at 15:05
  • @Topcode But where have you seen a community decide to allow it? I don't think that's happened.
    – uhoh
    Commented Dec 14, 2022 at 0:25
  • “But where have you seen a community decide to allow it?” By default, if there is no policy on something
    – Topcode
    Commented Dec 14, 2022 at 2:54
  • @Topcode that's glass-half-empty thinking
    – uhoh
    Commented Dec 14, 2022 at 6:02
  • @uhoh You still appear to have a bass ackwards view regarding closure votes as you have left multiple comments saying you were voting to keep a question open. That is not an option; you can either vote to close or not vote. There is no "vote to stay open" option. You might want to suggest adding "vote to stay open" as a change to the SE software that would in some manner counteract votes to close. I'm not sure whether I would agree with such a proposal.
    Commented Feb 14, 2023 at 14:43
  • @uhoh Most governments and religions have laws that forbid certain acts. "Thou shalt not kill": Murder is forbidden; it's a bad thing to do. That is the opposite of glass-half-full thinking, as rules that explicitly spell out what is disallowed mean that what is not explicitly forbidden is allowed. Some countries do have laws that explicitly spell out what little behavior is allowed. I for one would not want to live in such a place, or even travel to such places.
    Commented Feb 14, 2023 at 14:50
  • @DavidHammen we've had this discussion about voting to leave open before. You continue to confuse what you think should be true with what is. See this answer to What exactly happens with the button "Leave Open" (previously "Do Not Close")?
    – uhoh
    Commented Feb 15, 2023 at 0:45

4 Answers

18

I would happily ban it here, and on all sites, if possible, as it fundamentally just causes extra work for reviewers and moderators. It does not produce correct answers, but produces ones which sound like they are correct, and it can take some digging to work out what is wrong.

It's the same as a human posting a correct-sounding answer that is wrong, but with the added problem that ChatGPT is automatable, so it can produce posts at speed and scale. That clogs the process by which better posts get voted up and worse posts get voted down, and makes the site a worse experience for everyone.

On one of the other sites I mod one user posted 23 ChatGPT posts in a few minutes. All incorrect. All potentially convincing to someone not well versed in the subject. The effort for folks to flag and then mods to track down each of his posts, check and then delete them and suspend the user is not insignificant.

  • “The effort for folks to flag and then mods to track down each of his posts, check and then delete them and suspend the user is not insignificant.” And doing the exact same thing, but because the answer is chatGPT rather than bad, would be less effort? It is possible that some users would be dissuaded by the rules changing, but I think that is an insignificant number.
    – Topcode
    Commented Dec 13, 2022 at 18:51
  • 3
    @Topcode If we explicitly disallow chatGPT, then it may raise awareness in some portion of casual users to be more careful to avoid upvoting answers that look like they may have been generated in this way. If the answers are less likely to receive upvotes, it's also more likely that the more active users will be able to help us delete these from the review queues without requiring moderator intervention. It's a small thing, but until such point in time that we get a sitewide policy, we'll take what we can get.
    – called2voyage Mod
    Commented Dec 13, 2022 at 19:45
  • @called2voyage “look like they may have been generated in this way” and can anyone explain what this means? Because there has been at least one false positive on SO already, and in the discussion regarding that nobody has made any progress with creating any sort of guidelines for this.
    – Topcode
    Commented Dec 14, 2022 at 2:56
  • "It does not produce correct answers". Sometimes it does.
    Commented Dec 14, 2022 at 3:57
  • 5
    @FranckDernoncourt It's more accurate to say it does not produce valid answers according to the standards of this site. Answers which reference outside content require attribution, and chatGPT answers cannot include guaranteed-accurate attribution.
    – called2voyage Mod
    Commented Dec 14, 2022 at 14:56
  • 1
    @Topcode Your first red flag is that an answer has no attribution. While chatGPT can certainly be told to include references, a lot of the chatGPT answers we're getting do not include attribution, and answers with no attribution should be discouraged here anyway. So when people see them, they need to be flagging them for attention. They may turn out to be false positives, but more experienced users can help determine that in the review queues.
    – called2voyage Mod
    Commented Dec 14, 2022 at 14:58
  • @called2voyage makes sense. I'd tend to guess it's quite uncommon that a ChatGPT answer overly copies a single source to the point of requiring some attribution though, given the training set size.
    Commented Dec 14, 2022 at 15:38
  • 4
    @FranckDernoncourt For specialist subjects, I think the odds of it overly copying a single source are fairly high.
    – called2voyage Mod
    Commented Dec 14, 2022 at 16:14
  • 6
    Noooo! It's an automated BS-unreferenced-answer generator! Kill it with fire.
    Commented Dec 14, 2022 at 22:13
  • 5
    From twitter.com/studentactivism/status/… : "Because if ChatGPT is, as it seems to be, a consummate bullshitter, it's also—definitionally—a bullshitter who doesn't know when its bullshitting. And we all know that that's the most dangerous kind."
    – PM 2Ring
    Commented Dec 15, 2022 at 14:41
  • Wondering what to make of this now-deleted answer, which I think contained some truth and some falsehood, one totally spurious attribution, and based on the user's history might have come from Bing? It's better than some of the earlier stuff but still seems to just fall apart under scrutiny
    – Erin Anne
    Commented Aug 30, 2023 at 5:27
  • @ErinAnne I think it's still poor quality, so happy to leave it deleted
    – Rory Alsop Mod
    Commented Aug 30, 2023 at 17:08
2

I think the policy should simply be: "don't post wholesale stuff you didn't write; asking a computer to write it for you doesn't count as writing it yourself".

Naturally, we have an exception for attributed quotes, when the answer depends on the quoted material but also adds something to it. And I could even stomach an answer that was of the form "I asked ChatGPT your question and it said X, and I believe this analysis is valid because Y, or has caveats Z" — as long as it's aboveboard and reasonably high-quality. But we shouldn't have bot accounts posting answers, or humans acting as bots' proxies.

2

I completely agree with banning ChatGPT as a source for answers. Far too often, its answers are flat-out wrong. However, because ChatGPT's answers are oftentimes garbage or confusing, people who use it end up re-asking their questions on the SE network. I don't particularly see a case for banning ChatGPT convos in questions. One of our main tasks on the SE network is to clear up confusion. Besides, the issue will soon become moot once ChatGPT starts charging for its nonsense answers.

I hope this helps!

  • 4
    The last line ("I hope this helps!") is sarcastic. That's how ChatCGT ends its answers.
    Commented Jan 3, 2023 at 10:48
  • 1
    "One of our main tasks on the SE network is to clear up confusion." But... I mean, it should be genuine good-faith sentient confusion, not "Hey, these two chatterbot answers disagree with each other - so which one is correct?"
    – uhoh
    Commented Jan 3, 2023 at 11:11
  • Do you mean ChatGBT and not ChatCGT?
    Commented Feb 14, 2023 at 14:18
  • 1
    @TheRocketfan I meant to type ChatGPT. Fixed.
    Commented Feb 14, 2023 at 14:23
0

I don't see a reason why we should explicitly ban chatGPT or other AI-generated content: we already have robust rules and guidelines in place against low-quality or generally just bad answers (along with malicious users), and I think that instituting more rules and restrictions is fundamentally a bad idea when you have the option to just not. chatGPT is also easy to filter out (it can't cite sources), which makes it less of an issue here on Space Stack Exchange, where so much of what is asked and answered requires evidence, compared to some of the "softer" Stack Exchange sites.

So: Don't ban chatbot content, simply enforce current content guidelines as they stand.

  • this might be related to this recent question which purports to use chatbot generated material as prior research, or a semi-authoritative source. I was careful to ask this question about "chatbot generated content" to include both question and answer posts. About "more rules and restrictions", what are the current rules and restrictions that address the potential slew of new questions of the form "ChatBot says 2+2=5 can be the basis of a new religion/mathematical formalism/science fiction story. Is that true?"
    – uhoh
    Commented Jan 3, 2023 at 10:04
  • Those can be 100% mindlessly, algorithmically generated and posted at an arbitrary rate. This is different, new. Shouldn't rules and restrictions adapt as technology develops?
    – uhoh
    Commented Jan 3, 2023 at 10:06
  • 2
    @uhoh When new rules are sufficiently necessary, sure, implementing them should be carefully considered, but I don't think this is the case here. Adhering to present moderation guidelines and site/answer rules, moderators already have no issue "legitimately" deleting chatGPT posts, and I don't see how making an explicit "no chatGPT" rule would make the job of moderators easier or improve the user experience.
    – Dragongeek
    Commented Jan 3, 2023 at 10:19
  • 2
    @uhoh Rules in general are much easier to implement than to get rid of, so I want to avoid "shooting from the hip" when it comes to creating new rules that will probably stay. If we do want to create a new rule that aims to curb chatGPT, I would advise something like "Answers must cite sources", as this rule would not only filter out chatGPT answers but also improve the user experience by raising the quality of answers.
    – Dragongeek
    Commented Jan 3, 2023 at 10:21
  • That sounds workable only if one then spells out that chatterbots don't count as sources, which brings us back to the simpler temporary ban on chatterbot content.
    – uhoh
    Commented Jan 3, 2023 at 10:23
  • @uhoh Is chatbots being sources even a discussion? They aren't cite-able, accountable, or even (currently) legally capable of creating intellectual property. I don't think there's a real argument here for saying they are
    – Dragongeek
    Commented Jan 3, 2023 at 10:27
  • 2
    Yes, there are folks who will likely argue that chatterbot output is a sufficiently good source upon which to base a "chatterbot sez.." question, since it at least derives (in some way) from material that includes facts. I think it is important to at least temporarily ban it and make it clear that chatterbot output is worse than monetized weird pseudoscience YouTube channels and should not be the basis of questions. I guess I feel that needs to be explicit, we can't just assume everyone will automatically view it the same. We're in an era of norms and precedent violation; explicit is good.
    – uhoh
    Commented Jan 3, 2023 at 11:25
  • In this recent question the OP begins "I have read many times that..." but the only source is a pair of conflicting chatterbot outputs. I'd like to be able to edit the question and delete the "chatterbot sez..." and link to our policy as an explanation.
    – uhoh
    Commented Jan 3, 2023 at 11:37
  • 1
    @uhoh I don't see why you can't edit the question in that specific case--I would classify it under "needs improvement" regardless of policy on chatGPT. Changing the question to be "I have heard conflicting explanations that [X] and would like to know exactly how thrust is produced in Ion thrusters" is a perfectly fine question and, although it shows a lack of own-effort research, I don't think chatGPT is the issue here. I would recommend a similar edit to any other question where paragraphs of extra and superfluous background were given, regardless of data origin
    – Dragongeek
    Commented Jan 3, 2023 at 12:49
  • 1
    @uhoh at the end of the day, the site is here to answer questions and clear up confusion, and I think that while people shouldn't be using chatGPT to educate themselves, being confused about something and coming to Space.SE for an explanation is a perfectly valid reason and something that we want to encourage.
    – Dragongeek
    Commented Jan 3, 2023 at 12:52
  • I'm sure we agree on that quite nicely, but since chatterbots can generate confusion almost algorithmically at kHz rates, I am concerned. It takes only seconds to ask a chatterbot a one-sentence question, then post the output in an SE question followed by "Is this true?", so I think that kind of question needs an a priori consensus that it's insufficient. Otherwise it can quickly get out of hand. Thus I've asked here for some agreement about a temporary ban. To quote Melissa McCarthy (and make fun of myself) "The ban is not a ban."
    – uhoh
    Commented Jan 3, 2023 at 23:58
  • So perhaps a meta question like "Are 'Chatterbot says X, is it true?' questions OK here?" better defines my concern, and an answer like "Well, if they don't get out of hand in terms of a daily rate and we can edit them and successfully discourage the OP from asking a dozen more like it, yes. And if they get out of hand, then we can change to no later." would work for most people including me.
    – uhoh
    Commented Jan 4, 2023 at 0:13
