24

ChatGPT is a human-assisted AI that can be used to generate reasonable-looking answers to questions. It seems to have been used for at least one answer here (now deleted). However, Stack Overflow has (temporarily) banned its use in answers there.

One major criticism of it is that it generates text that can be factually "questionable", since it depends on the data that was used to train it, which isn't always reliable.

Should we take a position and make policy on using it?

The possibilities are to forbid it, permit it with citation, or to permit it generally. There might be others as well.

One problem I see is that it might be difficult to "notice" its use. It might, therefore, be hard or impossible to enforce any policy.

Another problem is that humans also (present company excluded) sometimes generate faulty reasoning and unfactual "facts".

Personally, I think its use could greatly degrade the usefulness and validity of this site if it is overused. But, then, I'm generally skeptical of AI in its present form. We are, after all, trying to provide valid career guidance to our peers.


Insights into how a restrictive policy might be enforced would be welcome in answers, if you believe they are appropriate. Here is some discussion about how to recognize these answers.


The New York Times has an article (probably paywalled) concerning ChatGPT.

  • 7
    BTW, a network-wide ban has been proposed but not (yet) adopted. There is also some chatter across the meta network about how such bans might be enforced.
    – cag51 Mod
    Commented Dec 6, 2022 at 15:23
  • 3
    We actually already deleted more than one AI-generated answer. Commented Dec 6, 2022 at 17:51
  • 1
    @MassimoOrtolano For my interest, how do you determine whether an answer is AI-generated? From what I have seen, ChatGPT produces text that looks substantially more coherent than many of the genuine answers, and especially questions, we get here.
    – xLeitix
    Commented Dec 7, 2022 at 21:47
  • @xLeitix, it is a hard problem surely. But a policy and ways to flag suspected answers (in close and flag dialogs) might reduce the problem to a minimum, assuming people obey the rules, as most do.
    – Buffy
    Commented Dec 7, 2022 at 21:49
  • 2
    @xLeitix For the moment, mods are not disclosing how these kinds of posts are caught. See this meta post for the reason (note that the links reported there are mod-only). Anyway, there are mods and users, especially from Stack Overflow, who are putting a lot of effort into fighting this phenomenon, and some of them have flagged a few of our posts. Commented Dec 7, 2022 at 22:46
  • @MassimoOrtolano I understand we cannot discuss how to figure out which posts are AI-generated. My problem is that I just saw a couple of fishy posts which I am planning to flag. On the other hand, I want to keep my flagging record super good (I hate declined flags), so please give me some advice. Better yet, please provide a convenient way to notify the mods without hurting the users' reputations. A suggestion: mods can dispute the flags, not decline them.
    – Nobody
    Commented Dec 8, 2022 at 7:28
  • 2
    @Nobody If you see fishy posts, please by all means flag them. Flagging doesn't hurt user reputation, and for this kind of stuff we'll likely consider the flag helpful even if further investigation doesn't confirm the allegation. Notice that only comment flags cannot be marked helpful without taking action. Commented Dec 8, 2022 at 8:01
  • Just to underline the above point: flags are generally helpful to us; we'd rather people like you err on the side of raising the flag so we can monitor a situation, even if no immediate action is required. For flags on answers or questions, we can mark the flag helpful without taking any further action (and I think all 4 of us do this often). But it's true that flags on comments do not have this option: we either have to decline the flag or delete the comment. So that may help with strategizing your flagging record, if that's important to you.
    – cag51 Mod
    Commented Dec 8, 2022 at 10:28
  • 1
    Zach at SMBC must read this site: smbc-comics.com/comic/themes
    – Buffy
    Commented Dec 8, 2022 at 11:37
  • 1
    I have a beginner question: Can the bot ask questions? Commented Dec 8, 2022 at 22:30
  • 1
    @AnonymousPhysicist Certainly, in the prompt one can tell it to write a question. One can even provide detailed requirements or specify that the question should be a good fit for Academia SE. It remains to be seen how effective SE will be in preventing such spam.
    – GoodDeeds
    Commented Dec 8, 2022 at 23:02
  • 2
    Update: Seems like a pretty strong consensus; the resulting policy is available here.
    – cag51 Mod
    Commented Dec 12, 2022 at 4:00

3 Answers

34

I do not see any value in posted answers generated by ChatGPT-like services.

  • If the answer is bad -> the answer should be deleted.
  • If the answer is somewhat good -> then a ChatGPT answer is comparable to search-engine results; the user could have asked ChatGPT the question directly instead of asking it on Academia SE. Nobody would find a "screenshot" of Google search results posted as an answer useful.

Therefore, I see no place at all for ChatGPT answers, particularly on Academia SE, where, in my opinion, only a very small percentage of questions can be answered adequately by an AI. Thus, regardless of the decision on a network-wide ban on ChatGPT-like answers, Academia SE should adopt a strict policy against ChatGPT answers.

  • Please see: meta.stackoverflow.com/questions/422066/… Occasionally, a real human answer (hopefully written with good intentions) can be mistaken for a ChatGPT (bot) answer. Commented Dec 14, 2022 at 17:28
  • @JosephDoggie yep, this is an inevitable side-effect. Commented Dec 14, 2022 at 17:29
  • I certainly wouldn't want my writing to be called "bot" writing! Commented Dec 14, 2022 at 18:53
  • 2
    @JosephDoggie sure, I doubt anybody would want that. what's your point? Commented Dec 14, 2022 at 19:09
  • My point is: be very careful in marking things as 'bot' unless one is sure. Apparently, "Here's an example" could be 'bot' language -- see the meta SO question I cited and its answers. Certain people's styles could be mistaken for 'bots', which could cause hurt feelings, etc. As I pointed out on SO, we are supposed to be a welcoming site! Commented Dec 14, 2022 at 21:12
  • 1
    @JosephDoggie I am unsure how it is at all related to my post. Commented Dec 14, 2022 at 21:14
  • There were too many comments on the OP's question itself, so I commented here. However, I would point out, your answer does say "ChatGPT-like answers" .... Therefore, I'm simply reminding readers NOT to declare everything ChatGPT, as this could be a misclassification of a human-written answer. Commented Dec 14, 2022 at 21:23
  • @JosephDoggie I think I got you this time. Edited my post to clearly communicate the thought. Commented Dec 14, 2022 at 21:25
26

I believe we should have a firm policy that things like ChatGPT are not allowed. For most of our questions, the personal experiences of humans are critical to a good answer. Auto-generated pablum is in no way useful to the users of the site. If anything, it is more harmful than spam.

  • 8
    It's even worse: one can ask ChatGPT to write an answer from the perspective of a human, and even include made up personal experiences.
    – GoodDeeds
    Commented Dec 9, 2022 at 23:07
-11

I see no value in making new rules before we have precise information about the difference between ChatGPT answers and human answers.

Excessive posting of bad answers should continue to be punished via downvotes, which eventually lead to revocation of the ability to post questions and answers.

  • 7
    It's a good thing we have that information and can make new rules based on it now.
    – Nij
    Commented Dec 9, 2022 at 7:10
  • 2
    @Nij Where is that? Commented Dec 9, 2022 at 22:42
  • 8
    @AnonymousPhysicist Perhaps not precisely, but you can find lots of examples (or try them out yourself) of instances where ChatGPT provides an answer that looks plausibly correct but is actually nonsense. For humans, both coming up with such text and detecting it take a decent amount of effort, but with ChatGPT one can produce it with nearly no effort, which would make it very difficult to keep up with curation. So, already as a matter of practicality, it makes sense to ban it, since it gives bad-faith users too much power.
    – GoodDeeds
    Commented Dec 9, 2022 at 23:01
  • 1
  • @GoodDeeds You said there are examples where ChatGPT produces nonsense. Sure, I expected that. But what portion of the posts are not nonsense? How can "ban it" be consistent with "it's very difficult to curate"? Commented Dec 9, 2022 at 23:13
  • 1
    Here's an example of precise information: x% of ChatGPT posts have a negative score. Humans can identify ChatGPT posts with y% false positives and z% false negatives. Commented Dec 9, 2022 at 23:15
  • 1
    10% error is fine. For all I know the false positive rate might be 50%. Commented Dec 9, 2022 at 23:34
  • 3
    @AnonymousPhysicist 'How can "ban it" be consistent with "it's very difficult to curate"?': because it has serious disadvantages without any meaningful advantage, as detailed in the other two answers. As for precise information, I am sure research on that question is ongoing at the moment.
    – GoodDeeds
    Commented Dec 12, 2022 at 7:22
  • 1
    @GoodDeeds Banning is a kind of curation. You said it is difficult to curate. How is it not difficult to ban? Commented Dec 14, 2022 at 2:40
  • 1
    I'm not disputing that ChatGPT has disadvantages; I'm just saying posts with disadvantages were here before ChatGPT. Nobody has yet articulated how bad ChatGPT content is different from bad human content. (There are already rate limits, so post rate isn't a difference.) Commented Dec 14, 2022 at 2:43
  • @AnonymousPhysicist Here's an example: suppose an answer says, "In my experience, abc", but in actual fact there is no such experience to draw on. The fact that experienced academics have experienced abc has non-trivial advisory value. Real people have something to lose from making unrealistic, false, or disprovable claims. ChatGPT does not.
    – user165871
    Commented Dec 15, 2022 at 17:30
  • @Araucaria-Nothereanymore. "Real people have something to lose from making unrealistic, false or disprovable claims." Why do you think that applies to this site? In my view, reputation points are worthless. Commented Dec 15, 2022 at 19:14
