
New software named ChatGPT has recently been released, and it is causing quite a stir around the network:

Slate, a Stack Exchange Community Manager, posted a comment on the Meta.SE link above:

We have begun internal discussions to identify options for addressing this issue. We’re also reading what folks write about the topic on their individual sites, as one piece of assessing the overall impact. While we evaluate, we hope that folks on network sites feel comfortable establishing per-site policies responsive to their communities’ needs.

This is fairly new territory, so I'm interested in gathering the UNIX & Linux community's thoughts on AI-generated answers. There are several key points that have been raised already, so I'll seed the discussion with some of them:

Do we want to prohibit AI-generated answers? Do we allow them, with attribution?

  • My input is: Prohibit. Expert systems are fine when a question includes complete and accurate details, and the question is well formed. My long experience in phone tech support for computer systems, and my experiences here since June, are that U&L site questions often lack sufficient/accurate info, and the question asked is unclear. Extracting the needed details and clarity requires a question-and-answer exchange with the OP. The percentage of questions needing this conversation is too high for expert system answers to be beneficial, IMO.
    – Sotto Voce
    Commented Dec 6, 2022 at 19:57
  • On this I have a point of curiosity over licensing and copyright. I wonder about the risk of AI-generated answers producing actual content scraped from uncited sources. This would raise a very significant licensing concern if it's then published under CC-BY-SA 4.0. At least with copy-paste Wikipedia answers this issue is fairly clear cut. Commented Dec 14, 2022 at 2:55
  • @PhilipCouling Something related is going on with AI art. Commented Dec 14, 2022 at 12:52
  • @schrodingerscatcuriosity Thanks, that's a good read. What made me think of it was my colleague's experience with Copilot suddenly suggesting significant blocks of code (10+ lines) which even included a comment making it clear which project it had been scraped from. The really scary thing is what happens when you can't trace it. Commented Dec 14, 2022 at 13:07
  • Now all we need is an AI that can spot AI answers. Problem solved ^^. Commented Dec 27, 2022 at 20:43
  • OpenAI is supposedly working on a statistical / cryptographic "watermark" for ChatGPT, so it would be possible to spot AI answers by checking for that watermark, if they give us a means to do so. Of course, it would also be possible to remove the watermark by running the output through a program to adjust it. Commented Jan 1, 2023 at 15:32
  • ChatGPT is basically just predictive text, and easily wrong in detail. But its output - NOT “generated” but transformed content - is a derivative of all of its inputs, and therefore usually illegal. I fully support the blanket ban on ML (so-called “AI”) content.
    – mirabilos
    Commented Jan 13, 2023 at 20:32
  • @mirabilos It is generated. "Transform" would mean that ChatGPT stores the training dataset and later draws from it when answering questions. But it doesn't do that. It learns and then later generates answers from what it has learned. If this is transformation then human answers are also transformations. But I agree that it is just a text prediction system and it can easily generate wrong answers while sounding very confident.
    – user31389
    Commented Jan 19, 2023 at 21:57
  • @user31389 do you have examples of generating wrong answers?
    – Braiam
    Commented Jan 20, 2023 at 1:11
  • @user31389 I expect examples of Stack Exchange questions and answers, or at least questions and answers. Those are just conversations, not in line with the format of the sites.
    – Braiam
    Commented Jan 20, 2023 at 21:45
  • @user31389 no, it’s transformed. “Transform” does not mean it stores the training dataset literally; it’s sufficient that it stores the training dataset in a transformed form (which is executed by software running on a deterministic computer). People have been able to extract sufficiently detailed training data from these systems, which proves that this is enough.
    – mirabilos
    Commented Jan 23, 2023 at 22:11
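As an aside on the watermarking idea raised in the comments: publicly discussed schemes (e.g. the "green list" approach from academic work on LLM watermarking; not necessarily whatever OpenAI may actually ship) bias the model's token choices toward a pseudorandomly derived subset of the vocabulary, and detection simply counts how often that bias shows up. A toy sketch, with all names and the vocabulary invented for illustration:

```python
import hashlib
import random

VOCAB = [f"w{i}" for i in range(1000)]  # toy vocabulary

def green_list(prev_token, frac=0.5):
    # Seed a PRNG from the previous token so the generator and the
    # detector derive the same "green" half of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(VOCAB, int(len(VOCAB) * frac)))

def generate(n_tokens, rng):
    # Watermarked "model": always pick the next token from the green list.
    out = ["w0"]
    for _ in range(n_tokens):
        out.append(rng.choice(sorted(green_list(out[-1]))))
    return out

def green_fraction(tokens):
    # Detector: what fraction of tokens fall in their predecessor's green list?
    hits = sum(t in green_list(p) for p, t in zip(tokens, tokens[1:]))
    return hits / (len(tokens) - 1)

rng = random.Random(42)
marked = generate(200, rng)
unmarked = ["w0"] + [rng.choice(VOCAB) for _ in range(200)]
print(green_fraction(marked))    # 1.0 -- every token is "green"
print(green_fraction(unmarked))  # ~0.5 -- chance level
```

The comment about removing the watermark holds in this sketch too: paraphrasing the output resamples tokens and drags the green fraction back toward chance, which is why detection by watermark alone is fragile.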

8 Answers

45

I think such answers should be banned entirely, and anyone posting ChatGPT answers without attribution should be banned with prejudice.

If the asker wants an answer from an AI, they can go to ChatGPT directly.

I personally don't think that even ChatGPT answers with attribution should be allowed, but I am willing to compromise on this point, provided the attribution is given up front (not at the end of the answer, and certainly NOT in an edit after the answer is first posted), and provided the entire ChatGPT text is in a block quote so nobody can mistake it for a directly written human answer to the question.

Now this raises some questions: "What about people using ChatGPT for grammar and spelling? What about if the answer poster carefully checks the ChatGPT answer before posting?" My answer to both is: that's only fine if the answerer then writes or re-writes the answer themselves, in their own words, and takes responsibility for every word of it being what THEY actually want to say. This is in alignment with the ChatGPT terms and conditions, and with how we handle any other source of information.

In short, I think we should handle ChatGPT answers the same as we would handle copy-paste from other websites without attribution, but with added prejudice because of: wasting everybody's time, the difficulty of detection, and the need to dissuade other people from the easy "rep-farming" that will occur if we hold a tolerant stance on ChatGPT answers.

  • Ugh... "rep farming" (by bot) to create high-rep accounts that are then sold for real money to unscrupulous people trying to make their CV look better... sigh. My worst hatred from MMO gaming days has finally caught up with me on my favourite internet haunt. Commented Dec 14, 2022 at 3:03
  • @FranckDernoncourt what's the point of answering a question succinctly in a few words if quoting twenty pages of the relevant documentation will eventually lead a careful, meticulous reader to the right answer?
    – Wildcard
    Commented Dec 20, 2022 at 1:30
  • @FranckDernoncourt also, note that I never used the word "rephrase." I said you need to take responsibility for every word of your answer being what YOU want to communicate. I define the phrase "in your own words" to mean "in words you yourself would use." Anyway, if you think it's an absolutely perfect explanation and perfect wording that can't be improved upon, then quote it directly and say you are doing so and also mention that you can't say it any better yourself. That's already what we would do with any other source of information if the explanation were perfect.
    – Wildcard
    Commented Dec 20, 2022 at 1:33
  • I would be in favor of a complete ban. What's the point in posting AI-generated answers, or even worse, chatGPT-generated ones which don't even have an AI behind them, even with attribution? That just wastes everyone's time since we still need to check that the machine actually wrote something useful and if anyone wants to know what chatGPT would have said, they can go and ask it. So why post them here?
    – terdon Mod
    Commented Dec 31, 2022 at 17:52
  • The U&L community has spoken!
    – Jeff Schaller Mod
    Commented Jan 15, 2023 at 15:20
  • @JeffSchaller: Errr??? Could you please provide more details about the rationale leading to such a statement/judgement? (That said, I do not necessarily disagree with this answer in particular, and of course do not challenge the authority of the OP to accept one answer.)
    – MC68020
    Commented Jan 17, 2023 at 23:30
  • @MC68020 this is just the way meta in particular, and voting in general, works: if you don't vote, your voice isn't heard. This is imperfect, but it's the best we can do. And, actually, this is among the better cases where quite a few people have voted. This is how community support has always been measured and, imperfect as it is, it's the best we have.
    – terdon Mod
    Commented Jan 18, 2023 at 18:48
  • @MC68020 After a month of this Meta question being featured, we've accumulated 9 answers/perspectives, none of which appear to me to have any support for allowing ChatGPT-generated answers. It's important, like you said, to keep in mind that only a small subset of active users (who are themselves a small subset of site visitors) actually vote or participate on Meta, but like terdon said, it's the best we have. I'm confused; are you seeing support for allowing ChatGPT-generated answers here?
    – Jeff Schaller Mod
    Commented Jan 18, 2023 at 19:52
  • I'm with banning chatGPT. If you want to ask that bot, go ask it directly instead of here. Forums are meant for asking human experts for help, not an AI that doesn't know what it says and may write answers that sound right but are actually wrong. Until the AI scene changes and AI becomes good enough to answer like human experts, the ban should stay.
    – td211
    Commented Jan 20, 2023 at 7:48
  • @Braiam So far, what I have seen on the site, and what I have seen by directly interacting with ChatGPT, is that it gives professional-looking, convincing, wrong answers. They are wrong often enough that I'm totally OK with a blanket ban. We don't want ChatGPT-generated code in people's production environments. You may personally not blindly trust ChatGPT, but you can be absolutely sure that many of our visitors here blindly trust the code that they find on the site, no matter whether they understand the code or not.
    – Kusalananda Mod
    Commented Jan 20, 2023 at 22:08
  • @Braiam I will not change a policy based on your personal wants, nor will I be forced to do something that I don't think is right. Please take this to the community managers.
    – Kusalananda Mod
    Commented Mar 15, 2023 at 15:26
  • @Braiam I'm afraid we're probably looking at different aspects of this, here. The bulk of the opposition that I'm seeing in the answers here are not that the answers are necessarily incorrect, but are about attribution, original research, and appearance of authority, causing extra work for reviewers and future readers to suss out the truth. A plagiarised "correct" answer is still unacceptable, which -- for me -- is the source of the concern. (1/2)
    – Jeff Schaller Mod
    Commented Mar 15, 2023 at 16:55
  • If a person can't personally explain, defend, and update their answer, then it isn't their answer. Having ownership of a Question and Answer is central to SE's model. (2/2)
    – Jeff Schaller Mod
    Commented Mar 15, 2023 at 16:55
  • @Braiam plagiarism isn't acceptable. Period. Whether you are presenting a machine's work or another human's work as your own makes no difference. You are still pretending that you wrote something you did not write and that isn't welcome here and never has been. Even if CGPT were 100% accurate every time, we still shouldn't allow it to post answers here. If it were always right, then we would close down the site, but we wouldn't act as a frontend to CGPT. Anyone who wants to ask CGPT is free to do so, but people come here to get answers from actual human experts.
    – terdon Mod
    Commented Mar 15, 2023 at 18:37
  • @Braiam I don't understand. You are showing a post that has not been deleted and it wasn't deleted precisely because it has attribution. Now, I would argue that it is still useless and I just don't understand why anyone would post machine-generated answers here: if you want machine generated, go ask the machine. But it isn't breaking the policy as stated and hasn't been deleted. The comment is simply suggesting that it would be better to rewrite. And yes, of course it would be better to rewrite. Same as we do when using any other external source.
    – terdon Mod
    Commented Mar 16, 2023 at 16:18
13

I think it is really important to understand the thing that we are banning or not banning.

As far as I'm aware, ChatGPT and all other successful AI in this space are NOT doing original research to produce an answer. E.g., they are not running the Linux commands they suggest, or writing a proof of concept.

These models are trying to crack the Turing Test with an ever higher success rate [1]. What's really interesting is the extent to which the Chinese Room argument has proven more meaningful than it first appeared: there is a very large gap between convincing humans that an AI understands something and the AI actually understanding it [2].

The information provided by ChatGPT is very intelligently collated information from across the internet. But this makes its position in the world similar to that of Wikipedia and Google Search. These are very fine tools, but they should never be considered authoritative sources of information [3].

Unlike Wikipedia, ChatGPT answers are very hard to trace. With copy-paste answers from Wikipedia, we can not only trace the origin of bad answers, but actually go and correct it at the source! [4] As far as I know, ChatGPT has no such capability.

The sheer volumes that have been seen make them a real problem that needs to be dealt with firmly.


Thanks to Kamil Maciorowski for this comment:

If this answer is true then it's very relevant.

That answer nails it. From discussions with those I know in the field, I believe that answer is very true. These AIs are super smart at word play. Really very smart. But they are not conscious. Not yet.

E.g., the last time I heard, "entity linking" across many unconnected sources remained a bit of an unsolved problem. If you see the name "Mickey Mouse" in a document, it's hard to be sure whether the document was discussing the Disney character or using it as a euphemism like "Mickey Mouse operation" to mean silly or poorly run.

Besides that, AI has made some amazing advances in recent years with various "models" for various specific tasks: image recognition, image generation, text generation. And logical reasoning has long been relatively trivial in AI. But one thing that remains frustratingly out of reach is a good way to connect these different models into a single system.

In short people should not hold their breath waiting for a really great language model to be connected to a really great logical reasoning engine.


To my mind, the idea of allowing AI answers onto SE must wait until an AI can take a question, read some manuals, and then run some tests to prove the solution worked.

I.e.: AI answers must wait until the AI actually understands what it is talking about.


  • [1] The next frontier is fooling people with subject matter expertise.
  • [2] My own experience with interviewing tech candidates for a role is that even some humans can pass the Turing Test but ultimately show zero understanding of the real subject matter when presented with our trivial tech tests.
  • [3] Wikipedia even has a ban on original research.
  • [4] My only ever Wikipedia update came from just this case.
  • If this answer is true then it's very relevant. Commented Dec 14, 2022 at 13:53
  • "AI answers must wait until the AI actually understands what it is talking about." How can one show that the AI actually understands what it is talking about? Commented Dec 19, 2022 at 4:22
  • @FranckDernoncourt described one paragraph earlier. The AI filtering linguistic knowledge with experimentation would be a good marker. Commented Dec 19, 2022 at 8:19
  • AI will generate original research the same way other intelligent creatures do. They will observe their surroundings, gather information and make assumptions.
    – Braiam
    Commented Dec 27, 2022 at 17:38
9

I strongly believe we should ban answers generated by ChatGPT -- or any other AI, for that matter. I actually think (concerning U&L topics) that these answers are bad, except when the question is a very easy one. These answers might look good at first sight, but are often generic, miss the point, and lack the real-world knowledge and experience of all the intricacies of doing real Linux sysadmin work.

StackOverflow has now empowered moderators to ban users who post ChatGPT-generated content for up to 30 days, and added a banner on their site. We should perhaps do the same.

  • Yes, to the 30-day ban. Commented Dec 13, 2022 at 9:50
6

The worst part of it all is well worded by the OP of the meta-stackoverflow link you suggested:

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good.

In my letter to Santa… I asked for… a bottle of wine. I would not really mind getting vinegar in return. But… I would hate it if the vinegar was packed into a premier cru bottle.

This would certainly not only mislead the author of the question but also discourage members of the community from contributing and providing their own true knowledge based on their own true experience.
The latter point is in contradiction with the philosophy I understand from a comment by terdon:

I think the idea is we'd rather have more answers than one, accepted one.

I acknowledge that AI-generated answers could be good at answering those questions from students asking the community to do their homework.
But do we really want that sort of question here? I don't think so.


However, I would strongly support the idea of a bot pushing an AI-generated answer on old && unanswered questions.
This would necessarily provide more value than the disgraceful bump of the community bot.

  • I agree with all of this answer except the last portion. I strongly oppose a bot pushing AI-generated answers to old and unanswered questions. Putting vinegar in a wine bottle doesn't become an acceptable thing to do just because you're sticking it on a shelf next to a dining table that nobody seems to use.
    – Wildcard
    Commented Dec 9, 2022 at 19:47
  • I wouldn't mind if SO somehow rendered AI-generated answers somewhere down the bottom of the page, very clearly highlighted the whole answer's background, and added a little warning symbol. But for me this would have to be done as a side agreement between SO and ChatGPT (or other). Building on Wildcard's metaphor, there's nothing wrong with vinegar when it's put in a vinegar bottle. Commented Dec 14, 2022 at 2:51
  • @PhilipCouling do I understand that you're saying an unvetted and potentially wrong answer is better than no answer at all, simply because it might be right? Commented Dec 20, 2022 at 1:10
  • @roaima no, that's not what I meant. I wouldn't call such things answers. They should not be rendered as answers. But that's not to say that such AI is no better than a stopped clock ("might be right" ... it is, twice a day). I think the debate about whether or not there is a place for such tools at all is much more nuanced. We rather assume most people do a Google search before coming here, yet Google's engine is ultimately no better conceptually (except it cites its sources). Commented Dec 20, 2022 at 9:16
  • "We rather assume most people do a Google search before coming here" -- that's optimistic of you :-( Commented Dec 20, 2022 at 9:33
  • @roaima I toyed with that idea on my answer, but as a pre-ask thing, rather than after the fact. Unanswered questions can go the way of the dodo instead.
    – Braiam
    Commented Dec 27, 2022 at 17:26
6

I vote for a blanket ban on AI-generated answers, just like Stack Overflow.

The primary problem with AI-generated answers, as is the case on SO, is their high rate of inaccuracy disguised in a good-looking form. This goes against SE's goal of being a repository of useful knowledge. We want treasure, not elegant garbage, and that is not something AI can reliably generate for us.

Attribution is not even the second problem. What comes next with AI answers is the human incentive behind them. We expect users to provide quality answers, or at least to have the intention of adding quality answers. Users coming with verbatim copy-paste from AI output are unlikely to be willing to contribute positively, especially when posting in volume. They're only coming for rep farming or what have you, and they're more likely to add moderation workload than valuable content.

In this respect, users utilizing AI to improve their answers pose minimal problems to us, if any. These answers are in essence human-composed content, decorated with AI-aided language & expression. This does not violate the intention requirement as described in the previous paragraph.

Finally, on attribution: more often than not, lack of attribution alone is a minor issue, if it is one at all. If a decent answer lacks proper attribution, we ask for clarification in the comments and fix it up if needed. If it's a bad answer, we're not even concerned with whether it's properly attributed. Attribution alone has minimal influence on the quality of the answer.

  • "is their high rate of inaccuracy disguised in a good-looking form." [citation needed]
    – Braiam
    Commented Jan 10, 2023 at 14:01
  • @Braiam Only rephrased from the SO announcement, the 3rd paragraph, beginning with "The primary problem".
    – iBug
    Commented Jan 10, 2023 at 16:20
  • Which is just as unsubstantiated from the get-go. If you are going to use someone as a source, make sure that the source's sources check out. BTW, the only example of ChatGPT on this site was a mostly correct answer.
    – Braiam
    Commented Jan 11, 2023 at 17:06
0

My personal opinion about this is that I'd like to see correct and well-written answers. If an entity can explain the underlying issue that gives rise to the problem, show with examples that the given solution solves the problem for the user in question, reason about assumptions made, respond to comments in a helpful way, and cite necessary sources, then whether they are an AI or not does not matter.

It is clear from what we've seen on the site so far that AIs are not yet at that point though.

I've removed most of the following paragraph as I no longer think we should allow generated answers at all, not even as attributed wiki answers. Answers written using ChatGPT or any other AI software should be community wiki answers and disclose that they were provided by a bot (no matter if the bot wrote it all or just helped with the technical bits). This is partly so that the answer can be appropriately reviewed by a human for the correctness and partly so that the reputation from votes on the site goes to users who use their own brains to solve issues. Failing to attribute the source of text that is taken from elsewhere is already not allowed on the site.

Artificially inflating one's reputation using an AI tool is a form of deception. This becomes extra serious when users use their reputation on this site to leverage influence in other places (which may well happen in workplaces or other social situations).

ChatGPT or other software should be allowed to help with the grammar of the natural language used in answers, provided the user proofreads the text before it is posted. This is akin to using tools like Grammarly, which I see nothing wrong with.

Summary:

  1. Discourage AI-generated answers unless the AI is so good it's indistinguishable from a human (able to respond and correct their answers etc.) There is currently no AI that is that good. If there were, we would not detect them, so it would be pointless to disallow them.

    Note that "discourage" could well mean "suspending users using AI". If they have posted a clearly identifiably AI-generated answer as a community wiki answer, they should not be suspended and the answer should instead be reviewed by humans. The threshold for deleting these answers would be considerably lower than usual.

  2. Material from external sources (which IMHO includes AI) must be attributed. This is already a site-wide requirement, see e.g. Users are calling me a plagiarist. What do I do?

    In the context of Stack Exchange sites, any copying and pasting of any amount of text or code that wasn't written by you is plagiarism if you try, explicitly or implicitly, to pass it off as your own work.

    Remember, you still have to write an actual answer, in your own words. A post that consists only of copied text, even when attributed, is not your work either. Use quotes sparingly, to support your own words.

    Ignoring this would normally result in a suspension, and possibly a network-wide suspension for repeat offenders.

  3. Artificially inflating one's reputation by means of AI should not be allowed. It is already not allowed to artificially inflate one's reputation (or other metrics, such as the "people reached" count).

    Again, suspending users who do this is not uncommon.

  • "It is clear from what we've seen on the site so far that AIs are not yet at that point though" -- what examples can you show of this? Because I haven't seen actual AI answers at all (yes, it's pedantic, but it shows how little we know about the topic).
    – Braiam
    Commented Dec 27, 2022 at 17:01
  • @Braiam There are several answers (somewhere between 10 and 20) that were clearly from ChatGPT, which are now deleted due to being inaccurate and due to the user(s) not being able to correct them or even respond when asked about them. It would not be proper of me to expose the users doing this. We have been deleting these when it's clear they are incorrect, and they have started collecting deletion votes. There is right now one not deleted answer that clearly labels itself as from ChatGPT: $PS1 vs $PROMPT_COMMAND in bash?
    – Kusalananda Mod
    Commented Dec 27, 2022 at 20:34
  • @Braiam Moderators from other sites have also tipped us off about users employing ChatGPT for answers (and, in a couple of cases, questions). These are usually from users that post a large amount of generated answers on various sites (on topics ranging from cooking to astrophysics and interpersonal relations) within a ridiculously short timespan. The answers are usually junk and, therefore, a waste of review time. Most answers follow a similar formula, making it easy to spot them.
    – Kusalananda Mod
    Commented Dec 27, 2022 at 20:43
  • @Kusalananda When I said "I haven't seen actual AI" I was explicitly excluding chatGPT, since it isn't AI; it's machine learning. Also, the fact that it's only 20 in the worst case, and all of them wrong, is very low for such a "big problem" that requires "immediate" attention. ChatGPT is capable of generating right and wrong answers (like humans). BTW, the answer that you linked is correct and avoids inserting personal preferences like the other answer does. Basically, it's neutral, providing only the information asked for (right or wrong).
    – Braiam
    Commented Dec 28, 2022 at 15:06
  • @Kusalananda "The answers are usually junk and, therefore, a waste of review time" -- I mean, it was usually like that before, if you spend time in any review queue; it's just that the effort to write them seemingly coherently is lower. As I said in my answer, I've managed to extract actual good information, better written than I could do it myself. If the answer is incorrect, we expect votes to reflect that (although, from the answer you linked, it seems that humans can't see past their own pettiness :/)
    – Braiam
    Commented Dec 28, 2022 at 15:10
-4

Machine-generated content (vice (artificial) intelligence-generated content) is certainly useful as a tool, but machines are fundamentally unable to understand the full context of the question or answer.

Machines are only as good as their programming, and they are unable to defend or justify their results beyond the fact that their algorithm created it. With machine learning/neural network models, the results can even be stochastic vice deterministic.

The value of forums such as StackExchange is the exchange of ideas and the challenging of assumptions or beliefs. Machine-generated content cannot question its own algorithms or neural network training to justify its output, or more importantly, to adapt or update its output when presented with problems or flaws.

ChatGPT and other machine learning/neural network tools are just that--tools. Until a machine can participate in a discourse about a topic and grow from intellectual challenges, it must remain a tool in the hands of a human.

5
  • "machines are fundamentally unable to understand the full context of the question or answer" -- except that this is something that the asker must provide either way. So, if a machine can be misled into providing an incomplete/wrong answer, so could a human.
    – Braiam
    Commented Dec 29, 2022 at 19:53
  • @Braiam, while humans can be misled to provide an incomplete/wrong answer, other humans are able to challenge and engage with said human. Said humans can have a real discourse. Even if the machine could change its own algorithms and build new neural networks in real time, would it be true learning or would it be more mimicry and guessing? Commented Dec 29, 2022 at 20:18
  • Yeah, sadly ChatGPT is too "polite" to challenge you on your assumptions, but I would argue that most humans are too "sensitive" when someone challenges the framing of their questions. So, having these two together while allowing everyone else to challenge/be challenged on the actual site seems like a win to me. (BTW, that would serve to train the model better, so it challenges your assumptions too; double win.)
    – Braiam
    Commented Dec 30, 2022 at 12:35
  • @Braiam, in the case of ChatGPT, the model is already trained and is not going to grow from additional data. In a sense, it's frozen in time and cannot benefit from new information until a new version is released, and that's only if the ChatGPT overlords implement changes based on feedback -- and at the scale of answering Internet questions, I could almost guarantee 99% of ChatGPT's incorrect answers will go unaddressed. At least with sensitive humans, we can flag and downvote the nasty ones. Commented Dec 30, 2022 at 22:27
  • That presumes that an important part of ChatGPT answers are wrong. And as noted elsewhere by a moderator, the only answer provided by ChatGPT was "mostly correct".
    – Braiam
    Commented Jan 11, 2023 at 17:10
-6

It's important to separate Artificial Intelligence from Machine Learning. ChatGPT is firmly the latter: it doesn't do anything we expect from an intelligent creature, but seems intelligent through the emergent properties of the system. Like an ant hill, it's smart in the way it responds to stimuli based on "simple" rules. Given this, it obviously has limitations and does pretty stupid things, like other systems*, which is not unheard of among humans either, yet it tends to be more successful in a specific set of scenarios, either by context or because it has better information.

My interactions with ChatGPT show that it can generate incredibly long responses to tasks, which I can then refine by hand, consult it about circular references, and have it implement algorithms without errors (which is something I usually do not consider myself capable of). And yet, it annoys me to no end that it gets the same things wrong that most humans get wrong: economic theories and concepts, history, philosophy, etc. The model is, after all, a popularity contest, and no amount of vetting will ever find all the errors, just the most popular ones.

Instead of any "banning", we should ask SE to actually implement ChatGPT as a pre-ask option for askers. If the askers are satisfied with the results, we could have actual experts review them and post that as a Q&A pair. Basically, embrace the thing, since it has the potential to offload much work that we shouldn't have to do otherwise, and let us focus only on verifying the information. After all, it's easier to spot a wrong answer than to generate a right one**.

* And despite this, human players preferred to work with it precisely due to its lacking those same human characteristics, like pettiness, vengeance, or feeling the blues.
** This is basically "easier said than done", adapted to this context.

  • I hope that there's a way to integrate or embrace this new system, because I'm hearing that it's often difficult to spot the incorrect pieces of these generated answers -- that it can require a subject matter expert to do so, which raises the bar considerably in the review queues. I'm curious to see how this all plays out!
    – Jeff Schaller Mod
    Commented Dec 27, 2022 at 18:01
  • @JeffSchaller "that it's often difficult to spot the incorrect pieces of these generated answers -- that it can require a subject matter expert to do so" -- I hope you understand how irrational that train of thought is. You need to know about the topic to be able to know when the thing is wrong. Expecting someone who doesn't know anything about a topic to know when someone is saying something wrong is a bad start to the argument. That's why I proposed my solution: there's someone who can immediately verify this by testing: the asker.
    – Braiam
    Commented Dec 27, 2022 at 18:06
  • Ahhh, sorry for the confusion. I slipped between two ideas: (1) yours of using the program during asking, and then (2) the concern over using it to generate answers.
    – Jeff Schaller Mod
    Commented Dec 27, 2022 at 19:49
  • ChatGPT generates confident-sounding nonsense. In the realm of computer programming or technical measures, its answers are wrong and usually meaningless. I recently went through several "recommendations for improvement" that a colleague generated with ChatGPT for a piece of open source code; literally all of them were nonsense. Plausible sounding, but actually meaningless when inspected. It would be an EXTREME disservice to our users to put this sort of nonsense in front of them when they come here for help.
    – Wildcard
    Commented Jan 18, 2023 at 21:29
  • @Wildcard I imagine that you have lots of examples. The examples I've seen of ChatGPT being wrong are a) because most people are actually wrong and ChatGPT parrots it back, b) some strange fixation with peregrine falcons, c) not knowing how to count. Otherwise, it has demonstrated itself to be a powerful tool for IT and non-IT questions I know the correct answer to, and for bouncing ideas off.
    – Braiam
    Commented Jan 20, 2023 at 1:02
