
So I see a question....

I ask ChatGPT to answer it....

I copy/paste that answer here....

Problem??


Clearly, if an answer is inaccurate, other users will most likely vote accordingly. However, inaccurate answers do nothing but lower the value of this Stack, even when they are downvoted or ignored.

At times an AI answer may be close to accurate, or contain a few key phrases that make it appear somewhat accurate. However, due to the lack of technical knowledge all AIs possess, such answers are detached from, or devoid of, some specific or intricate detail that an actual person would be aware of. This can lead users down a futile path only to discover something is missing, which does more harm than good. I don't have any statistical data, but I'd wager that more often than not these AI-generated answers result in user frustration more than anything. And they may deter other users from posting answers based on personal experience or knowledge.

Recently a new user posted a couple of answers. I commented on both. Both answers were clearly inaccurate. One contained outright misinformation, instructing readers to use features in Illustrator that do not exist. The other detailed steps that were not logical for the question asked.

One is here: creating animation in illustrator or photoshop. Originally there was an Illustrator section that was essentially a duplicate of the Photoshop section, which made it inaccurate: there is no timeline in Illustrator.

The other is here: Subscript to a subscript in Illustrator? (you need enough rep to see the deleted answer). This answer was entirely, flat-out wrong. It contained steps that were nonsensical to anyone familiar with Illustrator and ultimately would not produce a proper outcome. The user deleted the answer after I commented on it. (For me, the upvote here is just laughable... how can someone upvote such a clearly incorrect answer?)

To be fair... I had no idea these were "AI generated" until the user disclosed that in comments on the first answer above. I had assumed there was perhaps a language barrier and the user was trying to answer but stumbling over words and phrases. After the AI disclosure, it all made perfect sense as to why both answers were so odd.


I wonder... should AI-generated answers be banned at GD.SE? At the very least, would it be a good idea to require attribution if one wishes to post AI-generated content?

[This question was not generated with an AI.]

  • We haven't had much activity on meta recently... Let's hear you, people of GD.SE! Thanks for the post Scott.
    – curious Mod
    Commented Apr 4, 2023 at 22:23
  • Relevant: meta.stackexchange.com/questions/384922/…
    – curious Mod
    Commented Apr 4, 2023 at 22:26
  • Sure, you can post an AI answer… if you have thoroughly vetted it yourself for accuracy. [I've tested ChatGPT on many topics, especially tech issues I know cannot be done. It still blithely gives as good-looking an answer as it does when it's right.]
    – Tetsujin
    Commented Apr 5, 2023 at 16:11
  • @Tetsujin Could you post this as an answer so that people could vote accordingly? Your answer raises points that would be good to elaborate on.
    – curious Mod
    Commented Apr 5, 2023 at 17:28
  • @curious - can do. I'm not sure how much further it will elaborate...
    – Tetsujin
    Commented Apr 5, 2023 at 17:35
  • We all seem to agree more or less that AI answers should be prevented, but how do we determine whether an answer is written by an AI? If we can't, a ban would be symbolic only. If an answer is correct, we can't really close it just because it "reads like an AI generated answer", can we?
    – Wolff
    Commented Apr 5, 2023 at 20:01
  • @Wolff: We usually use AIs trained to detect AI-generated text (no joke).
    – Wrzlprmft Mod
    Commented Apr 5, 2023 at 20:54
  • @Wrzlprmft I see. I've heard about those. I'm curious about their success rate. And worried it will become harder to detect AI in the near future.
    – Wolff
    Commented Apr 5, 2023 at 21:00
  • Will we need AIs to detect if an AI is being used to detect an AI?? I'ma tellin ya .. Skynet 2024!
    – Scott
    Commented Apr 5, 2023 at 22:16
  • @Wolff: The tools give you a score in terms of probabilities. Also, nobody has complained about a misdiagnosis so far.
    – Wrzlprmft Mod
    Commented Apr 6, 2023 at 9:10

4 Answers


There are quite a few issues that I see with answers generated with the help of AI:

  • AI is very good at generating text that looks right, even though it is completely wrong. A user able to tell these two apart fundamentally does not need an AI to generate the answer in the first place.

  • It takes a lot more energy to refute false information than it takes to create it (Brandolini's law). Especially if one can create it at the touch of a button. We cannot expect our users to wade through hundreds of lines of auto-generated text to catch errors.

  • Our Stack was created to be a community of experts. Large language models are not, and never will be, experts.

  • We encourage users to be concise, and to use steps where helpful. AI tools like ChatGPT tend to generate a lot of text, since that is what they are built to do.

  • We provide tools for discussing answers, like comments, chat and answer editing. These assume the person posting the answer understands what they are posting and is willing to discuss its contents. Neither assumption holds for people copy-pasting content from chatbots.

  • In principle, one should attribute text one did not write. But attributing the chatbot might not even be enough. The problem is the question: did the chatbot write it, or merely recompose text from other sources? It is a legal swamp we as a Stack are not prepared for.

  • If I wanted an AI to generate an answer for me, I could put the question into a chatbot. I don't need others to do this for me.


In short:

Overall, I think it is a question of effort. This Stack prides itself on providing answers by people who have invested their time in learning their craft. These people have learned from others, from their own mistakes, from research, from trying.

Their answers are useful because they can channel that effort into a concise, precise and correct answer. Chatbots do none of these things.


In even shorter: BAN chatbot answers outright, they do more harm than good.

  • Hey, I don't want to be banned.
    – joojaa
    Commented Apr 6, 2023 at 14:25
  • The people have spoken @joojaa
    – PieBie Mod
    Commented Apr 7, 2023 at 13:44

My personal opinion is that Stack is for humans, by humans. There are just so many things no AI will ever be aware of when it comes to the aesthetics or subjective nature of "graphic design".

For software-based questions, it's clear to me that the AIs just spit out steps scraped from the web. There's no telling whether these steps are accurate, justified, or even possible at times.

My vote is NO. AI generated content should not be allowed. It does not add any value to GD.SE and can only serve to lower the overall quality of content here.


Sure, you can post a ChatGPT answer… if you have thoroughly vetted it yourself for accuracy.
…and also if you credit your source.

I've tested ChatGPT on many topics, especially tech issues I know cannot be done. It still blithely gives as good-looking an answer as it does when it's right. Half my problem with it is I lose interest before I get to the end of its 'essay'. To say the least, it's prosaic. Verbose. Long-winded. Filling for time. It's not so much trying to prove a point as keep going so long you forget what your point was in the first place.

How can you test whether this theory is correct?
Let's ask ChatGPT.

Me: How can I test whether ChatGPT gives me a correct, true answer?

ChatGPT: As an AI language model, ChatGPT's responses are generated based on patterns it has learned from large datasets of human language. While ChatGPT is capable of producing responses that are grammatically correct and coherent, it is not always possible for it to provide accurate or true answers to every question.

To evaluate the accuracy of ChatGPT's responses, you can compare its answers with credible sources of information such as scientific research, academic publications, or reputable news sources. You can also ask ChatGPT follow-up questions to clarify its responses and probe deeper into the topic.

Additionally, it is important to remember that ChatGPT is a machine and can make errors or provide biased responses due to the limitations of its training data. Therefore, it is always a good idea to approach its answers with a critical eye and to verify the information with multiple sources whenever possible.

Source: ChatGPT

In other words, you don't.
This is the 'right answer' for a given value of 'right', but it's couched in such a way that you either have to blindly believe it [copy/paste], or go do your own research to verify it.
People just don't answer like that, only politicians answer like that. By the time it's got to the end you've forgotten what you asked.

In short, it is neither use nor ornament.

  • I had a student design a simple circuit with ChatGPT; he was very close to accidentally welding stuff to the table when powering up. Just saying, it's not all just fun and games.
    – joojaa
    Commented Apr 6, 2023 at 14:24

Machine learning (ML) tools are one more tool in the arsenal. It is okay to use them as tools, though only for certain things that somehow reflect consensus.

I've already had near-accidents in the workshop when students blindly trusted ChatGPT to solve technical problems. Not terribly dangerous, mind you; only stuff that could at most have led to the loss of two fingers and an eye. Which, all things considered, isn't all that bad, but bad enough.

So yes, you should be able to use ML tools if you find them useful to format your answer, draw a diagram or help you solve the problem. But you should be:

  • Responsible for the answer

  • Aware that this kind of behaviour may poison the well. I mean, what happens to the usefulness of models that train on generations of text that is predominantly model output?

    At the end of the day, we still need sources for the AI to learn from.

The problem is that if you just want to create a plausible presence as a person you are not, then you might be tempted to use this for purposes that aren't as clean as they may seem.

  • I appreciate that you are using the term ML. When/if we get true AI, it will be a whole other discussion. And the AI will have its own opinion.
    – Wolff
    Commented Apr 7, 2023 at 12:56
