21
\$\begingroup\$

There seems to be a consensus that ChatGPT answers should be deleted.

What about questions that are essentially "ChatGPT told me X. Is X correct?"

This question is a long exposition of some responses from ChatGPT about antenna models. The asker claims to want to know about antenna models, yet nearly all of the text references ChatGPT. In the comments, the asker feigns surprise that so much of the discussion (including a number of comments from me) is about ChatGPT rather than antenna models, and several commenters are exasperated by all the references to ChatGPT in the question.

This has happened to other questions as well. A question that should have been about some subject X turns into a mess: explanations of why ChatGPT cannot answer questions, mixed with attempts to answer the actual question and corrections of what ChatGPT got wrong.

In this case, the asker seems to have gone straight to ChatGPT to "learn" about antenna models rather than cracking open a book for real explanations, because that would take too long. (See the comments under the question.)

Should we just close ChatGPT-driven questions with a general "ChatGPT don't know Jack," or is there a better way to handle them?

\$\endgroup\$
7
  • 3
\$\begingroup\$ Any EE question that asserts things that are wrong or inaccurate is a problem, because in order to write an answer you also need to undo the incorrect assertions. If ChatGPT (I'd never heard of it until today) produces problematic statements, then this is just the same: you have to untangle the way the OP has interpreted those statements. Even when ChatGPT does make good statements, in my very limited experience they are open to interpretation, and that interpretation will lead to errors --> incorrect assertions in EE questions. Back to what I originally said. \$\endgroup\$
    – Andy aka
    Commented Jan 8, 2023 at 12:47
  • \$\begingroup\$ Relevant discussion from meta.SE: meta.stackexchange.com/questions/384396/… \$\endgroup\$ Commented Jan 8, 2023 at 14:44
  • \$\begingroup\$ FYI: I added this question to the list on MSE \$\endgroup\$
    – toolic
    Commented Jan 8, 2023 at 15:27
\$\begingroup\$ Just edit out the ChatGPT reference. Obviously, if the OP is clearly the semi-intelligent, zero-effort type, then killemall. \$\endgroup\$
    – peterh
    Commented Jan 9, 2023 at 16:21
  • 7
\$\begingroup\$ @peterh The problem is that this is hard. In the example linked above, the OP had been tricked by ChatGPT into believing an elaborate made-up version of antenna modelling, and insisted that this understanding was worth debating. How much of that is ChatGPT reference, and how much is original? The problem there really is that ChatGPT intentionally blurs (and is used to blur) the boundary between correct reproduction and fiction, with no addition of factual understanding. That's really the core of the problem – ChatGPT sounds smart, so it's very hard for humans to believe it's dumb as dirt. \$\endgroup\$
  • 1
\$\begingroup\$ @MarcusMüller Thanks. So there was a reasonable OP; we could say a beginner antenna-design guy. He was smart enough to think about antenna designs and ask about them intelligently. He was not smart enough to detect that ChatGPT gave him bullshit. I think that knowledge level makes him a nearly ideal OP. Now compare this to the case where he gets the same antenna model from a bad book. What should happen to his question then? And what happens when future OPs already know about the ChatGPT ban and conceal their use of it? \$\endgroup\$
    – peterh
    Commented Jan 10, 2023 at 11:32
\$\begingroup\$ @peterh Sorry for the late response: there's a very human limit to how many wrong books get written, whereas ChatGPT can produce the amount of text in all RF textbooks ever written in minutes; it's an imbalance between the available workforce and the false-facts machine it is trying to correct. \$\endgroup\$

3 Answers

22
\$\begingroup\$

Yes.

The discussion here,

Temporary policy: ChatGPT is banned

mainly focuses on how hard ChatGPT makes it for experts to vet generated answers for errors, and how it makes that all but impossible for less experienced people, i.e., the people actually asking the questions.

The main-site question that prompted your Meta question is an excellent example of how hard it is for a non-expert to disbelieve what ChatGPT tells them.

Because such questions sink far more time than they cost the ChatGPT system and the asker to generate, and are not based on actual research, they should at the very least be downvoted.

Think about it this way: ChatGPT is, literally, just a fancy random generator that produces plausible-looking text. It has zero understanding of EE (or of anything, for that matter); it is just astonishingly good at creating text that looks the way it would if a knowledgeable human had written it. The veracity of that text is totally irrelevant to the model!

In other words, ChatGPT is designed to make the nonsense it spills as hard as possible to spot, which maximizes the effort of debunking its statements.
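To make the "fancy random generator" point concrete, here is a minimal sketch of the sampling loop at the heart of any autoregressive text model. It's a toy word-level model in Python; the vocabulary, probabilities, and names are invented for illustration, and real LLMs condition on long contexts with a neural network rather than a lookup table, but the generation loop has the same shape. Note that nothing in it ever consults a fact:

```python
import random

# Toy autoregressive text generator: each next word is drawn from a
# probability table conditioned only on the previous word. The table
# below is made up for illustration. "Plausible next word" is the
# entire objective; no step ever checks whether the output is true.
NEXT_WORD = {
    "the":       [("antenna", 0.5), ("impedance", 0.5)],
    "antenna":   [("radiates", 0.6), ("impedance", 0.4)],
    "impedance": [("is", 1.0)],
    "is":        [("50", 0.7), ("matched.", 0.3)],
    "50":        [("ohms.", 1.0)],
    "radiates":  [("efficiently.", 1.0)],
}

def sample_next(word: str) -> str:
    """Sample the next word from the distribution for the current word."""
    words, weights = zip(*NEXT_WORD[word])
    return random.choices(words, weights=weights)[0]

def generate(start: str = "the", max_tokens: int = 8) -> str:
    """Generate fluent-looking text, one random word at a time."""
    out = [start]
    while len(out) < max_tokens and not out[-1].endswith("."):
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate())  # e.g. "the antenna impedance is 50 ohms." -- fluent, never verified
```

Whether "50 ohms" happens to be true of any particular antenna never enters the loop; scaled up enormously, that is the situation with ChatGPT.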

Asking without any understanding of one's own basically falls under the old "too broad" close reason (now "needs more focus", an SE-wide rewording of the close reason that I still don't like).

Because we're not taking anything away from people who actually put in the work to read up on a topic on their own, I'd say we don't need much nuance here: if any post (question, answer, or comment) is primarily based on a generative text model, it's immediately up for deletion.

In the end, ChatGPT is quite a lot like online trolls on social media: it's cheap for them to produce counterfactual or offensive content, while it requires high effort to write answers and rebuttals. Let's not feed the trolls – er, chatbots. Identify, banish, move on.

\$\endgroup\$
4
  • 3
\$\begingroup\$ I think the troll comparison is spot on! While I fully agree that it should be banned from questions (and obviously answers!) for now, I think it is not purely destructive the way ordinary trolls are. This should be revisited in a few years! Wikipedia and similar sources used to be frowned upon in the 2000s, before swarm quality control increased their trustworthiness. The AI's users perform something very similar to that "swarm quality control". \$\endgroup\$
    – tobalt
    Commented Jan 9, 2023 at 13:49
\$\begingroup\$ Absolutely, and I can see some places on SE where things like ChatGPT would be worth a lot; namely, asking them "If I wondered about XYZ, what questions and answers from here should I read, and what are the core statements of these?". Rules for social constructs always need to evolve. \$\endgroup\$
  • \$\begingroup\$ "It has zero understanding of EE (or anything for that matter)" -- I take issue with this. Having worked with it a little, it is clear that its model does encode understanding of many subjects, including a lot of EE related topics. The issue is that once you scratch the surface it doesn't take long before you reach something that it doesn't have any knowledge of, and 99% of the time it just invests something plausible sounding to fake it, and you need to be an expert to work out when. But overall, I've found most questions I've asked it, it has given correct answers to. \$\endgroup\$
    – occipita
    Commented Jan 21, 2023 at 21:22
  • 4
    \$\begingroup\$ @occipita sorry, it really does not have understanding. It seems to be able to fluently put together working answers, and that works better at more superficial levels that have been discussed extensively. But that's not understanding, that's the ability to put together sentences as someone with understanding would. But I really don't want to argue with you here - I think we both agree the problem is that there is no indication for the user whether what is said has much resemblance to truth, and that's what makes it bad as a basis for questions. \$\endgroup\$ Commented Jan 22, 2023 at 7:29
1
\$\begingroup\$

Yes, but...

The writing is on the wall for everyone. First it came for the artists and storytellers. Engineers and scientists will be next, eventually. However, it may take another decade or more before the descendants of ChatGPT and other large language models start to turn out accurate and innovative answers (and questions).

Whether it would be possible to even detect them is a matter for an Engineering Turing Test.

We might want to ban them, but by that time it will be irrelevant.

\$\endgroup\$
6
  • 3
    \$\begingroup\$ fair enough, but "posts that experienced engineers in the field can't tell from a real engineer's texts" have a high likelihood of actually being correct – so, by then, the problem we're solving will be a different one, hopefully. \$\endgroup\$ Commented Jan 10, 2023 at 10:54
  • \$\begingroup\$ I don't see "the writing on the wall." The so-called "art" AI produces is a meaningless mish-mash of junk. The "stories" are worse - incoherent streams composed of elements from different stories and plots. The "answers" it provides are more of the same - a bit of the form and none of the substance. AI is a total misnomer. There's no intelligence involved (except in the people who write the code for such systems.) The "AI" itself understands nothing. It merely strings together bits and pieces that seem to match the words in the prompt given to it. \$\endgroup\$
    – JRE
    Commented Jan 19, 2023 at 10:33
  • 1
\$\begingroup\$ @JRE That sounds more like a critique of normal human output across all genres. It's telling that AI now has to be compared to the absolute best that humans can accomplish, and not to Mr. Ordinary. \$\endgroup\$
  • \$\begingroup\$ No. I am saying that "AI" is not even up to the standards of average. It spews crap with zero understanding. Even Joe Average has some understanding of what he's saying or writing. "AI" has zero understanding. It mashes words together at random in a statistical process. Ask it the same question in separate sessions and you'll get different answers - from the exact same text. \$\endgroup\$
    – JRE
    Commented Jan 19, 2023 at 11:44
  • 1
\$\begingroup\$ I am betting that it would pass the Turing Test with more than 90% of people, especially if it were connected to the Net. "But the TT is not a valid test! etc. etc." \$\endgroup\$
  • 1
    \$\begingroup\$ @DirkBruere in fact it does! that's why people keep asking it for actual advice and getting nonsense \$\endgroup\$ Commented Jan 20, 2023 at 16:25
-5
\$\begingroup\$

As I mentioned in my mini rant, if ChatGPT is harvesting info from the interwebs to create its answers, then posting those answers back onto the interwebs is akin to putting your effluent into your source of drinking water: so-called "poisoning the system". It also means the system can be manipulated. From what I can see, there is no feedback mechanism to tell the AI whether a result was good or bad.

I do see that ChatGPT has been injected with a large dose of 'wokeness'. Just an observation, so let's not light a fire.

For a laugh, ask it about the HAL 9000.

\$\endgroup\$
4
\$\begingroup\$ Of course there is (a) a way to like/dislike its answers right in the interface, and even to add clarifying comments, and (b) a strong incentive for its devs to improve its answers based on such feedback... \$\endgroup\$
    – tobalt
    Commented Jan 11, 2023 at 8:11
  • 5
    \$\begingroup\$ everything I don't like is 'wokeness' \$\endgroup\$ Commented Jan 13, 2023 at 15:55
  • \$\begingroup\$ Read about how it was trained -- initial training was with random text from the Internet ("Common Crawl" and Wikipedia, mostly), but the training was finished with custom-designed question & answer sessions with experts, and then generated responses were rated additionally by more experts. I don't think this is a serious concern. \$\endgroup\$
    – occipita
    Commented Jan 21, 2023 at 21:28
  • \$\begingroup\$ "feedback mechanism to tell the AI if the result was good or bad" — the conversation can continue. I asked about Common Crew & Cast in two films and it gave not only a great answer, but found the perfect person to be that common link. Unfortunately, it wasn't true. I pointed this out and it replied "I apologize for the error in my previous response. You are correct …". Even worse, Coprime Matroid is a seemingly convincing math proof of an untruth. \$\endgroup\$ Commented Mar 12, 2023 at 13:46
