
  • 3
    $\begingroup$ This isn't entirely true. ChatGPT can generate correct C code that compiles and runs, based on my questions. If it makes a mistake, I merely point out that a mistake was made by saying something abstract like "But then I have to know the type ahead of time," and it finds the mistake on its own and offers a correction that will compile. It makes deductions that are far beyond "make plausible English sentences." $\endgroup$ Commented Jan 10, 2023 at 7:19
  • 5
    $\begingroup$ @SO_fix_the_vote_sorting_bug I believe that's because to ChatGPT, a C language question is still just a "language learning" question. It's still a souped-up phone prediction algorithm, just one that has seen a lot of C code and knows "what comes after this". Incidentally, ChatGPT is banned from Stack Overflow precisely because its answers often look superficially correct but can be fundamentally wrong, because it doesn't actually have an understanding. Or a compiler to check itself with. $\endgroup$
    – JamieB
    Commented Mar 21, 2023 at 18:59
  • $\begingroup$ What is the 5/5/5 100/100/100 error? $\endgroup$ Commented Jun 7, 2023 at 17:31
  • 1
    $\begingroup$ @SO_fix_the_vote_sorting_bug In my experience, this is not true in general. Language models generate plausible code that is sometimes correct, sometimes wrong in subtle ways, and sometimes ridiculously wrong. I've had ChatGPT make blatant errors, and when corrected, apologize and confidently give a new answer, which is also totally wrong. $\endgroup$
    – LarsH
    Commented Jun 19, 2023 at 10:49
    $\begingroup$ Generally this is a good answer, but not "It's not so much that chatGPT is intelligent, but more that we are less so than we think." Fitting answers suggest intelligence to humans because the two are strongly correlated in human speech and writing. We've not had experience before with entities that could produce fitting output without intelligence or understanding. $\endgroup$
    – LarsH
    Commented Jun 19, 2023 at 10:52