
  • Another piece using the parrot analogy: On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? (ACM, 2021-03-01) Commented Dec 12, 2022 at 16:22
  • @Machavity I may be asking a lot, but can you provide a reference with a concrete example of a question that causes ChatGPT to produce code that's vulnerable to a SQL injection attack, or to any other "standard" code vulnerability for that matter? The reason is that I want to put together a blog post showing such an answer side by side with ChatGPT's answer to "Why is SQL injection bad?" That would be an extremely visible and comprehensible example of ChatGPT's complete inability to understand and reason about what it's saying.
    – dgnuff
    Commented Apr 14 at 6:11
  • This problem is not limited to AI-generated answers. Answers get additional upvotes just because they already have many of them; see for example stackoverflow.com/a/70517348/3027266 (scroll down in the comments). Commented Jun 19 at 7:19
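For readers unfamiliar with the vulnerability class discussed above, here is a minimal sketch of what SQL-injection-prone versus parameterized code looks like. The schema, table contents, and function names are hypothetical, chosen purely for illustration; this is the generic pattern, not code from any cited ChatGPT answer.

```python
import sqlite3

# Hypothetical in-memory database for demonstration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

def lookup_unsafe(name):
    # Vulnerable: string interpolation lets attacker-controlled input
    # be parsed as SQL, not treated as data.
    return conn.execute(
        f"SELECT secret FROM users WHERE name = '{name}'"
    ).fetchall()

def lookup_safe(name):
    # Parameterized query: the input is bound as a value and can
    # never change the structure of the SQL statement.
    return conn.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "alice' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every row in the table
print(lookup_safe(payload))    # matches no row
```

Run against the classic `' OR '1'='1` payload, the interpolated query returns every secret in the table, while the parameterized query returns nothing, which is exactly the side-by-side contrast the comment above proposes to write up.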