
Sometimes, when I see answers like this from large language models, it makes me feel disgusted:

Me: Does Voyager 1 have enough velocity to escape the solar system without using Jupiter's gravity assist?

claude: "

  • Escape velocity at 1 AU from the Sun is 151,560 km/h.
  • Voyager 1's pre-Jupiter flyby velocity was 169,200 km/h.

Therefore, Voyager 1 absolutely did NOT have enough velocity at 1 AU to escape the solar system without the Jupiter gravity assist. Its velocity of 169,200 km/h at 1 AU was below the solar escape velocity of 151,560 km/h."


An obviously wrong answer from Claude 2: by its own figures, 169,200 km/h is greater than 151,560 km/h, so Voyager 1 did have enough velocity at 1 AU, and the conclusion contradicts the very numbers the model lists. A quick check of the arithmetic is sketched below.
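
For context, here is a minimal sanity check of the figures (a sketch in Python; the physical constants are standard approximate values, and the two speeds are simply the ones quoted in the answer above):

    import math

    # Standard physical constants (approximate values)
    G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
    M_SUN = 1.989e30   # mass of the Sun, kg
    AU = 1.496e11      # 1 astronomical unit, m

    # Solar escape velocity at 1 AU: v_esc = sqrt(2 * G * M / r)
    v_esc = math.sqrt(2 * G * M_SUN / AU) * 3.6  # m/s -> km/h
    print(f"Escape velocity at 1 AU: {v_esc:,.0f} km/h")  # about 151,700 km/h

    # The two figures quoted in Claude's own answer
    v_esc_quoted = 151_560  # km/h
    v_voyager = 169_200     # km/h, pre-Jupiter-flyby speed per the quote
    print(v_voyager > v_esc_quoted)  # True: the probe exceeded escape velocity

So the figures themselves are roughly right; it is only the final comparison that the model inverted.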

So my question is: why can this large language model sometimes solve complex problems but fail to answer basic ones? (The irony is that it became a paid product before it was ready.)

  • Your post just reads like commentary; I don't see an actual question. What is your question?
    – Dennis Soemers
    Commented Oct 6, 2023 at 13:24
  • @DennisSoemers I mean: why can this large language model sometimes solve complex problems but fail to answer basic ones? (The irony is that it became a paid product before it was ready.)
    – Mr Saw
    Commented Oct 6, 2023 at 13:31
  • Please edit the title and body of your post so that the title actually reads like a question and the body makes it clear and explicit what you are asking.
    – Dennis Soemers
    Commented Oct 6, 2023 at 13:37
  • @DennisSoemers OK, it's done. But if you have an answer to my question, please help; I'm confused!
    – Mr Saw
    Commented Oct 6, 2023 at 13:44
  • 1
    $\begingroup$ My short answer would be that LLMs can never be relied upon to generate factually correct text. That's not what they are explicitly trained for. Especially for things like math they can't be relied upon, because they don't have any mechanisms for symbolic manipulation. They view everything as strings of tokens, and have been trained to produce sequences of tokens that are statistically likely, given the behaviour of humanity all over the internet. They have 0 understanding of text's meaning. But LLMs are not my main area of expertise, probably someone else should write a proper, full answer. $\endgroup$
    – Dennis Soemers
    Commented Oct 6, 2023 at 13:55
