
A user on Reddit was told by the artificial intelligence ChatGPT that solipsism is true. Why did he say that?

Is there any evidence of solipsism that ChatGPT knows about?

Should ChatGPT be trusted or is it wrong?

  • WSJ, ChatGPT Needs Some Help With Math Assignments: "While the bot gets many basic arithmetic questions correct, it stumbles when those questions are written in natural language. For example, ask ChatGPT “if a banana weighs 0.5 lbs and I have 7 lbs of bananas and nine oranges, how many pieces of fruit do I have?” The bot’s quick reply: “You have 16 pieces of fruit, seven bananas and nine oranges”". I wouldn't pay much attention to what ChatGPT has to say on more complex matters.
    – Conifold
    Commented Mar 3, 2023 at 14:46
  • The first thing to do would be not to listen to ChatGPT - in general, it is about as likely to generate misinformation as accurate information, due to the way it operates. Also check out: bigmessowires.com/2023/03/01/…
    – Frank
    Commented Mar 3, 2023 at 14:55
  • If you think solipsism is true, who do you think is going to answer this question? Russell once said that a woman wrote to him to say that she was a solipsist and she was surprised there weren't more of them.
    – Bumble
    Commented Mar 3, 2023 at 15:03
  • Does this answer your question? Are there any philosophical arguments to disprove or weaken solipsism? Commented Mar 3, 2023 at 16:35
  • @eirene infinitely far away, most likely.
    – Scott Rowe
    Commented Mar 3, 2023 at 21:00
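As a sanity check on the WSJ example quoted in the comments above, the arithmetic is easy to work out explicitly (a quick sketch in Python; the variable names are my own, not from the article):

```python
# The WSJ banana question, computed step by step.
banana_weight = 0.5        # lbs per banana
total_banana_weight = 7.0  # lbs of bananas in total
oranges = 9

# 7 lbs of bananas at 0.5 lb each gives 14 bananas, not 7.
bananas = int(total_banana_weight / banana_weight)
fruit = bananas + oranges

print(bananas, fruit)  # 14 23 -- not ChatGPT's answer of 16
```

ChatGPT's answer of 16 comes from treating "7 lbs of bananas" as "7 bananas", i.e. pattern-matching the surface form of the question rather than doing the unit conversion.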

1 Answer


Unequivocally, given the way it currently operates, ChatGPT should not be trusted at the moment. It is about as likely to produce misinformation as accurate information. In fact, it currently has no sense of what is true and what is false, something I have experienced personally and which is surfacing more and more.

In my personal experience, I've seen ChatGPT produce:

  • Incorrect computer code, with, for example, a loop variable name changed midway through the loop in a nonsensical way
  • Incoherent mathematical proofs, in which the result to be proved was used in the body of the proof itself
  • Philosophical verbiage that looked good on the surface but was a barely logical collage (with a tinge of patronizing), including some subtle lapses in logical reasoning

and more ...
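As a hypothetical illustration of the first point (my own sketch, not ChatGPT's actual output), the loop-variable bug looks something like this:

```python
# Buggy pattern of the kind described above, shown as a comment so
# this file still runs: the loop variable `item` is suddenly
# referred to as `x` partway through the body, raising a NameError:
#
#     for item in values:
#         total += x   # `x` was never defined
#
# Corrected version:
def sum_values(values):
    total = 0
    for item in values:
        total += item  # loop variable name used consistently
    return total

print(sum_values([1, 2, 3]))  # 6
```

The striking thing is that the buggy version is locally plausible line by line, which is exactly why such output looks trustworthy at a glance.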

Here is a list of references about "solipsism" generated by ChatGPT just now: [image: ChatGPT-generated list of references on solipsism]

I was unable to find some of the books mentioned in that list on Amazon.

In the end, what ChatGPT does is only a collage of what it has seen in its training data, with no verification that the result is accurate, coherent, consistent, logical, meaningful or trustworthy. Check the article linked in the comments above; it's illuminating: ChatGPT will generate any scientific paper you want, complete with an extensive list of entirely fake references. That should give pause to anybody who wants to use ChatGPT as an authoritative source.

It's possible these problems will all be overcome in the future, but right now they are glaring issues that can't be ignored.

  • That is, when the ChatGPT artificial intelligence says that solipsism is true, we should not pay attention to it, because the statement is incorrect. But if artificial intelligence becomes more reliable in the future, will we have to believe in solipsism? I can't understand this. That is, will artificial intelligence be able to find some evidence of solipsism in the future, so that we will have to accept solipsism? Commented Mar 3, 2023 at 17:49
  • AI might "find some evidence", but currently AI is not really set up to do that, except perhaps in some domains where it is used to sift through data. That is certainly not the case for ChatGPT. Essentially, ChatGPT cannot find anything that has not been fed into it previously. ChatGPT is not making any discoveries; it is only regurgitating a patchwork of tidbits we have fed into it. These systems don't have any creativity or originality, except in the way they string together things we have fed into them. And even there, they just follow the most likely statistics, that's all.
    – Frank
    Commented Mar 3, 2023 at 17:52
  • Thank you, I see. Tell me, could artificial intelligence in the future convince us that solipsism is true? As far as I understand, it is impossible to find evidence for solipsism, so all an artificial intelligence could do is generate some kind of argument in favor of solipsism. In that case, should we listen to this argument and accept solipsism? Commented Mar 3, 2023 at 18:22
  • I think that whether there is evidence for solipsism or not in the future is independent of who or what finds that evidence. If any person or any AI system makes an argument, it will have the same value as any other argument; the fact that it was generated by AI would not confer any special authoritative value on it.
    – Frank
    Commented Mar 3, 2023 at 18:36
  • That is, there is no difference between an artificial intelligence creating an argument and some philosopher creating one? That is, they will have the same value? Commented Mar 3, 2023 at 18:59

