
  • 2
    "It would be silly to think that [an artificial entity that gives exam-grade answers in any of the major languages to a vast spectrum of spoken questions across academic fields as a result of] the large scale investment of capital into these technologies signals any kind of philosophical evidence"!? Silliness is in the eye of the beholder, apparently. Commented Apr 16 at 8:57
  • 7
    Re: "the theoretical infrastructure has been in place for a very long time": not at all. The idea of scaling such models up dramatically has existed for a very long time, but nobody seriously suspected it would produce results anywhere near as good as it now does. No theoretical infrastructure ever suggested it would.
    – user73763
    Commented Apr 16 at 17:48
  • 4
    @Fattie We all mostly regurgitate what we have read and heard; "truly new" ideas are few and far between. And calling mastery of all major languages a party trick is a textbook example of Ray Kurzweil's observation that any skill mastered by automata is immediately regarded as inferior, unimportant, and certainly not a sign of intelligence. I can only state, hopefully undisputed, that passing college exams in 20 languages and a handful of subjects, plus being a chess and Go master, would have been a sign of the highest intelligence until about the mid-20th century. Commented Apr 16 at 21:42
  • 2
    @Peter-ReinstateMonica - cheers, it's pointless to debate such a massive topic in comments. The Go and chess enterprises (which are astonishing and amazing) have no connection at all to the "it really usually often sounds just like a normal language" output of ChatGPT. As I said, that's an awesome "trick" (or whatever one wishes to call it). You must realize, though, that when you ask ChatGPT a (say) programming question, you literally get one of (say) my answers from SO (if it's an incredibly tough question, rarely discussed) or just any old answer from SO (if a common question)...
    – Fattie
    Commented Apr 17 at 0:38
  • 1
    @Fattie, It's well-known that GPTs can play chess games that have never been played before. GPTs have also been observed to infer the concept of an Othello or chess board purely from the text of moves. Not infer the state of the board. Infer the concept. Search engine? Architecturally, and behaviourally, it has more in common with a human brain (connections, weights, signals) than it does with a search engine (indexes, keywords). Commented Apr 17 at 2:39
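
The architectural contrast drawn in the last comment (weighted connections transforming a signal, versus an index returning stored documents) can be made concrete with a toy sketch. Both functions and their data below are hypothetical illustrations, not how GPT or any real search engine is implemented:

```python
def search_engine_lookup(index, query_terms):
    """Inverted index: match keywords, return stored documents verbatim."""
    hits = set()
    for term in query_terms:
        hits |= set(index.get(term, []))
    return sorted(hits)

def neural_forward(weights, signal):
    """Weighted connections: the input signal is transformed by the weights
    into a new vector; nothing stored is retrieved or returned verbatim."""
    return [sum(w * s for w, s in zip(row, signal)) for row in weights]

# A keyword lookup can only ever hand back documents it has stored.
index = {"chess": ["doc1"], "board": ["doc1", "doc2"]}
print(search_engine_lookup(index, ["chess", "board"]))  # ['doc1', 'doc2']

# A weighted layer produces an output that exists nowhere in its parameters.
weights = [[0.5, -1.0], [2.0, 0.25]]
print(neural_forward(weights, [1.0, 2.0]))  # [-1.5, 2.5]
```

The point of the contrast: the first function can only return what was put into it, while the second computes a novel output from its inputs, which is the behavioural difference the comment appeals to when it says GPTs can play games that have never been played before.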