7 events
Apr 17 at 2:39 comment added Paul Prescod @Fattie, it's well-known that GPTs can play chess games that have never been played before. GPTs have also been observed to infer the concept of an Othello or chess board purely from the text of moves. Not infer the state of the board. Infer the concept. Search engine? Architecturally and behaviourally, it has more in common with a human brain (connections, weights, signals) than it does with a search engine (indexes, keywords).
Apr 17 at 0:38 comment added Fattie @Peter-ReinstateMonica - cheers, it's pointless to debate such a massive topic in comments. The Go and chess enterprises (which are astonishing and amazing) have no connection at all to the "it really usually often sounds just like normal language" output of ChatGPT. As I said, that's an awesome "trick" (or whatever one wishes to call it). You must realize, though, that when you ask ChatGPT a (say) programming question, you literally get one of (say) my answers from SO (if it's an incredibly tough question, rarely discussed) or just any old answer from SO (if a common question)...
Apr 16 at 21:42 comment added Peter - Reinstate Monica @Fattie We all mostly regurgitate what we have read and heard; "truly new" ideas are few and far between. And calling mastery of all major languages a party trick is a textbook example of Ray Kurzweil's observation that all skills mastered by automata are immediately regarded as inferior and unimportant, and certainly not a sign of intelligence. I can only state, hopefully undisputed, that passing college exams in 20 languages and a handful of subjects, plus being a chess and Go master, would have been a sign of the highest intelligence until about the mid-20th century.
Apr 16 at 17:48 comment added user73763 Re: "the theoretical infrastructure has been in place for a very long time": not at all. The idea of scaling such models up dramatically has existed for a very long time. Nobody seriously suspected that it would produce results anywhere near as good as it now does. No theoretical infrastructure ever suggested it would.
Apr 16 at 8:57 comment added Peter - Reinstate Monica "It would be silly to think that [an artificial entity that gives exam-grade answers in any of the major languages to a vast spectrum of spoken questions across academic fields as a result of] the large scale investment of capital into these technologies signals any kind of philosophical evidence"!? Silliness is in the eye of the beholder, apparently.
Apr 16 at 7:40 comment added Mauro ALLEGRANZA Correct: the meaning of a word is its use in different language games. For W, meaning is not "mental" and it is not a platonic "world of thought" (contra Frege).
Apr 15 at 20:25 history answered transitionsynthesis CC BY-SA 4.0