19 events
Apr 19 at 13:43 comment added Peter - Reinstate Monica @ScottRowe Not quite sure what you mean but: Yes, something big is about to happen. It's slow motion as always, until it isn't. May be a crash, may be a tumor ;-).
Apr 19 at 13:18 comment added Scott Rowe As if we needed more imaginary icebergs to crash into. It's a regular crash blossoming!
Apr 18 at 12:47 comment added Araucaria - Not here any more. LLMs are random in the important sense, and just don't (usually) deviate from the patterns normally found. They're just clouds designed to look like faces for humans.
Apr 18 at 11:28 comment added Peter - Reinstate Monica @Araucaria-Nothereanymore. Cloud shapes are random; texts produced by LLMs are not. Words in those texts are where they are because the LLM is a model of their use and is able to reproduce that use.
Apr 18 at 11:19 comment added Araucaria - Not here any more. ... to derive relevance from it. It doesn't mean that the material had meaning or that the items in it were 'used' in Wittgenstein's sense. If you look at the clouds, you'll see faces or animals. Other observers will see them too. It does not mean that the sky was painting a picture for you or that the clouds were communicating to you.
Apr 18 at 11:17 comment added Araucaria - Not here any more. @Peter-ReinstateMonica There is no separation between use (in Wittgenstein's, or a linguist's sense) and intent here. What we have is a generation of symbols which look as if they have been generated by an intentional being. But the so-called 'sense' that they have for a reader is just an illusion. Human cognition is geared for relevance (an ounce of information per watt of effort spent finding, deciphering and inferring it), and nowhere is this more the case than with linguistic material. You can give a human any kind of ungrammatical paralinguistic material and their linguistic apparatus will try ...
Apr 18 at 11:02 comment added Peter - Reinstate Monica @Araucaria-Nothereanymore. The point I'm trying to get across is that we see here, for the first time, facilitated by the mediation of LLMs, the separation of intent and actual use, and it's illuminating. (Perhaps obviously, I am a big fan of the Turing test. An intelligible sentence is an intelligible sentence, independent of its authorship (human or LLM). The words in it are words, and the function they have in the sentence is their use, human author or not.)
Apr 18 at 10:53 comment added Araucaria - Not here any more. @Peter-ReinstateMonica That's classic equivocation! (use is practical, therefore practical = use). "Use" is not the appearance of a word in a generated pattern; it's the act of using the word with a particular intended denotation (which may be ad hoc), with the wider intention of changing a listener's stock of assumptions in a particular way.
Apr 17 at 13:46 comment added Peter - Reinstate Monica @Araucaria Well, "usage" (as opposed to "meaning" or "intent") is clearly a practical, physical manifestation. That's a neat trick from Wittgenstein because it relieves us from "metaphysical" discussions (define intent!) and gives us something concrete to talk about: How is a word actually used? And this use is clearly its appearance in written or spoken, actual texts. This is not a redefinition of "use".
Apr 17 at 13:31 comment added Araucaria - Not here any more. @Peter-ReinstateMonica That's just redefining usage as string context, but that's not what Wittgenstein was talking about and it's not what usage or meaning have ever meant. The point about games is that 'game' means whatever users intend 'games' to refer to. But LLMs don't use words to refer to things and have no intentions.
Apr 17 at 13:14 comment added Peter - Reinstate Monica @Araucaria With respect to the one concrete criticism: Yes, it is true that LLMs do not "intend" anything. They can't. What happens here is something linguistically very interesting: We can now observe a separation of intent and domain knowledge, which only people or other sentient beings have, on the one hand, from usage on the other. The LLMs have slurped up all the usage (largely: context), and it turns out that they can (without doubt) produce "meaningful" (intelligible, informative) texts with that context knowledge alone. The human intent and understanding -- meaning! -- is preserved in the context!
Apr 17 at 13:07 comment added Peter - Reinstate Monica @Araucaria-Nothereanymore. With all due respect, you and I also "just" produce strings which look grammatical ;-). I am fairly sure that I have a basic understanding of what language is, which is why I am fairly sure that I am not completely misunderstanding that. I also have a somewhat vaguer but, on the other hand, professionally grounded idea of what an LLM does, which is again why I'm fairly sure that I'm not completely misunderstanding that. I'm least sure about Wittgenstein, but I did read excerpts of the Investigations and believed I understood them. If you could be more specific?
Apr 17 at 11:35 vote accept ac15
Apr 17 at 11:29 comment added Araucaria - Not here any more. This is a complete misunderstanding of what LLMs do and a complete misunderstanding of what language is, and a complete misunderstanding of what Wittgenstein was talking about when he talked about 'the meaning of a word is its use in the language'. Wittgenstein was particularly talking about the referents of words. Because LLMs just produce strings which look grammatical, their strings have no referents. They never intend any particular word to refer to any particular thing, and the individual words cannot be understood to 'stand for' anything in particular on the LLM's part.
Apr 17 at 2:55 comment added Paul Prescod @Peter-ReinstateMonica, a relatively small chess model can play entirely novel games with an error rate of 0.2%, which would be in the same realm as humans. Humans also make errors.
Apr 16 at 21:35 comment added Peter - Reinstate Monica @Yakk (Current) LLMs are notoriously bad at things with clear rules like math and board games. For example, they play impossible moves in chess because, yes, they have context for "e4", but they do not "know" the game or its rules, which cannot easily be deduced from context. "e4" has a very specific meaning in a context common to both players (and all chess players) which is not conveyed by mere proximity to other moves. The chess rules are largely external to game notations.
Apr 16 at 19:20 comment added Yakk "no facility for knowing in the machine, beyond word contexts." - so, you can train an LLM on transcripts of a board game like reversi. And then you can play reversi with that LLM. What's more, we can read that LLM's mind and determine what it thinks the board looks like; we can go further, perform mind surgery and change the state of the board in the LLM's mind, and have it continue to play as if the board looked different. We can even write an impossible board state into its mind, and watch as it plays consistently with it. How is this not some evidence of "facility for knowing"?
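The probe-and-intervene experiment described in the comment above can be pictured roughly as follows. This is a minimal, hypothetical PyTorch sketch, not the actual published setup: ToyGameModel, the probe, the square indexing, and the edit strength are illustrative stand-ins, and nothing is trained here; the sketch only shows the data flow of decoding a board state from hidden activations and then patching those activations.

```python
import torch
import torch.nn as nn

class ToyGameModel(nn.Module):
    """Stand-in for a small transformer trained on board-game move transcripts."""
    def __init__(self, vocab=64, d_model=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab)     # next-move logits

    def forward(self, moves, edit=None):
        h = self.encoder(self.embed(moves))       # one hidden state per move token
        if edit is not None:
            h = edit(h)                           # "mind surgery": patch activations before the head
        return self.head(h), h

# Linear probe (trained separately in the real experiments, random here):
# for each of 64 squares, read {empty, mine, yours} off the final hidden state.
probe = nn.Linear(128, 64 * 3)

model = ToyGameModel()
moves = torch.randint(0, 64, (1, 10))             # a toy transcript of 10 moves

logits, hidden = model(moves)
board = probe(hidden[:, -1]).view(1, 64, 3).argmax(-1)
print(board)                                      # decoded board state, i.e. what the probe reads out

# Intervention: nudge the hidden states along the probe direction for
# "square 27 is mine", then let the model predict moves from the edited state.
direction = probe.weight[27 * 3 + 1]              # (d_model,) probe row for that square/class
edited_logits, _ = model(moves, edit=lambda h: h + 5.0 * direction)
```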
Apr 16 at 16:33 comment added Idiosyncratic Soul +1 for "Speaker and listener must share part of that context lest they could not communicate". AI like ChatGPT includes algorithms and statistical analysis to determine context based on the question asked.
Apr 16 at 9:33 history answered Peter - Reinstate Monica CC BY-SA 4.0