Timeline for Does the success of AI (Large Language Models) support Wittgenstein's position that "meaning is use"?
Current License: CC BY-SA 4.0
21 events
when | what | by | license | comment
---|---|---|---|---
May 8 at 12:09 | answer added | Søren Harder | | timeline score: 1
May 3 at 15:55 | comment added | andrós | | i started a question about my confusion about this; apologies to anyone who finds that annoying
May 3 at 14:54 | comment added | andrós | | oh ok, i would take those examples as illustrative rather than supports, but ymmv
May 3 at 14:52 | comment added | ac15 | | @user66697 no no, i do mean 'support'
May 3 at 14:46 | comment added | andrós | | do you mean to ask whether LLMs refute the claim that meaning is use? i don't necessarily think that finding an example of meaning being use would support his claim, as there are plenty of examples of that in Wittgenstein
May 3 at 13:19 | answer added | Reinhard Oldenburg | | timeline score: 2
Apr 19 at 13:00 | comment added | Scott Rowe | | People are easy to convince using nice language. I think Socrates said something about that.
Apr 18 at 18:10 | comment added | nir | | Wittgenstein writes "But surely a machine cannot think!" (PI §360) as part of his dialectical style - it does not reflect his own opinion.
Apr 17 at 11:35 | vote accepted | ac15 | |
Apr 16 at 20:50 | comment added | Tim C | | Too short to be a proper answer, but LLMs are actually really bad at connecting language to non-linguistic objects. Absent either very careful training or a secondary technology for handling it, a generative LLM will answer, "Done, here you go" or similar to requests for some non-linguistic object (like ordering take-out and sending a receipt, or drawing a picture), while neither completing the task nor exhibiting any awareness that it hasn't completed the task.
Apr 16 at 19:27 | comment added | David Tonhofer | | Further reading: Can Machines Be in Language? - Large language models brought language to machines. Machines are not up to the challenge. by Peter J. Denning and B. Scot Rousse
Apr 16 at 19:25 | comment added | David Tonhofer | | In Seven Pillars for the Future of Artificial Intelligence by Cambrai, Wang, Ho: The main goal of most tech companies is not designing the building blocks of intelligence but simply creating products that existing and potential customers deem intelligent. In this context, instead of labeling it as 'artificial' intelligence, it may be more apt to characterize such research as 'pareidoliac' intelligence.
Apr 16 at 17:13 | answer added | andrós | | timeline score: 2
Apr 16 at 16:49 | answer added | Idiosyncratic Soul | | timeline score: 4
Apr 16 at 16:44 | answer added | LivesayEngineer | | timeline score: 2
Apr 16 at 16:14 | answer added | uhClem | | timeline score: 4
Apr 16 at 9:33 | answer added | Peter - Reinstate Monica | | timeline score: 10
Apr 16 at 3:52 | became hot network question | | |
Apr 15 at 23:08 | answer added | Jo Wehler | | timeline score: 1
Apr 15 at 20:25 | answer added | transitionsynthesis | | timeline score: 22
Apr 15 at 19:50 | asked | ac15 | CC BY-SA 4.0 |