
Timeline for Ban ChatGPT network-wide

Current License: CC BY-SA 4.0

30 events
when toggle format what by license comment
Apr 3, 2023 at 16:50 comment added Below the Radar @Xirema Nietzsche called Kant the "Great Chinese" because he is really hard to understand. A hack? Not really... His Critique of Pure Reason is considered the beginning of modern science... Anyway, your answer was interesting. That's why I made those comments. Cheers!
Apr 1, 2023 at 20:30 comment added Xirema @BelowtheRadar Kant was a hack.
Mar 31, 2023 at 20:35 comment added Below the Radar @Xirema maybe you could translate that article: philomag.com/articles/chatgpt-est-il-une-personne
Mar 31, 2023 at 20:26 comment added Below the Radar @Xirema haha thank you for that funny answer. What is lacking in the discussion is the philosophical part. You have to define what a creation is before claiming that only humans can create. If you read Kant you would understand that the human mind is pre-determined, and it's our ignorance of the causes that determine us that makes us believe we have free will and can create. For all I know, GPT-4 would be able to produce an answer just like yours if I asked it to. Does that mean your answer is not coming from an Original Thought because an AI can do the same?
Mar 31, 2023 at 20:10 comment added Xirema @BelowtheRadar As I said before, it's not really my purview to discuss how humans think or behave. It's just my perspective that people are strongly misrepresenting what these specific Neural Network type AIs are doing (and are capable of doing) and that's muddying the waters a great deal. A lot of the bigger philosophical questions you're alluding to are, in my opinion, putting the cart waaaaaay before the horse. Interesting topics for Sci-Fi writing, but not terribly germane to the practical reality of these software as they exist today.
Mar 30, 2023 at 18:22 comment added Below the Radar @Xirema When you create something, can you completely abstract away everything you have seen and known before? I don't think so. The difference with AIs is that they use a huge memory that can recall almost everything that has existed, in a very precise way, before creating something. Something AIs don't have, and that is underestimated in the creation process, is emotions.
Mar 30, 2023 at 17:55 comment added Below the Radar "the biggest reason that Neural Networks cannot produce original thought is because they're not trying to. Neural Networks are designed with 'emulation' of existing data as an end-goal." Can we say the same thing about the human neuronal system? Isn't it designed to emulate, in some way, the organization of the data our senses and tools can perceive in the universe, in nature? Maybe an intelligent machine always tries to emulate its creator?
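[Editor's aside: the "emulation of existing data as an end-goal" phrasing debated above can be made concrete. The sketch below is a toy illustration under simplifying assumptions — a one-parameter model fit by gradient descent on invented data, not any real training code — but the point it shows is general: the optimizer's only objective is to reproduce the existing data, and nothing in the objective rewards novelty.]

```python
# Toy sketch: training a model means adjusting parameters to minimize
# the gap between its output and existing data (squared error here).
# "Emulation" of the data is literally the quantity being optimized.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # invented examples (y = 2x)

w = 0.0            # single model parameter
lr = 0.05          # learning rate
for _ in range(200):
    # gradient of mean squared error between model output w*x and data y
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

# w converges toward 2.0 -- the value that best reproduces the data
```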
Mar 28, 2023 at 19:10 comment added Xirema @KarlKnechtel Well, more specifically, "not trying to" is colloquial synecdoche for "was not programmed to do". These neural networks cannot try to produce original thought. And I'm dubious that any Neural Network type AI will ever be capable of original thought. If we create an AI someday that is capable of original thought, it probably won't be a Neural Network type AI.
Mar 27, 2023 at 18:52 comment added Karl Knechtel "the biggest reason that Neural Networks cannot produce original thought is because they're not trying to." This makes it sound as if they could try to, which is rather frightening.
Feb 10, 2023 at 14:23 comment added KorvinStarmast @JonathanReez Skynet is hiring.
Dec 29, 2022 at 22:40 comment added wizzwizz4 @JonathanReez IQ tests don't test critical thinking. If even Nobel prize winners, who have made novel contributions to the body of human knowledge, can fail to apply critical thinking, how would "capacity to solve IQ puzzles" be indicative? (But I suppose I'm just making your point again. As a bonus, I haven't proof-read this comment, so I betcha I've made it a third time, too.) Regardless, experts tend to think about their subject matter, and those are the people who are answering questions on Stack Exchange.
Dec 29, 2022 at 20:12 comment added JonathanReez @wizzwizz4 humans sometimes think about things. How often and how successful depends on the subject matter and the IQ of said person. If humans actually applied critical thinking all the time, we'd be living in a completely different world.
Dec 29, 2022 at 20:02 comment added wizzwizz4 @JonathanReez Humans think about things. Transformers do not think about things. The precise mechanics of human learning aren't relevant, because this distinction is enough to explain a lot of the difference between human output and ChatGPT output. (Ask ChatGPT not to plagiarise, and it'll tell you it's not plagiarising while plagiarising just as much.)
Dec 22, 2022 at 17:10 comment added Xirema @JonathanReez Which is why we don't want to introduce a tool that will make the problem exponentially worse.
Dec 22, 2022 at 17:09 comment added Xirema @Kevin I wouldn't have had to add that section if I didn't keep getting comments from people insisting otherwise. There are already quite a few deleted comments on this answer from people doing that...
Dec 22, 2022 at 12:35 comment added JonathanReez @Kevin people rarely cite their sources on Stack Overflow, and many sites like Politics have a rampant lack of source attribution by humans.
Dec 22, 2022 at 9:55 comment added Kevin @JonathanReez: I think the point that is being made here (underneath all the "AIs are not like humans" chatter, which IMHO is frankly just irrelevant and distracting), is that humans are normally expected to cite their sources, and ChatGPT is currently unable to do so. If you cite your sources, then as a rule, that is generally understood to be enough to defeat a charge of plagiarism. (There may be copyright issues if the text is very similar to the original, but that's a separate issue as Xirema's comment acknowledges.)
Dec 15, 2022 at 15:42 history edited Xirema CC BY-SA 4.0
added 344 characters in body
Dec 13, 2022 at 20:21 history edited Xirema CC BY-SA 4.0
An essay about what Neural Networks actually do because I keep getting annoying comments insisting Neural Networks are "just like brains!!1!"
Dec 13, 2022 at 5:14 comment added JonathanReez The brain uses the biological equivalent of matrix multiplication to achieve the same result. My question is why it's fair for a human to read a few articles and then write their own "original" article on the subject but not fair for an AI to do the same thing. You seem to assume that the human brain does some "magical" process while in reality it "auto completes" text in a fashion not entirely dissimilar to ChatGPT.
Dec 13, 2022 at 5:07 comment added Xirema @JonathanReez This isn't a debate about how Humans learn, it's about how AI learns, and specifically, how Neural Networks operate. Mass-scale Matrix Multiplication is not analogous to human learning.
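[Editor's aside: for readers unfamiliar with the term, the "mass-scale Matrix Multiplication" mentioned above can be sketched in a few lines. This is an illustrative pure-Python toy with invented weights, not any framework's actual API: one neural-network "layer" is a matrix-vector product plus a bias, followed by a simple nonlinearity.]

```python
# Illustrative sketch of one neural-network "layer": a matrix-vector
# product followed by a ReLU nonlinearity. Real models chain many such
# layers over matrices with billions of entries -- arithmetic at scale.

def layer(weights, bias, inputs):
    """Return ReLU(W @ x + b), with W given as a list of rows."""
    out = []
    for row, b in zip(weights, bias):
        z = sum(w * x for w, x in zip(row, inputs)) + b
        out.append(max(0.0, z))  # ReLU: clamp negatives to zero
    return out

# Toy example: 2 inputs -> 3 outputs, with made-up weights
W = [[0.5, -1.0],
     [1.0,  0.0],
     [-0.5, 0.5]]
b = [0.0, 0.5, 0.25]
print(layer(W, b, [1.0, 2.0]))  # [0.0, 1.5, 0.75]
```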
Dec 13, 2022 at 1:46 comment added JonathanReez Um… humans don’t learn how to write text or code by reading the works of others? That’s certainly news to me. Humans need a significantly lower number of samples to learn something but the general principle is the same. There’s nothing magical about how our brain works, it’s just a neural network.
Dec 13, 2022 at 1:41 comment added Xirema @JonathanReez "AI writing text by learning from other texts isn’t any different from humans doing the exact same thing." This is not true. It's just flat-out completely false. Proselytizers for AI-generated content will sometimes make this claim because they want to capitalize on hype around AI and/or are jonesing for valuable Venture Capital funding, but the neural networks that power these algorithms are extremely unlike human thinking, and should not be treated as though they are performing original thought.
Dec 13, 2022 at 1:32 comment added JonathanReez We consider plagiarism to be bad because it allows credit to be stolen. No such concern exists for tools because they don’t require human input. If you can write amazing novels using ChatGPT, why should you give your tool any credit?
Dec 13, 2022 at 1:29 comment added JonathanReez AI writing text by learning from other texts isn’t any different from humans doing the exact same thing. You don’t have to attribute text you write to every single book on the subject that you’ve read. The tricky part is that prior to ~2021 any tool whatsoever was considered fair game to use to help you write text - spell checker, Google Translate, thesaurus, tools that help rephrase things, etc. But all of a sudden it’s claimed that this particular tool goes too far and no longer counts as a “tool”.
Dec 13, 2022 at 1:11 comment added Xirema @JonathanReez Beyond the question of whether any of the works used in the training algorithms for these AI are, in fact, subject to Copyright (they might or might not be; and the fact that we're not sure is kind of the root of the problem given that, again, these works are not properly cited), failure to properly cite those works would still constitute plagiarism regardless.
Dec 13, 2022 at 1:08 comment added Xirema @JonathanReez So Plagiarism and Copyright are orthogonal concepts. Works that are not subject to copyright can still be plagiarized (used without crediting the source), and you can commit copyright infringement without committing plagiarism (cited the source but used too much of the copyrighted work and violated Fair Use). Important Distinction.
Dec 12, 2022 at 20:33 comment added JonathanReez Does the term "plagiarism" make any sense for copying from a work not subject to copyright?
Dec 6, 2022 at 20:35 history edited This_is_NOT_a_forum CC BY-SA 4.0
Active reading [<https://en.wiktionary.org/wiki/de_facto#Adjective>]. Used more standard formatting (we have italics and bold on this platform). The en dash is only for numbers, at least according to The Chicago Manual of Style. Toned down the formatting (use the "side-by-side Markdown" view to compare).
Dec 6, 2022 at 18:32 history answered Xirema CC BY-SA 4.0