18

I was just reading a response to an Answer on this site, where the author of the Question said that the Answer sounded like a paste from ChatGPT.

I re-read the Answer and couldn't really tell if that might be true or not. I don't have enough experience with that sort of thing to tell. I have gone decades of my life where worded statements had to originate with humans, even if modified or plagiarized outright.

I suddenly felt a doubt that I could no longer trust the human authorship of written material, and I thought, basically, "I'm done here." Then I realized that I would have to be 'done' everywhere, and no longer trust anything except a face-to-face conversation.

Then I figured others might react the same way, and community would vanish. Is there a way to keep community when ChatGPT might originate statements in undetectable ways? Do we develop some sort of shibboleth or secret handshake? People might just become sceptical of all written material.

11
  • 1
    Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on Philosophy Meta, or in Philosophy Chat. Comments continuing discussion may be removed.
    – Geoffrey Thomas
    Commented Jul 25, 2023 at 9:10
  • 3
    From a philosophical perspective... AI seems to be a natural extension of the P-Zombie problem, no? How do you know that all online content hasn't always been generated by AI? Commented Jul 25, 2023 at 10:29
  • 2
    @ScottishTapWater because I saw how primitive computing was at the beginning. Oh, hi Eliza! How are you today?
    – Scott Rowe
    Commented Jul 25, 2023 at 10:41
  • 2
    Making statements less significant? That's what social media are for.
    – user4894
    Commented Jul 25, 2023 at 17:47
  • 1
    @DmitryGrigoryev If you read — wiki or anywhere — an article, and quote it at length and don't attribute it to its source, it's normally flagged as plagiarism and will usually cost an academic job, if not worse. I don't think there's a problem with clearly attributed AI-bot content. It's the unattributed stuff in large masses that has resulted in a big strike.
    – Rushi
    Commented Jul 26, 2023 at 12:18

9 Answers

8

If a few people pass off AI answers as their own, it is bound to make people more sceptical. But the quality of the answer is the main thing. There are, I think, two risks to SE:

  1. If the answers get to be really good as AI develops, people will go to the source.

  2. If the practice gets to be common, the meaning of the reputation points and badges will be undermined.

But there is more to SE than asking a question and getting an answer and certainly more than reputation and badges.

  1. There's very seldom just one answer to a philosophy question. The variety of answers is part of the point.

  2. There's a lot to be gained from seeing the questions that other people ask. (I suppose you could get an AI to ask questions as well, though it seems rather pointless - except as an exercise.)

  3. The comments and chat are at least as interesting as the actual answers. (If an AI started producing comments as well, asking and answering its own questions, it would be no more interesting than someone talking to themselves.)

  4. I don't see how the existing kinds of AI could produce a meaningful answer to a question that has not been discussed in its database.

1
7

For me, it does not change anything. I have been in online communities since the late '80s. There have been very few indeed where I had a meaningful relationship with actual humans (small local/regional ones in the early days, when real-world user meetings were a thing - or of course online communities which started out as real-world communities, like friends and family).

On sites like StackExchange, while over time I may meet the same folks over and over again, I have nothing like an actual human relationship. I know none of my fellow writers beyond what they write here - and I do not need to know them either. They could be bat-shit crazy for all I know; as long as what they write passes our voting mechanism I have no reason to believe or disbelieve them any more or less.

In more casual online communities, I stopped talking about private things ... decades ago. In the early days there was little regard for security or privacy, but these things have been different for a very long time now.

In social media, the phenomenon of large "fake" communities is well known (i.e., many people posting "facts" with some nefarious agenda). I see zero reason to trust any of them beyond their entertainment value.

Even well-meaning online communities breed an in-culture or echo chambers where the same "facts" are thrown around over and over again. I.e., you have members who learned most of their knowledge about some topic in the very community where they then teach that knowledge to newbies. This is all well and good, but it is not so different from a GPT, which is also trained on the same info and repeats stuff that has been fed into it. Humans can repeat wrong "facts" just as well as AI.

In every case, no matter whether the text I'm reading has really been written by humans or by an AI, I can never simply and blindly trust its veracity - I always have to think for myself, or do further research or experiments, depending on what it is I am reading, before I make important decisions based on it.

In communities that are primarily for entertainment, I also have zero issue being entertained by AI. Whether I view pictures drawn by humans on a Reddit sub, or by AI - if the pictures are interesting to me, I could not care less. (Yes, I am leaving all issues regarding Copyright etc. aside right now.) It will be all merit-based; so far, AI-created media seem mostly to have novelty value.

Aside from that, a GPT is a tool like many others; I am absolutely happy to use it at work or in private, mostly as an advanced search engine sparing me the flood of advertisements or (I'm working in IT) the exposure to awful "tutorial" sites on the first few result pages, long before the actual real information (i.e., reference documents etc.) comes up. Or as some inspiration (give me X reasons to do Y). I do not have to assume that there is an actual "intelligence" in the box, or any kind of personality.

4
  • SE is the only social media I use, or have used. When I looked at other things, they seemed not worth my time. Maybe I will have to create a place where intelligent people can interact? "If you want something done right, you have to do it yourself." Ok, folks, I'm not pleased.
    – Scott Rowe
    Commented Jul 24, 2023 at 16:03
  • 3
    @ScottRowe, well you're not wrong. I would not even call SE social media - it's not so much about socializing as about distributing knowledge in the form of Q&A. The actual social media (which today's users view as such) is a cesspool, as far as I'm concerned. Barely acceptable for folks who have a strong capability for filtering misinformation and resisting the strong addictive mechanisms employed by the sites, but a real problem for everybody else. In the old days, "forums" were what you're looking for, but as far as I can tell, few good ones are left.
    – AnoE
    Commented Jul 25, 2023 at 7:29
  • 1
    You say you're not pleased, @ScottRowe, and then accept the answer that says the quality of the answer is the main thing, which to me seems like sayin' the source doesn't matter.
    – Rushi
    Commented Jul 25, 2023 at 12:04
  • @Rushi Well, if you look at the part of the Answer explaining good features of SE, and see my Comment as to why I Accepted that Answer, it will probably be clear.
    – Scott Rowe
    Commented Jul 25, 2023 at 14:09
5

Excellent question. But, I think you're looking at it the wrong way.

If you view philosophy and language as tools that help us to model reality, then what you are after is the best model you can find in language, not a conversation with another person. Language models are tools that might be seen as analogous to arithmetical averages. If you take a corpus and feed it to a machine, and what it spits out is a weighted linguistic artifact, then you are indirectly having a conversation with people (as I don't think we have to worry about feedback loops yet). You are reading language produced by a fictional person who is the weighted average of many people. But our models of reality aren't built exclusively on what other people say; they are built on how our models evolve in the face of others' linguistic artifacts (testimony), what we perceive happening in the world (consciousness), our memories, and how our model of reason functions. If a language model produces a claim that challenges our belief, that should be taken just as seriously, because we are vetting our own model, not someone else's.

For instance, if a language model produces the claim that 'our models of reality may not be true or false but adequate or inadequate', and that challenges us to think about whether it makes sense to call an entire theory true or false (does truth-conditional semantics apply to theories in the same way it does to mere claims, and why or why not?), then we can improve our model regardless of whether the linguistic artifact was written by one person or by software using an algorithm to "sum up" the thoughts of many people. If we have an insight into our own thinking, what difference does it make how the claim or question was generated?

There are absolutely reasons to connect with people through conversations. I often lament how few people willingly engage in critical thinking and exegetic discourse (reality TV holds human attention in a way Kant does not), and I wish I had more relationships built around robust discourse of ideas. I think the agora was just as much about community as it was about debate, but if an LLM brings to light this deficiency in our routines, it is not the LLM but the deficiency that is the problem. Now, if we answer questions to help others, and it turns out those others are merely software systems, then that does undermine our motivations for contributing to a forum. But if our motivations for participating in a forum are to improve ourselves, then any source of inspiration for improvement should be cherished. Quite frankly, LLMs seem to provide more sophisticated text than people just starting out on the path of critical thinking anyway.

Could ChatGPT etcetera undermine community by making statements less significant for us?

So, yes, absolutely: if our motivations involve eusocial impulses like helping others, or forming bonds with those with whom we communicate, surely. It's a form of intellectual catfishing. But on a forum like this, where people are loath to go beyond mere postings and where bonds aren't actually formed, what difference does it make? I would be disappointed to find out that a pastor whose exhortations I am fond of relies on ChatGPT, but perhaps someone who wants to inspire and looks to cure a deficiency in his rhetorical prowess should be lauded. I would be disappointed if I had a pen pal, only to find out that the riveting conversation was a ruse to eventually defraud me of money; but here too, is the problem the language, or the intention of the person wielding the language model?

I would say that using a language model is a lot like how people present themselves visually to others. There are some who will go to great lengths to alter their appearance and conceal the alterations in a bid to get what they want, and others who will ignore such alterations completely. Is that any different from virtue signaling in conversation or using autocorrect? When you boil it down, what you are dealing with is a fundamentally unvarnished Wittgensteinian truth: language is a game, and you still have to figure out what game you want to play and why. An LLM is just another tool that can be used in the game, one that makes you think more critically. And thinking more critically is sometimes a good thing.

3
  • I'm here to improve myself by interacting with people about... Philosophy in this case. I spent a couple years at Buddhism SE, a while at Educators, occasionally other SEs. Take away the people and, you know, I grew up curled around encyclopedias and reference books. I prefer people.
    – Scott Rowe
    Commented Jul 24, 2023 at 15:59
  • one could manufacture all kinds of outlandish statements that "challenge our assumptions" yet lead nowhere productive. We must discern between meaningful philosophical skepticism and obfuscating sophistry.
    – user66933
    Commented Jul 24, 2023 at 16:36
  • 1
    @ScottRowe I do too. :D
    – J D
    Commented Jul 24, 2023 at 18:45
3
  • Tools like GPT are the culmination of decades of research spanning neuroscience, philosophy of mind and language, the mathematical theory of communication, and animal studies of learning, which are in some ways the most relevant to reinforcement learning as a paradigm.
  • Practically speaking, we have reached a realization of some part of the dream of the earliest AI researchers; in particular, some language models have significant knowledge about systems and programming domains. So we have developed a computer that can "teach people how it works", subject to many of the same distortions and noise problems that necessarily plague all communication channels; that doesn't mean they cannot be made effectively reliable, despite confabulation or noise on the wire. Error-correction is at the heart of these models, but the systems are also inherently creative in most of their present "primitive" configurations, where they represent something like the raw output of a language-processing subsystem of the brain: a preconscious generator of likely, semantically consistent completions.
  • There are some interesting and novel questions here from a theoretical point of view: not just this narrow paradox about the relativity of signification, but problems about the nature of the creative act and formal models of communication. Evaluating all of this philosophically is complex. I would invite you to consider the work of Derrida and Baudrillard in particular as possibly relevant to the more conceptual aspects of communication here, and to the transformation that information technology is working on our culture and the nature of meaning.

Put very directly: we don't know much about this yet, since interpretability studies of GPT are still nascent. It may be that these systems are inherently black boxes, but there are lots of practical things we can do to constrain network connectivity to be more coherent and interpretable, perhaps at some performance cost. The key thing I'm saying is that we need to keep an open mind about these systems until we understand a lot more. They represent a step-change in computer science but are still quite experimental and poorly understood at the level of theoretical interpretability.

3
  • 2
    ChatGPT born out of neuroscience and philosophy? That's a new one!
    – Stef
    Commented Jul 24, 2023 at 14:53
  • 1
    @Stef You could review The Alignment Problem which, despite the name, presents a broad intellectual history of reinforcement learning — and yes, the techniques and models have moved back and forth between neuroscience, animal studies, and the mathematical theory of communication. Happy to talk more about this, but it's really a separate problem — analyzing the trajectory and antecedents versus conceptualizing the present generation of language-model capabilities.
    – Joseph Weissman
    Commented Jul 24, 2023 at 15:31
  • It’s interesting to observe how much of Shannon’s Mathematical Theory of Communication is investigating the nature of language as a system of frequencies and redundancies, motivated throughout by intriguing examples that are suggestive of language models
    – Joseph Weissman
    Commented Jul 24, 2023 at 17:39
3

The problem, in me humble opinion, lies elsewhere ... we're barking up the wrong tree. I'm not so much worried about ChatGPT's prowess with words as about the rather disturbing ease with which philosophical discourse can be mimicked by the first bona fide prototype of an AI.

Wouldn't you be worried if, say, an ape could do everything you could ... and better!

Just curious, what's ChatGPT's IQ? Has anyone measured it? What would your guess be? Deep Blue, the first supercomputer that beat a Chess Grandmaster in a tournament, had an IQ of ___?

6
  • Whew, somebody speaking sense at last!! If an AI were better than humans at more and more, near literally everything humans can do, the world would surely be better, right? Maybe... but for someone other than humans, methinks... And everyone here seems to be celebrating this... If you love steak and inherit a well-endowed steer farm, it may be good news... For the steers?
    – Rushi
    Commented Jul 25, 2023 at 18:36
  • as all rich people know, a clever person is someone who gets what they want. LLMs have no desires they cannot autocomplete, so they are all geniuses.
    – user66760
    Commented Jul 25, 2023 at 20:17
  • @Rushi I was thinking about the Jeopardy tournament with Deep Think or whatever. But Jeopardy is still being played, right? People still play chess, run, etc. Photography didn't displace painting. An ape is probably already massively stronger and more coordinated than I could ever be. Dogs have a better sense of smell, cats are better hunters... What makes people people? Good to ponder on. Remember the ending of Oblivion?
    – Scott Rowe
    Commented Jul 25, 2023 at 22:18
  • 1
    Deep Blue had zero IQ. It could only have a chess rating. As for GPT-4, you can't measure its performance with human IQ tests, whether it would do well or badly on them. You should measure it against other LLMs.
    – user66933
    Commented Jul 26, 2023 at 12:49
  • 1
    😂 Intriguing to note. So high IQ creates 0 IQ that defeats high IQ. WTF?
    – Hudjefa
    Commented Jul 26, 2023 at 13:30
2

AI is an important and concerning invention. Once it has its foot in the door, you can tell that it doesn't say anything vague, but it may be a valid means of communication. Many philosophers have asked whether large language models are conscious and dangerous. In fact, an LLM cannot communicate unless prompted. I think that means it lacks Dasein.

10
  • 3
    They are prompted all the time. It's not about the LLM, it's about the people using the LLM.
    – user66933
    Commented Jul 24, 2023 at 9:30
  • i had a dream huh? how's your ex?
    – user66760
    Commented Jul 24, 2023 at 9:50
  • @doot_s "Milk and cookies kept you awake, eh, Sebastian?" Here's Johnny!!!
    – Scott Rowe
    Commented Jul 24, 2023 at 10:54
  • 1
    Dr. Eldon Tyrell had been working late into the night in his expansive and opulent office atop the Tyrell Corporation. Suddenly, he was startled by the sound of a soft knock on the door. It was J.F. Sebastian, looking weary and disheveled. "Milk and cookies kept you awake, eh, Sebastian?" Tyrell asked, his voice echoing in the large room. Before Sebastian could respond, a chilling voice cut through the silence from the shadows of the doorway. "Here's Johnny!" The voice was followed by the chilling sight of Jack Torrance, his maniacal grin illuminated by the low light, holding an axe.
    – user66933
    Commented Jul 24, 2023 at 11:39
  • 2
    What do you mean by the second sentence ("after it has its foot in the door ...")? Also, the issue that makes GPTs/LLMs a non-entity is not that they require prompts, but that there is clearly nothing in there that is remotely close to anything inspiring a concept of "dasein" (existence), consciousness, intelligence, etc. They are literally ("generative pre-trained transformer") just word generators, nothing more or less. Their success lies in how well they are programmed and trained to seem intelligent, but I don't think anyone can earnestly say that they have even an iota of intelligence.
    – AnoE
    Commented Jul 24, 2023 at 14:27
1

The prospect of AI like ChatGPT undermining human communication is an important concern. However, we must be cautious not to overstate the threat. As humans, we have a tendency towards what I call "techno-panic" when faced with new technologies. From the printing press to the internet, new inventions often spark alarm about the unraveling of society. Yet somehow, we adapt and life goes on.

No doubt, AI will continue advancing. One day, it may write seamlessly like a human. But human relationships run deeper than mere words on a page. Shared laughter, inside jokes, deep gazes of understanding - these are woven from lived experience that no AI can replicate.

Even as machine learning progresses, humans crave authenticity. We seek truth and meaning, not just clever turns of phrase. An AI may one day write beautiful poetry, but can it truly understand love lost or grief felt?

Of course, we must ensure technology uplifts our humanity. Guarding against deception will remain important. Ethics and wise regulation will guide us, as they long have.

No software, however advanced, can unwind the ancient social fabric of family, friends and community. Our bonds persist, however bytes may churn. This technology is built by humans, for human ends. With mindfulness, it need not divide us.

Chatbots come and go. But as long as we retain our compassion, creativity and reason, humanity's future remains brighter than any code. Our connections run deeper than any algorithm. This truth endures, world without end.

11
  • 2
    I've heard a lot of people say "AI" (LLMs) will destroy society; the truth is that LLMs are just highly advanced autocomplete. It is hard to destroy society with autocomplete. But... capitalists think they aren't. Capitalists will take the bad advice from LLMs and use it to destroy society. Commented Jul 24, 2023 at 10:46
  • Exactly. I don't feel like taking the time and effort to weed through the posts looking for humans. It is hard enough just walking down the street with them, but adding robots and those dog-things would make it overwhelming. Isn't there a No AI Allowed sign we could put up?
    – Scott Rowe
    Commented Jul 24, 2023 at 10:57
  • 7
    It is extremely disheartening to read through a long post before realizing it's well-written nonsense and a complete waste of my time. It's even worse when I'm reading something I'm not a subject expert on, where I might not realize it's well-written nonsense, since I may be misinformed by it. As chatbot answers become more and more prolific, sites like this become less and less worthwhile.
    – Chris
    Commented Jul 24, 2023 at 18:03
  • 1
    @LudwigV We will, but personally, I think it's more likely that the entire economy will cease to exist or radically change form, than that the elements which have control over it will be forced to relinquish their control. Commented Jul 25, 2023 at 5:44
  • 2
    The user @SergZ. has been flooding the whole network with answers spat out by chatbots. See the whole discussion here: math.meta.stackexchange.com/questions/35868/…
    – Amelian
    Commented Jul 25, 2023 at 17:10
1

Let's explore both the potential negative and positive scenarios.

Negative Scenarios

  1. Loss of Authenticity: As AI language models like ChatGPT become more sophisticated, it could become increasingly difficult to distinguish between human-generated text and AI-generated text. This could lead to a situation where people lose trust in digital communication, as they cannot be certain of the authorship of any given piece of text. This could potentially undermine online communities and digital communication more broadly.

  2. Manipulation and Misinformation: AI-generated text could be exploited by bad actors to spread misinformation, propaganda, or to conduct social engineering attacks. If AI-generated text becomes ubiquitous and indistinguishable from human text, it could exacerbate these issues.

  3. Devaluation of Human Creativity: If AI can mimic human creativity effectively, it might lead to a devaluation of human-created content. There could be an erosion of the significance of human expression if it can be mimicked by machines.

Positive Scenarios

  1. Enhanced Communication: AI language models like ChatGPT can assist individuals in communication, helping those with writing difficulties to express themselves more clearly and effectively. This could lead to more inclusive online communities where more people are able to participate in conversations.

  2. Content Moderation: AI can be used to moderate online content and protect communities from harmful content, hate speech, and misinformation. If AI-generated text becomes ubiquitous, it might also become better at identifying harmful or misleading AI-generated content.

  3. Education and Information Availability: AI can be used to provide information, answer questions, and educate people on a wide variety of topics. This could make knowledge more accessible and foster online communities centered around learning and education.

  4. Verification Systems: In response to the challenges of AI-generated text, new verification systems could be developed to confirm human authorship. For instance, CAPTCHAs evolved as a way to confirm that a user is human. Similarly, new technologies could emerge to ensure the authenticity of human-written text.

So it will largely depend on how we choose to use and regulate these technologies. They are tools, not aliens who came to devour our brains.

1
  • Nice answer. I now have an uneasy feeling that it came from text generation (your point N.1), but alternately, even if true, maybe it was used by you to create a more readable answer of your own (your point P.1). I guess it was easier when I could be sure whom (and what) I was dealing with, but c'est la vie. I'll stick around and see how muddy the water gets. (When a chatbot can outdo me in weird jokes, crazy puns and strangely relevant quotes from old movies, I'll push off.)
    – Scott Rowe
    Commented Jul 26, 2023 at 10:33
1

GenAI is a threat, because language is compression.

It's valid to treat the exact same statement differently when it is made by a middle-schooler and by a seasoned professional, because one cannot possibly have the same substantive intent as the other - i.e. the statements decode to different things. Obviously this depends on the statement; there's nothing extra to decode in "I like chocolate".

Likewise with ChatGPT etc. ChatGPT can reason, and has knowledge, but both are very poor. What matters is the reasoning - it's not just lacking, it's "noisy", hence self-inconsistent. Humans have a much greater source of inconsistency, but of a fundamentally different kind - emotion - which is better or worse than GenAI's depending on the subject.

The extent of descriptors ("a great ill", "a bit dishonest") is, to my mind, the most pertinent source of difference in interpretation when reading GenAI content. Every adjective is an evaluation, and human evaluation is a combination of pure reason, emotional reason, and memory - memory formed to a great extent by experience.

It is, so far, a small issue or a non-issue for subject experts. But what if one is unfamiliar with the subject? We're forced to take the speaker's word for some things, and this is far less credible with GenAI. In the future, the issues will grow more nuanced. Yet, given enough time, they're bound to disappear, and GenAI reasoning will be superior to humans' in nearly absolute terms. Any remaining shortcomings might stem, for yet-unknown reasons, from the supremacy of analog computation - e.g. human-like consciousness may be impossible digitally. Brain-computer interfacing or genetic engineering could also turn the tides.

Of course, GenAI is also very helpful, as other answers describe, and more.

5
  • Right. I have an analogy: to steal my car, someone has to be there, where the car is. To steal my banking info, or even just the money, they could be anywhere, because of computer technology. AI is "a bigger problem" than normal human indiscretions because it is incorporeal, inscrutable, instant, able to leap tall buildings in a single bound... In other words, it can do things humans cannot and is invulnerable in ways humans are not. It has basically the attributes of God, but can be wielded by people. Or, it could just do bad things on its own, accidentally or "on purpose". We are creating golems.
    – Scott Rowe
    Commented Jul 26, 2023 at 15:13
  • 1
    That it's convincing to many is indeed the much more serious "threat" that I've not mentioned since the question isn't about it. And that's without autonomy. Commented Jul 26, 2023 at 17:59
  • As it’s currently written, your answer is unclear. Please edit to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers in the help center.
    – Community Bot
    Commented Jul 26, 2023 at 18:41
  • People can do 'gullible' all by themselves! Yeah, I didn't emphasize the convincing aspect, because if I feel I can't trust something overall (like a site, or a technology), I just ditch it. Thus the reference to "human flight" in one of my early Comments. "There goes the neighborhood"
    – Scott Rowe
    Commented Jul 26, 2023 at 19:59
  • @Community Please specify. Commented Jul 26, 2023 at 20:39
