
With a featured post on Main Meta allowing sites to request warning banners about AI-generated content, I was wondering whether Literature would implement that feature. However, I noticed a lack of posts discussing whether we DO or DO NOT allow AI content in the first place...

So, what are our site's thoughts on allowing AI-generated content?

  • The goal of this post is to ask, if we are against AI-generated content, whether we then want to enable the banner stating our policy.
    – Skooba
    Commented Jan 16 at 15:06
  • Also worth pointing out: OpenAI's terms of use state that users may not "Represent that Output was human-generated when it was not." So even if we decided to allow AI-generated content (which is currently not likely), we could still say that AI-generated content must never be passed off as human-generated. That looks like a sensible rule, even though detection may sometimes be difficult.
    – Tsundoku
    Commented Jan 30 at 11:55

3 Answers


(Discussing answers only here.) I have seen one AI-generated answer on the site:

  • "What kinds of drugs are mentioned and/or described in de Sade's stories?" The answer (containing output from ChatGPT) was deleted, so only high-rep users can see it, but I spent a bit of time looking at the answer and can report on the experience. ChatGPT was able to produce a plausible-sounding answer to the question, in that it lists some of de Sade's works and asserts that drugs are mentioned in them, with plausible-sounding references including section numbers and page numbers. This is just what you'd expect an answer to look like, but on inspection nearly all of it was wrong.

    Here I'll just quote the first paragraph of the ChatGPT material in the answer and list the mistakes, but the other paragraphs are of a similar quality.

    "Justine" (1791): In this novel, opium is mentioned in several instances. For example, in Part Two, Section One (pages 93-94), the character Clairwil consumes opium to alleviate her suffering. Opium is also referenced in Part Two, Section Four (page 137), where another character experiences the effects of opium-induced dreams.

    (i) ChatGPT gives page numbers, but any human would know that when a work has multiple editions or translations, a page number by itself is useless, because you don't know which edition it refers to.
    (ii) ChatGPT refers to "Part Two, Section One" and "Part Two, Section Four", but Justine is divided only into tomes ("books" or "volumes", but sometimes "parts" in English translation), and not further divided into sections, so these references are bogus.
    (iii) ChatGPT says that Justine has a character called Clairwil, but this character is from Juliette and does not appear in Justine.
    (iv) ChatGPT says that opium is mentioned several times in Justine, but the word does not appear once in the text (a check that is easy to automate; see the sketch below).
    (v) Note the vague description "another character": if this had been a real reference, it would have been just as easy to name the character.
    (vi) The actual answer to the question, easily discernible from the text of Justine, but not mentioned by ChatGPT, is "alcohol and caffeine", for example, "Il but douze bouteilles de vin, quatre de Bourgogne, en commençant, quatre de Champagne au rôti; le Tokai, le Mulseau, l'Hermitage et le Madère furent avalés au fruit. Il termina par deux bouteilles de liqueurs des Isles, et dix tasses de café." (1792 edition, tome 2, p. 96).

So in this case the ChatGPT answer had negative value: its claims were mostly incorrect, but presented in a superficially plausible way so that substantial effort was required to determine that they were false.
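
Checks like (iv), incidentally, are mechanical: anyone with a plain-text edition can count the occurrences themselves. Here is a minimal Python sketch; the filename justine.txt is a hypothetical placeholder for whatever plain-text edition you have on hand:

    # Minimal sketch: count whole-word occurrences of "opium" in a
    # plain-text edition of Justine ("justine.txt" is a placeholder name).
    import re

    with open("justine.txt", encoding="utf-8") as f:
        text = f.read().lower()

    # \b matches word boundaries, so "opium" inside a longer word won't count.
    occurrences = re.findall(r"\bopium\b", text)
    print(f"'opium' occurs {len(occurrences)} time(s)")

A count of zero settles claim (iv) in seconds, which is rather less effort than chasing bogus section and page numbers.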

As long as this remains the state of the art of language models, my opinion is that AI-generated answers to literary questions are a waste of everyone's time, and we should reject them by policy.

However, if we adopt this policy, that does not necessarily mean that we need to also turn on the warning banner immediately. According to the meta post, the banner only applies to answers, and we are not currently receiving many AI-generated answers, so existing moderation mechanisms suffice for now. We can always turn on the banner at a later date if the situation changes.


(Discussing questions only here.) We have seen a few questions come in that consist either entirely or largely of "ChatGPT says X, is ChatGPT correct?"

There was another such example recently, but I don't remember it precisely.

It does not help askers to post questions that discuss at great length what ChatGPT has to say about the topic at hand. Instead of getting answers, they get downvotes. That is unwelcoming.

Nor does it help regular contributors to have to reiterate each time that the asker needs to revise the question to remove references to ChatGPT, as they are a distraction.

So I propose that we have a custom close reason for questions. It could be worded something like:

This question relies on the output of an AI large language model (LLM) such as ChatGPT, Google Bard, or Bing Chat. These tools are designed to emit plausible language, not to provide factual information. Asking for verification or analysis of the output of LLMs is off-topic for Literature Stack Exchange. If you can ask your question without the LLM content, please edit it to do so. The community can then vote to reopen the question.

The custom close reason will carry more weight than the comments of individuals. I had a long exchange, frustrating to both parties, with the asker of the second question as he kept interpreting my statements about his question as value-judgments rather than statements of policy. To be fair, in the absence of a stated policy, they were my opinions. Adopting a custom close reason will make clear that this is not one individual's opinion, but our policy.

We might also want to update "What types of questions should I avoid asking?" in the help center to include some variation of this close reason.

  • I think we should add the help center thing first (where it would still be a stated policy), and then create a close reason if there are belligerent OPs or if the problem persists. I agree that such questions are problematic, but I don't think they're frequent enough that they need to have their own close reason.
    – CDR
    Commented Jan 28 at 23:17
  • I'm concerned that such a close reason might be applied overzealously. In the first example you linked, the question didn't actually depend on ChatGPT at all; the ChatGPT output was included as an optional parenthetical, along the lines of "I asked ChatGPT and it couldn't answer, so I came to SE instead". But the question still got downvotes, and even a snarky comment after the ChatGPT part was edited out. That felt over-the-top to me: it would have been easy to remove, or even ignore, the part of the post that mentioned ChatGPT, as it didn't affect the actual question itself.
    – Rand al'Thor Mod
    Commented Jan 29 at 8:12
  • @Randal'Thor And yet it was good that the ChatGPT reference was entirely removed from that question, as it did not add anything and only made the question look ungrounded. Adding nonsensical references to your question simply makes your entire question moot. Closing it as such would not have been overzealous but appropriate. Of course, editing is usually better than closing if possible. But one or the other was necessary, and sometimes even an edit can't salvage a question if its premise has been tainted by a robot making it up out of thin air.
    – Cahir Mawr Dyffryn æp Ceallach
    Commented Jan 30 at 17:56
  • @Cahir I'm not sure if there was a misunderstanding somewhere, but my understanding was that ChatGPT was never part of the inspiration for that question. Sure, it's fine to remove it, but it didn't really affect the question either way, and closing it would have been unwarranted. There's a bunch of other questions that say something like "I asked ChatGPT and it didn't help, now I'm coming here instead" and have been well received.
    – Rand al'Thor Mod
    Commented Jan 30 at 18:00
  • I really don't understand why people can't bear ChatGPT. We know it's not perfect, but we have to admit that it provides a lot of useful answers, and some of its answers, though not perfect, help us find better solutions. It's also not true that ChatGPT only provides false information; you have to admit that it provides a lot of factual material. After all, the source of ChatGPT's answers still comes from human knowledge. If you don't like ChatGPT, just ignore the answers it gives and provide your own. Is it that hard?
    – SilentSojourner
    Commented Mar 6 at 4:20
  • @SilentSojourner We've had an extensive discussion about this in LitSE chat already, and it is not fruitful to rehash it here. Hopefully reading it will help you understand why "people can't bear ChatGPT."
    – verbose
    Commented Mar 6 at 4:29
  • You might want to add/link the questions that were posted recently to this list; they're short enough to quote in their entirety, in case they are deleted. I think they illustrate the issue pretty well.
    – CDR
    Commented Mar 6 at 13:32

AI-generated content should be treated like any other content. If it's good you upvote it, if it's bad you downvote it, if it doesn't answer the question you delete it, and if it's an off-topic question you close it.

AI might sometimes provide useful content, and will probably provide some wrong answers as well. So will regular users.

We should not police the tools that participants use to construct questions and answers. Some tools are better than others, but there are probably none that are perfect. We don't have rules that you can't use Wikipedia even though it might contain incorrect information; we don't have rules that you can't use a translation of a book that might be flawed; we don't have rules that you can't ask your friend a question and post their answer.

The only appropriate way to judge a post is by its content. An answer generated by AI might be a great answer. It shouldn't be deleted just because of its origin. A question generated by AI might be interesting, thought-provoking, and lead to answers that provide new insight on a topic. It shouldn't be deleted because it was created by AI.

Furthermore, there is usually no foolproof way to be sure whether a post was generated by AI. With a no-AI policy, we risk falsely identifying posts as AI content and/or getting into protracted debates about a post's origins. Accusing users of posting AI content may lead to unwanted drama and discourage some participants.

As of now, we do not seem to have a problem of being inundated with heaps of bad AI posts. It is not such a big deal if there are a few posts here and there with AI-generated bad content. And if there is a particular user that continually posts such answers, they can be dealt with the same way any other user would be dealt with for similar infractions unrelated to AI.

Prohibiting AI-generated answers sets a precedent in the realm of censorship. The site should be a place for free and open engagement with all people and all ideas that meet the site's scope. It is not a huge leap from deleting AI content to deleting content from sources we deem invalid, to deleting answers that contradict an author's statements, to eventually deleting anything we don't agree with.

  • Did you read Gareth's analysis of a ChatGPT-generated answer on this site? The only way a human answer could be that bad would be if someone was maliciously trolling, inventing spurious chapter/page numbers to back up false claims in a plausible-looking way. I'd consider that sort of behaviour potentially grounds for suspension even if no AI was involved.
    – Rand al'Thor Mod
    Commented Jan 29 at 8:15
  • "Prohibiting AI-generated answers sets a precedent in the realm of censorship." AI tools don't enjoy civil rights.
    – Tsundoku
    Commented Jan 29 at 23:55
  • Your opening premise is contestable: "AI-generated content should be treated like any other content". AI-generated content is entirely unlike human-generated content, as it's simply a series of probability-based word combinations. There's no intelligence or thought involved. AI might be helpful if you're looking for an answer to an obscure NYT crossword clue, but it's currently too immature to be allowed anywhere near a site dedicated to building a library of authoritative answers.
    – Chappo Hasn't Forgotten
    Commented Jan 30 at 4:55
  • Your last paragraph really makes zero sense to anyone who remotely understands that AI has nothing to do with "intelligence" at all. It's random word guessing! It's simply not a source for anything. It can't contradict anyone's statements because it does not make statements at all. There is nothing to agree or disagree with because there was zero thought or intent put into it. It's random gibberish that looks like it could make sense but just doesn't. So yes, it's a humongous leap therefrom to anything resembling "censorship". We also "censor" spam on these sites and have done so for decades. Commented Jan 30 at 17:42
  • With regular users you know that they tried. Their answers might be wrong, but you know they actually put intent into them, and you can work out why they're wrong. Yes, we don't always know to 100% whether an answer has been AI-generated. Sometimes we can just trust that it's an actual answer and engage with it as such. But if the poster says it's just robot barf (or it has been revealed to be so), we know that engagement and looking for sense in the answer is futile. The ramblings of a madman might spawn interesting literary discourse, too. But should they be put up as an answer? Commented Jan 30 at 17:50
  • I agree. Intelligent or not, you have to admit that AI does present some useful answers, and it's sometimes better than humans. I also agree that AI is still far from perfect. But why not take ChatGPT as a useful assistant or helper, and correct its mistakes ourselves? Commented Mar 6 at 3:56
  • @Tsundoku "AI tools don't enjoy civil rights." Right. But people who use AI tools do enjoy civil rights. So prohibiting people from citing AI actually means restricting their rights. Commented Mar 8 at 14:24
  • @CahirMawrDyffrynæpCeallach You may have some misunderstanding about AI. AI does simulate human thinking, e.g. through neural network technology. AI doesn't generate its answers out of nothing; it reads from human knowledge, gathers statistics, and processes the knowledge using modeling techniques or algorithms. E.g., AI doesn't know that War and Peace is a great novel. But it takes 1000 articles on the novel, gathers statistics, applies some algorithms, and then concludes that War and Peace is a great novel. So AI can make sound decisions (though not always), just not in the way you think it does. Commented Mar 8 at 14:38
  • @CahirMawrDyffrynæpCeallach Regarding the 'authoritative answer' matter: I do agree that, generally speaking, AI is less authoritative than human-made resources like encyclopedias. But first, AI also gets its knowledge from humans, not out of nothing, as I explained in my previous comment. Secondly, few things are 'authoritative' in literature. Even for renowned authors like Shakespeare, people can have different opinions, and there are usually no unanimously accepted 'facts' about literary works. It's not like math. Commented Mar 8 at 14:43
  • @SilentSojourner I don't think I said anything about authority. It doesn't even factor into it. Asking who's more authoritative is a futile question that distracts from the actual problem. Commented Mar 8 at 16:54
  • @CahirMawrDyffrynæpCeallach That's what you said: 'AI might be helpful if you're looking for an answer to an obscure NYT crossword clue, but it's currently too immature to be allowed anywhere near a site dedicated to building a library of authoritative answers. – Chappo Hasn't Forgotten, Jan 30 at 4:55' Commented Mar 9 at 0:36
  • @CahirMawrDyffrynæpCeallach Sorry, I just realized that I made a mistake and quoted the wrong person. You're right, you didn't mention 'authoritative'. Commented Mar 9 at 18:29
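
As a toy illustration of the "probability-based word combinations" point debated in the comments above, here is a deliberately tiny bigram model in Python. The corpus is invented for the example; each next word is picked purely by how often it followed the previous one, a crude, scaled-down analogue of how a language model chooses tokens:

    # Toy bigram text generator: each next word is sampled in proportion to
    # how often it followed the previous word in a tiny invented corpus.
    import random
    from collections import defaultdict

    corpus = ("opium is mentioned in the novel . the novel is divided "
              "into parts . the character consumes opium in the novel .").split()

    following = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        following[a].append(b)   # duplicates preserve relative frequency

    random.seed(1)
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(following[word])  # frequency-weighted sampling
        output.append(word)
    print(" ".join(output))  # fluent-looking, but nothing checks it for truth

Real language models condition on far longer contexts and vastly more data, which is why their output is so much more convincing; the disagreement above is precisely about that gap between fluency and truth.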

