11

It seems there is a new feature that lets sites display a banner informing users that AI-generated answers are either not allowed or must be properly cited when used. Do we have a stance, and do we want to enable one of these banners?

The post on Meta mentions asking a question and reaching a consensus on what the site thinks.

Sites can now request to enable a banner to warn about their policy on AI-generated content

3 Answers

21

I posted my opinion on AI-generated content about a year ago. My opinion hasn't changed since then. In fact, my experience over the past year has only cemented my view: people use AI to answer questions on subjects they know nothing about and post the output without fact-checking. And because large language models are very good at making their generated content sound eloquent, the usual telltale signs that someone doesn't know what they are talking about are often missing, which results in the spread of reliable-sounding misinformation.

This is harmful to the site and harmful to society as a whole.

Therefore I believe we should request this banner in the "not allowed" variant.

  • That is my opinion as well.
    – Joe W
    Commented Jan 8 at 18:49
  • This is especially true in politics, where disinformation abounds. I've seen several articles recently about how people use AI deliberately to spread political disinformation on social media in an attempt to influence elections, and I'd hate to see that happening here.
    Commented Jan 11 at 14:16
  • Accepting this answer, as there don't appear to be any new votes or opinions coming in.
    – Joe W
    Commented Jan 13 at 21:20
  • I think it’s important to acknowledge that misinformation and disinformation are also generated aplenty by humans, including every government in the world and most cultural/scientific institutions. AI isn’t really a game changer here.
    Commented Jan 14 at 0:36
  • @JonathanReez But in those cases we are at least dealing with intentional malice, instead of simple carelessness while trying to collect Internet points.
    – Philipp Mod
    Commented Jan 14 at 12:28
  • Yes and no. With human mis/disinformation it’s likewise often the case that careless individuals spread it to others without bothering to double-check the facts or apply some logical reasoning. One big example was journalists spreading information about Iraq having WMDs back in 2004, which turned out to be fake news of the highest caliber.
    Commented Jan 14 at 14:38
  • The "not allowed" variant of the banner has now been enabled.
    – Sasha StaffMod
    Commented Mar 14 at 18:58
4

I’m a huge fan of ChatGPT and use the paid version daily. I’ve argued with many people on SE about AI and don’t believe that the current LLMs are merely stochastic parrots - if anything, their reasoning often makes more sense than the average person’s. But my answer is still:

No, AI answers should not be allowed here

While ChatGPT is better than the average human at many tasks, it fails miserably at any sort of political question. This is due to biased training by model creators, pollution of the training data by numerous text sources providing poor answers to political questions, and limited reasoning, quoting, and research abilities. ChatGPT can perhaps give users a good starting point for an answer, but it’s not a substitute for the human reasoning capabilities of the average Politics.SE user.

I expect ChatGPT 7 or 8 to be good enough to change my mind, but we’re still years away from those being launched, so it’s a moot point for now.

  • I think you have a good point about them being potentially viable in the future as the technology improves, though it still raises the question of whether you should be posting content that you did not generate.
    – Joe W
    Commented Jan 14 at 0:21
-1

I am in general agreement that AI-generated content is not acceptable. Generative AI has a tendency to hallucinate and produce credible-sounding nonsense, which is of course not what we want.

I feel there is one exception. For non-native English speakers who have written an answer in poor English, it can be useful to use AI merely to reword the answer and make it more idiomatic.

Given a strict enough prompt, the chances of an LLM materially changing the meaning are acceptably low.
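As a rough illustration of what a "strict enough prompt" could look like, here is a minimal sketch using the OpenAI Python SDK. The model name, prompt wording, and temperature are illustrative assumptions on my part, not a tested recipe:

    # Minimal sketch: rewording a draft answer without changing its meaning.
    # Assumes the OpenAI Python SDK (openai>=1.0) is installed and the
    # OPENAI_API_KEY environment variable is set. Model and prompt are
    # illustrative choices, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    STRICT_REWORD_PROMPT = (
        "Rewrite the user's text in clear, idiomatic English. "
        "Preserve the meaning, claims, and level of certainty exactly. "
        "Do not add, remove, or correct any facts or arguments."
    )

    def reword(draft: str) -> str:
        """Return an idiomatic rewording of `draft`, meaning preserved."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            temperature=0,        # discourage creative departures from the draft
            messages=[
                {"role": "system", "content": STRICT_REWORD_PROMPT},
                {"role": "user", "content": draft},
            ],
        )
        return response.choices[0].message.content

    print(reword("The minister have resign because the scandal was too big pressure."))

Even with a prompt like this, the author would still need to compare the output against their draft, since nothing forces the model to actually obey the instruction.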

That being said, given the difficulty, or outright impossibility, of reliably detecting AI-generated content, any policy we make is likely to be largely unenforceable.

  • If the user is a non-native English speaker, they are unlikely to catch any errors or changes introduced in the rewording, and the polished result is more likely to hide those errors from others.
    – Joe W
    Commented Jan 9 at 14:26
  • In addition to Joe's comment, they may not have a great translation or understanding of the question to begin with, something the likes of GPT is unlikely to be able to fix either.
    Commented Jan 10 at 15:35
  • How can moderators tell the difference between AI content that was created out of whole cloth and AI content that is merely a rewording of a human-written answer? The rules have to be enforceable. (Also, SE already has a mechanism whereby correct but poorly worded answers can be smoothed out by native speakers through the "edit" feature.)
    Commented Jan 11 at 14:19
  • @A.R., good point, but keep in mind that we have no accurate way to spot AI content at all. Tools that claim to be able to differentiate between AI content and human content have been shown to be highly faulty.
    – Ben Cohen
    Commented Jan 12 at 9:36
  • @JoeW someone speaking English at the B2 level has enough knowledge to catch AI’s mistakes but not enough to write great English on their own. No need to deal in absolutes.
    Commented Jan 13 at 23:06
