3
$\begingroup$

While it still seems undetermined whether AI-generated posts on BSE are outright banned, should be marked as such, or need no action at all:

Should posts generated with the help of AI be marked as AI-generated content?

I find it reasonable to discuss the possible design of such warnings, if only to propose a template that post authors can use voluntarily while no such requirement is in force at the time of writing the post.

First, I propose some general principles:

  • it should be concise - though this might not matter much, as GPT answers are lengthy anyway,
  • if it is not concise, it should be structured so that readers are encouraged to read the parts of the warning relevant to them,
  • it should be visible (pop out), as we do not always read from the beginning,
  • it should be accessible:
    • can it use emojis, or might they display incorrectly on some devices?
    • likewise, can it use other formatting enhancements such as MathJax?
  • it should explain, or link to explanations of, the associated problems; the links should preferably point to pages within the SE network,
  • while a warning is useful to everyone, the explanation should target the less informed part of the potential readership, who are unaware of AI-related problems.
$\endgroup$
2
  • 1
    $\begingroup$ I'd encourage all interested members of the community to post their proposals so the most voted one is elected as "official". $\endgroup$ Commented Jun 5, 2023 at 21:09
  • $\begingroup$ I, for one, welcome our new AI overlo- $\endgroup$ Commented Jun 5, 2023 at 23:57

3 Answers

3
$\begingroup$

Agreeing on a unified warning design for AI-generated content, and making it a standard available to all users, seems like the right way forward.

Markus makes great points; I agree the warning should be descriptive, succinct, and briefly explain the dangers of AI-generated content to less informed readers, linking to in-network posts for more in-depth explanations.

Markus's point about using an image or icon "so that who has already read it in the past can easily recognize and skip it" seems like an excellent idea I'll also plagiarize.

I'll make one important distinction from Markus's post, though. As already well justified by other members in the linked meta post, I don't personally think AI-generated content has any place in our network. As such, I'd take a stricter approach and vote for entirely removing content that is clearly AI generated.

But one could then ask: "If AI-generated content is entirely removed, then why do we need these warnings?" I'd say we reserve them for posts where it isn't immediately clear whether the content was AI generated, or where there is reasonable doubt about its origin: cases where the content was heavily edited, or where there isn't enough content to ascertain its origin.

With that in mind I'd go for something along the lines of

AI Icon Warning, Potential AI generated content⚠️

This post is suspected to have been generated with the help of an artificial intelligence, chat bot, or other language model tools without proper attribution.

There is reason to believe this post was not written by a human due to its structure and grammatical construction, and it has been marked by our community.
Current AI chat bots are language models, which means they are like parrots: they can mimic human language well, but they are built to write seemingly logical, seemingly factual, grammatically valid text, rather than to actually be correct, like a human would.
The veracity of the following content cannot be ascertained. See the Concerns regarding AI generated content.

Copy-pastable snippet:

> ### ![AI Icon](https://i.sstatic.net/zNrQD.png) Warning, Potential AI generated content⚠️
> **This post is suspected to have been generated with the help of an artificial intelligence, chat bot, or other language model tools without proper attribution.**
>
> There is reason to believe this post was not written by a human due to its structure and grammatical construction, and it has been marked by our community.<br>
> Current AI chat bots are language models, which means they are like parrots: they can mimic human language well, but they are built to write **seemingly** logical, **seemingly** factual, grammatically valid text, rather than to actually **be correct**, like a human would.<br>
> The veracity of the following content cannot be ascertained.
> See the [Concerns regarding AI generated content](https://meta.stackexchange.com/q/384396).
$\endgroup$
1
  • 1
    $\begingroup$ Since I have yet to see AI-generated content that is useful, I'm inclined to agree with this, the pictogram is also a nice upgrade. $\endgroup$ Commented Jun 5, 2023 at 22:00
2
$\begingroup$

Here's my most recent iteration of such a warning.

It wraps the entire warning in a blockquote, starting with a robot emoji so that those who have already read it in the past can easily recognize and skip it. It then tries to explain in simple terms how AI-generated content is misleading in that it seems much better than it actually is, and it points to a thread that elaborates on this…

🤖 The below content is AI-generated

GPT outputs text with statistical relations similar to its training data; in practice this means text that seems valid, seems confident, seems logically consistent, etc. While humans write text to be correct → and therefore it seems correct, an AI writes text to seem correct → but that doesn't necessarily make it correct. Consider this analogy: an engineer puts wheels on a car in order to achieve the goal of the car moving. An AI will put wheels on a car because it's trained on cars with wheels. But the wheels won't necessarily be connected to the engine, and so the car, while seemingly functional, is worthless. More reading: Should posts generated with the help of AI be marked as AI-generated content?
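Following the copy-pastable convention used in Duarte's answer, the warning above could be shared as a snippet like this (a sketch; the final "More reading" link should point to this meta question, which I've left as plain text here):

```markdown
> 🤖 **The below content is AI-generated**
>
> GPT outputs text with statistical relations similar to its training data; in practice this means text that *seems* valid, *seems* confident, *seems* logically consistent, etc. While humans write text to be correct → and therefore it seems correct, an AI writes text to seem correct → but that doesn't necessarily make it correct. Consider this analogy: an engineer puts wheels on a car in order to achieve the goal of the car moving. An AI will put wheels on a car because it's trained on cars with wheels. But the wheels won't necessarily be connected to the engine, and so the car, while seemingly functional, is worthless.
>
> More reading: Should posts generated with the help of AI be marked as AI-generated content?
```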

$\endgroup$
1
$\begingroup$

First of all, thanks for caring about this topic, which is spreading across the internet. I just read your links for a few hours, and this seems to be an endless rabbit hole of thoughts and opinions.

My post here is a reaction as a contributor whose post was marked with the AI-generated warning (the one seen in Duarte's post).

I agree that a notification is needed, but the current paragraph is really scary to me, and it occupies a lot of space. A negative icon, the word "warning", and a warning sign at the end seem like too much.

Wouldn't a single warning icon and a short sentence, with a link to a page explaining the warning, be enough? A link also gives you more freedom to adjust the explanation later and keep it in sync with older posts marked by this warning.

⚠️ This post is created with the help of an artificial intelligence.
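In the copy-pastable form used elsewhere in this thread, that could look like the snippet below (a sketch; the link text and target are my assumptions — I've pointed it at the Concerns regarding AI generated content discussion already linked in Duarte's answer, but any explanation page would do):

```markdown
> ⚠️ This post is created with the help of an artificial intelligence. [Why does this matter?](https://meta.stackexchange.com/q/384396)
```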

Thanks for your opinion :)

$\endgroup$
5
  • 1
    $\begingroup$ I agree with you exactly! My wish would have been to do without even a short text hint of this kind and use a colored marker or tag instead, but unfortunately that's not technically possible here. I think your suggestion is currently the most useful in this respect. Thanks for your contribution! $\endgroup$
    – quellenform Mod
    Commented Jan 29 at 10:50
  • $\begingroup$ Hmmm... I don't agree and I don't disagree. Maybe the warning should be scary and explicit... The size otherwise doesn't matter - it's not really a problem that you need to scroll a few lines more, right? The text is designed so that you can easily skip reading it entirely if you want. So a question to be answered here is: how much do we care to alarm a reader, who wouldn't necessarily click the link? Also the point about normalization is sound and in spirit of other SE conventions... $\endgroup$ Commented Jan 29 at 11:28
  • $\begingroup$ Also keep in mind the "suspected" part. You can't be sure if the post author doesn't attribute the AI. $\endgroup$ Commented Jan 29 at 11:33
    $\begingroup$ @MarkusvonBroady ... sure, it should state that there was a possibility (potential) of AI use ... I'm not saying I've found the best wording ... but the current version is so toxic that I wanted to abandon the post immediately, and I don't think that is the purpose (especially if the answer is correct). I think we want to warn users, not scare them ... no problem to scroll? For me it is ... I became a fan of BSE for its tendency to create straightforward questions with clean and clearly structured explanations in answers ... such an intense warning message works the opposite way. $\endgroup$
    – vklidu
    Commented Jan 29 at 19:35
    $\begingroup$ From the position of a graphic designer: the warning title repeats the same thing, the same info. It takes you longer to identify and recognise all the parts than one simple triangle that tells the whole story at once... $\endgroup$
    – vklidu
    Commented Jan 29 at 19:38
