
It seems there is a new feature that lets sites enable a banner informing users that AI answers are either not allowed or must be properly cited when used. Do we have a stance, and do we want to enable one of these banners?

The post on meta mentions creating a question and reaching a consensus on what the site thinks.

Sites can now request to enable a banner to warn about their policy on AI-generated content

Update:

The linked question now states that they are updating Help Center articles network-wide to say that all AI answers require proper citation, meaning the only remaining option is whether they are allowed at all.

  • Did we really not discuss this or did we assume that all LLMs were too terrible to write an answer that appears to meet Skeptics standards with referencing? Google Bard and Bingbot both have the ability to cite sources from the internet, and I have seen one of them (Bing?) being used here without attribution.
    – Laurel
    Commented Jan 8 at 21:04
  • @Laurel I think we may have discussed it in the past and I am bringing it up again as there are now options for new banners on answers with an official statement in regards to AI answers
    – Joe W
    Commented Jan 8 at 21:06
  • 2
    You know, I think we didn't discuss it except in questions. For the record, I am against having AI content in answers, since it's bad even with sources. Sometimes the information is plain not in the source, other times, it confidently restates BS from someone who has no evidence themselves.
    – Laurel
    Commented Jan 8 at 21:11

2 Answers


No, we should not accept AI-generated or AI-edited answers on this site. The issue with them is that they can be well written and appear correct while in fact being incorrect or logically flawed. The general problem is that if you need the AI to generate or clean up the answer, you are generally not qualified to verify that the output is accurate and correct.

While I do acknowledge that we get answers that start out poor but can be salvaged into excellent answers with the help of a good editor, I don't feel that AI is the one to do it at this time.

The problem with translation and grammar improvement comes from the fact that the tool can suggest and make changes that actually alter the meaning of the content while making it look better written overall. This is something I notice a lot with semi-technical writing, where a tool I use (Grammarly) will make suggestions for improvement that drastically change the meaning of the sentence. When this happens with AI assistance, the user won't get the choice of accepting or declining the change, and even if they did, they are unlikely to spot the difference.

While I do think that in the future AI will develop to the point where it can be useful for generating or fixing up answers I don't feel that it is ready at this time.

Therefore I am suggesting that we adopt a no-AI-generated-content policy and apply that message to all answers posted on the site.

  • 1
    "appear to be correct but are in fact incorrect or logically flawed" This is the bit that confuses me, and I would appreciate expanding. I appreciate some OPs are not skilled in English and may not notice subtle meaning changes. I appreciate some OPs are not skilled in argument and may not notice formal and informal fallacies. But why wouldn't our community see through this, like it does with a significant percentage of answers right now?
    – Oddthinking Mod
    Commented Jan 13 at 0:32
  • 2
    So, what is the point of using the AI to ruin a good draft when the community can support editing to make it a great final answer? And if it's not a good draft, and the AI can't improve it, what is the point of having the post at all, when we would just be deleting it immediately? Put simply, there's no case for the AI generated result to be posted. @Oddthinking
    – Nij
    Commented Jan 13 at 1:08
  • @Nij: That's a loaded question. I reject that the AI must necessarily ruin a good draft.
    – Oddthinking Mod
    Commented Jan 13 at 1:12
  • @Oddthinking The problem is that it isn't serving anyone if we have people putting a question into a chatbot and repeating the answer that was given. If that was all that was required, the person asking the question could do the same. These AI services are designed to provide a well-written response, which can make it harder for some to determine that it is inaccurate. Also, as I have stated before, when it comes to the translation/rewrites they can easily change the meaning and no one would be the wiser. In the long run I think content should be generated by the users, not by an AI.
    – Joe W
    Commented Jan 13 at 1:19
  • @Oddthinking It isn't that it will ruin a good draft but the very real possibility that it can ruin a good but poorly formatted answer and no one would ever know because the person posting didn't understand the issues it introduced. I am not saying that AI should be banned forever, just that it is not ready for use on the site at this time.
    – Joe W
    Commented Jan 13 at 1:20
  • @Oddthinking I would say at a bare minimum anything AI generated, either from scratch or editing help, should be clearly labeled so that users understand how it was generated.
    – Joe W
    Commented Jan 13 at 1:24
  • 2
    If the AI was actually able to take a good draft and make it a great output, we would not be in the situation we are, having the discussions we do, about this. The very fact we have to talk about it as a possibility instead of an expectation means the likelihood of ruining a good answer is too high to be trusted.
    – Nij
    Commented Jan 13 at 1:28
  • @Nij: I worry we are making rules to protect ourselves against the AIs of November 2022, rather than the AIs of 2024. I don't want a blanket ban because I think AIs can help make okay answers good, where you think they can only make terrible answers terrible.
    – Oddthinking Mod
    Commented Jan 13 at 2:23
  • 1
    @JoeW: Should I have to explicitly say which grammar checker I used? Whether I left the draft sit overnight and looked at it again in the morning with fresh eyes? What my blood alcohol level was when I wrote it? I am being facetious, but my point remains: We should judge the result, not the process.
    – Oddthinking Mod
    Commented Jan 13 at 2:25
  • "These AI services are designed to provide a well written response which can make it harder for some to determine that it is inaccurate." Ooh, I missed this, and I find it concerning. If the English is poor, there can be a barrier where you can't tell what was intended, so it is hard to distinguish between "That is wrong." versus "That is not even wrong; it is word salad." If the English is clear, the logic errors should stand out more, not less. It is important we don't use "non-native English speaker" as a proxy for "bad post", and any tools we can use to avoid that should be embraced.
    – Oddthinking Mod
    Commented Jan 13 at 2:34
  • Yes, we absolutely should be basing decisions on what is actually happening, which is broadly the use of almost-free services. I have not ever seen an answer go from okay to good through the use of an AI without further editing that rendered the AI "improvements" void and pointless.
    – Nij
    Commented Jan 13 at 3:25
  • 1
    @Nij: What is actually happening? We've had less than a handful of answers where people have complained that they might be autogenerated. It's just hidden in the noise at the moment.
    – Oddthinking Mod
    Commented Jan 13 at 3:57
  • 1
    @Oddthinking just because we haven’t had a lot doesn’t mean it isn’t a potential problem waiting to happen. We might not need to take action yet but we should at least think about the issue.
    – Joe W
    Commented Jan 13 at 4:58
  • @JoeW: I don't want us to end up in circles. The AIs of 2023 have had minimal bad impact and minimal good impact. The AIs of 2024 have the potential for bad impact and the potential for good impact. I have consistently argued that we should not throw out the potential good with a blanket ban, and that my position might change if the bad became overwhelming. A ban right now would lead to a worse outcome: having to investigate and litigate each accusation. I would rather avoid that quagmire.
    – Oddthinking Mod
    Commented Jan 13 at 8:47
  • 1
    @Oddthinking Should I just point to the various questions that have been posted on the various meta sites talking about this issue? I can do that but I am not sure what value that would really add.
    – Joe W
    Commented Jan 13 at 18:55

We, as a community, need to decide on an AI Policy (perhaps similar to StackOverflow's).

This is just one opinion.

What AI can and can't do

I think the last year or so of generative AI has shown that it is a great tool for producing facsimiles.

It can produce something that looks like an invoice, or looks like a book report, or looks like a resignation letter written by Dracula.

For some applications, facsimiles are all you need. If you are learning English as a second language, making it look like formal English may well be sufficient.

For other applications, it isn't sufficient at all, and "looking like" isn't enough.

Generative AIs can certainly produce something that looks like a Skeptics.SE answer, but it seems that they mostly fail. Citing sources (including pulling out quotes) is one area where they are poor. Generally understanding the question and directing the answer seems to be another.

There is at least one scenario where I think using generative AI is a win for everyone: Taking a draft answer with a solid structure (appropriate references and solid conclusions) but poor English, and turning it into an easy-to-read final draft.

For this reason, I am loath to have a blanket AI ban.

Do we tackle the cause or the effect?

I learnt a lot about moderating from @Sklivvz. Early on, we had a discussion about a troublesome user.

I was concerned that, based on their writing, they seemed to be a paranoid schizophrenic. I also felt that I was in no way qualified to make that call, and if I were it would be unethical to make that call based on a few bits of writing. Further, it would be a complete violation of the Codes of Conduct to make such an accusation in a comment. I didn't know how to deal with them.

Sklivvz redirected my concern: I can't tell what is going on in their head. I can tell that there are problems with the answer. I should focus on fixing that, not fixing the user.

This has influenced my attitude to dealing with allegedly AI generated answers. We can't figure out what was going through the head/CPU of the answerer. We can tell that the answer is bad, and address those reasons.

Other reasons that AI might be bad

  • The StackOverflow policy suggests AI-generated answers might not be what people are expecting (or they came here because they don't want an AI-generated answer). I don't find that very compelling. People should expect quality answers. The mechanisms used to get there aren't relevant.

  • The StackOverflow policy suggests AI-generated answers may have excessive noise and may include false or misleading information. This is a problem, but our existing voting/closing systems are supposed to handle that anyway. There is a risk that those systems might be overwhelmed, but I am not seeing that yet. My position might well change if that was happening.

  • The StackOverflow policy suggests AI's are bad at citing resources. This is a big concern. History shows it is a concern that also applies to humans.

    I have long been suspicious of the "all the references at the end" system people used in high school History essays. The references should be explicitly linked to each claim, rather than simply pointing afterwards at a bunch of books and saying "You'll find support for what I said somewhere in there." I am currently on even higher alert, because this seems to be the preference of AI-generated answers, which means the references may not even exist and, if they do, may not contain relevant supporting material. We should be more insistent that quotes supporting the argument be extracted from the sources.

  • I suspect some people are going to be upset at people who use AI generation because it is "cheating". I don't hold that opinion. There are lots of techniques used that beginners don't seem to know about: Google Scholar, Sci-Hub, Google itself, Wikipedia, Cochrane Collaboration, etc. This is one more.

Conclusion

I do not want to see a blanket ban on generative AI. I do not want to be in an environment where we have to have investigations into whether a particular answer might have been generated.

We have long had lots of poor answers written by people who can't recognise what makes a poor answer. This will continue with people using AI. We already have mechanisms to deal with them, and unless they become overwhelmed, I would like to continue.

If we get agreement here, and someone wants to propose an FAQ answer warning that answers purely generated by AI tend to be awful, will generally get downvoted, and, if repeated, will earn automated answer bans for the users, I will be in favour.

I don't think it should be a banner unless it turns into a common problem. We should be welcoming of new users, and this seems a bit off-putting to innocent newbies.

Disclaimer

I have never used generative AI on any answer. I have no plans to. I frequently use spell-check and grammar checkers (and still errors creep through).

  • 1
    Even using the draft text with a direction to just formalise the writing will lead to changes, likely making the text as a whole inaccurate. You could tell the AI to make the text perfect, then put the result back in with the same instruction, and it will make largely the same amount of change it did before, because it does not understand what it received, what is intended, or what is given out, it only knows what probably should come after the last word.
    – Nij
    Commented Jan 11 at 6:38
  • @Nij: Yes, you need to proof-read what the AI output is, just as I need to check what the spell-checker is recommending.
    – Oddthinking Mod
    Commented Jan 11 at 9:45
  • 1
    I would agree with Nij on this: AI can easily change a correct but poorly written answer into an incorrect but well-written one, phrased in a way that makes it hard to see that it is incorrect.
    – Joe W
    Commented Jan 11 at 13:20
  • 1
    @JoeW: I encourage you to post a counter-answer to give people a chance to choose. What does it mean to say an answer is hard to see as incorrect? We always need a certain percentage of readers to be checking references and logic, and commenting about problems, and I don't see this will be worse.
    – Oddthinking Mod
    Commented Jan 11 at 14:31
  • On Skeptics "correct but poorly written" and "incorrect but well written" should be treated exactly the same if they don't have sources, and providing sources is something that products like ChatGPT simply weren't designed to do. That alone should make AI-written answers easy to identify and downvote like any other unreferenced answer.
    – Giter
    Commented Jan 11 at 21:17
  • @Giter: If a non-native English speaker submits a well-reasoned, referenced article that is grammatically incorrect or difficult to read, it is generally rescuable (and I have spent a lot of time trying to rescue them, so people aren't penalised for their native languages). If they can use an AI to clean it up, I am all for it. (Whether that is possible seems to be a contention here.) Meanwhile, a well-written answer with zero references (and no attempt to fix after being called out) should be deleted. So, I don't think I agree they should be treated the same.
    – Oddthinking Mod
    Commented Jan 11 at 22:50
  • 2
    I will post one in the next couple of days when I get a bit of free time.
    – Joe W
    Commented Jan 11 at 22:55
  • 1
    Perhaps a thought experiment explains my views more clearly: If someone developed a generative AI that was actually good - it produced high-quality, well-referenced, well-written, logically-constructed answers - do you think it should still be banned? If your answer is Yes, you differ from me, and I welcome a counter-answer explaining why, so people can decide. If your answer is No, then we agree it isn't the source that is the problem, but just the quality. I would rather our rules focused on the quality, not the source. If you think that won't work, please give a counter answer explaining.
    – Oddthinking Mod
    Commented Jan 12 at 4:25
  • I've posted an answer now. As for the thought experiment, I do agree that the source isn't as important as the quality of the answer; however, being well written or not isn't enough to say whether it is a quality answer. The main issue is that AI can create a well-written answer that makes people think it is correct when it is completely incorrect.
    – Joe W
    Commented Jan 12 at 22:59
