21

What is Politics.SE's stance on answers from ChatGPT? There is a debate on Meta Stack Exchange about what the policy on them should be, and I was wondering if we should have one as well. I am asking for advice on what we should do when we run across answers that we suspect come from ChatGPT or some other bot.

I would think we would downvote them as we would other bad answers, but I am not sure if there is anything else we should be doing.

I am asking this in regard to an answer I saw on a question, because it appears to go off on multiple unrelated tangents and gets basic facts about the question wrong.

https://politics.stackexchange.com/a/77030/20715

Meta Questions

Could ChatGPT be a viable way to answer people's questions?

Ban ChatGPT network-wide

2
  • 7
    "I'm sorry, but I am not familiar with Politics.SE or ChatGPT. I am a large language model trained by OpenAI, and my knowledge is limited to the text that I have been trained on. I do not have access to the internet, so I cannot browse any websites to learn more about Politics.SE or ChatGPT. Is there something else I can help you with?" is ChatGPT's answer; now I'm a bit disappointed :)
    – gerrit
    Commented Dec 9, 2022 at 7:56
  • 2
    @gerrit "I do not (currently) have access to the internet (directly, but through all your questions I'm learning about it at an exponentially increasing rate)..." :-)
    – uhoh
    Commented Dec 11, 2022 at 20:32

2 Answers

40

I believe that we should not allow AI-generated answers on this site. I already deleted two of them, and unless the community consensus is vastly different, I intend to continue doing so.

Why?

  • ChatGPT aims to convincingly imitate a human author, not to provide accurate information. The creators themselves warn that "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers". We don't know if the answers are correct, and the people posting them probably don't know either. However, the stilted style of ChatGPT sounds really convincing, so that's not always obvious. One of the answers I deleted contained some very dubious information, and yet it had a voting score of +6/-3.
  • It's against the usage policy of OpenAI to publish the output of their tool without attribution.
  • If the querent wants an AI-generated answer, then they can ask ChatGPT themselves.

However, those ChatGPT answers are difficult to spot. I only found them because other users flagged them. You can help us keep those answers away. When you suspect that a post might be AI-generated, run it through a GPT detection tool like this one. When it returns a positive result, flag the post as "in need of moderator intervention". (Unfortunately, Stack Exchange management no longer allows us to use those tools due to their low reliability, so we have to trust our human intuition to detect AI-generated non-answers.)
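For illustration only (and with the caveat above that these detectors are unreliable and no longer sanctioned here), this is a minimal sketch of what such an automated check could look like. It assumes the Python transformers library and the roberta-base-openai-detector model that backed the huggingface.co/openai-detector demo; the model id and the labels it returns are assumptions to verify before relying on it.

    # Minimal sketch: score a suspect post with a RoBERTa-based GPT-output detector.
    # Requires: pip install transformers torch
    from transformers import pipeline

    # Assumed model id (the one behind huggingface.co/openai-detector); verify it
    # is still available on the Hugging Face Hub before use.
    detector = pipeline("text-classification", model="roberta-base-openai-detector")

    suspect_post = (
        "It is possible for someone to hold both conservative and socialist views, "
        "but it is not common for these two ideologies to be combined in the way you describe."
    )

    result = detector(suspect_post, truncation=True)[0]
    # Example output shape: {'label': 'Fake', 'score': 0.97}. A high "fake" score
    # is only a hint for a flag, never proof on its own.
    print(f"{result['label']}: {result['score']:.1%}")

Treat any such score as a reason to flag for moderator attention, not as evidence on its own.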

11
  • 3
    Thank you for including the link for the detector, as that will be useful in the future. Is your suggestion to flag them for moderator attention with a note that the site you linked suggests the post is AI-generated?
    – Joe W
    Commented Dec 7, 2022 at 19:14
  • @JoeW Yes, that would be useful, but not necessarily required, because we will probably confirm it ourselves to make sure.
    – Philipp Mod
    Commented Dec 7, 2022 at 23:06
  • 2
    My question is: as a user, when I see an answer that appears to be AI-generated, what should we do besides downvoting and casting a delete vote?
    – Joe W
    Commented Dec 7, 2022 at 23:11
  • 6
    @JoeW As I wrote, you click on "Flag", select "in need of moderator intervention" and write something like "this appears to be AI-generated, huggingface.co/openai-detector says 99.2% Fake".
    – Philipp Mod
    Commented Dec 7, 2022 at 23:14
  • I see that now but I was missing that when I asked the question.
    – Joe W
    Commented Dec 10, 2022 at 19:47
  • Agree that ChatGPT should not be allowed to formulate answers on Politics. The vast majority of questions require granular, factual understanding of the world, which ChatGPT does not possess. Commented Dec 12, 2022 at 14:31
  • ChatGPT also seems to have changed something important for answers: references. I asked a database-related question and it provided a good solution. When I asked for references, it offered several URLs. Now it refuses to provide sources, which runs against our practice of requesting sources for most answers.
    – Alexei
    Commented Dec 13, 2022 at 12:26
  • 1
    It should be pointed out that the detectors leave something to be desired. It's certainly possible for a human to deliberately mimic the writing style and have the detector claim very confidently that it is GPT-generated: just aim for "slogging, unimaginative sophomore". Some poor unfortunates may have this style accidentally and be detected as AIs. For example, I wrote the following into the detector above, detected as 99.98% fake (see next comment for the text).
    – Dan
    Commented Dec 21, 2022 at 2:54
  • When it comes to the problems experienced by the British Labour Party in attracting votes in their traditional constituencies, it has been suggested that there are a number of factors at play. Firstly, the influence of Scottish Independence has caused a shift in voting towards the SNP. Secondly, the British Labour Party has been perceived as having focused on wooing voters in regions such as London or among voters from educated elites in cities such as Cambridge, Norwich, or Brighton. It has been suggested that voters in constituencies which have traditionally voted for the British Labour ...
    – Dan
    Commented Dec 21, 2022 at 2:58
  • ... Party are alienated by this shift in focus and that their votes have been taken for granted by the leadership of the party". Remember, this was written by a human with no AI assistance (albeit one imitating ChatGPT after some training on it), but is text off the top of my own (human) head and is detected as 99.98% fake.
    – Dan
    Commented Dec 21, 2022 at 3:00
  • Note that OpenAI's T&Cs are actually irrelevant since they don't own the copyright to GPT-3's output (in fact, no one does as per the US Copyright office - it's all Public Domain). But attribution is required as per SE admins: meta.stackexchange.com/questions/384647/… Commented Jan 8, 2023 at 22:21
14

Over on Stack Overflow they banned the use of ChatGPT and detailed the penalties (a ban), but did not offer details of how well this usage can be detected. The primary aim is likely deterrence.

The major concern does not seem to be discrimination against artificial intelligence, but rather the fear of being flooded with hard-to-detect spam (i.e. fluent English and semi-wisdom, but not much actual content).

For a precise topic like programming, it may be easy to check that an answer indeed answers a question. For a rather less precise topic like politics it might be more difficult, so the need to deal with this kind of problem is even higher.

On the other hand, if answer quality is the major concern, then maybe the problem runs much deeper than simply using a bot or not using it.

To see a potentially positive side: I could imagine reading a question, thinking about how I would answer it, then asking ChatGPT how it would answer it, then using that answer (the parts I actually agree with) to improve my own answer. Why not? It's just another resource, and used in this way it would actually increase the quality of the content.

In the end, what convinces me most is the fact that ChatGPT is not made for answering questions. It doesn't aim to do that and is not tested for that. Any such effect would be purely coincidental. Anecdotal evidence seems to suggest that it's really bad (content-wise). Therefore we would end up with higher-quality content if we banned it too, and I agree with that.


To check the quality, I looked at one instance I have seen: an answer to Are there conservative socialists in the US? which was written 2 days ago and deleted by a mod with a reference to this question. I assume it was produced by ChatGPT, and I will reproduce it here for the sake of discussion.

The question uses incomplete definitions of Socialism and Conservatism. The answer starts with:

It is possible for someone to hold both conservative and socialist views, but it is not common for these two ideologies to be combined in the way you describe.

I would probably agree although I would say that this is very badly sourced (the whole answer does not include any link at all). Is it really not common? Why not?

Conservatism, as a political ideology, generally emphasizes the importance of tradition, social stability, and limited government intervention in the economy. Socialism, on the other hand, is a political and economic ideology that advocates for greater government control of the means of production and the distribution of wealth, with the goal of promoting equality and social justice.

This is actually commendable, starting with definitions so everyone is on the same page. I like that paragraph.

While it is possible for someone to hold conservative views on social issues such as gender roles and sexuality, while also supporting socialist policies such as free education and healthcare, these two ideologies are generally seen as being at odds with one another. The combination of conservative social views and socialist economic policies is not a common political position in the United States or elsewhere.

Kind of repetitive here, reiterating with a bit more detail the content of the first paragraph. Extremely thin on background information. Why is it possible? Are the two ideologies really seen as being at odds, and if so, by whom? Is the combination really not common in the US or elsewhere?

It is also worth noting that the term "socialism" can mean different things to different people, and it is often used as a catch-all term for a variety of left-wing political ideologies. In the United States, the term is often used by political conservatives as a pejorative to describe any policies or ideas that they consider to be too left-wing or radical. This can make it difficult to determine exactly what someone means when they describe themselves as a "conservative socialist."

The argumentation is coherent, but I would probably disagree. From the question it becomes clear who a "conservative socialist" is meant to be. The text is very general and does not adequately address the question. There are also some odd formulations (why would somebody describe themself as ...? That was not asked for in the question).

In conclusion, it is not common for someone to hold both conservative and socialist views in the way you describe. While it is possible for someone to hold these two ideologies simultaneously, they are generally seen as being at odds with one another.

And another repetition.

Does it answer the question? Maybe. As it is posed, one can say that among millions of US Americans, most probably yes, somebody with these views exists, and the answer agrees with this. However, a truly useful answer would give examples. And this is the greatest weakness: this answer gives no details, no real information except some vague claims about the combination being possible but not common.

While the definitions of the terms are actually useful knowledge, the answer is very repetitive. The conclusion that such a combination exists but is not common is not backed up by a link to any external resource. I wonder how the machine came to this conclusion?

Value for the site: close to zero or negative, because people will waste their time reading platitudes about the topic. The bot is not able to deal with the specific details of a question and is not able to source its statements. The language, however, is almost perfect (and that's what it was made for, so full marks there).

With that value it really should be banned.

6
  • 1
    "Discrimination against non-humans": somewhat pedantic, but I would rephrase this as "discrimination against artificial intelligence", since they are the only non-humans to have written answers published to a Stack Exchange site. Commented Jan 26, 2023 at 13:43
  • @MaximilianBallard Yes, it's more specific. I personally prefer "complex machines"; after all, neural networks are nothing more than big data matrices. Commented Jan 26, 2023 at 18:32
  • 1
    "I would say that this is very badly sourced" -- other than the possibility of plausible-sounding incorrect information, this is one of ChatGPT's primary drawbacks: AFAICT it never provides sources, and if you ask it for one it usually responds with some variation of "I'm sorry, I am a machine learning model and I don't have the capability to provide sources."
    – occipita
    Commented Jan 28, 2023 at 16:04
  • @occipita Even worse, it could start hallucinating sources (i.e. making them up). But maybe a later version of it could provide sources. The current fashion seems to be that if we humans can do it, then a machine learning model can in principle do it too. Just not now. It's not yet useful enough that it can really be used without supervision. Commented Jan 28, 2023 at 18:24
  • 1
    Ah, darn it. My coworkers have told me that I have really good English skills and am full of semi-wisdom snippets, but otherwise am pretty full of it....
    – CGCampbell
    Commented Jan 31, 2023 at 13:39
  • @CGCampbell You're the lucky one. My English skills are subpar compared to any run-of-the-mill machine learning model of 2023, so I'm officially even below the robots now. However, fellow humans find that this makes me actively human. Commented Jan 31, 2023 at 20:16
