
Although the problem is not yet as big as it is on Stack Overflow, there have been some confirmed cases of accounts using AI-powered tools, such as ChatGPT, to generate and post multiple answers to questions on Software Engineering, often very quickly. This is not consistent with our expectations for members of this community.

Here on Software Engineering, we expect answers to draw on the knowledge and expertise of the people writing those answers. AI-powered tools can synthesize text that appears to be written by a human, but they do not base the output on education or experience. When people generate and post AI-generated text without review, the text may include inaccuracies or misleading statements, which may not be clear to people who do not have expertise in the subject and may be taken as factual or correct. There may also be biases in the training data that lead to lower quality or problematic outputs.

Using AI-powered tools to generate multiple answers, especially in rapid succession, will likely lead to the suspension of your account or the destruction of an unregistered account. Posting single answers, especially without disclosing that automated tools were used or without responding to critiques or correcting errors, will likely result in post deletion, and continued attempts to post low-quality generated content will likely result in a suspension.

I also recommend that all users use caution when reading answers. Although there may be some "tells" that a post is machine-generated, some of those tells can also be easily edited out.


Update (18 Dec 2022):

The official Stack Exchange policy is that, when machine-generated text is used as or in an answer, attribution is required, just as it would be for any answer that is not your own.

This does not change the fact that posting machine-generated answers is highly discouraged, and that using tools to quickly generate and post multiple answers, especially low-quality ones, will likely result in a suspension, even with attribution.

  • Sometimes we get questions which would require a discussion (and hence will be closed). In the past, we sometimes recommended that the askers use our chat rooms instead, when they asked where else to post. But to be honest, the chances are pretty low that the askers will find an expert in our chat rooms who is willing to discuss their matters with them. However, as long as ChatGPT is available, I think it could be viable to recommend that the askers try asking that bot instead. Of course, askers will still have to evaluate the quality of the answer they get there, ...
    – Doc Brown
    Commented Dec 19, 2022 at 11:52
  • ... but same is true for every answer they get on the SE sites as well. Do you see any reason why a comment like "you could try to ask this question at ChatGPT" could be flagged or deleted as inappropriate?
    – Doc Brown
    Commented Dec 19, 2022 at 11:54
  • @DocBrown That's interesting. I don't think it's necessarily inappropriate. But at the same time, I don't know if it's appropriate. I doubt it's flag worthy, but I'm not sure that a user who is struggling to formulate a question would be able to assess the correctness of anything generated by ChatGPT or an equivalent. Plus, you'd have to consider usage limits and such that may be added, making it impossible for anyone else to get value in the future. So, probably not worthy of mod action, but not necessarily the most helpful comment.
    – Thomas Owens Mod
    Commented Dec 19, 2022 at 14:21
  • Of course, we should not send someone there with a question which might be salvaged here. But questions which are deleted here anyway are of no value to anyone else, regardless of whether they are asked of ChatGPT or not. On the other hand, I made some experiments with ChatGPT over the last few days, and sometimes the answers are really amazing, so they might be of value to the asker. But don't get me wrong, I am fully behind SE's policy on not accepting any answers generated by that bot.
    – Doc Brown
    Commented Dec 19, 2022 at 20:03

3 Answers


I fully agree with this policy.

However, I was wondering if there is still potential for a system like ChatGPT to become useful for this site or other Stack Exchange sites without breaking the rules. So far, I could come up with two ideas:

  1. Ask ChatGPT to check some text for spelling, grammar and wording. This could greatly improve the formal quality of questions and answers, especially for posts by non-native speakers like me. Of course, one must be careful that any rewording of a post does not change its meaning (a rough sketch of how this could look is included below).

  2. Recommend askers of unsalvageable "discussion" questions to try ChatGPT as an alternative to sending them to the site's chat rooms. On the Whiteboard, most times, there is also just another chatbot ("Duga") waiting for them, one which is not capable of giving answers of comparable quality.

I don't think this will violate the SE policy, and the knowledge and expertise will still come from real people, not from an AI.
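
As a rough illustration of idea #1: here is a minimal sketch of how such a copyediting pass could look when driven through the OpenAI Python client instead of the ChatGPT web interface. The model name, prompts, and draft text are placeholders of my own, not anything prescribed by SE or by OpenAI; the point is only that the human author still reviews the suggested wording before posting it.

    import os

    from openai import OpenAI

    # The API key is read from the environment; nothing below is specific to
    # any SE site or policy.
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    # A deliberately clumsy draft, standing in for a post by a non-native speaker.
    draft = "Their is a problem with how my threads is synchronized, see code below."

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a copy editor. Fix spelling, grammar and wording only. "
                    "Do not change the technical meaning of the text."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )

    # The author reviews this suggestion and decides what actually gets posted.
    print(response.choices[0].message.content)

The same caveat as in #1 applies: the suggested rewording still has to be checked against the original meaning before it goes into a post.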

Addendum: after I wrote my initial post above, I made an experiment and asked ChatGPT for suggestions on how to use it on SE for purposes other than generating answers. Interestingly, it came up with four "ideas", if that is a term applicable to suggestions from a language model.

I could give a summary, but I guess you may be interested in the literal answers:

  1. Providing suggestions for related questions: ChatGPT could be used to suggest similar or related questions to the one being asked, which could help users find the information they are looking for more quickly.
  2. Generating summaries of long or complex questions or answers: ChatGPT could be used to generate a summary of a long or complex question or answer, which could be helpful for users who don't have the time or inclination to read through a lengthy text.
  3. Generating sample code: ChatGPT could be used to generate sample code for a particular programming task, which could be helpful for users who are trying to learn a new programming language or framework.
  4. Generating tags or keywords for questions: ChatGPT could be used to generate tags or keywords for a question, which could help to improve the searchability of the question and make it easier for other users to find.

Here is my opinion on this: #1 may be used by people who have trouble finding the "right question" to ask. However, when such a question is posted, attribution might be required.

#2 is fine as long as the answers are not posted as "new answers" on any SE site.

#3 is somewhat dangerous, even with attribution, and is clearly against your recommendation. Even when an answerer tests the example code to validate its correctness, they may violate some license terms accidentally.

#4 is a nice idea and I think it is probably unproblematic.

  • Both are valid. #1 may require attribution - that's a global SE policy to require attribution, although performing copyediting is different from contributing content, so perhaps attribution isn't necessary since we don't need to provide attribution to Grammarly. #2 is a bit riskier since some information coming out of systems like OpenGPT is wrong but looks plausible, and some people may not be able to tell the difference. But as the systems improve, that may change, and they may offer benefits over traditional search engines.
    – Thomas Owens Mod
    Commented Dec 23, 2022 at 11:02
  • @ThomasOwens: disclosure: before I posted this, I asked ChatGPT to improve my wording in parts of this answer; it seemed to work well.
    – Doc Brown
    Commented Dec 23, 2022 at 13:01
  • So far, I've been most impressed with the copyediting, translation, and formatting aspects of ChatGPT. Especially the more esoteric ones, like converting text into poetry or song. But fixing grammar and spelling and improving the voice seem to be very strong capabilities right now.
    – Thomas Owens Mod
    Commented Dec 23, 2022 at 13:40
  • 1
    @ThomasOwens: absolutely. Furthermore, I added some results from a new experiment. And I hope it is ok in this case that I posted some literal output from ChatGPT here.
    – Doc Brown
    Commented Dec 23, 2022 at 16:09

It's funny that ChatGPT appears to be self-aware about this:

Prompt: How to detect if an answer on Stack Exchange was written by an AI?

There is no definitive answer to this question, as there is no surefire way to determine whether or not an answer was written by an AI. However, some clues that may suggest that an answer was written by an AI include:

The answer was posted very quickly, often within minutes of the question being asked.

The answer is identical or very similar to another answer on the site.

The answer is very short and concise, with little or no explanation or elaboration.

Another thing to look for is the level of detail in the answer. An AI might not provide enough detail to answer the question fully, or it might provide too much detail that is not relevant to the question.

Additionally, if an answer is full of errors or contains nonsensical phrases, it may have been written by an AI.

  • I will say that these are... wrong. These aren't good indicators or tells of an AI-generated answer, at least not for something with the kind of training data that ChatGPT has.
    – Thomas Owens Mod
    Commented Dec 6, 2022 at 3:33
  • I don't know if most chatbot answers fit this description, but I know for sure that sometimes human answers do. When we encounter such answers, it does not matter whether a chatbot or a human wrote them; if the quality is that low, they will be downvoted (and sometimes deleted).
    – Doc Brown
    Commented Dec 6, 2022 at 7:02
  • ... Moreover, when we look at this answer, which consists mainly of ChatGPT-generated content: this answer alone disproves pretty much all of the arguments listed above.
    – Doc Brown
    Commented Dec 11, 2022 at 18:51
  • True, SE's quality filter is done via votes, reviews and comments. It's irrelevant whether an answer is written by an AI or not; if it's low quality based on the collective expertise of all users, it should be removed. However, this just adds load to the already full review queue.
    – Yarek T
    Commented Dec 13, 2022 at 11:29
  • Using ChatGPT output as an example like this has got to be one of the few "valid" usages of ChatGPT in an answer that doesn't fall under the "highly discouraged" category mentioned in the post…
    – M. Justin
    Commented Jan 4, 2023 at 8:25

Questions are stagnating without answers. You either get automated answers or you don't get answers at all.

  • What evidence do you have that questions are stagnating? Software Engineering has fewer than 5% of our questions unanswered (using the criterion of "no answers", not "no upvoted answers"). Stack Overflow has almost 14% unanswered. If you have concrete ideas for attracting new experts or for getting older unanswered questions in front of people who may have joined or gained the expertise to answer them since they were asked, please do share them.
    – Thomas Owens Mod
    Commented Jun 25, 2023 at 9:59
  • It's a significant percentage. It's thousands of questions. There will never be enough people. Commented Jun 25, 2023 at 10:27
  • automatically generated texts are not answers
    – gnat
    Commented Jun 26, 2023 at 22:25
  • They're better than yours. Commented Jun 27, 2023 at 15:57
  • that's comparing apples to oranges
    – gnat
    Commented Jun 27, 2023 at 18:43
