-12

The announcement of the ChatGPT ban only mentions the potentially incorrect answers the AI could add to the site if it were allowed.

However, I can think of more uses for the AI than writing answers. The announcement focuses on answers, but a site's reputation system can also be gamed by asking the AI to generate questions, for example.

I asked ChatGPT:

Can you generate a question for Stackoverflow?

Sure, here's a question:

"What's the best way to implement authentication and authorisation in a Node.js and React application using JWT?"

I also asked:

Can you generate a question for Stack Exchange Earth Science?

Sure! Here's a question:

"What is the relationship between ocean currents and climate change, and how do changes in ocean currents affect weather patterns?"

It only gives a title and doesn't generate any wrong code for Stack Overflow. The Earth Science question, however, is decent, and a user could game the reputation system by posting a succession of acceptable questions like it.

Shouldn't the ChatGPT policy be more specific about where you can and can't use AI?

Does the policy apply to every Stack Exchange site, or should every site define its own policy?

6
  • 9
Since there is no central policy on ChatGPT content and you linked to the Stack Overflow policy, this question is actually off-topic here. AI policies are defined per site and should be discussed on each site's Meta. Commented Mar 14, 2023 at 12:11
  • 5
Note that the Stack Overflow policy isn't specific to answers. It applies to questions too; please read the whole post. So much so that we had to explicitly state that questions about ChatGPT-generated output are okay provided they are otherwise on-topic. Commented Mar 14, 2023 at 12:13
  • 1
    I've voted to close this as a duplicate, because the proposed duplicate makes it explicitly clear that there is no network-wide policy on AI generated content other than that it is always "the work of others" and, thus, requires that the referencing requirements be followed. Thus, a question like what you've asked here cannot be answered here and must be asked on a per-site basis.
    – Makyen
    Commented Mar 14, 2023 at 13:18
  • @Makyen Yes, I have edited my question and voted to close.
    – user1242306
    Commented Mar 14, 2023 at 14:15
  • 3
    @Universal_learner Wanting the policies to be more clear is a good thing. Unfortunately, as you've come to know, the company has declined to make a network-wide policy and left it up to the individual sites. You can find a list of them in the answer to "Is there a list of ChatGPT discussions and policies for our sites?", which is linked in the answer you accepted. So, getting clarity on the policy for a particular site is something that needs to be asked on that site's child meta.
    – Makyen
    Commented Mar 14, 2023 at 15:00
  • @Makyen Done
    – user1242306
    Commented Mar 15, 2023 at 13:12

2 Answers

1

TL;DR: You've referenced the wrong policy. Instead, see "Ban ChatGPT network-wide" and "Is there a list of ChatGPT discussions and policies for our sites?"


While Stack Overflow is the flagship site, and by far the one with the highest numbers in pretty much every statistic (I think), policies posted on Meta Stack Overflow aren't Stack Exchange network-wide policies.

Yes, we can learn a lot from them, but in this specific case at this specific time, changes to that policy should be discussed on Meta Stack Overflow.


AFAIK, at this time, ChatGPT might help save time for people who already have some base knowledge of how large language models work, of prompt engineering, and of the topic they are "chatting" about, since the chatbot might hallucinate. See the OpenAI ChatGPT introduction.

IMHO, sites that are about facts should not allow generated text that has not been properly verified or seriously endorsed by a subject-matter expert (SME). This type of content on SE's fact-centered sites should probably be peer-reviewed, which implies that each site should decide whether to allow it at all, how the verification will be done and reported, who the SMEs are who may endorse it, and so on.

Anyway, the current SE network policy on this topic is that each community should decide its own policy.

2
So each site defines its own policy. I asked on my main site, Earth Science, to see if we can define one. Code is not the same as science.
    – user1242306
    Commented Mar 14, 2023 at 13:29
The last statement is not needed, at least not in this answer. I sincerely hope that you get good participation on your post on Earth Science.
    – Rubén
    Commented Mar 14, 2023 at 13:47
4

No, that question is not good; it does not show any research effort, and may be downvoted because of that. Also, it seems to me (I'm not a subject matter expert, though) that you can write entire books on that subject, so it's rather broad as well.

Anyway, as long as there is no complete ban on AI-generated content on the site (some sites only explicitly disallow it in answers; on Stack Overflow it's banned in questions too), you are allowed to post it, but all content you didn't write yourself, including questions, should be properly attributed.

20
That's what I am asking on Earth Science: whether I can quote ChatGPT for trivial data that I would otherwise have searched for on Google. So I can quote ChatGPT? Is that allowed? Ok, thx. As you are an active user on Earth Science, you can discuss it in the Meta Earth Science Stack Exchange post if you wish.
    – user1242306
    Commented Mar 14, 2023 at 11:27
  • 1
Too broad, yes, you are right.
    – user1242306
    Commented Mar 14, 2023 at 11:36
  • 1
    @Universal_learner "So I can quote ChatGPT?" why would you be quoting ChatGPT? It's literally just made up text. It told me once that Germany is a landlocked country. And in the same sentence explained that Germany borders with things like France, the Netherlands, and the North Sea. It's not some trustworthy source. So your question is like "Can I quote Jeff who I only know from the pub". You could but quoting Jeff is hardly authoritative. Especially if Jeff is known for spouting random nonsense at times.
    – VLAZ
    Commented Mar 14, 2023 at 12:47
@VLAZ It just passed a baccalauréat exam. I asked that on Earth Science; whether it can be considered a reputable source I am not sure, I only tested it yesterday. For trivial data that doesn't need intelligence, just access to the internet and the ability to judge the sources, it can be better and quicker than searching on Google, I think.
    – user1242306
    Commented Mar 14, 2023 at 12:51
@VLAZ It can be used a lot of the time (and I think I am going to do so) for a first approach that you then check against other sources. I asked it "To the question 'Why is Mars red?', is 'Mars is red due to the presence of Mg.' a good answer?" and it correctly says it is incorrect, so it might be used as a tool to upvote/downvote and also to check if you have written something incorrect before posting an answer.
    – user1242306
    Commented Mar 14, 2023 at 12:56
  • 1
    @Universal_learner "ability to judge the sources, it can be better and quicker than searching in Google I think" but you cannot judge the sources. Again, it's literally just made up text. Yes, it might be right but also might be wrong. Any and all information you're given you have to verify anyway. Which means that it's not really any quicker. Since you cannot assign value to the information blind. Unlike, say, some textbook or an established person in the field where you can at least have some confidence that information they give out is correct.
    – VLAZ
    Commented Mar 14, 2023 at 13:12
  • 1
    @Universal_learner "so it migth be used as a tool to upvote/downvote" no it cannot - the information is inherently unreliable. If Jeff from the pub shouts out a slurred "You're wrong" I hope you wouldn't immediately decide Jeff is correct. "also to check if you have written something incorrect before posting an answer." Ditto. Jeff from the pub is just there to drink beer. You're trying to use him as a trustworthy authoritative verification. I don't think you quite grasp what ChatGPT does if you want to trust blindly anything it says.
    – VLAZ
    Commented Mar 14, 2023 at 13:12
@VLAZ Maybe not for mass voting, but to detect mistakes and downvote. It correctly detects that the red color of Mars is due to iron oxides and not Mg. This is how chess engines are used: the chess master still analyzes the position more deeply (the engine doesn't explain its analysis, it just suggests moves), but he uses the engine to be sure he is not missing anything. Maybe not this version, but ChatGPT is still training. Stack should adapt to the AI, which can become a friend and not an enemy.
    – user1242306
    Commented Mar 14, 2023 at 13:19
...maybe not now, but in the future (if not, it will be the end of the site). For example, the AI could clean up the site of poor questions based on code mistakes that are not being answered. It could also correct potential mistakes in published content, and that would be a positive for the programmer community (they all use Stack Overflow).
    – user1242306
    Commented Mar 14, 2023 at 13:26
  • 2
@Universal_learner you're missing the fact that the information is not trustworthy. Again, according to ChatGPT, Germany is a landlocked country which borders the North Sea - which is totally and completely incorrect, and exactly the kind of wrong information it spouts. It can also claim something completely wrong is actually correct. Furthermore, it sometimes takes a completely nonsensical question and answers it, when the only response is "what you are asking is impossible" or similar. Again - you are trying to convince me ChatGPT can work for this task by demonstrating you don't know how ChatGPT works.
    – VLAZ
    Commented Mar 14, 2023 at 13:30
@VLAZ ChatGPT is still training. There is an option to upvote or downvote its answers. Give it time.
    – user1242306
    Commented Mar 14, 2023 at 13:32
  • 1
    I'm a bit confused as to why this question was answered rather than closed. The company has made it clear that there is and will be (at this time) no network-wide policy other than requiring attribution. Thus, the policies which are being asked about in the question must be addressed individually on the per-site child metas (which, as you know, is a close reason here on MSE, or potentially a migration reason). Because the question asks about the policies on two different sites, it can't be migrated to one of them, so could be considered too broad/needs focus.
    – Makyen
    Commented Mar 14, 2023 at 13:35
  • 1
Note: this answer might not be wrong if it corrected the question's assertion that the Stack Overflow ban only applies to answers, because then its qualifying statement "as long as there is no complete ban on AI-generated content on the site" would clearly exclude Stack Overflow. As this answer and the question are currently written, the answer implicitly does not exclude Stack Overflow, because the question's assumption that SO's ban isn't complete is not refuted. That's probably an issue of different unspoken assumptions, but it's very important in this case to make those assumptions clear.
    – Makyen
    Commented Mar 14, 2023 at 13:47
  • 2
    @Universal_learner ChatGPT doesn't have direct access to the Internet. Sure, it was trained on data from the Internet, but that data has been "assimilated" into the weights of its neural network. It can't directly retrieve any of its training data, but it can generate text resembling that data. However, it doesn't really "know" what that text means: it operates on syntax, not semantics.
    – PM 2Ring
    Commented Mar 14, 2023 at 16:55
  • 2
As I said here, ChatGPT does not attempt to make truthful or even logical statements. Its job is to create "completions" of the text you feed it. Yes, it can say true things, but it can also say complete nonsense, and it can't tell the difference.
    – PM 2Ring
    Commented Mar 14, 2023 at 16:55