Today, AI output, such as ChatGPT's, is usually recognizable: the writing style differs from how most people write. Near-future refinements are likely to make that style less recognizable, so a policy targeting AI-generated content might soon be unenforceable. That suggests it may not be the best way to frame the issue.
The real problem with AI-generated content is content quality, and AI doesn't have a good way to improve on that. AI has the same problem as a visitor who is unfamiliar and inexperienced with the subject of the question and tries to answer by Googling it.
Much of what can be found in a search was created by people with no real expertise, and it is often wrong. The source information isn't curated or rated (voted or commented on) by subject matter experts. So AI answers tend to be low quality: if not outright wrong, they are often inaccurate, miss important considerations, or are too generic.
Rather than basing a policy on what the source was, it may be more useful to focus on answer quality. Users familiar with the subject can judge how well a post answers the question. Voting may be the best solution (the uncurated source information would get curated here):
- Is the information correct and accurate?
- Does it identify and focus on the appropriate and important considerations?
- Does it recognize and address possible variations due to ambiguity in the question (either by the author requesting clarification from the questioner or by covering the alternative cases within the answer)?
- Does it reflect actual user experience?
- Is the information actionable and targeted to the conditions in the question, or is it general truisms and hedging?
- Does it get directly and definitively to the heart of the answer, or does it dump a bunch of tangential fluff?
Experienced users who write well will produce high-quality answers. AI won't; its answers will tend to be very polished garbage.
That said, AI can still be useful. It's a tool that can do a better job of searching and summarizing than most people have time for. The AI output can be a good starting point for creating an answer.
People have also raised the issue that posting ChatGPT output without attribution is plagiarism. One could argue instead that AI is an automated, comprehensive collection of writing tools, researching and generating content to your specifications. On that view, the output would be considered your work (only humans have copyright protection), much like commissioning someone to ghost-write your content.
I don't think we need to engage in that debate. The more relevant question is whether an answer represents the user's personal experience and knowledge or simply summarizes what other people have reported; that distinction is critical to interpreting the answer. If it is just uncritical regurgitation of Internet debris, it doesn't really matter whether ChatGPT created it or an unknowledgeable user did their own search.
Rather than requiring attribution for AI-generated answers, it might be more useful to suggest (or require) that all answers state whether they are based on the writer's actual experience, when that is not obvious from the content of the answer.