We, as a community, need to decide on an AI policy (perhaps similar to StackOverflow's).
This is just one opinion.
What AI can and can't do
I think the last year or so of generative AI has shown that it is a great tool for producing facsimiles.
It can produce something that looks like an invoice, or looks like a book report, or looks like a resignation letter written by Dracula.
For some applications, facsimiles are all you need. If you are learning English as a second language, making it look like formal English may well be sufficient.
For other applications, it isn't sufficient at all, and "looking like" isn't enough.
Generative AIs can certainly produce something that looks like a Skeptics.SE answer, but it seems that they mostly fail. Citing sources (including pulling out quotes) is one area where they are poor. Genuinely understanding the question and directing the answer accordingly seems to be another.
There is at least one scenario where I think using generative AI is a win for everyone: Taking a draft answer with a solid structure (appropriate references and solid conclusions) but poor English, and turning it into an easy-to-read final draft.
For this reason, I am loath to have a blanket AI ban.
Do we tackle the cause or the effect?
I learnt a lot about moderating from @Sklivvz. Early on, we had a discussion about a troublesome user.
I was concerned that, based on their writing, they seemed to be a paranoid schizophrenic. I also felt that I was in no way qualified to make that call, and if I were it would be unethical to make that call based on a few bits of writing. Further, it would be a complete violation of the Codes of Conduct to make such an accusation in a comment. I didn't know how to deal with them.
Sklivvz redirected my concern: I can't tell what is going on in their head. I can tell that there are problems with the answer. I should focus on fixing that, not fixing the user.
This has influenced my attitude to dealing with allegedly AI generated answers. We can't figure out what was going through the head/CPU of the answerer. We can tell that the answer is bad, and address those reasons.
Other reasons that AI might be bad
The StackOverflow policy suggests AI-generated answers might not be what people are expecting (or they came here because they don't want an AI-generated answer). I don't find that very compelling. People should expect quality answers. The mechanisms used to get there aren't relevant.
The StackOverflow policy suggests AI-generated answers may have excessive noise and may include false or misleading information. This is a problem, but our existing voting/closing systems are supposed to handle that anyway. There is a risk that those systems might be overwhelmed, but I am not seeing that yet. My position might well change if that was happening.
The StackOverflow policy suggests AIs are bad at citing sources. This is a big concern. History shows it is a concern that also applies to humans.
I have long been suspicious of the "all the references at the end" system people used in high school History essays. The references should be explicitly linked to each claim, rather than simply pointing afterwards at a bunch of books and saying "You'll find support for what I said somewhere in there." I am currently on even higher alert, because this seems to be the preference of AI-generated answers, which means the references may not even exist and, if they do, may not contain relevant supporting material. We should be more insistent that quotes supporting the argument be extracted from the sources.
I suspect some people are going to be upset at people who use AI generation because it is "cheating". I don't hold that opinion. There are lots of techniques used that beginners don't seem to know about: Google Scholar, Sci-Hub, Google itself, Wikipedia, Cochrane Collaboration, etc. This is one more.
Conclusion
I do not want to see a blanket ban on generative AI. I do not want to be in an environment where we have to conduct investigations into whether a particular answer might have been generated.
We have long had lots of poor answers written by people who can't recognise what makes a poor answer. This will continue with people using AI. We already have mechanisms to deal with them, and unless those mechanisms become overwhelmed, I would like to continue relying on them.
If we reach agreement here, and someone wants to propose an FAQ answer warning that answers generated purely by AI tend to be awful, will generally get downvoted, and, if repeated, will earn automated answer bans for their users, I will be in favour.
I don't think it should be a site banner unless it turns into a common problem. We should be welcoming of new users, and this seems a bit off-putting to innocent newbies.
Disclaimer
I have never used generative AI on any answer. I have no plans to. I frequently use spell-check and grammar checkers (and still errors creep through).