There should be a rule (if one does not already exist) that only humans are entitled to user accounts. We should not allow any bots on this site to post questions or answers, or to perform other user interactions such as up-votes, down-votes, or accepting answers.
There might be legitimate reasons for a human member to use OpenAI or similar tools to generate a question, answer or comment, for example to overcome a language barrier. We should balance this with the interest in protecting the integrity of this site.
AI-generated questions do not seem to be a huge problem, as long as we can be certain that it is always a human who posts them. The questions obviously reflect an interest or problem of the person asking.
AI-generated answers and comments pose a big problem, because they dilute the legitimate content created by human experts with texts of high word-sequence probability but limited meaning and significance. An increasing number of AI-generated answers will make it difficult for visitors and members to find actual answers to their problems. Therefore we need a rule against AI-generated answers.
But can we assume that members are good citizens who follow the rules and don't use AI here? I am not so optimistic. It is similar to users not reading, or ignoring, 'How do I ask a good question'. People don't research existing questions and answers; it is a tl;dr mentality, they want quick answers. The anonymity of this site makes it easy to break rules (please note, I do not want to question or change the anonymity).
AI-generated translations of an original human text might show similar characteristics to a fully AI-generated text. So once tools to identify AI-generated text become available, we might find it difficult to distinguish fully generated content from AI-assisted translations (such as those produced with DeepL).
So how could we cope with this? One option is a check box that must be ticked each time a member posts an answer, confirming that they wrote it without the help of an AI tool. And yes, the Terms of Use need to allow moderators to remove AI-generated content, and if a user repeatedly posts it there need to be options for yellow and red cards. Thinking further into the future: we might want to consider a new 'privilege' at a certain reputation level that allows a member to flag a suspicion of AI content, which could then trigger an automatic submission of that answer to an AI-detection tool.
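To make the flag-to-detection idea concrete, here is a minimal sketch of how such a flow might work. Everything in it is hypothetical: the reputation threshold, the flag count, and the names (`FLAG_PRIVILEGE_REPUTATION`, `handle_ai_suspicion_flag`) are invented for illustration and do not correspond to any existing site feature.

```python
# Hypothetical sketch of the proposed flag-handling flow.
# All thresholds and names are assumptions, not an existing implementation.

FLAG_PRIVILEGE_REPUTATION = 2000   # assumed reputation needed for the new privilege
FLAGS_BEFORE_DETECTION = 3         # assumed number of flags that triggers a check

def handle_ai_suspicion_flag(post, flagger):
    """Record an AI-suspicion flag; past a threshold, queue the post
    for an external AI-detection tool."""
    if flagger["reputation"] < FLAG_PRIVILEGE_REPUTATION:
        return "flag rejected: privilege not yet earned"
    post["ai_flags"] += 1
    if post["ai_flags"] >= FLAGS_BEFORE_DETECTION:
        return "queued for AI-detection tool"
    return "flag recorded"

# Example: a member with enough reputation flags a fresh answer.
post = {"ai_flags": 0}
print(handle_ai_suspicion_flag(post, {"reputation": 2500}))  # flag recorded
```

The point of the threshold is that a single flag should not be enough to run a (possibly costly or unreliable) detection tool; only repeated, independent suspicion would.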