I will play devil's advocate here and recommend a stated policy against posting AI-generated answers on this site.
Our regular users may attempt to enforce a no-sources/no-votes norm, but when a question goes HNQ, we get a large influx of non-regulars who pop in and vote, often with no knowledge of the policies we try to uphold here. We have all seen poor questions reach the HNQ list before later being closed. If non-moderators can point to a specific policy in comments, it might slow down these drive-by votes until moderators can step in. (This assumes, of course, that AI content can be recognized as such; I look forward to seeing the tools mentioned by @MCW.)
The problem with assuming users will close such posts goes back to an old discussion we had here about bad questions and answers. If an answer has no sources but otherwise looks OK, users may not upvote it, but they also may not downvote it or vote to close.
In fact, I would recommend a strict policy: require such content to be deleted and levy the suspensions mentioned by T.E.D. Remove any incentive to 'play games' with the system simply for the purpose of earning rep.
I watched one video about detecting this type of content, which fed the text into another AI to evaluate, here. In the comments below the video, there were already discussions on how to beat the detector. Please set a policy now and get ahead of those individuals.
For those who think this will not be a problem: there is an answer from a few days ago (since deleted by @MCW) which had two upvotes and only one downvote. It looked OK, so most users ignored it. The above-linked site rates it at a 99.97% probability of being fake. (The deleted answer is here, visible only if you have enough rep to view deleted answers.)
Another recent answer (12/17), posted and then self-deleted, here also fails the test. (Only users with sufficient rep to see deleted answers will see this post.)
So this is an ongoing issue. (It is also worth noting that both of the examples I cite came to the site with the association bonus, so this abuse is not limited to unregistered or new users.)
I will add Steve Bird's comment from below, which points out another danger of allowing this type of content to go unchecked:
> If the ChatGPT experiment starts to produce "good enough" answers, it would also be a tool that could be open to abuse by trolls. As we've seen in the past, posting a few 'good' answers to build up rep can be misused to sock puppet and upvote push questions. Being able to quickly generate an acceptable answer with little effort would be a boon to them.