Let me first say I agree with all the arguments presented in Thomas Owen's post. Questions and answers on this site should be based on people's experience. However, I think we need to be a little more specific about what we mean by "AI generated content", by "based on", and about how much space this leaves for an acceptable and responsible use of AI. The current version of the AI policy on Stack Overflow is not specific enough for my taste, so as long as that policy's wording does not get an update, I think we should not use it as a template.
To my understanding, "based on an author's experience" does not necessarily mean "literally written word by word by the post author". For example, machine translation services like Google Translate or DeepL use artificial neural networks today, so translations they produce can be seen as "AI generated content". Still, I think that is a perfectly acceptable use of AI, as long as authors give their own ideas as input and proofread the result to validate that the translation is accurate. Of course, I expect a user of such a service to understand English well enough to reliably do the latter. This is in line with this Q&A on Meta Stack Overflow.
Another case would be the use of an LLM (and not just an arbitrary AI) to improve a post linguistically, in terms of phrasing, grammar, and spelling. A few days ago, I asked a question on Meta.SE about this. My conclusion from that Q&A is that it is not a good idea to run a post through an LLM and copy/paste the result right here; that would definitely count as "prohibited AI generated content", and it should be prohibited for good reasons. If, however, one interprets the AI generated (or "improved") text as a list of possible suggestions for changes, goes through that list one by one, and picks only the suggestions which genuinely look like improvements whilst ignoring the others, that would IMHO be OK.
Still, I am a little unsure at what point such a post should be marked with a sentence like "this post was written by the OP and linguistically improved using ChatGPT". This is surely a grey area and may have to be decided on a case-by-case basis. On the other hand, if one feels compelled to add this sentence to a post, they have probably already gone too far in using the LLM.
About the banner: in general, I think that's a good idea (as long as we don't get too many such banners at the same time). Ideally, it should contain a link to a page describing our expectations and what we do (or do not) mean precisely by "prohibited generative AI".
However, after thinking about the banner for a while, I am not convinced it will be more important than the other banner we often see, the one reminding us of the CoC when answering a new user's question. If I understood this right, only one of the two banners will be shown, so we have to choose which one we prefer over the other.
Looking at the current AI policy page of Stack Overflow (which is the link behind the banner there), what I don't like about it is that it talks about generative AI in general, but then only mentions LLMs as an example and is completely silent about other kinds of generative AI in the broad sense, like translation services. If we get a banner, I hope our policy page will be less vague.