I've now seen more than 2,000 posts on Stack Overflow alone that I suspect are AI-generated. I've flagged over 1,200 of these. The remainder were either already handled, handled under one of the other flag types, or in rare cases declined or not acted on (it definitely happens - the double-verification system helps reduce the chance of action being taken improperly on the basis of a false positive). Yesterday, I added post #2000 to my "ChatGPT Output" "Saves" list.
There are (at least) six categories of AI posts that I've seen:
Users who have used it (mostly) responsibly. I don't have a problem with this. My personal opinion is that responsible use of AI should be allowed on SE. That said, the current policy for Stack Overflow is still that this is not allowed (though now effectively impossible to enforce, of course).
Then we have users who I honestly believe are trying to be helpful in answering a question, but who don't understand the perils of unvalidated AI output, and who post without verifying (and often without even fully understanding) their answers. This is not responsible use.
We have those who are simply attempting to "farm rep" for one reason or another (some fall into the categories below). This is also not remotely acceptable, as the trust and reputation earned on SE should represent the community's assessment of the user's expertise.
We've had obvious spammers who have taken AI-generated answers and added a spam URL to the bottom of the answer or as a link to a bit of punctuation in the answer. These are easily caught early by Smoke Detector.
But then there's the subject of this question - AI-generated answers (almost certainly unvalidated) from recently created accounts with spam profiles. I've found two of these in a single search today, and there are almost certainly more. These are simply AI-generated answers, so we're no longer supposed to use heuristics to detect them (or at least mods aren't allowed to take action based on that information). But unless someone checks the profile, most users aren't going to see a problem. The answer is allowed, and can't be removed until (presumably) someone notices the profile.
And these spammers are taking the "patient" approach. The accounts I'm seeing here are created several weeks in advance, before the answers begin to be posted:
- Example 1 (tourism/hospitality industry) (edit/update: oddly still there after a week, and the same spam-profile account added a second GPT answer to the same question!)
- Example 2 (an SEO company)
Both are likely to be quickly deleted after I point them out, but I haven't flagged them myself, since I've joined the strike and said I won't raise flags for now. Of course, this post will inevitably count as a "form" of a flag, but I feel it's important to point out the problem and ask what the intended solution is.
If I hadn't found these through the heuristic analysis that mods are now disallowed from acting on, it's unlikely they'd have been found at all. And it's unlikely that enough other users would notice them quickly enough to accumulate the number of Spam flags needed for them to be deleted like "normal" spam.
As it is, one has been up since yesterday and the other for two hours already, and each likely wouldn't have been found for quite a while.
And what happens when they start posting the AI answers first, then add the spam to their profiles after some of the answers get upvotes? What happens when these spammers gain enough rep on AI answers to be able to Edit and Comment?
There's another form of this spam that is even more difficult to act on. The "obvious spam profiles" will likely be removed under existing rules (once found), but where does the line fall on "spam" when it comes from an individual? I won't link to the account or the user's answers here, but I did point it out to moderators (pre-strike), who felt there was nothing they could do about it under the new policy.
This user posted 5 answers in a short span of time that I feel were almost certainly generated by ChatGPT (4 are now deleted). Their user profile pushes them for hire as a contractor (oddly, the profile itself is not AI-generated!). Further, the link in their profile goes to an Amazon page where they are selling three books they claim to have authored. Since even the abstracts for these books look AI-generated, I think we can assume the books themselves were written with heavy assistance from AI.
At what point do AI-generated answers from individuals who are selling themselves, their "expertise", or even products become "spam"?
And if moderators are not allowed to take action based on heuristics, and this type of user "profile" is allowed, how does SE intend to handle it? The problem is clearly getting worse already under the new policy, and will only continue to do so.