
I've now seen more than 2,000 posts on Stack Overflow alone that I suspect are AI-generated. I've flagged over 1,200 of these. The remainder were already handled, were handled under one of the other flags, or in rare cases were declined or not acted on (it definitely happens - the double-verification system helps reduce the chance of action being taken improperly on the basis of false positives). Yesterday, I added post #2000 to my "ChatGPT Output" "Saves" list.

There are (at least) six categories of AI-generated posts that I've seen:

  • Users who have used AI (mostly) responsibly. I don't have a problem with this. My personal opinion is that responsible use of AI should be allowed on SE. That said, the current policy for Stack Overflow is still that this is not allowed (though that is now impossible to enforce, of course).

  • Then we have users who I honestly believe are trying to be helpful in answering a question, but who don't understand the perils of unvalidated AI output and who post without verifying (and often without even fully understanding) their answer. This is not responsible use.

  • We have those who are simply attempting to "farm rep" for one reason or another (some fall into the categories below). This is also not remotely acceptable, as the trust and reputation earned on SE should represent the community's assessment of the user's expertise.

  • We've had obvious spammers who have taken AI-generated answers and added a spam URL at the bottom of the answer, or attached it as a link on a bit of punctuation within the answer. These are easily caught early by Smoke Detector.

  • But then there's the subject of this question - we've seen AI-generated answers (almost certainly unvalidated) from recently created accounts with spam profiles. I've already found two of these today in a single search, and there are almost certainly more. On their face, these are simply AI-generated answers, and now we're not supposed to use heuristics to detect them (or, at least, mods aren't allowed to take action based on that information). But unless someone follows the link to the profile, most users aren't going to see a problem. The answer is allowed, and can't be removed until (presumably) someone notices the profile.

    And these spammers are taking the "patient" approach. The accounts I'm seeing here are created several weeks in advance, before the answers begin to be posted (a sketch of what checking for that timing gap might look like appears at the end of this post):

    • Example 1 (tourism/hospitality industry) (edit/update: oddly still there after a week, and the same spam-profile account added a second GPT answer to the same question!)
    • Example 2 (an SEO company)

    Both are likely to be quickly deleted after I point them out, but I haven't flagged them myself, since I've joined the strike and said I won't raise flags for now. Of course, this post will inevitably count as a "form" of a flag, but I feel it's important to point out the problem and ask what the intended solution is.

    If I hadn't found these through the heuristic analysis that mods are now disallowed from acting on, it's unlikely they'd have been found at all. And it's unlikely that enough other users would notice them quickly enough to accumulate the number of spam flags needed for them to be deleted like "normal" spam.

    As it is, one has been in place since yesterday and the other for two hours already, and the latter likely wouldn't have been found for a while.

    And what happens when they start posting the AI answers first, then add the spam to their profiles after some of the answers get upvotes? What happens when these spammers gain enough rep on AI answers to be able to Edit and Comment?

  • There's yet another form of this spam that is even more difficult to act on. The "obvious spam profiles" will likely be removed under existing rules (once found), but where does the line fall on "spam" when it comes from an individual? I won't link to the account or the user's answers here, but I did point it out to moderators (pre-strike), who felt that there was nothing they could do about it under the new policy.

    This user posted 5 answers in a short span of time that I feel were almost certainly generated by ChatGPT (4 are now deleted). Their user profile pushes them for hire as a contractor (oddly, the profile itself is not AI-generated!). Further, the link in their profile leads to an Amazon page where they are selling three books they claim to have authored. Since even the abstracts for these books look to be AI-generated, I think we can assume that the books themselves were written with heavy assistance from AI.

    At what point do AI-generated answers from individuals who are selling themselves, their "expertise", or even products become "spam"?

And if moderators are not allowed to take action based on heuristics, and this type of profile is allowed, how does SE intend to handle it? It's clearly already getting worse under the new policy, and it will only continue to do so.
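
For concreteness, the timing half of that "patient" pattern is trivial to check programmatically. Here's a minimal sketch using the public Stack Exchange API (api.stackexchange.com, v2.3) - to be clear, this is just an illustration of the heuristic, not a tool anyone is actually running, and the three-week threshold is an arbitrary assumption:

```python
import requests

API = "https://api.stackexchange.com/2.3"
SITE = "stackoverflow"
DORMANT_SECONDS = 21 * 24 * 3600  # ~3 weeks; this threshold is purely illustrative


def dormant_gap(user_id: int) -> int | None:
    """Seconds between account creation and the user's first answer, if any."""
    # Both endpoints return Unix-epoch timestamps in their default filters.
    user = requests.get(
        f"{API}/users/{user_id}", params={"site": SITE}
    ).json()["items"][0]
    answers = requests.get(
        f"{API}/users/{user_id}/answers",
        params={"site": SITE, "sort": "creation", "order": "asc", "pagesize": 1},
    ).json()["items"]
    if not answers:
        return None  # account has no answers yet
    return answers[0]["creation_date"] - user["creation_date"]


def looks_patient(user_id: int) -> bool:
    """True if the account sat dormant for weeks before its first answer."""
    gap = dormant_gap(user_id)
    return gap is not None and gap >= DORMANT_SECONDS
```

Checking the other half of the pattern (a promotional link in the profile) would need the user's about_me field, which the API only returns with a custom filter. And of course a timing gap alone proves nothing - it's exactly the kind of signal that needs the human judgment mods are now told not to apply.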

  • Related? meta.stackoverflow.com/q/421453 – Commented Jun 8, 2023 at 14:32
  • @snake Maybe remotely. This one is very specific: it's only about AI-oriented spam profiles. – Commented Jun 8, 2023 at 14:39
  • Your second example is actually an example of the fourth category, where it's ChatGPT output with a spam link on the end. It's just that the post is formatted so badly that the link is hidden at the bottom of a code block and doesn't render as a link. I've flagged it as spam. – F1Krazy, Commented Jun 8, 2023 at 15:05
  • @F1Krazy Good catch - so Smokey would likely have caught that one if it were running, and likely even before it got an upvote. – Commented Jun 8, 2023 at 15:09
  • Yeah - Smokey finds spam links, even when they're "hidden" in code blocks (illustrated just after these comments). – Commented Jun 8, 2023 at 15:24
  • I suspect the answer is: they don't plan on stopping it. SE doesn't seem particularly concerned about spam on profiles, so I don't see how it matters if those profiles are linked to from answers which don't themselves fall afoul of the rules (which GPT-generated answers now de facto don't). – kaya3, Commented Jun 8, 2023 at 15:32
  • @kaya3 What about spam comments? Is there any way for those to get dealt with short of people flagging? It sounds like people can now write completely autonomous spam bots that gain enough rep to comment. – Esther, Commented Jun 8, 2023 at 20:26
  • @Esther If the comments themselves are spam or useless, then people are supposed to flag them (though currently, those of us who are on strike are refraining from doing so). If the comments themselves are appropriate, then SE, Inc. probably doesn't mind if the username link leads to a spam profile. – kaya3, Commented Jun 8, 2023 at 21:11
  • @kaya3 Is there a system to auto-delete comments with enough spam flags? I doubt a bot would be able to detect spam comments. My point is that with GPT-fueled bots, we can potentially end up with automated users that have enough rep to post comments, and spammers will potentially realize this. If it ever gets as common as spam questions/answers, I don't know if we'd have the automated systems to deal with it. – Esther, Commented Jun 9, 2023 at 15:05
  • @Esther Yes, all of those are real problems with the position I would expect Stack Exchange, Inc. to take on this issue. I still expect it to be their position. – kaya3, Commented Jun 9, 2023 at 15:31
  • Related: Massive Spam Attacks since couple of days (posted 2023-06-16), on Server Fault. – Commented Jun 16, 2023 at 17:58
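
As the Smokey exchange in the comments above notes, a link "hidden" in a code block never renders as a clickable link, but it is still present verbatim in the post's markdown source, which is why scanning the source finds it regardless of formatting. A minimal sketch of that idea (illustrative only - this is not SmokeDetector's actual detection code, and the regex is a deliberately simplified stand-in):

```python
import re

# Deliberately simplified stand-in for a URL scan over raw markdown
# (not SmokeDetector's real rules): a URL inside a code block (here, a
# four-space-indented one) doesn't render as a link, but it's still
# plain text in the post source, so a plain-text scan finds it anyway.
URL_RE = re.compile(r"https?://[^\s\"'<>`)]+")

def find_urls(markdown_source: str) -> list[str]:
    return URL_RE.findall(markdown_source)

post = "Try this:\n\n    result = fix()\n    # docs: https://spam.example.com\n"
print(find_urls(post))  # ['https://spam.example.com'], despite the code block
```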

1 Answer


As for spam in profiles, see I have found 16,809 similar, inappropriate(?) user profiles. There are more. Now what? (an MSO post of mine).

What's okay may depend in part on which site you're on and on its scale.

For example, quoting Makyen from the linked MSO post, referring to several huge pools of yucky, spammy profiles that I found:

Please don't flag for these, unless they are exceptionally egregious (e.g. child porn). [For those that are exceptionally egregious, please do flag.] If moderators want to spend time on these, then we can find hundreds of thousands of them on our own, and it's a lot faster for us to handle them that way than the one or small number which you might fit in a flag.

Lots of us are quite frustrated with these, but to actually deal with the problem requires changes to the system which would need to be made by SE. So far, SE has not spent time on making such changes, although there are hints that SE might take a small step in the direction which would allow moderators to be effective in at least getting rid of them on the individual sites where they are a moderator, if the moderator wants to spend quite a bit of time handling them.

At least for the ones identified here for which I've used Google Translate on the profile contents, they sound like typical spam for escort services, which we see all the time, both in profiles and in posts. There's a notable difference between those and the ones which appear to be trafficking. If it appeared to be trafficking, then, yes, that would be something for which we'd like to see a flag, even for it being in a profile. If it's in posts and it's spam of any sort, then a spam flag on the post is appropriate (and the more egregious ones can get a custom flag).

On Stack Apps, by contrast, rene and the other mods of that site delete spammy profiles. I'm not sure of the exact details of what qualifies as a "spammy profile" there.

I get some mental dissonance here, since parts of the current state of things seem to actually go against the ToS - something I've touched on in my MSO post.

On the point of mods needing better tools, see Spam Profiles are getting my goat. Could we have better tools for mods to deal with profile spam?.

To balance this out: if you actually contribute useful content and aren't overly self-promoting in your posts, promoting yourself and your own content isn't a problem on your own profile page. You can do that as much as you like there, as long as you don't do anything else bad there that's inextricably tied to the self-promotion - promoting a child porn site, for example. See also the Terms of Service for other "bad things".

Given that, if you think a spam profile should be flagged, flag one of their posts. If they have no posts, flag one of your own posts as a workaround, or just flag some random post. Link to the profile, explain what you see, and give a summary of what's bad there.


As for spam in posts: the mod policy is a policy for mods, and even if you find such spam through your AI-detection heuristics, the way you found it doesn't change the fact that it is spam (if it is, in fact, spam). Spam is spam. Flag it as such (yada yada, there's a strike going on right now - yes, I know).

  • Every single mod on SE would be happy to see spam profiles like that gone. But it's not fair to make volunteers spend hours upon hours doing it manually (as some especially bored mods have done). I think they should be deleted via database query, but I doubt that will happen. – Laurel, Commented Jun 28, 2023 at 0:20
  • The rest of this answer seems to miss the point. For some of the users the OP mentioned, the spam is entirely in the user profile, so there's no real option to flag their posts, even though the posts should be deleted for being AI-generated. These are not spammers who have enough time or experience in [site topic] to actually write answers themselves. – Laurel, Commented Jun 28, 2023 at 0:22
  • @Laurel Not having anything to flag is its own problem. You could flag one of your own posts and link to the spam profile with an explanation of what you observe and why you think it's bad, or flag any random post in the same way. With big pools of accounts, I suppose you could try bringing it to the attention of staff on meta. – starball, Commented Jun 28, 2023 at 0:25
