
The VP of Community for SE gave the following statement to the press:

Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives. Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; people with original questions and answers were summarily suspended from participating on the platform. These unnecessary suspensions and their outsize impact on new users run counter to our mission and have a negative impact on our community.

This statement is not correct. Specifically, it conflates the use of ChatGPT detection tools with suspensions, implying that moderators used those tools to decide about suspensions. That is not the case: while these tools may have been part of such a decision, the decision itself was based on manual analysis.

The statement also claims that the suspensions were unnecessary, even though SE explicitly told us moderators that they don't know that. What SE said before was that they cannot be sure those suspensions were correct, which is a very different situation from the one the statement describes.

This statement misrepresents the facts and thereby disparages the moderators who have acted on AI-authored content. It would be nice if SE were more accurate with the facts when speaking to the press, and avoided disparaging the moderators on these sites.

Correction: The article previously attributed the statement to the CEO, now the attribution is changed to the VP of Community.

  • 17
    Gizmodo says it's Philippe: "A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content," Philippe Beaudette, VP of Community at Stack Overflow, said in a statement emailed to Gizmodo. "We stand by our decision to require that moderators stop using the tools previously used. We will continue to look for alternatives and are committed to rapid testing of those tools."
    – Mithical
    Commented Jun 5, 2023 at 14:37
  • 5
    @Mithical so either one publication got the author wrong, or SE sent this statement with different authors or with ambiguous authorship. Commented Jun 5, 2023 at 14:39
  • 9
    Philippe has given statements on behalf of the CEO in the past. Annoying that they were vague, but that is a possibility
    – cocomac
    Commented Jun 5, 2023 at 14:41
  • 3
    Most likely the CEO wrote a general statement and Philippe is forwarding it to every news site with minor adjustments
    – mousetail
    Commented Jun 5, 2023 at 14:47
  • 53
    This is pointless. Contact the media, and make them aware SE is lying to them. SE is not going to do it themselves. Commented Jun 5, 2023 at 14:54
  • 86
    How does this not violate point ii of Stack Overflow's commitments in the moderator agreement: Get your explicit written permission before commenting to any media (including media outlets controlled by Stack Exchange Inc.) or independent reporters about you or your moderator actions as per our Press Policy. Commented Jun 5, 2023 at 15:51
  • 51
    I guess they learned nothing from the incident with Monica. Figures. Commented Jun 5, 2023 at 15:53
  • 23
    @user1937198 loath as I am to defend the company, in the interest of fairness, that was about mentioning any individual and no individual was mentioned here. They haven't commented about any moderator's actions, they've commented about moderators as a group and that isn't against the agreement.
    – terdon
    Commented Jun 5, 2023 at 16:55
  • 25
    That's nonsense. If you say "The chess club is responsible for vandalizing the bathroom!" and everyone knows that Alice, Bob, and Eve are the members of the chess club, then you are commenting on Alice, Bob, and Eve's (alleged) actions.
    – jscs
    Commented Jun 5, 2023 at 19:30
  • 11
    The agreement does not specify that they are only forbidden from commenting on actions taken by individuals. If that was their intention, they could have written that. In contract interpretation, "where a promise, agreement or term is ambiguous, the preferred meaning should be the one that works against the interests of the party who provided the wording."
    – Ryan M
    Commented Jun 6, 2023 at 0:47
  • 7
    @RyanM: As my answer suggests, I'm not interested in playing semantics with the CMs. Regardless of whether they violated the letter of the policy, this is exactly the sort of hasty, ill-considered statement that the policy was originally intended to prevent. I think that's the more important point here, and in comparison, I don't really care whether they're technically within some interpretation of the policy or not.
    – Kevin
    Commented Jun 6, 2023 at 4:28
  • 13
    @Kevin To clarify, I am arguing that they did violate the letter of the agreement, in that any ambiguity that may exist in the agreement as to whether it prohibits commenting without consent only on individual actions or on any moderator action should be interpreted in the favor of the non-drafting party (here, in favor of the mods). Thus, the text does ban this action. That said, I agree that even if it did not, this is in fact the sort of thing that the policy was designed to prevent, and that with a little more care, a statement that did not have these issues could have been written.
    – Ryan M
    Commented Jun 6, 2023 at 4:51
  • 9
    Suggest you change "please stop" to "cease and desist."
    – Chris
    Commented Jun 6, 2023 at 14:37
  • 4
    @EJoshuaS-StandwithUkraine On the contrary, they learned plenty. They learned that they can get away with it, so they're doing it again. Commented Jun 8, 2023 at 20:39

5 Answers

135

Moderators worked with Stack Exchange to develop the GPT policy

I fully expect that page to disappear soon so I'll paste it below.

A few important things to take away from this policy:

  • "While this is the position of Stack Overflow staff, it’s meant to support the prior work done by moderators (namely, the temporary policy issued to ban contributions by ChatGPT)."
  • "Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment."
  • "In its current state, GPT risks breaking readers’ trust that our site provides answers written by subject-matter experts."
  • "Moderators are empowered (at their discretion) to issue immediate suspensions of up to 30 days to users who are copying and pasting GPT content onto the site, with or without prior notice or warning."

Moderators didn't just dream up a policy here. We worked with the community team to get it added to the help center. We worked with the company to codify suspensions for utilizing GPT. We worked with them to validate the few suspensions that were appealed and (importantly) brought back to us for discussion. It is my understanding that no GPT suspensions have been overturned.

In short, the moderators had the rug pulled out from under them with this recent change. No prior communication indicated we needed to adjust how we were moderating. No prior concerns were raised about any way we were detecting and removing GPT content.

This Help Center article provides insight and rationale on our policy regarding the usage of GPT and ChatGPT on Stack Overflow. While this is the position of Stack Overflow staff, it’s meant to support the prior work done by moderators (namely, the temporary policy issued to ban contributions by ChatGPT).

Stack Overflow is a community built upon trust. The community trusts that users are submitting answers that reflect what they actually know to be accurate and that they and their peers have the knowledge and skill set to verify and validate those answers. The system relies on users to verify and validate contributions by other users with the tools we offer, including responsible use of upvotes and downvotes. Currently, contributions generated by GPT most often do not meet these standards and therefore are not contributing to a trustworthy environment. This trust is broken when users copy and paste information into answers without validating that the answer provided by GPT is correct, ensuring that the sources used in the answer are properly cited (a service GPT does not provide), and verifying that the answer provided by GPT clearly and concisely answers the question asked.

The objective nature of the content on Stack Overflow means that if any part of an answer is wrong, then the answer is objectively wrong. In order for Stack Overflow to maintain a strong standard as a reliable source for correct and verified information, such answers must be edited or replaced. However, because GPT is good enough to convince users of the site that the answer holds merit, signals the community typically use to determine the legitimacy of their peers’ contributions frequently fail to detect severe issues with GPT-generated answers. As a result, information that is objectively wrong makes its way onto the site. In its current state, GPT risks breaking readers’ trust that our site provides answers written by subject-matter experts.

Moderators are empowered (at their discretion) to issue immediate suspensions of up to 30 days to users who are copying and pasting GPT content onto the site, with or without prior notice or warning.

89
+50

Also from that article:

In a statement sent to Dev Class, Stack Overflow’s Beaudette told us:

“A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content.

This is simply false. It has been made abundantly clear to staff that the issue is not a desire to rely on detection tools, unless they are including moderators' judgement and expertise in "detection tools". Some public descriptions, published since the article, of how this is false can be found here and here. They explain the issues very capably, and so I will not retread them here. This is not a new explanation; it has been moderators' position from the start that we know the detectors are inaccurate and require other evidence to support any moderation action. What we want is to be able to apply our judgement in the manner we do when handling any other moderation issue on the site, as well as to be able to discuss potential problems with our handling of issues instead of having blanket policies rolled out with no warning or previous discussion.


Also, it's now just shy of 17% of moderators across the network, but according to Zoe the 11% number was accurate at the time.
(yes, I know that's still potentially misleading as to the percentage of moderator activity on strike; no, I don't want to discuss that in the comments here.)

84

Frankly, I find this quite frustrating. Stack Exchange already promised us a blanket no comment policy in 2019, then revoked that and replaced it with a more elaborate "no comment on individuals" policy in 2020 (see the accepted answer to the linked question). While the new policy was well-received at the time, I think this incident clearly demonstrates that the company does not understand why the original "we won't shoot off the other foot" policy (as Shog9 described it) was necessary in the first place, and is instead content to play semantic games with the more elaborate policy of 2020.

Talking about individuals is just one kind of PR malpractice. It is a particularly egregious one, so it is important to prohibit, but it is just as important to ensure that your statements are true, or at least true to the best of your knowledge. Here, we have a straightforward case of SE, Inc. saying one thing to the moderators in private (which I have not read), another thing to the rest of us on Meta, and yet a third thing to the press. Even though I can't see the private message, there are blatant and obvious differences between the second and the third statement. For example, the on-meta statement says that "[t]his standard would exclude most suspensions issued to date," which is nowhere to be found in the statement to the press. Obviously, I don't expect them to quote the entire policy at the press (although linking them to it would probably be a Good Idea), but failing to mention that you're undermining pretty much all prior enforcement of the existing policy is a huge omission.

In short: They can't even keep their public statements consistent with each other, and this is why commenting to the press was a bad idea.

63

NOTE: This is addressed to SE

Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives. Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; people with original questions and answers were summarily suspended from participating on the platform. These unnecessary suspensions and their outsize impact on new users run counter to our mission and have a negative impact on our community.

Please do not violate your own misinformation policy. Let's read through it. You manage to violate nearly every section. Nice job.

Is likely to significantly contribute to the risk of physical or psychological harm to a person or a group of people

Many mods have their real name as their username, and being part of a group that is repeatedly disparaged in the press by a billion-dollar company makes life more than a bit harder and causes harm.

Promotes widely disproven claims...

Your claim that moderators suspended users unfairly because of GPT detection tools is false. You are unable to give even a single example. If, out of something like 30 million users, you can't find a single one who was unfairly suspended because of a GPT detection tool, then that really shows that even you know your claim is false.

Promotes the specific views of a party, government, or ideology by using false claims regarding others

You are currently promoting your own views by using false claims regarding moderators. Literally a fill in the blank.

The end

This means that with that paragraph alone, you have managed to violate 3 out of 7 of the clauses in your own policy. I can't find a single example that does worse. Stack Exchange, you will have no credibility if you don't even try to follow your own policy. So, yes, please stop disparaging the moderators in the press. Thank you.

  • 3
    While true, it's not a democracy. The policy does not apply to, and is not binding on, the company itself; we have no case here. Commented Jun 6, 2023 at 9:42
  • 29
    What I am saying is "please follow your own policy", not "you are legally obliged to" @ShadowWizardStrikesBack
    – Starship
    Commented Jun 6, 2023 at 9:43
  • 6
    It is also worth noticing that the previous attempt at disparaging someone publicly didn't exactly go well. Commented Jun 6, 2023 at 15:31
  • 10
    @ShadowWizardStrikesBack I see no reason why it would not be binding. They made a firm commitment about media statements, and received significant free moderation services in return. Sounds a lot like a contract to me. At the very least, it's a solid rational basis for demonstrating one of the key requirements for a defamation case. Either way, it's utterly ridiculous to talk about liability and the "binding" nature of policies here. Regardless of legal obligation, they do have a clear moral/social obligation, and criticism of their failure to meet that is fully valid.
    – BryKKan
    Commented Jun 7, 2023 at 10:16
  • And the person who posted this answer was then suspended. Great job, SE! Commented Nov 8, 2023 at 18:47
43

They don't care about power users or mods or even take us seriously.

This is illustrated by the fact that they don't reply properly to any concerns or queries raised whenever a situation like this occurs.

They care about bringing in new users or finding new ways to monetize what they currently have, no matter the cost.

It's completely fine to malign power users and mods.

It's completely fine to outright lie to the press.

But don't you dare be "unwelcoming" to new users.

It's the community that makes these sites thrive, not the empty suits. But it seems that with time - and money, no doubt - they've forgotten this.

This isn't the first time they've thrown us or our opinions under the bus, and it won't be the last.

I'll be surprised if they even reply to this post.

Walk away. It's not worth it.

Let's see SO invest in proper moderator teams instead of relying on the good graces of volunteers.


Going forward, we will be working with the community to overhaul how we gather input and feedback from our moderators and members of the community to make sure that your voices are heard and involved in the process, not just informed after decisions have been made.

An apology to our community, and next steps

We want the relationship between the company, the community and its moderators to be based on open, transparent communication that will be made in good faith. I believe the deterioration of communication and trust has been a problem for quite some time. I believe that re-establishing transparency and open, two-way communication will be a key ingredient in rebuilding the relationship between the community, moderators, employees and the company.

To all of the moderators who have resigned or suspended your activities over the past few months: your presence and impact is missed. We value all of your work to keep your sites clean and communities healthy. We understand the many reasons why you felt that it was necessary to step down and that it was a painful decision. We are working on many of the issues that influenced your decisions to leave, and we aim to back these intentions up with actions, accountability, and consistent open communication. If you feel that your issues continue to go unaddressed, I invite you to post about them on Meta in a respectful way. And if you choose to apply for moderator reinstatement, we look forward to hearing about this as well and to seeing you back on your sites.

The company’s commitment to rebuilding the relationship with you, our community

https://meta.stackoverflow.com/questions/342903/well-always-endeavor-to-do-whats-right-well-try-to-do-it-better-next-time

We care about the concerns of the community [...]

https://meta.stackexchange.com/a/349473/218388

I can't be bothered finding more examples. Feel free to add them in. Maybe this could be added to The Many Memes of Meta.


Who knows, if I'm still around I'll catch you all at the next inevitable fiasco and we'll rehash it all, once again.

  • 6
    "Let's see SO invest in proper moderator teams instead of relying on the good graces of volunteers." I would argue that us volunteers are quite proper moderator teams! Just stop abusing and denigrating us, and we're happy to do it. Commented Jun 7, 2023 at 9:53
  • 1
    @CodyGray-onstrike I meant in a world where SO had to actually pay people to moderate.
    – Script47
    Commented Jun 7, 2023 at 10:07
  • 4
    Yes, just use the word "paid" then; don't overcomplicate it. But I still don't understand why you think paid moderators would be better. I don't. That's why I do it as a volunteer. Commented Jun 7, 2023 at 10:28
  • 4
    @CodyGray-onstrike you're misunderstanding my point. When I say proper it's not a replacement for better or worse. It's a direct comparison that SO has the goodwill of contributors who are contributing freely, not because it's a paying job but rather because it's a passion and if they were to lose that and had to invest in proper (as in official, or paid) moderator teams for each site it'd probably cost a lot more than simply engaging in a respectful manner with the community.
    – Script47
    Commented Jun 7, 2023 at 10:43
  • 1
    Re Let's see SO invest in proper moderator teams instead of relying on the good graces of volunteers. if by "proper" you mean "paid" that's the exact opposite of what we need. We need Mods that stand up for the community. Staff can be directed what they can and cannot say, as has happened before, and I am sure is happening again now. I'll draw an assumption and say staff have been fired for standing up for what they believe in. Commented Jun 7, 2023 at 10:45
  • 6
    @chrisneilsen I don't disagree with you. I'm taking a jab at SO in the sense that they're doing so much to try and squeeze every last cent out of these sites but imagine if all of a sudden because everyone stopped moderating they then had to foot the bill of hiring staff to actually do these tasks. I'm not talking about whether it'd be good or bad for the wider community, just a point on the fact that they're overlooking the huge work power users and mods do and how much that could translate to in actual cost.
    – Script47
    Commented Jun 7, 2023 at 10:49
