
Update

On August 2nd, 2023, negotiations between community representatives and representatives of the company concluded with an agreement being reached. The details of the agreement can be found at Moderation strike: Results of negotiations.

On August 7th, 2023, based on the results of several polls held by various sections of the community, the coordinated call to strike concluded. Further details can be found at Moderation strike: Conclusion and the way forward.


Introduction

As of today, June 5th, 2023, a large number of moderators, curators, contributors, and users from around Stack Overflow and the Stack Exchange network are initiating a general moderation strike. This strike is in protest of recent and upcoming changes to policy and the platform being made by Stack Exchange, Inc.1 We have posted an open letter addressed to Stack Exchange, Inc. The letter details which actions are being withheld, the main concerns of the signees, and the concrete actions that Stack Exchange, Inc. needs to take to begin to resolve the situation. Striking community members will refrain from moderating and curating content, including casting flags, and critical community-driven anti-spam and quality-control infrastructure will be shut down.

However, the letter itself cannot contain all of our concerns, and we felt it was important to share some of the background and details that were omitted from the letter in the interest of brevity. We also wanted to touch on several related points concerning Stack Exchange, Inc.'s recent behavior.

Background

A history of the Artificial Intelligence policy

On December 5th, 2022, Stack Overflow moderators instituted a “temporary policy” banning the use of ChatGPT in particular on the site. The policy was instituted due to the general inaccuracy of ChatGPT's answers, as well as the fact that such posts violate Stack Overflow's referencing requirements. The moderator team watched community feedback for guidance, and support for the policy ran strong. Similar policies were enacted across the network.

Within the next several days, thousands of posts were removed and hundreds of users were suspended for violating this policy.

Over the next few months, Stack Exchange, Inc. staff assisted in the enforcement of this policy. This included adding a site banner announcing the ban on these posts as well as editing and adding Help Center articles to mention this policy. Moderators were also explicitly given permission to suspend for 30 days directly in such cases, skipping the escalation process that is generally encouraged.

On May 29th, 2023 (a major holiday in the US, CA, UK, and possibly other locations), a post was made by a CM on the private Stack Moderators Team2 (now publicly published). This post, with a title mentioning “GPT detectors”, focused on the rate of inaccuracy of automated detectors that aim to identify AI-generated (and specifically GPT-generated) content, something that moderators were already well aware of and taking into account.

This post then went on to require an immediate cessation of suspensions for AI-generated content and an immediate stop to moderating AI-generated content on that basis alone, allowing only one exceptionally rare case in which it remained permissible to delete or suspend for AI content. It was received extremely poorly by the moderators, with many concerns being raised about the harm it would do.

On May 30th, 2023, a version of this policy was posted to Meta Stack Exchange and tagged [mod-agreement-policy], making it a binding moderator policy according to the Moderator Agreement. The policy on Meta Stack Exchange differs substantially from the version issued in private to the moderators. In particular, the public version of the policy conspicuously excludes the “requirements” made in private to immediately cease practically all moderation of AI-generated content.

The problem with the new policy on AI-generated content

The new policy, establishing that AI-generated content is de facto allowed on the network, is harmful in both what it allows on the platform and in how it was implemented.

The new policy overrode established community consensus and previous CM support, was not discussed with any community members, was presented misleadingly to moderators and then even more misleadingly in public, and is based on unsubstantiated claims derived from unreviewed and unreviewable data analysis. Moderators are expected to enforce the policy as it is written in private, while simultaneously being unable to share the specifics of this policy as it differs from the public version.

In addition to these issues in how Stack Exchange, Inc. went about implementing this policy, this change has direct, harmful ramifications for the platform, with many people firmly believing that allowing such AI-generated content masquerading as user-generated content will, over time, drive the value of the sites to zero.

A serious failure to communicate

Throughout the process of creating, announcing, and implementing this new policy, there has been a consistent failure to communicate on the part of Stack Exchange, Inc. There has been a lack of communication with moderators and a lack of communication with the community. When communication happened, it was one-sided, with Stack Exchange, Inc. being unwilling to receive critical feedback.

An offer by Philippe, the Vice President of Community, to hold a discussion in the Teachers’ Lounge moderator-only chatroom took days to be realized. During that conversation, certain concerns were addressed3, but the difficult questions remained unanswered – particularly about the lack of communication ahead of time.

The problem with AI-generated content

This issue has been talked about endlessly, both all around the Stack Exchange network and around the world, but we feel it's important to highlight a few reasons why several communities, not just Stack Overflow, decided to ban AI-generated content. These reasons serve as the backbone not only for our moderation stance against AI-generated content, but also for why we feel confused and betrayed by Stack Exchange, Inc.'s sudden decision to halt our efforts to enforce the community-supported ban.

To reference Stack Overflow moderator Machavity, AI chatbots are like parrots. ChatGPT, for example, doesn't understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it's providing are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites depends upon for high-value content. When prompted, it's just stringing together words based on the information it was trained on. It does not understand what it's saying. That lack of understanding yields unverified information presented in a way that merely sounds smart, and citations that may not support the claims they're attached to, when the citations aren't wholly fictitious. Furthermore, the ease with which a user can simply copy and paste an AI-generated response moves the metaphorical “parrot” from the chatbot to the user: they don't really understand what they've just copied and presented as an answer to a question.

Content posted without innate domain understanding, but written in a “smart” way, is dangerous to the integrity of the Stack Exchange network’s goal: To be a repository of high-quality question-and-answer content.

AI-generated responses also represent a serious honesty issue. Submitting AI-generated content without attributing its source, as is common in such a scenario, is plagiarism. This makes AI-generated content eligible for deletion per the Stack Exchange Code of Conduct and the rules on referencing. However, in order for moderators to act on that, they must identify it as AI-generated content, which the private AI-generated content policy permits only in extremely narrow circumstances that apply to a very low percentage of the AI-generated content posted to the sites.

This isn’t just about the new AI policy

While a primary focus of the strike is the potential for the total loss of usefulness of the Stack Exchange platform caused by allowing AI-generated content to be posted by users, the strike is also in large part about a pattern of behavior recently exhibited by Stack Exchange, Inc.

The company has once again ignored the needs and established consensus of its community, instead focusing on business pivots at the expense of its own Community Managers, with many community requests for improved tooling and a better user experience left on the back burner. As an example, chat, one of the most essential tools for moderators and curators, is desperately out of date, with two-minute, high-impact changes being ignored for years.

Furthermore, the company has repeatedly announced changes that moderators deem directly harmful to the goals of the platform, this policy on AI-generated content among them. The community, including moderators and the general contributor base, was neither consulted nor asked for input at any point before these changes were announced, and the announcements were phrased in a manner that indicated no possibility of retraction or even a trial period.

Some of these planned changes have been temporarily held off due to controversy, with this strike influencing those decisions, but that does not change the recent tendency of Stack Exchange, Inc. to make decisions affecting the core purpose of the site without consulting those most affected.

The events of the last few weeks seem like history repeating itself: Stack Exchange, Inc. ventures into a new pursuit (this time, generative AI) that runs contrary to the community's interests, makes a decision at odds with all feedback available to it, ceases communication with us, and we go on strike. This is similar to what happened the last time the community moderators prepared to go on strike.

How we resolve this

Even though the strike may end, many community members are not comfortable returning to the status quo that existed before the AI policy if nothing else changes. The strike's focus on the AI policy does not downplay the significance of SE's other actions. We deserve much more than just a retraction of the AI policy. Stack Exchange already made promises after the 2019 debacle that it has since failed to keep. We are worried that Stack Exchange will continue down the same path once the situation calms down.

While the recent actions by Stack Exchange, Inc. are in conflict with the community and take a significant step backward in terms of the relationship between the company and the community, we do not think that our relationship is beyond repair. We do however worry that we are nearing the point at which it cannot be repaired anymore.

While it certainly may be true that the company wants to meet our needs and wants to care for us, the reality is that this is not happening. It is time to wake up and realize what must be done. Stack Exchange, Inc. is not acting in our interest; it is time for it to start.

What the striking users want

For the strike to end, the following conditions must be met:

  • Retraction of the AI policy change, followed by revision of it to a degree that addresses the expressed concerns and empowers moderators to enforce the established policy forbidding generated content on the platform.
  • Disclosure to the community of the internal AI policy given directly to moderators. Issuing one policy in private and a significantly different one in public has put the moderators in an impossible position, making them targets for accusations of being unreasonable and of exaggerating the new policy's effect. Stack Exchange, Inc. has harmed the moderators by the way this was handled; the company needs to admit its mistake and be open about it.
  • Clear and open communication from Stack Exchange, Inc. regarding the establishment and changing of policies and major components of the platform, with extensive and meaningful public discussion beforehand.
  • Honest and clear communication from Stack Exchange, Inc. about the way forward.
  • Collaboration with the community, instead of fighting it.
  • An end to dishonesty about the company's relationship with the community.

A change in leadership philosophy toward the community

We need business leadership to meet minds with community members and Community Managers, because currently, leadership appears to ignore them.

Immediate financial concerns appear to drive feature development. The community also has feature-development wants and needs, but those receive no substantial consideration, let alone resource allocation. The little weight leadership gives to the community and the CMs even leads to reckless and harmful business decisions, like the AI policy.

Leadership needs a change in philosophy toward one that treats the community as more than a product and values its needs and expertise. Such a philosophy is evidently missing at present, and leadership takes the expertise behind its own product for granted. Leadership needs to embody this philosophy by actually allocating resources to community needs as well as its own, and by informing feature development with the community's expertise. Development can be guided by both business and community needs!

In conclusion

The sites on the Stack Exchange network are kept running smoothly by countless hours of unpaid volunteer work, and, in some cases, projects paid for out of pocket by community members. Stack Exchange, Inc. needs to remember that neglecting and mistreating these volunteers can only lead to a decrease in the goodwill and motivation of those contributing to the platform.

A general moderation strike is being held until the concerns laid out in the open letter and this post are addressed. Moderators, curators, contributors, and users: you are welcome to join in by signing your name to the strike letter.

If you would like to sign the open strike letter, but do not have a Stack Overflow account, please reach out to @mousetail (start a new room).

Updates

This post contains strike updates (representatives elected, reactions to the GPT data analysis posted by Philippe, conditions for ending the strike).


1 While we’re aware that the legal name of the company is “Stack Exchange, Inc.”, the name “Stack Overflow” is more recognizable, and thus used in the open letter. The “Inc.” serves to demonstrate that our concerns lie with the corporate entity, and not the site itself, its moderators, or individual employees.

2 Stack Exchange, Inc. provides a free Stack Overflow for Teams instance for Stack Exchange moderators, allowing moderators to store and share private information, bug reports, documentation, and communication with SO staff.

3 This includes another planned change to the foundational systems of the platform that has the potential to facilitate unprecedented levels of abuse. (This was referred to as “the second shoe” during the planning stages of the letter and the strike, as in “waiting for the other shoe to drop”.) This has been delayed indefinitely while parts of the plan are reconsidered.

  • 50
    Hey, I'd love to sign the open letter but I can't without a Stack Overflow account. Is there any possibility to make some alternative sign-in method? I'd be happy to use my SE account or my Meta SE, for example. I'm active in moderation on three sites to varying degrees, and I anonymously edit many posts (mostly ELL and HNQ), so that's what I'll be stopping.
    – bobble
    Commented Jun 5, 2023 at 4:35
  • 25
    @Andreasdetestscensorship Please add an option to indicate that someone supports removing the policy but doesn't want to participate in the strike. I'd like to sign the letter but due to its current phrasing it would state that I'm participating. Commented Jun 5, 2023 at 7:46
  • 32
    I am surprised nobody has leaked the stated-in-private policy yet. Commented Jun 5, 2023 at 9:34
  • 166
    Stack Exchange moderators take their commitments seriously, @user3840170, and leaking private information is a massive breach of trust. Just because the company has broken trust does not mean that we will stoop to the level of leaking confidential information.
    – Mithical
    Commented Jun 5, 2023 at 9:37
  • 33
    @Mithical I think the OP would consider leaking the information as whistleblowing, which has different ethics from just sharing confidential information.
    – Sklivvz
    Commented Jun 5, 2023 at 12:00
  • 80
    Although I agree with you, I'm really skeptical about the strike's effectiveness. Based on the company's actions in the last few years, it's pretty clear they don't care about quality or the community anymore. And the community members who still care are seen as a burden they have to deal with. With the strike, SE has the perfect excuse to remove all diamonds and make new elections, making sure that all new mods will be aligned with their goals.
    – hkotsubo
    Commented Jun 5, 2023 at 13:50
  • 45
    @Caimen - Moderators do not rely on AI detectors to determine if a post is AI-generated. There are other heuristics used instead, and action is never taken solely because a purported AI detector claimed it was ChatGPT or whatever.
    – Mithical
    Commented Jun 5, 2023 at 14:16
  • 47
    @Caimen Nobody has been banned. Suspensions have been doled out on the basis of human moderator review of posts that have been flagged as potentially LLM-generated.
    – Ian Kemp
    Commented Jun 5, 2023 at 14:16
  • 27
    @Caimen See the comments under their new policy answer. SE staff is lying to you. Do not trust the point of view which they have presented in that answer. Commented Jun 5, 2023 at 14:22
  • 14
    If you want to sign the letter but don't want to create a stack overflow account, DM me. I can add you manually @bobble
    – mousetail
    Commented Jun 5, 2023 at 16:59
  • 25
    While I certainly agree that SE could and should have done better in 2019, that "debacle" does not rest solely on their shoulders - a substantial number of community members and moderators were being openly transphobic, to the point that the only way to make the sites friendly to nonbinary people (among others) was for the company to step in. I'm absolutely all for demanding better on the actual core issues here, but let's not weaken our position by tying those important issues to cases where the company was ultimately the grownup in the room, and it was us that couldn't be trusted.
    – Cascabel
    Commented Jun 5, 2023 at 18:45
  • 29
    @Cascabel Most of what you said makes perfect sense - I saw some blatantly transphobic posts here on this site at the time. However, I disagree with your point that the company was "ultimately the grownup in the room" and that the community "couldn't be trusted" - many of those posts were being nuked by the community, not by staff. Commented Jun 5, 2023 at 18:49
  • 24
    @SonictheAnonymousHedgehog Many of those posts were being made by moderators. I was on the receiving end of a significant amount of abuse from moderators. Attempts to shift policy starting from discussion among moderators were uniformly shot down. Yes, there were also community members moderating well, but the overall point stands.
    – Cascabel
    Commented Jun 5, 2023 at 18:50
  • 9
    @user202729 I don't really feel comfortable getting too specific, but to try to rephrase my previous statement: collectively, both full communities and the internal moderator community came to conclusions that were inconsistent with the policy that was ultimately published. To try to steer back to the original point: holding SE accountable, great. Suggesting that it's always just been them who's out of touch, and the issue is purely "they're not doing what we say": meh.
    – Cascabel
    Commented Jun 6, 2023 at 1:32
  • 13
    @Someone to be more precise, everyone on strike does what they choose to, there is no official "guideline" on how to behave (apart from the general rule of not doing something that would be considered a punishable offence under normal circumstances anyway) - we are not SE, after all. Some chose to abstain from any and all activity, some - only from moderation duties (VTC/VTR/VTD/reviews/edits). Some chose to stop on all sites, some decided to continue moderating meta sites. Commented Jun 8, 2023 at 2:01

14 Answers

Answer (score 506, +100 bounty)

Background from a striking SO mod who handled 10000+ GPT flags

I’m a Stack Overflow moderator coming here to give more details on what management has said that helped cause me to decide to join the strike, and to give details from my own personal experience handling several thousand of the warnings and suspensions we’ve issued at Stack Overflow for ChatGPT copy-paste abuse.

And I’m framing this as a response to a comment elsewhere from company management way back in December that now looks to have been an unintentional “tell” from management that they may have already started musing way back then about plans to frame it all as a misguided “witch hunt”.

So, to be very clear: the massive effort and hours that the curators/flaggers and elected moderators spent together over the last 6 months, very successfully stemming the ChatGPT copy-paste tide and shielding our community’s users from all that junk, never, in any way, shape, or form, came anywhere close to resembling a “witch hunt” or any other malicious, hysterical, misguided analogy that management might now want to smear us with.

The reality is instead this:

We’ve been very accurately identifying the ChatGPT posts with an extremely low number of false positives. And I say that as someone who handled a very very large number of the flags.

For support of the assertion above, please really do take time to read on here.

(Non)Use of any detection tools

I want to take a moment here to respond specifically to the continuing nonsense insinuations (smears) from management (repeated to the media even today) that we have been over-relying on various detection tools that “have an alarmingly high rate of false positives”, and to their “decision to require that moderators stop using that tool [sic]”.

So, let me make clear:

In the thousands of cases I handled, I never ever used any of the detection tools.

That is, not only did I not “over rely” on detection tools, I never relied on them at all.

And in cases where the flaggers had included scores from any detection tools in their flag comments, I even completely ignored whatever score information the flaggers provided — to the point that I would actually stop reading the flag comments at the point where they were providing score information, because I found no use at all for it.

So not only could it be said of me that I am one of the moderators whose “own analyses were in use as well” — I can say that my “own analysis” was exclusively in use, with zero reliance on any detector information.

Numbers and assertions

To help clear up any doubts on whether I know whereof I speak, here are some numbers:

  • Conservatively speaking, for the last 6 months (since the beginning of December 2022), I have spent 40 minutes every single day just on moderating ChatGPT flags. That works out to 120 hours in total that I’ve personally spent on them so far (see the quick check after this list).

  • And in those hours I’ve spent on this week after week, I’ve looked at multiple thousands of ChatGPT-flagged posts: on the order of 10,000 or even 15,000 at this point.
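
For anyone who wants to double-check those figures, here is a minimal sketch of the arithmetic in Python; the inputs are just the estimates stated above, with a round 30 days per month assumed, not exact logs:

    # Sanity-check the stated totals: ~40 minutes/day for ~6 months,
    # covering roughly 10,000-15,000 ChatGPT-flagged posts.
    days = 6 * 30                   # ~180 days since early December 2022
    hours_total = days * 40 / 60    # 40 minutes per day
    print(hours_total)              # -> 120.0 hours, matching the estimate above

    for posts in (10_000, 15_000):
        seconds_per_post = hours_total * 3600 / posts
        print(posts, "posts ->", round(seconds_per_post), "seconds per post")
    # -> ~43 s/post at 10,000 flags, ~29 s/post at 15,000: quick, but
    #    plausible when most copy-paste cases are obvious at a glance.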

And so I do now assert that in the many hours I've spent actually looking at the ChatGPT-flagged posts, and from the many thousands of those posts that I have actually closely and carefully scrutinized in detail, I have in fact learned to recognize the ChatGPT copy-paste cases (including many of the ones whose content the posters had edited before pasting in, to intentionally obscure the provenance) and I can in fact say this:

I assert: I can, with a very high level of confidence and a very low level of false positives, very effectively identify answers whose provenance is ChatGPT or other AI.

And all that leads me to finish with an invitation…

An invitation to management

To anyone in management who claims the elected moderators have been getting it wrong for the last 6 months, with an unacceptably high rate of false positives, and who wants to challenge my personal assertions about how accurately I can identify the ChatGPT cases: I invite you to work together with me over several days, looking at at least several dozen ChatGPT flags as they come in.

Alternatively, rather than looking at new flags as they come in, I invite anyone to go through the logs of my moderation activities together with me and examine a significantly-sized random sample of the thousands of ChatGPT flags I have handled.

Either way, let’s spend time scrutinizing the actual content of the flagged posts together, and actually honestly talking together in good faith about which posts we think we can agree are highly likely not the poster’s own original work.

Until anybody who questions how well we’ve been handling the ChatGPT flags actually does something concrete like what’s described above, waving vague suggestions about possible “witch hunts” and other slanders around under the noses of the elected moderators and flaggers is completely irresponsible at best. In reality, in combination with the other indirect accusations and aggressions that key company reps driving company messaging have slung at the elected moderators, it is completely unconscionable and completely unprincipled.

  • 120
    As a moderator on a smaller site, seeing the numbers you posted about the amount of AI-flagged posts is... incredible.
    – Timothy G.
    Commented Jun 5, 2023 at 20:04
  • 62
    I've flagged quite a few ChatGPT answers, and sideshowbarker has indeed handled a huge proportion of them. From my end, I can also say I've never flagged based on any detection tool -- after you look at a number of ChatGPT responses, it becomes obvious what is ChatGPT (and if I'm wrong about this, I'd like to see examples of false positives). I guess SO's fear is that when users push back and insist "No I didn't use ChatGPT" then SO has no tangible proof that the user in fact used ChatGPT.
    – tdy
    Commented Jun 5, 2023 at 20:14
  • 16
    This answer is awesome. Great read. One could possibly create a training set for humans to learn to recognize AI generated content. Commented Jun 5, 2023 at 20:43
  • 41
    It would be interesting to set you up in a blind trial and see what your accuracy rate is on content known to be human generated vs content known to be AI generated. I'll bet you fare way better than the detectors they are calling out. Commented Jun 5, 2023 at 20:55
  • 36
    @StephenOstermilleronStrike IMO this is a big bit of what SE is missing in the current moment. Many times, both privately and publicly, mods have asked staff to run a trial like that. No such trial of moderators' abilities has been conducted.
    – nitsua60
    Commented Jun 5, 2023 at 20:57
  • 42
    "[...] striking SO mod who handled 10000+ GPT flags" - I knew this was you immediately 😅 Like tdy, you handled all (or most) of my ChatGPT flags, that also didn't use any tooling, but a combination of gut feeling, previous post history (if any), timeline of activity (multiple multi-paragraph posts in the span of a few seconds), etc. I'm glad to hear you (and I hope other moderators) used your best judgement and not some tool, and wanted to take a second to thank you; since ChatGPT came onto the scene, moderation here has probably been... difficult, but we, the users, appreciate you 🙂
    – Tim Lewis
    Commented Jun 5, 2023 at 21:57
  • 12
    I, like tdy, never used ChatGPT scanners/detectors. When I flag something I think is ChatGPT, I just use my own sense and subject-matter expertise.
    – starball
    Commented Jun 6, 2023 at 0:22
  • 5
    After sleeping on it: it could still be that AI scanners indirectly played a tiny role here. After all, you say that many of the flagged posts you learned from included a score in the flag message. One could assume that the flagged posts were partly selected by scanners, which means the composition of the posts you learned from was affected by AI scanners. Another thing is the ground truth. How does anyone know the ground truth here unless there's a test like the one proposed by Stephen Ostermiller in a comment above? I mean, where does the "very high level of confidence" come from if the ground truth is not known? Or is it? Commented Jun 6, 2023 at 6:36
  • 9
    FWIW, I have flagged hundreds of AI posts on SO and always detected and flagged those posts solely using my own brain, not tools. Occasionally, I have used detection tools, but only to see how well those tools work rather than means of detection or confirmation. Commented Jun 6, 2023 at 8:13
  • 11
    Like for regular plagiarism, I flag ChatGPT posts when I encounter them by chance (I didn't hunt for them (past tense intentional)). They are easier to spot than regular plagiarism due to ChatGPT's very distinct writing style (for instance, the echoing back of the question (in a slightly different form)). I have never used any tools for this, only my own brain. And I always look for supporting evidence before I flag. And thank you, sideshowbarker, for handling all those flags! The word has become devalued by overuse, but it is very much appreciated. Commented Jun 6, 2023 at 12:13
  • 15
    I have submitted numerous ChatGPT flags that were promptly and effectively handled by sideshowbarker. I feel we've worked together effectively to help protect StackOverflow. Now I will join the strike by ceasing all flagging and letting the site stew in its own juices.
    – matt
    Commented Jun 7, 2023 at 13:19
  • 8
    I submitted many chatGPT flags that were handled by you. Although I used the detection tools just to see how they worked, I never gave them any credence. Firstly, it's easy to recognize when something likely had its start with chatGPT. It jumps out at you. Then I'd check their answer history, frequently there is a gap with mostly monosyllabic answers followed by more recent wordy flowing chatGPT-looking answers. Finally, I actually ran the questions through chatGPT, and only when one of its answers was almost verbatim copy-and-pasted did I flag the answer. No witch hunt here. Commented Jun 8, 2023 at 0:28
  • 7
    @AdamRubinson Here's one example. Feel free to have a look at the author's previous contributions and take an educated guess at what might have happened in the 6 months since their last post that turned them from someone that doesn't care about grammar, punctuation, or formatting into someone capable of writing perfect English text, perfectly paced and formatted, not missing a single punctuation mark.
    – Tim
    Commented Jun 11, 2023 at 12:47
  • 12
    @TomWenseleers Your examples aren't relevant to the ChatGPT policy. We've always been allowed to use CGPT, test its output, and write a verified solution based on that output. The ban/strike is about thousands of users copy-pasting CGPT output and basically spamming unverified junk.
    – tdy
    Commented Jun 12, 2023 at 23:37
  • 13
    We're on strike because we want to be able to keep doing this, to keep making the sites better, @dougp. If we're not allowed to do that, the next step will be quitting altogether, but we thought we should at least try some last-ditch efforts before abandoning ship altogether. We're not concerned that we'll be replaced with moderator bots. They might as well replace us with random number generators or nothing. At that point, the site will be dead anyway. We're attempting to head off the site we know, love, and have poured thousands upon thousands of hours into over the years from collapsing Commented Jun 14, 2023 at 11:50
Answer (score 209)

Here's a list of per-site meta discussions related to the strike (largely gathered by searching "is:q created:2023-05-30.. strike [discussion]" on stackexchange.com, sorted by site traffic, and updated on a best-effort basis; stackexchange.com search seems bugged and shows fewer items than the count it returns, but I've seen enough bugs by now to not be surprised):

Honourable mentions:
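
For anyone who wants to reproduce or refresh the search, here is a minimal sketch of building the search URL programmatically; it assumes stackexchange.com's standard /search endpoint and the query syntax quoted above:

    # Build the network-wide search URL used to gather this list.
    from urllib.parse import quote_plus

    query = "is:q created:2023-05-30.. strike [discussion]"
    url = "https://stackexchange.com/search?q=" + quote_plus(query)
    print(url)
    # is:q                 -> questions only
    # created:2023-05-30.. -> created on or after 2023-05-30 (open-ended)
    # [discussion]         -> restricted to the "discussion" tag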

Answer (score 162)

So far, this is the closest thing that we have to an official response from the company, found in an article on Dev Class.

Speaking for myself, this is not a good start.

The trigger for the current crisis was an instruction on Monday last week (a public holiday) to Stack Overflow moderators in an official but private forum, “Moderators were informed, via pinned chat messages in various moderator rooms (not a normal method), to view a post in the Moderator Team that instructed all moderators to stop using AI detectors (as outlined above) in taking moderation actions,” said a post. The details of the instruction are not public. VP of Community Philippe Beaudette posted that “AI-generated content is not being properly identified across the network,” that “the potential for false positives is very high,” and “internal evidence strongly suggests that the overapplication of suspensions for AI-generated content may be turning away a large number of legitimate contributors to the site.” He said moderators had been asked to “apply a very strict standard of evidence to determining whether a post is AI-authored when deciding to suspend a user.” However, the moderators claim that a description of the policy posted by Beaudette “differs greatly from the Teams guidance … which we’re not allowed to publicly share.”

There is no evidence that Beaudette's claims are true.

Although there is evidence that some detectors have false positives, this shouldn't be news to moderators and is something that has been discussed. It's why we don't rely exclusively on the detectors, but on other moderator tooling as well as our experience and expertise with the content on each of our communities.
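
To illustrate why even a seemingly modest false-positive rate is damaging at scale, here is a minimal sketch of the base-rate arithmetic; every number in it is hypothetical, chosen for illustration, and none of them comes from the company's (unshared) analysis:

    # Hypothetical base-rate illustration: why detector output alone is weak evidence.
    human_posts = 9_000          # posts that are genuinely human-written
    ai_posts = 1_000             # posts that are actually AI-generated
    false_positive_rate = 0.05   # detector wrongly flags 5% of human posts
    true_positive_rate = 0.90    # detector catches 90% of AI posts

    false_flags = human_posts * false_positive_rate   # 450 innocent users flagged
    true_flags = ai_posts * true_positive_rate        # 900 correct flags

    precision = true_flags / (true_flags + false_flags)
    print(round(precision, 2))   # -> 0.67: roughly one in three flags is wrong

    # This is why a detector score can be, at most, one weak signal among
    # several, and never grounds for a suspension on its own.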

I also don't see evidence that the people posting generated content are legitimate contributors. Of the people I personally suspended, only one had previous positive contributions. When someone is suspended, they can also appeal by responding to the moderator message; no one did. I can't speak for all moderators on all sites, but my understanding is that the number of suspended accounts with prior positive contributions, or that responded to their suspension, is low.

In a statement sent to Dev Class, Stack Overflow’s CEO Prashanth Chandrasekar told us:

“A small number of moderators (11%) across the Stack Overflow network have stopped engaging in several activities, including moderating content. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content.

“Stack Overflow ran an analysis and the ChatGPT detection tools that moderators were previously using have an alarmingly high rate of false positives. Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; people with original questions and answers were summarily suspended from participating on the platform. These unnecessary suspensions and their outsize impact on new users run counter to our mission and have a negative impact on our community.

“We stand by our decision to require that moderators stop using the tools previously used. We will continue to look for alternatives and are committed to rapid testing of those tools.

“Our moderators have served this community for many years, and we appreciate their collective decades of service. We are confident that we will find a path forward. We regret that actions have progressed to this point, and the Community Management team is evaluating the current situation as we work hard to stabilize things in the short term,” he added.

The number of moderators may be small, but it includes the most engaged moderators across some of the most active sites on the network. The figure also discounts the impact of deactivating the various notification tools and anti-spam bots that users run on new posts to mitigate the impact of bad content. The statement trivializes who is participating and what this participation will look like to visitors to the platform.

I don't think that the position on detection tools is the issue. Moderators have known, since the very beginning, about the limitations in the tools used to detect algorithmically-generated content. The tipping point was, from my perspective, how the policy was unveiled. Per the Moderator Agreement, policies are supposed to be reviewed by moderators on the Moderator Team prior to being made public. Although the agreement doesn't say how long the review period is, a day of review that starts on a holiday in the US, Canada, UK, and other countries doesn't seem to be consistent with the spirit of the agreement. In addition, the post on the Moderator Team was a decree or edict and not an opportunity to give feedback. However, this is just one in a trend of announcing fundamental platform and policy changes without appropriate feedback.

Although I don't doubt that, across the network, a large number of accounts with few (or no) contributions were suspended, this policy is far more forgiving than our policies for dealing with spammers. I moderate two smaller sites on the network, where we had fewer than 10 accounts suspended for posting algorithmically-generated content. Only one had meaningful contributions previously. If they had posted spam, the majority of the accounts would have been destroyed, which would have also fed into anti-spam measures. If there was a human behind these accounts, there's a (small) possibility that they would have learned a lesson and contributed in the future. However, my suspicion is that these accounts were created only to post generated content, and there was minimal loss in suspending them. If the problem was indeed the suspensions, then I know that I would have agreed to end the policy of immediately jumping to 30-day suspensions (which was something promoted by the staff), and I suspect other moderators would have as well.

Personally, it seems like leadership at the company doesn't understand where we're coming from or what we want. Understanding that is the first fundamental step to take.

  • 45
    This is not just "not a good start". It's the opposite of what we're requiring. This only reinforces the strike. Commented Jun 5, 2023 at 14:30
  • 32
    "These unnecessary suspensions and their outsize impact on new users run counter to our mission and have a negative impact on our community." This is simply untrue. We are the community, and we are sending SE a message. They don't get to put words in our mouths. Accept the words we're giving, instead. Commented Jun 5, 2023 at 14:31
  • 9
    also, the CEO here is using private communications with the mods, that's just so completely wrong that I'm kind of amazed at it
    – Lamak
    Commented Jun 5, 2023 at 14:33
  • 31
    Zoe (who was interviewed by Dev Class) claims that the 11% was accurate at the time, but the percentage on Stack Overflow in specific is much higher now: 15/27 (according to the list of moderators on Stack Overflow) makes 55% of Stack Overflow elected moderators on strike.
    – E_net4
    Commented Jun 5, 2023 at 14:34
  • 78
    "Usage of these tools correlated to a dramatic upswing in suspensions of users with little or no prior content contributions; " -- because both correlate to the rise of people posting ChatGPT spam. It's not rocket science. And correlation is not causation, obviously. Commented Jun 5, 2023 at 14:34
  • 5
    @E_net4isonstrike I didn't make a claim about 11% being accurate or not. Even if it's a small fraction of total moderators, it's the most active moderators on the most active sites. It's also some of the most active curators on the most active sites. It's going to be a huge strain to have these people not participating. Commented Jun 5, 2023 at 14:38
  • 1
    @ThomasOwens Sure, I wasn't implying that you stood by this percentage, just added some more context. By the way, network-wide it has already reached 14%.
    – E_net4
    Commented Jun 5, 2023 at 14:40
  • 4
    An unfortunate comment but an unsurprising one. A company's natural reaction to a strike is to discount the threat. But there's a reason we're actually doing this and not just talking about it: Because the longer we hold out, the more they have to accept that those of us making a point here are a vital and difficult-to-replace part of the ecosystem. Commented Jun 5, 2023 at 14:40
  • 15
    @E_net4isonstrike Minor clarification: 11% was correct network-wide at the time, but not on SO in particular. Note how the CEO fails to call it the Stack Exchange network, but says "Stack Overflow network" (not just Stack Overflow). If it's specifically SO, then yes, 11% is a blatant lie. SO (the site) has had >50% sign the open letter since at least yesterday. However, this morning, 11% network-wide would've been approximately correct. It's currently at 14%+
    – Zoe
    Commented Jun 5, 2023 at 14:47
  • 81
    On Academia.SE, the percentage of striking moderators is 100%. One of us already resigned due to the policy and the manner it was passed down. The policy is against academic ethics of attribution and honesty, and potentially dangerous to many of the askers on our site who are looking for professional guidance from humans with experience in Academia, as they confront conflicts that will determine the course of their careers. Commented Jun 5, 2023 at 15:13
  • 5
    @Trilarion A lot of the heuristics mods used, including our assessments of various detectors, were discussed in mod-only spaces to prevent people from being able to make the most minimal of changes to avoid detection. We don't discuss other moderator tools or techniques in public for similar reasons. Commented Jun 5, 2023 at 15:30
  • 27
    @Trilarion The amount of AI content was initially quite high, especially on SO; then, in consultation with staff, moderators took away the "carrot" for posting AI content: by deleting such content and suspending accounts when necessary, we made it so the value of posting AI-generated content here was low, which reduced submissions of that content. The same concept applies to all moderation: you aren't just acting on certain content, but discouraging it in the first place. Commented Jun 5, 2023 at 15:41
  • 7
    Really, it makes the CEO's claim that it is suspensions for AI content that have driven people away quite foolish, because the number affected is so small, while emphasizing the importance of moderating this content in the first place: it's what keeps the amount of AI content manageable. Commented Jun 5, 2023 at 15:43
  • 11
    The lack of transparency alone is enough to justify the strike anyway.
    – Stargateur
    Commented Jun 5, 2023 at 16:28
  • 7
    "a day of review that starts on a holiday in the US, Canada, UK, and other countries doesn't seem to be consistent with the spirit of the agreement" Especially when said day of "review" is after the policy is effective, which occurred simultaneously with it being given to moderators for said "review". The moderator agreement requires a "preview for review"; this was not that, in either letter or spirit.
    – Ryan M
    Commented Jun 6, 2023 at 0:36
Answer (score 76, +300 bounty)

Power to the striking moderators! 100% support from us users.

A Strike song for some inspiration...

Having said that, I think the demands do not go nearly far enough. Like the opening post here said, this is much larger than the AI decision; it's about the entire relationship of the company to the community and network of sites.

I call on the striking moderators to demand the following:

  • Formal declaration by SE Inc. recognizing the Stack Exchange network as a public resource, irrespective of its private ownership of servers, code, databases, copyrights, etc. And of SE Inc.'s relation to the network being foremost, though not exclusively, that of a trustee. (Yes, this should have legal ramifications.)
  • Acceptance of a community veto power on network policy changes, the details of which the moderators should flesh out either as part of their demand or to be worked out in negotiations.
  • Appointment of a third-party ombudsman for information disclosure, agreed upon by the company and the moderators (mechanism for such agreement to be worked out), who will have full and unrestrained access to all company documents and information, excluding employees' personal affairs and correspondence, and who will have the authority and the obligation to disclose all such information that is deemed relevant to the network.
  • Accountability for SE Inc. management vis-à-vis the community (collectively or individually)—a formal obligation, via SE Inc. company bylaws/charter or a binding agreement with the delegates of the moderators, to publicly answer, on Meta.SE, collective queries from the moderators, with full and complete answers (with exact mechanism to be worked out by moderators vis-a-vis management).

Whether these demands are a condition for ending your strike or not, that's entirely up to you. But I believe that is the kind of relationship change we need to see on this network (and I may not have even gone far enough).

  • 6
    All four of those are honestly really great ideas, but I don't have high confidence in any of them happening. Commented Jun 8, 2023 at 2:20
  • 5
    @SilvioMayolo: The first step towards achieving a goal is conceiving it. The second is putting it forward. The third is getting people to adopt it as a goal. Making it the demand of a strike may be the 6th or 7th step, but - at least I'm trying to help advance from one step to the next. Let the strikers decide how far they think they can go.
    – einpoklum
    Commented Jun 8, 2023 at 6:29
  • 3
    "But then we can't make money"
    – OrangeDog
    Commented Jun 8, 2023 at 7:58
  • 3
    @OrangeDog: 1. "You make your money off of other things anyway, not the network itself: Ads, jobs, custom installations, etc." 2. "Really? Show us. Open your books and business plans. Otherwise it's just empty rhetoric." 3. "If you don't make enough money without hurting the network, spin it out into a not-for-profit entity and rearrange your business. If that non-proft needs money, it should not be much of a problem to fundraise from the community to maintain and develop the SE network."
    – einpoklum
    Commented Jun 8, 2023 at 8:27
  • 1
    Excellent suggestions, but humanity has a deadly tendency to follow those who should not lead, i.e., the masses believe CEOs are the appropriate decision-makers, when that role now is simply that of a parasite maximizing personal gain at the expense of the public and employees (loss of Titan submersible is glaring example). The history of evolution is necessarily bloody and our current insanity will not endure, but it is going to be painful for a while. Commented Jun 23, 2023 at 14:30
  • However noble your ideas might be, please never claim "100% support" on behalf of other people. You don't know it, and you don't represent all of "us users".
    – Zeus
    Commented Jun 28, 2023 at 7:37
  • It's not "100% of users", it's "100% of the possible support".
    – einpoklum
    Commented Jun 28, 2023 at 12:59
  • The platform isn't yours, it has never been yours, and it won't be yours, even though you start identifying yourself as being a part of it. I actually find it very arrogant to demand ownership over something purely because you like it, so I guess the 100% claim isn't true. It's your choice to put that much time and heart into this. Don't depend on a platform over which you have no control, you could've known that from the start.
    – Nearoo
    Commented Mar 4 at 16:24
    @Nearoo: The platform is one thing, and the network which rests upon it is another thing. Of course they are interdependent. As for the platform, that's a philosophical debate about the legitimacy of intellectual property etc., but it's irrelevant, since my suggestions only regard the network and its administration. The network, however, was created by the users (especially the moderators). It has been a collective social project which SE Inc. has facilitated, and to which it has contributed, but it is certainly not "theirs".
    – einpoklum
    Commented Mar 4 at 18:51
  • Have you checked out Codidact, btw? Seems like the way the Codidact Foundation is set up satisfies most if not all of what you ask for here. Commented Mar 5 at 21:00
  • @KarlKnechtel: It's an interesting backup plan in case this platform explodes. But almost all of the information and users are here... also, this post was not a personal wish list, but a suggestion to a collective.
    – einpoklum
    Commented Mar 5 at 21:16
Answer (score 60, +50 bounty)

This is my personal opinion and not necessarily representative of all of the moderators, curators, and users on strike.

The first company response on Meta Stack Exchange was incredibly underwhelming.

As you may be aware, a number of moderators (on Stack Overflow/across the network) have decided to stop engaging in several activities that they had taken on, including moderating content - in fact, almost all moderation tasks. The primary reason for this action is dissatisfaction with our position on detection tools regarding AI-generated content, and discontent with how that was rolled out.

This is an accurate assessment. Personally, I'm more discontented with how the position/policy was rolled out. Having an appropriate discussion with moderators and then the community, as it's laid out in the Moderator Agreement, would have been better, even if the result was the same policy.

We ran an analysis and the ChatGPT detection tools have an alarmingly high rate of false positives, which is correlated to a dramatic upswing in suspensions of users with little or no prior content contributions.

As I, and other moderators, have said before, we know that these tools aren't perfect. When the policy was handed down to moderators, there was an assertion that testing had been done on Stack Exchange data regarding the false positives in these detectors. However, no specific information was shared, although the moderators expect it to be provided in the near future. Detectors were only one tool among several used to identify algorithmically-generated posts, and I'm not aware of any moderators who relied on a detector alone to tell them that a post was generated.

When it comes to users with little or no prior content who post content that is not permitted, suspension is letting them off easy. When garbage or spam is posted by accounts with no other valuable contributions, we destroy those accounts. In fact, there are destruction reasons for posting "spam or nonsense" without any other "positive participation" and for cases where the vast majority of content violates the terms of service. When we take these actions, not only is the account destroyed, but the destruction also feeds into anti-spam measures.

People with original questions and answers were summarily suspended from participating on the platform. We stand by our decision to require that moderators stop using that tool. We will note that it appears to be very rare, however, for mods to use ONLY the ChatGPT detection tool, and frequently their own analyses were in use as well. We will continue to look for other, more reasonable tools and are committed to rapid testing of those tools and any suggested heuristic indicators.

There is a public admission that moderators used their own analysis. We also used other moderator tools and, in some cases, engaged with moderators from across the network prior to taking action. Yet the response is to remove a valuable tool from the toolbox. Although there is a false positive rate, the analysis that was shared puts it at what I suspect would be a very tolerable level when detectors are combined with other tools, including other detectors.

The moderators who are engaged in this action served this community collectively for many years on the platform. Personally, I consider a number of them friends, and anytime friendship is tested like this, it’s difficult. I would like to say to them clearly that I hope they know how much I, and the whole staff and community, appreciate their collective decades of service to this community, and I hope that we are able to come to a path forward. I regret that actions have progressed to this point. The Community Management team is evaluating the current situation and we’re working hard to stabilize things in the short term. We’ll be working collaboratively with the community on the long-term solution.

This is not the first time that we've been here. But lessons from the past don't appear to have been learned.

I’ll be honest, the next few days and weeks might be a bit bumpy. Both sides share a deep and unchanged commitment to quality on this platform. Additionally, it’s important that we guarantee that anyone who has meaningful contributions to make has the opportunity to do so. As we have updates I and the team will be sharing them here.

The moderators I'm in touch with and I are also very committed to the quality of content on the network. We do hope this resolves quickly, but at the same time, we want the resolution that's best for our communities.

  • 1
    It is fascinating to watch this subset of the general civilization succumb to the ongoing disease process. Forget who said it, but laws don't control conduct so much as codify the moral convictions of the host culture, i.e., the problem here at SO/SE is that the basic problem of promoting high quality questions and answers (putting aside the usual sociology, see e.g. Freeman's Tyranny of Structurelessness) is made exponentially larger (volume) by use of ChatGPT and the like (suggest replace term "AI" with "ASS" analytical search software to reduce the eschatological panic of some). Commented Jun 30, 2023 at 16:09
  • 1
    It suddenly occurred to me yesterday that what is actually happening here is that the management of Stack Exchange/Overflow wants to provide so-called AI companies a venue to test and perfect (it is impossible to make something which is intrinsically not intelligent "perfect" but Family Feud shotgunning might provide higher percentage canned responses that are not obviously garbage) their product, with the ultimate goal of selling to Microsoft, Google or the like for that specific purpose, thereby generating a large payoff for management. Commented Jul 12, 2023 at 13:29
  • Didn't realize @leanne had already discussed likely plan by SO to produce income by cooperating with AI companies. See July 7, 2023 Stack Overflow CEO Prashanth Chandrasekar interview at VentureBeat venturebeat.com/ai/… moderator-protest/ where this is pretty much confirmed. It appears Stack Exchange is dead already. Commented Jul 16, 2023 at 15:00
Answer (score 50)

Mainly focussing on the title, with the statement that "Stack Overflow, Inc. cannot consistently ignore, mistreat, and malign its volunteers":

That's wrong. Of course they can. Obviously.

The most active sites already have been flooded with an unmanageable amount of low-quality content for years. Now, there's a new source of low-quality content. And... why exactly should they care about that?

They don't.

There will always be new people to join the site. Some of them will become volunteers who work off the crap-queues, for a few internet points and badges (Gamification 101). Some of them will become moderators, because ... oh, what a privilege and honor. Some of them will be disappointed with the increasing workload, the decreasing quality, and the ... $%!$§ ... behavior of the company. Some of them will complain. Some of them will leave. Some of them may initiate a strike *shrugs* - that's just noise. New users will come. New volunteers will come. New moderators will come.

They don't care.

I'm aware that this does not sound constructive. But I think that it is important to know that the company does not care about you at all. People who joined after the previous management debacles may not be aware of that. They may have joined with the same idealism and drive for altruism that I had ~10 years ago. And they should be prepared to be deeply, deeply disappointed.

They will sit that strike out.

  • 21
    I'm pretty sure the intent was more like "we (the signees) are not going to tolerate this treatment"- not literally "you can't treat us like this"
    – starball
    Commented Jun 8, 2023 at 1:43
  • 8
    @starball Taking that statement literally was a dramaturgic device. They are either going to "tolerate" this treatment, or be replaced by others. The company can do whatever it wants. They may see the public announcement of the strike as a minor nuisance, because now, they have to wait for the dust to settle and open a fresh can of apologies and promises. But they don't really care about the people who are striking.
    – Marco13
    Commented Jun 8, 2023 at 2:13
  • 12
    What you fail to acknowledge with this overly cynical take is that moderators/curators striking means that there is the potential for unchecked abuse on the site, which is even worse than spam. And abuse often doesn't surface with red flags that employees can go and check every once in a while. Delaying handling those situations can have real repercussions on the people involved (racism, harassment, bullying, etc.). Once it spreads the perception that the platform is unsafe, it's game over.
    – blackgreen
    Commented Jun 8, 2023 at 2:35
  • 3
    One thing that I can't understand right now. Let's imagine that the strike becomes effective and SE answers "our" demands. Who will fix the whole mess the day after?
    – Largato
    Commented Jun 8, 2023 at 3:03
  • 18
    "We hear you, from now on we will inconsistently and capriciously ignore, mistreat, and malign volunteers" Commented Jun 8, 2023 at 6:34
  • 2
    Totally agree, the company doesn't care. And I'm afraid that they will do exactly what I commented here
    – hkotsubo
    Commented Jun 8, 2023 at 10:12
  • 5
    @blackgreenonstrike I acknowledge that insofar as I considered bringing up another point: Who would even notice this strike if it wasn't announced publicly? Out of the ~5.5m daily visitors of SO, how many of them require moderation action, and notice when the action is not performed? A random (but high) guess of 1% would still mean that they have "99% customer satisfaction"... Yes, it may sound like "cynicism", but that may just be an expression of my disappointment, based on actual observations that I made here in the past ~10 years. They. Don't. Care.
    – Marco13
    Commented Jun 8, 2023 at 11:29
  • 4
    @blackgreenonstrike I don't think he meant that. I read this post as a quite realistic claim: even if every current moderator were to resign now, there are plenty of other users ready to take their place, either because they haven't yet realized what the situation is or because they don't care. So the company will, in a way, always be able to find someone who will "polish the turds", to use shog9's old words. This seems tangential to my post here
    – SPArcheon
    Commented Jun 9, 2023 at 7:59
  • The company doesn't really care - it's not a human being. But you can negotiate with it. It's not like it's not dependent on the mods or existing community. Sure it could try to find new mods and new users, but that is risky by itself and may fail. The striking mods however, must be prepared to walk away if necessary (not sure if they really are). That's the situation, in a nutshell. Commented Jun 9, 2023 at 8:14
  • 1
    @SPArcheon perhaps you are correct, and the cynical part of me somewhat agrees with that idea, but the other part doesn’t. I wouldn’t be striking if I thought the company doesn’t care as much as painted in this answer. Although… I probably would strike anyway because even if they don’t care, I do.
    – blackgreen
    Commented Jun 9, 2023 at 11:35
  • 3
    @Trilarion You've been involved in Meta so long that I'm surprised to see that optimism. SE does not really need a "community" (roughly: "a group that shares goals and values"). It needs visitors and activity. I also don't see any risk in replacing mods with new ones. Each new generation will have the choice to swallow the poor treatment, or leave (and ... people will (have to and be willing to) swallow a lot). So imagine the striking mods went away now (and I share your doubts about that): What then? New ones will be elected, and stoically moderate auto-generated content...
    – Marco13
    Commented Jun 13, 2023 at 0:13
  • @blackgreenonstrike You mentioned your "cynical" part and the "other" - what is this "other part"? (I could ask whether that other part is "naive" or "idealistic", but ... my cynical part would say that these are not necessarily different things ;-)). But seriously: You've been here (on Meta) for 2 years. You replaced a mod who left after the 2019 debacle. That mod replaced one who left after 2015. And if you leave, you will be replaced, by someone who doesn't complain as much as you. (I'm playing devil's - or rather, SE's - advocate here, I hope that's obvious...).
    – Marco13
    Commented Jun 13, 2023 at 0:21
  • @Marco13 I guess I'm naturally optimistic. I just don't want to believe yet that the company will pull an Elon Musk and fire everyone who doesn't agree with them. But it may happen, of course. Having a team of highly efficient mods is valuable, but maybe not that much. Commented Jun 13, 2023 at 3:55
  • 2
    Of course they don't care about the users/mods, but not caring about the quality of content (which they currently don't) is not a sustainable practice for such a website. Threatening to make the quality even worse (which is what the strike essentially does) is probably the only way the moderators (and users) have to pressure any change.
    – liakoyras
    Commented Jun 13, 2023 at 10:30
  • @liakoyras People will do web searches, find a few bad Q&As, and maybe one of the answers solves their issue, and maybe not. Where does moderation or the strike come into play here? Maybe someone flags a comment, and the flag is not handled, but that affects only people who care about stuff like that, and is totally unrelated to "(technical) quality". When "the community" is defined as "people who strive for quality", and SE does not care about quality, then it does not care about the community. (Sure, that's oversimplified, but ... too little space for too many management failures here)
    – Marco13
    Commented Jun 13, 2023 at 14:48
42

Reveal to the community the internal AI policy given directly to moderators. The fact that you have issued one policy in private and a significantly different one in public has put the moderators in an impossible situation, and made them targets for accusations of being unreasonable and of exaggerating the effect of the new policy. Stack Exchange, Inc. has done the moderators harm by the way this was handled. The company needs to admit its mistake and be open about this.

I agree that it would absolutely be in the community's interests to be able to read this secret policy. It is completely unreasonable for us to be subject to new rules which can't even be said in public.

If Stack Exchange, Inc. won't allow the policy to be published here, there are other venues which might consider publishing it. (These outlets have written about Stack Overflow's policy on AI content before, so they might run further stories on the same topic.)

To me, it seems inevitable that the secret policy will become available one way or another, so it's just better for everyone if Stack Exchange, Inc. publishes it themselves.

7
  • 26
    Moderators tend to take our obligations to protect moderator-only information seriously. We recognize that there are aspects of the policy shared with moderators that should not be shared with the public. I believe most of us think that the company should be the one to disclose their policy, since it is their policy. But revoking the policy and following the agreement to discuss policies before rolling them out would be the preference. Commented Jun 5, 2023 at 15:59
  • 1
    @ThomasOwens I agree, the company should disclose the policy. My answer here is an argument for why it is in their interests to do so ─ i.e. it will get disclosed sooner or later anyway, if they don't publish it then someone else will.
    – kaya3
    Commented Jun 5, 2023 at 16:14
  • 2
    It reads like you're suggesting that a moderator leak the full version of the private policy to the press. Meta is where public policies are posted, with links from the private version to the public version as well as with links from the Help Center to the public Meta posts. Commented Jun 5, 2023 at 16:23
  • 3
    @ThomasOwens I haven't told anyone to leak anything. I've stated that I believe the community has an interest in seeing the policy (an opinion you haven't disagreed with), that it could be leaked (a statement of fact which I presume you don't dispute), and that I believe it's inevitable it will be leaked if it's not posted officially (a judgement which perhaps you disagree with, but you haven't persuaded me that it's not inevitable).
    – kaya3
    Commented Jun 5, 2023 at 17:10
  • 6
    You mention protecting moderator-only information, but information itself can't be harmed and doesn't need protecting; it is people's interests in that information which can be harmed and may need protecting. If we accept that the community has an interest in seeing this information, then whose interests are protected by it remaining private? The fact that the strike is occurring at all shows that many moderators are not specifically motivated to protect Stack Exchange, Inc.'s interests. If there is any information mixed in with the policy which it would harm someone else's interests ...
    – kaya3
    Commented Jun 5, 2023 at 17:13
  • 3
    ... if it were disclosed, then like you I trust that that information won't be leaked, because as you say, moderators take their obligation to protect private information (i.e. protect people's privacy) seriously.
    – kaya3
    Commented Jun 5, 2023 at 17:14
  • 8
    there is just one issue: the confusion works in their favor. The users see only part of the picture, while only the moderators see the full version. Considering that the moderators themselves had to ask the company to avoid misrepresenting the issue in articles published by other sites, it looks like this is intentional. That way, at least a small part of the users will think that the mods are acting unreasonably, because all they see is a policy that does not contain any of the bigger issues shared only in private.
    – SPArcheon
    Commented Jun 7, 2023 at 9:19
38

This is peripheral to the question, but pertinent:

I'm perplexed as to why the company is seeking to destroy itself in this manner. While a fair amount of AI output is reasonably accurate and useful, a significant portion is either very misleading or entirely fabricated.

ChatGPT will:

  • Produce totally fake references.
  • Produce wholly non-existent web links.
  • Make claims which, when challenged, it will disclaim, saying something entirely different.
  • In some cases, contradict itself within the same answer.

The only way to deal with these issues is either to have expert knowledge greater than the level of the answer, or to use it only as a source for cross-referencing.

  • By allowing essentially uncontrolled use the company will destroy the integrity of its core asset.

A relevant example: I am not a Quora member, but I read a significant number of their answers in my (relatively wide) areas of interest. I have been surprised to see some Quora members, obviously using ChatGPT for essentially their whole answers, producing many factually incorrect and hardly relevant answers, but receiving numerous upvotes from other members.

While Quora still contains a very large percentage of genuine and highly useful human-generated answers, it is obvious that they have not chosen to actively oppose the posting of complete rubbish to their site. The likely outcome seems clear. Why Stack Exchange, Inc. would wish to follow the same path is puzzling.


Added:

The company may be inadvertently "throwing the baby out with the bathwater". The proposed use of AI to improve asked questions has SOME prospect of being useful.

Experienced users can ignore the advice (as long as it is only advice) and many beginners would benefit from good advice.

Worst case, it may make some questions worse (though possibly not many), and bad questions can be, and presently are, treated badly. This would be a negative for newcomers, but it could be ameliorated by thoughtful community action if members are prepared to actively help newcomers. Based on present practices, this seems unlikely; I perceive the site and the majority of member actions as creating a substantial barrier for new members. Improving this is hard.

Overall, AI may quite possibly improve questions on average. That impression may color the company's perception of AI answer improvement, which is far less likely to be net-positive overall.

One area where I have reservations is the suggested use of AI-generated code to improve questions. This is very much a "double-edged sword", as AI code is often good but may contain subtle errors leading to 'rubbish out'. If new users' code-related questions suddenly sprout AI-generated code examples, the quality would need to be extremely high on average not to produce negative reactions overall from the community.

18
  • 7
    re: "By allowing essentially uncontrolled use the company will destroy the integrity of its core asset." - sadly and bafflingly, they seem to already be aware of this: stackoverflow.com/help/gpt-policy
    – starball
    Commented Jun 18, 2023 at 20:41
  • 3
    Because SE is no longer close to the users in any realistic sense: the layers of narcissistic management that march to the latest admin fad (ESG gibberish) are complete with the takeover by Prosus. This may end badly. Hard to predict.
    – Carl
    Commented Jun 19, 2023 at 8:47
  • 5
    @starball just because they agreed with the community to allow moderators to implement the ban (initially) and to publish that policy document, doesn't mean they've actually considered the implications of the underlying argument, nor that they agree. Commented Jun 19, 2023 at 9:21
  • @KarlKnechtel I've added a comment re AI-improved questions. It is possible that they see the genuine advantages in that area and are blinding themselves to the fact that AI-generated answers are net-negative for quality except when vetted expertly. Commented Jun 19, 2023 at 11:07
  • 11
    AI is a useful tool in the hands of someone willing to invest some time to understand it and its limitations. For example, I could use it to generate example sentences to illustrate a particular usage for my ELL answers. It is not useful (yet) as a general tool the way SE is experimenting with it. It requires a LOT of work to get a model that encodes a good post in SE terms. Right now, it's adding "Thanks in advance" type signatures and other undesired but prevalent content. There are many highly scored posts on SO that are not good exemplars for new users.
    – ColleenV
    Commented Jun 19, 2023 at 12:38
  • 3
    @ColleenV Fundamentally, this style of generative AI will never be able to improve content by being trained on the overall corpus of the existing content. By design, it will generate output that mimics what is already there. The public ChatGPT has, to my understanding, already been fed with the entirety of SO (as of somewhere in 2021?) among many other sources. And as you say, such a model cannot necessarily be re-trained by feeding it a subset of SO selected by any simple heuristic (such as post score). It would need a fundamentally different kind of AI to filter the training data. Commented Jun 20, 2023 at 1:50
  • And even then, these tools are fundamentally unsuited for tasks that require actual programming-like problem solving - just as Copilot cannot write your program for you (if it could, it would have already taken over the world by now, or at least eliminated the overwhelming majority of existing programming jobs). Commented Jun 20, 2023 at 1:52
  • @KarlKnechtel I've said it before--we haven't made the breakthrough that leads to the scary kind of AI capable of self improvement (yet). The generative stuff can be a great tool, but people underestimate the amount of human labor it takes to get it there. AI Is a Lot of Work (Article from The Verge) I don't think people understand how much subsistence wage labor these systems are built upon.
    – ColleenV
    Commented Jun 20, 2023 at 12:44
  • @KarlKnechtel I wanted to load photos to Facebook so that they appeared in the album in date order. I can do that manually by uploading them one at a time. I asked ChatGPT 3.5 how to do this. I described the task clearly, told it what works and what the issues were otherwise. It provided a Python program plus links to two necessary downloads from elsewhere. I have not yet tried it, but it looks sound. I've never used Python. Commented Jun 21, 2023 at 12:27
  • 10
    @RussellMcMahon "I have not yet tried it but it looks sound. I've never used Python." - yes; that's the exact problem. ChatGPT has never used Python either, and doesn't actually have any idea whether the code is sound, no matter how strongly it might "profess confidence" (generate text that represents such claims) in the code. It doesn't have ideas at all. It only has an extremely sophisticated model of what words are likely to follow what other words, taking a rather large amount of context into account, using much more sophisticated algorithms than older attempts at AI. Commented Jun 21, 2023 at 13:06
  • 2
    @KarlKnechtel Indeed. It may not work. BUT I started programming in FORTRAN in 1969, and in machine code (not assembler) on a NatSemi SC/MP microprocessor in 1976. I often dwelt in the darkness of embedded systems and assembler (!!!) (6800, 8080, Z80, 6502, AVR, PIC, ...), but various other languages and systems have happened along the way. I'm 72 :-). The code and the added packages seemed sound and logical. I've never used Python, but reading the code, it makes sense. It MAY have faults I've missed. tbd. Commented Jun 21, 2023 at 13:17
  • 3
    I agree, @KarlKnechtel. Another of many examples: I asked ChatGPT how many times I need to take the square root of an adult's age before the result is less than 1. ChatGPT replied that as long as the age is a positive integer, only two square roots will be needed to yield an answer less than one. And I asked the same question again, and this time I got a lot of totally incorrect gibberish about logarithms. (A quick check of the actual arithmetic follows these comments.) Relying on a tool like ChatGPT to provide correct, useful, reliable answers to anything is an exercise in folly.
    – HippoMan
    Commented Jun 24, 2023 at 19:30
  • 1
    @HippoMan Used with due diligence, ChatGPT is a superb and useful tool. As a metaphor, it's like a double-ended katana with a mid grip and no hand guards. You can cut yourself as easily as your opponent without training and constant care. Or a pair of nunchucks in other than expert hands :-). Commented Jun 25, 2023 at 5:34
  • 1
    @HippoMan ChatGPT is simply a tool, mostly a search tool. You have to check all the output, but its suggestions can be helpful. ChatGPT helped me; I'm using it. I'm just not writing my contributions to the network with it. At some point I can imagine it helping with that too, but I will still check everything from it. Commented Jun 27, 2023 at 9:50
  • 1
    @NoDataDumpNoContribution: yes, ChatGPT can be useful, but it has no way of filtering out false information from its search results. Relying on ChatGPT or other LLM-based tools to verify the integrity of other text is a mis-use of those kinds of tools, due to the lack of safeguards against false positives and false negatives.
    – HippoMan
    Commented Jun 27, 2023 at 20:39
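As a side note on that square-root example (my own working, not part of any comment above): the correct answer is "never", which makes both of ChatGPT's replies wrong. For any age $a > 1$,

$$a > 1 \;\Rightarrow\; a^{1/2^{n}} > 1 \ \text{for every } n, \qquad \lim_{n \to \infty} a^{1/2^{n}} = 1,$$

so repeated square roots approach 1 from above and never drop below it.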
31

I wrote a user style to hide the review queue, downvote, VTC, VTD, delete, undelete, and flag buttons/links.
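For anyone who wants to do something similar, here is a minimal userscript sketch of the idea (my own illustration; the selectors are assumptions and may not match Stack Exchange's live markup, and the original user style is not reproduced here):

    // Inject CSS that hides moderation/curation controls.
    // All class names below are illustrative guesses, not verified markup.
    const hideSelectors: string[] = [
        ".js-review-button",       // review queue indicator (assumed)
        ".js-vote-down-btn",       // downvote arrow (assumed)
        ".js-close-question-link", // vote-to-close link (assumed)
        ".js-delete-post",         // delete/undelete links (assumed)
        ".js-flag-post-link",      // flag link (assumed)
    ];
    const style = document.createElement("style");
    style.textContent = hideSelectors.join(", ") + " { display: none !important; }";
    document.head.appendChild(style);

A user style achieves the same effect with the CSS alone, applied through a browser extension.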

19

The purpose of this answer is to provide an indication of the financial motivation for SE's decisions. In all humility, I would suggest that one cannot assign motive for any particular action without following the money. In that vein, a good starting point may be to see who the investors in SE were in the latest Series D startup financing round. A Series D startup is generally either one which has not achieved its objectives in rounds A, B, and C, or one that is designed to prepare a company for going public. Said information is available on Crunchbase's Stack Exchange financials page. [image: list of Series D investors] These can be followed up for further investigation via the link above and the links to those investors. More complete information is on PitchBook's Stack Overflow page.

It may have made more sense to offer SE's users and moderators the ability to invest in SE in a funding round than to seek investment from those with little or no understanding of what they purchased, who come with intellectual baggage or conflicted interests, and who may or may not wish to dismantle SE or transform it beyond recognition. Such is the behaviour of the nouveau riche, who for good reason are oft regarded as uncouth.

Rather than lamenting an opportunity lost, perhaps a means of having SE's users buy out SE should be explored, as management currently is inept enough that that may be its most realistic option.

Unfortunately SO (and SE) was bought out by Prosus for $1.8 billion. As Prosus is an ESG activist investment firm, it has the power to distort market forces by propagandizing, censoring, and proselytizing Neo-Marxist ideation. For example, it is a major shareholder in Tencent, a Chinese information firm.

Edit: BTW, I'm poisoned; I have almost completely stopped reviewing, posting, voting, etc. Stuff like this doesn't help.

34
  • 1
    Why buy them out, when we ourselves are the primary asset?
    – kaya3
    Commented Jun 17, 2023 at 18:27
  • 5
    @kaya3-supportthestrike To manage that asset properly; i.e., it would better ensure that the corporate pecuniary interests are a positive-sum game. A bit like owning a home rather than squatting in an abandoned one.
    – Carl
    Commented Jun 17, 2023 at 18:37
  • 1
    meta.stackexchange.com/questions/377908/…
    – pkamb
    Commented Jun 18, 2023 at 1:42
  • 1
    By "ESG" do you mean en.wikipedia.org/wiki/… ? Commented Jun 19, 2023 at 9:19
  • 2
    ESG = Environmental, social, and corporate governance. ... "also known as environmental, social, governance, is a business framework for considering environmental issues and social issues in the context of corporate governance. It is designed to be embedded into an organization's strategy that considers the needs and ways in which to generate value for all organizational stakeholders (such as employees, customers, suppliers, and financiers)." Commented Jun 19, 2023 at 21:15
  • 2
    Re "Prosus is an ESG activist investment firm": Who says that? Are you sure they are not solely driven by quarterly results? Commented Jun 19, 2023 at 21:19
  • 3
    If they were, how is it connected to the current crisis? It sounds more like an opportunity to air a particular political view. Commented Jun 19, 2023 at 21:23
  • 2
    @This_is_NOT_a_forum Read their website; they say it themselves. Their activist policies are reflected here. This isn't just about the new AI policy; it comes with a portfolio of other activist policies, e.g., climate change, pronoun usage, Covid-19 propaganda, and lately AI. To turn this around and say that it is I who is airing a particular political view is to ignore the fact that you are being told what you are allowed to think on a slew of issues. I didn't choose the issues, but I will be damned if I will ignore all of them except the one that you object to.
    – Carl
    Commented Jun 20, 2023 at 3:31
  • 2
    Do you truly believe the SE users will invest in SE under the current circumstances? Why would I invest my money in SE after seeing a lot of spam posts and ChatGPT posts?
    – Nobody
    Commented Jun 22, 2023 at 8:25
  • 5
    @Nobody They are acting exactly like countless companies did around the IT hype of the early 2000s. They have no goals or long-term strategies, just mindless chasing of a buzzword hype. The only thing of substance is to repeat the buzzword as much as possible - that's the entire business model. It's just castles made of sand with a label on top; just switch the label from "IT" to "AI" or any other fashionable buzzword. All such companies sooner or later violently crash into bankruptcy and the investors lose all their money. Why would they do it? Because scamming is as old as mankind itself.
    – Lundin
    Commented Jun 22, 2023 at 13:48
  • 5
    As for buying SE, the company has zero value to the community. All content is open license. A non-profit, open source community like Codidact can legally just pull the data from SE just fine, as long as there is attribution to the original author. Some sites like writing.codidact.com did just that - exported the entire content from writing.stackexchange.com and started anew. Why pay $1.8 billion when you can have it for free. This was a failsafe that Atwood/Spolsky made on purpose, in case SO would turn into complete a**hats in the future.
    – Lundin
    Commented Jun 22, 2023 at 13:54
  • 4
    @Nobody It can be compared to Reddit, which is destroying the foundation of the site's value, in the hope that they'll be able to sell it before the top floors collapse, and they'll be able to misrepresent the operating expenses to the person they sell it to, because there will be no foundations. Commented Jun 22, 2023 at 20:04
  • 3
    @Lundin Wow. So without the good will of its users, SO (SE) is essentially worthless. It wouldn't take much money to start over with a better administration. I've seen that idea presented here before, with the Monica Cellio incident. If I had to guess, that may happen this time; people are tired of fighting nonsense.
    – Carl
    Commented Jun 22, 2023 at 20:04
  • 1
    @Nobody One can compare SO with Vice, which was at one point valued at $5.7 billion. Vice was just bought out of bankruptcy for $222 million, or 3.9% of what was once claimed.
    – Carl
    Commented Jun 23, 2023 at 23:13
  • 3
    What do you mean by "poisoned"? Commented Jul 14, 2023 at 16:31
7

As someone who has been working on programming and data analysis on computers since 1981, I am very concerned about allowing people to submit answers that are not a product of their own minds but a product of an artificial intelligence that has simply collected a series of individual 'facts', made available by persons unknown, which leads to a conclusion which may (or may not) be plausible.

The same applies to artificial intelligence (AI) tools checking whether an artificial intelligence has created an answer.

That the moderators are not using these tools is, to me at least, obvious, based on their actions that I have observed in the last 6 months.

I myself have flagged/voted on many answers where the style of writing, the lack of sources, and the often very weird constellations of 'facts' made it obvious, to anyone with the required background knowledge in the topic of the question, that they were not the product of the author's own mind.

Over the last 2 weeks, the user Serg Z on Law and History Stack Exchange has flooded many existing questions with junk answers.

How should a later reader determine which answers are based on knowledge and facts and which are simply a collection of random facts pasted together by an AI and submitted by a user as if it were their own answer?


Below is an email I sent to three friends today.

  • a Professor of Philosophy from Vancouver, Canada
    • who is sympathetic to the idea of a central source of knowledge
  • a US programmer since the mid-1960s, working on supply automation in the car industry and later debit bookings
  • the Italian author of a well-known PC-based spatial software package, with whom I have worked for many years

I have added this mainly for the benefit of the business leadership of Stack Exchange, Inc., so that they can, hopefully, better understand, from the viewpoint of someone who has done data analysis on computers for many decades, why the AI policy should be to treat such answers as a form of plagiarism (and therefore delete them), since the answer is not a product of the author that submitted it.

The ChatGPT experience was from December 2022, when it became clear that the system (at least then) was designed to give an answer at all costs, not caring whether the result was nonsense. This is copy-and-paste knowledge at its worst.


I was taught that one 'had to learn how to learn'.

One lesson was that one had to have enough background knowledge to determine if any found/returned result was plausible (part of the 'learning' process).

Next, check the result against, preferably, multiple independent, reliable sources.

Finally assume that any result might not be conclusive and that multiple possibilities may exist (often dependent on certain situations).

In 1981, long before the first MS-DOS for PCs was available here, the first lesson in the programming course was: a program will always be as stupid as the person that programmed it (you can't program a bookkeeping program if you know nothing about bookkeeping). Lesson #2: the main difference between a stupid program and a stupid person is that the stupid program is capable of making more mistakes faster.


Tell this to some people today and you can see the question in their eyes: 'What alternative universe (far, far away) does this person come from?'

This is what is meant by 'a critical lack of tech skills' in the article below.

In the report the tech giant [Google] also highlights what it calls "a critical lack of tech skills in the UK", which - if unaddressed - will "remain a stubborn barrier to equitable nation-wide growth, especially as demand for AI and other tech expertise soars".

A few years ago, I saw a quote:

The internet, by making information available to everyone, will make them smarter

  • well, we got that one wrong, didn't we?

As far as I am concerned, for the likes of ChatGPT and Bard, the first two lessons from 1981 still apply.


On the History Stack Exchange site, there was a question about a ChatGPT result that the person couldn't make sense of:

So I tried the ChatGPT system for the first (and only) time to see if I could figure out how ChatGPT came to the result that the OP of the question reported.

My conclusion was that ChatGPT was designed never to say 'I don't know' or 'insufficient results to come to a (sane) conclusion'.
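To make that concrete, here is a toy sketch (my own illustration with invented numbers, not how ChatGPT is actually built) of why a system that always emits the statistically likeliest continuation never volunteers 'I don't know':

    // A next-text picker that always returns the likeliest continuation.
    // The candidate continuations and probabilities are invented.
    const continuations: Record<string, number> = {
        "the conference was held in Paris": 0.5,
        "the conference was held in Moresnet": 0.4,
        "I don't know": 0.1, // rarely the likeliest option in a confident corpus
    };

    function mostLikely(options: Record<string, number>): string {
        return Object.entries(options)
            .reduce((best, current) => (current[1] > best[1] ? current : best))[0];
    }

    console.log(mostLikely(continuations)); // always something fluent, never an admission

Unless an admission of ignorance is itself the likeliest continuation, such a system will always produce something fluent instead.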

It collected 3 facts that were true for a 3-month period in 1867:

  • 1815-12: Dutch and Prussian representatives created Neutral Moresnet
    • since 1830 Belgium instead of the Netherlands
    • existed until 1914
  • 1867-04/06: International Monetary Conference of 1867 in Paris (Latin Monetary Union)
  • 1867-03-23 / 1867-05-11 Luxembourg Crisis

Thus the 'Moresnet Conference of 1867' (with the participation of the Netherlands, Prussia, and Luxembourg, who never belonged to the Latin Monetary Union, and with Luxembourg having had nothing to do with Neutral Moresnet) was simply 'created' as a fact, with no supporting sources, in the hope that nobody would notice that the answer is nonsense.

For the gullible, ChatGPT and Co. are a dangerous source of information.

15
  • 3
    There's a lot in this answer, but a lot of it is also not a comment on the strike. I think this answer is better suited to a question that discusses the ban on LLM content... While I agree with the answer, it does not seem to answer this question.
    – Cerbrus
    Commented Jul 28, 2023 at 9:50
  • 2
    @Cerbrus The simple non-acceptance by Stack Exchange, Inc. that AI-generated answers are plagiarism seems to me to be a major, justified cause of the strike. That the moderators don't use (as implied) the AI tools, and the reason why, is also stated. Commented Jul 28, 2023 at 10:43
  • 1
    Yes, in the first 3 paragraphs... But the rest of the answer? It kinda lacks focus. Again, I agree with your points, I just think this is not quite the place.
    – Cerbrus
    Commented Jul 28, 2023 at 11:00
  • 2
    @Cerbrus As stated, it was mainly for the benefit of Stack Exchange, Inc., to assist in understanding why the moderators and some users are reacting this way. Understanding the reason why often helps in resolving a problem. That was my intention in adding what I actually wrote to others this morning. Commented Jul 28, 2023 at 11:15
  • 1
    "For the gullible, ChatGPT and Co. are a dangerous source of information." -> did you mean "misinformation"? :)
    – OldPadawan
    Commented Jul 28, 2023 at 15:41
  • Thank you very much for your informative answer. It was a pleasure to read. Commented Jul 28, 2023 at 15:50
  • @OldPadawan No, for them it is considered a source of information. When rephrased (What major conference took place in 1867?), correct results were returned. But when 'Neutral Moresnet' was added to the question, an incorrect result was returned. Those familiar with the events (or who do cross-checking, as the OP did) recognise this. But some people don't, assuming (incorrectly) that only correct information is returned. The lack of sources makes it more difficult. ... Commented Jul 28, 2023 at 16:01
  • 1
    @OldPadawan Looking at the International Monetary Conference of 1867 quickly shows that it took place in Paris and not in Moresnet, as claimed. That is the danger in my mind, since it makes it more difficult to check whether the result is plausible. Commented Jul 28, 2023 at 16:02
  • 1
    @MarkJohnson, LLMs aren't deliberately designed to not say "I don't know". Rather, the inability to say "I don't know" is an unavoidable consequence of how they work: LLMs provide the text that, based on the training data, is most likely to follow the input. The only way to get an "I don't know" out of one is to provide it with a list of things that it doesn't know.
    – Mark
    Commented Jul 28, 2023 at 18:52
  • 1
    @Mark All the more reason that an LLM (Logic Learning Machine) should not be the source for answers based on knowledge. If it cannot state either 'Yes, an answer is possible' or 'No, a reliable answer is not possible', then it is neither logical nor intelligent. Commented Jul 28, 2023 at 19:15
  • 1
    That's a weird backronym. Generally, LLM stands for Large Language Model.
    – tripleee
    Commented Jul 29, 2023 at 10:19
  • 1
    @tripleee There are many usages of LLM as an abbreviation (see LLM - Wikipedia). That is why it should be written out when first used.
  • 1
    If you are discussing ChatGPT in particular, I think it's fair to assume that you should be familiar with the central terminology, especially as this meaning of the abbreviation LLM has become almost a household word in recent months.
    – tripleee
    Commented Jul 29, 2023 at 12:21
  • ChatGPT can be useful, just not for anything factual (or at least it must be thoroughly checked). For instance, to come up with input for a regular web search. Commented Aug 3, 2023 at 1:15
  • @This_is_NOT_a_forum When thoroughly checked (with possible adaptations/corrections/additions), it becomes a product of their own mind. When it is blindly copied, pasted, and posted as if it were their own product, it is plagiarism. The problem is that many of these 'authors' don't have the faintest idea (or, worse, don't care) whether 'their' answer is factually correct or a chapter of history from alternative-reality-1527, but submit it anyway to gain reputation. That is the problem in my mind. Commented Aug 3, 2023 at 1:51
1

If I visit the open letter website, it states that the strike is finally over:

[image: the open letter site showing the strike has concluded]

I want to know if this is an official statement or not.

Thanks beforehand.

3
  • 5
    You may want to read over the answers/comments on Moderation strike: Results of negotiations
    – Timothy G.
    Commented Aug 5, 2023 at 4:45
  • 3
    To recap, it is "official" as far as the organizer of the open letter is concerned. The strike was not tightly coordinated or controlled, so there will be different groups of users with different criteria for when exactly to resume moderation activities. Some are probably waiting to see if the promises and concessions made by the company are in fact more than just words; and some will probably never return.
    – tripleee
    Commented Aug 5, 2023 at 8:46
  • 2
    See also What do you feel like is still missing to end the strike?
    – tripleee
    Commented Aug 5, 2023 at 9:09
-29

It would make sense for a company promoting and selling new technology to buy up a company selling old technology and then mothball its old-technology operations.

Meanwhile it would make sense for a company promoting and selling old technology to get with it and maximise its selling price with the above in mind.

"Artificial intelligence" is about solving issues - perhaps even issues you didn't know existed - by running computer programs. OpenAI is a leader in the field.

SE/SO is the world's leading site featuring expert questions and answers that are written by human intelligences. Go figure.

Edit: it seems from the comments that Prosus, the company that already owns SO/SE, knows a fair bit about AI, so perhaps SO/SE is not about to be sold. As user @leanne writes, it seems the writing is on the wall: online expert question-and-answer provision based on human intelligence is about to get cut back drastically.

Another edit: OK, downvoters: so the decision has nothing to do with profit and AI-isation, but derives merely from ignorance, at the top of the company, of what the company's unpaid volunteers know. Seriously, how likely does that sound?

19
  • 19
    I don't get what you're trying to say here. Maybe some more clarity is needed.
    – starball
    Commented Jun 9, 2023 at 20:39
  • 13
    It's not really clear what you're saying here ─ are you claiming that SE, Inc. has been bought by an AI company who is now intentionally running into the ground? Or that it is pre-emptively running itself into the ground in order to make itself more appealing to be bought by an AI company? This doesn't really make sense, because if SE, Inc. will run itself into the ground then there's no need for an AI company to buy it in order to do that.
    – kaya3
    Commented Jun 9, 2023 at 20:40
  • 4
    Go figure what?
    – Levente
    Commented Jun 9, 2023 at 20:42
  • 2
    Moderators are saying they deal with a very large number of what they have concluded to be AI-generated answers. Instruct them not to be so hard on such answers and it's clear what the result will be, and in particular it must surely be clear to senior executives at this company, who one can assume are not stupid. The result will be that AI-generated answers build towards taking over. Who's that profitable for? This website does not exist for community. It exists for profit.
    – tell
    Commented Jun 9, 2023 at 20:44
  • 1
    @kaya3 - It makes perfect sense. SE does seem to be running the human side of its world-leading EQ&A operation into the ground. I'm suggesting that that's a means, not an aim in itself.
    – tell
    Commented Jun 9, 2023 at 20:47
  • 1
    What is it a means to? It doesn't make the company more attractive to a buyer, even if the buyer does want SE to be destroyed. They won't pay to destroy it if it is already destroying itself for free.
    – kaya3
    Commented Jun 9, 2023 at 20:53
  • I think you are missing my point. It's unlikely that the company is destroying itself - or to be more exact, destroying the human-intelligence EQ&A service it has built up, largely using volunteer labour - for free. Its owners and senior execs would be stupid if that were the case. Is it likely they're so stupid?
    – tell
    Commented Jun 9, 2023 at 20:55
  • So, I seem to have read somewhere that the potential of AI was not suddenly recognized only in November 2022, when ChatGPT was published; early insights were available to some circles before that. Tim Urban posted about exponential AI acceleration back in 2015. I wonder: when, in summer 2021, Prosus decided to buy SO/SE, had they been entirely clueless about the role AI was going to play? Or did they make their decision in awareness of the potential in AI development? What was / is Prosus' plan?
    – Levente
    Commented Jun 9, 2023 at 20:59
  • Why is this happening?
    – Levente
    Commented Jun 9, 2023 at 21:03
  • 2
    Ah, thanks for that info. I thought SO/SE was still privately owned. But still, I think I'm on the right lines and the same applies to the current owners, who AFAICS aren't big AI players.
    – tell
    Commented Jun 9, 2023 at 21:06
  • 5
    @Levente and tell: checking out the SO blog entry, Is this the AI renaissance? (Ep. 564): yeah, Prosus has known about AI and its workings for years. They also have their own AI team. They have purchased Udemy, Codecademy, and other learning sites. I posit that "the writing's on the wall": Prosus is going to be using these sites' data for its own search/learning system that they can possibly hugely monetize. Soon, no more "volunteers" needed...
    – leanne
    Commented Jun 9, 2023 at 22:08
  • The majority of education in the hands of one monopolistic provider. What could go wrong? We are so done, so, so, so done.
    – Levente
    Commented Jun 9, 2023 at 22:11
  • @Levente: monopolistic and money-hungry! Plus, if international companies are, like US companies, beholden to their shareholders... who cares how it's done, as long as the shareholders are happy
    – leanne
    Commented Jun 9, 2023 at 22:14
  • Thanks, @leanne. So Prosus (owned by Naspers ) do know about AI then. Interesting that they own a big chunk of Tencent which owns Wechat.
    – tell
    Commented Jun 9, 2023 at 22:20
  • 2
    ChatGPT is definitely a step forward in AI technology, but unfortunately a lot of unskilled people think it has a powerful brain; it does not. Currently it is mainly an artist with words, an electrical parrot, so to say. If it gives, for example, an answer to a question regarding law, it cites non-existent laws; if it is asked to solve programming challenges, it imports non-existent classes. It can copy a bit, just like people sometimes imitate people, with mixed results. Eventually tools improve, but currently the level is insufficient. Commented Jun 21, 2023 at 21:20
-91

From a longtime SO/SE user: I do not support this strike.

A company owns this collection of sites. A corporation has the responsibility, and right, to make their own decisions about their properties' functions and appearance. They are not beholden to their volunteers regarding their decisions.

I have seen complaints about "firing" a moderator, the appearance of voting buttons on the sites, and now the disallowance of removal (except in very narrow circumstances) of perceived AI-related posts.

As with any company, I'm sure that Stack Exchange Inc's management has made decisions based on input and findings by their employees as to the best way to proceed.

Just because some volunteers don't like some decisions, and/or the way they were communicated, does not mean the company has to do what the volunteers want done. The company must do what they think is best for their bottom line, including what they think is best for their users.

I personally, and most people I know, have stopped using SO for any new questions because of the combative way we have been treated by some volunteers who believe they know exactly what's right and wrong in every case; who downvote and close questions even when they don't have experience with what's being asked. They're so professional and knowledgeable that they can somehow magically determine whether a question or answer is valid without any experience in the topic.

There are a bunch of companies out in the world, including the creators of ChatGPT, who are trying to find a way to determine whether something was created by AI or a human being. If all of those people can't even determine 100% if something was created by AI, then what extraordinary ability do some SE volunteers have to determine, without a doubt, that a post's information was provided by AI?

OMG: instead of volunteering, you should be getting paid the big bucks for this incredible skill!!!


Update:

After doing some research on Prosus and checking out the SO blog entry from April 2023, Is this the AI renaissance? (Ep. 564), I can say that:

  • Prosus has known about AI and its workings for years.

  • They have their own AI team.

  • They have purchased Stack Exchange, Udemy, Codecademy, and other learning sites.

Based on this information, I posit that "the writing's on the wall": Prosus is going to be using these sites' data for its own search/learning system that they can possibly hugely monetize. Soon, no more "volunteers" needed...

And, if international corporations, like US corporations, are beholden to their shareholders... well, that's why they seem to not care about their volunteers.


Update to clarify my update:

For those who might not understand what I'm saying about shareholders: Prosus, owner of the SE/SO collection of sites, is a public company. "Public" means that a company, such as Prosus, issues stock - i.e., portions of the company that can be purchased by the public or provided to employees as compensation. The people who own this stock are called "stockholders" or "shareholders".

Although these companies are not actually required by law to maximize profit for their shareholders, they often do exactly that - which is why I say "beholden to their shareholders".

What I'm saying with the earlier update is that Prosus can, and likely will, find the optimal way to increase their profits in order to satisfy their shareholders. That may include using the purchased assets, such as SE/SO data, to train AI to teach people to code and to answer their questions - which may or may not make the SE/SO (SO, particularly) sites OBE (overtaken by events, i.e., no longer necessary).

This has nothing whatsoever to do with my non-support of this strike - the reasons for which are in the portion of my post prior to the updates. The updates are merely further information around why the company might not be interested in yielding to the strikers' demands.

67
  • 59
    "They are not beholden to their volunteers regarding their decisions." ─ And we are not beholden to them regarding the hosting of our communities. The whole Stack Exchange model cannot work if there is no agreement that it's in everyone's interests for them to continue hosting us and for us to continue maintaining the value of their product by curating and moderating it. "The company must do what they think is best for their bottom line, including what they think is best for their users." ─ sure, and if they want to know what's best for us they can listen to us about it.
    – kaya3
    Commented Jun 8, 2023 at 23:44
  • 9
    Do you really in your heart believe that Stack Overflow Inc staff always make the correct decisions? That they should take no input from the people that made this network of sites a success in the first place? Software developers who can write and maintain the apps this site runs on are readily found. A legion of people willing to put forth their own unpaid time to actually make the site useful are not readily found.
    – mason
    Commented Jun 8, 2023 at 23:50
  • 31
    The way we are able to tell with greater accuracy than ChatGPT detection tools whether a post is written by ChatGPT is not because we have discovered some magical solution to the problem that OpenAI are trying to solve, but because the problem we are solving is a different and much easier one. OpenAI want to do it using just the text of the post, whereas we have lots of other input data available, including the original question, the user's post history, and other context; and OpenAI want to minimise both false positives and false negatives, while we mainly want to minimise false positives. (A toy sketch after these comments illustrates that threshold trade-off.)
    – kaya3
    Commented Jun 8, 2023 at 23:50
  • 50
    Hi @leanne. I'm Fred. I've been answering questions on SO from late 2010 to late 2016, and I also have been "curating" the site during this time period. I am aware the private SE company can do whatever they want with their assets, or, as you say, "properties". Problem is, volunteer work is not an asset or a "property". It is not paid for. It is not taxed. It is something one does out of their own time and (sometimes) money. The unique situation the SE company finds itself in is that it depends on distributed, volunteer work, and they cannot easily replace that. Commented Jun 8, 2023 at 23:51
  • 10
    Detecting ChatGPT-generated text can be improved by leveraging human intuition and contextual understanding. Humans possess a vast array of knowledge and experiences that enable them to spot inconsistencies, logical fallacies, and contextual discrepancies in generated text. They can identify nuances, sarcasm, and understand context-specific cues, allowing them to differentiate between genuine human responses and AI-generated ones. By actively engaging with the content, asking probing questions, and critically analyzing the text, humans can bring a depth of understanding that surpasses AIs.
    – kaya3
    Commented Jun 9, 2023 at 2:57
  • 10
    The parallel with the book publisher would be an author who turns in a new 300-page manuscript every week on topics ranging from tropical fish to child rearing to nuclear physics to spiritual growth to agricultural sustainability to the history of the Incas.
    – tripleee
    Commented Jun 9, 2023 at 3:51
  • 11
    This post looks like it is just trying to pick a fight. You don't agree with the strike? Fine, no one here is forcing you - everyone has their views and those should be respected. You, on the other side, aren't respecting them. Based on your reaction, I assume you are just venting out of frustration. It is clear what your purpose is here: "I personally, and most people I know, have stopped using SO for any new questions because of the combative way we have been treated by some volunteers", so now you are here to talk trash about EVERY volunteer.
    – SPArcheon
    Commented Jun 9, 2023 at 7:44
  • 27
    "OMG: instead of volunteering, you should be getting paid the big bucks for this incredible skill!!!" - this line makes your purpose quite clear. You see the volunteers as the ones who did you wrong, and now you are felling a twisted pleasure in seeing them done wrong. I think you may find a far more productive use of your time in reflecting about why you "were done wrong". And I will go even as far as to say that is completely possible that you were indeed punished without reason. But in that case you should just ask the company to review you case instead of being mad at the world.
    – SPArcheon
    Commented Jun 9, 2023 at 7:48
  • 30
    Lastly, since you mentioned a "fired volunteer", I suggest that you actually read about that before making empty jokes that only make YOU look quite bad. The "fired volunteer", as you call it, was subject to a slandering campaign on the web that the company started, not just removed from their position. You may think that they were "fired" with a reason? Again, fine. But it is NOT fine to go and make that person your scapegoat puppet in the hope of gaining some free advertisement like the company did. So, please: DON'T JOKE ABOUT THAT.
    – SPArcheon
    Commented Jun 9, 2023 at 7:52
  • 9
    they have the right to "fire" people, revoke moderation rights (since those weren't paid employees in the first place), suspend or ban users. But they don't have any right to go around and tell the press that "there is this moderator that refuses to acknowledge that everyone should be allowed to choose how they identify and was planning to misrepresent users identities on purpose with the sole intention of hurting people, so we removed her and advise no one should hire her ever. BTW here is her mail if you want to send in well deserved hate letters". Yet that is what was done back then.
    – SPArcheon
    Commented Jun 9, 2023 at 12:21
  • 7
    @leanne I haven't seen you provided any reasoning for why you disagree with the strike. The only reasoning you seem to have provided is that Stack Overflow Inc is a company and can legally do what it wants - but now that we agree that's true, that still doesn't provide any reasoning for why moderators shouldn't strike. What about their reasoning for striking do you disagree with?
    – mason
    Commented Jun 9, 2023 at 15:30
  • 20
    If your disagreement with the strike is about whether a human can detect AI-generated content or not, then you should have made your answer about that. As written now, it's some rant about how Stack Overflow Inc is a company and can therefore legally do whatever it wants - and that's hardly relevant to the discussion. Practically nobody disagrees with that - it's just that it misses the point and is completely irrelevant. That's why you have so many downvotes here.
    – mason
    Commented Jun 9, 2023 at 16:22
  • 10
    Your position would be clearer if you didn't mix it in with irrelevant grievances about other poorly-received changes unrelated to the strike, your own questions being closed, which institutional investors happen to own shares, and so on. It's very hard to discern any logical relation between any of the points you are trying to make, and you seem to be contradicting yourself in multiple ways. So, despite the volume of text you have written, I do not understand what "not support" really means if you don't think we should stop striking, and you don't think SE should refuse what we ask.
    – kaya3
    Commented Jun 10, 2023 at 0:51
  • 9
    OK, so you are wrong, and that's fine.
    – kaya3
    Commented Jun 10, 2023 at 19:24
  • 16
    Why do you keep highlighting the word volunteer in an apparently condescending way? Just because they are unpaid does not devalue their contributions. Remove the volunteers from the network and see how much "content" remains.
    – Someone
    Commented Jun 11, 2023 at 0:36
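As an aside on kaya3's point above about minimising false positives: the same detection signal can serve very different goals depending on where the action threshold is set. A minimal sketch, with a hypothetical score and made-up numbers (no real detector reduces to a single number like this):

    // The same "likelihood of being AI" score supports different goals
    // depending on the action threshold. Score and numbers are hypothetical.
    interface Post {
        aiScore: number; // assumed signal in [0, 1]; higher = more AI-like
    }

    // A vendor tool balancing false positives against false negatives might
    // act near 0.5; moderators who must above all avoid false accusations
    // would act only on much stronger evidence.
    const BALANCED_THRESHOLD = 0.5;
    const LOW_FALSE_POSITIVE_THRESHOLD = 0.95;

    function flagForReview(post: Post, threshold: number): boolean {
        return post.aiScore >= threshold;
    }

    console.log(flagForReview({ aiScore: 0.7 }, BALANCED_THRESHOLD));           // true
    console.log(flagForReview({ aiScore: 0.7 }, LOW_FALSE_POSITIVE_THRESHOLD)); // false

Raising the threshold trades missed AI posts (false negatives) for near-certainty that a flagged post really is AI-generated (few false positives).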
