
Note: as part of the strike organization, this post is a mirror of a post on MSE

Update

On August 2nd, 2023, negotiations between community representatives and representatives of the company concluded with an agreement being reached. The details of the agreement can be found at Moderation strike: Results of negotiations.

On August 7th, 2023, based on the result of several polls held by various sections of the community, the coordinated call to strike concluded. Further details can be found at Moderation strike: Conclusion and the way forward.


Introduction

As of today, June 5th, 2023, a large number of moderators, curators, contributors, and users from around Stack Overflow and the Stack Exchange network are initiating a general moderation strike. This strike is in protest of recent and upcoming changes to policy and the platform that are being performed by Stack Exchange, Inc.1 We have posted an open letter addressed to Stack Exchange, Inc. The letter details which actions are being avoided, the main concerns of the signees, and the concrete actions that Stack Exchange, Inc. needs to take to begin to resolve the situation. Striking community members will refrain from moderating and curating content, including casting flags, and critical community-driven anti-spam and quality control infrastructure will be shut down.

However, the letter itself cannot contain all of our concerns, and we felt it was important to share some of the background and details that were not included in the letter in the interest of brevity. We also wanted to touch upon several points at the same time that are related to Stack Exchange, Inc.’s recent behavior.

Background

A history of the Artificial Intelligence policy

On December 5th, 2022, Stack Overflow moderators instituted a “temporary policy” banning the use of ChatGPT in particular on the site. This was instituted due to the general inaccuracy of the answers, as well as the fact that such posts violate the referencing requirements of Stack Overflow. The moderator team kept an eye on community feedback to guide it, and support for the policy swelled. Similar policies were enacted across the network.

Within the next several days, thousands of posts were removed and hundreds of users were suspended for violating this policy.

Over the next few months, Stack Exchange, Inc. staff assisted in the enforcement of this policy. This included adding a site banner announcing the ban on these posts as well as editing and adding Help Center articles to mention this policy. Moderators were also explicitly given permission to suspend for 30 days directly in such cases, skipping the escalation process that is generally encouraged.

On May 29th, 2023 (a major holiday for moderators in the US, CA, UK, and possibly other locations), a post was made by a CM on the private Stack Moderators Team2. This post, with a title mentioning “GPT detectors”, focused on the rate of inaccuracy experienced by automated detectors aiming to identify AI-generated and specifically GPT-generated content – something that moderators were already well aware of and taking into account.

This post then went on to require an immediate cessation of issuing suspensions for AI-generated content and to stop moderating AI-generated content on that basis alone, affording only one exceptionally rare case in which it was permissible to delete or suspend for AI content. It was received extremely poorly by the moderators, with many concerns being raised about the harm it would do.

On May 30th, 2023, a version of this policy was posted to Meta Stack Exchange and tagged [mod-agreement-policy], making this a binding moderator policy according to the Moderator Agreement. The policy on Meta Stack Exchange differs substantially from the version issued in private to the moderators. In particular, the public version of the policy conspicuously excludes the “requirements” made in private to immediately cease practically all moderation of AI-generated content.

The problem with the new policy on AI-generated content

The new policy, establishing that AI-generated content is de facto allowed on the network, is harmful in both what it allows on the platform and in how it was implemented.

The new policy overrode established community consensus and previous CM support, was not discussed with any community members, was presented misleadingly to moderators and then even more misleadingly in public, and is based on unsubstantiated claims derived from unreviewed and unreviewable data analysis. Moderators are expected to enforce the policy as it is written in private, while simultaneously being unable to share the specifics of this policy as it differs from the public version.

In addition to these issues in how Stack Exchange, Inc. went about implementing this policy, this change has direct, harmful ramifications for the platform, with many people firmly believing that allowing such AI-generated content masquerading as user generated content will, over time, drive the value of the sites to zero.

A serious failure to communicate

Throughout the process of creating, announcing, and implementing this new policy, there has been a consistent failure to communicate on the part of Stack Exchange, Inc. There has been a lack of communication with moderators and a lack of communication with the community. When communication happened, it was one-sided, with Stack Exchange, Inc. being unwilling to receive critical feedback.

An offer by Philippe, the Vice President of Community, to hold a discussion in the Teachers’ Lounge moderator-only chatroom took days to be realized. During that conversation, certain concerns were addressed3, but the difficult questions remained unanswered – particularly about the lack of communication ahead of time.

The problem with AI-generated content

This issue has been talked about endlessly, both all around the Stack Exchange network and around the world, but we feel it’s important to highlight a few reasons why several communities, not just Stack Overflow, decided to ban AI-generated content. These reasons serve as the backbone not only for our moderation stance against AI-generated content, but also why we feel confused and betrayed by Stack Exchange, Inc.’s sudden decision to halt our efforts to enforce our community-supported decision to ban it.

To reference Stack Overflow moderator Machavity, AI chatbots are like parrots. ChatGPT, for example, doesn’t understand the responses it gives you; it simply associates a given prompt with information it has access to and regurgitates plausible-sounding sentences. It has no way to verify that the responses it’s providing you with are accurate. ChatGPT is not a writer, a programmer, a scientist, a physicist, or any other kind of expert our network of sites is dependent upon for high-value content. When prompted, it’s just stringing together words based on the information it was trained with. It does not understand what it’s saying.

That lack of understanding yields unverified information presented in a way that sounds smart, or citations that may not support the claims, if the citations aren’t wholly fictitious. Furthermore, the ease with which a user can simply copy and paste an AI-generated response simply moves the metaphorical “parrot” from the chatbot to the user. They don’t really understand what they’ve just copied and presented as an answer to a question.

Content posted without innate domain understanding, but written in a “smart” way, is dangerous to the Stack Exchange network’s goal: to be a repository of high-quality question-and-answer content.

AI-generated responses also represent a serious honesty issue. Submitting AI-generated content without attribution to its source, as is common in such a scenario, is plagiarism. This makes AI-generated content eligible for deletion per the Stack Exchange Code of Conduct and rules on referencing. However, in order for moderators to act upon that, they must identify it as AI-generated content, which the private AI-generated content policy limits to extremely narrow circumstances that occur in only a very small percentage of the AI-generated content posted to the sites.

This isn’t just about the new AI policy

While a primary focus of the strike is the potential for the total loss of usefulness of the Stack Exchange platform caused by allowing AI-generated content to be posted by users, the strike is also in large part about a pattern of behavior recently exhibited by Stack Exchange, Inc.

The company has once again ignored the needs and established consensus of its community, instead focusing on business pivots at the expense of its own Community Managers, with many community requests for improved tooling and a better user experience left on the back burner. As an example, chat, one of the most essential tools for moderators and curators, is desperately out of date, with quick, high-impact changes being ignored for years.

Furthermore, the company has repeatedly announced changes that moderators deem would cause direct harm to the goals of the platform, this policy on AI-generated content among them. The community, including moderators and the general contributor base, was not consulted or asked for input at any point before these changes were announced, and the announcements were phrased in a manner that indicated there was no possibility of retraction or even a trial period.

Some of these planned changes have been temporarily held off due to controversy, with this strike influencing those decisions, but that does not change the recent tendency of Stack Exchange, Inc. to make decisions affecting the core purpose of the site without consulting those most affected.

The events of the last few weeks seem like history repeating itself: Stack Exchange, Inc. ventures into a new pursuit (this time, generative AI) that runs contrary to the community’s interests, makes a decision at odds with all the feedback available to it, ceases communication with us, and we go on strike. This is similar to what happened the last time the community moderators prepared to go on strike.

How we resolve this

Even though the strike may end, many community members are not comfortable with returning to the status quo that preceded the AI policy if nothing else changes. The strike’s focus on the AI policy does not downplay the significance of SE’s other actions. We deserve much more than just a retraction of the AI policy. Stack Exchange already made promises after the 2019 debacle that they have since failed to meet. We are worried that Stack Exchange will continue down the same path once the situation calms down.

While the recent actions by Stack Exchange, Inc. are in conflict with the community and take a significant step backward in terms of the relationship between the company and the community, we do not think that our relationship is beyond repair. We do however worry that we are nearing the point at which it cannot be repaired anymore.

While it certainly may be true that the company wants to meet our needs and care for us, the reality is that this is not happening. It is time to wake up and realize what must be done. Stack Exchange, Inc. is not acting in our interest. It is time for it to start doing so.

What the striking users want

For the strike to end, the following conditions must be met:

  • Retract the AI policy change and subsequently revise it to a degree that addresses the expressed concerns and empowers moderators to enforce the established policy of forbidding generated content on the platform.
  • Reveal to the community the internal AI policy given directly to moderators. The fact that you have issued one policy in private and a significantly different one in public has put the moderators in an impossible situation, and made them targets for accusations of being unreasonable and of exaggerating the effect of the new policy. Stack Exchange, Inc. has done the moderators harm by the way this was handled. The company needs to admit its mistake and be open about this.
  • Communicate clearly and openly regarding establishing and changing policies or major components of the platform, with extensive and meaningful public discussion beforehand.
  • Communicate honestly and clearly about the way forward.
  • Collaborate with the community, instead of fighting it.
  • Stop being dishonest about the company’s relationship with the community.

A change in leadership philosophy toward the community

We need business leadership to engage meaningfully with community members and Community Managers, because currently it appears that leadership ignores them.

Immediate financial concerns appear to drive feature development. The community also has feature development wants and needs, but those needs receive no substantial consideration, let alone resource allocation. The little weight leadership gives to the community and CMs even leads to business decisions that are reckless and harmful, like the AI policy.

Leadership needs a change in philosophy to one that treats the community as more than a product, and values its needs and expertise. Such a philosophy is evidently missing at present, and leadership takes the expertise of its own product for granted. Leadership needs to live this philosophy by actually allocating resources based on community needs as well as its own, and by informing its feature development with the expertise of the community. Development can be guided by both business and community needs!

In conclusion

The sites on the Stack Exchange network are kept running smoothly by countless hours of unpaid volunteer work, and, in some cases, projects paid for out of pocket by community members. Stack Exchange, Inc. needs to remember that neglecting and mistreating these volunteers can only lead to a decrease in the goodwill and motivation of those contributing to the platform.

A general moderation strike is being held until the concerns laid out in the open letter and this post are addressed. Moderators, curators, contributors, and users, you are welcome to join in by signing your name in the strike letter.


1While we’re aware that the legal name of the company is “Stack Exchange, Inc.”, the name “Stack Overflow” is more recognizable, and thus used in the open letter. The “Inc.” serves to demonstrate that our concerns lie with the corporate entity, and not the site itself, its moderators, or individual employees.

2Stack Exchange, Inc. provides a free Stack Overflow for Teams instance for Stack Exchange moderators, allowing moderators to store and share private information, bug reports, documentation, and communication with SO staff.

3This includes another planned change to the foundational systems of the platform that has the potential to facilitate unprecedented levels of abuse. (This was referred to as “the second shoe” during the planning stages of the letter and the strike, as in “waiting for the other shoe to drop”.) This has been delayed indefinitely while parts of the plan are reconsidered.

  • 34
    This post is a mirror of a version on MSE. If you want to edit it, please consider making edits upstream there and then copying the edits over to here to keep them in sync.
    – starball
    Commented Jun 5, 2023 at 7:25
  • 83
    I would feature this, but I'm on strike. Edit: To ask about or discuss the strike, please join us in The Meta Room. Commented Jun 5, 2023 at 7:31
  • 114
    We can't feature it anyway. We've been told the company will unfeature anything mentioning the strike. Commented Jun 5, 2023 at 7:45
  • 55
    Having written a large percentage of this post, I can confirm that it wasn't AI-generated unless someone's been keeping some very big secrets from me.
    – Mithical
    Commented Jun 5, 2023 at 9:17
  • 25
    @JanTuroň if the current generation of AIs were capable of producing something as coherent, well argued, and correct as this post, we wouldn't be having this discussion in the first place. Commented Jun 5, 2023 at 10:37
  • 14
    @Gimby While I completely support the moderators' strike action, I'm sadly convinced that it's far too little, far too late.
    – Ian Kemp
    Commented Jun 5, 2023 at 10:42
  • 18
    @Elikill58 Probably. The strike includes SOCVR and many of the active curators. There are 343 signatures (and rapidly increasing) from all over the network. Commented Jun 5, 2023 at 11:32
  • 33
    @Andreasdetestscensorship You haven't heard about it because they said it privately to specifically striking mods. Well, a subset of striking mods, because all communication between the company and striking mods has so far been awfully inefficiently handled. Commented Jun 5, 2023 at 12:05
  • 19
    @JanTuroň on the contrary I do believe the curators and moderators aren't disposable especially not by AI. They can be replaced yes, but that would probably be quite costly to Stack Overflow (Not just in monetary terms but as well as in the health of the platform). What I do see is that many members of the community have raised concerns and also asked for transparency from the company over the past few days but so far I don't see any response from the company on the same. Commented Jun 5, 2023 at 13:41
  • 26
    @JanTuroň ironically, one of the things SE held against the mods is that AI detection tools are unsatisfactory. AI is not going to replace mods anytime soon.
    – cottontail
    Commented Jun 5, 2023 at 14:07
  • 25
    “Stack Overflow, Inc. cannot consistently ignore, mistreat, and malign its volunteers” – oh, but they can. And they have, many times in the past. Last time, most moderators didn't walk away from their positions when push came to shove, and many of us predicted that would be taken as a green light for stuff like that to happen again and again
    – Pekka
    Commented Jun 5, 2023 at 14:10
  • 21
    @JanTuroň How exactly does an answer that gets fundamentals factually wrong and makes up things serve the community at large (forgetting the mods for a second)? This is a phenomenon I’ve personally seen here. Once, but it was stunning. How do the posts of people who don’t know a subject enough to vet AI glitches doing this on repeat benefit people who consult the site for factual answers? This isn’t about “demanding moderators”, this is about general user trust that the answers work. The mods help achieve that and in this case it is clearly to user, non-mod, benefit to push back.
    – JL Peyret
    Commented Jun 5, 2023 at 17:01
  • 15
    Had the change truly been just banning GPT detectors from being the main justification used, that would've been perfectly fine, something we've also told the company repeatedly. They're also looking for something to blame for the decline in traffic, and they've opted to pin that blame on us. Commented Jun 6, 2023 at 13:28
  • 15
    It's not just circumstantial evidence; unless it's by explicit admission, the internal guidance for the policy says it isn't good enough. We have lots of indicators and patterns that prove a post is GPT well beyond reasonable doubt. Seeing as you've flagged a few posts, you've probably picked up on some of these yourself. That scales; mods and flaggers on SO sees a lot of these posts, and over time, that adds up. Why the exact guidance given to mods hasn't been made public yet is beyond me. Commented Jun 6, 2023 at 13:50
  • 21
    Using your own brain to detect ChatGPT content is similar to detecting plagiarism, which many mods / curators have honed to a fine art over years of practice. Additionally, ChatGPT text tends to feel superficial: it looks ok on the surface but it doesn't feel like there's anything solid beneath that surface, especially in longer passages. "The lights are on, but nobody's home".
    – PM 2Ring
    Commented Jun 6, 2023 at 14:51

5 Answers

176

I am not a moderator but I have flagged a lot of AI-generated answers with 0 false positives (yes 0!).

As an expert in my field, I can identify wrong, useless, nonsensical answers that are clearly generated by AI and don't deserve to be on Stack Overflow.

The Web is already suffering from this AI-generated content, especially in the programming field, with a ton of articles and posts that make no sense. So let's, at least, keep Stack Overflow a safe place with HUMAN content.

  • 41
    Absolutely. Maybe it's not possible to incontrovertibly prove that an answer was generated by an LLM, but the likelihood of a dedicated SME with years of experience misidentifying an LLM post approaches 0. And if false positives may arise (I haven't seen any evidence of this yet), surely there's a better way to deal with it than by effectively prohibiting all enforcement. The patterns are obvious – new users churning out 20 answers in random tags they've never posted in before with zero grammatical mistakes, half of which are wrong and happen to look exactly like ChatGPT. What are the odds?
    – ggorlen
    Commented Jun 6, 2023 at 1:26
  • 3
    Just out of curiosity: Have you found a single AI generated answer that you would say was correctly answering the question? (I don't want to say high quality because that might be subjective.) Commented Jun 6, 2023 at 6:51
  • 19
    @Trilarion of course. It's pretty common. It would be quite surprising if the AI managed to be never correct, especially on relatively simple questions.
    – Ryan M Mod
    Commented Jun 6, 2023 at 7:17
  • 4
    @Trilarion even if that were the case, I wouldn't consider it "correctly" answering the question but more like a "lucky shot". You can get some generated information that may give you the answer for some basic tasks. But I can easily make the OP hesitate with some tricky comments and a down-vote, and in most cases they will delete it because they are not confident and don't know whether the information is correct. For me it's also part of the garbage that needs to be cleaned. Commented Jun 6, 2023 at 8:31
  • 7
    @Trilarion I don't recall specific instances, but probably, yes. It is impossible for moderators to reliably sort out correct vs. incorrect AI-generated answers at Stack Overflow's scale, especially given that we do not have experts in all topics on the mod team. It takes considerably more effort to verify whether the answer is correct than it does to use the AI to generate it.
    – Ryan M Mod
    Commented Jun 6, 2023 at 8:35
  • 4
    They would have been deleted, as they violated the blanket ban on AI-generated content. Also, it's exceedingly unlikely that the mod who happened to review such a case also happened to be a subject-matter expert, and, even if they were, it's unlikely that they spent the time evaluating technical correctness (since mods don't evaluate that at all) but instead evaluated whether it was the output of an LLM. Commented Jun 6, 2023 at 12:09
  • 3
    To expand a bit: even where a mod is a subject-matter expert, it's still difficult to verify if an answer actually solves a problem, since you would often need to reproduce the problem, apply the solution, and test it... that's a lot more time-consuming than confirming whether the answers contain nonsense LLM hallucinations.
    – Ryan M Mod
    Commented Jun 6, 2023 at 13:05
  • 4
    Re "The Web is already suffering from those AI-generated content": Yes, many blog articles are now AI-generated. The first I encountered was on the otherwise reputable Baeldung. As far as I can tell it is completely bogus. Either it slipped through the cracks or they don't care about quality. It feels like content farms mkII, but now with completely useless information. We may soon look back fondly on low-quality forums; they occasionally had something of value. Commented Jun 6, 2023 at 15:16
  • 2
    It could be the new tag line that people are drawn to during the upcoming AI hangover: "Stack Overflow - trusted and free of AI content". They could fix search while they are at it. Commented Jun 6, 2023 at 15:45
  • 4
    That may be the gist: Stack Overflow has lost its killer feature: Very quick answers, either directly or through search. And most questions go unanswered (except in the homework tags), even after 24 hours (80% unanswered in that sample; only one out of five questions is answered). ChatGPT offers that, however unreliable. Commented Jun 6, 2023 at 16:08
  • 9
    Why would people go to Stack Overflow if there is only a 20% chance of an answer and search is broken? Commented Jun 6, 2023 at 16:13
  • 4
    @PeterMortensen I imagine a workflow could be like this: first response to a problem - ask GPT, if the answer didn't work - search on Google, if you found nothing, ask on the web (SO). With a bit of luck we end up with only genuinely new questions. Commented Jun 6, 2023 at 16:26
  • 2
    I think it's a loss that working AI created content may have been deleted, because it's about quality not provenance, but I understand that a more nuanced policy is much more difficult to implement than an all or nothing. I also understand that moderators are not subject matter experts. I foresee that in the future, when AI gets better, we will have many of these discussions again and again. Commented Jun 6, 2023 at 16:28
  • 3
    @Trilarion That's good news. It sounds like the quality standards on Stack Overflow can be made stricter, so that simple stuff that AIs can answer doesn't need to be here at all. ChatGPT is not a competitor; it is an alternative.
    – Gimby
    Commented Jun 7, 2023 at 13:36
  • 3
    But how do you know it never was a human whose answer was wrong and made no sense? Commented Jun 12, 2023 at 12:33
160

I am a former moderator for Stack Overflow, and have been largely inactive since my resignation in 2019.

An email from someone who was (and maybe still is) a moderator for another Stack Exchange site made me aware of this strike.

I'm going to paste here what I wrote in the body of that email.

It is… difficult to give up something you love, and more difficult when you’ve put years of your life into it. But sometimes stepping away is the only way to preserve the ideal that your time, your love, your care is and should be worth more than it’s being valued.

  • 83
    It's chilling how this is something you would tell someone in an abusive relationship. Commented Jun 5, 2023 at 21:28
  • 6
    Community and community moderation have stopped being essential to SE long ago. They are assets, and if other assets come along that perform better, they can and will be retired. Things ending up this way probably became inevitable the moment VC money entered the equation and the option of going down a more Wikipedia-like route was dismissed. I hope all the goodwill, talent, and potential of the greater SE community can one day be channeled into a genuinely community driven, open source approach less tainted by capital interests. It will never happen here though. The stakes are too high.
    – Pekka
    Commented Jun 7, 2023 at 11:50
  • 10
    @Pekka maybe codidact.org then, eh? The site looks very nice now, I must say. Very... recognisable. But with a help link right in the menu bar, what a technological advancement.
    – Gimby
    Commented Jun 7, 2023 at 13:29
  • 5
    It's pretty telling... the post-term sentiments of giants like George Stocker, Shog9, Jon Ericson, etc. – moderators or employees who served at or near the front under a corporate leadership that didn't respect or value its community.
    – canon
    Commented Jun 9, 2023 at 18:12
  • 4
    Louis Rossmann is making the exact case about Reddit that we here are making about Stack Overflow, in their case over extortionate API fees that especially harm handicapped users. Boy is the world changing 😢 youtube.com/watch?v=U06rCBIKM5M
    – Pekka
    Commented Jun 11, 2023 at 12:22
34

Fully agree that this is worth standing up to, but there is something I would like to point out with regards to having an impact.

Moderation loss is a slow burn, and the attrition from striking will not be seen for weeks or months. Moreover, by making a public spectacle of this issue (which is fine; this isn't a judgement), the strike will drive more users to interact with the discourse at hand: namely generative AI, banning/suspending, and the way oversight is handled by the company versus volunteers.

What I am getting at here is that, by stirring the pot, more viewership and content is created than normal. For a corporate group that looks solely at traffic metrics to determine its self-worth, this just validates their inner circle, regardless of the -3548 next to their posts or responses. They are not part of the community, and as such, they don't care what votes look like, since they don't understand them.

They only understand money, and a mod striking doesn't have enough effect on their bottom line. There needs to be a more direct line taken towards impacting their financial stance. This is accomplished by:

  • Ceasing your visitation entirely from the site (ad revenue)
  • Ceasing the creation of content as it directly contributes to visitation (ad revenue)
  • Suspending the use or migrating the content from Teams (subscriptions)

Time ranges can be chosen here to coordinate, so that it is clear that there is a reason for this to be done, and that it is not simply walking away.

These are immediately observable factors that will undeniably require not just a response, but a change. Without a financial impact, there will be no changing anything. Just look at what has happened before with these types of situations, or listen to some of the previous employees and mods tell their stories. The past is now.

  • Also, if you do continue to use SO/SE, have an ad blocker installed. Commented Jun 28, 2023 at 13:12
  • 2
    @ClementCherlin: What, SE has ads? :-D
    – einpoklum
    Commented Jul 20, 2023 at 23:35
-19

The crisis here seems severe enough that I ought to break my self-imposed embargo, make a new contribution to the site, and express my thoughts on this matter.

As you can see by my name, I left Stack Exchange on October 6th 2019, due to continuous fighting between the community and management. Note that this does not mean that I agree with either the community or the management - only that this fighting will interfere with the smooth operation of the website, and I honestly don't feel any desire to contribute to a sinking ship. Recent events have only proven my actions correct.

I could have deleted my account, but I decided against that, in favor of changing my name to reflect that I left the site (in keeping with other people changing their names to signal their opinions). More importantly, I suspended all my contributions on the site. I only broke my self-imposed embargo once, on June 22nd 2022, to update an answer that was last edited on Jan. 27th 2018.

Thus, Stack Exchange could never consistently mistreat and malign me, because I am no longer a contributor that could be mistreated or maligned. However, it could easily ignore me because, after all, I'm no longer a contributor.

Yet, the fact that I didn't contribute meant that Stack Exchange could not get my content and could not monetize my content. Since I generally use ad block, I am wasting their bandwidth while generating no revenue for them. Attempts to wean myself off Stack Exchange directly, so I don't even need to use the resource outright, were only partly successful. But I am confident in the future.

My embargo had an impact, however indirect. At the very least, I avoided dealing with the intricacies of Stack Exchange Inc. And this is why I posted. A strike, I feel, is too limited. You protest, Stack Exchange responds, you stand down, nothing changes, and this will just happen a few years later. And this assumes the best-case scenario of the platform still existing a few years later, because AI is constantly advancing. While LLMs can indeed stagnate at their current limitations, we cannot rely on this.

Staying on the platform, talking on the platform, and doing everything on this platform is exactly the wrong choice to make here. You need to prepare for life without Stack Exchange indefinitely. At the very least, it strengthens your negotiating position, since it shows that you don't need the platform. Migrate everything to open-source Q&A platforms. Create a Discord and handle questions over there. Or even create an LLM that relies on a database filled with Stack Exchange content, to reduce the probability of hallucinations.

But do something. Because the last thing you need is for the cycle to repeat yet again.

22
  • 8
    so, any thoughts on why the strike won't do anything?
    – Lamak
    Commented Jun 5, 2023 at 16:00
  • 18
    Because (a) the previous protest in 2019 proved fruitless in effecting change and I see no reason why that would be different here, (b) there is a power imbalance between volunteer moderators and a for-profit company with branding and control over a popular web domain - which means the for-profit company does not need to make many, if any, compromises to get its way, and (c) I suspect this platform is a loss leader that is useful only as a marketing tool to get people to trust the Stack Exchange brand, and thus the site could be sacrificed if it's no longer worth the effort. Commented Jun 5, 2023 at 16:06
  • 4
    Note that the strike can succeed if there is a viable, popular alternative that people are using instead (since that reduces the bargaining power of the for-profit company). I don't see that yet. Commented Jun 5, 2023 at 16:06
  • 67
    Worth noting that in 2019, the strike was called off just before it was set to start. That means SE didn't get to see the impact of a full strike (there were mostly resignations, and not on as large a scale), and consequently, didn't see the consequences of pushing the community away. Commented Jun 5, 2023 at 16:20
  • 13
    Not sure why this answer is being downvoted; perhaps people are upset that they didn't see the writing on the wall years ago, like its author? FWIW I've been following a broadly similar trajectory of tapering off my curation since about the same time for likely the same reasons; I didn't see any possibility that things would get better then, and I see even less of a possibility now, as my recent answer explains.
    – Ian Kemp
    Commented Jun 5, 2023 at 16:21
  • 2
    Given the recent developments, it looks a lot like 2019 over again indeed. IIRC there were also cases of defamation in the press back then. Commented Jun 5, 2023 at 16:33
  • 27
    There is an alternative: codidact.org. Not sure how mature it is, but it is there. Maybe it's time we all moved there. Commented Jun 5, 2023 at 17:01
  • 2
    In my opinion we did see some good moves from SE Inc., both in its attempts to heal the situation in 2019 (arguably too little, certainly too late) and afterwards, and we've had some okay years. Stating that all was lost long ago and the best move is to leave isn't really helpful imo. Obviously, feel free to disagree and leave SE, but I still have hope.
    – Erik A
    Commented Jun 5, 2023 at 17:11
  • 2
    My two cents: SO can't do everything perfectly all the time, nobody can. That being said, navigating the headwaters of AI generated answers (ahem... 90% junk) which is surrounded by all the hype of the so-called "AI Age" is not going to be easy. With everybody pushing it as the latest and greatest "answer to everything" it is only natural that people want to get off easy, get "free" rep by asking ChatGPT the question, and copy pasting an answer. It really is just an issue of whether you're willing to put in the work and actually answer the question genuinely, or use something else's knowledge. Commented Jun 5, 2023 at 19:02
  • 4
    @IanKemp Perhaps the downvotes are from people who disagree with the prediction that LLMs will make Stack Exchange and similar sites obsolete, or the suggestion that we should make our own LLM trained on Stack Exchange posts. There's a different, well-received post by someone who left in 2019 here which doesn't go in that direction.
    – kaya3
    Commented Jun 5, 2023 at 21:09
  • 9
    or it's just a boring opinion.
    – Kevin B
    Commented Jun 5, 2023 at 21:12
  • 3
    "Since I generally use ad block, I am wasting their bandwidth and being a waste of time" - stackoverflow.blog/2016/10/26/…
    – starball
    Commented Jun 5, 2023 at 23:59
  • 29
    "How can I make this about me" - answer-ified...
    – Cerbrus
    Commented Jun 6, 2023 at 9:35
  • 5
    Or perhaps abandoning all hope is just... not a fun alternative? If we all take this route then we practically guarantee that things don't get better. I much prefer to fight for the chance at improvement than give up all hope and assume that this awesome platform will just cease to exist and never flourish again in the future. That future is sad, dull, and easy compared to the one we're fighting and hoping for.
    – zcoop98
    Commented Jun 6, 2023 at 15:35
  • 8
    "You need to prepare for life without Stack Exchange indefinitely. At the very least, it makes your negotiating power stronger, since it shows that you don't need the platform." - I don't know about y'all, but one of the main reasons I am here is because I actually do need the platform. There isn't much else like it, not at this scale and not this active. The fact that I coincidentally also enjoy contributing to it is just a bonus. If I had to list all the things I have learned by reading/writing Q&As here I wouldn't know where to begin. Commented Jun 6, 2023 at 23:52
-50

I would ask the CEO and his primary leadership team to communicate at least once a month face-to-face with the main community members, either in person or in video calling.

I think the problems we are seeing, quite frankly from a few years before, are due to misunderstandings. One-to-one meetings, once a month, face to face, hopefully will clear those misunderstandings.

1
  • 49
    It's so much more than "misunderstandings". SE is deliberately withholding information, they're deliberately misleading, or even flat-out lying.
    – Cerbrus
    Commented Jun 7, 2023 at 7:25
