65

A couple of weeks ago, I gave a heads-up that we’d be sharing updates on community product initiatives that we’re planning on focusing on over the upcoming months. Des, our Director of Product, shared this today on the blog.

The TL;DR is we are going to be focusing on:

In the upcoming days, weeks, and months, we’ll be sharing more specifics about these initiatives and those to come. Each will have its own dedicated Meta announcement. I’m treating this post as a catch-all, or index if you will. As updates about each of these projects are announced, I’ll update this post to link to them so you can find everything easily.

Those individual posts will share more details and will be the right place for feedback and questions on those specific initiatives. However, if you have any general questions or feedback about these focus areas, feel free to leave those as answers here. We’ll be keeping an eye on this post through April 5th and will do our best to respond, where appropriate, during that time.

15
  • 11
I linked to the Staging Ground and OverflowAI Alpha posts that went live today. I'll link to posts on the other initiatives as they go live in the future and note the update to this post.
    – Rosie StaffMod
    Commented Mar 27 at 14:12
  • 3
I know the specific announcement hasn't been made yet, but I'm curious which cloud provider you are going with? AWS? Azure? GCP?
    – TylerH
    Commented Mar 27 at 14:45
  • 8
    @TylerH The previous announcement (the question associated with the first link in Rosie's post) read "...to provide Stack Overflow content directly within Google Cloud"; I guess that does not explicitly answer your question but anything else would be quite a surprise. Commented Mar 27 at 14:48
  • 8
    @BryanKrause Oh, that's a good point--I agree it would be a little surprising if they integrated w/ Google Cloud for AI but not for hosting. Then again, it's surprising they're moving away from their own industry-leading on-premises infrastructure, so...
    – TylerH
    Commented Mar 27 at 14:51
  • 9
    You're making this announcement on the network meta, but link to posts on SO meta. That calls for a short word on the way these initiatives are going to spread from SO to the wider network.
    – ccprog
    Commented Mar 27 at 16:32
  • 10
    @ccprog Some initiatives, such as Staging Ground, are starting with Stack Overflow. It’s the largest and most active site on the network, so we often start there. However, we always do so with an eye on the larger network to see if there is value in new features rolling out across Stack Exchange. Some initiatives, such as the Product Advisory Council (working with community managers and the product team) will be network-wide, not just limited to Stack Overflow.
    – Rosie StaffMod
    Commented Mar 27 at 17:22
  • 19
    Given the context of the article, by onboarding, are you primarily referring to this thing (that currently can't be publicly discussed), or are you looking at the actual sources of onboarding problems, particularly wrt. lack of information to new users regarding the function of the sites and the network? I really, really want to be optimistic, but the blog post exclusively focuses on a minor change related to the thing that can't be discussed yet, which isn't actually an onboarding strategy that's meaningful; it's just a bump in signups Commented Mar 27 at 18:43
  • 22
    which isn't going to solve any of the multiple classes of onboarding problems we have. It'll look good to shareholders, but not solve any actual onboarding problems (and this is disregarding the other major problem that thing:tm: comes with, but we've already warned you extensively about that, and you've made it clear you don't care about the fallout for mods) Commented Mar 27 at 18:43
  • 23
    I'm glad to hear you're re-committing focus on human knowledge sharing. However, from the linked blog post: "There was a sense of optimism regarding Stack Overflow’s AI entry and in remaining a valuable resource for technologists." I can't find evidence to support this statement. There's a +81/-117 score on the OverflowAI search post and the overwhelmingly top-voted answer doesn't sound optimistic to me. Same for just about any other AI-related proposal I've seen on the network.
    – ggorlen
    Commented Mar 27 at 18:52
  • 4
    @ggorlen, as AI initiatives go, a 40% approval rating is resoundingly positive.
    – Mark
    Commented Mar 28 at 3:10
  • 13
    Hosting images locally will help Chinese users; Imgur is blocked in China. Commented Mar 28 at 12:35
  • @TylerH The "StackOverflow for Teams" product was migrated to Azure so there's some experience with that cloud provider at least.
    – Bergi
    Commented Apr 3 at 3:15
  • 1
I wanted to share this MSO post meta.stackoverflow.com/questions/429682/… on the test around voter expansion on MSO since it was mentioned here last week. I’m not linking to it in the question above because, as I shared last week, this is just one test related to the larger project around better onboarding. Product will be sharing a post about the larger initiative in the upcoming weeks, and I’ll link to that post and update this question once it is live.
    – Rosie StaffMod
    Commented Apr 3 at 14:32
  • 10
    @Rosie if the company feels that that experiment is in any way "better onboarding" I'll inform you the community has very differing views (as shown by the votes and answers to that question) Commented Apr 3 at 18:21
  • 8
    For the record, the thing I referred to in my previous comment is the 1-rep voting experiment - it isn't actually onboarding, but I'm afraid that it'll end up being the only "onboarding" change to be done under this initiative, as most other initiative attempts end up only getting 1 and maybe 2 associated features before being quietly axed Commented Apr 4 at 9:31

5 Answers

38

Thanks for following up on the previous announcement. This definitely helps answer the questions about what's coming next for "us", the community of Stack Exchange users, especially those who curate and moderate content and help maintain the overall quality of the network as a resource.

I think the blogpost starts out a bit on the wrong foot with the overemphasis on generative AI (especially in the very first sentence) that seems to cue "here's more GenAI stuff you didn't ask for!", but if a reader sticks with it, I think the other initiatives discussed there (those bulleted here as well) sound like a promising direction. I'll be eagerly awaiting further details: a bullet point like "better onboarding" doesn't say very much, and I'm curious to know which of the plethora of ideas offered by the community will get attention. A renewed Staging Ground, on the other hand, is a clear win for the people who have already put a lot of time into making it work (including former employees).

On the AI side of the blog, both the "guiding principles" and OverflowAI alpha takeaways sound to me like a more measured approach to AI tools that would have been appropriate for Stack Overflow to take from the outset. My confidence in SO's approach would have been much higher if you started there, but we can't undo the past and I'm glad you're getting there. Don't forget that your primary value here is an excellent, human-made, human-curated resource! You are wonderfully positioned to be an answer to many of the limitations of AI (and those limitations have been well-known for a long time). Don't lose sight of that chasing fads!

34

I await the exact meaning of "better onboarding" with bated breath.

I hope it means being intentional and zealous about earnestly promoting contextually-relevant Help Center pages to their intended audience in the site UI/UX (sorry for the word dump in that sentence, but I chose each of those words with care).

I hope it means adding basic logic to have the system automatically catch potential basic mistakes like posting non-answers, and then popping up friendly tips concerning those mistakes and guiding to a proper path of site usage before those mistakes are committed. (an example of what I refer to as "JIT / Just-In-Time guidance")

Such changes would offload a lot of annoying work from human volunteers and remove a lot of frustration on both sides: curators and newer users.

Zoe's comments don't quite inspire confidence though.

And reading the relevant part of the blogpost doesn't inspire confidence either.

We believe if it were easier to participate, more users would.

Better onboarding to me does not mean more users participate (unqualified statement). It means more users participate and are automatically guided through texts which teach them the site rules, guidelines, and norms. I.e. The Help Center and Meta.

We'll start by getting more users onto the platform; a project has already kicked off with experiments to optimize the sign-up flows.

We're focusing on simplifying and making the platform more approachable, enabling everyone to have those "aha!" moments more frequently.

Assuming "approachable" means having fewer negative experiences on the receiving end of curation / guidance comments (what I'll call "desired output"), and that those "aha!" moments are the moments where people come to understand what Stack Exchange is, what its mission is, and what its methodology is, along with its rules, guidelines, and norms (what I'll call "efficiency", since it is what enables each individual to avoid such negative experiences), then you're choosing an approach to increase desired output where you attempt to increase input (add more users) and leave efficiency (desired output divided by input) untouched.

desired_output = input * efficiency

I can't explain why I think this input-focused approach is poor. But I do. I think it's shallow and lacking in taste/perspective (excuse my lack of skill and interest in persuasive writing with sweet words).

Maybe it's because I see a lot of low-hanging fruit in increasing efficiency (see what I wrote at the top of this post and all my other meta writings about SE).

Maybe it's because I went through a certain stream of education which is concerned about efficiency in design.

Maybe it's because I have a morbid interest in reading comments on Reddit by the people who probably didn't have what I think is a good onboarding experience on SE, ran into a pitfall or shot themselves with a footgun, and then decided they hated it and would go to AI and never come back. The natural question is: where are you going to get all this input for the equation? I'm convinced that it's largely these types of people who left.

Good luck getting them back without changing the story with respect to educating newer users about what this platform is and how it works, and waiting for such an actual story change to become reflected in people's mental images of the platform and their external discourse about it. (If you increase input without changing efficiency, you'll have done absolutely nothing to change the balance of that discourse: the proportion of people who understand and appreciate the goals and methodology of SE.)

Or maybe there's no way to get them back. Maybe if they learned what SE really is and how it works, they'd realize they never wanted to take part in the first place. To me, this all naturally leads to the thought that it's wiser to focus on the efficiency part of the equation.
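To make the input-vs-efficiency point concrete, here's a toy sketch (every number invented purely for illustration): doubling input and doubling efficiency produce the same raw desired output, but only the latter shifts the proportion of well-onboarded users, i.e. the balance of discourse.

```python
# Toy numbers for the desired_output = input * efficiency framing above.
# All values are made up purely for illustration.

def desired_output(input_users: int, efficiency: float) -> float:
    """Users who end up participating well, under this simple model."""
    return input_users * efficiency

baseline = desired_output(input_users=1000, efficiency=0.2)          # 200.0

# Input-focused approach: double the sign-ups, leave efficiency untouched.
more_input = desired_output(input_users=2000, efficiency=0.2)        # 400.0

# Efficiency-focused approach: same sign-ups, better onboarding guidance.
more_efficiency = desired_output(input_users=1000, efficiency=0.4)   # 400.0

# Both double the raw desired output...
assert more_input == more_efficiency == 2 * baseline

# ...but only raising efficiency changes the proportion of users who
# understand the platform: 400 of 2000 is still 20%, while 400 of 1000 is 40%.
assert more_input / 2000 == 0.2
assert more_efficiency / 1000 == 0.4
```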

We will continue this effort by acquiring new users and providing lower-effort ways for users to participate. By simplifying how users interact on the platform and providing guidance to learn community norms, we can help users feel successful while maintaining a high bar with content health.

I don't see how "lower effort ways to participate" has any direct consequence of "providing guidance to learn community norms". The bars to participation are already incredibly low. The actual issue is that the guidance on how to participate properly is not reaching the ears of the people who need it. Though I am interested in what ideas you have about "lower effort ways to participate", and have a couple of ideas myself.


As for discussions, I await the return of downvotes (one of the primary/foundational curation mechanisms).


As we look forward to a new fiscal year, we are focused on initiatives that support community health and growth by making it easier for users to find the information they need

I wonder what exactly the information being referred to is. The statement is especially weird since the AI search experiment is "ramping down". Since I care about user onboarding, my mind immediately jumped to hope that this is about making it easier for users to find Help Center pages and Meta Q&A they should read, but I'm going to try not to get my hopes up with assumptions.

3
  • 3
We see onboarding as enabling more users to participate on the platform. That entails making certain actions easier to take, guiding them through site norms, and mitigating risks that may come with inviting more users to participate. We see this as a longer-tail initiative that will span several projects and tests, and we think they will result in setting users up for success and allowing them to engage on the platform in a way that is meaningful to them and to the community.
    – Rosie StaffMod
    Commented Mar 28 at 18:47
  • 13
    @Rosie only one of those things fits squarely into my understanding of what onboarding is: "guiding them through site norms". in the employee sense of the word, onboarding is not hiring, and hiring is not part of the onboarding process. onboarding is what a decent company does after hiring someone. all I get from the blogpost is the vibe that SE thinks hiring is onboarding. It is not.
    – starball
    Commented Mar 30 at 6:56
  • 13
    Very well said, especially concerning the onboarding. If the bars get any lower, I feel there will just be more moderation to do, meaning the workload of users who already voluntarily put time and effort into a community they (still) believe in will just increase, with little quality in return. Onboarding should make it easier to gain traction for those who join SE based on the merit of the platform, not because it was easy to sign up (to put it bluntly).
    – Joachim
    Commented Mar 30 at 18:30
9

Thank you for letting us know about your priorities. It is good to know what to expect.

If you are looking for feedback, here is mine:

Staging Ground sounds promising and like a step forward. I'm encouraged to hear a focus on onboarding; that sounds like it could be a step forward, depending on what that means, and I wait to see what specifically you all have in mind. The work on image hosting sounds necessary.

The remainder of the items sound neutral to me -- neither positive nor negative. I do not expect moving to the cloud to make any notable difference to the user experience, either way, and I am skeptical it will make much of a difference to reliability, in terms of my ability to use the network when I need it (I find the network already more than reliable enough, and I don't expect to see any end-user-visible value from this move). I am not optimistic about the Product Advisory Council making an appreciable difference either way, though I appreciate that CMs expect it to be positive, so we shall see. I don't think Discussions contributes to the Stack Exchange mission or is particularly valuable in its current form and so, while it's unclear exactly what work on Discussions is planned, I don't expect value from more work on Discussions. I think the AI stuff is not a good use of time, so it's good that it will be ended, but I'm skeptical of plans to devote resources to ramping it down or new applications of AI.

Overall, it sounds to me like 3 out of the 7 priorities are ones that I would agree are a good priority. I'm glad to see work on those. It sounds encouraging that there will be work on improving the platform.

3

Try Discussions for all technical topics on Stack Overflow.

Share insights about all your favorite technologies.

  • Seek advice, expertise, and best practices from others.
  • Dive into meaningful conversations with others in the community.
  • Communicate your opinions about specific areas of practice.
  • Registered users can create new Discussions and reply to existing Discussions.
  • While voting is supported and post scores are tracked, there is no reputation impact to voting within Discussions.

The people who run Stack Exchange have finally conceded that comments have their use. Beyond requests for clarification, detail, evidence of research, and informing users their questions are either off-topic or a duplicate, comments (I presume) will not be locked, transferred to chat, edited, or deleted by moderators or staff members. There's also a good chance that the number of flags, especially on Stack Overflow, will drop significantly, as users who insist SO and other Stack Exchange sites are not forums will refrain from raising flags every time they encounter meaningful banter between two or more users in their field of expertise.

How does this groundbreaking experiment work? In a video entitled Community Discussions, the female voice-over, which I swear is generated by an AI, describes the simple process:

…you see a discussion where someone is asking for opinions or thoughts on which approach they should take. You click into the full discussion and read through the comments. Then you want to add in your perspective. You share your thoughts on what they can consider, thinking about how they can trade off between future proofing their approach and future expense associated with it. You post your input and are part of the discussion.

Will this experiment gradually be extended across the network?


By the way, how is “community discussions” significantly different from an online forum or message board? How will SE differentiate from Reddit?

We were told years ago that comments were worthless; they added only noise. Stack Exchange was different: it focused on answers, not chit-chat. We've all heard (and mostly accepted) this argument, haven't we? Examples:

I have been told time and again, as have others, that the Questions and to a far greater extent Answers are what matter in the long run; comments are ephemeral. Source: @KorvinStarmast

A former Community leader, much respected, posted in June 2018 the following reflections: [emphasis mine]

Try not to provide full answers in comments; if you end up working a problem out in comments, please move it to an answer. We know you're trying to help, but the system expects answers to questions. If we're reiterating that comments are ephemeral (and they are), we have to caution against leaving good information in them that needs to last, too

Instead, six years later, comments will occupy a higher role: they will be productive, meaningful, an immediate way to share opinions and experiences in any of the ramifications a single post might generate.

Returning to the announced initiatives and the related Stack Overflow article, I read the following [emphasis in bold mine]

As we consider integrating AI into our platform through partnerships and new features, we remain committed to preserving the essence of Stack Overflow: a space driven by human connection and genuine knowledge sharing.

and further along…

  • The combination of community-driven verification with AI generated excitement about improved trust and reliability of technical content; however, user trust in the accuracy and relevancy of content is paramount. Transparency is key.
  • AI-powered search features helped developers cut down on time in low-complexity situations, but sometimes ran into hallucinations when dealing with answers to more complex problems.
    Source: Community Products: Reflections and looking ahead March 27, 2024

SE cannot pretend to value human-generated content when visitors and users alike will soon be provided with solutions generated by AI, which in turn will (miraculously) be curated by the human community. Why would developers, programmers, and engineers sacrifice their precious time and years of experience on checking the validity of AI-generated code? It's plainly absurd.

Am I seeing irony where none exists?

Only now does the company fully appreciate that its legacy and success lie with the user base, complete with their human interactions and their behavioural flaws. It has also paid tribute to the incredible generosity of its communities on numerous occasions. Yet at the same time, SE seeks to promote AI-generated content, as if it were the deus ex machina in the present crisis. Madness.

7
  • your list at the beginning of "officially-within-guidelines" types of comments is quite lacking. see /help/privileges/comment (downvote is not mine)
    – starball
    Commented Mar 30 at 22:31
  • 4
@starball the post is long enough as is, and I think comments = requests for clarification, detail, and evidence of research is a fair summary without entering into minutiae. Thank you for the link by the way. Commented Mar 30 at 22:35
  • 2
    I'm struggling to connect the dots between what is above the divider vs below it. It makes me want to say "but discussions are resulting in tons of flags! 8/10 discussions end up deleted daily via flags" but i suspect that's not actually your intent with this answer... hence i'm struggling to understand the connection
    – Kevin B
    Commented Apr 1 at 20:41
  • 2
It's a weak attempt to show the contradiction between the former hypothesis and the new feature. The former said comments are superfluous and disposable, while the latter says comments lead to "deeper dialogue" and "meaningful conversations". For years SE argued comments were ephemeral, and users saw lively, interesting, productive discussions below their posts suddenly disappear without trace. Only now does the company understand its success lies with the user base, its humanity, with its flaws but also incredible generosity, while at the same time it promotes AI-generated content. Madness. Commented Apr 1 at 21:28
  • I don't see it as contradictory at all. It seems more like a way to funnel what used to go on in the comments to another location, so that the main site can continue to be about Q&A, while also permitting the benefits that come from Discussions. It doesn't seem very different from "moving the conversation to chat" except it's no longer time-limited.
    – trlkly
    Commented Apr 9 at 20:32
  • @trlkly Huge amounts of valuable information were lost due to chat being time-limited. I'm treating Discussions as the place for Q&A and ignoring SO completely. (I've been making only low-effort contributions since Monicagate.) Commented Apr 14 at 19:40
  • @KevinKrumwiede In the ideal world, any valuable information would then have wound up put back on the Q&A site. But I know that often isn't the case. Just like the point of comments is supposed to be to add any useful info to the Answers. And I see nothing wrong with being low-effort here: I myself just contribute when I happen to know things others don't.
    – trlkly
    Commented Apr 14 at 20:44
1

We will launch the 1-rep voting experiment on Stack Overflow for 4–6 weeks, along with account creation prompts was posted a couple of weeks ago on Stack Overflow Meta. I'm surprised that this so-called "experiment" wasn't mentioned here as one of the initiatives, as it was derived from If more users could vote, would they engage more? Testing 1 reputation voting on some sites.

The "experiment" is currently paused. Ref. Pausing the 1-rep voting experiment on Stack Overflow: reflecting on the feedback and rethinking the approach

I kindly request that someone from the CM team add it and make an update about the 1-rep voting "experiment" / "initiative".

1
  • Note: This question still has the featured tag.
    – Rubén
    Commented Apr 9 at 14:42
