
Stack Exchange has announced:

With our new mixed method research approach, one thing we lost was regular, in-depth conversation with a group of folks highly invested in Stack Overflow’s growth. We also wanted to keep seeking out feedback from a broad range of perspectives. 

That’s why we’re creating a working group of users made up of people from all corners of the developer community — from folks new to programming, those who don’t participate in Stack Overflow but are passionate about programming, experienced Stack Overflow users, frequent contributors, and more. We’ll hand-select folks of diverse backgrounds who are excited to chat with us regularly about everything from new ideas to features, to how we communicate with the broader Stack Overflow community. 

On the other hand, they've already stated they don't have the resources to deal with the existing feedback they have.

It’s hard to capture structured feedback on Meta. There are now so many conversations that we aren’t often able to participate. As a result, users end up not feeling heard and a lot of confusion (including some misinformation) is generated.

According to this SEDE query, Meta gets around 600 unclosed posts per month, counting both questions and answers. Since most posts are short, that is not a huge volume of traffic. Anecdotally, it doesn't seem hard for the regulars to stay on top of it.

If Stack Exchange is having a hard time keeping up with Meta, it seems it will face an even greater challenge holding regular conversations with an adequately large working group while tracking and handling all the feedback gathered. It wouldn't be surprising to hear that they've already gathered more than 600 surveys today alone.

What will Stack Exchange do in order to have enough resources to handle all of the new feedback they will be gathering with The Loop?

  • Worth remembering that a survey is vastly easier to examine and categorise than free text (Meta posts). So, while The Loop is likely going to produce more data that needs handling, it'd be easier to handle. Of course, it still needs somebody, possibly many people, to sift through it.
    – VLAZ
    Commented Nov 26, 2019 at 6:29
  • If you do a Loop quarterly, I'm sure they can handle it. If that doesn't work out, make it even less frequent. Keep in mind they set the topics, not the participants.
    – rene
    Commented Nov 26, 2019 at 6:38
  • They want fresh feedback instead of the old bad feedback. They'll ignore all the old valuable feedback, like moderation improvements, moderation tooling improvements, better flagging and so on, and gather new feedback which is relevant to them.
    – weegee
    Commented Nov 26, 2019 at 6:51
  • To answer the actual question, this is how you get results from a survey on SurveyMonkey. If there are responses from more than 100,000 users, they get a CSV export of the data
    – weegee
    Commented Nov 26, 2019 at 7:06
  • It's easier to use the feedback from a small focus group than from a large crowd of angry shouting people.
    – Raedwald
    Commented Nov 26, 2019 at 7:21
  • @Raedwald yes, but Meta is not a "large crowd of angry shouting people". It's a rather more opinionated place with a lot of those requests, and the best thing is that it's available to everyone, so feedback from users can be judged, and it's transparent.
    – weegee
    Commented Nov 26, 2019 at 7:30

1 Answer

I will try to answer this on a high level:

  1. Download the results from SurveyMonkey (CSV) and import them into an internal tool
  2. The responses to the "what do you like best" and "what do you find frustrating" questions are coded into categories
    • For a number of months we have been doing this by hand (yes, a few people have looked at many thousands of these responses, and assigned them to one of many dozens of categories).
    • Based on their assignments, and with the help of members of our Data team, a Machine Learning routine has been set up that has been trained (using the data set up to date) to auto-classify new responses. This is necessary for coding the responses to The Loop survey, as we expect to receive many times the number of responses we normally get in a given month, way beyond our capacity to code by hand.
    • We will continue to spot-check and code a sample by hand, to verify that the ML process is getting things right and to improve its training and accuracy.
    • A number of people will be sifting through to read as many responses as time permits. We know that many people have strong feelings here, including both Meta regulars, as well as network users who are not regular Meta users, but still feel passionately enough that they are willing to respond to the survey.
  3. The Data Team will take the raw data (overall satisfaction level, and coded good/bad responses, as well as the optional demographic data) and do their thing.
    • Data will be analyzed for overall trends for satisfaction and coded responses, both across all users as well as across different demographic groups.
    • The results here can greatly affect our decision making on a product level, and when combined with historical results, can also help us to see how attitudes are changing (for both the good and bad) over time.
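
As a rough illustration of step 2, the hand-code-then-auto-classify workflow could be sketched as below. This is only a hypothetical sketch: the category names, training examples, CSV layout, and the choice of a Naive Bayes classifier are all my own invention for illustration, and Stack Overflow's actual internal tooling will certainly differ.

```python
import csv
import io
import math
import re
from collections import Counter, defaultdict

# Hypothetical hand-coded training data: free-text response -> category.
# In the real pipeline, these labels come from months of manual coding.
TRAINING = [
    ("the community is hostile to new users", "unwelcoming"),
    ("people downvote without explaining why", "unwelcoming"),
    ("search results are excellent and fast", "search"),
    ("I can never find duplicates with search", "search"),
    ("the editor keeps losing my draft", "bugs"),
    ("code blocks render incorrectly on mobile", "bugs"),
]

def tokenize(text):
    """Lowercase bag-of-words tokenization."""
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier."""

    def fit(self, pairs):
        self.word_counts = defaultdict(Counter)  # category -> word frequencies
        self.cat_counts = Counter()              # category -> document count
        self.vocab = set()
        for text, cat in pairs:
            self.cat_counts[cat] += 1
            for w in tokenize(text):
                self.word_counts[cat][w] += 1
                self.vocab.add(w)
        self.total = sum(self.cat_counts.values())
        return self

    def predict(self, text):
        words = tokenize(text)
        best, best_score = None, float("-inf")
        for cat in self.cat_counts:
            # log prior + log likelihood with add-one smoothing
            score = math.log(self.cat_counts[cat] / self.total)
            denom = sum(self.word_counts[cat].values()) + len(self.vocab)
            for w in words:
                score += math.log((self.word_counts[cat][w] + 1) / denom)
            if score > best_score:
                best, best_score = cat, score
        return best

# Step 1 stand-in: a CSV export, inlined here instead of a downloaded file.
EXPORT = 'response\n"downvotes feel hostile to new users"\n"search never finds the duplicates"\n'

model = NaiveBayes().fit(TRAINING)
rows = list(csv.DictReader(io.StringIO(EXPORT)))
coded = [(r["response"], model.predict(r["response"])) for r in rows]
for text, cat in coded:
    print(f"{cat}: {text}")
```

The point of the sketch is the division of labor described above: hand-coded examples supply the training set, the classifier codes the bulk of new responses, and hand-coded spot checks of fresh responses can be compared against the model's predictions to monitor its accuracy.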

Big thanks to all who are participating. Even though we cannot act on every single request, we appreciate and value all of your feedback.

Credentials: I am the developer who added the Site Satisfaction Survey to Stack Overflow nearly half a year ago, and have seen on a detailed level exactly how we have been handling those results (approximately 1500 per month). The survey publicized here is structured in a very similar way.

Disclaimer: I am not the one doing the work I describe above, so it is likely that I got some details wrong. But the basic gist of it should be accurately represented.

  • First of all, thank you for answering so thoroughly. I have a secondary question on your point 3: how does optional demographic data factor into your analysis of the results, and are other categories automatically gathered too? From what I can see, this survey would lack the context of respondents' site participation; was that deemed not necessary to select for?
    – Magisch
    Commented Nov 26, 2019 at 8:50
  • @mag 1) you're welcome 2) I tried to answer the question of how we use the data above. I am not aware of the use of any automatically gathered info. We definitely do not associate your identity with your responses. True, this survey does lack the context of site participation. On the monthly surveys we are able to identify if the respondent is logged in or anonymous (though not their accountId). We sacrifice this bit of context here for the sake of opening the survey up to a wider audience. Commented Nov 26, 2019 at 9:27
  • I have to point out that the results of your survey will give you spectacularly wrong data. In the end you will be fixing the wrong things, just like you have already started to. "Unwelcoming community" is an extremely broad category, and when you start dissecting it you will find that it is actually not that people are not nice, but that moderation is the problem; and then you will find that it is only a problem for people who come to SE (SO) with wrong expectations, thinking that they can ask just about anything. Commented Nov 26, 2019 at 9:27
  • @ResistanceIsFutile thanks for your feedback. I understand your frustrations and your fears. Data analysis is not my specialty. I will leave validation and analysis of results up to our Data team. Commented Nov 26, 2019 at 9:28
  • We already had the "Welcome Wagon" that solved absolutely nothing. Unless you start implementing long-standing requests about better moderation tools, about giving more feedback to new users (including negative feedback, so they can improve early), about teaching new users better how the sites work... nothing else will work out. Commented Nov 26, 2019 at 9:29
  • @ResistanceIsFutile It's not really fair to put interpretation bias on Yaakov here. He's not who you're angry at.
    – Magisch
    Commented Nov 26, 2019 at 9:29
  • @mag I am not angry at Yaakov at all... I am just saying this will not give any meaningful results, and that the starting point is somewhere else. They (we) already know what needs to be fixed, but those fixes are not a priority, because 100K new users each month are deemed more important than the actual product and long-term viability. It is feelings over content and quantity over quality. Commented Nov 26, 2019 at 9:33
  • @ResistanceIsFutile I guess we will have to agree to disagree. If we absolutely knew what needed to be fixed, to help out all users, we would have a very clear path forward. I appreciate your conviction re: your approach to what features need to be done. We feel the need to gather a wider range of opinions. However, our goals are not all about new users and cannot be reduced to "feelings over content and quantity over quality". Content and quality, as well as welcomeness and usability for all users, remain at the top of the issues that we are trying to address. Commented Nov 26, 2019 at 9:43
  • @YaakovEllis There is nothing wrong with gathering a wide range of opinions. My problem is that I am not convinced you (the Company) will use those opinions wisely. SO is currently a mess, and there are things you can do now to start cleaning that up... for instance, there was a successful 3-close-vote experiment there... why are we back to 5 close votes even after you had analyzed the data? Instead we got increased reputation for questions, which will bring more stuff that needs to be cleaned up... You are saying that you do care about quality, but IMO you are not actually doing anything in that direction. Commented Nov 26, 2019 at 9:56
  • @anonymous unless of course they know why people are unhappy (at least on meta) and are ignoring that because reasons. Commented Nov 26, 2019 at 13:57
  • @anonymous I answered your question here Commented Nov 26, 2019 at 15:22
  • @YaakovEllis Thank you so much for answering my question so thoroughly and so quickly! Commented Nov 27, 2019 at 2:31
  • Thanks for the valuable insight into the company's internal process for handling this data. All else being equal, sharing this kind of thing can strengthen the community's perception (in aggregate) of being a part of SO's decision-making process.
    – Zev Spitz
    Commented Nov 27, 2019 at 18:03
