At the beginning of March, I shared a process by which posts that need staff attention could get escalated, along with a plan to test it out between March 16 and April 30, 2020. As promised on that post’s timeline, I’m here to share the results from this testing period, as well as the plan for this process going forward. Spoiler: we think it went well and wanna keep it moving forward ^_^
Stats from the testing period:
Over the course of the test, a total of 127 questions across all Meta sites got status-review added to them. The tag was added by a staff member on 74 (~58%) of these, and by a moderator on the other 53 (~42%).
The majority of questions came from MSE, followed by MSO, with the rest coming from Metas all over the network:
- 52 (~41%) from MSE.
- 40 (~31%) from MSO.
- 35 (~28%) from other child Metas.
Most of the escalated questions were bug reports or feature requests, but posts of all types got escalated to staff (note that the tags below are not mutually exclusive):
- 65 bug
- 39 feature-request
- 22 discussion
- 15 support
With regard to staff response time, a total of 68 questions (~54%) got some sort of response from staff after the tag was added. The average time elapsed between the tag getting added and any sort of response was ~7d 1h (min. of ~0.1h; max. of ~48d 5h; median of ~1d 4h). Of these (the sets below aren’t mutually exclusive; a sketch of how such summary stats can be computed follows the list):
- 19 questions (~28% of above; ~15% of total) got commented on by staff. The average time elapsed between the tag getting added and the comment getting posted was ~6d 3h (min. of ~1min; max. of ~48d 5h; median of ~1h).
- 44 questions (~65% of above; ~35% of total) got answered by staff. The average time elapsed between the tag getting added and the answer getting posted was ~6d 11h (min. of ~0.1h; max. of ~42d 2h; median of ~1d 18h).
- 61 questions (~90% of above; ~48% of total) got edited by staff. The average time elapsed between the tag getting added and the edit getting submitted was ~7d 18h (min. of ~0.1h; max. of ~43d 12h; median of ~21h).
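These figures are all plain summaries over elapsed-time deltas. As a minimal sketch of how such summary stats can be computed (the timestamp pairs below are made-up placeholders for illustration, not data from the actual test):

```python
from datetime import datetime, timedelta
from statistics import mean, median

# Hypothetical (tag_added, first_response) timestamp pairs, made up
# purely for illustration.
events = [
    (datetime(2020, 3, 17, 9, 0), datetime(2020, 3, 18, 13, 0)),
    (datetime(2020, 3, 20, 14, 30), datetime(2020, 3, 20, 14, 36)),
    (datetime(2020, 4, 2, 8, 0), datetime(2020, 4, 28, 16, 0)),
]

# Elapsed time between the tag getting added and the response.
deltas = [response - tagged for tagged, response in events]

def fmt(delta):
    """Render a timedelta in the '~Xd Yh' style used above."""
    days, hours = divmod(round(delta.total_seconds() / 3600), 24)
    return f"~{days}d {hours}h"

# statistics.mean()/median() want plain numbers, so work in seconds.
seconds = [d.total_seconds() for d in deltas]
for label, value in [("avg", mean(seconds)), ("median", median(seconds)),
                     ("min", min(seconds)), ("max", max(seconds))]:
    print(f"{label}: {fmt(timedelta(seconds=value))}")
```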
Most responses resulted in a corresponding tag edit, and 65 questions (~51%) got status-review removed: the tag was removed by staff on 60 of these, and by a moderator on the other 5. Of these:
- 48 questions (~74% of above; ~38% of total) got status-completed added to them.
- 1 question (~2% of above; ~1% of total) got status-bydesign added to it.
- 4 questions (~6% of above; ~3% of total) got status-declined added to them.
- 2 questions (~3% of above; ~2% of total) got status-norepro added to them.
- 3 questions (~5% of above; ~2% of total) got status-planned added to them.
- 7 questions (~11% of above; ~6% of total) didn’t get a new status tag added.
It’s also worth noting that of the 68 posts that got a response from staff, 51 had status-review added by staff and the other 17 by mods; put differently, ~69% of staff-tagged posts got a response, versus ~32% of mod-tagged ones (derived in the sketch after this list):
- Of the 74 tagged by staff, 51 got a response and 23 didn’t.
- Of the 53 tagged by mods, 17 got a response and 36 didn’t.
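The per-group response rates quoted above follow directly from these counts; a quick sketch that derives them:

```python
# Response rates per tagger group, computed from the counts above.
tagged = {"staff": 74, "mods": 53}
responded = {"staff": 51, "mods": 17}

for group, total in tagged.items():
    rate = responded[group] / total
    print(f"{group}: {responded[group]}/{total} responded (~{rate:.0%})")

# staff: 51/74 responded (~69%)
# mods: 17/53 responded (~32%)
```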
Out of the many posts that got escalated through this process, a few are worth highlighting, either because of the work put in by the users who asked them, or because they got more than a run-of-the-mill response from staff:
- Triage needs to be fixed urgently, and users need to be notified upon receiving a review ban! on MSO.
- Closed as a duplicate, but the duplicate list is empty? on MSE.
- The text of an old application is being attached to my current applications on MSO.
- SEDE appears to be partially-refreshed on MSE.
- Though they don’t have a visible response from staff, Meta posts from Academia, Biology, and Travel (see also here) requesting banners linking to COVID-19-related resources also came through this process, and the banners went up as a result.
Next steps:
Given the way the test went, we want to keep this process going. In the original post I mentioned that we’d set targets for how many posts staff can respond to and how quickly we’d be able to do so, and that we’d review those targets quarterly to make sure they’re still appropriate. Given the stats shared above, for the rest of Q2 2020 we’re setting a target of responding to 50% of escalated Meta posts from across the network within 2 weeks of status-review getting added to them. There are about 6 weeks left before the end of Q2, which should give us a good window to see whether this new period is consistent with the testing period, and whether these targets are indeed reasonable (fingers crossed!).
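To make the target concrete, here’s a minimal sketch of how it could be measured at the end of the quarter; the delays below are hypothetical placeholders, and only the 50% threshold and the 2-week window come from the paragraph above:

```python
from datetime import timedelta

# Target from the paragraph above: respond to 50% of escalated posts
# within 2 weeks of status-review getting added.
TARGET = 0.50
WINDOW = timedelta(weeks=2)

# Hypothetical response delays for escalated posts; None = no response yet.
response_delays = [timedelta(hours=3), timedelta(days=5),
                   timedelta(days=20), None]

within = sum(1 for d in response_delays if d is not None and d <= WINDOW)
share = within / len(response_delays)
print(f"{share:.0%} responded within 2 weeks -> "
      f"{'target met' if share >= TARGET else 'target missed'}")
```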
For this process to keep working for both the community and the company, the CM Team will, for as long as it’s in effect, provide you with guidance every 2 months on what posts to escalate — as I did for the testing period — along with stats similar to the above for the preceding 2 months. That way, we can ensure that y’all escalate stuff that’s more relevant to our current projects, and thus more likely to be impactful. You can expect a post before the end of May with guidance on what to escalate in June (update: since posted here); in the meantime, the guidance from the testing period still applies, but we’d also like to see anything relating to our ongoing work on Review Queues. As I noted in the guidelines for the testing period, though, we’re not just looking for stuff relevant to current projects: the guidance I offered there for new posts should still hold going forward, regardless of what our teams are working on.
The ratio, shared above, of posts tagged by staff vs. mods, and whether or not those posts got an answer, isn’t quite what we’d expected. This may be because the tag gets added to posts relating to issues currently being worked on, which means a higher likelihood of getting a reply… but it may also be due to uncertainty on the mods’ (and/or the communities’) end about whether stuff should be escalated. Whichever the case, we’d like to see more stuff escalated by the community and the mods. As such, I’d like to invite the moderators to grab a CM in The Teachers’ Lounge when in doubt, or to err on the side of over-escalating — this may drive our numbers down a bit, but that’s a risk we’re willing to take in order to make sure the process is as clear from your end as it is from ours.
The guidance and targets mentioned above are useful for new posts, or posts relating to projects our teams are working on… but what about older stuff? We’re gonna take the time between now and the end of Q2 to create guidance and establish a separate target for escalating and responding to old posts that don’t relate to any ongoing projects. These are likely to get much slower responses, so it makes sense for them to have a target (and guidance) of their own. Measuring these will also add some depth to the stats we plan to share, as they’ll touch on longstanding bug reports and feature requests from our communities.
We’ve gone through several internal changes, and through them all, our commitment to improving how we respond to our communities has remained a top priority. As we continue to improve the ways we respond to our communities’ requests, we hope you’ll see tangible progress taking place. As with the previous posts, feedback on how the process went from your end is welcome, as are requests for clarification.