69

This is one of three project announcements for Improving Review Queues. We’ve summarized the project objectives and goals here.

As a reminder, this project is still in the early stages of discovery. In this post, we are sharing proposed changes to review bans, along with other new features, and asking for your feedback before we begin implementation. Once we’ve collected community feedback, we’ll consider incorporating your suggestions into the next design iteration.

Suspension of privileges

Ban notification

Most users learn how to be good reviewers by actually reviewing posts, but sometimes they make wrong decisions along the way. This can lead to a far-from-obvious suspension of privileges, with too little guidance about what they did wrong and how they can improve in the future.

In the event of a suspension, the user will be notified in the Review Queue dropdown. A notification showing the reason and the time remaining will be available on the Review Queue main page, along with guidance for continued learning in this area. We’re also proposing softening the language by renaming “review bans” to “review suspensions.”

Open questions:

  • Do you have suggestions of other ways we can notify reviewers of a ban?
  • What information would be most helpful to you in this situation?

New features

My Tasks queue
We’re considering introducing a new, additional workflow and means of task discovery. My Tasks is a curated experience, currently based on your Watched Tags. This page includes an overview of pending tasks that may require more immediate attention or that match a user’s subject-matter expertise.

New filter options
We’d also like to add more useful, robust filtering options on all queues. Right now, the filter function is difficult to discover. We want to make this feature more noticeable and add queue-specific options. For example, you will be able to filter by your Watched Tags and sort tasks that are soon to expire out of each queue.
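The proposed filtering could work roughly like the sketch below; this is a hypothetical illustration only, and the task field names (`tags`, `expires_at`) are assumptions, not the actual data model:

```python
def filter_queue(tasks, watched_tags, expiring_first=True):
    """Keep only tasks whose tags intersect the reviewer's Watched Tags,
    optionally surfacing the soonest-to-expire tasks first."""
    matched = [t for t in tasks if watched_tags & set(t["tags"])]
    if expiring_first:
        matched.sort(key=lambda t: t["expires_at"])
    return matched
```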

Badges
Currently, the gold Steward badge is awarded once at 1,000 reviews per queue regardless of whether you’ve performed 1,000 or 10,000 reviews. We propose that reviewers can now earn this badge multiple times, adding more opportunities to earn badges while reviewing - this change is intended to be retroactive.
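A minimal sketch of the repeatable-badge arithmetic, assuming the 1,000-review threshold stays fixed and the award is applied retroactively:

```python
def steward_awards(reviews_in_queue, threshold=1000):
    """One Steward badge per completed block of `threshold` reviews in a
    queue, so 10,000 reviews would yield the badge 10 times."""
    return reviews_in_queue // threshold
```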

Open questions:

  • What would you want to be able to customize in My Tasks?
  • Are there other review-centric badges you would like to see added?

In case you missed it...

We propose changes to Onboarding and Updated workflows/pathways here: Improving Review Queues - Design overview I.

  • 3
    Please address if there will be changes to the math for determining suspension lengths. Currently years old strikes count against a person cumulatively and don't age out of the system, or get lessened by huge numbers of good reviews. Commented Apr 23, 2020 at 18:22
  • 4
    @JasonAller AFAIA they do age out of the system: your suspension duration is doubled if you were suspended within the last 30 days; otherwise it is reduced. The more time you spend out of suspension, the shorter your next suspension will be (and obviously moderators can impose arbitrary-length bans up to a max of a year). Commented Apr 23, 2020 at 18:24
  • 6
    IIRC, both increases and decreases are exponential, @JasonAller - as Nick says, the trick to getting a short review ban is to just not get review banned for a while.
    – Shog9
    Commented Apr 23, 2020 at 20:34
  • I was asking primarily about manual bans and not automatic failed audit bans. Commented Apr 23, 2020 at 20:43
  • 5
    Manual bans are entirely at the discretion of moderators; there's only an upper limit.
    – Shog9
    Commented Apr 23, 2020 at 21:57
  • 2
    I think we should remove the review badges; they're actually an incentive for robot-reviewing. And people robot-reviewing is much worse than people not reviewing.
    – hkotsubo
    Commented Apr 23, 2020 at 23:04
  • 3
    "...this change is intended to be retroactive." EdChum, gnat and others will be very pleased to hear that. Commented Apr 24, 2020 at 7:08
  • @Trilarion How many badges would they get from that, do you know? Is that a thing that can be queried? Commented Apr 25, 2020 at 17:14
    Filtering by watched tags could actually motivate me to go back to reviewing regularly. Previously I was annoyed by having to read bad posts about games I'm not interested in all the time when reviewing. If I could filter to only those that interest me, not only would I have less to do and it would be easier, but I would also see only the tags that I actually care about keeping clean. Commented Apr 25, 2020 at 17:22
    @FabianRöling Filtering by tag has been possible for many years, IIRC. EdChum has about 90k reviews (see the stats), and although I think that reviews are important, no amount of badges will motivate me to do any more reviewing work. Commented Apr 25, 2020 at 19:59
  • @Trilarion Filtering by tags has been possible for quite some time, but (i) it's extremely undiscoverable, and (ii) Fabian specified by watched tags, which you'd currently need to do manually, and then re-set manually every time you wanted to review under a different filter and then come back. This is not the level of usability that makes it easy to attract new reviewers.
    – E.P.
    Commented Apr 27, 2020 at 13:43
  • I love it! I love the My Tasks feature.
    – Dharman
    Commented Apr 30, 2020 at 16:21
  • 1
    I just found this question while checking if someone ever proposed filtering queues by watched tags. Any news if this is still worked on?? Commented Dec 22, 2021 at 13:36
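The suspension-length escalation that Nick and Shog9 describe in the comments above (doubling on a re-ban within 30 days, exponential decay otherwise) could be sketched as follows; the actual formula isn't public, so this is a hypothetical reconstruction:

```python
def next_suspension_days(previous_days, days_since_last, max_days=365):
    """Hypothetical escalation rule: double the previous duration on a
    re-ban within 30 days, otherwise halve it for every further 30
    "clean" days, never dropping below one day."""
    if days_since_last <= 30:
        return min(previous_days * 2, max_days)
    decayed = previous_days // (2 ** ((days_since_last - 30) // 30))
    return max(1, decayed)
```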

12 Answers

42

Only one note here...

We’d also like to add more useful, robust filtering options on all queues. Right now, the filter function is difficult to discover. We want to make this feature more noticeable and add queue-specific options. For example, you will be able to filter by your Watched Tags and sort tasks that are soon to expire out of each queue.

This is good, but... It won't be enough. The most effective reviewers are people who are invested in the topic - and they're overwhelmingly browsing questions in relevant tags, not haunting review. We identified this back in 2013, built a system to capitalize on it in 2014, and... then had to turn it off in 2015. It'd make a great tie-in with your "my tasks" proposal...

Put the entrypoint to this in front of the folks who care the most - those who are already taking care of their tags!

29

Review suspensions - clarity for users and moderators

  • As a user, it's useful to know if you've been review banned, even if you aren't frequently checking the review queues to see the warning there.
  • As a moderator, it's useful to know whether a user is currently review banned, and also whether they've been review banned in the past (and why, and for how long).

Review bans are much like suspensions in many ways, except that they block access to only one specific subset of site functionality rather than all of it. Can we treat them more like suspensions in terms of informing people, too? (“People” here covers both users and moderators.) I suggest:

  1. A review suspension should send an automatic notification to the suspendee. Rather like the mod messages accompanying suspensions, but there's no need to introduce a special new message system for it. Just a notification in the user's inbox saying "you have been temporarily blocked from reviewing" with a link to the review page where they can see the full ban message and reason.
  2. A review suspension should add an automatic annotation to the suspendee's account. As a moderator, when I review-ban a user, I need to annotate their account separately. This is useful both for my fellow moderators, to see at a glance what I've done and why, and for any moderators in the future, to see at a glance how often this user has been review-banned before, when, why, and for how long. Would it be possible to automatically annotate the user's account, maybe with a short note "review banned for X days: ..." followed by the same review-ban message that the user sees when they visit the review page?
  • 8
    Please do not create automated annotations to record each review suspension. On SO, there are users with over 20, even 50+, review suspensions, and this would flood the list and cloud the more "serious" annotations/mod-message history. We already have a link from the user mod dashboard directly to a user history page with a filtered list of past review suspensions. Commented Apr 24, 2020 at 3:49
23

Suspension of privileges

From the screenshot you provided, it seems like you intend to hide the entire review page from suspended users. I think we should still allow review-suspended users to see the header menu (at least) for reviews... we don't need to hide everything review-related from them, we just don't want them to have the ability to perform reviews.

In some cases, it can be useful for even a review-suspended user to view a pending or completed review item for other reasons, especially if we want them to be able to learn. Some people learn by looking through completed reviews to see how other users are reviewing.

New features

I plan on bringing these up in a UX interview soon, but I'll mention them here as well:

  1. Please let us filter tags to ignore. We can filter by specific tags if we want, and that's great, but I'd really, really love to never see a question tagged [haskell] in the close vote queue (I picked that tag as a random example; nothing against Haskell) if I so choose.

  2. Please let us upvote and downvote posts from all review queues. We can in some, but can't in others. This is inconsistent and I often have to click through to the main view of a question from the queue in order to downvote it.

  3. It would be nice to be able to favorite (or follow) questions from the review queue as well for certain interesting review items or items we want to make sure we follow up on later.

  • 1
    "From the screenshot it seems like you intend to hide I think it would be good to still allow review-suspended users to see the header menu for reviews..." - This is a hard-to-understand run-on sentence; did you forget to finish a thought there? That said, I like #1 - I know there's a few people on RPG.SE that already filter out all D&D from their browsing experience, so it'd be nice if those people could avoid having to try to review said questions.
    – V2Blast
    Commented Apr 24, 2020 at 0:17
  • 1
    To continue on your #1, there is already a filtering option for questions for this. So the data that is in there should be usable for the review queue as well. With perhaps a user preference to turn it off. Which could be helpful for mods and power users, for example, I imagine.
    – Luuklag
    Commented Apr 24, 2020 at 8:41
  • 1
    @V2Blast Thanks for catching that; I reordered the post while typing it up and missed that unfinished sentence.
    – TylerH
    Commented Apr 24, 2020 at 13:13
  • 4
    @TylerH If you're referring to the interviews I'm conducting, you should have seen an invite sent out last week or so. It may be in your spam folder? Other than that, all of your points have come up in other interviews and are definitely things we are considering. We are 100% including post following.
    – Lisa Park
    Commented Apr 24, 2020 at 16:38
  • @LisaPark I am -- I just haven't had a chance to fill out the sheet as work has been so hectic this week and I can't access google services from work.
    – TylerH
    Commented Apr 24, 2020 at 19:58
  • 2
    Trivia: watched tags are already used for review filtering (falls back on everything if nothing matches). But since ignored tags are not used, the utility of this is limited - for the reasons you note in #1.
    – Shog9
    Commented May 4, 2020 at 1:44
  • 1
    I love the idea of making it easy and obvious for (especially new or banned) reviewers to review recent reviews from other reviewers. Even as a seasoned reviewer at this point, a habit I’ve maintained is to go back and audit the reviews I’ve performed and skipped each week to see how my votes line up with the consensus. I don’t always agree with the consensus, but it’s a good way of identifying potential errors in my rubric and understanding of the review criteria. Even though this is possible today, it isn’t easy or obvious. I see this as a valuable potential learning tool. Commented Jun 10, 2020 at 21:14
20

So, you say this in the main post...

Please let us know if there are other Review Queue issues you’d like to see addressed as well.

...and this is a suggestion related to the "My Tasks" feature, so I figured I'd stick it here.


Way, way, back in 2014, there was a discussion on how to empower silver tag badge holders. The most highly upvoted answer there suggests giving silver tag-badge holders a few extra close votes per day in their tag.

This would allow people to handle more close reviews and help empty the queue.

This would tie in well with the "My Tasks" feature, I feel. You're already sorting reviews based on your favorite tags - it isn't too much of a stretch, IMO, to put in something here for people with silver+ tag badges to take care of.

...it's a bit tangential, true, but something to consider in relation to the "My Tasks", I think.

15

Speaking about badges, the current queues have a (not too easy to discover) tooltip for badge progress: it is shown when hovering over your total number of reviews in the queue.

Screenshot of the badge progress tooltip

I don't see that number anywhere in the screenshots – is that intentional? (Since there is a progress bar for daily reviews.) Hiding it could have the effect of reducing robo-reviewing, which is a real problem on some sites but almost non-existent on other sites which really could use more reviewing activity.

  • 3
    I'd really rather this not be hidden as the information here is valuable and the only place where it's available in aggregate like this. I think the current implementation of a 'not too discoverable' hover modal is fine.
    – TylerH
    Commented Apr 23, 2020 at 22:11
  • 11
    We're considering moving badge information to each queue's "Stats" tab and changing the green progress bars to the badge progress bars you can find in your profile. stackoverflow.design/product/components/progress-bars/#badges
    – Lisa Park
    Commented Apr 24, 2020 at 16:32
14

Thank you for focusing on the filtering system. From what I can see, it's the most under-used feature of the current system, and it could really use some attention. In particular:

Allow users to save filters

Have the existing filters way up front, together with the button for creating a new one. Make it easy for me to jump straight into the filtering modes where I am most useful, and to switch to the next such mode once I'm done with the first one.

Pre-set dupehammer filters for users with gold tag badges

or at least offer it as a suggestion. If I have a gold tag badge and you've given me a dupehammer for it, presumably it's because you trust my judgement about duplicates on that tag, so by extension you think that this is where my reviews would be most solid and most impactful. Dupehammer reviews, if presented early, count by 4x or even 5x, and they're a great way to get a duplicates queue out of the way so other users can focus on trickier reviews. Let's focus this attention where it counts the most!

Use closing history (and not just answer activity) when suggesting filters

On my main site, I rarely answer homework questions, but they make a large proportion of the questions I vote to close. (The community consensus is that the tag is on-topic but with clear guidelines on what is allowed, and many new users post off-topic questions, which I consider to be quite harmful to the site.) While there is a correlation between the tags where I'm active answering and the tags where I feel comfortable casting close votes, that correlation is insufficient to predict my future voting behaviour -- you need to look at my past closure voting history to give the best predictions.

If you can, point me to questions that should have been tagged

In addition to the above: few new-user off-topic homework questions get tagged by their authors, and this takes time to add. But I'm willing to put some money down that if you throw some data science at it, a reasonably high percentage of new-user questions tagged, say, will end up closed and tagged as homework. Give me a filter for those, please.

Allow me to filter on off-topic sub-reason

Same as above. Reviewing for closure requires different mindsets depending on why the questions were marked as off-topic, and you already have the data to make that separation.

  • re filtering by sub reasons: Isn't that already in the design draft? "Not reproducible" as an example? Commented Jul 9, 2020 at 7:49
  • If it is, I missed it (and ctrl+f doesn't give much). Can you point me to the specific place where that is discussed?
    – E.P.
    Commented Jul 9, 2020 at 12:29
  • I don't know, whether there is a discussion, but it looks like they are included in this image from the question: i.sstatic.net/DNyVc.png Commented Jul 9, 2020 at 12:30
  • @MEE To be honest I'm unfamiliar with the community-specific close reasons on SO. If this is already implemented, it would be great if a team member can confirm it.
    – E.P.
    Commented Jul 10, 2020 at 9:00
  • Well. At least the last two are (evidence). I'd guess, that the others are then too, as this is likely only a mockup. But yeah, not 100%-ly sure, until someone from team confirms. Commented Jul 10, 2020 at 9:08
12

Regarding the review bans, consider adding a feature that allows moderators to mark single reviews as bad and then automatically provide guidance based on this (and automatically issue review bans if problems repeat). This would considerably streamline manual review bans and provide guidance to the affected reviewers.

I elaborated this in more detail in a separate feature request.

11

My Tasks is a curated experience that’s currently based on your Watched Tags

The intention sounds very good. However, my issue with this is that I don't use Watched Tags the way you assume I do. I really hope I'm speaking for more than just myself here, but my workflow isn't to add what I want to Watched Tags and then look at the curated home page. I find that the home page is not sufficiently well tailored, and it's also hard for me to track.

What I do is follow the tags I want via custom filters, with Watched Tags as an extra layer on top of the custom filter. So my Watched Tags don't represent well what I'm actually after - some of them are for things I don't want.

Here is an example: there are a lot of questions tagged as both Java and JavaScript when the author needs only one or the other. So I have a custom filter which includes JavaScript, and then Java as a Watched Tag. Thus, when a question shows up with both, it's highlighted, and I check whether the tagging is appropriate and perhaps remove one of the tags if needed.

And here is how that looks:

Screenshot showing three questions, one of which has both the Java and JavaScript tags and is thus highlighted

I have most of my activity on the JavaScript tag, yet the proposed system will give me a lot of Java questions to look at.

The front page is already bad at this. The Interesting tab supposedly uses my Watched Tags to determine what to show me but I never figured out how it does that and it seems to be consistently wrong.

Screenshot of the Interesting tab right now

Here is the state right now. I've highlighted four items that are completely out of left field for me:

  1. python, concatenation, string-concatenation - Nope.
     • python: I don't care about Python; I have very limited experience with the language. My only post on the tag is on a question that was tagged both JavaScript and Python, because the author was asking how to do something similar to Python, but in JavaScript.
     • concatenation and string-concatenation: I don't consider myself an expert in, or even interested in, either of these; they are just things I happen to do. I only have a single answer on a question with the last tag.
  2. asp.net-core and kestrel - I have basically no experience with ASP.NET Core or Kestrel. I know they exist, but I can't really even tell you their exact functionality. I've never posted on anything with either of those tags and, to the best of my knowledge, never interacted with posts on them - no comments, votes, nor anything. Why that question is expected to be interesting is beyond me.
  3. pytorch and dataloader - the former is probably related to Python, but I only assume so based on the "Py" prefix. I don't really know either of these things, and I've never really interacted with them either. Why would I be interested in the question?
  4. owl, ontology, and protege - I have literally no clue what any of those three tags is. They may as well read "Gobbledygook", "Blah", and "Twaddle".

So, there you go - that's my experience with Watched Tags: they are sort of used, but irrelevant results are thrown in as well. Hence I don't use Watched Tags for "things I'm interested in" - it doesn't seem to work that way.

If Watched Tags are to be used for review filtering then it seems it would be mutually exclusive with what I do. I can either

  • maintain Watched Tags that I do want but lose the benefit of using them alongside custom filters, since that would overlap.

or

  • forfeit using the Watched Tags for review queues and still have a list of things I'm not necessarily interested in.

P.S. Let me pre-emptively link to this because it's applicable:

XKCD comic #1172: Workflow

  • You are saying the front page is bad for the workflow you use watched tags for; I would posit you are just using watched tags incorrectly. If you want to periodically check for questions tagged with [java] and [javascript], just save a search results page or a custom filter for that instead. The "watched tags" feature is designed to help you notice questions you want to see based on how they are tagged.
    – TylerH
    Commented Sep 16, 2020 at 15:43
  • 1
    @TylerH what I use is a custom filter, as it gives me exactly what I want; the watched tags do not. Before custom filters, I'd watch tags directly, but it was more cumbersome. My point here was that employees assumed a certain workflow for the tags and the front page, but I wanted to show that, at least for me, that workflow doesn't work. Maybe it doesn't work for others as well. Since employees were trying to base their decisions on that workflow being in place, I felt it important to show that maybe not all people use it, and thus the decisions they took might not be completely valid.
    – VLAZ
    Commented Sep 16, 2020 at 16:48
11

One of the current criteria involved in triggering review suspensions is blatantly wrong and needs to be corrected.

Respective feature requests have been hanging, ignored, for many years despite strong community support:

The criterion that needs correction is failing a "known good" audit when the user attempts to add a comment.

This contradicts the very intent of the audits - they are supposed to catch reviewers who aren't paying attention, but a comment indicates exactly the opposite: that the user has read the post reasonably carefully.

Given the above, the criterion should be inverted - the audit should pass, with a message like: "Attempting to comment demonstrates that you pay attention while reviewing. Congratulations, you passed this audit."


As for exactly when to pass such an audit, I don't have a strong opinion. Triggering a pass immediately when the user clicks the "add comment" link (without even attempting to actually write one) would be simpler to implement and quite smooth, but it opens a risk of abuse as a low-effort check to discover audits.

Passing the audit after the user has already written text in the comment box and tries to submit it looks more reliable, but it may cause complaints when users spend considerable effort preparing a comment, only to discover that the system swallowed it without a trace. To decrease such friction, the system could actually add the comments to non-deleted audit posts (though this would complicate the implementation a bit).

  • update: this appears to be fixed. Changes to the system were announced at MSO here and respective feature request over there was marked status-completed
    – gnat
    Commented Oct 15, 2021 at 17:13
9

Regarding review suspensions:

According to this post and several other comments I have read from Stack Overflow moderators, one major problem with the current review ban/suspension system is that many users don't notice the ban before it expires.

Maybe I misunderstand something, but it seems that while your suggestion makes these suspensions more noticeable while they are active, the notice on the review page and the inbox notification still disappear upon expiration.

To solve the other problem, too, I'd suggest the following:

  • Every review suspension has probably some kind of a unique ID in the database.
  • There'll be a new link of the form /review/suspensions/id, which will permanently show the notice and the start and end dates of the suspension to the user and to moderators.
  • This page will be linked in the notification. There'll be some visible notice, whether the ban is still active or not.
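The permalink idea above could be sketched as follows; all names here are hypothetical, since no such data model is public:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ReviewSuspension:
    id: int            # the unique database ID the answer assumes exists
    reason: str
    start: datetime
    end: datetime

    @property
    def url(self) -> str:
        # Permanent link, visible to the user and to moderators,
        # even after the suspension has expired.
        return f"/review/suspensions/{self.id}"

    def active(self, now=None) -> bool:
        now = now or datetime.now(timezone.utc)
        return self.start <= now < self.end
```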
8

You guys need to get some talented UI/UX designers on this task, not more programmer types. Programmers tend to love rules and when you are holding a hammer, everything looks like a nail. The problem with the review queues is that it's a bunch of rules in search of a solution.

You see a problem and think, "Oh, we need a new rule to address that". So you put a new rule in place, but there are still problems so you make another rule and another, and you end up with a labyrinthine mess of rules that were decided by committee. You tell yourselves, "Well, we did some user research so our solution is justified". But you'll never solve the issue that way because you keep coming at it from a creator's perspective, from a programmer's perspective.

Instead, you need to come at it from the user's perspective. Not the sort of user like yourselves who already know the site well. And not just a sampling of users for data analysis to decide what new rules to create. You need a visionary designer's mentality to crack this nut. Someone who doesn't get mired in details and data as programmers often do. You need a high-level thinker who can take a huge step back, get a big-picture view of the entire situation, and come at this with a whole new philosophy that will inform your efforts.

I don't know if you have such people at your disposal, and admittedly there are not many people like that in the world. I think it's likely that Stack Overflow is full of programmer types, and that's probably the kind of people who are in a position to make changes. That much is evident in the design of the review queues and also in numerous other mechanisms throughout the site.

At the most fundamental level, this is the reason why Stack Overflow creates so many frustrating experiences for people, especially those who don't think like a machine. The programmer mentality is evident in the site's design. In a metaphorical sense, the site's user experience feels much like trying to program a computer. Which for many beginners is a horribly painful experience. Computers are fundamentally anti-human and unfriendly. In the words of Joseph Campbell:

Computers are like Old Testament gods; lots of rules and no mercy.

Machines may be that way, but the UI/UX of a website doesn't need to and shouldn't be that way. It can be a friendly, humanizing experience if you know how to build it like that. But you have to approach the problem like a master UI/UX designer, not like a master programmer.

  • I arrived here from Stack Overflow, not realizing that I was now in the meta for the broader Stack Exchange. So my opinions are directed specifically at my experience with the review queues at Stack Overflow. Although presumably the queues are the same at all the Stack Exchange sites, in which case I think my remarks would still apply.
    – peacetype
    Commented Sep 30, 2020 at 0:20
  • 2
    This is one of the most insightful perspectives on the issue that I've read so far. +1 Commented Sep 30, 2020 at 1:01
  • 4
    FWIW, Lisa (the author of this thread) is a pretty talented UI/UX designer. So... let's hope she's given the time and resources to put that skill to use on this problem.
    – Shog9
    Commented Sep 30, 2020 at 16:34
4

Provide a confirmation "dupe hammer" to silver tag badge holders, so that if they confirm a duplicate in the review queue, the question is closed without needing a third person to confirm it.

By default, sort the review tasks so that the tasks closest to completion come first. Remember to take the "dupe hammer" into account with this sorting.
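A completion-first ordering that accounts for the dupe hammer could look like the sketch below; the field names are assumptions for illustration:

```python
def votes_still_needed(task, reviewer_gold_tags):
    """Sort key: tasks closest to completion first. A dupe-hammer holder
    finishes a duplicate review in a single vote, so such tasks sort to
    the very front for that reviewer."""
    if task["type"] == "duplicate" and reviewer_gold_tags & set(task["tags"]):
        return 1
    return task["votes_required"] - task["votes_cast"]

def sort_tasks(tasks, reviewer_gold_tags):
    return sorted(tasks, key=lambda t: votes_still_needed(t, reviewer_gold_tags))
```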

Exclude tasks for questions that have had few recent views, unless the task is a duplicate review.

In addition to using tag filters, consider using tag badges (or stats) to sort the review tasks.

Maybe some AI could be trained to predict when a user will skip a task, so tasks the user is likely to skip aren't shown.

Add an option to get an alert if a question you voted to close as a duplicate is edited.

  • "Option to get an alert if a question voting to close for duplicate is edited." This is called "following the question" and already exists.
    – pppery
    Commented Jul 9, 2020 at 15:36
    @pppery so, a checkbox when reviewing a duplicate to automatically follow it if voting to close, with the choice being remembered. Commented Jul 9, 2020 at 17:37
