
I am preparing a suggestion to rework the review queues icon to indicate various levels of pending reviews, "battery indicator style". The state of such an icon can be refreshed at a rate that turns out acceptable performance-wise: once every 1, 5, 10, etc. hours.

Here is a sketch of what it is supposed to look like:

(sketch of the proposed battery-style review queues icon)

(The tooltip text is naturally expected to reflect the respective icon state, saying something like "low number of pending reviews", "high number of pending reviews", etc.)

The problem is, I haven't been much into review matters lately and can't figure out what reasonable per-queue values to differentiate low from medium from high would be here. Something like 5 / 50 / 500?

Thresholds for the close queue look particularly tricky. There always seems to be high demand, and if the indication is done in a straightforward way, then 3K users will have an indicator that is always "high", which would be practically useless.

Per my estimate there are about 95 thousand 3K users (out of about 440 thousand eligible for various reviews, starting at 500 rep points); it doesn't look wise to have an unusable indicator for such a large number of reviewers.

Also, if there are folks active in reviewing at smaller sites, I would be interested to learn whether our values at SO would do fine over there, or whether I'd better propose that they be configured separately.


Here is the background story for those interested. These old, popular posts are strongly related:

Suggested edits take way too long to be reviewed due to a 2017 change to the top bar. Let's revert it

Is the top bar redesign the sole cause of the suggested edit slowdown?

Suggested edits have fallen off dramatically since roughly the week of February 13. That's when the new top navigation bar went live... This seems to indicate to me that the new navigation bar is the primary cause of the suggested edit queue hitting its cap on a regular basis...

Frankly, I would prefer things to just totally revert to the old way of displaying the number of pending reviews, rather than this battery-style stuff.

While a full revert hasn't happened, though, I decided to propose a compromise approach that doesn't impact the rest of the screen real estate and that can easily be adapted to meet whatever performance-related limitations could be blocking recovery of the old, proven solution.

  • 5
    What exactly is the goal here? To indicate to people that the queue is full and thus needs their attention? Frankly, I assume that the queues are always rather full; it's about what is in them that keeps me off, not a lack of knowing that something is in them. Commented Feb 17, 2023 at 15:25
  • 4
    I was talking specifically about my experience in the last sentence. I'm decently sure that I know my own motivation better than what you've linked. Just to be sure, I've gone for another round of edit reviewing; lo and behold, it's still depressing to see how many turds are being polished. Commented Feb 17, 2023 at 16:48
  • 4
    Sorry, but I'm not convinced by the linked plots. Looking at the second linked meta-Q, there already seems to be a downward trend starting in Jun/Jul with a lot of noise on top. That the removal of the counter may have been an occasion to jump ship / the final nail in the coffin doesn't mean it's the cause, or that recreating it will get people back. Commented Feb 17, 2023 at 17:12
  • 4
    I don't quite understand how a different icon would somehow get people to decide they want to use the review queues. If anything, the new one that we switched to in 2014 is more annoying than the number we had previously, and thus more likely to be noticed until your notification blindness sets in (due to the fact that it's always a red dot, just as before it was always a red dot with a number.) Nothing significantly changed from a UI perspective in 2014 that would reasonably cause a sustained drop in reviewers.
    – Kevin B
    Commented Feb 17, 2023 at 17:23
  • I deleted this post after the 10th downvote and indicated (in a comment) lack of support as the reason for deletion. But after studying the feedback provided in the comments more closely, I decided to undelete because I found no substantial reasons to take it seriously
    – gnat
    Commented Feb 24, 2023 at 10:55
  • @gnat Well, thanks, I guess? What reasons can I give so that you take the feedback of "What exactly is the goal here?" seriously? Commented Feb 24, 2023 at 11:49
  • @MisterMiyagi after unsuccessfully trying to make sense of the comments you posted above, I took a look at your profile on the main site and noticed just a single silver badge for reviews. Comparing this to the over 100 gold review badges I've got, I decided that it may be OK to disregard your feedback
    – gnat
    Commented Feb 24, 2023 at 11:55
  • @gnat Isn't the entire point of this exercise to get people like me to do reviews? Even if you disregard all the stuff that I raised as discussion, please consider this as strong motivation to edit the question to actually say what the point of this change is! Commented Feb 24, 2023 at 11:57
  • @MisterMiyagi not exactly so, because I haven't yet figured out what to do about 3K reviewers, with the CV queue always in high demand obscuring the indication for other queues. As of now I am primarily focusing on eligible reviewers under 3K rep doing non-CV reviews (there are over 300,000 such users)
    – gnat
    Commented Feb 24, 2023 at 12:05

1 Answer


To get a better idea of what the thresholds for the proposed feature could be, I sampled the number of pending reviews in various queues, as displayed on the reviews page, for about 10 days.

Observed numbers per queue were in the following ranges:

  • First Questions: 6.1K-6.6K
  • First Answers: 3.6K-4.1K
  • Late Answers: 2.6K-3.0K
  • Close Votes: 2.8K-4.1K
  • Reopen Votes: 494-581
  • Suggested Edits: 473-503
  • Triage: 477-499
  • Low Quality Answers: 87-138

To get simple things out of the way, the numbers above essentially answer the question of whether thresholds should be network-wide or site-specific. The answer is that thresholds had better be site-specific, because the observed values look orders of magnitude beyond what one could expect at smaller sites. For example, when I looked at the second largest site's reviews page, it displayed 487 close votes, 2 reopen votes, and 0 in all other review queues. It was like a different universe compared to what I have seen on our reviews page.


This study helped me better understand why the old number was removed from the top bar. With the values I observed, it just looks impossible to have a usable indication of pending review demand in the form of a single "aggregate" number.

To have it sufficiently detailed and convenient to use, we would probably have to display each queue's numbers separately, and that would be a radically different UI than what we had back then and what we have now, maybe similar in size and placement to the regular top bar. I doubt that the dev team is interested in doing this now, and it was even less likely back then, when they changed that old "aggregate" number into an even less useful red dot.

Anyway, getting back to the proposed solution that fits into the existing UI: after looking at the sampled numbers, I decided to pick the threshold between average and high queue load such that it was crossed in less than 1/10 of my samples (except for the close queue). For the threshold between average and low I picked about 1/4 of the higher threshold.

The reason for picking the high threshold like this is that I wanted to minimise the risk of some queue permanently dominating the indicator, assuming that it will reflect the queue with the highest demand. By the way, it is possible for the system to do such sampling and threshold (re)calculation automatically, say once every week or two.

For the close queue I just picked an arbitrary high threshold, because I felt unable to reliably figure out a value that wouldn't carry the risk of eventually dominating over the other queues.
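
To make the rule concrete, here is a minimal sketch of how such a (re)calculation could work, assuming each queue's samples are kept as a plain list of observed pending-review counts; the function and parameter names are hypothetical, not anything from the actual Stack Exchange codebase:

    import statistics

    def compute_thresholds(samples, close_queue_high=10_000):
        """Return {queue: (low, high)}: `high` is roughly the 90th percentile of
        the sampled counts (i.e. crossed in fewer than ~1/10 of samples), `low` is
        about 1/4 of `high`; the close queue gets a fixed, arbitrary high value."""
        thresholds = {}
        for queue, counts in samples.items():
            high = statistics.quantiles(counts, n=10)[-1]  # ~90th percentile
            if queue == "Close Votes":
                high = close_queue_high  # arbitrary override, as explained above
            low = high / 4
            thresholds[queue] = (round(low), round(high))
        return thresholds

    # example with made-up sample data for one queue
    samples = {"Reopen Votes": [494, 505, 512, 523, 537, 548, 556, 566, 577, 581]}
    print(compute_thresholds(samples))  # -> {'Reopen Votes': (145, 581)}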

So what I've got looks about as follows, with each queue indicated as high when pending reviews are over the first number and as low when under the second:

  • First Questions: 6.5K / 1.1K
  • First Answers: 3.9K / 1.0K
  • Late Answers: 3.0K / 800
  • Close Votes: 10K / 1K
  • Reopen Votes: 570 / 140
  • Suggested Edits: 495 / 120
  • Triage: 495 / 120
  • Low Quality Posts: 120 / 30
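
For illustration, mapping a queue's current pending count to the three icon levels could then be as trivial as the sketch below; the dictionary just restates the thresholds listed above, and the function name is made up:

    # thresholds restated from the list above: queue -> (low, high)
    THRESHOLDS = {
        "First Questions":   (1_100, 6_500),
        "First Answers":     (1_000, 3_900),
        "Late Answers":      (800, 3_000),
        "Close Votes":       (1_000, 10_000),
        "Reopen Votes":      (140, 570),
        "Suggested Edits":   (120, 495),
        "Triage":            (120, 495),
        "Low Quality Posts": (30, 120),
    }

    def indicator_level(queue, pending):
        """Map a pending-review count to the battery-style icon level."""
        low, high = THRESHOLDS[queue]
        if pending > high:
            return "high"
        if pending < low:
            return "low"
        return "medium"

    print(indicator_level("Reopen Votes", 510))  # -> medium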


I think it would be very desirable to additionally have a sort of personalised adjustment to the generic indicator, such that users who hit the limit in a particular queue would have that queue discarded for the purposes of the indicator.

This would serve two goals. First, it would help such users see whether there is substantial demand in queues other than the ones they have completed. For this purpose I think it would be helpful for the indicator to have a tooltip telling which queue is currently in highest demand (for a given user).

Second, it would give a satisfying sense of accomplishment to those who managed to hit the review limits in all of their queues. This looks difficult to achieve for 3K reviewers, who have 8 queues and thus 8 limits to hit, but there are over 300 thousand users with reputation between 500 and 2K who have much better chances with the 4 queues they are eligible to handle.

(Given the potential performance impact, I think it would be OK to refresh such personalised adjustments at an even lower rate than the generic review indicator.)
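
Putting the personalised part together, a rough sketch (again with hypothetical names and data structures, not an actual implementation) of how the per-user indicator could discard completed queues and pick the busiest remaining queue for the tooltip:

    def personal_indicator(pending, thresholds, limits_hit):
        """Return (level, busiest_queue) for a given user.
        `pending` maps queue -> current pending-review count, `thresholds` maps
        queue -> (low, high), and `limits_hit` is the set of queues where the
        user has already hit the daily review limit. `busiest_queue` can drive
        the tooltip, e.g. "Close Votes queue is currently in highest demand"."""
        remaining = {q: n for q, n in pending.items() if q not in limits_hit}
        if not remaining:
            # the user hit the limit in every eligible queue: "all done" state
            return "done", None

        # pick the queue with the highest demand relative to its high threshold
        def relative_demand(queue):
            _, high = thresholds[queue]
            return remaining[queue] / high

        busiest = max(remaining, key=relative_demand)
        low, high = thresholds[busiest]
        count = remaining[busiest]
        level = "high" if count > high else ("low" if count < low else "medium")
        return level, busiest

    # example: the user already hit the limit in the First Questions queue
    pending = {"First Questions": 6_400, "Low Quality Posts": 95}
    thresholds = {"First Questions": (1_100, 6_500), "Low Quality Posts": (30, 120)}
    print(personal_indicator(pending, thresholds, limits_hit={"First Questions"}))
    # -> ('medium', 'Low Quality Posts')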

