71

I'm saying this with caution because I'm basing my experience on a documentation topic that is very small and does not have enough contributors.

I've observed that since the documentation review queue became active, some proposed edits that had been waiting for several days have been approved.

These edits had already been quickly approved by two users, but because of the lack of contributors a third approval had never been granted, and the edits had not been rejected either.

Now, these edits have finally been approved, but by people who have zero reputation in said topic. I am not convinced these users were really knowledgeable enough to make such a decision.

My conclusion is that the documentation review queue is encouraging people to review edits outside their domain of competence, and this may lead to a quality decrease, especially in topics which have lower participation.

To me, the solution to the problem of edits going unanswered in topics with lower participation should rather be to accept an edit with a lower number of approvals (from people who can judge, instead of looking for approval from people who can't).

7
  • 16
    My conclusion is that the documentation review queue is encouraging people to review edits outside their domain of competence, and this may lead to a quality decrease, especially in topics which have lower participation. How is this different from all the other review queues exactly? Commented Sep 12, 2016 at 9:58
  • 4
    @FrédéricHamidi: your point being... ?
    – Zimm i48
    Commented Sep 12, 2016 at 11:02
  • 54
    @FrédéricHamidi It's different from other review queues in that none of them, besides a very small fraction of CV reviews, require technical expertise. In the LQP, SE, CV queues etc. I can easily review items I have no technical relation to, yet in docs that would be very inappropriate.
    – Magisch
    Commented Sep 12, 2016 at 11:04
  • 19
    I am skipping like 90% of the proposed changes I get.
    – Timothy
    Commented Sep 12, 2016 at 13:08
  • 7
    I said how we could offset some of this: meta.stackoverflow.com/a/331751/792066. Btw, that's not the only problem: people are happy to approve plagiarized stuff stackoverflow.com/documentation/review/changes/92539
    – Braiam
    Commented Sep 12, 2016 at 13:08
  • 3
    Another interesting wrinkle to this is that there may be more knowledgeable SO members who lurk and answer only the tough questions in a topic, and they would be the ones you want to review documentation if they know their stuff. Also, if there are more experts than questions in an area, reputation for a tag/subject is spread thinly amongst the members; even if they know all the answers, there's not enough reputation to go around...
    – BenPen
    Commented Sep 15, 2016 at 14:25
  • @Braiam, people (both authors and reviewers) should be reminded that plagiarism is bad - see suggestion meta.stackoverflow.com/questions/330001/… Commented Sep 15, 2016 at 19:41

7 Answers

24

I'm torn on what to do with this. On the one hand, one does not need to know anything about the tag to be able to recognize inappropriate content such as asking a question, spam, or rude/abusive/vandalism edits. On the other hand, one needs technical expertise to judge whether the information provided in a good edit¹ is accurate.

If we were to say only people with X score in a tag can approve docs, then some tags could really languish unless you scale it like you suggest. Conversely, the way it is now, we have lots of people able to look at the review, but we could get documentation approved that should not be, because of overzealous/robo reviewers.

I wonder if, instead of lowering the number of people needed to review, we could keep the same number but require that at least one person have a score of some amount in the tag in order to actually approve the edit. This way you get the benefit of the larger community looking at it, but you also have an expert checking it from a technical content perspective.

1. A good edit means it is not a question, spam, or a rude/abusive/vandalism edit.

4
  • 1
    At least one person sounds a bit like a bad compromise. What about at least two persons? What about a combined threshold of rep and tag score for each reviewer, or for all reviewers combined? What about really mean audits and long bans for those who fail them repeatedly? What about just more audits (are there audits yet?) to get an idea how bad the problem is? Commented Sep 12, 2016 at 20:21
  • 4
    @Trilarion that sounds like too much work for something that should be easy to solve: stop incentivizing with reputation.
    – Braiam
    Commented Sep 12, 2016 at 20:51
  • 1
    @Braiam Well, we can bet on what will happen sooner (if anything at all) if you want? Commented Sep 12, 2016 at 20:53
  • 7
    @Trilarion None of my suggestions is meant to be a hard number. I was just trying to come up with a compromise that allows users in general to review but still gives the experts the ability to make sure the content that is added is technically correct. IDK if this is achievable or not, but I figured I would toss it out there. Commented Sep 12, 2016 at 20:56
24

Following on from the reasoning behind why @NathanOliver is torn, it seems to me that the queue is doing too much work. It needs to be split in twain.

The first queue would be for filtering out spam, rude/abusive/vandalism edits, etc.

The edits that pass the first queue move onto a second queue which assesses the technical suitability of the information provided.

The latter queue would require a higher reputation score before it is accessible to a user. As with all other aspects of SO, users with enough reputation can be trusted to skip the edits that they are unable to assess confidently.

2
  • 1
    I wanted to suggest something like this myself. Though I wonder: how often do low quality edits actually appear, and how much extra work do they actually cause to technically-apt users? At the end of the day, score-in-tag checking might be all it takes.
    – Dev-iL
    Commented Sep 13, 2016 at 13:52
  • 4
    I like this idea. Documentation content should be reviewed closely by people who know the subject matter. This extra step isn't necessary for Stack Overflow answers because those have another mechanism (upvotes/downvotes) to enforce accuracy and technical suitability.
    – jkdev
    Commented Sep 13, 2016 at 17:51
5

These Documentation edits are special, since reviewers need to determine the quality of the content. It is not generic moderation and it cannot be done by a generic user with x amount of rep - rep is completely irrelevant here.

The solution is quite simple: the only way this review queue will ever work is if it is restricted to users with a silver and/or gold tag badge in the topic that the documentation-under-review is posted in.

Given the flood of technically incorrect crap that has already been posted on Documentation, I would lean towards gold tag users only.

5
  • If I follow what you mean, only gold tag users would have access to the review queue, but others could still review edits by looking for them on a specific topic?
    – Zimm i48
    Commented Sep 13, 2016 at 8:33
    @Zimmi48 What are you talking about? Gold tag user naturally means "person with a gold tag in the specific topic that the Documentation is posted under". Only that gold tag is relevant. If they have 99 other gold tags, that doesn't matter. Thought that went without saying... I've clarified the post now.
    – Lundin
    Commented Sep 13, 2016 at 8:42
  • Oh right, sorry (I confused gold badge and gold tag). The problem is that in my example topic, we only have one silver tag user and one bronze tag user.
    – Zimm i48
    Commented Sep 13, 2016 at 8:48
  • 2
    @Zimmi48 If you only have two users for a tag, then I'd say that's a pretty certain hint that writing documentation for that tag is bound to be a fiasco and that you should refrain from doing so.
    – Lundin
    Commented Sep 13, 2016 at 8:51
  • Not two users, two silver/bronze users (and 20 or so more answerers). But yes it's probably not a good idea to start some doc for that tag, but it wasn't started by me and it exists now. I would like to avoid the quality going down.
    – Zimm i48
    Commented Sep 13, 2016 at 9:01
4

Perhaps the way to flexibly state this conundrum is as a best-effort constraint that active experts should be reviewing the content in documentation. Now, there are some general questions here: how many people do we really want to put on the sidelines, and can we make Documentation refer to tag expertise on SO itself? My gut is that the most expert users we could get on the site for a tag would be those with the top 5-10% of reputation attributed to that tag. That's all well and good, but do we really want to make the universe of reviewers that small? For medium-sized tags, the top 5-10% could amount to 10 people who have given more than 10 accepted answers on the subject, assuming that the number of answers per tag/subject follows some sort of one-legged bell curve, where most users have only one accepted answer for the subject, dwindling down to the very few active experts who have the most accepted answers.

Is the object to have modifications reviewed only by the few? That would mean 90% of the active documentation users are ineligible to review documentation updates, which may be a bigger issue than edit reviewing on the SO Q&A site.

If there are "plenty" of active users with high reputation from accepted SO answers or well-rated comments (>1000 reputation? >10000 reputation?) for a tag or topic (active could mean editing/approving documentation within the last week), the algorithm to pick the top 5-10% would hold out for review by one of those experts; but if the tag has no active experts, it would let lower-reputation people review it. Maybe you allow the content to go public with a review from under x reputation, but tag it as "Needs Expert Review." Over time, if it's a developing topic, the experts who emerge can review the topics tagged that way.
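The fallback rule described above could be sketched roughly as follows. This is purely illustrative: the function names, the 1000-rep floor, the top-10% cutoff, and the "active user" model are all assumptions for the sake of the example, not anything Stack Exchange actually exposes.

```python
# Hypothetical sketch of the "expert or flag it" review rule.
# All thresholds and names here are illustrative assumptions.

def pick_experts(active_rep_by_user, top_fraction=0.10, min_rep=1000):
    """Users in the top `top_fraction` by rep-in-tag who also clear
    an absolute reputation floor."""
    ranked = sorted(active_rep_by_user.items(), key=lambda kv: kv[1],
                    reverse=True)
    n_top = max(1, round(len(ranked) * top_fraction))
    return {user for user, rep in ranked[:n_top] if rep >= min_rep}

def review_outcome(active_rep_by_user, reviewer):
    """Publish either way, but flag content not vetted by an active expert."""
    experts = pick_experts(active_rep_by_user)
    if reviewer in experts:
        return ("approved", None)
    # No active experts in the tag, or a non-expert reviewed it:
    # the edit still goes public, but carries the flag.
    return ("approved", "Needs Expert Review")
```

With ten active users, the top 10% is a single expert; a review by anyone else publishes the edit flagged for later expert attention, which matches the "developing topic" scenario above.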

It would probably work without the "Needs Expert Review" tag, but that tag would allow users to spot documentation that didn't have good expert review and take it with a grain of salt. This assumes that you can parse out reputation by tag, of course... Multiple tags per question muddies the water a lot, it's not clear that all tags are equal for this purpose...

2

In the C documentation tag there has been a rash of edits by low-rep users, possibly looking to boost their rep, posting factually incorrect information or badly coded examples, or adding content without having checked whether it, or a better location for it, already exists.

These seem to be passing review by dint of the fact that they are reviewed by similarly low-rep users (or in one case a user with high rep, but mostly from other topics) who seem not to understand that the edits are incorrect or against recommended practice.

I'm of the opinion that, in general, the less rep barriers there are to using site functions, the better. Having everyone able to contribute is an important part of how a community grows.

I myself am a relatively low-rep user; I have only 2 gold badges, both in [c] IIRC, yet it was myself, @Leandros and just one or two others that started the documentation for C while in beta. This is not to say I'm massively knowledgeable in C; generally, though, if I am not sure about something I leave a comment or an improvement request instead of approving or rejecting. This is not the behaviour the review queue engenders.

Looking at it from a low-rep editor's side: either their incorrect edit gets approved and they do not learn that their code may be incorrect, and so continue their education based on false assumptions; or their edit gets summarily rejected without any explanation - they are no better off, but now also maybe peeved at the rejection.

Rejections at least have a certain barrier-to-entry in the form of having to write a rejection reason. Approvals have no such barrier, maybe there should be?

2
  • 1
    It's even worse than that: some users cannot Reject, meaning the whole thing is skewed in favor of the robo-approvers. If that bug were fixed, we might get a slightly more balanced situation. Commented Sep 20, 2016 at 11:07
  • 1
    Dammit, I've just found it seems that The Great Rep Readjustment means I can no longer rollback such inane edits! GAH!
    – Toby
    Commented Sep 20, 2016 at 11:10
-1

Another angle to this is that if the edits aren't correct, the content will get downvoted and someone else will have to fix the problem.

There is a definite tension here between allowing vote-directed evolution to produce good content, and wanting to make only forward progress with every edit. The community may not actually be able to ensure every edit is progress toward the goal of helpful information, if the people who know the topic fully aren't paying attention at the moment, even with review. But ultimately, if there's enough expert community involvement in a piece of content, it will eventually improve to the point where it's useful to people who need to know. Editing and review exist both to approve individual improvements and to prevent edit wars and keep the dishonest edits at bay; with those excluded, content will tend to go in the right direction.

-2

I would organise it in the following way:

  • Members with xx reputation in a specific tag can give an expert approval
  • Members not reaching this reputation watermark can only issue a generic approval for styling, grammar, readability, content, spam, vandalism, etc.

Members with the higher reputation can also give the generic approval in addition to their expert approval (extra tick box).

Any edit would need 2 expert approvals and 3 generic approvals, forcing at least 3 people to approve the edit.

This should cover both the technical validity and the actual content of each document.
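The tallying rule above could be sketched like this. It is only an illustration of the proposal: the reputation threshold, the review tuple shape, and the function name are assumptions, not an existing Stack Exchange mechanism.

```python
# Illustrative sketch of the proposed rule: an edit needs 2 expert
# approvals and 3 generic approvals overall, and an expert's review
# may count as both (the "extra tick box").

def edit_approved(reviews, expert_rep=400, need_expert=2, need_generic=3):
    """reviews: list of (rep_in_tag, ticked_generic, ticked_expert)."""
    expert_votes = sum(1 for rep, _, exp in reviews
                       if exp and rep >= expert_rep)
    generic_votes = sum(1 for _, gen, _ in reviews if gen)
    return expert_votes >= need_expert and generic_votes >= need_generic
```

Note that even if both experts tick both boxes, that only yields two generic approvals, so a third person is always required, matching the "forcing at least 3 people" property.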
