TildalWave

OK, this is supposed to be an evaluation of the quality of our content, but I have a few observations to share regarding the evaluation system itself:

  • This evaluation has been running for 10 hours, yet this new meta question didn't appear in moderator notifications (the on-site diamond icon and email). All other newly posted meta questions do, so I'd expect this one to appear there too. Why? Because review items are selected in a semi-random fashion, and some might need moderator attention before they are evaluated by our community. The sooner one of us gets to it, the better. For example, one item in this self-evaluation used a now-dead link to embed a YouTube video (since edited to remedy that, by yours truly). Would that affect its rating? I have no way of knowing, but I'd presume so. It should.
  • I only had access to 9 out of 10 questions in the site self-evaluation thread, presumably because one of the questions in the list was mine while the remaining nine weren't. If that's the case, then I disagree with that logic. The evaluation system has no problem asking me to evaluate threads where my own answer is top-rated and/or accepted, but it has a problem asking me whether I got satisfactory answers? I realize I already have that option by simply not marking any answer as the one that answers my question, but doesn't excluding question authors from the poll reduce the reliability of the gathered statistics? It seems to me they could at least be used as controls, since the outcome should already be known (i.e. no up-voted answers = needs improvement, up-voted answer(s) but no accept = satisfactory, and up-voted answer(s) with one of them accepted = excellent). Similar controls could be introduced for questions where the reviewer posted an answer, but since these evaluations obviously aren't done on one's own questions, I have to assume they aren't done on one's own answers either.
  • The indicator icon for the number of review items left seems to be stuck at 10 after I completed all review items, and the review queues show no new items. This is doubly awkward because I started with the icon showing only 9 items left for me to review. I've been told before that this counter inconsistency is due to heavy caching on beta sites (more so than on graduated sites), but it's now been stuck at 10 for the past hour or more.
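The control logic proposed in the second bullet can be sketched in a few lines. This is purely illustrative (the function and its inputs are hypothetical, not any actual Stack Exchange API), just to show that the expected rating is fully determined by vote and accept data:

```python
def expected_control_rating(answer_scores, accepted_id=None):
    """Rating a question's own stats already imply (hypothetical helper).

    answer_scores: dict mapping answer id -> net score.
    accepted_id:   id of the accepted answer, or None.
    """
    upvoted = {a_id for a_id, score in answer_scores.items() if score > 0}
    if not upvoted:
        return "needs improvement"  # no up-voted answers at all
    if accepted_id in upvoted:
        return "excellent"          # up-voted answer(s), one of them accepted
    return "satisfactory"           # up-voted answer(s), but none accepted

# Example: two answers, the up-voted one is accepted -> "excellent"
print(expected_control_rating({101: 3, 102: 0}, accepted_id=101))
```

Items whose computed rating matches the reviewer's choice would then serve as the statistical controls.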

Regarding the quality of the evaluated items, however, I'd have to say this selection is the most meh so far (many satisfactory or needs improvement, and a rare excellent). That is worrying, but it's hard to call it a trend when these evaluations are now run only about every half a year, whereas before they were run twice as frequently. So I'm not convinced they serve their purpose. Actually, what is their purpose? I can vote on items as needs improvement, but the edit button doesn't show on them? Who is supposed to improve them, then?
