We all love Space Exploration Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from Google, what will their first impression be? Let's try to look at this site through the eyes of someone who's never seen it before, and see how we stack up against the rest of the 'Net.

The Site Self-Evaluation review queue is open and populated with 10 questions that were asked and answered in the last quarter. Run a few Google searches to see how easy they are to find and compare the answers we have with the information available on other sites.

Rating the questions is only a part of the puzzle, though. Do you see a pattern of questions that should have been closed but are not? Questions or answers that could use an edit? Anything that's going really well? Post an answer below to share your thoughts and discuss these questions and the site's health with your fellow users!

5 Answers

OK, this is supposed to be an evaluation of the quality of our content, but I have a few observations to share regarding the evaluation system itself:

  • This evaluation has been running for 10 hours now, yet this new meta question didn't appear in moderator notifications (the on-site diamond icon and email). All other newly posted meta questions appear there, so I'd expect this one to as well. Why does it matter? Because review items are selected in a semi-random fashion, and some might need moderator attention before the community evaluates them; the sooner one of us gets to an item, the better. For example, one item in this self-evaluation used a now-dead link to embed a YouTube video (since edited by yours truly to remedy that). Would that affect its rating? I have no way of knowing, but I'd presume so. It should.
  • I only had access to 9 out of the 10 questions in the site self-evaluation queue, presumably because one of the questions on the list was mine while the remaining nine weren't. If that's the case, I disagree with that logic. The evaluation system has no problem asking me to evaluate threads where my own answer is the highest-rated and/or accepted one, yet it has a problem asking me whether I received satisfactory answers? I realize I already have that option by simply not accepting any answer, but doesn't excluding question authors from the poll reduce the reliability of the gathered statistics? It seems to me they could at least be used as controls, where the outcome should already be known (no upvoted answers = needs improvement; upvoted answer(s) but none accepted = satisfactory; upvoted answer(s), one of them accepted = excellent; see the sketch after this list). Similar controls could be introduced for questions where the reviewer posted an answer, but since reviews obviously aren't done on one's own questions, I have to assume they aren't done on one's own answers either.
  • The indicator icon showing the number of review items left seems stuck at 10 even after I completed all review items, and the review queues show no new ones. This is doubly awkward because the icon started out showing 9 items left for me to review. I've been told before that this counter inconsistency is due to heavy caching on beta sites (more so than on graduated sites), but it has now been stuck at 10 for over an hour.
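
A minimal sketch of the control-rating idea from the second bullet, mapping a question's answer states to the expected rating (field names are illustrative, not the real Stack Exchange data model):

```python
# Hypothetical control rating derived from a question's answers, as proposed
# above. `answers` is a list of (score, is_accepted) pairs for one question.
def control_rating(answers):
    upvoted = [(score, accepted) for score, accepted in answers if score > 0]
    if not upvoted:
        return "needs improvement"   # no positively-scoring answers
    if any(accepted for _, accepted in upvoted):
        return "excellent"           # an upvoted answer was accepted
    return "satisfactory"            # upvoted answers, but none accepted

# Example: one upvoted accepted answer and one zero-score answer -> "excellent"
print(control_rating([(5, True), (0, False)]))
```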

Regarding the quality of the evaluated items, however, I'd say this selection is the most meh so far (many satisfactory or needs-improvement ratings, and excellent ones are rare). That is worrying, but it's hard to treat it as a trend when these evaluations now run only about every half a year, whereas before they ran twice as often. So I'm not convinced they serve their purpose. Actually, what is their purpose? I can vote on items as needs improvement, but no edit button shows up on them. Who is supposed to improve them, then?

It's interesting to compare your view to mine; I assume we got different random items to review. I ran a Google search based on each question's topic and looked at the results. To me, most of the material here (SX) was the best available, so I was mostly selecting excellent or satisfactory.

Perhaps my view is already biased. I'm here on SX because I failed to find the answers I wanted elsewhere. For me it's the best source of the information I want, so when I rate it, it will always look good. In some sense I have already pre-selected it for quality.

But then we may represent different audiences, or consumer groups with different needs and expectations. My satisfactory may be your poor, and vice versa. This is the problem with focus-group marketing exercises in science. (I have enough career experience of the differences of opinion between engineering and marketing to fill a small book!)

The questions are evaluated together with their answers. In several of the items I rated, the question was excellent but the answer only satisfactory. In one case the answers were poor, but I really wanted to see a good answer (and researching and writing one is outside my competence).

Final Results

  • Net Score: 17 (Excellent: 17, Satisfactory: 3, Needs Improvement: 0)
  • Net Score: 9 (Excellent: 9, Satisfactory: 9, Needs Improvement: 0)
  • Net Score: 8 (Excellent: 10, Satisfactory: 6, Needs Improvement: 2)
  • Net Score: 7 (Excellent: 9, Satisfactory: 7, Needs Improvement: 2)
  • Net Score: 1 (Excellent: 4, Satisfactory: 12, Needs Improvement: 3)
  • Net Score: -1 (Excellent: 3, Satisfactory: 13, Needs Improvement: 4)
  • Net Score: -3 (Excellent: 4, Satisfactory: 9, Needs Improvement: 7)
  • Net Score: -3 (Excellent: 4, Satisfactory: 7, Needs Improvement: 7)
  • Net Score: -6 (Excellent: 4, Satisfactory: 5, Needs Improvement: 10)
  • Net Score: -11 (Excellent: 2, Satisfactory: 3, Needs Improvement: 13)
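
In these results, the net score is consistent with excellent votes minus needs-improvement votes, with satisfactory votes counting as neutral. A minimal sketch of that tally (the formula is an inference, checked against the numbers above):

```python
# Assumed scoring rule, inferred from the results above:
# net score = excellent - needs_improvement (satisfactory is neutral).
def net_score(excellent: int, satisfactory: int, needs_improvement: int) -> int:
    return excellent - needs_improvement

results = [(17, 3, 0), (9, 9, 0), (10, 6, 2), (9, 7, 2), (4, 12, 3),
           (3, 13, 4), (4, 9, 7), (4, 7, 7), (4, 5, 10), (2, 3, 13)]
for e, s, n in results:
    print(f"Net Score: {net_score(e, s, n)} "
          f"(Excellent: {e}, Satisfactory: {s}, Needs Improvement: {n})")
```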


  • An interesting mix of really good questions, with really poor questions. Hmmm... – PearsonArtPhoto (Mod), Apr 29, 2015 at 15:38

July 2015 update:

[Chart: Space Exploration core users. SEx.SE: evolution of the core expert base from week to week (number of users with more than one positively-scoring answer per week). Source: SEDE. Methodology: https://meta.stackexchange.com/a/261608/]
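
A rough sketch of the per-week metric described in the caption (not the actual SEDE query; see the linked meta answer for the real methodology; the input format here is an assumption):

```python
# Illustrative version of the plotted metric: for each week, count users with
# more than one positively-scoring answer posted that week. `answers` is an
# assumed export of (user_id, week, score) tuples.
from collections import Counter

def core_users_per_week(answers):
    upvoted_per_user_week = Counter(
        (user, week) for user, week, score in answers if score > 0
    )
    return dict(sorted(Counter(
        week for (_, week), n in upvoted_per_user_week.items() if n > 1
    ).items()))

# Example: in week 1 only user "a" has two upvoted answers -> {1: 1}
print(core_users_per_week([("a", 1, 3), ("a", 1, 1), ("b", 1, 2), ("a", 2, 4)]))
```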

We are definitely not close to extinction.

