
We all love Cognitive Sciences Stack Exchange, but there is a whole world of people out there who need answers to their questions and don't even know that this site exists. When they arrive from Google, what will their first impression be? Let's try to look at this site through the eyes of someone who's never seen it before, and see how we stack up against the rest of the 'Net.

The Site Self-Evaluation review queue is open and populated with 10 questions that were asked and answered in the last quarter. Run a few Google searches to see how easy they are to find and compare the answers we have with the information available on other sites.

Rating the questions is only a part of the puzzle, though. Do you see a pattern of questions that should have been closed but are not? Questions or answers that could use an edit? Anything that's going really well? Post an answer below to share your thoughts and discuss these questions and the site's health with your fellow users!


2 Answers


Final Results

 1. Net Score: 5 (Excellent: 5, Satisfactory: 2, Needs Improvement: 0)
 2. Net Score: 3 (Excellent: 4, Satisfactory: 3, Needs Improvement: 1)
 3. Net Score: 1 (Excellent: 2, Satisfactory: 5, Needs Improvement: 1)
 4. Net Score: 1 (Excellent: 1, Satisfactory: 7, Needs Improvement: 0)
 5. Net Score: 0 (Excellent: 2, Satisfactory: 3, Needs Improvement: 2)
 6. Net Score: 0 (Excellent: 2, Satisfactory: 3, Needs Improvement: 2)
 7. Net Score: 0 (Excellent: 1, Satisfactory: 5, Needs Improvement: 1)
 8. Net Score: -4 (Excellent: 1, Satisfactory: 2, Needs Improvement: 5)
 9. Net Score: -4 (Excellent: 0, Satisfactory: 3, Needs Improvement: 4)
10. Net Score: -5 (Excellent: 0, Satisfactory: 3, Needs Improvement: 5)



Final results overall

Comparison to four previous site self-evaluations:

Net score data

             _______________Month_/_Year________________
Question_#__|__2/14_____11/13_____8/13_____5/13_____5/12
         1  |   5         3        7        6        7
         2  |   3         3        6        4        7
         3  |   1         2        6        4        7
         4  |   1         2        6        3        7
         5  |   0         0        6        3        6
         6  |   0         0        5        3        5
         7  |   0        -1        3        2        4
         8  |  -4        -1        2        2       -2
         9  |  -4        -1        1        1       -3
        10  |  -5        -4       -2        1       -3


Descriptive statistics

Month/Year:   2/14   11/13   8/13   5/13   5/12
   Medians:      0      0     5.5    3      5.5
     Means:    -.3     .3     4      2.9    3.5
        SD:    3.2    2.2     2.9    1.5    4.4
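For anyone who wants to check these figures, here's a quick standard-library sketch (no external packages assumed). The data are copied from the net score table above; SDs are sample standard deviations, which reproduce the values reported:

```python
from statistics import mean, median, stdev

# Net scores per evaluation period, copied from the table above.
scores = {
    "2/14":  [5, 3, 1, 1, 0, 0, 0, -4, -4, -5],
    "11/13": [3, 3, 2, 2, 0, 0, -1, -1, -1, -4],
    "8/13":  [7, 6, 6, 6, 6, 5, 3, 2, 1, -2],
    "5/13":  [6, 4, 4, 3, 3, 3, 2, 2, 1, 1],
    "5/12":  [7, 7, 7, 7, 6, 5, 4, -2, -3, -3],
}

for period, vals in scores.items():
    print(f"{period}: median={median(vals)}, "
          f"mean={round(mean(vals), 1)}, SD={round(stdev(vals), 1)}")
```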

Inferential statistics

Kruskal–Wallis χ² = 14.3, df = 4, p = .006; self-evaluations' net scores are distributed differently.
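The analysis was run in R, but the χ² above can be reconstructed from the net score table alone. Here's a pure-Python sketch of the tie-corrected Kruskal–Wallis H (no scipy assumed), which reproduces the reported statistic:

```python
from math import fsum

# Net scores per evaluation period, copied from the table above.
scores = {
    "2/14":  [5, 3, 1, 1, 0, 0, 0, -4, -4, -5],
    "11/13": [3, 3, 2, 2, 0, 0, -1, -1, -1, -4],
    "8/13":  [7, 6, 6, 6, 6, 5, 3, 2, 1, -2],
    "5/13":  [6, 4, 4, 3, 3, 3, 2, 2, 1, 1],
    "5/12":  [7, 7, 7, 7, 6, 5, 4, -2, -3, -3],
}

def kruskal_wallis(groups):
    """Tie-corrected Kruskal-Wallis H (chi-squared approximation)."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    # Assign each distinct value the mean of the ranks it would occupy.
    ranks, i = {}, 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + j + 1) / 2  # mean of 1-based ranks i+1..j
        i = j
    # H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1)
    h = 12 / (n * (n + 1)) * fsum(
        fsum(ranks[x] for x in g) ** 2 / len(g) for g in groups
    ) - 3 * (n + 1)
    # Divide by the tie-correction factor 1 - sum(t^3 - t) / (n^3 - n).
    ties = [pooled.count(v) for v in set(pooled)]
    return h / (1 - fsum(t**3 - t for t in ties) / (n**3 - n))

H = kruskal_wallis(list(scores.values()))
print(round(H, 1))  # → 14.3; with df = 4 this corresponds to the reported p = .006
```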

Tukey's Honestly Significant Differences (I couldn't get the npmc package to install on R v2.15.2):
This quarter's questions (2/14) are scoring lower than August '13's (Hedges' g = -1.35, p = .02) and May '12's (g = -.95, p = .05).
Questions from November '13 might also be scoring lower than August '13's (g = -1.38, p = .06).
However, the distributions may differ too much in shape for these comparisons to be reliable (Kolmogorov–Smirnov tests, p = .055 for all three pairs).
May '12's scores aren't normally distributed (Shapiro–Wilk p = .003; skew = -.6, kurtosis = -1.6); they were scored by upvote/downvote rather than by selecting excellent/satisfactory/needs improvement.

Discussion
Might be worth revisiting August '13 to consider why net scores appear to have decreased.

  • Are things any worse now?
  • Are our standards higher?

Please offer any feedback you have on these analyses (if you at least understand them). I haven't taken into account differences in frequencies of ratings across questions in these comparisons; I only analyzed net scores. A linear decrease model might be worth contrast testing too; net scores correlate with order of site evaluation (Kendall's τ = -.36, which converts to r = -.54 using this method), indicating a downward trend. However, I haven't considered dependencies across time, or hierarchical structure of questions nested within evaluation periods. The May '12 scores may not be comparable due to method variance.
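The tau-to-r conversion method linked in the original post isn't preserved here, but the quoted figure is reproduced by Greiner's relation, r = sin(πτ/2), which is the standard conversion under bivariate normality. A minimal sketch:

```python
from math import pi, sin

def tau_to_r(tau):
    """Greiner's relation: Pearson-r equivalent of Kendall's tau."""
    return sin(pi * tau / 2)

print(round(tau_to_r(-0.36), 2))  # → -0.54, the figure quoted above
```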


My thoughts on this evaluation's question set

Overall, it's hard to identify interesting patterns. There's a basic need for more activity, a particular need for more comments, and some meta-questions to address, but I've said this elsewhere already. In retrospect, it seems I voted three questions into each category (I skipped the first one and couldn't figure out how to return to it), so I'd say "satisfactory" sums up my opinion pretty well overall.

Looking at the final results, I see my vote for How has geometry been applied in cognitive science? was the oddest, in that only one of six others agreed with me. Two to six agreed with me everywhere else (median = 4). My ratings' correlation with net scores (after subtracting my own ratings) is pretty excellent (Kendall's τ = .69, which converts to r = .89 using this method), so my ratings seem to have been pretty representative. Here's the breakdown...

I marked this question as needs improvement, then [shameless self-promotion:] I improved it!

I marked this question as satisfactory, then [shameless self-promotion:] I answered it!

I marked these questions as needs improvement, but haven't improved them yet myself. I would love to see someone take up some of the work I'm proposing below, and require no credit for these ideas.

  1. Is there a psychological basis for getting hiccups? Lots of better Google hits to be referenced:

  2. Is ignoring messages a learned behavior?

    • Frankly, programmers.SE handled this question much better than we have so far, and they did most of it three years ago, so I don't think it's safe to brush this off as just a matter of time. It might be a matter of relative subject appropriateness for the audiences of experts in question, but note how much more the programmers had to say about explanations based on cognitive theories (even if it's only folk theory) than we have so far.
    • It's also evident from the above that the question itself could've used more initial research effort. I also see some basic typographic errors, and felt it a little unclear and unfocused. In that light, it's fascinating that it's got six +1s, four favorites, only one -1, and again, no comments. I also see @JeromyAnglim editing tags only, which I think reflects his valuable counterpoint on a relevant meta-question that doesn't seem to have been resolved: When should I edit a post?

I marked these excellent and +1'd everything, but still see room for improvement, which is fine by me.

  1. How has geometry been applied in cognitive science? Sources for some additional ideas follow:

  2. Does Andler's (2012) article on 'Mathematics in Cognitive Science' provide an accurate picture of mathematics in cognitive science?

  3. Is extreme empathy and compassion considered a disorder?

I marked these questions satisfactory.

  1. What is this method used in the "it's not your fault" segment of Good Will Hunting and why does it work? Great, critical, OP-accepted response from @what.

  2. Can scientific evidence support concepts of the soul? I worked on this question quite a bit myself. I originally flagged it as primarily opinion-based before I could vote to close. That flag was declined; I'm okay with that. @user3747 accepted and expressed heartwarming gratitude for my answer, so I might be biased, but I appreciate all the editing work that went into improving it; my own contribution there was only to clean up the writing and add tags for the supplementary questions. I'm reasonably content with my answer, having edited it to address the OP edits. Those edits haven't fully addressed my criticisms, but in light of the topic's nature, I can appreciate the difficulty of the task. The question and answer may also be more canonical to whatever extent they better reflect the general issue I have with approaching the soul concept from a scientific perspective: most soul concepts aren't specific or falsifiable enough to permit coherent scientific inquiry. Again, I might be biased, but I find my answer superior to others, only three of which I find really useful, including those easily Googled:

    1. Wikipedia's soul page offers a brief subsection on science. Some interesting points made here might be worth adding to my answer or to another.
    2. Cornelia Dean's article for The New York Times' Science section, "Science of the Soul? ‘I Think, Therefore I Am’ Is Losing Force" usefully demonstrates the variety of opinions other scientists have about the soul. This doesn't make the soul a scientific topic in itself, but surveying a bunch of scientists may be one of the best ways to understand the social construct and its relationship with science from an objective viewpoint. This article doesn't report a proper survey in the psychological or sociological sense of the method, of course, but it's vaguely reminiscent of a small set of case studies, which is a step in the right direction.
    3. Sean Carroll's Scientific American blog, "Physics and the Immortality of the Soul" gives a good critique of the soul's plausibility from a physical perspective. This is a useful counterpoint to other questionable sources that follow.

