
Currently, when sorting through available answers to any question (mine or not) on a topic I'm not nearly an expert on, it can be VERY difficult to separate the wheat from the chaff using up-votes - which are heavily biased towards earlier (even if not better) answers.

Another helpful metric is of course the rep - but as noted many, many times, the rep as it currently stands is often misleading for mid-range people (1k-7k): it heavily penalizes experts in less-popular fields, or people who give very few but unusually good answers, and is biased towards people who ask and answer many newbie-level questions. The latter is not a slander against such people, but merely a note that this situation makes separating merely prolific posters from genuinely competent people hard.

A suggestion to help in such separation of more-competent people is to display the user's "accept hit rate" - e.g. I'd be more willing to trust someone whose answers have so far been accepted by 100% of the askers, even if their rep is not in the 10000s.

NOTE: There was a similar discussion on Meta previously (and here's one more), but it centered on the usefulness of such statistics to the user himself, as opposed to others who will peruse their answers.

EXAMPLE 1: For a question I asked yesterday, the very best answer was actually up-voted the least (it is the last in the list of answers), and was made by a user who has the second-lowest rep of all the answerers. BUT - his ratio of accepted answers for my tags of interest is VERY high. And the second-best answer, while voted up the most, was from a user with the LOWEST rep - but he also had a decent enough accepted-answer ratio that I'm inclined to trust his info despite the low rep. Yet, if I did not know enough about the topic in question to be able to discriminate by content, I'd likely have been more trusting of one of the other two answers, which were very much not the best ones but came from higher-total-rep people.

EXAMPLE 2: My own answer to a question where I didn't have nearly enough expertise - and the answer was very sub-optimal and much worse than the ones given later by others - was not only voted far above the other answers, but actually accepted by the questioner. Go figure. (My answer wasn't bad enough to delete, and I did edit it to improve the info for SO consumption, but still, you get my point.) Yet I had a zero accepted score on that particular tag, as opposed to the other answerers who came after me but were not voted up enough.

As both these examples illustrate, the rate of acceptance of someone's answers for specific tags could be a very useful tool when sifting through the answers.
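To make the proposal concrete, here is a rough sketch of the per-tag acceptance rate I have in mind. The record shape (`tags`, `accepted` fields) and the data source are purely hypothetical - this is not an actual SO API, just an illustration of the statistic:

```python
from collections import defaultdict

def acceptance_rates_by_tag(answers):
    """Compute per-tag acceptance rates from a list of answer records.

    Each record is a dict with 'tags' (list of tag names) and
    'accepted' (bool) -- hypothetical fields, for illustration only.
    """
    totals = defaultdict(int)
    accepted = defaultdict(int)
    for a in answers:
        for tag in a["tags"]:
            totals[tag] += 1
            if a["accepted"]:
                accepted[tag] += 1
    return {tag: accepted[tag] / totals[tag] for tag in totals}

answers = [
    {"tags": ["perl"], "accepted": True},
    {"tags": ["perl"], "accepted": False},
    {"tags": ["prolog"], "accepted": False},
]
rates = acceptance_rates_by_tag(answers)
# rates["perl"] == 0.5, rates["prolog"] == 0.0
```

The point is only that the statistic is cheap to compute per tag, so it could sit next to the existing rep number rather than replace it.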

What does everyone think? Is it useful? Should it be added as a feature?

  • meta.stackexchange.com/questions/13847/… (and see others linked from that question) Commented Sep 21, 2009 at 21:42
  • (-1) for the reasons in meta.stackexchange.com/questions/22929/…
    – devinb
    Commented Sep 21, 2009 at 21:45
  • @Rich - no offense, but did you ever bother reading the question before downvoting? I specifically linked that question in my post and explicitly - in BOLD - explained how that post was completely different from what I'm asking.
    – DVK
    Commented Sep 22, 2009 at 0:45
  • @devinb - so you down-voted because I asked a well thought out question that you happened to disagree with? Nice.
    – DVK
    Commented Sep 22, 2009 at 1:04
  • @DVK: On meta, downvotes are used to indicate approval or rejection of suggested features. In this case, I would not want to see this feature appear on SO, so I downvoted it. I think you articulated it excellently, but I am downvoting the feature, not you.
    – devinb
    Commented Sep 22, 2009 at 11:57
  • @devinb - Got it. No prob
    – DVK
    Commented Sep 22, 2009 at 13:16
  • @DVK I didn't downvote your question actually, but I disagree that it is completely different. We are asking for essentially the same data, it is just a difference of where that data is displayed. You are asking for it on the user details on the answer box, I am asking for it on the user profile page Commented Sep 23, 2009 at 15:49

2 Answers


Unfortunately, as you noted, even an accepted answer isn't necessarily the best indicator of the best answer.

This is because sometimes short answers will be accepted, whereas long, thorough answers will not. Another situation where this is the case is when an answer is added to a question that already has an accepted answer. In general, answering a question with an accepted answer is a losing proposition, because the accepted answer - and maybe the highest-voted non-accepted answer - will tend to get more upvotes, while other, newer answers will get nothing.

Also, users who have fewer answers could get a 25% rating with 3 out of 12 answers accepted, whereas users with thousands of responses are not likely to ever match that^1. This creates a bias towards newer users, or at least a skew in the statistics. The argument could be made that we'll just trust the number less if the user has a lower reputation, but then we fall into the common flaw of trusting reputation to mean more than it does.
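To put a number on that skew: a raw ratio treats 3-of-12 exactly the same as 300-of-1200. One standard statistical correction - my illustration here, not something anyone in this thread proposed - is the Wilson score lower bound, which pulls rates backed by few answers down toward zero:

```python
import math

def wilson_lower_bound(accepted, total, z=1.96):
    """Lower bound of the ~95% Wilson confidence interval for an
    acceptance rate; small samples are discounted heavily."""
    if total == 0:
        return 0.0
    p = accepted / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt((p * (1 - p) + z * z / (4 * total)) / total)
    return (centre - margin) / denom

small = wilson_lower_bound(3, 12)      # 25% raw rate from few answers
large = wilson_lower_bound(300, 1200)  # 25% raw rate from many answers
# small comes out well below large, even though both raw rates are 0.25
```

Whether SO should publish such a number at all is the actual disagreement here; the sketch only shows the small-sample bias is fixable if they did.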

Ultimately, I feel like this feature would cause a lot of people to change their habits and only answer questions if they think they can get the "accepted answer", rather than if they think it could help someone. It would also lead to a mass deletion of 'non-accepted' answers, even if they were good answers that just happened to have no upvotes.

1) Jon Skeet is not constrained by such generalities.

  • Actually, Jon has about a 45% acceptance rate (44.8% at this moment). I used my script to check the top 14; most are between 25% and 38% (with three not far behind at 19%, 21% and 23%) Commented Sep 21, 2009 at 21:55
  • but then again you can prove anything with facts Commented Sep 21, 2009 at 21:57
  • @Rich. Modified. But the laws of large numbers still exist. I'd be interested to see a graph of some kind that breaks down the percentage based on number of answers.
    – devinb
    Commented Sep 21, 2009 at 22:06
  • @Devinb - you seem to be arguing against something I never intended to state in your first paragraph. I merely pointed out that the rate of accepted answers from the user - taken on average, especially per tag - is a better guideline to readers as to the predicted quality/trustworthiness of the user's new answer, combined with the existing indicators such as votes and total rep. I never argued that it is the best indicator, or the only one worth anything - merely that with such an indicator, the filtering of good answers - again, on average - is more likely to be accurate than without it.
    – DVK
    Commented Sep 22, 2009 at 1:00
  • Also, it's possible to calibrate the system to adjust for both of your concerns: for the first one, don't show the rating for people with too few answers (this means unknown trust rather than lower trust - see Sybase's meaning of NULL)
    – DVK
    Commented Sep 22, 2009 at 1:02
  • For the second concern, as per the linked posts on Meta, you can already find out any user's percentage, so anyone who for some perverse reason is motivated to game the system to increase their percentage of accepted posts can already easily do that, since they already know their own score. All my proposal does is make that score visible to OTHERS, who have no incentive and no way to help game the system this way.
    – DVK
    Commented Sep 22, 2009 at 1:03
  • @DVK: The fact that people can find trivia is not the same as having the system publish it to them. There was always a way to discover the percentage of questions that have "accepted answers" per user, but when they began publishing the information, they were saying "It is okay for you to act based on this information" ...
    – devinb
    Commented Sep 22, 2009 at 11:59
  • Similarly, as soon as we begin publishing statistics that are expected to relate to someone's 'trustworthiness', we are telling people that they can 'read into' these numbers and ascertain the truth of someone's answer. That's gibberish; each answer should be evaluated on whether or not it works, regardless of who wrote it. Another note that I didn't include in my initial response: once this feature is implemented, people with a low percentage will find themselves less credible, even on their correct responses.
    – devinb
    Commented Sep 22, 2009 at 12:02
  • Either people will ignore this statistic, meaning that implementing it expends time better spent elsewhere, or people will use the statistic and it will unfairly bias people's perceptions of the quality of a response. "This person doesn't have a high %, so they must be wrong", and on a similar response "This person does have a high %, so they must be right". Even if the second person copied from the first.
    – devinb
    Commented Sep 22, 2009 at 12:03
  • Devin - the same argument can be applied to ANY statistic, such as rep. All I'm saying is that using rep + accept rate is more helpful to other users in assessing the predicted quality of the answer than rep alone
    – DVK
    Commented Sep 22, 2009 at 14:03
  • That's exactly the point. On meta it's been known and discussed for a while now that reputation is not a good representation of professional skill. And for the reasons I've discussed, neither is answer percentage. Each question and answer should be judged based on their own merits, and should have nothing to do with the reputation or arbitrary statistics of the author. Being right on 20 other C# questions does not make me right on the 21st, because we have no idea how hard or technical the first 20 were.
    – devinb
    Commented Sep 22, 2009 at 14:10
  • @devinb I'd be interested in seeing a graph too. I've been meaning to do some analysis of the data dump for a while. Using my script I've looked at 100s of users, and the ones that I feel give consistently high-quality answers tend to have an accept rate of 25%+, even for large numbers of answers. This indicates there might be some value in this data. I personally don't think it should be on the user's answer box, though, for the reasons you describe, but I think it is interesting on the profile page, and worthwhile awarding badges for having N answers accepted in a given tag. Hence my request Commented Sep 23, 2009 at 15:57
  • I still worry that it may bias people's opinions of the content of the answer. If the information is viewed on the profile, then someone will write a greasemonkey script to display it on every answer. Other (less greasemonkey-savvy) users will simply check the user page for every answer they wonder about. The point is that we are using past acceptance rate to judge future performance, which isn't fair, because as people get better with using the site (and their own technical expertise) their answer quality will tend to grow, but if their % is low, people will just assume they are wrong.
    – devinb
    Commented Sep 23, 2009 at 16:43

There's also the problem with more and less popular categories. If somebody answers questions only on Snobol programming, there won't be nearly the competition for acceptance as there is for C# or .NET.

  • I guess I didn't stress this enough, but buried in my post is an explicit admission that this should probably be implemented at the level of specific tags - see the second-to-last line. Although my concern about per-tag rate differences is more about some people being great helpers in Perl but giving pretty crappy Prolog answers (to come up with a completely fictional example), your concern is another reason to go to the tag level.
    – DVK
    Commented Sep 22, 2009 at 1:08
  • Good point, but the fairness still depends on perception. If somebody's used to an acceptance rate of >10% in the answers when asking Snobol questions, what will they think of acceptance rates for .NET questions (when IronSnobol comes out), and vice versa? Commented Sep 22, 2009 at 14:35
