Where I work, everyone is evaluated according to a set of metrics. There are tons of them: the number of articles, the impact factors of those articles, citations, patents, grant money obtained, the number of people trained, even products (I'm not kidding - we are a research institution), and so on.
The evaluation itself is done by a secretary who just files away whatever we give her. There is a committee that is supposed to verify these numbers, but its members are simply out of their depth when it comes to the task, and it's not their fault.
This is not the norm in the western world, but I've seen similarly painful things done there, too.
The bottom line is that evaluating researchers is not straightforward for anyone. Sure, all kinds of metrics can be invented, and they stay relevant until people start gaming them. And gaming the metrics is easy when the people relying on them are the ones financing the research and various managers. Even with decent advisers by their side, they can still make huge errors by judging from metrics alone.
The only people who can evaluate researchers are coworkers, collaborators, and competitors. Everyone else might get dazzled by a recent Science paper, some other paper with 500 citations, or an extremely well-prepared interview. Or they might dismiss someone because they don't have first-author papers, because they publish in less impactful journals, or whatever.
This is one reason why, when hiring, many professors ask their prospective postdocs for recommendations. If someone they trust vouches for you, you get hired. They are even more likely to hire you if they have collaborated with you on a project before and you had some success together.
Trying to read into what someone has done over their career may give you some insight into how good they are at the job. But you need to look at hard facts. For instance, if someone has a few topology papers, you would expect them to be good at topology.
But if you look at something like not having enough first-author articles, you can draw multiple conclusions: i) they don't like writing, ii) they just piggyback on others' work, iii) they lack motivation or initiative, iv) they are lazy but have some skill that is essential to their research group. In my case, it is because I second-guess myself so much that I never reach the stage where I'm happy with the paper. Any of those conclusions can be true, or none of them.
My point is that you won't evaluate a researcher correctly until you understand their research and its context. Reading tea leaves (bibliometrics and other metrics) gives some information, but tends to obscure the truth. To give an example: I had more papers and an order of magnitude more citations than one of my office mates, who works in the same field. Yet he got a postdoc fellowship at MIT, because the stuff he did was so advanced that my work looked like a science fair project in comparison. The place I work at now would have dismissed my colleague without a second thought because "he's not a team player", "he publishes in low impact factor journals", "his work is not applied enough", "he doesn't have enough citations", "he has a low potential to attract funding", and my favorite, "his work is too technical".