
  • Some things can be tremendously hard to understand for people who are not from the US. How ridiculous some things appear can be seen by boiling certain conversations down to their core. A: "We want more black people in a certain field." B: "Why?" A: "Because skin color does not matter." B: "WTF?" It just seems odd. Regardless of that: the assumption that the distribution of people with certain attributes within a social structure or organization has to match the distribution of those attributes in the next larger structure is nonsensical and impossible to enforce.
    – Marco13
    Commented Nov 28, 2019 at 13:54
  • @Marco13 It is sensible: you do not want your organization to live in a bubble, and one might desire equal opportunities for minorities. It is a gray area, though; it depends on what you consider a "match". It does not need to be exactly equal, of course, but at the current time women and people with a minority ethnic background experience friction in their careers, and it is not nonsensical to create opposing forces that may lead to a level playing field. Commented Nov 28, 2019 at 14:04
  • Maybe I'm overly skeptical (sorry), but you said "This survey has the goal to find out whether there are any negative effects that may follow from that and it will allow to create informed policy that may diminish those negative effects." Now: 1. I doubt that this really is the (honest) goal (but that's not something we can know or find out); 2. this goal cannot be achieved, because it is not possible to identify a causal relation between the observations; and 3., most importantly: ...
    – Marco13
    Commented Nov 28, 2019 at 17:20
  • ... I wonder how the results should allow an "informed policy". E.g., imagine the outcome is "White: 60% satisfied, 40% unsatisfied" and "Black: 40% satisfied, 60% unsatisfied" (and there is not much more in this survey!). How should this be the basis for an informed decision? That is, what could a "policy" based on that look like? And if it was implemented, and a survey two years later showed "White: 50% satisfied, 50% unsatisfied" and "Black: 45% satisfied, 55% unsatisfied", would that be an improvement?
    – Marco13
    Commented Nov 28, 2019 at 17:20
  • @PeterTaylor I agree that the survey can be debated, and any way of categorizing people may be criticized (too much or too little detail; there is no perfect way). How SE/SO categorizes demographics might be simplistic. But the idea itself, that demographics such as age, gender, and ethnicity play a role, is not so strange. The question from the OP relates to respondent fatigue and to the fact that survey methods courses teach to avoid such demographic questions. But the goal of SE/SO is to find out more about the idea they have about diversity (and related problems) on the platform. Commented Nov 29, 2019 at 11:38
  • The problem is to recognize problems that relate specifically to minorities. For this you need to be able to target them specifically, e.g. with a demographics-based survey. If you ask the entire group, or listen to the loudest majority, you may hardly hear the voices of minority groups. Indeed, maybe just "white heterosexual male" versus "other" is sufficient: it already filters out the 70% of visitors that it applies to. A more precise classification may be more than necessary, but it works. In time one may find a need to change the classification according to new findings and ideas. Commented Nov 29, 2019 at 12:05
  • The update really adds value to the answer. But the survey that you linked to was far more detailed than the loop, and it still shows how difficult all this is: words like "nicer" and "friendlier" appear. Are they the same? What is the difference? And they appear together with words like "rude" and "assholes" - wasn't this likely used as one phrase, i.e. "rude assholes"? There are approaches for sentiment analysis, but drawing conclusions from bubbles in a word cloud that was generated purely empirically (i.e. without any theory in mind) doesn't seem like a credible approach to me...
    – Marco13
    Commented Nov 29, 2019 at 17:37
  • Thank you for this answer. While there were unforced errors in the release of this particular survey, the use of demographic categories is standard and generally good practice. It seems standard survey methods have been interpreted as a tool to ignore input, rather than as a tool to allow more depth and nuance in the analysis of that input. I suppose, considering corporate's strategy of actively ignoring meta, it's not that surprising.
    – De Novo
    Commented Nov 29, 2019 at 20:38
  • @PeterTaylor I agree with you. It was wrong of me to suggest that this particular execution of a survey asking for ethnicity was not insensitive or offensive. I indeed had in mind the more general principle of asking for ethnicity, or investigating ethnicity in any other way. Commented Nov 30, 2019 at 14:16