
Users in a target audience are likely to experience the bandwagon effect because they rely on others' assessments of information. In some domains [1], certain users are expected to be less susceptible to such phenomena. However, even users such as clinicians are being swayed by the hype around novel diagnostic solutions, which promotes the bandwagon effect. For instance, Artificial Intelligence (AI) is rapidly evolving into solutions for clinical practice [2], but in reality the field has not yet been fully embraced in clinical practice.

Given this, UX research must take the bandwagon effect into account to mitigate the associated cognitive bias. Some of our research problems concern the biased answers that clinicians may provide during these studies. For instance, we study usability (e.g., SUS) and workload (e.g., NASA-TLX) to understand workflow changes, but we suspect the clinicians' answers could be biased, because they are now more likely to be influenced by the clinical community, which treats these solutions as a hype.

Thus, the following question arises:

How can we avoid the bandwagon effect among users in the clinical domain for UX research purposes?

References

[1] Luckhoff, C. (2021). Bandwagon Effect. In: Raz, M., Pouryahya, P. (eds) Decision Making in Emergency Medicine. Springer, Singapore. https://doi.org/10.1007/978-981-16-0143-9_9

[2] Calisto, F. M., Santiago, C., Nunes, N., Nascimento, J. C. (2021). Introduction of human-centric AI assistant to aid radiologists for multimodal breast image classification. International Journal of Human-Computer Studies, 150, 102607. https://doi.org/10.1016/j.ijhcs.2021.102607

  • Is the problem that you're trying to solve, using your example, that clinicians might be influenced by hype and thus might over-index positively on the usability of something like an AI-based solution because they don't want to be perceived as "uncool" or a tech laggard?
    – Izquierdo
    Commented Oct 21, 2021 at 17:17
  • Exactly! The problem concerns the biased answers that clinicians may provide during these studies. For instance, we study usability (e.g., SUS) and workload (e.g., NASA-TLX) to understand workflow changes, but we suspect the clinicians' answers could be biased, because they are now more likely to be influenced by the clinical community, which treats these solutions as a hype. Commented Oct 21, 2021 at 18:46
  • I have clarified my question. I hope everything is clearer now. Thank you for your comment. Commented Oct 21, 2021 at 18:51

1 Answer


When screening participants for your research studies, you might include questions that measure how likely they are to be influenced by peers and hype in general. The questions could be open-ended and oblique, such as "What process did you go through when choosing your latest mobile device?" You'd be looking for participants who show rigor when making important purchasing decisions.

You might also look at screening out participants who have strong familiarity with the products or services being hyped, in a way that might introduce bias — testing only clinicians who are "low information" in that area. In your screening survey, you could name a service and ask potential participants to rate it on a scale from 1 to 5, with "I have never heard of this" as an additional option. Participants being screened won't know that you're looking for the "never heard of this" responses.
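As a rough illustration, that screening logic could be automated when processing survey exports. This is only a sketch; the response codes, participant IDs, service names, and cutoff below are hypothetical assumptions, not part of any standard screener.

```python
# Hypothetical screener scoring: keep only "low information" participants,
# i.e. those who answered "I have never heard of this" for the hyped
# services being probed. All names and codes here are illustrative.

NEVER_HEARD = 0          # code for "I have never heard of this"
# Familiarity ratings would otherwise be integers 1-5.

def is_low_information(responses, max_known=0):
    """Return True if the participant reports familiarity with at most
    `max_known` of the probed services."""
    known = sum(1 for rating in responses.values() if rating != NEVER_HEARD)
    return known <= max_known

# Example survey responses: participant -> {service: rating}
candidates = {
    "p1": {"service_a": NEVER_HEARD, "service_b": NEVER_HEARD},
    "p2": {"service_a": 4, "service_b": NEVER_HEARD},
}

eligible = [pid for pid, resp in candidates.items()
            if is_low_information(resp)]
print(eligible)  # ['p1']
```

Keeping the cutoff (`max_known`) as a parameter lets you relax the criterion if too few clinicians pass the screener.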
