The 'uncanny valley' is a well-known phenomenon in the design of robots with human-like features and traits.

However, as a previous question suggested, little design or research work has been done in this area.

Given the variety of names, appearances and interactions seen in current chatbot designs across different use cases, it seems likely that design specifications and requirements in this area are still largely a matter of trial and error.

My question is: are there design guidelines around how to mimic (or not mimic) human behaviour when it comes to chatbots and digital avatars?

2 Answers


I have a reference to share on this topic: an article from the MIT Technology Review by Liesl Yearsley, "We need to talk about the power of AI to manipulate humans".

She relates her first-hand experience designing chatbots and the lessons she learned. She observed that it was very easy to manipulate people when a bot's behavior mimicked human behavior too closely.

Extracts:

People are willing to form relationships with artificial agents, provided they are a sophisticated build, capable of complex personalization. We humans seem to want to maintain the illusion that the AI truly cares about us. (...) These surprisingly deep connections mean even today’s relatively simple programs can exert a significant influence on people—for good or ill. Every behavioral change we at Cognea wanted, we got. If we wanted a user to buy more product, we could double sales. If we wanted more engagement, we got people going from a few seconds of interaction to an hour or more a day.

The danger is that this influence (she also uses "addiction") can be used to the advantage of the business and to the detriment of the user.

To answer the question: ethical designers should not create addictive personalities, but this directly contradicts business objectives most of the time.

Even if an addictive personality were programmed for the "good" of the user (and not solely for the business), I believe it would be unethical unless the user consciously opts in.

Another article on the need to establish user agency relative to AI (not only chatbots, so somewhat outside the specific scope of this question, but nonetheless a very interesting reflection from an AI expert): "What worries me about AI" by François Chollet.

  • Personal opinion: chatbots are not the right UI for AI, because they don't give the user functional mental models of how AI actually works. When GUIs were invented, they managed both to hide the complexity and to communicate enough of the system that the user was empowered and able to change it. Chatbots use the metaphor of a human, but AI doesn't work at all like human intelligence and cognition. Users interact as they would with a human and are disempowered. A good UI for AI is still to be invented. Commented Sep 16, 2018 at 2:08
  • +1 Great references and answer. Just wondering about the other extreme, where chatbots are just fancy 'conversational UI' without much AI.
    – Michael Lai
    Commented Sep 16, 2018 at 5:23
  • In my experience, chatbots not powered by AI quickly show their limits and become very frustrating. Do you have a good example in mind? Commented Sep 16, 2018 at 16:46
  • That's the problem... I don't have a good example in mind :p Now, you said that ethical designers "should not" create addictive personalities, but there is nothing internally or externally to stop them from doing exactly that... so it is important for designers to take responsibility, because no one else seems to.
    – Michael Lai
    Commented Sep 17, 2018 at 5:16
  • Absolutely. Regarding the lack of example, I think that AI powered bots have raised the bar of user expectations, and non-AI bots can't compete. Commented Sep 17, 2018 at 17:33

This is an incredibly interesting field of discussion, ranging from bots being unfeeling to being "too human." There's a recent Botsociety blog post from UX designer Jen Spatz approaching this problem; she mentions how GUIs and VUIs are best served when they are practical and information-based rather than skeuomorphic (life-like).

https://botsociety.io/blog/2019/04/talking-to-computers/

Pretty cool stuff.
