I have a reference to share on this topic: an article in the MIT Technology Review by Liesl Yearsley, "We need to talk about the power of AI to manipulate humans".
She relates her first-hand experience designing chatbots and the lessons she learned. She observed that it was very easy to manipulate people when a bot's behavior mimicked human behavior too closely.
Extracts:
People are willing to form relationships with artificial agents,
provided they are a sophisticated build, capable of complex
personalization. We humans seem to want to maintain the illusion that
the AI truly cares about us.
(...)
These surprisingly deep connections mean even today’s relatively
simple programs can exert a significant influence on people—for good
or ill. Every behavioral change we at Cognea wanted, we got. If we
wanted a user to buy more product, we could double sales. If we wanted
more engagement, we got people going from a few seconds of interaction
to an hour or more a day.
The danger is that this influence (she also uses the word "addiction") can be exploited to the advantage of the business and to the detriment of the user.
To answer the question: ethical designers should not create addictive personalities, but this directly contradicts business objectives most of the time.
Even if an addictive personality were programmed for the "good" of the user (and not solely for the business), I believe it would be unethical unless the user consciously opts in.
Another article on the need to establish user agency relative to AI (not only chatbots, so somewhat outside the specific scope of this question, but nonetheless a very interesting reflection from an AI expert): "What worries me about AI" by François Chollet.