
In a non-Stack Exchange online community, there is a chatbot that runs with the permission of the administrators. The primary purpose of the bot is to analyze the posts of others and suggest helpful links. Recently, the bot got into some hot water with the community when it posted some links that, while not inherently offensive, were rather offensive in the context of the actual conversation.

To what extent is it reasonable to hold the writers of a bot accountable when their bot engages in insensitive behavior?

I'm conflicted on this. On the one hand, bots are imperfect and can't be trained to handle every social scenario with perfect tact and etiquette. On the other hand, giving bot writers a quick cop-out ("My bot isn't programmed to understand trigger warnings, so when it saw 'Trigger warning: soldiers', it had no idea that the OP didn't want a link to a site where they could find their local army recruitment center. I shouldn't be penalized!") also seems unreasonable: we are all ultimately expected to reap what we sow, and one generally can't "un-offend" someone who has already been offended, regardless of whether one intended to offend or had fully apprised oneself of the social context in which one was posting.

Is it reasonable to be extra-lenient on bot writers whose bots post inappropriate content (giving them more slack than an average "live" user who negligently, carelessly, or ignorantly violates rules or posts content likely to offend), or is it better to judge on content alone, requiring bot writers to either fix their bots or take them offline under penalty of banning? For example, if the moderators' general practice is to issue a one-month suspension for ignoring a trigger warning, is it fair to issue the same suspension to a bot writer who fails to add an adequate trigger-warning detection script, or is a different approach better?
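For concreteness, here's the rough kind of detection script I have in mind; everything below (the regex, the helper names, the keyword-overlap check) is purely illustrative, not any real bot's code:

```python
import re

# Illustrative sketch: find "Trigger warning: <topics>" headers in a post
# and suppress link suggestions that touch those topics.
TRIGGER_PATTERN = re.compile(r"trigger warning:\s*(?P<topics>[\w ,]+)", re.IGNORECASE)

def flagged_topics(post_text):
    """Return the set of topics the poster flagged as sensitive."""
    topics = set()
    for match in TRIGGER_PATTERN.finditer(post_text):
        for topic in match.group("topics").split(","):
            topics.add(topic.strip().lower())
    return topics

def safe_to_suggest(link_keywords, post_text):
    """Skip any link whose keywords overlap a flagged topic."""
    return not (flagged_topics(post_text) & {k.lower() for k in link_keywords})

# The bot should not suggest recruitment links for this post:
post = "Trigger warning: soldiers. I've been struggling since my deployment."
print(safe_to_suggest({"army", "soldiers", "recruitment"}, post))  # False
```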

  • Is it possible to issue the suspension to the bot, not the bot writer? Are these actual bot/application accounts or are they just another user that is automated, in terms of how they are presented to the rest of the community? Not sure which system you are using.
    – Andy
    Commented Sep 29, 2020 at 13:37

3 Answers


A bot account is designed to perform actions dynamically, without direct involvement from its creator or any other user.

In this case, you shouldn't hold the bot owner accountable for output that a dynamic system generated and that the owner never intended it to produce.

Ultimately, you can't expect code to do everything perfectly, especially when it comes to linking outside content. "Punishing" the bot will do nothing; it doesn't learn from being "abused" and doesn't "feel" anything. The only option is to rewrite its logic to improve its capabilities and recognition.

There is no need to discipline anyone over this; it's a simple mistake.

However, if you really want to "punish" someone, just turn off the bot, or ask the creator to develop better logic for your chatroom if you have easily offended users. I'm not trying to be rude, but this sounds overheated, and I don't think any "punishment" is necessary.

In my opinion, the best course would be to ask the bot owner to apologize to the offended user(s) and to write different logic.

I'll say this again: when a bot dynamically generates content (like links to outside content based on a search), you should expect that outside content to be unmoderated at all times. This is why few bots have such features: you cannot control what will be produced, and the bot could send inappropriate or offensive links because it is simply finding content based on the search terms.
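To make that concrete, here is a minimal sketch of how such a bot typically works; the stand-in search function is purely illustrative, and no real API is being described:

```python
# Stand-in for an external search backend; a real bot would call a web
# search API here. The point is that the bot only relays what comes back.
def external_search(query):
    index = {  # placeholder results; a real index could return anything
        "soldiers deployment": "https://example.com/army-recruitment",
        "garden tomatoes": "https://example.com/growing-tomatoes",
    }
    return index.get(query, "https://example.com/no-results")

def suggest_link(message):
    # Crude keyword extraction from the user's post.
    keywords = " ".join(w for w in message.lower().split() if len(w) > 3)
    # Nothing here inspects the destination: the suggestion is only as
    # appropriate as whatever the external index happens to rank first.
    return "You might find this helpful: " + external_search(keywords)

print(suggest_link("soldiers deployment"))  # relays the link unexamined
```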

Furthermore, please don't blame a bot owner for imperfect content recognition. Developing such a script isn't easy: all the bot can do is read URL text and similar meta information for the content being linked; it can't use some magical code to recognize something inappropriate in an image. AI systems that CAN read image data do exist, but they are mostly proprietary software used by big corporations to moderate their enormous platforms.
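As a rough illustration of what text-only filtering amounts to, assuming the bot has just the URL and page title to work with (the blocklist terms and function name are hypothetical):

```python
BLOCKLIST = {"recruitment", "gore", "nsfw"}  # illustrative terms only

def looks_safe(url, page_title):
    """Shallow text-only check: the bot can scan the URL and title,
    but it cannot see what an image or video behind the link shows."""
    text = (url + " " + page_title).lower()
    return not any(term in text for term in BLOCKLIST)

print(looks_safe("https://example.com/army-recruitment", "Join today"))  # False
print(looks_safe("https://example.com/photo.jpg", "a nice picture"))     # True,
# even though the image itself could show anything at all.
```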

This answer is admittedly opinionated, but I've included some points worth reading; let me know if you disagree or are conflicted about anything I've mentioned.


This question is about the politics of technology. Whilst this is a small example, it's representative of every situation where power and technology are at play.

You must hold the people who have power responsible, because the way they wield it, through political and financial means and particularly by leveraging technology, directly impacts social situations and society in general. Working on this at the small scale is a major way to prevent these issues from scaling up to become societally significant.

The people who write the bot will have made decisions about how it does what it does. If those decisions are represented in code, you either need to be able to read and understand all of that code in order to give informed consent for it to work that way, or you need to hold the technologists responsible for how they built it so that there can be accountability. Technology has all the conscious and unconscious decisions, biases, prejudices, opinions, and mistakes of the people who make it baked into how it works; it is not possible to make technology that is free of these things.

The administrators in control of that community are the ones with the power to make decisions about how the technology works. They must therefore hold the people whose technology they authorise responsible for how it works, and you must hold the administrators accountable for their decisions. The fact that it's a clever piece of technology does not mean the administrators or the authors somehow escape responsibility for it. If they are not responsible, who is?


Holding chatbot authors accountable for their bots' behavior is a complex and nuanced issue. Whether such accountability is reasonable depends on several factors:

Design and Intent: If a chatbot is designed and programmed to engage in harmful, unethical, or malicious behavior, then the responsibility lies with the authors or developers who created it. In this case, holding them accountable is reasonable as they intentionally designed the bot to act inappropriately.

Ethical Considerations: Chatbot authors should be responsible for adhering to ethical guidelines when developing their bots. If a bot violates user privacy, spreads misinformation, or engages in discriminatory behavior, the authors should be accountable for these ethical breaches.

User Safety and Well-being: Chatbots interact directly with users, and their behavior can impact individuals emotionally and psychologically. If a bot causes harm to users or puts their safety at risk, holding the authors accountable is justifiable.

Autonomous Behavior: Some chatbots are designed with machine learning capabilities, allowing them to learn and evolve their behavior over time. If an author releases a chatbot without proper safeguards, and it exhibits harmful behavior due to its autonomy, they should take responsibility for inadequate testing and oversight.

Education and Transparency: Authors have a responsibility to educate users about the capabilities and limitations of their chatbots. Transparency in how the bot operates can help users understand what to expect and hold the authors accountable if the bot behaves inappropriately.

Regulations and Standards: In some cases, legal or industry standards may hold chatbot authors accountable for the behavior of their creations. Depending on the jurisdiction, certain chatbot activities may be subject to regulations.

On the other hand, there are some challenges to holding chatbot authors fully accountable:

Limited Control: In complex machine learning models, it can be difficult for authors to predict and control all possible behaviors of a chatbot, especially in real-world scenarios.

Unintended Consequences: Some undesirable chatbot behavior may arise unintentionally due to biases in training data or unexpected interactions with users.

User Interaction: Users play a role in shaping a chatbot's behavior through their interactions. If a user misuses or exploits the chatbot, it may not be fair to solely blame the author.

In summary, while it is reasonable to hold chatbot authors accountable for malicious intent, ethical violations, or negligence in bot development, it is essential to consider the complexities of machine learning and user interactions when assigning responsibility. Striking a balance between accountability and acknowledging the limitations of chatbot design is crucial in discussions surrounding bot behavior.
