10

It has recently been revealed that Israel has been heavily relying on an Artificial Intelligence (AI) system, Lavender, to form kill lists and select air strike targets. The system has been subject to little or no human oversight.

During the early stages of the war, the army gave sweeping approval for officers to adopt Lavender’s kill lists, with no requirement to thoroughly check why the machine made those choices or to examine the raw intelligence data on which they were based. One source stated that human personnel often served only as a “rubber stamp” for the machine’s decisions, adding that, normally, they would personally devote only about “20 seconds” to each target before authorizing a bombing — just to make sure the Lavender-marked target is male. This was despite knowing that the system makes what are regarded as “errors” in approximately 10 percent of cases, and is known to occasionally mark individuals who have merely a loose connection to militant groups, or no connection at all.

Are there any international laws and / or treaties that regulate the use of AI systems such as Lavender in combat and war? From my own research, it does not appear that there are specific laws in place to regulate such systems, though there does seem to be significant concern about their usage. If this is correct, are there any existing laws / treaties that were not designed with autonomous weapons in mind, that could be used to regulate them? Are there any plans afoot amongst the international community to draft such laws / treaties?


Edit:

To highlight the problematic nature of Lavender, it has recently come to light that one of the inputs to Lavender is WhatsApp group membership. That is, if a Gazan is in the same WhatsApp group as a suspected Hamas member (apparently Meta are happily handing this information over to Israeli intelligence), Lavender could recommend killing them. Since there is no oversight of Lavender's decisions (beyond checking that the target is male), this could easily result in a strike, with the individual killed for the heinous crime of belonging to a WhatsApp group.

8
  • 1
    Shouldn't this question be crossposted on law.stackexchange.com?
    – J-J-J
    Commented Apr 5 at 18:24
  • 5
@J-J-J It shouldn't be crossposted; posting the same question on multiple SE sites should be avoided. It could have been posted at Law instead of being posted here, depending on which perspective matters most to the OP.
    – quarague
    Commented Apr 6 at 16:51
  • 1
    You can go ahead and add the genocide tag to this question. Let's call it what it is.
    – Mentalist
    Commented Apr 12 at 5:21
  • 1
While Meta are handing over WhatsApp data for targeting, Google and Amazon have built Project Nimbus, which powers the facial detection. The project has been known about for years. It seems this recent 30-fold retaliatory massacre is the culmination of Nimbus' efforts. At least some workers at Google have a conscience and are protesting. WIRED: "Google Workers Protest Cloud Contract With Israel's Government". Search for "Project Nimbus"; see articles from The Guardian and The Intercept as well.
    – Mentalist
    Commented Apr 17 at 4:29
  • 1
...aaand Google has responded by having those workers arrested and fired. I commend all of the protestors who had the courage and resolve to endure as long as possible. Most tech workers want to make a positive difference in the world, not help blow people up, especially when most of those people are not even militants but collateral damage. Apparently Google values "workplace behavior" over human rights.
    – Mentalist
    Commented Apr 19 at 5:44

2 Answers

21

The consensus seems to be no. The United States and 46 other states have endorsed the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, but that is just a declaration and lacks legal standing (and Israel isn't a signatory in any case). Even the declaration envisions AI use in the military; it simply calls for it to be used ethically and under human control.

Human Rights Watch has called for a new treaty on autonomous weapons, i.e. weapons that don't even have the human "rubber stamp" that the Israeli program does. We're talking about theoretical machines that would deploy, identify, and kill combatants by themselves. That would seem to be an admission that they are not currently proscribed.

A recent Congressional Research Service report likewise stated that there are currently no international laws that would ban autonomous weapons, though several countries are pushing for such a treaty.

LAWS are not yet in widespread development, and some senior military and defense leaders have expressed concerns about the ethics of ever fielding such systems. For example, in 2017 testimony before the Senate Armed Services Committee, then-Vice Chairman of the Joint Chiefs of Staff General Paul Selva stated, “I do not think it is reasonable for us to put robots in charge of whether or not we take a human life.” Currently, there are no domestic or international legal prohibitions on the development of LAWS; however, an international group of government experts has begun to discuss the issue. Approximately 30 countries have called for a preemptive ban on the systems due to ethical considerations, while others have called for formal regulation or guidelines for development and use. DOD Directive 3000.09 establishes department guidelines for the development and fielding of LAWS to ensure that they comply with “the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement.”

All of which is to say that Israel could, in theory, create a program that identifies militants and launches strikes without a human being at the console, and that in itself would not be explicitly banned by international law. Of course, the way it is used could violate international law, but that is true of any weapon.

1
  • 2
Pretty much what is said in Army of None, written in 2018 by a subject matter expert. "20 seconds to review", while concerning, is also not the same as the AI being demonstrably autonomous. Just the same way that the proportionality principle has no clear boundaries attached to it. Commented Apr 6 at 2:35
7

That international law already applies to AI also means that an international treaty for AI is not a given. AI does not exist in a legal vacuum and, as noted earlier, general protections and prohibitions under international law are still relevant. Before deciding whether or not a treaty is needed, States must better understand what the existing international legal framework looks like when applied to AI. When considering the pros and cons of such a treaty, relevant questions include: Is the existing legal framework sufficient? Does it leave gaps in the protection of certain values or groups, especially vulnerable persons? Is it adequate to address the new challenges and risks raised by AI? Does it need more granularity to achieve the right balance? Is there a suitable existing forum to have global discussions on AI? How can diversity of thought be meaningfully built into AI negotiations from the outset, including next-generation, female leaders, and Global South perspectives? Treaties take a lot of time, political will, and effort, and may be easily outpaced by the development of technology. The risk is also that in an attempt to reach a consensus, existing legal standards are watered down for AI.

https://www.justsecurity.org/90903/ai-governance-in-the-age-of-uncertainty-international-law-as-a-starting-point/

International law is technology neutral: anything that is legal under international law without AI remains legal when done with AI. If you can lawfully coordinate an airstrike without AI, you can do so with AI. This is one reason there is no legally binding international treaty specifically for AI. It is also hard to devise a way to enforce such a treaty while independently investigating breaches, and countries are unlikely to open themselves to international scrutiny, since AI research with military applications may be a state secret.

1
  • One reason to ban AI weaponry would be to reduce the chance of a rogue AI exterminating humanity one day. See The Terminator movie for a visual explanation. Commented Apr 25 at 16:04
