17

I'm confused about why people claim that the current legal system cannot handle any wrongdoings of algorithms that involve artificial intelligence. The claim is that it is impossible to find who is liable for the wrongdoing. This claim seems strange: isn't it obvious that it's the company who developed the algorithm that is liable for any issues that this algorithm caused?

Can someone explain where the current legal system/framework/laws break down when it comes to any harm caused by artificial intelligence?

7
  • Finding who is responsible might be possible, but your question makes things too simple. Often the software is developed according to requirements provided by the client, and in the case of a malfunction, discussions over whether it is due to a bug in the software or a missing requirement can go on for a long time. Furthermore, with software based on machine learning techniques, data provided by third parties might have an impact on the final behaviour.
    – FluidCode
    Commented Nov 10, 2021 at 23:51
  • 3
    @FluidCode: While I agree that this is more complex than what OP has described, IMHO people tend to overstate just how complicated it is. In real life, what would actually happen is that everyone who might be liable would have some sort of insurance coverage, and the insurance companies would figure it out on a no-fault, case-by-case basis, just as they currently do with human drivers. You could easily codify such an insurance requirement into law, although at present I'm not aware of any such statutes.
    – Kevin
    Commented Nov 11, 2021 at 18:24
  • 2
    For reference, I think this question is too focused on "Artificial Intelligence". The fact is that automated machinery has been injuring and killing people for a long time and companies that provide such equipment tend to be liable for the promises (implicit or explicit) that they make for the behavior of such machinery. The point at which most people were unable to fathom the inner workings of the decisions of software that could cause harm already occurred around 30-40 years ago. When I developed for industrial automation, I read code older than I was that did this stuff!
    – Clay07g
    Commented Nov 11, 2021 at 20:08
  • 1
    What most people think of as AI is actually "machine learning". The defining feature of machine learning is that it is trained on a set of data rather than programmed by a person. The output of that learning process is a behavioral model, not some sort of computer code that can be read by a human. The best you can do is test the model against a lot of test cases. But it's usually impossible to create, or even think of, every possible scenario the system might be exposed to. If they gathered, say, 100 years of test data, but then a failure occurred, is the company really at fault?
    – user4574
    Commented Nov 12, 2021 at 2:35
  • 1
    Actually I think that most of the examples in the answers do not give the correct idea of the problem. So I'll add another example: someone is driving on a causeway between the coast and a sea island. Suddenly the driver sees a huge wave that is going to roll over the causeway; he pushes the accelerator to flee, but the driving system does not allow him to break the speed limit, and the panicking driver does not remember how to override it. The driving assistant knows everything about what happens on the road, but it cannot have the broad knowledge of the world that is common to humans.
    – FluidCode
    Commented Nov 12, 2021 at 11:52

11 Answers

42

Real-world situations are rarely so clear-cut

Let's say, hypothetically, that I'm in the driver's seat of a car. The company told me that the car has "Full Self Driving" capabilities based on some sort of artificial intelligence, though they also said that these capabilities "are intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment." Let's say I'm not fully attentive at the moment the car's AI decides to swerve into oncoming traffic, and I fail to grab the wheel and prevent it.

Who's at fault? Is it the car company's fault for a bug that caused that? Is it my fault for failing to be fully attentive? Is it some combination of the two?

But wait, it can get more complicated: maybe the car company argues that they couldn't have reasonably anticipated the situation that caused it: maybe the lines were incorrectly drawn on the road, and indicated that the road continued in that direction. Maybe I argue that the car swerved quickly enough that even a fully attentive driver couldn't have recovered.

These and more are all facts that need to be sorted out in a trial. There's no way to simply say that "any issues that this algorithm caused" are entirely the company's fault.

In other words, this isn't really the legal system "breaking down"—it's working as intended, trying to figure out whose fault an event actually was. The law just isn't very developed yet as to the process a court would follow to assign liability.

9
  • 1
    Thank you @ryan-m! However, I was thinking more of a fully autonomous car with no need for human participation in the driving process. In that case, for your example with the lines, it was the company's responsibility to know how they would be understood by the car. Otherwise, a human being can claim the same: the lines were incorrectly drawn and confused him or her. Do you agree? I was also thinking more in terms of "foreseeability", "negligence", and "chain of causation", terms that might get blurry because of AI (e.g., it's a black box and the company cannot foresee what it will do).
    – Qwerty
    Commented Nov 10, 2021 at 10:56
  • 18
    @Qwerty "Otherwise, a human being can claim the same:" - but, surely they can? E.g. if you go through a green light and hit a car when the light was faulty and should have been red. Or the arrow on the road says you can turn left but actually it's a one-way street going the opposite way. The essence of this answer is that real life is complicated and the whole point of the court is to sort out the details. If it were as straightforward as "AI = developer is to blame", then what would we need courts for?
    – JBentley
    Commented Nov 10, 2021 at 21:15
  • What if the software developer doesn't have that "intended for use with a fully attentive driver, who has their hands on the wheel and is prepared to take over at any moment" liability remover, as they will have to if they really want it to be considered self-driving? Commented Nov 11, 2021 at 0:33
  • 12
    The issues presented in this answer aren't limited to artificial intelligence, they are equally valid for the use of any old-time machinery and their safety systems. There can be similar issues with the grey area between an inattentive operator of the machine or the manufacturer of the machine being responsible.
    – vsz
    Commented Nov 11, 2021 at 5:14
  • 4
    Just to make it more complicated - imagine cars are equipped with a standardized cooperative avoidance system, two automated cars manufactured by different manufacturers have algorithm defects such that the AI causes them to hit each other. Except one defect caused a car to swerve into another lane, and the other just caused it not to detect the first car in time. Are they both equally at fault? Commented Nov 11, 2021 at 16:09
20

Error is not always Wrongdoing

The OP writes of "wrongdoings of algorithms". To me a "wrongdoing" is something that would be criminal, or at least involve civil liability. But not every time that something goes wrong is there any "wrongdoing" in this sense. Sometimes a bad outcome is simply an accident, and no one is liable, civilly or criminally.

That said, no algorithm today is, to the best of my understanding, anywhere near the point where we can speak of the wrongdoing of an algorithm. Algorithms make errors; people do wrong. If there is liability when an error results in damage, it may be the responsibility of the maker of the algorithm, or of some individual who worked on the algorithm, or of the user who was running the algorithm, or perhaps of some other person who was in some way involved. Determining which is one thing that the legal system must do, and it isn't always easy.

In many ways this is simply the problem of liability for the failure of a manufactured product, and is no different just because an algorithm or an AI is involved, although the situation may be more complex.

The OP writes in the question:

isn't it obvious that it's the company who developed the algorithm that is liable for any issues that this algorithm caused?

The law could take that approach, but in many cases it would work an injustice. So it doesn't take that approach, at least not in the US or the UK, and I don't think it does in any current jurisdiction.

Let's consider a simple case, with a manufactured product but no algorithm at all. A carpenter is using a hammer to nail boards to studs in the framing of a house. S/he lifts the hammer back, and the head comes loose, flies away, and hits another worker in the head, injuring or killing that worker. Is the manufacturer of the hammer liable?

Possibly. If the making of the hammer used inferior parts or techniques, was not up to normal professional standards, and such a failure was reasonably foreseeable, then quite possibly the answer is "Yes". If the hammer was well-made and the failure was an unpredictable accident, then "No".

Was the carpenter liable? If s/he used an improper tool, perhaps using a light tack-hammer where a much heavier one was called for, stressing it so that failure was foreseeable, then perhaps "Yes". If the carpenter acted as a reasonable and skilled person would, then probably "No". In both cases foreseeability, and working to a reasonable standard of care, are key aspects for whether liability is imposed.

Possibly neither the manufacturer nor the carpenter is liable. The accident could be ruled exactly that, an accident with no liability from anyone.

Now let us take the case of the self-driving car. The car's AI makes an error, failing to curve when the road curves, driving into oncoming traffic, causing a crash and injuries. Is the company that made the car (or the subcontractor that wrote the software) liable? It will depend on the detailed facts.

A curving road is a very foreseeable situation, so the designers should have included handling it in the design, and should have tested such situations on a number of simulated and actual roads. The quality of both design and testing efforts would be evaluated in detail in assessing whether there is liability here. If the specific cause of the error can be found, that will help. If the cause was a misinterpreted or incorrect road marking, it will be a question whether such markings are foreseeable, as they probably are. If a human driver was supposed to be monitoring and taking control in the case of an error, that driver might have partial liability.

But the law does not simply throw up its hands and say there is no way to determine cause or liability. It will attempt to apply the same general principles that it does to possible liability for accidents involving a hammer, a train, or any other manufactured product. The details will differ with the jurisdiction, and the specific facts, but whether the accident was reasonably foreseeable, and the degree of care used by the manufacturer will usually be important.

8

Any system that might endanger people must be reasonably safe. In the UK the likelihood of an accident must be "As Low As Reasonably Practicable" (ALARP). In the event of an accident it is for the manufacturer and/or the operator to show that this was the case; otherwise they will be liable.

In practice "ALARP" is too vague a standard, so various industries have established standards which are more detailed. It is generally considered that if these standards are followed then the risk due to the system is considered ALARP. For automotive electronics that standard is ISO 26262. Other industries have similar ones. Most of these are descendants of IEC 61508. Following a standard like this is not a complete get-out-of-jail-free card, but its not far off.

The basic concept behind these standards is to start with a systematic enquiry into the question "what could possibly go wrong?". For a system that controls a car, one of those things would be "car is steered into opposing traffic". A process called "HAZOP" is used to create a list of these things and to rate them by severity. So "unnecessary emergency braking" would have a lower severity than "car is steered into opposing traffic" because the former is much less likely to kill someone (though it's obviously not impossible). A diligent HAZOP should identify all the foreseeable ways in which an accident might occur, and hence could be used as a defence to show that something outside the HAZOP was not reasonably foreseeable.
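
To make the shape of such a hazard list concrete, here is a minimal sketch of a hazard register in Python; the hazard wording, the 1-to-4 severity scale, and the mitigation entries are illustrative assumptions, not taken from ISO 26262 or any real HAZOP.

```python
from dataclasses import dataclass

@dataclass
class Hazard:
    description: str        # what could go wrong
    severity: int           # illustrative scale: 1 = nuisance ... 4 = likely fatal
    mitigations: list[str]  # design measures intended to control the hazard

# Illustrative entries only; a real HAZOP would be far more exhaustive.
hazard_register = [
    Hazard("Car is steered into opposing traffic", 4,
           ["redundant lane-position sensing", "driver takeover alert"]),
    Hazard("Unnecessary emergency braking", 2,
           ["sensor cross-checking", "minimum confidence before braking"]),
]

# Review the most severe hazards first.
for h in sorted(hazard_register, key=lambda h: h.severity, reverse=True):
    print(f"S{h.severity}: {h.description} -> mitigations: {h.mitigations}")
```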

Once the hazard list is identified the system must be designed to manage these hazards. As part of the system design process the designers will consider how each component might fail and how that would affect the system as a whole. For instance, if the wiper motor fails, what happens to the car? The system designers have a number of options, including redundant systems, fallback systems with less capability but more reliability, and of course human intervention.

Human intervention tends to be the difficult one. It's very tempting to simply hand off ultimate authority to the operator and then declare that the responsibility for accidents therefore lies with the operator. It is also not going to work. A system consists of people, processes and technology, not just the technology. Any safe system design must include the failure modes of the human operators just as much as the sensors, actuators and other technical components. Humans are known to be bad at paying attention to routine matters that don't require interaction, so a system design which assumes a permanently alert operator waiting to override an error is not going to be safe, and its manufacturer isn't going to be able to escape liability merely by pointing to an inattentive operator.

AI doesn't change anything fundamental about this. If you want to put an AI in charge of a car you need to consider its failure modes and the ways in which they might lead to an accident, just like you would for any other component in the system.

Conventionally the implementation of safety systems is based on detailed requirements (which are themselves analysed against the hazard list for safety) followed by careful implementation and testing to ensure that the resulting system meets these requirements. (I'm skipping lots of irrelevant detail here). AIs that use trained neural networks don't have the traceability from detailed requirement to implementation, so safety assurance is much more of a headache. Right now we don't have any standards for this kind of work. So when an AI kills someone by mistake we don't have a proper framework for judging liability. Were the risks ALARP or not? Ultimately it would be for a jury to decide by looking at the evidence.
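
As a rough, hypothetical illustration of what requirement-to-test traceability looks like when it exists: the requirement IDs, test names, and coverage mapping below are all invented for the sketch.

```python
# Hypothetical sketch: every safety requirement must be covered by a passing test.
requirements = {
    "REQ-001": "Vehicle shall stay within detected lane boundaries",
    "REQ-002": "Vehicle shall brake for obstacles within stopping distance",
}

test_results = {
    "test_lane_keeping_straight": {"covers": ["REQ-001"], "passed": True},
    "test_lane_keeping_curve":    {"covers": ["REQ-001"], "passed": True},
    "test_brake_for_pedestrian":  {"covers": ["REQ-002"], "passed": True},
}

covered = {req for t in test_results.values() if t["passed"] for req in t["covers"]}
uncovered = set(requirements) - covered
print("Requirements without passing test coverage:", sorted(uncovered) or "none")
```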

So to answer the question: no, it is not impossible to determine who is responsible when an AI causes an accident, but until we have enough experience to write the appropriate standards it's going to be a toss-up. A lay jury is not going to have the expertise and knowledge to judge whether an AI was safe enough, so a legal case is likely to descend into a battle of the experts.

Because of the uncertainty this creates there are some specific legal frameworks designed to manage the liability for such systems.

One of the issues with any safety problem is that rules like ALARP can make the best the enemy of the good. Suppose you have a situation with 10 accidents per year. You introduce a new safety system, and now there are only 3 accidents per year. However 2 of those accidents are directly caused by the new system. Is the system safe? AI autopilots for cars might well be safer than the human drivers. In this case it seems a little unreasonable to hand liability for these systems to the manufacturers instead of the driver and their insurers, where it currently lies.
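
Spelling out the arithmetic in that hypothetical (the counts are the ones from the paragraph above, nothing more):

```python
baseline_accidents = 10    # per year, before the new safety system
with_system = 3            # per year, after introducing it
caused_by_system = 2       # of those 3, directly attributable to the new system

net_reduction = baseline_accidents - with_system
print(f"Net change: {net_reduction} fewer accidents per year, "
      f"yet {caused_by_system} of the remaining {with_system} are new failure modes.")
```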

7

The involvement of AI is irrelevant to liability; what actually happened is.

Let's say a company uses an AI during their hiring process to eliminate potential employees who are not suitable. And it takes a year until someone figures out that the AI systematically rejects all candidates with black skin colour or names that seem to indicate black heritage. That's illegal discrimination, and the company is 100% liable. "Our AI got it wrong" is no excuse.
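
As a purely illustrative sketch of how such systematic rejection might be surfaced, here is a simple selection-rate comparison in the spirit of the US EEOC "four-fifths" guideline; the applicant counts, group labels, and use of the 0.8 threshold are assumptions for the example, and real legal analysis is considerably more involved.

```python
# Hypothetical screening outcomes per group: (candidates selected, total applicants)
outcomes = {
    "group_a": (120, 400),
    "group_b": (15, 300),
}

rates = {group: selected / total for group, (selected, total) in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, ratio to highest {ratio:.2f} -> {flag}")
```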

About intent: The intent in this case seems to be to use what the AI says as a basis for your decisions. So if the decisions of the AI are racist discrimination, then that's on you.

10
  • 3
    @Qwerty but in using a black box, one can foresee that there will be unforeseeable behaviours! Commented Nov 10, 2021 at 14:38
  • 4
    @Qwerty Algorithms, including AIs are rarely pure black boxes. Designers have significant insight into what inputs will cause what outputs in many cases. Also it is foreseeable not just that there will be errors, but particular kinds of errors. It is reasonable to consider what precautions have been taken against a particular kind of error, and that testing be done to try to reveal unexpected error situations. Testing cannot possibly cover all possible inputs. But it can cover many situations, including ones likely to stress the system. Commented Nov 10, 2021 at 19:18
  • 3
    "Our AI got it wrong" may be an excuse, depending on the details of the situation. If one could show that reasonable design decisions had led to this unwanted outcome, and that it was unwanted, there might not be liability such as there would be for intentional discrimination. It would be interesting to see such a case litigated in a real court. Commented Nov 10, 2021 at 19:21
  • 6
    Intent in a major component of discrimination liability. It's not the only component, but a company that has a deliberate policy of not hiring black people is going to be treated very differently from one that adopts an algorithm that discriminates against black people without the company so intending. Commented Nov 10, 2021 at 21:10
  • 4
    @Qwerty AI is not a black box. Most algorithms are very predictable. Even the largest operational web intelligence (Facebook) can explain almost all calculations done by the algorithms. But the problem is, they just don't want to fix it most of the time. When twitter trains its models to get the most interactivity out of its users, the models come to a conclusion that offending people is the best way to get user interaction. Why should twitter change its business model and stop making money just to avoid social unrest? Commented Nov 11, 2021 at 6:20
5

No Robot Law needed

I am going to stick my head out here and say that (whilst not wanting to minimise the dangers and difficulties of technology) I think the terms in which this concern is often voiced mainly reflect the fact that most legal commentators have little scientific understanding and are thinking in terms of the androids depicted in science fiction, which are portrayed as conscious moral beings.

The more algorithms are used to make decisions, the more complicated it may be to assign legal responsibility, but, as others have pointed out, the law does that on a case-by-case basis according to existing legal principles, such as the tort of negligence, considering the evidence which the parties bring to the court.

There is no immediate likelihood that the absence of so-called "robot law" will result in the human race being enslaved by HAL in league with R2D2 and 3PO.

6
  • 2
    -1 I don't see how this answers the question in any way. It seems to offer more background in agreement with OP's opinions, but it doesn't answer "[W]here [does] the current legal system/framework/law break down when it comes to any harm caused by Artificial Intelligence?"
    – TCooper
    Commented Nov 10, 2021 at 22:44
  • The OP has actually asked two questions in the form "isn't A the case? if not can you explain why not?" I have answered: Yes, A is the case.
    – Nemo
    Commented Nov 10, 2021 at 22:48
  • 1
    I fundamentally disagree with your interpretation of the question then; I cannot re-read, while actively trying, and be left with the question "Isn't A the case?" from OP's question. They state a common claim, and ask how/why the claim is valid, looking for an explanation of why people make this claim - at no point is the question asked "Is this claim accurate?"
    – TCooper
    Commented Nov 10, 2021 at 23:10
    John Doe invents a new job that allows him to earn a living but forces him to spend a lot of time driving from one place to another. With this job there is a high probability of being involved in car accidents that are not his fault, and by bad luck it happens. Since algorithms do not take that probability into account, they classify John Doe as someone very likely to cause car accidents, all insurance companies refuse to insure him, and he is forced to give up his job. Who is to blame?
    – FluidCode
    Commented Nov 11, 2021 at 12:08
  • 1
    @Nemo Isn't it obvious you're answering a rhetorical question?
    – TCooper
    Commented Nov 11, 2021 at 17:59
1

The legal system may "break down" by evaluating AI in inappropriate ways.

The whole idea of AI is to design computers that would react in ways not fully predictable or controllable by the programmers.

Of course we're still far from being at a point where an AI can be considered a moral agent independent of its creator (if this ever happens), and the creator can predict and control its behaviour to a large degree by training and evaluating it on certain example data.

So really the legal system shouldn't be asking:

  • Would a competent and moral human reasonably have performed the same wrongdoing? nor
  • Was this due to an error or omission in the AI design?

What one should be asking is:

  • Does the AI perform better than a human would in general? and
  • If there was an error or omission in the AI design that caused the wrongdoing, was that due to clear negligence or malice?

The former puts an unreasonable burden on the AI creator and severely stifles the progress of AI. The latter holds people to a standard of responsibly designing AI that is an improvement to the way we currently do things. We don't want a standard of "perfect or nothing", as that would pretty much prevent the use of AI altogether (just like we don't want to require that medicine has no side effects, as that would pretty much prevent the use of medicine altogether).
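
A minimal sketch of the "better than a human in general" comparison, with invented fleet mileages and incident counts; a real comparison would have to control for road types, weather, disengagements, and much else.

```python
# Hypothetical aggregate figures; not real statistics.
human_incidents, human_miles = 4_500, 1_000_000_000
ai_incidents, ai_miles = 30, 10_000_000

human_rate = human_incidents / human_miles
ai_rate = ai_incidents / ai_miles

print(f"Human baseline: {human_rate * 1e6:.1f} incidents per million miles")
print(f"AI system:      {ai_rate * 1e6:.1f} incidents per million miles")
print("AI better than the human baseline?", ai_rate < human_rate)
```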

I'm not entirely up to date on lawsuits involving AI, but I wouldn't say "our current legal system breaks down" there. At worst you'd need to get a higher court to rule that AI should be evaluated in the above way. Responsibly designed AI systems are already compared to whatever the existing process is during the design process (if possible), meaning this data would be available, and lawsuits about something like medicine already provide a somewhat similar precedent: one individual suffering a documented or unknown side effect would generally not lead to a successful lawsuit against a medicine manufacturer (negligence or malice would typically be required for that). There are, however, rules and laws around medicine that require a certain amount of transparency, trials, and approval by independent agencies, which is perhaps something that should happen with AI as well (at least if we're talking about life-or-death situations).

11
  • 2
    I do not think this answer represents the state of law anywhere. In particular court cases will look at the specifics of the case at hand, not how the AI performs in general, and it is frequently not required for there to be clear negligence or malice for the company to be liable for harm caused, particularly in medicine.
    – Dave
    Commented Nov 11, 2021 at 14:58
  • 1
    @Dave "court cases will look at the specifics of the case at hand, not how the AI performs in general" - this 100% supports the point I'm making though. The question asks "where the current legal system/framework/laws break down", and I pointed out the flawed way in which AI is or may be evaluated in the legal system, and how it should be evaluated instead. I very much doubt that one individual would be able to successfully sue a medicine manufacturer for suffering any documented or unknown side effect.
    – NotThatGuy
    Commented Nov 11, 2021 at 15:26
  • 1
    @NotThatGuy - if someone’s foot is cut by a lawnmower the issue will not be decided by how well it cuts grass on the whole or how safe it is compared to a manual mower. It seems that this is an analog to your view of how AI might be judged. Commented Nov 11, 2021 at 15:49
  • 1
    @GeorgeWhite And that's exactly how we shouldn't think about AI, because AI is more like medicine with unpredictable or unavoidable side effects than it is like a lawnmower with clear usage guidelines and clear manufacturing processes to avoid injury.
    – NotThatGuy
    Commented Nov 11, 2021 at 15:59
  • 1
    @NotThatGuy I think that you will find that following all those good practices will not keep a drug maker safe if there are too many bad outcomes, even if there is a large net social benefit. In liability suits the net social benefit is not considered much; the effects on those who suffer bad outcomes are. That is why there are special laws for vaccines. And I think you will also find that courts do not routinely compare an AI's actions with those of a human in similar circumstances. Commented Nov 12, 2021 at 5:52
1

It does

Your assumptions are fully wrong. Legal systems do handle the liability of the operator of the machine, making him fully liable for the damage done to the other parties by the machine.

The fact that the machine can work autonomously doesn't limit your liability; under some circumstances it can even increase it. For example, you're not only fully liable for the damage your dog has done to the other party, but you can also face criminal charges for not having enough control over its actions.

2
  • 4
    I agree that the law is currently adequate but beyond that I do not think this is correct. It is not just operators of machines. Manufacturers of machines also have liability. A car crash might be caused by a negligent driver or by a negligent design or manufacture of an axle, for example. Commented Nov 13, 2021 at 20:39
  • @Danubian Sailor I think you are mistaken that "the operator of the machine" is "fully liable for the damage done to the other parties by the machine." Several examples given in this thread seem to show otherwise. Can you cite any source for that rather strong statement? Commented Nov 25, 2021 at 4:07
0

I noticed a lot of confusion in the comments of one of the leading answers regarding what can be expected. My intended comment quickly ballooned into something that was more like an answer, even though it is not directly about the law.

In software development, we have the idea of a V&V cycle: verification and validation. In very rolled up terms, "verification" is proving that the software works according to some specification, and "validation" is proving that the specification solves a problem. The former is a very procedural process, while the latter is notoriously fluid.

If a company failed their own verification tests, it is easy to pin all of the fault on the company. There was a straightforward procedure, and they didn't follow it well enough. However, validation is trickier. In the situation of a self-driving car, 0% of all drivers are considering the spec for the car. That's not a "round down to 0"; that's "none of them." Heck, they may not even legally be able to get their hands on the spec. The actual legal concept of responsibility for an accident would come down to how well the driver understood what the car was validated against, and how well they could be expected to understand it.
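
A toy illustration of the split, with an invented spec clause and test: verification can be mechanised against the spec, while validation asks whether the spec itself was the right one.

```python
# Hypothetical spec clause: "the system shall command a stop when an obstacle
# is detected within 30 metres."
STOP_DISTANCE_M = 30

def should_stop(obstacle_distance_m: float) -> bool:
    return obstacle_distance_m <= STOP_DISTANCE_M

# Verification: does the implementation meet the written spec? Easy to check.
assert should_stop(25) is True
assert should_stop(40) is False

# Validation: is 30 metres actually the right number for real braking distances,
# speeds, and road surfaces? No unit test can answer that; it has to be argued
# from physics, field data, and engineering judgement.
```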

If I grab a fork from my silverware drawer, it doesn't come with a "WARNING: Do not stick repeatedly in eye" sticker. The society around these forks expects that I understand the consequences of such an action well enough that the fault is placed squarely on me.

A can of engine starting fluid (Diethyl ether... nasty stuff) comes with a warning not to incinerate the can. I think society generally understands that throwing a can of engine starter in the fire is a bad idea, but we find the legal need to put a warning on the can, just to say "we told you!"

My new car comes with an owner's manual full of warnings. I read them all, but it would be a major challenge to remember all of them simultaneously in the event of an accident. I am certain some set of lawyers identified which warnings they thought needed to be presented in a prominent manner and which could just go in the manual.

A taxi cab comes with a bunch of warnings written in an eyebleed font, but really most of this is handled with the company/driver assuming all liability for getting you there safely. Even then, there are unhandled cases. It's on a passenger to know not to suddenly distract a driver in a key moment by putting their hands over the driver's eyes.

The challenge with self driving cars is that the capability is a moving target. What understanding is expected of the user is changing. Our usual approach of putting the warnings in the right places is changing. And that makes any of these things tricky.

3
  • If an AI is known to be incapable of adequately dealing with a specific situation, the ideal solution would be to just fix the AI or explicitly send those situations for human review, not to just add that situation into a manual of warnings. Although you may add it as a warning if the AI can't reasonably be made to detect or handle those situations. The most common errors in a good AI would be in unexpected situations, which would be rather difficult to explicitly specify (as then they wouldn't be unexpected).
    – NotThatGuy
    Commented Nov 11, 2021 at 14:35
    @NotThatGuy Agreed, although the caveat would be that if it were the ideal solution, the AIs would never make any mistakes that need to be approached by legal proceedings. Unfortunately, that is not an easy ask of any modern software.
    – Cort Ammon
    Commented Nov 11, 2021 at 22:55
  • "It's on a passenger to know not to suddenly distract a driver in a key moment by putting their hands over the driver's eyes." Jeez, that happened to me once when I was about 18.
    – gnasher729
    Commented Nov 15, 2021 at 11:36
0

The legal system can handle it. The problem is the liability. In particular, it's quite likely that liability would shift to car makers. Currently if you cause an accident, Ford or Toyota have essentially no involvement (there's the rare case of manufacturing defects, but those are extremely uncommon and usually won't even be considered unless there's a sudden string of similar, inexplicable crashes).

If the AI is driving on the other hand - especially as we approach L5/L6 (ie: fully self-driving with no human intervention or even attentiveness required) - it will get harder and harder to place liability on the vehicle's owner. But of course car manufacturers don't want that (and really, neither do you - the cost of new cars will increase to offset the liability risk). The problem the legal system "can't handle" is nobody willing to take responsibility.

It gets more complicated though. If you were riding in a brand new AI-driven car and got in an accident, not a problem. But what happens in 10 years when your car is getting on and you haven't bothered properly maintaining it? How bald do your tires have to get before liability shifts to you? The "obvious" answer is that car makers should just have their vehicles monitor their condition and provide increasing warnings to the owners (and finally self-disable until repaired), but locking somebody out of a thing they paid tens of thousands of dollars for is also a bit of a legal quagmire, EULA or not.

Others have pointed out that automated systems have been in use (and have caused harm) for decades at this point, but self-driving cars are a bit different: they're intended to be marketed to average consumers. Industrial machinery in a factory is quite a different legal story and usually comes with contracts and other documentation that firmly establishes liability boundaries, maintenance regimes, etc. That's not something you can reasonably expect the average Jim or Karen to comprehend, even if a pushy car salesman gets them to sign a 14-page document stating that they do. It would be nice if everyone had the level of legal competence and patience to comprehend such things, but few of us do.

-1

"isn't it obvious that it's the company who developed the algorithm that is liable for any issues that this algorithm caused"

It appears that you are assuming that AI algorithms are actually programmed by a person working for the company. What most people call AI is based on "machine learning". The defining feature of machine learning is that it is trained rather than programmed by a person.

Most of the time companies don't develop the machine learning system from scratch. Instead they use a generic third-party library and then train it by exposing it to training data. If the resulting model passes the tests well enough for all the test data and real-world tests, then it gets deployed into a product; if not, they train it more.
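
A minimal sketch of that train-test-deploy loop, using scikit-learn purely as an example of a generic third-party library; the synthetic data, the accuracy metric, and the acceptance threshold are all invented for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for data gathered by the company deploying the system.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)

ACCEPTANCE_THRESHOLD = 0.95  # hypothetical release criterion
if accuracy >= ACCEPTANCE_THRESHOLD:
    print(f"Accuracy {accuracy:.3f}: good enough, deploy into the product")
else:
    print(f"Accuracy {accuracy:.3f}: gather more data and keep training")
```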

The developers of the third party generic AI library probably can't be found at fault if they had no involvement in what third parties would use it for.

As for the company deploying the AI system...
Practically speaking it's impossible to test a system for all possible combinations of inputs. Supposing I make 100 systems and run them each for 1 year in their intended environment with no failures, I could say that I had 100 years' worth of test data. But then the system gets deployed and there is a failure. Who would fault a manufacturer for a failure at that point? Should they have run 1,000 years, or a million years, of test data? At some point they have to stop and declare it safe.
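
To put a number on why exhaustive testing is out of reach, consider even a toy system whose inputs are just n independent on/off signals (the input counts are illustrative; real perception inputs are continuous and far worse):

```python
# Each additional binary input doubles the number of distinct cases.
for n in (20, 64, 256):
    print(f"{n} binary inputs -> {2**n:,} possible input combinations")
```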

Also, AI is getting better all the time. It seems likely that not too far in the future we may have generalized AI that can do almost anything a person can. At that point trying to predict what that system would do is going to be about as difficult as predicting what other humans would do.

At least in the United States, parents are not usually held liable for the actions of their adult offspring, and it would seem that a similar principle would apply to the relationship between an AI development company and a generalized AI product (assuming it was adequately tested before release).

11
  • Science Fiction Commented Nov 12, 2021 at 15:13
  • @GeorgeWhite partly... first three paragraphs are true. And it has been proven that the input data has a BIG influence on what the AI learns to react to/output/etc. A good example is the issues facial recognition tech has with people who have darker skin.
    – ivanivan
    Commented Nov 13, 2021 at 1:32
  • @ivanivan The fourth paragraph is true also. The number of test cases grows exponentially with the number of inputs. Imagine a system that made a decision based on 20 toggle switches. There are 1,048,576 possible test cases for 20 switches. I have worked as an engineer for over 12 years now designing embedded systems for various companies and in all but the simplest cases, testing all possible combinations of inputs has never been possible.
    – user4574
    Commented Nov 13, 2021 at 2:14
    @GeorgeWhite It's definitely not science fiction. My primary job is as an electrical engineer who designs embedded systems for large companies. You can go on the internet right now and download any number of free machine learning libraries, and use the exact training process I described. As for creating a human-level AI, the human brain is estimated to perform around 10^16 operations per second. Companies like Intel and AMD have already created exa-scale supercomputers performing 10^18 operations per second. If progress is made on refining the learning algorithms, it's a solved problem.
    – user4574
    Commented Nov 13, 2021 at 2:30
    It's the GAI I was reacting to. It has been 25 years in the future for 50 years. I'm a retired EE and I worked on LISP machines in the '80s and saw the AI winter of GOFAI. I know ML has way surpassed those approaches, but analog massively parallel brains are nothing like conventional computers. It's not a matter of refining; we do not really know how brains work. I understand this is a big debate with smart people on each side and I guess we are on different sides. Commented Nov 13, 2021 at 3:05
-2

I just published an article on this topic. The answer is not the law. The issue is devising a mandatory insurance scheme whenever AI is used. That would take care of accidental harm caused by AI the same way car insurance takes care of car accidents. Insurance is the issue, not the law.

1
  • 1
    And where exactly would this mandate come from? (It's certainly not going to be passed down on stone tablets)
    – user4657
    Commented Nov 25, 2021 at 4:33
