Freya Gompertz

Lady Margaret School, West London

 

A Levels in Maths, Further Maths, Physics and Biology

 

Degree aspiration: Mathematics
 

Who should be liable when autonomous systems cause harm? Why should we allocate responsibility in that way?

We judge who is liable for harm based on the causal actions of the people involved. This is a key principle of civil (tort) law. If the consequence would not have occurred ‘but for’ a person’s action,1 that action is an instrumental ‘but for’ cause. For the person to bear full liability for the incident, their action must also be the intervening act that breaks the chain of causation leading to it. When only people are involved, we can usually construct a detailed analysis of each step leading up to the consequence and, from this, confidently identify its principal causes. For instance, if a doctor prescribes drugs to a patient whom the doctor knows will suffer serious side effects from taking them, the prescription is the intervening act that gives the doctor full liability for the harm caused.

 

The problem arises when a complex machine replaces the doctor, because the machine’s actions may be inexplicable; yet since the machine has caused harm, someone must be held liable for its decision. We assume that there is no manufacturing fault in the hardware and no obvious fault in the software: without this incident, an engineer would not consider the system defective. We make this assumption because, with artificial intelligence (AI), the operators or manufacturers may be unable to predict or explain the actions a machine takes, particularly where machine learning is involved. This gives rise to black box machine intelligence, where the reasoning behind the machine’s decision is inadvertently opaque and irretrievable.2 This opacity stems from the complexity of the system, which is likely to be based on artificial neural networks. A more easily interpreted alternative is the decision tree, but decision trees struggle with complex classifications and so are rarely used.3 If a machine makes a decision that causes harm, the opacity of neural networks means we may not be able to identify the individual causes of that decision. We may be uncertain whether the cause was beyond the control of the operator or manufacturer, and it is therefore uncertain whom to hold liable for the harm caused.
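To illustrate this contrast in interpretability, here is a minimal sketch in Python (assuming the scikit-learn library and its bundled toy iris dataset, neither of which features in the essay’s sources): the decision tree’s learned rules can be printed and read directly, whereas the neural network exposes only matrices of learned weights with no comparable human-readable rationale.

```python
# Illustrative sketch only (assumes scikit-learn and its bundled iris dataset).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)

# The decision tree's reasoning can be exported as explicit if/then rules.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree))

# The neural network exposes only layers of learned weights; there is no
# equivalent human-readable account of why it classifies a given input as it does.
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000).fit(X, y)
print([w.shape for w in net.coefs_])  # raw weight matrices, not reasons
```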

 

AI Personhood

 

We could instead treat the machine itself as a person.2 This is called AI personhood. The machine, and only the machine, is given liability for the harm caused, and its actions may be treated as negligence on the machine’s part. We already observe this in the personhood of corporations and similar entities; Michael Dorf, a professor of law at Cornell University, states that “personhood is a legal status for which sentience is neither necessary nor a sufficient condition,”11 so there seems to be no issue with allocating personhood to our AI. The AI can then be treated as its own entity in the courtroom; but it cannot pay compensation to the plaintiff itself. Either the company employing the AI or the AI’s manufacturers must pay8 for the AI’s negligence. This appears to undermine the whole concept of personhood and render it void.

 

Holding the Employer Responsible

 

We could alternatively use vicarious liability, placing all liability on the corporation that buys the AI from the manufacturers. In this way we can say that the corporation has “employed” the AI. The clear advantage of this is the “deeper pockets” the employer tends to have, from which an injured person can most properly seek compensation.9 In this context, we must consider the deterrent effect differently: any deterrent would be mostly financial. Vicarious liability would make the employer more selective about the technologies it uses, weighing which it is financially prepared to defend legally. Perhaps it would be too selective; it is not desirable to stunt the growth of AI development by discouraging its employment. Furthermore, the employer is not really culpable for the AI’s opaque decision, yet still has to pay all the costs. This could seem unreasonable.

 

Holding the Manufacturers Responsible

 

The Consumer Protection Act 1987 implemented strict liability for defective products in the UK. If the plaintiff can prove that the “safety of the product is not such as persons generally are entitled to expect”,4 then the manufacturer will have to face liability for the machine. The manufacturer may have liability insurance to cover this. However, the Act also protects manufacturers when:

The state of scientific and technical knowledge at the relevant time was not such that a producer of products of the same description as the product in question might be expected to have discovered the defect if it had existed in his products while they were under his control.5

This is reasonable: in many situations involving deep learning, manufacturers may not have noticed the error because the machine was so complex, not because they failed in their duty of care. As the system draws more on information from its own experience, its behaviour becomes less and less directly attributable to human programming.6 Furthermore, an AI which performs better than its predecessor is unlikely to be called defective.14 Strict liability will therefore usually relieve the manufacturer of liability unless it is genuinely at fault. This appears fair for the manufacturer, but the plaintiff should still receive financial compensation for any injuries.

Another angle on this issue is the concept of “reactive fault.” Fisse and Braithwaite define this as an “unreasonable corporate failure to devise and undertake satisfactory preventive or corrective measures in response to the commission of the actus reus of an offence.”7 The actus reus is the “guilty act” of the corporation. When applied to AI, corrective measures to avoid reactive fault could include implementing supervised learning, where the data used to train the AI is pre-labelled by the designers. This makes the machine’s decisions more predictable8 and is comparatively feasible in practice. However, the corrective measures available are limited, and this does not resolve the key issue of which party should pay the compensation required.
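By way of illustration, the following is a minimal sketch of supervised learning in Python (assuming scikit-learn; the feature vectors and labels below are entirely hypothetical): because the designers choose every training label, the model’s behaviour can be traced back to decisions a human has already vetted.

```python
# Illustrative sketch only (assumes scikit-learn; data and labels are hypothetical).
from sklearn.linear_model import LogisticRegression

# Pre-labelled training data: each feature vector is paired with a label
# chosen by the designers (0 = "acceptable outcome", 1 = "harmful outcome").
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

# The model learns only the mapping that the labelled examples define,
# which makes its behaviour easier to anticipate and to audit after the fact.
model = LogisticRegression().fit(X_train, y_train)
print(model.predict([[0.15, 0.25]]))  # follows the pattern set by the labels
```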

 

Common Enterprise Liability

 

If neither the manufacturer nor the employer is at fault, we could use the model of common enterprise liability, under which all parties involved in the AI jointly indemnify the plaintiff when it is impossible to determine fault.10 If liability must be placed on one of these parties, this appears to be a fair solution; ideally, however, none of the parties would need to pay compensation at all.

 

No Fault?

 

A solution to the problem of compensation can be found in the model of no-fault compensation. This model was recently used in UK Finance’s initial response to authorised push payment fraud, in which a customer is tricked into paying into an account that they believe is legitimate but is in fact controlled by a criminal.12 If the customer and bank took reasonable care yet were scammed nonetheless,13 the customer would be compensated because the loss was essentially not their fault. The funding for this compensation comes from contributions made by the banks.

This could be applied to our AI. If the manufacturers of all AI products paid into a mandatory compensation fund, this fund could be used to insure each AI.8 This removes the issues of foreseeability, personhood and financial liability.

Nonetheless, if this model were adopted, we should still attempt to improve the machine. We could build on the concept of “reactive fault” and require manufacturers to make the AI more predictable wherever possible. This would presumably make the AI safer, and if the manufacturer had to pay for these measures itself rather than through the insurance, it would also provide a suitable deterrent against a failed duty of care.

 

 

It appears that the most appropriate attribution of liability is to make no one exclusively liable at all. By applying no-fault compensation and requiring manufacturers to make alterations for a safer machine, we burden no single party with excessive costs of compensation and, in doing so, encourage the further use and development of AI. In any case, it is startlingly clear that current tort law for injury is insufficient when applied to cases of harm caused by black-box AI. Introducing no-fault compensation would itself require an overhaul of the tort law system. If AI is to develop rapidly, the legislation governing it must develop just as rapidly.

 

References

 

1 R v. Hughes (2013), 12.

2 Sullivan, H. and Schweikart, S. (2019). Are Current Tort Liability Doctrines Adequate for Addressing Injury Caused by AI? AMA Journal of Ethics.

3 Cruz, J. and Wishart, D. (2007). Applications of Machine Learning in Cancer Prediction and Prognosis. Cancer Informatics, 2, 64.

4 Consumer Protection Act 1987, s 3(1).

5 Consumer Protection Act 1987, s 4(1)(e).

6 Buyers, J. (2015). Liability Issues in Autonomous and Semi-Autonomous Systems. Osborne Clarke LLP.

7 Fisse, B. and Braithwaite, J. (1993). Corporations, Crime and Accountability. Cambridge University Press, 48.

8 Rachum-Twaig, O. (forthcoming, 2020). Whose Robot is it Anyway? University of Illinois Law Review, 8, 27-29.

9 Chung, J. and Zink, A. (2017). Hey Watson, Can I Sue You for Malpractice? Examining the Liability of Artificial Intelligence in Medicine. Asia-Pacific Journal of Health Law, Policy and Ethics.

10 Vladeck, D. (2014). Machines Without Principles: Liability Rules and Artificial Intelligence. Washington Law Review.

11 Faife, C. (2016). When Does Artificial Intelligence Become a Person. Medium.

12 APP Scams Voluntary Code: interim funding for scam victim compensation to continue to 31 December 2020. (2020). UK Finance.

13 Payment scam victims more likely to be reimbursed. (2019). BBC News.

14 Reed, C. (2018). How should we regulate artificial intelligence? The Royal Society.
