Harvey Balaam

What is the biggest ethical issue posed by artificial intelligence?

Introduction

“[Algorithms] reflect the inequalities of our society” (Condliffe, 2019). In a world of rapidly advancing technologies, Artificial Intelligence has become a key topic of ethical concern. Development this rapid often outpaces foresight about, or consideration of, the consequences of how we create new technology. It is no secret that humans carry bias towards everything we interact with or perceive. It follows that a technology built to think for itself, yet designed and constructed by humans, will inevitably acquire some form of that bias.

Due to the accelerating integration of artificial intelligence (hereafter, AI) into modern technology, the largest ethical issue we face regarding AI is bias. Without careful consideration during the development of AI, neural networks and machine learning, bias can quickly become a problem for everyone. Furthermore, unlike many other ethical questions regarding AI, bias is rapidly becoming a tangible issue, rather than a harmless thought experiment.

To present a compelling argument, some ideas and principles must first be covered and explained. Modern AI technologies generally utilise a process known as Deep Learning. Essentially, this is a modern term for a more historical format of AI known as neural networks. These networks are composed of millions of connected processing nodes, not dissimilar to the neurones in the human brain, and they work on a principle of weighting. Each node has a ‘weight’ (a numerical multiplier); these weights are randomly determined at the start of a training process, and each node multiplies its incoming numbers by its weights, sums the products and sends the result on. The weights are fine-tuned until the network produces the ideal output for its training data (Hardesty, 2017). When millions of nodes make up multi-layer networks, they quickly become very complex. A neural network can be described as ‘transparent’ if it is easily interpretable, each of its components can be inspected, and the way it arrives at a decision is traceable throughout the process.
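To make the weighting principle above concrete, the following is a minimal sketch of a single node being fine-tuned towards an ideal output. The inputs, ideal value and learning rate are illustrative numbers of my own choosing, not taken from any real system.

```python
import random

def node_output(inputs, weights):
    """A single node: multiply each incoming number by its weight and sum."""
    return sum(x * w for x, w in zip(inputs, weights))

# Weights start out random, as described above.
random.seed(42)
weights = [random.uniform(-1.0, 1.0) for _ in range(3)]

# One training example: three input values and the 'ideal' output,
# which must be defined by a person assembling the training data.
inputs, ideal = [0.5, 0.2, 0.9], 1.0

# Crude fine-tuning loop: nudge each weight to shrink the error.
learning_rate = 0.1
for _ in range(100):
    error = node_output(inputs, weights) - ideal
    weights = [w - learning_rate * error * x for w, x in zip(weights, inputs)]

print(round(node_output(inputs, weights), 3))  # converges towards 1.0
```

A real deep network layers millions of such nodes, which is precisely what makes its final weights so hard to interpret.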

 

The Ethical Issue of Bias in AI

Biased systems are by no means a new concept spawned by the influx of AI development and implementation. In fact, computer systems have expressed (or have had the potential to express) bias for a long time. One clear example occurred in the 1980s, regarding airline reservation systems. By design, two of the most dominant reservation systems (typically used by travel agents) favoured flights that used the same airline for all parts of the journey. By this logic, an itinerary of multiple segments that started with American Airlines would rank that airline higher for the later segments, even if all other criteria were equal (Friedman and Nissenbaum, 1996). Clearly, this is unethical in the sense that other airlines did not get a fair level of exposure to customers booking via travel agents. The issue was also easy to rectify: the unfair judgement was a product of the explicit construction of the system, and could thus be identified and edited fairly easily. However, this is where I think the problem of bias in AI in particular is exposed.
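The kind of rule involved can be illustrated with a short, hypothetical sketch; the scoring function and bonus below are invented for illustration and are not the actual logic of any real reservation system.

```python
# Hypothetical ranking rule of the kind described above: later segments
# flown on the same carrier as the first segment earn a bonus, so such
# itineraries rank higher even when everything else is equal.

def rank_score(itinerary, base_score=100, bonus=5):
    """Score an itinerary given as a list of carrier codes, one per segment."""
    first_carrier = itinerary[0]
    matches = sum(1 for segment in itinerary[1:] if segment == first_carrier)
    return base_score + bonus * matches

print(rank_score(["AA", "AA", "AA"]))  # 110: ranked above the mixed option
print(rank_score(["AA", "UA", "DL"]))  # 100: otherwise identical itinerary
```

Because the preference is written directly into the code, finding and removing it is straightforward, which is exactly what complex neural networks do not allow.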

Although the previous case presented fairly obvious bias that could easily be mitigated, AI systems based on complex neural networks can be far less clear cut. Consider the following scenario: a bank has implemented an AI system to accept or reject mortgages based on criteria set out by the bank. A customer reports that the system is racially prejudiced and declines some races more than others. Race was purposefully left out of the algorithm’s decision process, yet testing shows that it is indeed biased against certain races. If the system was built upon a complex network, there may be no way to reliably conclude why the AI made the decision it did (Bostrom and Yudkowsky, 2011).
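One plausible mechanism for this is a proxy variable: an input the model is allowed to see that correlates with race. The sketch below is entirely hypothetical, with made-up numbers, and uses postcode as the proxy.

```python
import random

random.seed(0)

# Hypothetical applicants: race is never shown to the model, but
# postcode correlates with race and so acts as a proxy for it.
applicants = []
for _ in range(10_000):
    race = random.choice(["A", "B"])
    weights = [0.8, 0.2] if race == "A" else [0.3, 0.7]
    postcode = random.choices([1, 2], weights=weights)[0]
    applicants.append({"race": race, "postcode": postcode})

def model_decision(applicant):
    """A 'race-blind' rule of the kind a network might learn from biased
    historical data, in which postcode 2 was declined more often."""
    return "approve" if applicant["postcode"] == 1 else "decline"

# Audit approval rates per race, even though race was never an input.
for race in ("A", "B"):
    group = [a for a in applicants if a["race"] == race]
    rate = sum(model_decision(a) == "approve" for a in group) / len(group)
    print(race, round(rate, 2))  # group B is declined far more often
```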

Herein, I believe, lies the key to why bias is the largest issue posed by AI. A system that has not been built with transparency in mind has the capacity to create a meaningful, negative social impact without leaving any way to trace back to the cause. Neural networks that are not transparent are also unpredictable: if the construction of a network means that the reasoning behind a decision it makes cannot be deduced, even after the fact, it cannot be considered predictable. An unpredictable algorithm with a decision process that leads to a palpable social consequence is of great ethical concern. Furthermore, a biased system in operation may not be apparent to any individual subjected to it. If each decision is treated in isolation (in this case, each mortgage application), then identifying a trend of bias becomes incredibly difficult. As such, the damage done by the algorithm can potentially go unnoticed. The problem only grows when considering that there exists no simple solution or mitigation – partly because some of the bias lies in the creators of the system (Friedman and Nissenbaum, 1996).
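Detecting such a trend therefore requires aggregating decisions across groups rather than inspecting them one by one. One common aggregate measure is the disparate impact ratio; the sketch below computes it over a toy set of decisions, and the 0.8 threshold mentioned is the widely cited ‘four-fifths’ rule of thumb rather than a universal legal standard.

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns the lowest group approval rate divided by the highest;
    values well below ~0.8 suggest a trend of bias that no single
    decision, viewed in isolation, would reveal."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(disparate_impact_ratio(decisions))  # 0.5, well below 0.8
```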

While it may not be possible to determine the exact mechanism or calculation that is causing the biased outcome of an AI system, it is possible to suggest where the roots of the problem lie. I think that, primarily, this is in the training of the neural networks. Mittelstadt et al. (2016) suggest that “An algorithm’s design and functionality reflects the values of its designer and intended uses”. Any output generated by the network has to be compared to an ideal outcome, and what counts as an ideal outcome must be defined by a person, who, like anyone else, possesses preferences and biases. I think this further speaks to the fact that solving the problem of biased AI is not a trivial task. With AI already playing a large part in our lives, and with increasing adoption by companies across the globe, the ethical issue of bias is already having an impact on humans.

One area in which AI and machine learning have already shown the impact of bias is predictive policing. Typically, an algorithm is given past documented crimes and the locations at which they took place as training data, and the system is then able to predict where police will need to patrol the most. The problem here is that crime is only documented where it is found, so the areas patrolled most often are recognised as crime hotspots, when this may not be the case. Of particular importance here is racial discrimination. If police discriminate against a community, and police it more stringently, the system infers that there is more crime in this area. A positive feedback loop is now created, whereby the more the predictive software points to a location, the more that location will be identified as a crime hotspot (Lum and Isaac, 2016). By design, the software is serving its purpose perfectly, yet bias still presents itself and, in this case, causes a discriminatory loop. Any system that can affect the data that it is fed can potentially cause this issue. Few other ethical concerns over AI currently present this much of an issue: AI systems are not yet at a point where, for example, their moral status or sentience needs to be considered, let alone one where such questions have any meaningful social impact.
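The feedback loop Lum and Isaac describe can be made concrete with a small simulation. The numbers below are entirely invented: two areas share the same true crime rate, but one starts out slightly over-patrolled, and the allocation rule keeps shifting patrols towards wherever more crime was recorded.

```python
true_rate = [10, 10]   # identical underlying crime in both areas
patrols = [6, 4]       # area 0 starts out slightly over-policed

for period in range(5):
    # Recorded crime scales with patrol presence, not with true crime:
    # crime is only documented where police are present to find it.
    recorded = [true_rate[i] * patrols[i] for i in range(2)]
    # The 'predictive' rule shifts a patrol towards whichever area
    # recorded more crime - reinforcing its own earlier predictions.
    hot = 0 if recorded[0] >= recorded[1] else 1
    if patrols[1 - hot] > 0:
        patrols[hot] += 1
        patrols[1 - hot] -= 1
    print(f"period {period}: recorded={recorded}, next patrols={patrols}")
```

Within a few periods the first area absorbs every patrol, even though the two areas were identical apart from the initial allocation.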

Admittedly, at this point one could argue that, by the same logic, if AI is not at a point where it has any moral status, sentience or understanding of human social values, then surely an AI system in itself cannot be biased. By extension, this would suggest that the ethical issue of bias is not one regarding AI, but rather the humans who build it. I suppose there is some validity to this point of view, but the bias of a person alone cannot itself create an ethical issue – rather, in my opinion, an issue only presents itself when a bias is acted on. In the same way, then, I argue that a neural network acting on a bias – even though it is not its own – is what presents an ethical issue concerning AI.

 

Devising a Solution

In order to begin proposing a solution to this problem, I will assume that bias is intrinsic to every human and cannot be removed from them – at least in relation to a complex application such as neural networks, which are built on the premise of outputting an ideal numerical outcome. With this in mind, I think that the most feasible solution would not be to attempt to prevent bias from getting into a system in the first place. Rather, I think the most realistic approach is to tackle the development of bias in a system, mitigating it before it can do harm. Personally, I believe that the best way to do this is to focus on regulating the transparency of neural networks that have been or are being created. The EU’s GDPR (General Data Protection Regulation), specifically Article 15(1)(h) (GDPR.eu, 2016), states that a person has “[the right to obtain] meaningful information about the logic involved” with regards to automated decision making. In principle, of course, this is ideal, but as shown by Bostrom and Yudkowsky (2011), it is not always possible to provide this information when dealing with complex neural networks.

 

How one should go about creating a transparent neural network is not particularly easy to say. Mascharka et al. (2018) propose an approach to the construction of neural networks in which the overall process of reaching a decision is decomposed into smaller parts. They created a modular neural network known as ‘TbD-net’ (TbD meaning Transparency by Design). This method allows the inspection of each mechanism the network uses to arrive at a final decision. As a result, any bias that may have been inherited by the network can be pinpointed. Additionally, if a particular module is found to be the sole or main cause of a biased output, it can be edited in isolation from the rest of the system. Overall, I think that this is the most applicable solution to biased AI. As previously discussed, the bias of those who create the system cannot be removed (though increased diversity in the field could help remedy this (Ciston, 2019)). Therefore, this proposed solution of modular designs for neural networks best solves the issue, and in a manner that can be achieved with technology currently available (as demonstrated by TbD-net).
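The modular idea can be sketched in schematic form. The following is not the actual TbD-net, which operates on images using attention modules; the module names, formulas and threshold below are invented purely to show how a decomposed decision leaves an inspectable trace.

```python
# Hypothetical 'transparency by design' pipeline: the decision is split
# into named modules whose intermediate outputs are recorded, so a
# biased step can be pinpointed and edited in isolation.

def affordability(application):
    return application["income"] / application["loan_amount"]

def credit_history(application):
    return application["repayments_met"] / application["repayments_due"]

MODULES = [("affordability", affordability), ("credit_history", credit_history)]

def decide(application, trace):
    score = 0.0
    for name, module in MODULES:
        value = module(application)
        trace.append((name, round(value, 3)))  # every step is recorded
        score += value
    return score > 1.5  # invented approval threshold

trace = []
application = {"income": 30_000, "loan_amount": 100_000,
               "repayments_met": 58, "repayments_due": 60}
print(decide(application, trace))  # False
print(trace)  # shows exactly which module drove the rejection
```

If an audit like the one shown earlier flagged a biased outcome, the trace would show which module to inspect or replace, without retraining the whole system.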

 

Conclusion

“Algorithms inevitably make biased decisions” (Mittelstadt et al., 2016). The very nature of modern AI, and of the neural networks it is built upon, cannot escape the bias of its creators, regardless of how perfectly the systems are constructed. A system with no moral bearing cannot stop itself from acting on the bias it is provided with, which is why it is essential that systems are built to allow detailed inspection as well as predictability. Technologies already in use have had a significant social impact – particularly in relation to discrimination. New implementations of AI should not be judged solely on their efficiency or accuracy, but also on their transparency. While I fully believe that bias is the biggest ethical issue posed by AI, I also believe it is a problem that can be solved. Right now, bias exists in AI; it needs to be dealt with sooner, not later.

 

Bibliography

Bostrom, N. and Yudkowsky, E., 2011. The Ethics of Artificial Intelligence. Cambridge Handbook of Artificial Intelligence.

Ciston, S., 2019. Intersectional Artificial Intelligence Is Essential: Polyvocal, Multimodal, Experimental Methods to Save AI. Journal of Science and Technology of the Arts, [online] 11(2), pp.1-6. Available at: <https://doi.org/10.7559/citarj.v11i2> [Accessed 7 April 2020].

Condliffe, J., 2019. The Week in Tech: Algorithmic Bias Is Bad. Uncovering It Is Good. [online] Nytimes.com. Available at: <https://www.nytimes.com/2019/11/15/technology/algorithmic-ai-bias.html> [Accessed 1 April 2020].

Friedman, B. and Nissenbaum, H., 1996. Bias in computer systems. ACM Transactions on Information Systems (TOIS), [online] 14(3), pp.330-331. Available at: <https://dl.acm.org/doi/pdf/10.1145/230538.230561> [Accessed 2 April 2020].

GDPR.eu, 2016. Art. 15 GDPR – Right of Access by the Data Subject. [online] Available at: <https://gdpr.eu/article-15-right-of-access/> [Accessed 6 April 2020].

Hardesty, L., 2017. Explained: Neural Networks. [online] MIT News. Available at: <http://news.mit.edu/2017/explained-neural-networks-deep-learning-0414> [Accessed 1 April 2020].

Knight, W., 2019. AI Is Biased. Here's How Scientists Are Trying To Fix It. [online] Wired. Available at: <https://www.wired.com/story/ai-biased-how-scientists-trying-fix/> [Accessed 1 April 2020].

Lum, K. and Isaac, W., 2016. To predict and serve?. Significance, [online] 13(5), pp.14-19. Available at: <https://rss.onlinelibrary.wiley.com/doi/epdf/10.1111/j.1740-9713.2016.00960.x> [Accessed 6 April 2020].

Mascharka, D., Tran, P., Soklaski, R. and Majumdar, A., 2018. Transparency by Design: Closing the Gap Between Performance and Interpretability in Visual Reasoning. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp.4942-4950.

Mittelstadt, B., Allo, P., Taddeo, M., Wachter, S. and Floridi, L., 2016. The ethics of algorithms: Mapping the debate. Big Data & Society, [online] 3(2), pp.1-15. Available at: <https://journals.sagepub.com/doi/full/10.1177/2053951716679679> [Accessed 2 April 2020].

Poole, D.L. and Mackworth, A.K., 2017. Artificial Intelligence and Agents. In: Artificial Intelligence: Foundations of Computational Agents. 2nd edn. Cambridge: Cambridge University Press, pp.3-48. doi: 10.1017/9781108164085.002.

Press, G., 2018. AI In 2019 According To Recent Surveys And Analysts' Predictions. [online] Forbes. Available at: <https://www.forbes.com/sites/gilpress/2018/12/15/ai-in-2019-according-to-recent-surveys-and-analysts-predictions/#5279f4c314c3> [Accessed 1 April 2020].

Rodriguez, J., 2018. Transparent Reasoning: How MIT Builds Neural Networks That Can Explain Themselves. [online] Medium. Available at: <https://towardsdatascience.com/transparent-reasoning-how-mit-builds-neural-networks-that-can-explain-themselves-3aea291cd9cc> [Accessed 6 April 2020].

Wachter, S., Mittelstadt, B. and Floridi, L., 2017. Transparent, explainable, and accountable AI for robotics. Science Robotics, 2(6).
