Building an ethical approach to AI deployment

Following the “AI winter” of the early 1990s, our digital world has seen an unprecedented boom in the development and deployment of artificial intelligence (AI) across a range of decision-making processes, including advertising, security, employment, and credit scoring. In the past two years alone, the Covid-19 pandemic has been instrumental in accelerating the adoption of AI, notably across the healthcare and public sectors, with numerous apps deployed for virus tracking and tracing, and programmes aiding vaccine distribution and our understanding of the virus (1)(2). Other forms of AI are used to help detect different types of cancer, dementia, and other diseases, while one key, everyday use is that AI picks up our online activity and influences, to a large degree, what we buy, the news we receive, and the content we see (3)(4). 

Despite its benefits, the increased use of AI has also raised concerns. Mass surveillance, large-scale data gathering to power AI systems, infringements of privacy rights, emotion recognition and manipulation, and autonomous weapons are just some of the less ethical ways in which AI has been, and continues to be, developed and deployed - and surveys show that “techlash” (a backlash against technology) is becoming increasingly common worldwide.

There has also been considerable discussion of ethical concerns over how AI is deployed and the accuracy of its decision making, particularly for AI projects that can, or will, influence social behaviours. Fears have been expressed globally about AI perpetuating existing biases in society, with a study by the Oxford Commission on AI & Good Governance revealing that 43% of people in Europe fear that AI will be harmful over the next 20 years (5). A KPMG study surveying public opinion and attitudes towards AI across five countries also found that, although most people have little real understanding of AI, there is a desire to know more about it and an overall acceptance of its use depending on the purpose (6). The study also revealed that people want greater regulation of AI and improved safeguards against its misuse - whilst most people accept or tolerate AI, they have little trust in it. 

The accelerated adoption of AI into public life has opened up a wider debate around the ethics of its usage:

  • What are ethics in the context of AI? 

  • Whose ethics are we talking about? 

  • How should ethical principles be incorporated into the design and applied?

  • Who is the arbiter of what is ethical?

  • Should ethics be voluntarily adopted or officially enforced through legislation?


What are ethics in the context of AI?

Ethics can be simply defined as values and morality - i.e. what is good or bad, right or wrong, and how this impacts quality of life. Ethics can be applied to technology and AI, as well as to society more generally. 

In the context of AI, the prioritisation of what is good or bad can differ depending on the intended outcome of its application, and its potential impact on human life and/or society. 

Many of the ethical considerations for AI are centred around its connection with personal or sensitive data and automated decision making. As AI systems are powered by vast pools of data, there are ethical considerations around the types of data fed into these systems: how they are acquired, how representative and accurate they are, and how they are processed. 

A key ethical consideration in deploying AI is the potential misuse of this technology.

For example, AI-generated deep-fakes of political leaders and other public figures have been used to mislead, manipulate, and misinform the public (7). Such misuse of AI goes beyond moral questions to raise national security concerns. This fear becomes even more pertinent when considering how some governments and regimes can misuse and abuse AI for the purposes of excessive surveillance, repression, and curbing dissent. 

With this in mind, ethics relating to AI must also be considered from both a company and a technology point of view. 

The critical components in an AI solution supply-chain are:

  • the people who build it

  • the company behind it

  • the algorithms and data training sets used

  • the end applications and products

  • who it is sold to, and 

  • how the product is deployed.

Along that chain, different ethical considerations come into play - from both technical and business perspectives - and shape decisions that can dramatically affect the impact of the AI on society. Companies seeking an ethical approach to AI may also question who they sell their products to and how those products are deployed, in a bid to safeguard the ethics of their product and their company even after the product is out of their hands. 

In 2019, the research company OpenAI delayed releasing their AI text generator model GPT-2 for fear of it being used maliciously and further contributing to online trolling, disinformation, and radicalisation (8)(9). In June 2020, researchers from OpenAI also voiced concerns relating to the improved GPT-3, which had been trained on over 200 billion words and surpassed its predecessor in capability. Researchers found it hard to distinguish news stories generated by the model from those written by humans, and highlighted its ability to generate hate speech and radicalising text supporting neo-Nazi and white supremacist discourse - output which, in the hands of extremist groups, could automate the spread of such material (10). 


Whose ethics are we talking about? AI and bias

AI is largely developed using training data and algorithms. Since training data is often collected from publicly available sources (such as images of celebrities), or created by companies themselves through methods like annotation, it reflects the imbalances and assumptions of those sources and annotators - making AI easily susceptible to bias.

One of the main fears about AI is that it amplifies existing biases and systems of oppression in society, particularly when used for purposes relating to the criminal justice system, education, and policing.

Computer scientists Joy Buolamwini and Timnit Gebru have discussed how facial recognition AI systems (trained predominantly on white and male faces) often misidentify women and people of colour (11). A 2019 National Institute of Standards and Technology (NIST) study on facial recognition also showed that women are often misidentified, and that black and Asian faces were misidentified 10 to 100 times more frequently than white faces (12). 

Another ethical consideration is the way in which automated decision making can be impacted by bias. 

In 2018, Amazon scrapped an internal project which used AI to vet job applicants, after it was discovered that the system discriminated against female candidates (13). The AI system was fed historical data on past hiring decisions, and as the technology sector is male-dominated, it reinforced the inherent bias - penalising candidates from all-women universities and downgrading CVs containing the word “women’s”. 
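The mechanism behind this kind of failure is straightforward to illustrate. The sketch below is a purely hypothetical toy example (invented data and a simple logistic regression, nothing to do with Amazon’s actual system) showing how a model trained on historically biased hiring labels can learn to penalise an attribute such as attendance at a women’s university:

```python
# Toy illustration: a classifier trained on biased historical hiring data
# learns to penalise the feature that encodes the bias.
# All data here is invented for demonstration purposes only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Two features per candidate: years of experience, and whether they
# attended a women's university (1 = yes).
experience = rng.uniform(0, 10, n)
womens_university = rng.integers(0, 2, n)

# Historical hiring labels encode past bias: equally experienced
# candidates from women's universities were hired far less often.
hire_probability = np.clip(0.1 + 0.08 * experience - 0.4 * womens_university, 0, 1)
hired = rng.uniform(0, 1, n) < hire_probability

X = np.column_stack([experience, womens_university])
model = LogisticRegression().fit(X, hired)

# The learned coefficient for "womens_university" is strongly negative:
# the model has simply reproduced the historical pattern.
print(dict(zip(["experience", "womens_university"], model.coef_[0])))
```

The model has not been told to discriminate; it has simply learned the pattern present in its training labels.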

Algorithms are only as good as the data that they are trained on. So if a dataset includes the historical biases of an organisation, then the predictions it makes will reflect that historical behaviour... for example, if a company spent decades promoting white males with Ivy League degrees into positions of authority, then an algorithm trained to identify future leadership talent might focus on that same type of individual, and ignore people who don’t belong to that group.
— Chris Nicholson, CEO of Skymind (14)

A 2016 investigation by ProPublica also showed how an AI tool used in the American criminal justice system discriminated against black people in its predictions of which offenders were likely to reoffend (15). 

In these particular cases, the machines were making mathematical decisions based on the data fed into them, and replicating the world that data described. AI can and does reproduce hierarchies and systems of oppression that already exist in society: racism, sexism, ableism, and so on. AI expert Kate Crawford has suggested that considering power is key to preventing AI from mirroring existing societal problems (16). In practice, this means developers must take existing social power dynamics into account (be it along the lines of gender, race, sexuality, etc.) as they develop and deploy AI, and seek more diverse input - for example, from the marginalised communities affected by those power dynamics - to reduce the risk of AI exacerbating existing inequities. Whilst no system can be completely free of bias, education around this issue can only help to minimise it. 

The issue of bias in AI brings the discussion back to the importance of data as a key component of ethical AI. 

As AI learns to replicate its training data, programmers are increasingly wary of the ethical implications of the data being used, along with any bias inherent within it - and are therefore making efforts to combat that bias where possible. To work well, AI needs high-quality and diverse training data sets, and needs to be retrained regularly with new data to maintain the accuracy and efficiency of the model (17). 

However, trying to diversify and expand data sets also raises other ethical considerations - namely data privacy and where that data comes from. 

Data protection laws such as the GDPR impose limits on the use of personal and sensitive data, including images, names, etc. Acquiring large and diverse data sets therefore needs to be done through strictly compliant methods, with privacy rights in mind. 

Some tech companies are researching how to build more ethical data sets by using synthetic data to augment existing ones. For example, Gretel AI helps companies create anonymised synthetic data sets, based on their actual data, to use in analytics and to improve machine learning models (18). By developing synthetic data, they hope to minimise the risk of data breaches and preserve privacy rights whilst still maintaining effective models. The collection of data for data sets can also introduce bias (unless developers actively try to avoid this whilst gathering and annotating data); one way in which companies are addressing this is by using machine learning to detect over-representation in data sets (19). 
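The cited work applies machine learning to detect such errors automatically; the underlying idea can also be illustrated with a much simpler check. The sketch below (using hypothetical group labels, counts, and threshold) compares each group’s share of a training set against a reference population and flags any group whose representation deviates noticeably:

```python
# Minimal sketch: flag groups that are over- or under-represented in a
# training set relative to a reference population. The counts and the
# 10-percentage-point threshold are hypothetical.
from collections import Counter

def representation_report(labels, reference_shares, threshold=0.10):
    """Return groups whose share of `labels` deviates from
    `reference_shares` by more than `threshold`."""
    counts = Counter(labels)
    total = sum(counts.values())
    flagged = {}
    for group, expected in reference_shares.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > threshold:
            flagged[group] = {"expected": expected, "actual": round(actual, 3)}
    return flagged

# Example: a face dataset heavily skewed towards one demographic group.
dataset_labels = ["group_a"] * 700 + ["group_b"] * 200 + ["group_c"] * 100
reference = {"group_a": 0.4, "group_b": 0.35, "group_c": 0.25}
print(representation_report(dataset_labels, reference))
```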


How should ethical principles be incorporated into the design and applied?

One approach could be to refuse to deploy AI systems where their use (or misuse) could interfere with people’s fundamental human rights. 

This could mean a facial recognition company refusing to sell their product to a government with a poor record of respecting free speech, as these tools could be used to locate and target civilians during protests.

There are also concerns that companies may only pay lip service to ethical accountability in AI.

For example, Timnit Gebru made news when she was ousted by Google following a research paper she co-authored exploring the risks of large language models (20)(21). Her paper raised a number of ethical issues, including the potential for bias within language models and the high levels of carbon emissions generated by the computing required to train them.

This incident offers a lesson to companies seeking an ethical approach to AI: allow space for those designing the technology to voice concerns, and encourage academics to continue researching the impact of their products so that necessary improvements can be made. 

Companies can also garner trust by improving the explainability of their AI - for example, tackling the “black box” problem. 

“Black box” systems are typically neural networks - complex mathematical models sitting between the inputs and outputs of the system, whereby data is fed in, functions are performed, and a result is produced. Although the decision the machine reaches is determined by the data it is fed, it is difficult or impossible to see how or why it reached that decision. The opaqueness of black boxes has spurred unease and confusion around AI, especially regarding automated decision making, as AI outcomes can severely impact individuals and groups. 
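One common way to probe a black box is from the outside: shuffle each input feature in turn and measure how much the model’s accuracy drops, which reveals which inputs the model actually relies on. The sketch below illustrates this with scikit-learn’s permutation importance on a small, invented dataset; it is an example of the general technique, not a description of any particular company’s system:

```python
# Minimal sketch: estimate which inputs a black-box model relies on by
# permuting each feature and measuring the drop in accuracy.
# The model and data are invented purely for illustration.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))          # three input features
y = X[:, 0] + 0.5 * X[:, 1] > 0        # outcome depends only on features 0 and 1

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and record how much accuracy falls:
# a large drop means the model depends heavily on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["feature_0", "feature_1", "feature_2"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Explanations like this do not open the black box itself, but they give affected individuals and regulators some visibility into what drives an automated decision.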

Whilst ethics may not yet be a codified legal requirement, companies can enhance their reputation and build trust by demonstrating a clear commitment to protecting people’s data privacy, ensuring transparency in data processing, deploying AI responsibly, and constantly evaluating performance and bias throughout development and deployment. 


Who is the arbiter of what is ethical?

As there are currently no specific (or mandatory) frameworks, companies have the scope to decide what they believe is ethical. 

For some companies, ethical AI comes down to mission statements or principles on how they carry out their operations. An analysis of some of the ethical principles adopted by different online platforms found common buzzwords including “transparency”, “justice”, “responsibility”, and “privacy” (22). But how they actually define these terms remains unclear, making it difficult to win public trust. Direct and declarative statements of intent like “we will not give your data to third parties without permission” - as opposed to vague terms and concepts - better demonstrate a company’s commitment to delivering AI in the most ethical way. 

While some companies align their commercial decisions with their stated ethics, others simply adopt a broad set of principles - making ethical AI deployment tricky, as there can be a clear tension between a commitment to ethical principles and the drive to maximise profit (23). 

Companies using AI also need to consider the social and political consequences. 

For example, IBM recently announced that they would stop developing and selling facial recognition technology, citing fears of racial and gendered bias (24). This decision was set against the wider context of the resurgence of the Black Lives Matter movement last year, and subsequent concerns around the use of facial recognition software by police to crack down on activists (25). 

A commitment to ethical AI requires AI companies to more rigorously interrogate their activities and hold themselves accountable. 

This demands that companies question whether their activities are contributing to the improvement of the human experience, and doing so in a way that respects privacy and human rights. However, whilst some companies are successfully taking the initiative to implement their own ethical frameworks, there are concerns that self-governance and self-regulation without legislation are not enough, and that specific laws are needed for ethical AI (26). 

Weaponised in support of deregulation, self-regulation or hands off governance, “ethics” is increasingly identified with technology companies’ self-regulatory efforts and with shallow appearances of ethical behaviour.
— Elettra Bietti, Affiliate at the Berkman Klein Center (27)

Arguably, these ethical frameworks are toothless if there is no centralised legislation and sanctions to buttress them and hold companies accountable. 

When companies or research institutes formulate their own ethical guidelines, regularly incorporate ethical considerations into their public relations work, or adopt ethically motivated “self-commitments”, efforts to create a truly binding legal framework are continuously discouraged.
— Thilo Hagendorff, AI Ethics expert (28)

Legislators across jurisdictions, in collaboration with academics, are perhaps best placed to decide what this legal framework should be and to set out the steps for implementing it. Objective input from those without a financial stake in AI could ensure a fairer and more measured approach to regulation - one that would help guide the process towards ethical AI and ensure compliance by making these considerations obligatory rather than optional. 


Should ethics be voluntarily adopted or officially enforced through legislation? 

There is a movement to adopt legal frameworks to regulate and monitor how ethical AI is created, developed, and deployed. 

In April 2021, the EU released draft proposals for the Artificial Intelligence Act, which would be the first major legal framework for AI both in Europe and globally, and would introduce sweeping changes to the way different types of AI are designed, deployed, and regulated. The proposed legislation would impose compliance burdens and take a risk-based approach to AI regulation, with a more stringent focus on “high risk” systems “likely to pose high risks to fundamental rights and safety”. These high-risk systems could affect access to areas like education, employment, and the administration of justice (29). The regulation also proposes to ban the use of AI for the purposes of manipulation, social control, indiscriminate surveillance, and illegitimate social scoring (30).  

From a practical perspective, regulation can be problematic. 

With so many different types of AI, legislation would have to keep abreast of the technology and its rapid pace of development. There also needs to be a balance between holding developers to meaningful obligations and not hampering innovation with onerous burdens.

Adopting a strong ethical framework enforced at a centralised level, however, could be key to creating trust and alleviating people’s fears about AI’s misuse, as well as creating a means of redress and accountability. 

Whilst it is fair to question why legislators should be the arbiters of what is ethical, governments already provide legal frameworks in a range of other areas, so it makes sense that AI also comes under that remit. The EU in particular has demonstrated a willingness to take the lead in wide-scale legislation surrounding data protection and regulation of businesses, and so this regulatory move should be welcomed; it may also encourage other governments to develop their own national frameworks. 


Rather than seeing ethics as ‘black and white’, it may be more useful to think of ethics as a spectrum along which AI can create more or less ethical outcomes depending on its use (31). 

For some, AI is neither good nor bad, and its impact is purely the outcome of how it is used. Others argue that machines are amoral, with no concept of right or wrong, and that it is the code and training data fed into them which determine whether the impact is positive or negative. 

Companies and tech professionals creating and deploying AI need a fundamental understanding of ethics and a clear framework from which to operate. 

Creating provisions for testing bias within AI systems, diversifying training data, and constantly evaluating (and correcting where necessary) the models in use are all part of this approach; a simple example of such a bias test is sketched below. Whilst it is almost impossible to completely eradicate bias from AI, companies can continue to remove as much unwanted bias as possible, with an understanding that AI needs to be designed around the technical and ethical weaknesses that will inevitably be ingrained in it. 

Companies can approach this by being honest that they will not always get it right, while demonstrating active efforts to minimise unwanted bias where possible. 
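As one concrete example of what such a bias test might look like, the sketch below computes a simple fairness measure - the gap in favourable-decision rates between two groups, often called the demographic parity difference - on hypothetical model outputs:

```python
# Minimal sketch of one bias test: compare the rate of favourable model
# decisions across two groups. The predictions and group labels are
# hypothetical; a real evaluation would use many such metrics.

def selection_rate(predictions, groups, group):
    """Share of favourable decisions given to members of `group`."""
    decisions = [p for p, g in zip(predictions, groups) if g == group]
    return sum(decisions) / len(decisions)

predictions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # 1 = favourable decision
groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rate_a = selection_rate(predictions, groups, "a")
rate_b = selection_rate(predictions, groups, "b")
print(f"group a: {rate_a:.2f}, group b: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")
```

A large gap does not prove the system is unfair on its own, but it is the kind of signal that should trigger investigation and, if necessary, correction before deployment.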

There are still further strides to be made in ethically developing and deploying AI, both in terms of governmental intervention and the activities of companies. Alongside an encouraging push towards further regulation, organisations are also taking measures such as appointing independent ethics advisory boards whose members have no financial stake in the company, encouraging education, and empowering academics in the sector. 

The potential problems with AI and risks of AI misuse are not new, and developers are already making active efforts to make changes accordingly. 

However, these efforts come predominantly from a commercial perspective; what is also needed is concrete legislation, with input from a diverse array of people, so that clearer guidelines can be established. Without an awareness of ethical considerations, existing inequalities and systems of oppression in society can unwittingly be codified into programmes, where they are magnified and spread even wider. 

If AI is to be universally viewed as good for society, a strong and coherent ethical legal framework must be developed.


References: 

  1. https://www.datarobot.com/covid/ 

  2. https://hash.ai/blog/vaccine-distribution-scenario-modeling 

  3. https://www.alzheimersresearchuk.org/ai-could-help-diagnose-dementia-in-a-day/#:~:text=Researchers%20at%20Cambridge%20University%20and,information%20about%20that%20person's%20prognosis  

  4. https://www.nature.com/articles/d41586-020-03157-9 

  5. https://oxcaigg.oii.ox.ac.uk/wp-content/uploads/sites/124/2020/10/GlobalAttitudesTowardsAIMachineLearning2020.pdf 

  6. https://home.kpmg/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf  

  7. https://www.cbsnews.com/news/doctored-nancy-pelosi-video-highlights-threat-of-deepfake-tech-2019-05-25/ 

  8. https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/ 

  9. https://www.theverge.com/2019/2/21/18234500/ai-ethics-debate-researchers-harmful-programs-openai 

  10. https://www.nature.com/articles/d41586-021-00530-0 

  11. http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf 

  12. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf 

  13. https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report 

  14. https://enterprisersproject.com/article/2019/9/artificial-intelligence-ai-fears-how-address 

  15. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing 

  16. https://www.adalovelaceinstitute.org/blog/role-arts-humanities-thinking-artificial-intelligence-ai/ 

  17. https://appen.com/blog/ai-ethics-the-guide-to-building-responsible-ai/ 

  18. https://techcrunch.com/2021/10/07/gretel-ai-raises-50m-for-a-platform-that-lets-engineers-build-and-use-synthetic-datasets-to-ensure-the-privacy-of-their-actual-data/ 

  19. https://towardsdatascience.com/use-machine-learning-to-detect-errors-in-a-dataset-2028ffdf2aa1 

  20. https://dl.acm.org/doi/10.1145/3442188.3445922 

  21. https://www.fastcompany.com/90608471/timnit-gebru-google-ai-ethics-equitable-tech-movement 

  22. https://link.springer.com/content/pdf/10.1007/s43681-020-00008-1.pdf

  23. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3513182 

  24. https://www.theguardian.com/technology/2020/jun/09/ibm-quits-facial-recognition-market-over-law-enforcement-concerns 

  25. https://www.theverge.com/2020/8/18/21373316/nypd-facial-recognition-black-lives-matter-activist-derrick-ingram 

  26. https://link.springer.com/content/pdf/10.1007/s11023-020-09517-8.pdf 

  27. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3513182 

  28. https://link.springer.com/content/pdf/10.1007/s11023-020-09517-8.pdf 

  29. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 

  30. https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52021PC0206 

  31. https://montrealethics.ai/wp-content/uploads/2021/04/SAIER-Apr2021-Final.pdf 

Images:

  1. Image 1 = Shutterstock

  2. Image 2 = Shutterstock

  3. Image 3 = https://www.wired.com/story/ai-research-is-in-desperate-need-of-an-ethical-watchdog/

  4. Image 4 = https://www.pewresearch.org/internet/2021/06/16/experts-doubt-ethical-ai-design-will-be-broadly-adopted-as-the-norm-within-the-next-decade/

  5. Image 5 = https://governmentciomedia.com/next-topic-ai-master-ethics

  6. Image 6 = https://www.roboticsbusinessreview.com/events/legal-and-safety-issues-are-looming-around-ethics-ai-and-robots/


