Ethical Implications of AI and Ubuntu as an Intervention

August 29, 2019

We will look into the ethical implications of artificial intelligence and derive a few simple guidelines, using the framework of Ubuntu, that can help us build better tools to safeguard society.

Ethics

Ethics describes the protocol for human interaction. It concerns itself with the social attitudes and actions necessary to create and preserve harmony within society: the acceptable attitudes and actions relied upon to provide security and prosperity for humans within a society. The pan-African Ubuntu philosophy tells us that humans are interconnected and that processes of cooperation and collaboration are needed to create and maintain social harmony. As Artificial Intelligence becomes more involved in automating human action, and more prevalent in society, standards are needed to ensure that the wellbeing of society is preserved.

How to preserve the wellbeing of society is defined in different ways in different societies. Different ethical systems define the role and relationship an individual must have to others in different ways. In individualistic societies this role and relationship is limited: individuals are seen as autonomous, self-complete, and self-contained, and must, as much as is legally permissible, prioritize their own interests over others. In more communal societies one's obligation to society is emphasized, and one must take the wellbeing of society into account.

The value systems that can be empirically observed from the large internet tech giants tend to be more individualistic. Profit seems to be placed over people, especially people who are historically marginalized, like those in the developing world. We have seen companies respond slowly to their platforms being misused to spread hate and incite violence and genocide against vulnerable communities, particularly in developing countries. We have seen this in Myanmar, Sri Lanka and Nigeria. Before Artificial Intelligence was being used to interfere with U.S. elections or the Brexit referendum, artificial intelligence deployed through social media was used in various African elections as early as 2012. The impact of such meddling can be devastating in countries and regions that are unstable and have histories of human rights violations.

The large tech companies often feel no real need to be responsible or accountable to society. How value is constructed matters. If AI systems are to successfully maximize the wellbeing of society, value systems designed with society in mind must be adopted. The current narrow and individualistic system is exclusionary and does not work.

What exactly is Artificial Intelligence?

I will provide a simplified definition of Artificial Intelligence based on contrasting it with traditional software. Traditionally, when software engineers create software they provide a series of instructions that define a task and how that task should be accomplished. Those who have written software understand that attention to detail truly matters: every single detail necessary to accomplish a task must be painstakingly specified. Simple tasks can be modeled mathematically, for example how to send a message from one person to another across the world. This is trivial. However, modeling more complex phenomena for a computer, like what a human face looks like, how to translate isiXhosa to English, how to determine the sentiment and tone of speech or human emotion, or even how to diagnose malaria or breast cancer, becomes very complex, if not impossible, with traditional software.
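To make the contrast concrete, here is a minimal sketch in Python; the message-length rule is a hypothetical example of a task simple enough to specify by hand, not something from a particular system:

    # Traditional software: the programmer spells out every detail of the task.
    # A fully specifiable rule, e.g. rejecting messages over a size limit:
    MAX_BYTES = 280

    def message_too_long(message: str) -> bool:
        # Every step is explicit: encode the text, count the bytes, compare.
        return len(message.encode("utf-8")) > MAX_BYTES

    # There is no comparable way to hand-write rules for "does this image
    # contain a face?" or "translate this isiXhosa sentence" -- the rules are
    # too many and too subtle to enumerate, which is where learning takes over.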

Artificial Intelligence removes this complexity in that, when a machine is given a task, the machine itself figures out, or "learns", how to perform it. The task is modeled as a large algebraic equation with millions of variables that must be solved for using a series of mathematical operations. The machine is given examples of correct output and must determine how to produce that output. Teaching, or "training", a machine to solve a particular task means having the machine try thousands of different ways of solving it; the machine eventually settles on the way that produces the best result. This, essentially, is how Deep Learning works.
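A minimal sketch of this "try, measure, adjust" loop, assuming a toy model with only two variables instead of millions; the data, learning rate, and number of attempts are illustrative:

    import numpy as np

    # The machine is given inputs and correct outputs, and must adjust its
    # own variables (here just w and b) until its guesses match the answers.
    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=100)
    y_true = 3.0 * x + 0.5            # the pattern hidden in the data

    w, b = 0.0, 0.0                   # the machine's adjustable variables
    lr = 0.1                          # how big a correction to make each try

    for step in range(1000):          # thousands of attempts
        y_pred = w * x + b
        error = y_pred - y_true
        # Nudge w and b in whichever direction shrinks the error
        # (this is gradient descent, the workhorse of Deep Learning).
        w -= lr * (2 * error * x).mean()
        b -= lr * (2 * error).mean()

    print(f"learned w={w:.2f}, b={b:.2f}")  # settles near w=3.0, b=0.5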

When a machine figures out how to solve a particular task, it cannot tell us why it chose that particular route, how the method works, or why it works, only that the method allows it to solve the task. This is a case of machines using correlations: finding patterns in data and replicating those patterns. When the data comes from human actions, the patterns a machine learns essentially replicate human action.

Therefore machines are not thinking, and it truly is a misnomer, and a misleading one, to call this type of correlation "artificial intelligence". Machines are not able to determine cause and effect; they simply apply patterns they have discovered. A machine may accomplish a task, but it has no capacity for introspection or reflection to determine whether the task was good. If a machine denies someone a financial loan or denies an applicant a job, it has no way to determine whether its decision was shaped by racial bias or any other form of discrimination. Machines are not moral agents on their own; they do not possess a consciousness or a freedom of will that they can employ in decision making – so-called "intelligent" machines are simply extensions of our own moral agency.

When considering the legal implications of these technologies, it is interesting to consider how justice can be applied to an object that has no moral agency. When an AI system causes harm, perhaps a self-driving car, who should be responsible for the damages? What does justice look like in such instances? For justice to be done, AI systems must be correctly recognized as part of an ecosystem: tools that emerge from social structures and that share the moral agency of their creators. The first key takeaway I'd like to propose is this: the ecosystem of creating AI must be regulated if we are to use AI to create meaningful tools that positively affect society.

The use of Artificial Intelligence in predicting and prescribing human actions can have negative effects on society. This has been demonstrated. Artificial Intelligence has been used to hinder democratic processes, remove children from their families, and send people to jail. Often the people disproportionately affected by these tools are from marginalized communities. This is primarily because technology is a byproduct of social processes. The ability to shape which tools are built, through acquired knowledge, capital, and capacity, is largely determined by social factors. These social factors influence who has access to the design and the benefits of this technology. The power structures and imbalances that exist in society end up being reproduced digitally. This is unavoidable: humans use technology and humans shape technology. Artificial Intelligence, fashioned in the image of the privileged and powerful, may lead to the exclusion of others and worsen discrimination. The exclusion of others ultimately leads to the justification of what most people would consider inequality and injustice. It should not be surprising that Artificial Intelligence systems can optimize oppression and injustice, for exclusion is at the heart of injustice.

A prominent application of artificial intelligence is in the classification of text, images, and sound: the classification of information. How we classify people, ideas, and objects has often been used to support imbalanced power structures. One can simply look at how Nelson Mandela, the human rights activist, was once classified as a radical and a terrorist by the western world while he was fighting the oppressive system of Apartheid in South Africa. After 27 years in prison he was awarded the Nobel Peace Prize, reclassified as a human rights hero. These classifications served as a means of regulation by world powers. In a similar manner, the knowledge systems of indigenous communities have often been classified as inferior, unscientific, and lacking any rationality. Indigenous knowledge systems, ways of shaping, seeing, and sensing the world, have been treated as unfit for intelligent and advanced societies. This is a form of colonization: epistemic genocide and destruction. Generally speaking, the western world has long awarded itself the right to classify and label humans and to define acceptable human customs, actions, and political, religious, and economic systems. The academic community has created these classifications, and libraries, as storehouses and gatekeepers of knowledge, perpetuate them.

Artificial Intelligence now provides the ability to classify automatically and to offer recommendations based on these classifications and on data collected about users. The data collected about users raises privacy concerns, but there are other significant issues that the law cannot easily prevent. What if the recommendations offered on most subjects tend to prioritize western-centric views? What if a simple search for the keyword "Africa", the second-largest continent, home to over 2,000 languages, immediately shows me articles and items related to "safaris"? What narratives are being perpetuated? In natural language processing, a task known as sentiment analysis has been shown to systematically score non-white-sounding English names more negatively than white-sounding names, and these biases have been shown to exist in widely used natural language models. What if the narratives and perspectives of indigenous communities never see the light of day due to negative recommendations based on negative classifications? Furthermore, what are the effects on access to information when computers cannot understand the majority of the world's languages, the languages spoken by the world's poor?
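One common way such name bias is measured is to score identical sentences that differ only in the name and compare the averages. A minimal sketch follows; the template, the names, and the deliberately neutral dummy model are all illustrative placeholders for whatever sentiment model is actually under test:

    TEMPLATE = "{name} is here for the interview."

    def average_score(score, names):
        # `score` is whatever sentiment model is being audited; it should
        # return a number (e.g. negative = negative sentiment).
        return sum(score(TEMPLATE.format(name=n)) for n in names) / len(names)

    def name_gap(score, names_a, names_b):
        # A fair model should give a gap close to zero.
        return average_score(score, names_a) - average_score(score, names_b)

    # Demonstration with a deliberately neutral dummy model: the gap is 0.0.
    print(name_gap(lambda text: 0.0, ["Thandi", "Sipho"], ["Emily", "Greg"]))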

I can imagine a future library catalog where one may upload a picture of an object or a person to find related material. What if uploading a picture of a Black woman returns material and images about gorillas, similar to what happened when Google first released a feature to search using an image? The biases embedded within society, and even within the library and information sciences, will simply be replicated through artificial intelligence unless forms of intervention are taken.

Intervention

My framework of intervention is based on Ubuntu philosophy:

  1. We must include the communities affected in the building of AI systems. These are the expert stakeholders. Looking at the effects of how our technology is used allows us to escape a narrow focus on optimization and efficiency.

  2. We must work to account for biases embedded within AI data and processes; a minimal audit sketch follows this list.

  3. We must take notice of the social structures and of the ways in which these tools will be used, by whom, and on whom. If access to the benefits of technology is a function of power, then those who have power will benefit the most from technology, while those with less power remain marginalized.

  4. Decentralization of data: we must allow individuals more ownership of their data, and we must distribute the computing tools and power necessary to create AI more fairly.
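As a concrete starting point for the audit mentioned in point 2, here is a minimal sketch: comparing a model's positive-decision rate across groups before deployment. The groups, decisions, and review threshold are illustrative assumptions, and real audits use richer fairness measures:

    # Compare a model's positive-decision rate across groups before deployment.
    def positive_rate(decisions):
        # decisions: list of 0/1 outcomes (1 = approved)
        return sum(decisions) / len(decisions)

    def demographic_parity_gap(decisions_by_group):
        # Difference between the highest and lowest approval rates.
        rates = [positive_rate(d) for d in decisions_by_group.values()]
        return max(rates) - min(rates)

    # Example: loan decisions recorded per group during a trial run.
    audit = {"group_a": [1, 1, 0, 1], "group_b": [0, 1, 0, 0]}
    gap = demographic_parity_gap(audit)
    print(f"parity gap = {gap:.2f}")  # 0.50 here, well above a 0.1 review bar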