A powerful genie – The Gradient Institute on ethical AI

By Gradient Institute | February 4th, 2019

The recently launched Gradient Institute is an independent not-for-profit organisation founded to research the ethics of artificial intelligence (AI) and to develop ethical AI-based systems. Data61 is an official partner of the Gradient Institute. 2019 will be a big year for these issues – we’re happy to republish their first two blog posts here. You can find the originals, and more information about the work of the Gradient Institute team, here.

Part One – “Helping Machines to Help Us”

AI is deciding what happens to us

Never in history has the human race been as powerful as it is today. The technology we are developing is reshaping our societies and our planet at an ever-increasing pace. In the space of decades, artificial intelligence (AI) systems have migrated from science fiction, to the lab, to the real world. AI has been applied to accomplish many tasks, but perhaps the greatest significance of these systems is that they now make consequential decisions about what happens to us. The AI technology humans are building is already steering the lives of billions of people, and the sophistication and reach of this influence are growing rapidly. Data-driven algorithms are deciding who gets insurance, who gets a loan, and who gets a job. Parole and sentencing risk scores, social media feeds, web search results, traffic routes, advertising, job recruitment and online dating recommendations are all consequential, and all are already algorithmically personalised today. Powered by algorithms and data, AI is becoming increasingly capable of influencing, in a highly specific and targeted manner, what actually happens to each one of billions of humans.

We’ve already delegated a lot of important decisions to AI systems, and the list keeps growing. This poses fundamental questions: how much good will come out of these decisions, and how will that good be distributed? What will the ethical balance sheet look like for a world shaped by AI?

 

AI does what we ask

Artificial intelligence is a vague concept that means different things to different people and in different contexts. For our purposes in this post, we will focus on a specific technology that has enabled many of the feats associated with AI: machine learning. Conceptually, what a machine learning algorithm does is quite simple: you feed it data and a goal to be achieved, and it learns from the data which decisions, predictions or actions best achieve that goal. It takes nothing else into account when making its decisions: only the goal and the data provided.
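To make this recipe concrete, here is a minimal sketch of the “data plus a goal” idea. It is our own illustration, assuming scikit-learn and an invented loan-approval example; the post itself names no particular library or task.

```python
# A minimal sketch of the "data plus a goal" recipe, using scikit-learn purely
# for illustration (our own choice of library and example).
import numpy as np
from sklearn.linear_model import LogisticRegression

# Data: past loan applicants described by age and income, labelled by repayment.
X = np.array([[35, 60000], [22, 18000], [48, 90000], [30, 25000]])
y = np.array([1, 0, 1, 0])  # 1 = repaid, 0 = defaulted

# Goal: fit parameters that best predict repayment on this data.
model = LogisticRegression().fit(X, y)

# The fitted model now drives decisions using only that goal and that data;
# nothing about fairness, wellbeing or context is taken into account.
print(model.predict([[28, 40000]]))  # e.g. approve or decline a new applicant
```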

Although conceptually simple, this capability is of great significance. Machine learning is so powerful because it can create new knowledge from data and use that knowledge to relentlessly drive towards the goal the human programmer has specified.

Perhaps you can now see what could go wrong with this approach. As the saying goes, “be careful what you wish for”. In a way, a machine learning algorithm today behaves similarly to the allegorical genie: it is both powerful (since it creates knowledge from the data) and obedient (since it uses that knowledge to obsequiously pursue the precise goal set by its human master).

If only this were just a silly analogy. Unfortunately, while we can laugh at a story of a genie granting naive wishes with unintended consequences, we currently live with the sober reality of machine learning systems doing exactly that.

 

Unintended consequences

There are many settings in which a straightforward implementation of machine learning may inadvertently cause harm.

Consider a retail chain using machine learning to generate personalised specials based on previous purchases. If the algorithm were asked to maximise the profit resulting from the offers, it would likely identify, over time, a subgroup of shoppers who are particularly responsive to promotions for highly processed and unhealthy foods such as soft drinks. Since machine learning algorithms can draw inferences about an individual not only from their own previous purchases but also from the behaviour of people the algorithm deems similar – for example because they live in the same area or share a surname common to a particular ethnic group – such an algorithm is likely to amplify existing patterns of consumption. What kind of society are we going to build if shoppers in wealthy inner-city areas receive promotions and discounts for fresh produce, while those in more disadvantaged suburbs only get offers for soft drinks and chips?
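As a toy illustration of this dynamic (ours alone, with made-up profit margins and redemption rates), consider an algorithm that simply picks the offer with the highest expected profit for each shopper segment:

```python
# Toy calculation (our own, with invented numbers): choose the offer that
# maximises expected profit for each shopper segment, with no other criterion.
offers = {"fresh produce": 1.50, "soft drink": 2.00}  # profit per redeemed offer

# Hypothetical redemption rates inferred from past purchases of "similar" shoppers.
redemption = {
    "inner-city segment":    {"fresh produce": 0.30, "soft drink": 0.10},
    "disadvantaged segment": {"fresh produce": 0.05, "soft drink": 0.40},
}

for segment, rates in redemption.items():
    best = max(offers, key=lambda item: offers[item] * rates[item])
    print(segment, "->", best)
# inner-city segment -> fresh produce
# disadvantaged segment -> soft drink
```

Nothing in the objective asks for this split; it simply falls out of maximising profit over data that already reflects existing consumption patterns.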

Or take the growing use of machine learning in recruitment. Many companies routinely use machine learning algorithms to screen candidates for interviews based on their CVs. These systems are typically optimised to select candidates who resemble previous applicants deemed suitable in the data sample the algorithm learns from. Whenever certain groups of the population have historically been underrepresented in a job category, the resulting automated system will make more mistakes for those groups than for well-represented ones, simply because there is less data to learn from. This naturally produces a bias favouring overrepresented groups. For instance, if most CVs for software developers come from male candidates because there have historically been more male software developers than female ones, an off-the-shelf machine learning algorithm will consistently reject qualified women at a higher rate than qualified men. Are we happy to create a world in which women who are just as qualified as their male counterparts are less likely to get a job precisely because historically there have been fewer women in that occupation?
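The following toy simulation (our own, built on synthetic data and an assumed scikit-learn setup) reproduces this effect: one classifier is trained on a pool of CVs in which one group makes up only 5% of the sample, and the observable CV score relates to true suitability differently for each group. Qualified members of the scarce group end up rejected far more often.

```python
# Toy simulation (our own, synthetic data) of the "less data, more mistakes" effect.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def applicants(n, group):
    skill = rng.normal(size=n)
    suitable = (skill > 0).astype(int)                 # ground truth: half are qualified
    cv_score = skill if group == 0 else skill - 1.5    # the observable proxy behaves differently per group
    return cv_score.reshape(-1, 1), suitable

X0, y0 = applicants(950, group=0)
X1, y1 = applicants(50, group=1)                       # underrepresented group
model = LogisticRegression().fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))

for g in (0, 1):
    X, y = applicants(5000, g)
    qualified = y == 1
    rejected = model.predict(X) == 0
    rate = (rejected & qualified).mean() / qualified.mean()
    print(f"group {g}: qualified candidates rejected: {rate:.0%}")
```

The model fits the majority group’s relationship between CV score and suitability, so qualified candidates from the minority group are rejected at a much higher rate.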

The above are simply illustrative examples. These issues are widespread and can occur whenever machine learning algorithms are deployed for automated decision making – in other words, in almost every area of modern life. What we are describing is a fundamental problem arising from how we currently interact with the machine learning technology available to us. At present, the genie is powerful and obedient, and the wisher is naive. Something must change.

 

Towards ethically-aware AI

It is crucial that we learn the right lesson here. The problem is not technology. The problem is not AI. The problem is not letting AI make decisions and the solution is not to replace AI decisions with human decisions.

To gain clarity about the matter, it is helpful to remember that machine learning is a technology. And there are two very important yet self-evident observations to be made about technology that are relevant to our discussion.

First, technology is made by people.

That means we build these systems according to our choices. We are ultimately in charge of the form that AI systems take. Machine learning isn’t like the climate or the economy, which we can at best hope to influence or nudge – we actually get to design and build it from the ground up. AI is engineered, not unlike a bridge or a car. From this perspective, technologists are lucky: they find themselves in a far better position than climate activists, economists or world leaders, with a much higher degree of control over the systems they want to improve.

Second, technology evolves with knowledge.

During the Victorian era, there were many attempts to build new types of bridges with new materials. Several of those bridges collapsed because, at the time, we lacked the knowledge to build and deploy safe bridges. Similarly, more than a million people are killed on the roads every year, partly because we still lack the knowledge to build and deploy safe autonomous vehicles that could take fallible human drivers out of the loop. And machine learning systems make ethically unaware, consequential decisions because we lack the knowledge needed to build and deploy ethically aware machine learning systems. It’s a knowledge game. We need to pursue more knowledge and embed it in the technology.

These two self-evident observations help provide clarity about the potential of algorithmic decision making and reveal the path forward.

 

A science of ethical AI

Current AI-based decisions may be unintuitive and resist explanations that can be easily understood by individual people. Is this a problem? Some think that AI decisions should be intuitive and explainable to ordinary human beings. Others are not so sure.

It’s clearly desirable that AI makes decisions that are auditable from an ethical standpoint. We should aim for AI whose decisions can be reliably tested for their ethical consequences, so that we can continuously refine the design according to the outcomes of such tests.
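As one concrete, deliberately simple example of such a test (our own illustration, not a prescription), decisions can be audited by comparing the rate of favourable outcomes across groups, a metric often called demographic parity:

```python
# A minimal sketch of one auditable check: compare the rate of favourable
# decisions across groups. It tests outcomes directly, rather than relying
# on an intuitive explanation of how the model works.
import numpy as np

def selection_rates(decisions, groups):
    """Return the favourable-decision rate for each group."""
    return {g: decisions[groups == g].mean() for g in np.unique(groups)}

decisions = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # hypothetical model outputs
groups    = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

rates = selection_rates(decisions, groups)
print(rates)                                      # e.g. {'a': 0.75, 'b': 0.25}
print("disparity:", max(rates.values()) - min(rates.values()))
```

Which metric is appropriate depends on the context and on what we decide counts as harm; the point is that the test is quantitative and repeatable, not a matter of intuition.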

However, such tests cannot rely on intuitive explanations or subjective notions of “understandability”. A decision may make perfect intuitive sense to a human, may appear to do good, and yet cause harm. In our next blog post, we show how an intuitive interpretation of a situation (as opposed to detailed knowledge and rigorous analysis) can create harm when used to guide our choices about AI decision-making.

We seem to have a challenge here. We need to ensure AI makes ethical decisions and yet we can’t and shouldn’t rely only on our intuition, interpretation or subjective “understanding” to do that. What can we do?

We can and must do science. Science is the best approach we know of to attain reliable objective knowledge. We need to pursue a deeper scientific understanding of the ethics of automated decision making, as opposed to trusting intuitive and language-based rationalisations and explanations. We then need to use that science to support the engineering of reliable ethical decision making systems.

Even today we don’t have an intuitive understanding of quantum phenomena, yet we developed a powerful science of quantum mechanics that gave us enough knowledge to engineer devices such as transistors, the key building blocks of modern computers. Similarly, we don’t need to wait for a mature theory of AI ethics in order to make progress. History shows that the systematic use of empirical and quantitative approaches can yield just enough knowledge to make reliable progress. For instance, long before the theory of evolution was elucidated, artificial breeding was used for thousands of years to dramatically transform the appearance and behaviour of living things, enabling the domestication of plants and animals. In other words, neither an intuitive understanding nor a mature theory of how a system works is required to take advantage of it for our own good; it often suffices to have a working theory substantiated by the results of trials and experiments. When we face facts or challenges that we don’t understand intuitively, the most productive response is to turn our attention towards them, deepen our understanding, and then use that understanding to find effective solutions. It will be no different for AI and machine learning, over which we have even more control than over quantum or biological systems.

 

The way ahead

Since machine-learning-driven decisions are everywhere and we have control over the design choices of these systems, we are presented with perhaps one of the greatest opportunities in history to create good at massive scale. What we have to do is clear: research which design choices for machine learning lead to more ethical outcomes, apply the findings to build and spread decision-making systems that are more ethically aware, and educate individuals and society so that they can become active contributors to a world shaped by AI. This cannot and must not be done by technologists in isolation.

In research, technologists have to work as part of a genuine scientific expedition alongside experts in areas such as ethics, law, policy and the behavioural sciences. Their collective job is to clarify the complex web of causes and effects running from design choices for AI systems all the way up to the wellbeing of individuals and society.

In application, they have to work with businesses, governments and non-governmental organisations to deploy these AI systems for the benefit of the people those organisations serve. They also have to put boots on the ground and engage consistently with government over time, building the trust required to help shape policy and regulation informed by the latest research findings.

In education, they must work with experts in other fields to support a holistic training of the next generation of ethical machine learning scientists and engineers, as well as actively engage in public forums to promote an informed and reasoned dialogue about AI.

We’ve so far built a genie that is powerful but naively obedient. As we grant it more power, it is critical that we also make it wiser.

 

Part Two – “Ignorance Isn’t Bliss – How human intuitions about AI can lead to unfair outcomes”

Societies are increasingly, and legitimately, concerned that automated decisions based on historical data can lead to unfair outcomes for disadvantaged groups. One of the most common pathways to unintended discrimination by AI systems is that they perpetuate historical and societal biases when trained on historical data. This is because an AI has no wider knowledge to distinguish between bias and legitimate selection.

In this post we investigate whether we can improve the fairness of a machine learning model by removing sensitive attribute fields from the data. By sensitive attributes we mean attributes on which the organisation responsible for the system does not intend to discriminate, because of societal norms, law or policy – for example gender, race or religion.

How would you feel if you knew that someone was using your gender or race to determine your suitability for a job? What if that someone was an AI? Would you prefer a system that was ignorant of these attributes? Intuitively, we would expect ignoring such attributes to make the AI fair and therefore lead to more equitable outcomes. Let’s look at the behaviour of a machine learning system on an illustrative data set to explore this scenario.
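The intuitive fix, sometimes called “fairness through unawareness”, is simply to drop the sensitive columns before training. The sketch below is our own illustration of that approach with invented applicant data, not code from the post; a well-known complication, which motivates the question above, is that correlated features can still act as proxies for the attribute that was removed.

```python
# A minimal sketch (our own illustration) of "fairness through unawareness":
# drop the sensitive columns before training.
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant data.
df = pd.DataFrame({
    "gender":    ["f", "m", "f", "m"],
    "years_exp": [5, 3, 8, 2],
    "postcode":  [2000, 2770, 2000, 2770],   # may act as a proxy for the sensitive attribute
    "hired":     [1, 0, 1, 0],
})

sensitive = ["gender"]
X = df.drop(columns=sensitive + ["hired"])    # the model never "sees" gender...
y = df["hired"]

model = LogisticRegression().fit(X, y)
# ...but correlated features such as postcode can still carry the same information.
```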

[Read the full post here]

1 comment

  1. Ethical dilemmas have been discussed since the beginning of time. Science deals well with structure and rigour, but not so well with fluid, intuitive concepts. The only way science can be applied, I think, is to develop controls such as ethical boundaries within which these algorithms can operate. Even this is difficult. Western philosophical history is littered with examples of brilliant minds trying to apply rigour to ethics, such as Kant’s categorical imperative. Think about the trolley problem, which is very relevant to autonomous cars, or consider the guiding philosophies of societies… distribution of wealth or utilitarianism. It’s fraught with complexities beyond science. Is there a way to integrate humans’ ethical intuition with AI through creative use of publicly available data? Can we find a tangible answer to ‘what would Jesus do?’
