D61+ Live – Machine learning expert Lachlan McCalman, on the near future of AI

By Lachlan McCalman, August 17th, 2018

In September this year, we’ll be holding our annual showcase event in Brisbane, featuring the work of our experts and a series of guests, and packed with invitees. There will be booths, keynote talks, panels, discussions and demonstrations. This Algorithm series will feature snippets of insight from smart humans inside and outside CSIRO’s Data61. We hope you enjoy them, and please check out the link below to find out more about D61+ Live.

Lachlan McCalman is Group Leader, Inference Systems Engineering at CSIRO’s Data61, and a prolific public commentator on the science, math and ethics of tools like machine learning and artificial intelligence. He’s speaking at D61+ Live on a panel on artificial intelligence, and you can dive into a short video below, or an extended interview, with transcripts below each. These insights into the changes we’ll have to face as we reap the various benefits of these tools are a great watch. Please enjoy, and please spread the word.

D61+ Live

Video Transcript

My team and I are increasingly focusing on the ethics of machine learning, and ‘ethically aware machine learning’ is how we talk about it. If we don’t ask those hard questions, then what happens is we get disasters. The reason this topic is on everyone’s mind at the moment is that we’ve seen a number of high-profile, big problems with automated decision systems not behaving ethically: being racist and biased in various ways. Ultimately that is bad for society, and people are rightfully looking at how to correct it. So, you know, it’s going to be a difficult challenge, but it’s one we have to tackle.

I’m on a panel at D61+ Live that’s going to be talking about some of the issues around ethical A.I. I’m really excited about this: it is fundamentally a multidisciplinary problem. On the panel, my understanding is we’re going to have a really nice variety of expertise that I think can draw out quite a few of the issues here.

We’ve got an ethicist, who is dealing at the really meaty philosophical end of the spectrum: what it means to be ‘ethical’ in these difficult situations. We’ve got people from government who have experience building real systems, who understand the hard trade-offs that have to be made, and who have experience in the trenches.

And we’ve got people like myself who have more of a technical background and see some of the practical challenges and questions that arise from the use of these systems. So I think it’s going to be a really interesting panel. I’m looking forward to speaking with the members and taking questions from the audience as well.

Extended Interview on AI and ethics

Video Transcript

My team and I are increasingly focusing on the ethics of machine learning, and ‘ethically aware machine learning’ is how we talk about it. This is an increasingly important issue because more and more parts of society are being automated. We’re taking decisions that used to be made by humans and giving them to algorithms to make.

And that comes with some advantages: we can scale those decisions to an extent we couldn’t with humans, we can make them really consistent, and we can maybe personalise them to a degree and take into account more information than a human might be able to. It also comes with some risks. And the risks are that these algorithms are very, very simple. They don’t really deserve the title “intelligence” that we give A.I. They are going to do precisely what we tell them to do. And their incentive is usually just to be accurate with respect to their predictions, or to maximise profit in some simple way.

So where ethics comes in is to say: well, if we’re going to build these systems, we actually have to be really explicit about how we want them to behave. Not just to maximise profit or to achieve some primary objective, but also: are they fair? Are they equitable with respect to risk? Do they disadvantage people in society who are already disadvantaged? These are considerations that a human decision-maker might take into account automatically, but that a machine won’t, unless we tell it about them.

So then there are two challenges. One is to say: okay, for a given automated decision system, what is the ethical framework here? How do we trade off the profit or the efficiency of the system with notions of fairness? What’s the correct notion of fairness? For whom are we considering particular disadvantage with regard to this system? These are not technical questions; they’re fundamentally ethical questions. And we need to answer them at quite a high level of detail if we want to encode them into the algorithm.

And that’s the second challenge: even once we’ve got them, how do we write an automated decision system that actually achieves the fairness we require? That achieves equality of impact or equality of outcome? Or that can efficiently trade off its, you know, profit-making goals with treating people fairly? These are difficult technical problems.
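
To make that concrete, here’s a minimal sketch of what encoding a fairness requirement can look like: a toy logistic regression trained with a demographic-parity penalty, in plain NumPy. This is illustrative only, not Data61 code; the penalty weight lam is exactly the kind of explicit, quantified trade-off discussed below, and all names and numbers are assumptions for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def loss_and_grad(w, X, y, group, lam):
        """Logistic loss plus a demographic-parity penalty.

        `group` is a 0/1 protected attribute; the penalty is the squared
        difference in mean predicted score between the two groups.
        """
        p = sigmoid(X @ w)
        eps = 1e-9  # avoid log(0)
        nll = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
        gap = p[group == 1].mean() - p[group == 0].mean()
        loss = nll + lam * gap ** 2

        # Gradients of both terms.
        s = p * (1 - p)  # derivative of the sigmoid
        grad_nll = X.T @ (p - y) / len(y)
        d_gap = (X[group == 1] * s[group == 1, None]).mean(axis=0) \
              - (X[group == 0] * s[group == 0, None]).mean(axis=0)
        return loss, grad_nll + lam * 2.0 * gap * d_gap

    # Toy data where the protected attribute is correlated with the label.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    group = (rng.random(500) < 0.5).astype(int)
    y = (X[:, 0] + 0.5 * group + rng.normal(size=500) > 0).astype(int)

    # Plain gradient descent; lam=10.0 is an arbitrary illustrative choice.
    w = np.zeros(3)
    for _ in range(500):
        _, grad = loss_and_grad(w, X, y, group, lam=10.0)
        w -= 0.5 * grad

Demographic parity is only one of several competing fairness notions (equalised odds and within-group calibration are others), and that is the point: choosing between them is the ethical question, not the technical one.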

So there are two separate issues, and both are really, really difficult. And as you can see, it’s definitely a cross-disciplinary subject: we’ve got technical problems all the way through to really fundamental philosophical and ethical problems. Sometimes we’re building systems that simply didn’t exist before. But often we’re replacing a human-driven system with a machine-driven system, where, hopefully, the ethical norms that we would have expected to hold in the past should at least hold in the new system. And preferably stronger ones.

The big difficulty here in general is quantification. We’re not used to thinking about having to quantify these trade-offs to the extent that we probably have to if we’re going to encode them in an algorithm. And there’s a big challenge there because humans don’t really think like that. We’re not very good at writing down quantitative trade-offs. So there’s a big elicitation challenge here as well.
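
A hedged illustration of what that quantification forces: sweeping the penalty weight from the sketch above traces out an explicit accuracy-versus-parity frontier, and choosing a point on that frontier is precisely the elicitation challenge. This snippet assumes the sigmoid and loss_and_grad helpers and the toy data from the earlier sketch.

    # Assumes sigmoid, loss_and_grad, X, y, group from the sketch above.
    for lam in [0.0, 1.0, 10.0, 100.0]:
        w = np.zeros(3)
        for _ in range(500):
            _, grad = loss_and_grad(w, X, y, group, lam)
            w -= 0.5 * grad
        p = sigmoid(X @ w)
        acc = ((p > 0.5) == y).mean()
        gap = abs(p[group == 1].mean() - p[group == 0].mean())
        print(f"lam={lam:6.1f}  accuracy={acc:.3f}  parity gap={gap:.3f}")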

And there’s a political challenge and a social challenge as well, where companies and governments are incentivised primarily to make profit or to produce savings, or there are often KPIs that they’re beholden to just because of the structure of our society, and so it’s really hard to ask them to make some of these trade-offs. But if we don’t ask those hard questions, then what happens is we get disasters.

The reason this topic is on everyone’s mind at the moment is that we’ve seen a number of high-profile, big problems with automated decision systems not behaving ethically: being racist and biased in various ways. And ultimately that is bad for society, and people are rightfully looking at how to correct it. So, you know, it’s going to be a difficult challenge, but it’s one we have to tackle.