D61+ Live: A chat with Hilary Cinis, ethics expert and keynote speaker

August 17th, 2018

In September this year, we’ll be holding our annual showcase event in Brisbane, featuring the work of our experts and a series of guests, and packed with invitees. There will be booths, keynote talks, panels, discussions and demonstrations. This Algorithm series will feature snippets of insight from smart humans from inside and outside CSIRO’s Data61. We hope you enjoy them, and please check out the link below to find out more about D61+ Live.

Hilary Cinis is the group leader in user experience and design at Data61. We spoke to Hilary about her upcoming talk at D61+ Live, and her work on ethics and AI. Watch it below, or check out the transcript further down.

Video transcript: 

So I’m doing a talk, and one of the examples in my talk is this pizza store in Norway that was showing billboards to passers-by. It was showing women salad and it was showing men pizza. And that pissed everybody off. There’s a hidden camera, and, you know, computer vision is being used, and algorithms are being used, to determine the gender and the age, and then make a menu choice on their behalf. And so, it’s creepy.

It’s reinforcing social norms around what women like to eat, apparently. I’d eat a pizza over a salad any day of the week. And then also it’s not actually even doing anything new, which is a really bad use of AI. It’s not even revolutionising anything. It’s just being lazy. Lazy marketing. In this story, too, the screens broke down, so the ad stopped showing, and all the code came up. So people could see that they were being profiled. They could see ‘male’, ‘age group’, ‘ethnicity’. And a lot of people got very upset.

I’m doing a keynote presentation at the event. It’s like an opinion piece and a bit of a call to arms around the value of diverse teams and also the soft skills required to develop equitable AI. When it starts being a product, in use by people, that affects a large group of people’s lives, there are other considerations we need to start exploring, around power relationships, minimisation of harm, and what legal frameworks already exist. There are lots of questions about how to proceed with ethical AI. I believe very strongly that there are a lot of answers to it. I’m sure there are a lot of people doing really good work, but we just don’t hear about it. I am positive a lot of good work is being done.