
By Madeleine Clarke | 18 October 2022 | 4 min read

Dr Melanie McGrath

It wasn't long ago that the idea of artificial intelligence (AI) and humans teaming up to save the world existed only in a galaxy far, far away. Fast forward to 2022, and two of CSIRO's Future Science Platforms (FSPs) are working to optimise human-AI teams and solve some of the most complicated challenges we face.

One of our people helping to understand and build these new teams is Melanie McGrath. An expert in social psychology, she has spent her career researching what makes humans tick and exploring how we interact, behave in groups and relate to others. Now working with the Collaborative Intelligence (CINTEL) and Responsible Innovation FSPs, she's turned her attention to the way humans relate to a different kind of other: AI. We sat down with her to peer into the future of collaborative intelligence.

This is your first time working in the AI and tech space. What inspired the jump?

I’ve always been drawn to uncharted territory and wide-open spaces. In my PhD, I studied how individuals differ in their understanding of ‘harmful’ concepts such as prejudice, trauma and bullying, and how these notions influence our perceptions of others. It was territory no one had explored before. I was very much drawn to the idea of CINTEL as something new that we are trying to understand from the ground up.

Also, my area of psychology hasn’t traditionally had a lot of interaction with artificial or machine intelligence. That’s a big gap. There are a lot of people with a strong understanding of human behaviour who can contribute to these systems, and vice versa, so there’s huge scope for linking our fields. I’m extremely excited by the opportunity to be part of doing that.

You’re working on understanding the nature and role of human trust in new collaborative intelligence capabilities. Why trust?

Trust is fundamental to any kind of collaboration taking place. Our first CINTEL use cases all feature technology or domain experts working with systems to unravel particularly knotty problems. To do this, you need to establish a sustained, long-term relationship between teammates. You can't have that if one of the members of the relationship doesn't trust the other.

The challenges our human-AI teams are working to solve are hard to grasp and coloured by a degree of uncertainty. Trust will make it possible for the human and the AI to operate in this environment of uncertainty and accept the best possible outcome.

Search and rescue is one domain that sees humans and AI collaborate to save lives in dangerous environments.

What’s our current understanding of trust in human-AI teams?

While we do have models of trust in automation and in AI, the outcomes these models are interested in are often things like: will people use the system? Will they comply with its recommendations? In CINTEL, the outcome of trust goes beyond that and looks at whether people want to collaborate with it. We’re talking about a major shift from seeing technology as a tool to seeing technology as a potential teammate.

Are there any significant barriers to trust in AI systems that we need to overcome?

The main barrier to trust in AI systems today is the performance of the system. Very often if the system fails early on, your trust in it will diminish very, very rapidly. It’s a slow and painstaking process to build it back. As we're developing novel capabilities designed to work in shifting and uncertain environments in CINTEL, we expect that at least in the beginning, AI systems will fail sometimes. Balancing reliability and uncertainty is likely to be a key area for us to focus on.

Another interesting problem to unpack is the potential human psychological barriers. We know that humans don't process probability-based information particularly well. This raises questions about how we can communicate and explain uncertainty in a way that a human can understand and use to make decisions. Then you need to work out how to program something to do that.

What’s the impact of increased trust in collaborative intelligence systems?

One exciting thing about CINTEL is that it takes a vastly different approach to AI. It shifts away from the idea that we're going to train AI functions to take over from humans and instead looks to find spaces where we can combine the massive, amazing processing power of an AI with the adaptability, novelty, and innovation of the human mind. This requires a whole new way of thinking and approaching software engineering to make it real.

I think one of the areas where we’ll see the greatest impact is the workforce. We might be able to start looking at broader applications where we are able to bring in AI to work in a complementary, ongoing way with humans, rather than seeking to replace them.

Collaborative Intelligence has the potential to transform the workforce as we know it.
