A participant in the study.

We live in a world increasingly driven by automation, and many of our daily thoughts, actions and feelings are directly influenced by highly automated systems – from what we buy on Amazon, to who we vote for in elections, and how safe we are in airplanes.

A growing number of machines and applications offer us recommendations, and how much we trust this guidance largely determines how successfully we interact with these systems.

With the advent of artificial intelligence (AI) systems, it has become crucial to assess and manage trust, so that people can fall back on their own reasoning when systems fail, rather than trusting them implicitly. It is equally important to help users embrace automation when it can outperform humans; detecting distrust can therefore signal when a system needs to clarify its internal operations instead of asking the user to take a leap of faith.

Trust, however, is a complex subject within the field of human-computer interaction. An intangible and abstract concept that is felt and experienced differently by each individual, it is challenging to measure and assign a quantitative value to. Using physiological and behavioural sensor data, researchers at CSIRO’s Data61 have undertaken a study exploring how to quantify the trust we place in automated systems or human collaborators.

Their novel approach to this challenge involved designing and running a user study in which participants interacted with an automated system while wearing various body sensors: an EEG headset, a galvanic skin response (GSR) recorder to measure sweat on participants’ hands, and eye-tracking glasses that monitor blinks, pupil dilation and gaze direction.

“Our aim is to generate meaningful insights on the measurability and dynamics of how users of automation come to trust or distrust different decision support systems,” said the lead researcher, Jerome Han.  

“This is an especially relevant area of inquiry given the increasing role that automation plays in our daily lives, and the impact that over-trust or under-trust can have on the successful operation of systems. Ultimately, we hope to demonstrate that trust – a concept that otherwise seems intangible and unquantifiable – can be monitored and perhaps actively managed in future systems.”

By combining physiological sensor data with behavioural metrics, the researchers were able to predict trust levels.

“In our study, we asked participants to periodically rate how much they trust an automated system as they interact with it – this serves as our ground-truth measure of user trust,” explains Mr Han.

“We then apply machine learning algorithms to our sensor data to predict these user trust ratings. By doing so, we identify features in our data that are indicative of user trust levels, creating a way to measure something that would otherwise depend entirely on self-report mechanisms, which are slow to administer and prone to bias.”
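The approach described above – treating periodic trust ratings as labels and sensor readings as features – can be illustrated with a minimal sketch. Everything here is hypothetical: the feature names (EEG alpha power, GSR level, blink rate, pupil dilation), the data, and the simple linear model are illustrative placeholders, not the study's actual pipeline or data.

```python
import numpy as np

# Synthetic stand-ins for sensor features recorded during the interaction.
# Columns: EEG alpha power, GSR level, blink rate, pupil dilation (all invented).
rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 4))

# Simulated "ground-truth" trust ratings: a hidden linear mix of the
# features plus noise, standing in for participants' periodic self-reports.
true_w = np.array([0.8, -0.5, -0.3, 0.6])
y = X @ true_w + rng.normal(scale=0.2, size=n)

# Fit an ordinary-least-squares model mapping sensor features to ratings.
Xb = np.hstack([X, np.ones((n, 1))])        # append an intercept column
w, *_ = np.linalg.lstsq(Xb, y, rcond=None)  # learned weights
pred = Xb @ w                               # predicted trust ratings

# How well the sensor features recover the reported trust levels.
r = np.corrcoef(y, pred)[0, 1]
print(f"correlation between predicted and reported trust: {r:.2f}")
```

In practice the study would use richer features and models, but the structure is the same: self-reported ratings supervise a model that afterwards can estimate trust from sensors alone.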

One of the goals of this project is to create a trust-automation dataset that can be used in future studies that investigate the dynamics of trust in automated systems.  

Cybersecurity, for example, is a domain where this project could have significant applications. “Our results can help improve security and training in virtually all organisations, and we hope that through our work we can keep more people safe online,” he concludes.

In the long run, research like this can give designers of such systems better and more accurate ways to measure user trust objectively, unobtrusively and in real time, allowing them to build systems that function optimally in the hands of human operators.