A sustainable future for artificial intelligence
When we think about a future world where AI is embedded in everything, we tend to draw on the science-fiction portraits seared into our memories: human-like machines that are intelligent, uncompromising and sinister. As is so often the case with science fiction, this imagery reflects the anxieties of its time more than it predicts the future.
Machines that perform tasks that once required active human intelligence are increasingly prevalent in society, and we’re already seeing patterns play out that both confirm and contravene dark illustrations of the future. Avoiding societal and regulatory backlash against these potentially beneficial technologies requires active foresight right now – ethical and moral factors are just as important to the success of these technologies as raw engineering and computer science.
Efforts around artificial intelligence and the use of algorithms can be summarised as handing over to a machine a collection of tasks that would otherwise demand enormous human mental effort. These tasks tend to be repetitive, precise and cognitively demanding.
Operating a vehicle is a great example of a task that requires dense, unbroken human attention. We scan the environment, collecting data from our eyes, ears and vestibular senses (balance and movement), process this information and send physical commands to the vehicle. We maintain this intensive process for the duration of our trip, and even slight alterations to the data collection process, or to the biological processing of that information between our ears (e.g. alcohol or exhaustion), have catastrophic and immediate consequences.
Increasingly, physical devices can collect information from the environment and control a vehicle. Data61’s work in this area, particularly in mining, industrial and agricultural contexts, illustrates the safety benefits of augmenting human effort with software and hardware. The Hot Metal Carrier, Load Haul Dump Vehicle, Science Rover and autonomous Gator are all machines that move based on algorithms not quite as susceptible to failure as the human brain, with significant benefits in safety and efficiency.
Our visual system also pales in comparison to machines when identifying subtle changes in scans of biological processes. In partnership with the University of Melbourne, Data61 developed AutoDensity – an algorithmic breast density measurement software package. Other Data61 work detects the development of new blood vessels, known to precede the growth of cancer. These serve as real-world illustrations of how outsourcing specific tasks to machines – following a set of detailed formulas we’ve designed – can improve outcomes for individuals, society and businesses. Ensuring the long-term sustainability of artificial intelligence in society therefore carries a range of tangible benefits.
The instructions we give to these machines don’t have to be static, either. Machine learning, a subset of artificial intelligence, is in a sense an effort to enable machines to actively learn how to do tasks, rather than waiting for us to provide them with a fixed set of rules. Give a robot a battery and you feed it for a day; teach it how to find a charger and plug itself in, and you feed it for life. Data61’s researchers developed a program that feeds historical data on pipe failures into a machine – it’s trained to detect patterns and then make predictions about future pipe failures. Other examples include monitoring the structural health of the Sydney Harbour Bridge by teaching a machine where failures will crop up, doing the same for big ship engines, detecting patterns in transport in real time using advanced analytics, and developing software, called Determinant, that learns the likelihood of its own predictions being realised, which can then guide researchers on the value of collecting new data.
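The idea of learning a predictive rule from historical failure records, rather than hand-coding one, can be sketched in a few lines. This is a deliberately minimal illustration, not Data61's actual pipe-failure model: the data, the single `pipe_age_years` feature and the threshold rule are all invented for the example.

```python
# Toy historical records: (pipe_age_years, failed_within_5_years).
# Invented data for illustration only.
history = [(12, False), (18, False), (35, True), (42, True),
           (8, False), (51, True), (27, False), (39, True)]

def learn_age_threshold(records):
    """Pick the age cutoff that misclassifies the fewest historical pipes.

    This is the 'learning' step: the rule comes from the data,
    not from a human-written specification.
    """
    best_cut, best_errors = None, len(records) + 1
    for cut in sorted(age for age, _ in records):
        errors = sum((age >= cut) != failed for age, failed in records)
        if errors < best_errors:
            best_cut, best_errors = cut, errors
    return best_cut

threshold = learn_age_threshold(history)

def predict_failure(age):
    """Apply the learned rule to a pipe we haven't seen before."""
    return age >= threshold

print(threshold)            # the cutoff the data itself suggested
print(predict_failure(45))  # an older pipe is flagged as at risk
```

A real system would use many more features (soil type, material, pressure) and a richer model, but the shape is the same: historical outcomes in, predictive rule out.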
Artificial intelligence and machine learning facilitate a super-powered ability to detect patterns in nature, industry and infrastructure, alongside a truly novel removal of our reliance on human perception. Accessing and comprehending the inner workings of these processes is a priority for many making decisions based on their outcomes, but the sheer complexity of the underlying processes means sometimes even the researchers have difficulty explaining them to users and to the general public. Data61’s work on transparency in machine learning has highlighted something inherent and significant – as we hand tasks to machines, we become profoundly disconnected from the inner workings of processes that have important outcomes. Researchers and engineers are working hard on methods to close this gap – making machine learning more understandable, transparent and meaningful.
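One simple route to the transparency described above is to use models whose predictions can be decomposed into per-feature contributions that a human can inspect. The sketch below is illustrative only, assuming a hypothetical linear risk score with invented feature names and weights; it is not a description of Data61's transparency methods.

```python
# A linear scoring model: the score is just a weighted sum, so every
# prediction can be broken down feature by feature. Weights, bias and
# feature names are invented for illustration.
weights = {"pipe_age": 0.08, "soil_corrosivity": 0.5, "pressure": 0.02}
bias = -3.0

def score(features):
    """Overall risk score for one pipe (higher means more at risk)."""
    return bias + sum(weights[name] * value for name, value in features.items())

def explain(features):
    """How much each feature contributed to the score - the 'why'."""
    return {name: weights[name] * value for name, value in features.items()}

sample = {"pipe_age": 40, "soil_corrosivity": 2.0, "pressure": 30}
print(score(sample))    # the prediction
print(explain(sample))  # which features drove it, and by how much
```

More complex models trade this easy decomposability for accuracy, which is exactly the gap the transparency research aims to close.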
It’s within the complexities of human-machine interactions that the biggest risks to safe and broadly accepted future development of artificial intelligence lie. As Data61’s Lachlan McCalman says in his recent article in The Conversation, “By default, [algorithms] don’t understand the context in which they act, nor the ethical consequences of their decisions”. Societal expectations of ethical behaviour are major factors in decisions made by governments and corporations, and contravening these expectations necessarily drives an immediate backlash.
The imperative for researchers, scientists, engineers, businesses and governments seems straightforward: consider the ethical implications of the real-world application of the algorithm you’re pushing out into the world. The broad adoption of a careful ethical approach in deploying these technologies will ensure they are used fairly and justly by a society that grew up on a diet of dark visions of a robot future. In the near term, Data61’s work exploring the technical possibilities of artificial intelligence creates opportunities that highlight the imperative of ensuring the long-term feasibility of these technologies.