Fiction, reality and robot commonsense – A Q&A with Professor Ronald Arkin on his D61+ Live talk
We had a quick chat with Professor Ronald Arkin, Regents’ Professor & Director of the Mobile Robot Laboratory, Georgia Tech. Ron’s going to be delivering an exciting keynote at D61+ Live on the ethical challenges of human-robot interactions, and how we can use frameworks as solutions to these challenges.
Hi Professor Arkin, how’s it going?
Good. Looking forward to returning to Brisbane!
You’re going to be delivering a keynote on Day 2 of D61+ Live, at 10:45. Can you give us a quick overview of your talk?
Sure – it will be addressing a series of issues regarding the ethical aspects of human-robot interaction, from warfare to healthcare to deception to robotic nudging. I will describe our research in these areas and potential software architectures to address some of the many issues associated with the proliferation of robotics in everyday life. The talk will be accessible to the public, not overly technical, as these details can be found in our numerous publications available on the internet.
You’ve done a lot of research into robot deception. What kind of circumstances could arise in which a robot might be allowed or even designed to deceive a human?
Certainly in warfare. Sun Tzu stated in the Art of War that all warfare is deception. But we also explore its use for other-deception, i.e., deception for the benefit of the one being deceived. This can occur in education, in health or trauma care, even in everyday life. I’ll touch on several examples and draw on a model from criminology to illustrate how other-deception might be done.
What kind of civil rules might robots have to learn and adhere to, if they’re to integrate successfully into human society, and what challenges might they face during this process?
All sorts. Driverless cars, for example, will have to know the rules of the road and the laws of the region in which they operate. Autonomous weapons should be provided with a respect for International Humanitarian Law. Even a childcare or eldercare robot will need to have some commonsense regarding what is acceptable in its behaviour and what isn’t.
Often, discussion around artificial intelligence bounces off misconceptions or misunderstandings. What’s one example that keeps coming up for you, that you’d love to see refuted?
Good question. Perhaps the confusion of lethal autonomous weapons with science fiction. How many articles have you seen that show the Terminator when discussing these systems? No one wants or is talking about building such science fiction artefacts.
In general, a responsible AI scientist or roboticist must also take care in expectation management: not to overhype what can or will be done. The singularity (the hypothesised point when machine intelligence exceeds human intelligence) is one such overhyped threat, which to me at this time poses no real danger. There are far more pressing issues. I am glad that some folks are thinking about it, but there’s no need to scare everyone.
Do you have any examples of depictions of robotics in science fiction that you think have been done particularly well?
I admire Asimov for bringing up the ethical quandaries associated with robotics in his Three Laws (there are actually four). But this was a literary device that illustrated what can go wrong, and perhaps too many people take these laws literally. There are other minor instances – such as a scene in Interstellar that talks about robot deception.
Any key takeaway messages or big points you’d like people to be thinking about, when they’re attending D61+ Live?
Robots are here, and they will have a greater and greater impact on society. We need to discuss the manifold ethical questions they pose. Technological advances are outpacing our ability to regulate and legislate. Everyone is a stakeholder in this discussion, so my hope is that more people will get engaged. It’s our future we’re talking about.
Header image – “Set love to 90%”, Simon Liu, Flickr, Creative Commons (CC BY-NC-SA 2.0)