Monday, April 17, 2017

Robots Designed to Act Morally?



NAO is the world’s most widely used humanoid robot for education, healthcare, and research. NAO is a fully programmable robot that can walk, talk, listen to you, and even recognise your face. However, robotics is still far from knowing how to instill human-like morality in a machine. Building ethical robots remains one of the open challenges in artificial intelligence and machine ethics.


Boer Deng

In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were:

1) A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
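The key feature of the laws is their strict priority ordering: the First Law always overrides the Second, and the Second always overrides the Third. As a minimal sketch (my illustration, not anything from Asimov or the article), that ordering can be expressed as a lexicographic comparison, where any First-Law violation outweighs everything below it. All names and fields here are hypothetical:

```python
# Hypothetical sketch: choosing among candidate actions under Asimov's
# Three Laws, encoded as a lexicographic priority. Earlier tuple fields
# correspond to earlier laws, so a First-Law violation dominates a
# Second-Law one, which in turn dominates a Third-Law one.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    harms_self: bool       # would violate the Third Law

def choose(actions: list[Action]) -> Action:
    # Python compares tuples left to right (and False < True), which
    # exactly encodes the strict priority ordering of the laws.
    return min(actions, key=lambda a: (a.harms_human, a.disobeys_order, a.harms_self))

options = [
    Action("obey order, endanger bystander", harms_human=True,  disobeys_order=False, harms_self=False),
    Action("refuse order, stay safe",        harms_human=False, disobeys_order=True,  harms_self=False),
    Action("refuse order, sacrifice self",   harms_human=False, disobeys_order=True,  harms_self=True),
]
print(choose(options).name)  # -> "refuse order, stay safe"
```

Of course, much of Asimov's fiction (and the debate below) turns on exactly what this toy version hides: deciding whether a real-world action "harms a human" is the hard part.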

Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?

Read more in Boer Deng's Nature news feature, 'Machine ethics: The robot's dilemma'.

Related reading: The Ethics of Artificial Intelligence by Nick Bostrom and Eliezer Yudkowsky
