Back in 1942, science-fiction author Isaac Asimov set down his famous Three Laws of Robotics as an organizing principle and unifying theme for his series of robot-related books and stories. The idea that human inventors should be compelled to program some kind of ethical behavior framework into their machines has inspired writers and technologists ever since.
Kate Darling, who was born in Switzerland and studied law at ETH Zurich, works at the MIT Media Lab in Cambridge, MA, and at Harvard’s Berkman Center for Internet & Society. An expert on human-machine interaction, she believes Asimov’s laws will need updating in the near future. “A faultily programmed robot can do a lot of damage,” she says. As robots become cognitive and self-teaching, limits must be hardwired into them to ensure that they behave in a socially responsible fashion. After all, she maintains, robots will always be part of a human-dominated world.
When HitchBOT (an autonomous robot created by scientists in Canada and sent off to hitchhike on its own across the continent) was smashed by unknown vandals in Philadelphia in August 2015, Darling was shocked. “It wasn’t just that they destroyed a machine; they destroyed a figure with which many people were able to connect emotionally. HitchBOT carried the hopes and dreams of its creators. The idea was to see what would happen if a robot that could speak but not move about on its own had to rely on the help and cooperation of humans. Normally the question goes the other way around. It turns out that it’s the humans who are dangerous, not the robots.”
Maybe what we need are some Laws of Humanics.