Beware: the Singularity is near! The Singularity being defined, in the words of the legendary computer scientist John von Neumann, as “the moment beyond which technological progress will become incomprehensibly rapid and complicated.”
And the pace is picking up. Never mind Moore’s law and its oft-predicted “end”, at least as far as silicon chips go. We are now experiencing exponential growth in many other parts of our lives, too. “Deep learning”, neural networks and cognitive computing are rewriting the rules in the game between mankind and machine. Twenty years ago, IBM’s Deep Blue first managed to beat the world chess champion Garry Kasparov. In 2016, Google DeepMind’s AlphaGo beat Lee Se-dol, a world champion at the Chinese game of Go. Go is vastly more complex than chess. In fact, it is said to be the hardest game in the world, with some 2.08 × 10^170 legal board positions — a number with 171 digits, and more than there are atoms in the observable universe.
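The scale of that number is easy to verify with arbitrary-precision integers. A quick sanity check (the 2.08 × 10^170 figure is the published estimate of legal Go positions; the 10^80 atom count is a common order-of-magnitude estimate, not an exact value):

```python
# ~2.08 × 10^170 legal Go positions, written as an exact integer
legal_positions = 208 * 10**168

# Common rough estimate for atoms in the observable universe
atoms_estimate = 10**80

print(len(str(legal_positions)))         # number of digits: 171
print(legal_positions > atoms_estimate)  # dwarfs the atom count: True
```

Even this comparison understates the gap: the position count exceeds the atom estimate by roughly 90 orders of magnitude.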
Google’s DeepMind managed this without being explicitly programmed to play: the system essentially taught itself the game. A telling example when we consider how ‘humanly sustainable’ these technologies will or won’t be in the future. Or, as Gerd Leonhard writes: “Self-learning computers are not very likely to tolerate human inefficiencies just because we are used to them.”
So what to do? Leonhard believes we need a new set of Laws and Ethics of Robotics and Artificial Intelligence that will allow us to control our machines without necessarily understanding their computations. First, he believes, we need to face up to the problem. Second, we need to start making personal decisions about how we handle technology. This could mean no longer keeping our smartphones on the nightstand, or choosing to switch them off not only during flights but at certain times of day or during holidays. On average, people in the United States check their phones 46 times per day, according to Deloitte. This could be a place to start.
Finally, Gerd Leonhard says, we need to sharpen our awareness of what he calls the “unintended consequences of technology”, including:
- Outsourcing ‘thinking’ to intelligent software, the ‘smart cloud’ and mobile digital assistants like Google Maps
- Outsourcing of personal judgment to online peer platforms like Tripadvisor
- Amplification and substitution of human conversations, interactions and decisions through apps like PeopleKeeper, which offers to monitor your anxieties when communicating with ‘friends’ and will then offer to delete the ‘bad’ connections from your personal network.
Essentially, the book’s message is that humans must accept responsibility, both as individuals and as a society, for choosing which decisions we want to delegate to technology and just how much control we wish to maintain over our everyday routines. This will require a new kind of social dialogue and the acceptance of our own responsibility for shaping the future relationship between man and machine. “Sapere aude”, the German thinker Immanuel Kant demanded back in the Age of Enlightenment — “dare to think!” His words still ring true today.