Review published on March 23, 2015. Reviewed by jj redfearn
Will artificial intelligence save or destroy us? Zarkadakis’ answer is a resounding yes, no or maybe.
This book wasn’t what I was expecting. I expected something explaining, or possibly defining, the term Artificial Intelligence, quickly describing the many different forms it has taken using examples from the past fifty years, suggesting how it might develop, and finally discussing at length and in detail how malevolent, benign or otherwise the effects might be. It’s not like that at all.
The first thing it doesn’t do is define Artificial Intelligence. There’s a mention of how we commonly use terms without defining them or even agreeing what they mean, but no definition or even extended description. There’s plenty of discussion about consciousness, self-awareness, mind, self-replication and so forth, all presented through a history of how the philosophy of mind and body has developed over the last three thousand years. The implication I drew from the first fourteen chapters was that Zarkadakis defines AI to mean a conscious, sentient, self-aware living entity that can reproduce and evolve itself: in effect, a new species whose first representative will be created through human action. A being that can operate effectively in a world where statements like “This statement is false” can be made without it batting an eyelid or its brain entering an infinitely recursive loop.
Modern IT is based, like the law, science and most mathematics, on binary logic. Everything is either true or false: binary 0 represents false and binary 1 represents true. That’s it. Every binary computer, with the notable exception of quantum computers (which by definition are not binary), functions only because the world can be modelled using nothing more than simple binary true and false. The catch is, the world isn’t like that. Consider that statement, “This statement is false”. If it’s true, then it’s false, and if it’s false, then it’s true. In binary logic that statement is impossible, yet in the real world it exists: a fundamental problem for mathematics and computing, and one the best of the world’s logicians have grappled with for well over a century. In fact it’s easy. It’s the logical version of the uncertainty principle. The statement isn’t a complex paradox, it’s a lie. Standard logic needs a third element: true, false, lie.
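That three-valued idea is easy to sketch. The toy below is mine, not the book’s: it implements Kleene-style three-valued logic, with the third value labelled LIE in place of the usual UNKNOWN. All the names are illustrative.

```python
# Kleene-style three-valued logic, with the reviewer's third value "LIE"
# standing in for the conventional UNKNOWN. Purely illustrative.

TRUE, FALSE, LIE = 1, 0, None

def tri_not(a):
    # Negating a LIE yields a LIE; otherwise flip TRUE/FALSE.
    return LIE if a is LIE else (FALSE if a == TRUE else TRUE)

def tri_and(a, b):
    if a == FALSE or b == FALSE:
        return FALSE          # one definite FALSE decides the conjunction
    if a is LIE or b is LIE:
        return LIE            # otherwise any LIE leaves it undecided
    return TRUE

def tri_or(a, b):
    if a == TRUE or b == TRUE:
        return TRUE           # one definite TRUE decides the disjunction
    if a is LIE or b is LIE:
        return LIE
    return FALSE

# "This statement is false" gets the value LIE instead of looping forever:
liar = LIE
assert tri_not(liar) is LIE
```

Nothing deep is happening here; the point is simply that adding a third value lets the liar sentence be assigned a value at all, where two-valued logic has nowhere to put it.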
Back to the book. Zarkadakis uses the general true/false idea to suggest that binary computers can never be AIs, but that computers modelled on brains perhaps could. Brains don’t use binary logic; through their neurones they use a fuzzy version – yes-ish, no-ish. That lets us reason with degrees of truth or falsehood, and even accept things that are 100% true and 100% false simultaneously. What it doesn’t deliver is the mathematical certainty that binary logic requires.
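That yes-ish/no-ish idea is just as easy to sketch. Again this toy is mine, not the book’s: it uses Zadeh’s standard min/max fuzzy operators over degrees of truth between 0.0 and 1.0.

```python
# Fuzzy logic's "yes-ish/no-ish": truth is a degree in [0.0, 1.0],
# combined with Zadeh's min/max operators. Purely illustrative.

def f_and(a, b):
    return min(a, b)   # a conjunction is as true as its weakest part

def f_or(a, b):
    return max(a, b)   # a disjunction is as true as its strongest part

def f_not(a):
    return 1.0 - a     # negation is the complement of the degree

# A half-true statement is exactly as true as its own negation, so the
# binary law of the excluded middle (a OR not-a is always true) fails:
half = 0.5
print(f_or(half, f_not(half)))  # 0.5, not 1.0
```

This is where the “100% true and 100% false simultaneously” talk comes from: once truth is a degree rather than a bit, binary certainties like the excluded middle simply stop holding.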
A curious fact about AI is that things are thought to represent artificial intelligence right up until they work. The idea of a switch that could turn on a light when it got dark was once thought intelligent. Samuel’s chequer-playing system was seen as an intelligent learning system in its day, but is regarded as rote now. Chess playing was intelligent until Deep Blue beat world champion Garry Kasparov. Eliza briefly passed the Turing test when it fooled its creator’s secretary. But put all that aside; it just means the definition keeps changing. The current description is good enough, even if it’s worthy of Lincoln – if it can fool all of the people all of the time, it’s intelligent.
In effect, all that supplies the missing definition of intelligence. AIs are entities that thrive because they can interpret the world using simple, straightforward logic and yet cope with great uncertainty and ambiguity; entities that can learn, and apply their knowledge and intelligence to thrive in situations and events completely outside their previous experience. An Artificial Intelligence is a man-made entity that can do all that. I think that’s what’s being said here, hence the In Our Own Image bit of the title.
Chapters fifteen onward look at the history of computer-based AI and discuss where it’s been, where it’s going, whether full AI is achievable and, very, very much too briefly, one or two of the implications if it is. I’m guessing again, but I think the conclusion is that:
IF we succeed in creating an artificial intelligence THEN we’re doomed and will be extinct in a generation ELSE we’re still doomed because non-intelligent machines will do everything necessary for our existence anyway and Darwinian ‘use it or lose it’ theory will result in losing everything ENDIF.
In Our Own Image is an interesting history lesson, but its history of AI is incomplete, it never really discusses the cover’s question of whether AI is a benefit or the end of life as we know it, and there’s no punchline.
I think I’ll re-read Dune now.