The problem with AI is that while it’s relatively easy to define the “A”, the “I” remains elusive. We don’t know what our own intelligence is, nor how we generate our familiar conscious experience, so it’s tricky to know how we might create an artificial consciousness, or indeed recognise it if we did. Algorithms can knit together plausible conversation by sampling enormous numbers of exchanges between humans, but they have no greater understanding of those exchanges than would an enormous set of punch cards speaking through a bellows and a brass trumpet. The old Turing test now looks sadly inadequate. A machine-learning program might well counterfeit human speech and yet fail to recognise a snow leopard standing on green grass because the image contains no actual snow, and therefore the cat does not meet the definition.
That distinction between true AI and the powerful machine learning tools of Google and Amazon is tackled head-on by Hector Levesque in Common Sense, the Turing Test, and the Quest for Real AI. A professor emeritus in the computer science department at the University of Toronto, Levesque fearlessly zips us through John Searle’s “Chinese room” argument and the problem of common sense before delving into the complexities of the “Winograd Schema”. Don’t be alarmed: this book makes everything clear.
Taking a gentler way into one of the fundamental questions of human existence, Maryanne Wolf’s Proust and the Squid views human cognition through the lens of reading, and throws in a challenge to modern digital living along the way.
Fiction, too, loves to ponder what it means to be human – and to be familiar or other. Octavia Butler’s Dawn doesn’t deal directly with AI, but queries our received, or perhaps colonised, notions of identity and the easy binaries of conventional gender. That’s particularly crucial to the reality of machine learning because our efforts so far have been shot through with subconscious prejudice. Systems that supposedly predict crime may actually just be echoing human racisms, and in 2015 Google’s photo-tagging software notoriously labelled two African Americans as gorillas.
Butler’s uneasy biological blurring is even more relevant when you consider that there’s no reason AI should think in a human way. Perhaps it will be more like a siphonophore – a collection of individuals, each singular, yet functional as a whole – and in fact it might be made of biological parts rather than silicon ones. Miguel Nicolelis developed the robotic exoskeleton that allowed the paralysed Juliano Pinto to perform the symbolic kick-off at the 2014 Brazil World Cup. Since writing Beyond Boundaries in 2011, his other work has included networking together the brains of two rats so that one was able to run a maze using information stored in the brain of the other. If AI emerges from biology before it is built in the machine, Nicolelis might be somewhere nearby.
We may not understand conscious intelligence but, biological or digital, it is certainly a functioning arrangement of information. Another way to begin to understand AI is to try to grasp our relationship with information as a structure that increasingly seems to underlie everything from the infinitesimal to the vast: what it is, and where it fits into the universe. James Gleick’s majestic The Information spans centuries, continents, space and time. It is surely the perfect place to start answering that pesky question of what it means to be a thinking thing.
- Nick Harkaway’s Gnomon is published in paperback by Heinemann. To order any of these books go to guardianbookshop.com.
via Artificial intelligence (AI) | The Guardian http://bit.ly/2iFrAme
July 23, 2018 at 02:36AM