Hardly a day goes by when we’re not reminded that robots are taking our jobs and hollowing out the middle class. The worry is so acute that economists are busy devising new social contracts to cope with a potentially enormous class of obsolete humans.
Documentarian James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, is worried about robots too. Only he’s not worried about them taking our jobs. He’s worried about them exterminating the human race.
I’ll grant you that this premise sounds a bit… dramatic, the product of one too many Terminator screenings. But though I approached the topic with some skepticism, it became increasingly clear to me that Barrat has written an extremely important book with a thesis that is worrisomely plausible. It deserves to be read widely. And to be clear, Barrat’s is not a lone voice — the book is filled with interviews with numerous computer scientists and AI researchers who share his concerns about the potentially devastating consequences of advanced AI. There are even think tanks devoted to exploring and mitigating the risks. But to date, this worry has remained obscure.
In Barrat’s telling, we are on the brink of creating machines that will be as intelligent as humans… [O]nce we have achieved AGI [artificial general intelligence], the AGI will go on to achieve something called artificial superintelligence (ASI) — that is, an intelligence that exceeds — vastly exceeds — human-level intelligence.
Barrat devotes a substantial portion of the book to explaining how AI will advance to AGI and how AGI inevitably leads to ASI. Much of it hinges on how we are developing AGI itself. To reach AGI, we are teaching machines to learn…
… Once a machine built this way reaches human-level intelligence, it won’t stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an “intelligence explosion” — an onrushing feedback loop in which an intelligence makes itself smarter, thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable. Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a “hard takeoff” and rockets past what mere flesh-and-blood brains are capable of.
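The feedback loop behind the “intelligence explosion” idea can be caricatured in a few lines of code. This is a toy model under made-up assumptions (the starting capability, improvement rate, and “human level” threshold are all arbitrary illustrative numbers, not anything from Barrat’s book): when each cycle’s improvement is proportional to current capability, growth looks gradual at first and then becomes explosive.

```python
# Toy model of the recursive self-improvement loop described above.
# All parameters are illustrative assumptions, not measurements.

def takeoff(capability=1.0, improvement_rate=0.5, steps=20):
    """Return the capability level after each self-improvement cycle."""
    history = [capability]
    for _ in range(steps):
        # The smarter the system, the better it is at making itself
        # smarter: the gain each cycle is proportional to what it has.
        capability += improvement_rate * capability
        history.append(capability)
    return history

trajectory = takeoff()

# Find the first cycle at which the toy system passes "human level"
# (arbitrarily set to 10x its starting capability).
crossing = next(i for i, c in enumerate(trajectory) if c >= 10.0)
print(f"passes 'human level' at cycle {crossing}; "
      f"after 20 cycles: {trajectory[-1]:.0f}")
```

The point of the sketch is only the shape of the curve: the same multiplicative rule that takes six cycles to reach the threshold leaves it thousands of times higher a mere fourteen cycles later, which is the intuition behind a “hard takeoff.”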
And here’s where things get interesting. And by interesting I mean terrible.