Google wants machines, not people
Google doesn’t want you at all, because you are a human.
Long live the machines. I thought I’d just put that out there for future reference, in case the machines take over and attempt to destroy humanity someday. If the prophecies of mankind’s enslavement at the … not hands, maybe “zettabytes” of our future digital masters ever come to fruition, no one will ever be able to claim that we didn’t see this machine-controlled dystopia coming.
“The Terminator”’s Skynet, “The Matrix” and Arthur C. Clarke’s fairly menacing sentient computer, HAL 9000, along with the cyber-brains and cybernetic prostheses found in classic cyberpunk cinematic gems like “Blade Runner” and “Ghost in the Shell,” have all foretold the impending doom that will be wrought by the rise of the machines.
Now enter Google (friendly logo intact), with its confirmed acquisition of DeepMind, a company dedicated to the development of machine cognition. Google and DeepMind are set to change the artificial intelligence game. This isn’t science fiction, kids. This is the future, right now. Well, technically speaking, every second you don’t die is the future right now — but you get the point.
Google, which not too long ago shelled out more than $500 million for DeepMind Technologies — and beat out Facebook in the process — now finds itself in the awkward position of having to set up an ethics board with the express purpose of preventing smart machines from destroying some of our favorite mammals, mainly us (Homo sapiens). This is a fairly ironic turn of events, considering Google’s well-known “don’t be evil” code of conduct.
The problem, of course, lies in how “evil” is currently defined, who interprets its meaning on behalf of the collective consciousness, and who (or what) will decide what evil is or is not in the future. A sentient robot might one day conclude that the greater good (or lesser evil) would be served by wiping out an entire village of people, because, statistically speaking, someone living in that village is predicted to rack up a far greater murder toll down the road.
Even Shane Legg, one of DeepMind’s founders, said in an interview with Less Wrong that when human extinction inevitably comes about, “technology will likely play a part in this.” At least he reckons that if the machines do decide to take us out, they will do so efficiently; after all, why would we ever “deliberately design super intelligent machines to maximize human suffering”?
I’m jumping the gun here, and rushing toward a possible future dystopia. Let’s stay in the now, and take a look at some of the projects Google is currently working on.
The massive company, especially its secretive Google X division, has already set up a self-learning neural network of 16,000 computer processors, which taught itself (without supervision) how to recognize cats. (Cat videos suck everyone in — even smart computers.) Self-driving cars, Google Glass, cutting-edge glucose-monitoring contact lenses, Internet balloons, floating wind-power generators and tweaks to Google’s famous search algorithms (thwarting mug shot extortion artists) are just some of the forward-thinking projects Google engineers have developed, or are developing now.
The “mad scientists” at Google are given a ton of leeway in order to try new ideas. Even if a researcher flubs a few experiments from time to time, that’s perfectly fine. In a rare interview with the BBC, Astro Teller, the man in charge of Google’s “moonshot” laboratories, explained how risk-taking is actually rewarded at Google X. Corporate hierarchy, on the other hand, is frowned upon.
Solving some of the world’s biggest problems seems to be the mantra behind much of the research taking place at Google, and behind the brain trust advising the company’s upper echelons.
Thinking machines, designed by DeepMind and Google, could very well be a huge step toward solving many persistent, global problems. All the company has to do now is convince people that the coming advances in artificial intelligence won’t rob workers of their jobs faster than new ones can be created, or that the tools we build now — once they become sentient — won’t eventually decide that the nettlesome upkeep of the human race really isn’t worth the bother.