Plan Now for the Robot Apocalypse

November 28, 2012 • 4:29 PM

You say he’s wistful. I say he’s plotting. (WALL-E image courtesy Disney)

Ah, lovable robots, so helpful and kind and compliant, who wouldn’t fall in love with them? And that was the gist of a popular story by Robert Ito in our current print edition. People, perhaps channeling the Sirius Cybernetics Corporation’s promise of a robot as “your plastic pal who’s fun to be with,” are growing inordinately fond of their mechanical friends:

What happens as robots become ever more responsive, more humanlike? Some researchers worry that people—especially groups like autistic kids or elderly shut-ins who already are less apt to interact with others—may come to prefer their mechanical friends over their human ones.

Sure, it’s always harmless fun with robots, until they take over the world (like they did in that series of really loud movies). But don’t take my fevered word, or Will Smith’s, on this. Take the word of some boffins at Cambridge, where such rantings get serious notice. Thanks to BBC.com, I learned that the “risk of robot uprising wiping out the human race” is being studied at the aptly named Centre for the Study of Existential Risk.

A trip to their still-Spartan website wiped the smirk right off my face.

Many scientists are concerned that developments in human technology may soon pose new, extinction-level risks to our species as a whole. Such dangers have been suggested from progress in AI, from developments in biotechnology and artificial life, from nanotechnology, and from possible extreme effects of anthropogenic climate change. The seriousness of these risks is difficult to assess, but that in itself seems a cause for concern, given how much is at stake.

The Cambridge Project for Existential Risk — a joint initiative between a philosopher, a scientist, and a software entrepreneur — begins with the conviction that these issues require a great deal more scientific investigation than they presently receive. Our aim is to establish within the University of Cambridge a multidisciplinary research centre dedicated to the study and mitigation of risks of this kind. We are convinced that there is nowhere on the planet better suited to house such a centre. Our goal is to steer a small fraction of Cambridge’s great intellectual resources, and of the reputation built on its past and present scientific pre-eminence, to the task of ensuring that our own species has a long-term future.

We’ve certainly looked at climate change around here and taken its threat seriously, and we’ve also considered some of the issues surrounding nanotechnology. Sure, a robot apocalypse might seem far-fetched if our depiction of it jumps straight from a Roomba to a Terminator, but less so if we start considering how artificial intelligence already contributes to flash crashes and medical tragedies without any malevolence programmed in.

As two of the Cambridge centre’s founders, the philosopher and the entrepreneur, wrote about intelligent machines at the Australian website The Conversation:

The good news is that we probably have no reason to think they would be hostile, as such: hostility, too, is an animal emotion.

The bad news is that they might simply be indifferent to us – they might care about us as much as we care about the bugs on the windscreen.

People sometimes complain that corporations are psychopaths, if they are not sufficiently reined in by human control. The pessimistic prospect here is that artificial intelligence might be similar, except much much cleverer and much much faster.

If that sounds far-fetched, the pessimists say, just ask gorillas how it feels to compete for resources with the most intelligent species – the reason they are going extinct is not (on the whole) because humans are actively hostile towards them, but because we control the environment in ways that are detrimental to their continuing survival.

Even if you don’t have a taste for the dystopian, perhaps preferring George Jetson to HAL 9000, it still seems smart to worry a bit about too-smart robots now … while we can still fire them.

Michael Todd
Most of Michael Todd's career has been spent in newspaper journalism, ranging from papers in the Marshall Islands to tiny California farming communities. Before joining the publishing arm of the Miller-McCune Center, he was managing editor of the national magazine Hispanic Business.



