(Illustration: Emory Allen)

How Should We Program Computers to Deceive?

September 3, 2014 • 6:00 AM

Placebo buttons in elevators and at crosswalks that don’t actually do anything are just the beginning. One computer scientist has collected hundreds of examples of technology designed to trick people, for better and for worse.

Just outside the Benrath Senior Center in Düsseldorf, Germany, is a bus stop at which no bus stops. The bench and the official-looking sign were installed to serve as a “honey trap” to attract patients with dementia who sometimes wander off from the facility, trying to get home. Instead of venturing blindly into the city and triggering a police search, they see the sign and wait for a bus that will never come. After a while, someone gently invites them back inside.

It’s rare to come across such a beautiful deception. Tolerable ones, however, are a dime a dozen. Human society has always glided along on a cushion of what Saint Augustine called “charitable lies”—untruths deployed to avoid conflict, ward off hurt feelings, maintain boundaries, or simply keep conversation moving—even as other, more selfish deceptions corrode relationships, rob us of the ability to make informed decisions, and eat away at the reserves of trust that keep society afloat. What’s tricky about deceit is that, contrary to blanket prohibitions against lying, our actual moral stances toward it are often murky and context-dependent.

In recent years, it has become common to hear that technology is making us more dishonest—that the Internet, with its anonymous trolls, polished social media profiles, and viral hoaxes, is a mass accelerant of selfish deceit. The Cornell University psychologist Jeffrey Hancock argues that technology has, at the very least, changed our repertoire of lies. Our arsenal of dishonest excuses, for instance, has adapted and expanded to buffer us against the infinite social expectations of a 24/7 connected world. (“Your email got caught in my spam folder!” “On my way!”) But while it’s true, according to Hancock, that the Internet affords us more tools to help manage how people perceive us, he also says that people are often more truthful in digital media than they are in other modes of communication. His research has found that we are more honest over email than over the phone, and less prone to lie on digital résumés than on paper ones. The Internet, after all, has a long memory; what it offers to would-be deceivers in the way of increased opportunity is apparently offset, over the long run, by the increased odds of getting caught.

But the slight moral panic over technology-induced lying sidesteps another, more interesting question: What kind of lies does our technology itself tell us? How has it been designed to deceive?

The fake bus stop at the Benrath Senior Center is, in its way, a piece of deceptive technology: a “user interface” designed to perpetuate an expedient illusion. And it’s hardly the only example. Dishonest technology exists in various forms and for various reasons, not all of them obviously sinister. If you don’t know it already, you should: Many crosswalk and elevator door-close buttons don’t actually work as advertised. The only purpose of these so-called placebo buttons is to give the impatient person a false sense of agency. Similarly, the progress bars presented on computer screens during downloads, uploads, and software installations maintain virtually no connection to the actual amount of time or work left before the action is completed. They are the rough software equivalent of someone texting to say, “On my way!”
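The disconnect between a progress bar and the work it pretends to track can be made concrete with a minimal sketch. This is purely illustrative, not any real installer's code: the bar below advances on a fixed timer, regardless of whatever task is actually running.

```python
import time

def placebo_progress(total_seconds=3.0, steps=20):
    """Advance a progress display on a fixed timer, regardless of any
    real work. The percentages shown bear no relation to actual task
    completion -- the software equivalent of texting "On my way!"."""
    shown = []
    for i in range(1, steps + 1):
        pct = int(100 * i / steps)
        shown.append(pct)
        # A real UI would redraw the bar here; we just record each value.
        time.sleep(total_seconds / steps)
    return shown
```

The user sees steady, reassuring motion toward 100 percent; the program has merely promised that the wait will feel finite.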

But these examples offer only a hint of what we’re liable to see in the near future. As more of our daily lives involves interacting with devices loaded with software, and as more of that software is designed to adjust to a dynamic environment and potentially even make predictions about a user’s behavior in order to serve up the best “value,” perhaps now is a good time to ask: How deceitful should our new technologies be?

“GOOD DESIGN IS HONEST.” So holds one of the Ten Principles of Good Design, a set of guidelines laid down by the iconic German industrial designer Dieter Rams in the 1970s. Today, Rams’ principles are printed up and sold on posters, and his most prominent admirer is no less than Jonathan Ive, the head of design at Apple. A good product, Rams’ guidelines continue, “does not attempt to manipulate the consumer with promises that cannot be kept.”

When honesty is prized so highly, thinking about deception in anything but reflexively negative terms can be difficult. Deceit, after all, is something a good designer doesn’t do. But is all dishonest design necessarily bad?

Last year, a paper trying to address that question was presented at a major conference on computer-human interaction in Paris. “Benevolent Deception in Human Computer Interaction” is the work of Eytan Adar, a computer scientist at the University of Michigan, and Desney Tan and Jaime Teevan, two scholars at Microsoft Research.

Adar says he became interested in deceptive technology when, as an undergraduate in computer science in the 1990s, he learned about the history of early telephone networks. In the 1960s, the hardware that comprised the byzantine switching systems of the first electronic phone networks would occasionally cause a misdial. Instead of revealing the mistake by disconnecting or playing an error message, engineers decided the least obtrusive way to handle these glitches was to allow the system to go ahead and patch the call through to the wrong number. Adar says most people just assumed the error was theirs, hung up, and redialed. “The illusion of an infallible phone system was preserved,” he writes in the paper.

Since then, Adar has collected hundreds of examples of deceptive design, manifesting in a formidable stack of papers on his desk. As the stack grew, Adar discovered a spectrum of design falsehoods that mirrored what passes between ordinary humans every day: a lot of deception by designers, some of it benign and some problematic, and very little discussion about it. “It’s not clear that designers have a good grasp of how to make design decisions that involve transparency or deception,” he says.

Adar wanted to move away from treating deception in design as taboo and toward thinking more systematically about it, and to identify ways in which deceptive technology might help rather than harm us. He began looking for a clear line separating benevolent deception, which benefits the user of a technology, from malevolent deception, which benefits a system owner at the expense of the user. The goal of Adar and his co-authors’ paper was to showcase and classify examples that fall along this spectrum.

You’re probably familiar with malevolently deceptive software: the roaming online ads that trick you into clicking on them when all you really want to do is close them so you can read an article; the privacy settings on Facebook that, according to critics, rely on confusing jargon and user interfaces to trick people into sharing more about themselves than they intend. (This has come to be called “Zuckering,” after the company’s founder.) A website called darkpatterns.org is dedicated to tracking these kinds of tricks and abuses.

Pretty much everyone agrees that this sort of thing is rotten, and these malevolent deceptions have been well studied, mainly with an eye toward detecting and policing them. But many other varieties of deceptive design fly below the radar. One relatively benign class of examples occurs when an operating system fails in some way and a piece of software is programmed to cover up the glitch. The misdials of the early phone switching system fall into this category. Similarly, reports Adar, when the servers at Netflix fail or are overwhelmed, the service switches from its personalized recommendation system to a simpler one that just suggests popular movies.
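The "cover up the glitch" pattern has a simple shape in code. The sketch below is a hypothetical illustration of the idea (not Netflix's actual implementation): if the personalized backend fails, the user silently gets a generic popularity list instead of an error message.

```python
def recommend(user_id, personalized_fn, popular_fallback):
    """Return personalized picks, quietly falling back to a generic
    popularity list if the personalized backend fails.

    Illustrative sketch of benevolent failure-masking: the user never
    sees an error, only a slightly simpler answer."""
    try:
        return personalized_fn(user_id)
    except Exception:
        # Conceal the failure: no error screen, just popular titles.
        return popular_fallback()
```

The deception here is one of omission: the interface preserves the illusion that the system is working normally.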

Designers of technology engage in another kind of relatively neutral deception when they manipulate users to behave in ways that will help improve system performance. Some speech-recognition software functions better if it can analyze a person’s normal speech, as opposed to the sort of halting robot-speak many people instinctively use when talking to a machine. Thus, designers attempt to make the software sound more like a person than a strict commitment to honesty in design would probably allow.

And then there’s more straightforward benevolent deception. Placebo buttons and other calming interfaces, like the digital signs that overestimate wait times for lines at amusement parks, arguably fall into this category by giving people the illusion of control, or by soothing anxious nerves. Coinstar kiosks, the coin-counting machines stationed in Walmart and other stores, are rumored to take longer than necessary to tally change because designers learned that customers find a too-quick tally disconcerting. Another example: robotic systems designed to help people overcome their own perceived limits. Researchers have experimented with rehabilitation robots that under-report the force a patient exerts, to help her move past a sense of learned weakness and recover from injury faster.
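The under-reporting trick reduces to a single scaling factor. A minimal sketch, with the 0.8 factor invented here for illustration (the research cited above does not specify one):

```python
def displayed_force(measured_newtons, understate=0.8):
    """Report only a fraction of the force a patient actually exerts.

    The `understate` factor is a hypothetical value for illustration.
    Showing less force than was measured nudges the patient to push
    past a perceived limit to reach a target reading."""
    return measured_newtons * understate
```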

In the non-robot realm, your personal trainer might also use deceit for your benefit when she covers the treadmill display so you can’t see your running speed, spurring you to run faster than you thought possible. The term benevolent deception itself seems to have its roots in medicine, where it has been a matter of discussion for years. Some doctors believe that being too bluntly honest about a diagnosis can do more harm than good in some instances, so they omit some details or avoid direct answers.

It is telling that the idea of benevolent deception originated in a domain where, much as in technology, there is a huge asymmetry in information—and power—between users and providers. Doctors and tech workers have similar reputations for arrogance, and any known practice of benevolent deception might easily breed resentment in users and patients. But how much complexity do we want to be saddled with in the name of full disclosure, and how much can we safely, expediently navigate? Consider that the standard user interface on your computer—a desktop with folder and trashcan icons—is perhaps the most familiar deception of all, hiding a universe of code behind a simple, “usable” facade.

ADAR’S SIMPLE TAXONOMY OF deception bears some resemblance to that of Thomas Aquinas, who claimed there were three types of lies: malicious lies (meant to do harm; mortal sins), jocose lies (told in fun; pardonable), and officious lies (helpful; pardonable)—a hierarchy that is itself a simplification of St. Augustine’s eight types of lies, established nearly a thousand years before. Separated by centuries, these systems are all attempts to schematize the complex emotional and social landscape of deception in human affairs.

Human-computer affairs are not so different. Software that always deceives in a detectable way is repellent to us, just as people who always lie are. Software that is inconsistent in its truthfulness may breed mistrust and annoyance. And software that deceives in a way that benefits the person using it may be as easily forgiven as a personal trainer who’s helping you get in shape. It’s not hard to understand that the designer behind a workout program or physical therapy robot is looking out for your own good. Besides, it seems easy enough to tweak the settings if you don’t like the lies.

But deceptive technology is liable to evolve. Since the advent of computers, people have grown accustomed to being in charge of their machines: You type on a keyboard or click a mouse and the computer responds. Sure, it may increasingly seem like we are the ones who are programmed to react to the beeps and buzzes of our devices. But in most cases, each interaction with a computer starts with an input from us (we are the ones who opted to receive those “push notifications”) and ends with us. Right now, the computer, phone, or robot is simply an intermediary, a messenger—a conduit for a human-human interaction.

In the future, true artificial intelligence systems will alter the game significantly. They will make mistakes and recover and learn as a human would. And ultimately they will be able to scan their environment for contextual clues about how to behave and respond. For instance, engineers working on partially self-driving cars are busy envisioning how a human operator might best share responsibility for driving with the car itself. Here’s one possibility: The car’s software may use embedded sensors to look for biological cues (variations in heart rate, skin conductance, eye movement) that indicate distraction or impairment in a human driver, and then take over if those cues are detected.
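The takeover logic the engineers are envisioning can be sketched as a simple rule over biometric signals. Every threshold below is invented for illustration; real driver-monitoring systems use far richer statistical models than a few fixed cutoffs.

```python
def should_take_over(heart_rate_var, skin_conductance, gaze_off_road_s,
                     hrv_floor=20.0, sc_ceiling=12.0, gaze_limit=2.0):
    """Decide whether the car should assume control, based on
    embedded-sensor cues. All thresholds are hypothetical.

    - Low heart-rate variability or high skin conductance is treated
      as possible impairment or stress.
    - Eyes off the road beyond `gaze_limit` seconds counts as
      distraction."""
    distracted = gaze_off_road_s > gaze_limit
    impaired = heart_rate_var < hrv_floor or skin_conductance > sc_ceiling
    return distracted or impaired
```

Even in this toy form, the design question the essay raises is visible: the car is inferring a state of mind and quietly acting on that inference.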

In other words, engineers are already thinking about how machines might sense a person’s state of mind.

That’s what we do when we suss out whether it might be best to fudge the truth with someone we care about. It’s what children are doing when, as they begin to formulate a “theory of mind” at age three or four, they tell their first semi-competent lies. And it’s what a doctor does when she must tell her patient he is dying. Since each patient is different, the doctor must intuit the patient’s range of responses to various versions of the news and then select the best one for this patient. The doctor then makes the decision to redirect the conversation, to gently administer the blow, or to be blunt. We call this having a good bedside manner. In contexts far beyond medicine, something like it will be important for artificial intelligence systems to learn. A good AI system will be able not just to reach logical conclusions, but to present them in a sensitive way.

In Albert Camus’ novel The Stranger, the main character Meursault is, according to the author, “a hero for the truth,” unable or unwilling to lie. During the trial in which Meursault is accused of murder, the prosecutor argues that his brutally honest demeanor is that of a “monster, a man without morals.” To be unyieldingly truthful, then, is to become a sort of inhuman grotesque.

It is an uncomfortable truth that, if the goal is to make artificial intelligence as human-like as possible, these smart machines will, almost by definition, have to be programmed to know when to be honest—and when to lie.


Kate Greene
Kate Greene is a San Francisco-based writer who covers science and technology for Wired, Discover, the Economist, and others. Follow her on Twitter @kgreene.


Copyright © 2014 by Pacific Standard and The Miller-McCune Center for Research, Media, and Public Policy. All Rights Reserved.