
How Should We Program Computers to Deceive?

By Kate Greene • September 3, 2014 • 6:00 AM

(Illustration: Emory Allen)

Placebo buttons in elevators and at crosswalks that don’t actually do anything are just the beginning. One computer scientist has collected hundreds of examples of technology designed to trick people, for better and for worse.

Just outside the Benrath Senior Center in Düsseldorf, Germany, is a bus stop at which no bus stops. The bench and the official-looking sign were installed to serve as a “honey trap” to attract patients with dementia who sometimes wander off from the facility, trying to get home. Instead of venturing blindly into the city and triggering a police search, they see the sign and wait for a bus that will never come. After a while, someone gently invites them back inside.

It’s rare to come across such a beautiful deception. Tolerable ones, however, are a dime a dozen. Human society has always glided along on a cushion of what Saint Augustine called “charitable lies”—untruths deployed to avoid conflict, ward off hurt feelings, maintain boundaries, or simply keep conversation moving—even as other, more selfish deceptions corrode relationships, rob us of the ability to make informed decisions, and eat away at the reserves of trust that keep society afloat. What’s tricky about deceit is that, contrary to blanket prohibitions against lying, our actual moral stances toward it are often murky and context-dependent.

In recent years, it has become common to hear that technology is making us more dishonest—that the Internet, with its anonymous trolls, polished social media profiles, and viral hoaxes, is a mass accelerant of selfish deceit. The Cornell University psychologist Jeffrey Hancock argues that technology has, at the very least, changed our repertoire of lies. Our arsenal of dishonest excuses, for instance, has adapted and expanded to buffer us against the infinite social expectations of a 24/7 connected world. (“Your email got caught in my spam folder!” “On my way!”) But while it’s true, according to Hancock, that the Internet affords us more tools to help manage how people perceive us, he also says that people are often more truthful in digital media than they are in other modes of communication. His research has found that we are more honest over email than over the phone, and less prone to lie on digital résumés than on paper ones. The Internet, after all, has a long memory; what it offers to would-be deceivers in the way of increased opportunity is apparently offset, over the long run, by the increased odds of getting caught.

But the slight moral panic over technology-induced lying sidesteps another, more interesting question: What kind of lies does our technology itself tell us? How has it been designed to deceive?


The fake bus stop at the Benrath Senior Center is, in its way, a piece of deceptive technology: a “user interface” designed to perpetuate an expedient illusion. And it’s hardly the only example. Dishonest technology exists in various forms and for various reasons, not all of them obviously sinister. If you don’t know it already, you should: Many crosswalk and elevator door-close buttons don’t actually work as advertised. The only purpose of these so-called placebo buttons is to give the impatient person a false sense of agency. Similarly, the progress bars presented on computer screens during downloads, uploads, and software installations maintain virtually no connection to the actual amount of time or work left before the action is completed. They are the rough software equivalent of someone texting to say, “On my way!”
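The decoupling the essay describes can be made concrete with a toy sketch. The code below (all names and timings are illustrative, not drawn from any real installer) renders a progress bar that advances on a fixed ease-out schedule, fast at first and crawling near the end, with no connection to any actual work being done:

```python
import time

def placebo_progress(total_steps=20, pause=0.01):
    """Render a progress bar whose motion is decoupled from any real work.

    Like the placebo buttons described above, the bar's only job is to
    reassure: it advances on a fixed schedule regardless of what the
    system is actually doing.
    """
    frames = []
    for step in range(total_steps + 1):
        # Ease-out curve: early steps jump ahead, later steps crawl,
        # mimicking the familiar "stuck at 90%" installer bar.
        fraction = 1 - (1 - step / total_steps) ** 2
        percent = int(fraction * 100)
        bar = "#" * (percent // 5)
        frames.append(f"[{bar:<20}] {percent:3d}%")
        time.sleep(pause)
    return frames

frames = placebo_progress()
print(frames[0])   # empty bar at 0%
print(frames[-1])  # full bar at 100%
```

The ease-out curve is the tell: real work rarely front-loads its progress so neatly, but users find the early burst of motion reassuring.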

But these examples offer only a hint of what we’re liable to see in the near future. As more of our daily lives involves interacting with devices loaded with software, and as more of that software is designed to adjust to a dynamic environment and potentially even make predictions about a user’s behavior in order to serve up the best “value,” perhaps now is a good time to ask: How deceitful should our new technologies be?

“GOOD DESIGN IS HONEST.” So holds one of the Ten Principles of Good Design, a set of guidelines laid down by the iconic German industrial designer Dieter Rams in the 1970s. Today, Rams’ principles are printed up and sold on posters, and his most prominent admirer is no less than Jonathan Ive, the head of design at Apple. A good product, Rams’ guidelines continue, “does not attempt to manipulate the consumer with promises that cannot be kept.”

When honesty is prized so highly, thinking about deception in anything but reflexively negative terms can be difficult. Deceit, after all, is something a good designer doesn’t do. But is all dishonest design necessarily bad?

Last year, a paper trying to address that question was presented at a major conference on computer-human interaction in Paris. “Benevolent Deception in Human Computer Interaction” is the work of Eytan Adar, a computer scientist at the University of Michigan, and Desney Tan and Jaime Teevan, two scholars at Microsoft Research.

Adar says he became interested in deceptive technology when, as an undergraduate in computer science in the 1990s, he learned about the history of early telephone networks. In the 1960s, the byzantine switching hardware of the first electronic phone networks would occasionally cause a misdial. Instead of revealing the mistake by disconnecting or playing an error message, engineers decided the least obtrusive way to handle these glitches was to allow the system to go ahead and patch the call through to the wrong number. Adar says most people just assumed the error was theirs, hung up, and redialed. “The illusion of an infallible phone system was preserved,” he writes in the paper.

Since then, Adar has collected hundreds of examples of deceptive design, manifesting in a formidable stack of papers on his desk. As the stack grew, Adar discovered a spectrum of design falsehoods that mirrored what passes between ordinary humans every day: a lot of deception by designers, some of it benign and some problematic, and very little discussion about it. “It’s not clear that designers have a good grasp of how to make design decisions that involve transparency or deception,” he says.

Adar wanted to move away from treating deception in design as taboo and toward thinking more systematically about it, and to identify ways in which deceptive technology might help rather than harm us. He began looking for a clear line separating benevolent deception, which benefits the user of a technology, from malevolent deception, which benefits a system owner at the expense of the user. The goal of Adar and his co-authors’ paper was to showcase and classify examples that fall along this spectrum.

You’re probably familiar with malevolently deceptive software: the roaming online ads that trick you into clicking on them when all you really want to do is close them so you can read an article; the privacy settings on Facebook that, according to critics, rely on confusing jargon and user interfaces to trick people into sharing more about themselves than they intend. (This has come to be called “Zuckering,” after the company’s founder.) A website called darkpatterns.org is dedicated to tracking these kinds of tricks and abuses.

Pretty much everyone agrees that this sort of thing is rotten, and these malevolent deceptions have been well studied, mainly with an eye toward detecting and policing them. But many other varieties of deceptive design fly below the radar. One relatively benign class of examples occurs when an operating system fails in some way and a piece of software is programmed to cover up the glitch. The misdials of the early phone switching system fall into this category. Similarly, reports Adar, when the servers at Netflix fail or are overwhelmed, the service switches from its personalized recommendation system to a simpler one that just suggests popular movies.
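This kind of cover-up is, in software-engineering terms, graceful degradation with the degradation hidden from the user. A minimal sketch of the pattern the essay attributes to Netflix (the function names here are hypothetical) might look like this:

```python
def recommend(user_id, personalized_service, popular_titles):
    """Serve recommendations, silently degrading when the backend fails.

    If the personalized recommender errors out, fall back to a generic
    most-popular list rather than showing the user an error. The
    deception is one of omission: the interface never reveals which
    path was taken.
    """
    try:
        return personalized_service(user_id)
    except Exception:
        # Cover up the glitch; the user sees suggestions either way.
        return popular_titles

# Usage: a backend that always fails still yields suggestions.
def broken_backend(user_id):
    raise ConnectionError("recommender cluster overloaded")

print(recommend(42, broken_backend, ["Title A", "Title B"]))
# → ['Title A', 'Title B']
```

The user never learns whether the suggestions were tailored to them or merely popular, which is exactly the point: the illusion of an infallible service is preserved.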

Designers of technology engage in another kind of relatively neutral deception when they manipulate users to behave in ways that will help improve system performance. Some speech-recognition software functions better if it can analyze a person’s normal speech, as opposed to the sort of halting robot-speak many people instinctively use when talking to a machine. Thus, designers attempt to make the software sound more like a person than a strict commitment to honesty in design would probably allow.

And then there’s more straightforward benevolent deception. Placebo buttons and other calming interfaces, like the digital signs that over-estimate wait times for lines at amusement parks, arguably fall into this category by giving people the illusion of control, or by soothing anxious nerves. Coinstar kiosks, the coin-counting machines stationed in Walmart and other stores, are rumored to take longer than necessary to tally change because designers learned that customers find a too-quick tally disconcerting. Another example: robotic systems designed to help people overcome their own perceived limits. Researchers have experimented with rehabilitation robots that under-report the force a patient exerts, to help her move past a sense of learned weakness and recover from injury faster.


In the non-robot realm, your personal trainer might also use deceit for your benefit when she covers the treadmill display so you can’t see your running speed, spurring you to run faster than you thought possible. The term benevolent deception itself seems to have its roots in medicine, where it has been a matter of discussion for years. Some doctors believe that being too bluntly honest about a diagnosis can do more harm than good in some instances, so they omit some details or avoid direct answers.

It is telling that the idea of benevolent deception originated in a domain where, much as in technology, there is a huge asymmetry of information—and power—between providers and users. Doctors and tech workers have similar reputations for arrogance, and any known practice of benevolent deception might easily breed resentment in patients and users alike. But how much complexity do we want to be saddled with in the name of full disclosure, and how much can we safely, expediently navigate? Consider that the standard user interface on your computer—a desktop with folder and trashcan icons—is perhaps the most familiar deception of all, hiding a universe of code behind a simple, “usable” facade.

ADAR’S SIMPLE TAXONOMY OF deception bears some resemblance to that of Thomas Aquinas, who claimed there were three types of lies: malicious lies (meant to do harm; mortal sins), jocose lies (told in fun; pardonable), and officious lies (helpful; pardonable)—a hierarchy that is itself a simplification of St. Augustine’s eight types of lies, established nearly a thousand years before. Separated by centuries, these systems are all attempts to schematize the complex emotional and social landscape of deception in human affairs.

Human-computer affairs are not so different. Software that always deceives in a detectable way is repellent to us, just as people who always lie are. Software that is inconsistently truthful or deceitful may breed mistrust and annoyance. And software that deceives in a way that benefits the person using it may be as easily forgiven as a personal trainer who’s helping you get in shape. It’s not hard to understand that the designer behind a workout program or physical therapy robot is looking out for your own good. Besides, it seems easy enough to tweak the settings if you don’t like the lies.

But deceptive technology is liable to evolve. Since the advent of computers, people have grown accustomed to being in charge of their machines: You type on a keyboard or click a mouse and the computer responds. Sure, it may increasingly seem like we are the ones who are programmed to react to the beeps and buzzes of our devices. But in most cases, each interaction with a computer starts with an input from us (we are the ones who opted to receive those “push notifications”) and ends with us. Right now, the computer, phone, or robot is simply an intermediary, a messenger—a conduit for a human-human interaction.

In the future, true artificial intelligence systems will alter the game significantly. They will make mistakes and recover and learn as a human would. And ultimately they will be able to scan their environment for contextual clues about how to behave and respond. For instance, engineers working on partially self-driving cars are busy envisioning how a human operator might best share responsibility for driving with the car itself. Here’s one possibility: The car’s software may use embedded sensors to look for biological cues (variations in heart rate, skin conductance, eye movement) that indicate distraction or impairment in a human driver, and then take over if those cues are detected.
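The takeover logic the engineers are envisioning can be sketched as a simple rule. This is a deliberately simplified illustration: the signals, thresholds, and two-cue voting rule below are my own assumptions, not drawn from any real driver-monitoring system, which would fuse such cues probabilistically over time rather than apply fixed cutoffs.

```python
from dataclasses import dataclass

@dataclass
class DriverState:
    heart_rate_bpm: float
    skin_conductance: float   # microsiemens (electrodermal activity)
    eyes_off_road_s: float    # seconds gaze has been off the road

def should_take_over(state, hr_limit=110.0, eda_limit=12.0, gaze_limit=2.0):
    """Decide whether the car should assume control (illustrative only)."""
    impairment_cues = [
        state.heart_rate_bpm > hr_limit,     # possible stress event
        state.skin_conductance > eda_limit,  # possible agitation
        state.eyes_off_road_s > gaze_limit,  # sustained distraction
    ]
    # Require at least two independent cues before overriding the human.
    return sum(impairment_cues) >= 2

alert = DriverState(heart_rate_bpm=72, skin_conductance=5.0, eyes_off_road_s=0.3)
distracted = DriverState(heart_rate_bpm=118, skin_conductance=14.0, eyes_off_road_s=0.5)
print(should_take_over(alert))       # → False
print(should_take_over(distracted))  # → True
```

Even in this toy form, the design question the essay raises is visible: should the car announce that it has judged you impaired, or quietly compensate and preserve your sense of control?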

In other words, engineers are already thinking about how machines might sense a person’s state of mind.

That’s what we do when we suss out whether it might be best to fudge the truth with someone we care about. It’s what children are doing when, as they begin to formulate a “theory of mind” at age three or four, they tell their first semi-competent lies. And it’s what a doctor does when she must tell her patient he is dying. Since each patient is different, the doctor must intuit the patient’s range of responses to various versions of the news and then select the best one for this patient. The doctor then makes the decision to redirect the conversation, to gently administer the blow, or to be blunt. We call this having a good bedside manner. In contexts far beyond medicine, something like it will be important for artificial intelligence systems to learn. A good AI system will be able not just to reach logical conclusions, but to present them in a sensitive way.

In Albert Camus’ novel The Stranger, the main character Meursault is, according to the author, “a hero for the truth,” unable or unwilling to lie. During the trial in which Meursault is accused of murder, the prosecutor argues that his brutally honest demeanor is that of a “monster, a man without morals.” To be unyieldingly truthful, then, is to become a sort of inhuman grotesque.

It is an uncomfortable truth that, if the goal is to make artificial intelligence as human-like as possible, these smart machines will, almost by definition, have to be programmed to know when to be honest—and when to lie.



Kate Greene
Kate Greene is a San Francisco-based writer who covers science and technology for Wired, Discover, the Economist, and others. Follow her on Twitter @kgreene.


Copyright © 2014 by Pacific Standard and The Miller-McCune Center for Research, Media, and Public Policy. All Rights Reserved.