(Illustration: Emory Allen)

How Should We Program Computers to Deceive?

September 3, 2014 • 6:00 AM


Placebo buttons in elevators and at crosswalks that don’t actually do anything are just the beginning. One computer scientist has collected hundreds of examples of technology designed to trick people, for better and for worse.

Just outside the Benrath Senior Center in Düsseldorf, Germany, is a bus stop at which no bus stops. The bench and the official-looking sign were installed to serve as a “honey trap” to attract patients with dementia who sometimes wander off from the facility, trying to get home. Instead of venturing blindly into the city and triggering a police search, they see the sign and wait for a bus that will never come. After a while, someone gently invites them back inside.

It’s rare to come across such a beautiful deception. Tolerable ones, however, are a dime a dozen. Human society has always glided along on a cushion of what Saint Augustine called “charitable lies”—untruths deployed to avoid conflict, ward off hurt feelings, maintain boundaries, or simply keep conversation moving—even as other, more selfish deceptions corrode relationships, rob us of the ability to make informed decisions, and eat away at the reserves of trust that keep society afloat. What’s tricky about deceit is that, contrary to blanket prohibitions against lying, our actual moral stances toward it are often murky and context-dependent.

In recent years, it has become common to hear that technology is making us more dishonest—that the Internet, with its anonymous trolls, polished social media profiles, and viral hoaxes, is a mass accelerant of selfish deceit. The Cornell University psychologist Jeffrey Hancock argues that technology has, at the very least, changed our repertoire of lies. Our arsenal of dishonest excuses, for instance, has adapted and expanded to buffer us against the infinite social expectations of a 24/7 connected world. (“Your email got caught in my spam folder!” “On my way!”) But while it’s true, according to Hancock, that the Internet affords us more tools to help manage how people perceive us, he also says that people are often more truthful in digital media than they are in other modes of communication. His research has found that we are more honest over email than over the phone, and less prone to lie on digital résumés than on paper ones. The Internet, after all, has a long memory; what it offers to would-be deceivers in the way of increased opportunity is apparently offset, over the long run, by the increased odds of getting caught.

But the slight moral panic over technology-induced lying sidesteps another, more interesting question: What kind of lies does our technology itself tell us? How has it been designed to deceive?


The fake bus stop at the Benrath Senior Center is, in its way, a piece of deceptive technology: a “user interface” designed to perpetuate an expedient illusion. And it’s hardly the only example. Dishonest technology exists in various forms and for various reasons, not all of them obviously sinister. If you don’t know it already, you should: Many crosswalk and elevator door-close buttons don’t actually work as advertised. The only purpose of these so-called placebo buttons is to give the impatient person a false sense of agency. Similarly, the progress bars presented on computer screens during downloads, uploads, and software installations maintain virtually no connection to the actual amount of time or work left before the action is completed. They are the rough software equivalent of someone texting to say, “On my way!”
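To make the trick concrete, here is a minimal Python sketch of a placebo progress bar: the displayed percentage is computed purely from elapsed time against a guessed duration, never from the work actually remaining. The function name, the guessed duration, and the easing curve are illustrative inventions, not any real installer's code.

```python
def placebo_progress(elapsed_seconds, expected_seconds=10.0):
    """Return the percentage to *display*, based only on elapsed time.

    The real work remaining is never consulted. An easing curve gives a
    fast, reassuring start and a slow finish, and the bar parks at 99
    percent until the task actually completes.
    """
    frac = min(elapsed_seconds / expected_seconds, 1.0)
    eased = 1 - (1 - frac) ** 2  # decelerating curve: quick early gains
    return min(int(eased * 100), 99)
```

The bar always moves and never finishes early, regardless of what the download is really doing—the software equivalent of texting "On my way!"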

But these examples offer only a hint of what we’re liable to see in the near future. As our daily lives increasingly involve interacting with devices loaded with software, and as more of that software is designed to adjust to a dynamic environment and potentially even make predictions about a user’s behavior in order to serve up the best “value,” perhaps now is a good time to ask: How deceitful should our new technologies be?

“GOOD DESIGN IS HONEST.” So holds one of the Ten Principles of Good Design, a set of guidelines laid down by the iconic German industrial designer Dieter Rams in the 1970s. Today, Rams’ principles are printed up and sold on posters, and his most prominent admirer is no less than Jonathan Ive, the head of design at Apple. A good product, Rams’ guidelines continue, “does not attempt to manipulate the consumer with promises that cannot be kept.”

When honesty is prized so highly, thinking about deception in anything but reflexively negative terms can be difficult. Deceit, after all, is something a good designer doesn’t do. But is all dishonest design necessarily bad?

Last year, a paper trying to address that question was presented at a major conference on computer-human interaction in Paris. “Benevolent Deception in Human Computer Interaction” is the work of Eytan Adar, a computer scientist at the University of Michigan, and Desney Tan and Jaime Teevan, two scholars at Microsoft Research.

Adar says he became interested in deceptive technology when, as an undergraduate in computer science in the 1990s, he learned about the history of early telephone networks. In the 1960s, the hardware that made up the byzantine switching systems of the first electronic phone networks would occasionally cause a misdial. Instead of revealing the mistake by disconnecting or playing an error message, engineers decided the least obtrusive way to handle these glitches was to allow the system to go ahead and patch the call through to the wrong number. Adar says most people just assumed the error was theirs, hung up, and redialed. “The illusion of an infallible phone system was preserved,” he writes in the paper.

Since then, Adar has collected hundreds of examples of deceptive design, manifesting in a formidable stack of papers on his desk. As the stack grew, Adar discovered a spectrum of design falsehoods that mirrored what passes between ordinary humans every day: a lot of deception by designers, some of it benign and some problematic, and very little discussion about it. “It’s not clear that designers have a good grasp of how to make design decisions that involve transparency or deception,” he says.

Adar wanted to move away from treating deception in design as taboo and toward thinking more systematically about it, and to identify ways in which deceptive technology might help rather than harm us. He began looking for a clear line separating benevolent deception, which benefits the user of a technology, from malevolent deception, which benefits a system owner at the expense of the user. The goal of Adar and his co-authors’ paper was to showcase and classify examples that fall along this spectrum.

You’re probably familiar with malevolently deceptive software: the roaming online ads that trick you into clicking on them when all you really want to do is close them so you can read an article; the privacy settings on Facebook that, according to critics, rely on confusing jargon and user interfaces to trick people into sharing more about themselves than they intend. (This has come to be called “Zuckering,” after the company’s founder.) An entire website is dedicated to tracking these kinds of tricks and abuses.

Pretty much everyone agrees that this sort of thing is rotten, and these malevolent deceptions have been well studied, mainly with an eye toward detecting and policing them. But many other varieties of deceptive design fly below the radar. One relatively benign class of examples occurs when an operating system fails in some way and a piece of software is programmed to cover up the glitch. The misdials of the early phone switching system fall into this category. Similarly, reports Adar, when the servers at Netflix fail or are overwhelmed, the service switches from its personalized recommendation system to a simpler one that just suggests popular movies.
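The Netflix example is a textbook case of graceful degradation, which can be sketched in a few lines of Python. The function names here are hypothetical stand-ins, not Netflix's actual architecture:

```python
def recommend(user_id, personalized, popular):
    """Return suggestions from the personalized engine; if it fails,
    quietly substitute a generic popularity list.

    The user sees movie suggestions either way -- never an error screen.
    """
    try:
        return personalized(user_id)
    except Exception:
        # The deception: no apology, no error message, just a
        # plausible-looking set of recommendations.
        return popular()
```

The design choice is the same one the phone engineers made: hide the failure behind output that looks normal.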

Designers of technology engage in another kind of relatively neutral deception when they manipulate users to behave in ways that will help improve system performance. Some speech-recognition software functions better if it can analyze a person’s normal speech, as opposed to the sort of halting robot-speak many people instinctively use when talking to a machine. Thus, designers attempt to make the software sound more like a person than a strict commitment to honesty in design would probably allow.

And then there’s more straightforward benevolent deception. Placebo buttons and other calming interfaces, like the digital signs that over-estimate wait times for lines at amusement parks, arguably fall into this category by giving people the illusion of control, or by soothing anxious nerves. Coinstar kiosks, the coin-counting machines stationed in Walmart and other stores, are rumored to take longer than necessary to tally change because designers learned that customers find a too-quick tally disconcerting. Another example: robotic systems designed to help people overcome their own perceived limits. Researchers have experimented with rehabilitation robots that under-report the force a patient exerts, to help her move past a sense of learned weakness and recover from injury faster.
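At its core, the under-reporting trick in those rehabilitation robots reduces to a single scaling step. A toy sketch, with a made-up scaling factor rather than a value from any published study:

```python
def displayed_force(measured_newtons, factor=0.8):
    """Report only a fraction of the force the patient actually exerted,
    so the exercise feels easier than it is.

    `factor` is a hypothetical tuning knob, chosen here purely for
    illustration.
    """
    return measured_newtons * factor
```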


In the non-robot realm, your personal trainer might also use deceit for your benefit when she covers the treadmill display so you can’t see your running speed, spurring you to run faster than you thought possible. The term benevolent deception itself seems to have its roots in medicine, where it has been a matter of discussion for years. Some doctors believe that being too bluntly honest about a diagnosis can do more harm than good in some instances, so they omit some details or avoid direct answers.

That the idea of benevolent deception originated in a domain where, much as in technology, there is a huge asymmetry in information—and power—between users and providers, is telling. Doctors and tech workers have similar reputations for arrogance, and any known practice of benevolent deception might easily breed resentment in users and patients. But how much complexity do we want to be saddled with in the name of full disclosure, and how much can we safely, expediently navigate? Consider that the standard user interface on your computer—a desktop with folder and trashcan icons—is perhaps the most familiar deception of all, hiding a universe of code behind a simple, “usable” facade.

ADAR’S SIMPLE TAXONOMY OF deception bears some resemblance to that of Thomas Aquinas, who claimed there were three types of lies: malicious lies (meant to do harm; mortal sins), jocose lies (told in fun; pardonable), and officious lies (helpful; pardonable)—a hierarchy that is itself a simplification of St. Augustine’s eight types of lies, established nearly a thousand years before. Separated by centuries, these systems are all attempts to schematize the complex emotional and social landscape of deception in human affairs.

Human-computer affairs are not so different. Software that always deceives, detectably, is repellent to us, just as people who always lie are. Software that is inconsistent in its truthfulness may breed mistrust and annoyance. And software that deceives in a way that benefits the person using it may be as easily forgiven as a personal trainer who’s helping you get in shape. It’s not hard to understand that the designer behind a workout program or physical-therapy robot is looking out for your own good. Besides, it seems easy enough to tweak the settings if you don’t like the lies.

But deceptive technology is liable to evolve. Since the advent of computers, people have grown accustomed to being in charge of their machines: You type on a keyboard or click a mouse and the computer responds. Sure, it may increasingly seem like we are the ones who are programmed to react to the beeps and buzzes of our devices. But in most cases, each interaction with a computer starts with an input from us (we are the ones who opted to receive those “push notifications”) and ends with us. Right now, the computer, phone, or robot is simply an intermediary, a messenger—a conduit for a human-human interaction.

In the future, true artificial intelligence systems will alter the game significantly. They will make mistakes and recover and learn as a human would. And ultimately they will be able to scan their environment for contextual clues about how to behave and respond. For instance, engineers working on partially self-driving cars are busy envisioning how a human operator might best share responsibility for driving with the car itself. Here’s one possibility: The car’s software may use embedded sensors to look for biological cues (variations in heart rate, skin conductance, eye movement) that indicate distraction or impairment in a human driver, and then take over if those cues are detected.
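Stripped of the sensor hardware, the takeover logic described here amounts to a threshold check over biometric cues. A deliberately simplified sketch—the cue names and threshold values are placeholders, not figures from any real driver-monitoring system:

```python
def should_take_over(heart_rate_variability, skin_conductance,
                     gaze_on_road_fraction,
                     hrv_floor=20.0, conductance_ceiling=12.0,
                     gaze_floor=0.6):
    """Hand control to the car if any cue suggests distraction,
    stress, or impairment. All thresholds are illustrative."""
    eyes_off_road = gaze_on_road_fraction < gaze_floor
    stressed = skin_conductance > conductance_ceiling
    impaired = heart_rate_variability < hrv_floor
    return eyes_off_road or stressed or impaired
```

A production system would smooth these signals over time and weigh them jointly; the point here is only the shape of the decision.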

In other words, engineers are already thinking about how machines might sense a person’s state of mind.

That’s what we do when we suss out whether it might be best to fudge the truth with someone we care about. It’s what children are doing when, as they begin to formulate a “theory of mind” at age three or four, they tell their first semi-competent lies. And it’s what a doctor does when she must tell her patient he is dying. Since each patient is different, the doctor must intuit the patient’s range of responses to various versions of the news and then select the best one for this patient. The doctor then makes the decision to redirect the conversation, to gently administer the blow, or to be blunt. We call this having a good bedside manner. In contexts far beyond medicine, something like it will be important for artificial intelligence systems to learn. A good AI system will be able not just to reach logical conclusions, but to present them in a sensitive way.

In Albert Camus’ novel The Stranger, the main character, Meursault, is, according to the author, “a hero for the truth,” unable or unwilling to lie. During the trial in which Meursault is accused of murder, the prosecutor argues that his brutally honest demeanor is that of a “monster, a man without morals.” To be unyieldingly truthful, then, is to become a sort of inhuman grotesque.

It is an uncomfortable truth that, if the goal is to make artificial intelligence as human-like as possible, these smart machines will, almost by definition, have to be programmed to know when to be honest—and when to lie.


Kate Greene
Kate Greene is a San Francisco-based writer who covers science and technology for Wired, Discover, the Economist, and others. Follow her on Twitter @kgreene.


Copyright © 2014 by Pacific Standard and The Miller-McCune Center for Research, Media, and Public Policy. All Rights Reserved.