I don’t have much art in my apartment. But on an otherwise bare wall in the living room hangs a single canvas: a portrait of my girlfriend and our baby daughter. Their faces are spectral silhouettes against a backdrop of dark, diagonal flecks of color. It’s a moody piece, one we commissioned a few weeks ago at a gallery near where we live in Paris. I’d say the artist himself handed it to me, but he—or rather, it—doesn’t have hands.
Our portraitist was something called the Painting Fool, a piece of “emotionally aware” software written by a British artificial intelligence expert named Simon Colton.
Katy and the baby sat for their portrait—it took all of five minutes—one night during a bustling exhibition of work by other artists who happen to be software: painters, composers, poets, even chefs. The occasion was a showcase of accomplishments in the divisive young field of computational creativity. The Painting Fool was the star of the show on that particular night, offering portraits gratis and on-demand to a long line of Parisians. Here’s how it works: Before the Fool paints somebody’s visage, it scans a feed of articles from the British newspaper The Guardian to derive a mood from the words describing world news. That emotional information then determines the mood of a portrait—and, for that matter, whether the Fool paints a portrait at all. World news being what it is, for a few minutes that evening, dispatches from Syria entered the Fool’s feed and shut it down completely. As we waited, awkwardly contemplating the hors d’oeuvres, the artist was paralyzed by what can only be described as digital melancholy.
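The article doesn't detail the Fool's actual algorithm, but the mechanism it describes — scoring news text for emotional tone, then letting that score pick a palette or veto a painting entirely — can be sketched in miniature. Everything here (the word lists, the thresholds, the mood labels) is hypothetical illustration, not Colton's code:

```python
# Toy sketch of mood-from-news sentiment (hypothetical word lists and
# thresholds; the Painting Fool's real pipeline is far richer).

NEGATIVE = {"war", "crisis", "attack", "death", "conflict"}
POSITIVE = {"peace", "celebration", "rescue", "recovery", "joy"}

def mood_from_headlines(headlines):
    """Score a batch of headlines and map the average to a painting mood."""
    score = 0
    for headline in headlines:
        words = headline.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    avg = score / max(len(headlines), 1)
    if avg < -1:
        return None        # too bleak: refuse to paint at all
    if avg < 0:
        return "somber"    # dark palette, heavy diagonal strokes
    return "bright"

print(mood_from_headlines(["Peace talks bring joy", "Flood recovery begins"]))
```

The `None` branch is the toy counterpart of the "digital melancholy" at the gallery: a grim enough feed doesn't just darken the portrait, it shuts the painter down.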
Colton, the Fool’s creator, is a long-haired Englishman with dimples and an affinity for corduroy jackets. He began working on the software about 12 years ago, and seven years ago gave it a name. His goal, he says, is to create a program that would one day be taken seriously as an artist in its own right. “The vast majority of computer scientists are trained in universities to write software that is dependable, that does exactly what it’s required to do,” Colton tells me. “I say take all that and throw it out the window: We want software that is moody, undependable, surprising.”
For the past 40 years or so, most of the brainpower in artificial intelligence research has focused on what Colton calls the problem-solving paradigm. The most celebrated feats in AI—Deep Blue’s win against Garry Kasparov in chess and IBM’s Watson’s victory against Ken Jennings in a game of Jeopardy!—have involved computers matching wits with humans to solve a set of cognitive challenges. Colton and his colleagues work in a different paradigm, one he calls artifact creation.
The accomplishments of the problem-solving paradigm are measurable and impressive. But as skeptics of artificial intelligence often point out, strategy games and trivia contests don’t quite strike at the heart of what it means to be human. (And it’s the most rudimentarily human tasks, like understanding ordinary language, that often cause computers to stumble in such contests.) Creative genius, on the other hand, is one of those qualities people often name when trying to explain what sets us apart; belonging to the same species as Bach and Rembrandt counts for a lot. But creativity, it turns out, comes more naturally to software than most people think.
The U.S. Patent and Trademark Office defines an invention as something that is a novel, useful, and non-obvious extension of existing ideas. Psychologists often define human creativity along similar lines—as a knack for arriving at novel combinations of existing ideas. “I define creativity as the ability of a system to associate two things that might not seem logically connected, and to produce something humans see as valuable,” says David Cope, a California composer and artificial intelligence researcher. “That’s not only possible with machines, it’s easy—much easier than AI.”
Consider, for instance, another artist-computer I discovered at the Paris gallery. In 2011, three students who studied with the computer scientist Dan Ventura at Brigham Young University developed a virtual chef named PIERRE, the Pseudo-Intelligent Evolutionary Real-time Recipe Engine. PIERRE begins its creative process by scraping the Internet for recipes that human chefs have tried, uploaded, and ranked highly. It then recombines ingredients from those highly ranked recipes in search of novel dishes. Its recombinatory method is an example of something called evolutionary computation, an approach to programming that borrows its logic—and vocabulary—from the field of genetics. Just as certain alleles cling together during the process of genetic recombination, PIERRE’s evolutionary computation groups certain gastronomic elements together (like basil and balsamic vinegar, or garlic and butter) even as it slices recipes apart. Once recombined into new recipes, the resulting clusters of ingredients are weighed against PIERRE’s benchmark of a good dish. After that fitness test—which is not unlike the moment when a Galápagos finch chooses a suitable mate—the best recipes are selected to be further refined through recombination until the software deems them edible. PIERRE can run through 50 generations of this process in a few seconds, winnowing its output down until it produces about 10 recipes ready to be cooked. “We’re not absolutely sure how it works,” says Ventura with a playful smile, “but we have some pretty good ideas.”
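The loop Ventura describes — random recombination, a fitness test, selection of survivors, repeat for dozens of generations — is the standard shape of a genetic algorithm, and can be sketched in a few lines. The pantry, the "linked" pairings, and the fitness function below are all invented stand-ins; PIERRE's real benchmark is learned from thousands of ranked human recipes:

```python
import random

# Toy evolutionary-computation sketch in the spirit of PIERRE.
# All ingredient data and the fitness heuristic are hypothetical.

PANTRY = ["basil", "balsamic vinegar", "garlic", "butter",
          "leeks", "black beans", "coconut milk", "cinnamon"]

# Pairings the "genome" tends to keep together, like linked alleles.
LINKED = {("basil", "balsamic vinegar"), ("garlic", "butter")}

def fitness(recipe):
    """Crude stand-in for PIERRE's learned benchmark of a good dish."""
    score = len(set(recipe))              # reward variety
    for a, b in LINKED:
        if a in recipe and b in recipe:
            score += 2                    # reward classic pairings
    return score

def crossover(parent_a, parent_b):
    """Slice two recipes apart and recombine the halves."""
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    return parent_a[:cut] + parent_b[cut:]

def evolve(generations=50, pop_size=20, recipe_len=4):
    pop = [random.sample(PANTRY, recipe_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]  # the fitness test: keep the best
        children = [crossover(random.choice(survivors),
                              random.choice(survivors))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

random.seed(0)
print(evolve())
```

Running 50 generations over a population this small takes milliseconds, which is why PIERRE can winnow to its final ten recipes in seconds. What it cannot do, of course, is taste the result.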
At the gallery show, Ventura presented three of PIERRE’s most recent recipes: Broth of Pure Joy, Scrumptious Broth With Bean, and my favorite, Divine Steak of Water. (The dishes’ names, like their recipes, were computer-generated.) For the occasion, Ventura had persuaded a Parisian chef to carry out PIERRE’s orders. Sherry, coconut milk, cinnamon, and cocoa powder all went into the same pot, along with halloumi cheese, leeks, black beans, and green chiles. What emerged—Scrumptious Broth With Bean, in this case—was an eminently edible, above-average dish, about the quality you’d expect from a confident guest at a decent potluck. For a piece of software that has never been hungry or tasted food, above average isn’t shabby.
Perhaps the most powerful example of computational creativity is the oeuvre of David Cope, the composer-turned-programmer. In 1980, Cope came down with a severe case of composer’s block. He had been commissioned to write an opera, and the opera wasn’t cooperating. Earlier experiments in synthesized sound and musical software led Cope, then a professor at the University of California, Santa Cruz, to try to write a program that would compose new music in a style matching his own. When that proved difficult, he instead set about writing software that could compose interesting music in the style of canonical composers like Bach, Beethoven, Messiaen, and Joplin. (“I chose a bunch of dead composers so I wouldn’t get sued,” Cope recalls.)
In 1989, a performance at a music festival mixed Cope’s software-composed music with authentic classical works. In a sly variation on the Turing Test—a game computer scientists play to see if a machine can fool people into thinking it’s human—audience members were quizzed to see if they could tell the difference. Most couldn’t. In later, more advanced experiments, Cope tried crossing, say, the style of Joplin with the style of Chopin. And in the culmination of his artificial intelligence work, he designed a piece of software—with the distinctly unrobotic name Emily Howell—that composes contemporary classical music in a style all its own.
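Cope's actual system segments and recombines a composer's own phrases with far more musical intelligence than any toy can capture, but the underlying idea — learn which events tend to follow which in a corpus, then walk those statistics to generate something new in the same style — can be illustrated with a first-order Markov chain. The three-note corpus below is purely illustrative:

```python
import random
from collections import defaultdict

# A minimal Markov-chain sketch of corpus-based style imitation.
# Cope's software is far more sophisticated; this only shows the
# learn-then-recombine idea. The toy corpus is hypothetical.

corpus = ["C", "E", "G", "E", "C", "G", "C", "E", "G", "C"]

# Learn the transition table: which notes tend to follow which.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def compose(start="C", length=8):
    """Generate a phrase that statistically resembles the corpus."""
    phrase = [start]
    for _ in range(length - 1):
        options = transitions.get(phrase[-1])
        if not options:
            break          # dead end: no note ever followed this one
        phrase.append(random.choice(options))
    return phrase

random.seed(42)
print(" ".join(compose()))
```

Because duplicated transitions stay in the list, common progressions in the corpus are proportionally more likely in the output — the statistical sense in which the new phrase is "in the style of" the old ones. Crossing Joplin with Chopin, in this framing, is a matter of training the table on both corpora at once.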
The day is not far off, Cope says, when everyone will be able to carry a personal composer in their pocket, one that can generate new songs in the style of those they already have on their playlist—or in new styles altogether. In fact, “that’s almost trivial to imagine,” Cope says. “I’m surprised it doesn’t already exist.”
The main hurdles for computational creativity are not technical but psychological. “Every time a machine can do something, people turn around and say, ‘Well, that’s not really intelligent,’” says Michael Spranger, an artificial intelligence researcher for Sony Computer Science Laboratories in Tokyo. Colton calls this prejudice against software “silicon bias”—and he has seen convincing evidence that it’s pervasive. “If you show people two rows of paintings, get their feedback, and then tell them the second row was painted by a mass murderer, those rankings fall precipitously,” he says. “The same thing can happen when you tell them it’s painted by a computer.”
As the proud father of the Painting Fool—a piece of software with no known copy, and hence a kind of individuality—Colton takes silicon bias more than a little personally. And while he is no doubt himself biased in favor of his own creations, the talents of the Painting Fool do sometimes make him forget it is a set of logical commands atop a knot of silicon and metal. “Every morning when I wake up,” he says, “I have to repeat to myself ten times: Software is not human, software is not human, software is not human….”