A few years from now, as you take your seat in a concert hall, you might open your program and find a puzzling announcement: Tonight we’ll be hearing works by André Previn, Henry Purcell, and Hewlett Packard.
An annoying example of product placement? Actually, it could be an accurate, if incomplete, indicator of authorship.
And without that notification, we might never know the difference.
Most of us like to think we could easily differentiate between a piece of music written by a human being and one generated by a computer. But a paper just presented at the International Conference on Computational Creativity 2012 suggests otherwise.
In it, three researchers from Simon Fraser University in Vancouver—Arne Eigenfeldt, Adam Burnett and Philippe Pasquier—describe a real-world test of their ongoing collaboration, the Musical Metacreation Project.
A metacreation, as they explained when launching the initiative in 2009, “is software which, using aspects of artificial intelligence, cognitive modeling, artificial life or machine learning, displays creative behaviors; that is, behaviors which would be considered creative if performed by humans.” (For an in-depth look at how software can write symphonies, see our 2010 feature “Triumph of the Cyborg Composer”.)
This past December, the trio presented a public concert of world-premiere compositions, which were performed by a professional string quartet, a percussionist, and a Disklavier (a mechanized piano that can interface with a computer).
“Ten compositions by two composer/programmers were created by five different software systems,” they report. “Two of the works were human-composed, while a third was computer-assisted. The audience was not informed which compositions were human-composed.”
Performances of some of the compositions, including “One of the Above #2,” can be viewed online.
The 46 audience members were asked to indicate on a one-to-five scale their familiarity with contemporary classical music. They then rated how “engaging” they found each of the 10 works, again using a one-to-five scale.
The key findings: “The audience did not discern computer-composed from human-composed material.” Listeners generally considered the works appealing, but they found the human-composed works no more enjoyable than those created by computers.
What’s more, the listeners who described themselves as musically knowledgeable were no better at telling the two apart than the musical novices were.
Needless to say, the digital Debussys weren’t doing this on their own. As Eigenfeldt and his colleagues note, the compositions reflect “the artistic sentiment of their designers.” The approval of the audience suggests the systems in question “were successful in portraying the goal, aesthetic and style of the two composers who generated them,” they write.
This raises an enticing, albeit ghoulish, prospect. Could an aging genius composer program a computer to turn out a continuing stream of “his” or “her” works long after the composer’s death? The mind boggles. Roll over, Beethoven, and reboot.