
Quick Studies

Sufferers of Social Anxiety Disorder, Your Friends Like You


(Photo: fixem/Flickr)

The first study of friends' perceptions suggests they know something's off with their pals but like them just the same.

Social anxiety disorder (SAD) can be devastating. In the worst cases, sufferers of the illness struggle with basic tasks, like signing a check in front of another person, and the majority routinely report less satisfying friendships and other relationships. A new study, however, suggests people with SAD might be underestimating how much others like them: While friends perceived the relationships somewhat differently, they reported higher levels of friendship intimacy and satisfaction than their SAD friends.

Researchers have known for a while that social anxiety disorder leads people to perceive their situations more negatively than others do, specifically with regard to friendships. However, previous studies have generally relied on patients’ own impressions of their friendships, and typically those impressions concern friendships in general rather than any one relationship in particular. As a result, scientists who study SAD don’t really know how bad patients’ friendships actually are—maybe the disorder really does harm friendships, or maybe it just harms one’s perceptions of those friendships.


Sorting that out requires something no one seems to have done before: finding friends of people with SAD and asking them how they felt about their chums. That’s precisely what Thomas Rodebaugh and a team at Washington University in St. Louis did. They asked 77 people diagnosed with generalized social anxiety disorder—meaning they experienced social anxiety in a number of different situations—and a control group of 63 people without SAD symptoms to bring a pal into the lab. There, both the primary participants and their friends filled out surveys concerning how much they liked each other, how close they felt to each other, and how satisfying they felt the friendship was.

The survey results indicated that people with SAD viewed their relationships more pessimistically compared with people in the control group. And, as Rodebaugh and team suspected they might, SAD sufferers reported feeling less close to their friends than their friends did to them. People with SAD also reported liking their friends less than the other way around and being slightly less satisfied with the relationship than their friends.

That’s not to say that friends of those with SAD couldn’t tell the difference. Compared with the control group’s friends, friends of SAD sufferers perceived their friends to be less dominant in the relationship and also less well adjusted.

“We found clear evidence that SAD is related to self-report of impairment in specific friendships, consistent with the hypothesis that SAD is a fundamentally interpersonal disorder,” the authors write in the Journal of Abnormal Psychology. “However, we found little evidence that friends experienced the same level of friendship impairment, despite them seeing differences” between those with and without social anxiety disorder. That, the authors explain, provides support for treatments that focus on helping people with the disorder see that they’ll come across better than they think they will.

Quick Studies

Standing Up for My Group by Kicking Yours


(Photo: 719production/Shutterstock)

Members of a minority ethnic group are less likely to express support for gay equality if they believe their own group suffers from discrimination.

A common-cause coalition of oppressed minority groups was one of those 1960s fantasies that failed to materialize. A new study published in the Journal of Experimental Social Psychology suggests one reason why.

In two large surveys and a lab experiment, African Americans, Asian Americans, and Latinos were less likely to express support for gay equality if they believed their ethnic group suffered from discrimination.


Maureen Craig and Jennifer Richeson of Northwestern University attribute this to the psychological phenomenon known as social identity threat, in which the self-esteem of a devalued group is bolstered by derogating other groups.

While that’s a disheartening dynamic, the researchers, to their surprise, found that members of one racial minority—Asian Americans—who had personally experienced discrimination expressed more positive attitudes toward homosexuality.

Individually hurtful experiences, as opposed to a general sense that one’s entire race has been wronged, “may better promote sympathy and/or perceived commonality with other disadvantaged groups.” But absent direct experience with intolerance, group solidarity trumps empathy for outsiders.



Quick Studies

How Old Brains Learn New Tricks


(Photo: neilmoralee/Flickr)

A new study shows that the neural plasticity needed for learning doesn't vanish as we age—it just moves.

Turns out you can teach an old dog new tricks—the dog just needs to use a different kind of nerve cell to learn them. That’s the thrust of a study out today that presents perhaps the first clear evidence that aging people’s brains still undergo physical changes as they learn, just not the way their youthful counterparts do.

At issue is the idea that brains don’t change much in adulthood. A host of magnetic resonance imaging (MRI) studies have proved that wrong, but those same studies seemed to confirm another intuition that, as we age, we lose the capacity to grow, repair, and modify connections between nerve cells, a capacity called neural plasticity.


“It has been said that old people are less plastic, meaning the effect of learning is much less,” says Takeo Watanabe, a psychologist at Brown University and one of the authors of the new study. But, he says, behavioral experiments “have shown that is not necessarily the case.” In visual learning experiments in which participants must remember sets of images or look for minute changes in an image, older experimental subjects can learn at about the same rate as younger people, Watanabe says. But if older brains are less plastic, how are older people still able to learn so well?

The key turns out to be which parts of the brain’s wiring remain plastic in younger and older people. With standard MRI scans, researchers can mainly see gray matter—the tissue, dense with nerve cell bodies, dendrites, and synapses, that makes up the cerebral cortex and other structures. White matter, made of the long nerve fibers called axons that carry signals between brain regions, doesn’t show up in as much detail on those images. For that, the team needed a relatively new technique called diffusion tensor imaging (DTI). After using MRI and DTI to scan the brains of 18 adults aged 65 to 80 and 21 others aged 19 to 32 before and after several days of practicing a visual learning task, the researchers found that both age groups learned at similar rates—but their brains responded differently. Gray matter changed only in younger adults, while white matter changed only in older ones—there, DTI results suggested that axons had grown thicker and developed more robust myelin sheaths, which help prevent crosstalk between the brain’s electrical connections.

That sort of “double dissociation” between changes in younger and older people’s brains is a clear sign that something is changing, despite similar learning abilities in the young and old alike. Perhaps, Watanabe says, increased axon plasticity in older people serves to compensate for a degradation in synapse efficiency—though he is quick to point out that’s just one guess. Our brains may have to age a bit more before we actually figure it out.

Quick Studies

Ethnic Diversity Deflates Market Bubbles


(Photo: petrick/Flickr)

But it's not in the rainbow and sing-along way you'd hope for. We just don't trust outsiders' judgments.

Ethnic diversity could help prevent stock market and housing bubbles, according to new experiments, though the reason might be a little bit depressing. Basically, we’re less likely to trust others’ judgment, and therefore less likely to follow their leads, when they come from different ethnic groups than our own.

That’s the conclusion of a paper just out in Proceedings of the National Academy of Sciences that reports the results of two stock-trading experiments conducted in Singapore and Kingsville, Texas. The project was motivated in part by a desire to understand how the housing bubble followed so closely on the heels of the 1990s tech bubble, lead author and Columbia University economist Sheen Levine writes in an email. “In 1999, everybody I knew was starting an Internet company, and in 2005 the same people assured me that real estate prices can only go up,” he says. “I wondered how intelligent people, versed in economics and finance, can all ignore reality so well.”


One hypothesis is a kind of groupthink. If somebody’s buying one stock and I’m not, the groupthink goes, he must know something I don’t, and I should follow suit—while in truth the buyer might actually need a reality check.

Ethnic diversity, some suggest, could be a solution to this conundrum. University presidents have defended programs aimed at racial and ethnic minorities on those grounds, and research seems to back up the idea that a wider range of viewpoints leads to more balanced, groupthink-free decisions. Yet ethnic diversity has a dark side too, Levine and co-authors point out. Sometimes, it leads to more conflict than progress.

To see whether diversity could improve stock-market decisions—and if so, why—the researchers divided 180 people with backgrounds in business or finance into groups of six. Those groups played a 10-round stock-market game in which players traded a dividend-paying stock. Half the groups were ethnically homogeneous, while the other half had at least one ethnic minority—say, five Chinese players and one ethnically Malay player. While traders knew the ethnic make-up of their groups, they couldn’t communicate with each other, and all trades were anonymous.

As expected, homogeneous groups set inflated selling prices, yet traders in those groups still bought the stock, and the stock price climbed over 10 rounds. Just the opposite happened in ethnically diverse groups: Traders refused inflated selling prices, and over time the stock price fell to roughly the price it would have in an idealized market with rational traders.
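
What counts as a "rational" price in these games is the stock's fundamental value: with no resale value after the final round, a share is worth only the dividends it is still expected to pay, so the benchmark declines round by round. Here is a minimal sketch of that arithmetic, using hypothetical numbers—the article doesn't give the experiment's actual dividend schedule:

```python
# Fundamental value of a dividend-paying stock in a finite-round
# experimental market: once the final round ends the share is worthless,
# so at any point it is worth only the dividends still expected to come.
# The dividend figure below is hypothetical; the study's actual
# parameters aren't given in the article.
EXPECTED_DIVIDEND = 24  # expected payout per share per round, in cents
TOTAL_ROUNDS = 10

def fundamental_value(current_round: int) -> int:
    """Expected dividends from current_round through round TOTAL_ROUNDS."""
    rounds_left = TOTAL_ROUNDS - current_round + 1
    return EXPECTED_DIVIDEND * rounds_left

for r in (1, 5, 10):
    print(f"round {r:2d}: fundamental value = {fundamental_value(r)} cents")
# A bubble shows up as trading prices persistently above this benchmark.
```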

It would have been nice if that had happened because traders in diverse groups took others’ views into account when setting prices, but with anonymity and a lack of communication, it’s more likely they simply didn’t trust others’ judgments when it came to setting reasonable buying and selling prices.

“Homogeneity, we suggest, imbues people with false confidence in the judgment of coethnics, discouraging them from scrutinizing behavior,” the authors write.

Quick Studies

Online Brain Exercises Are Probably Useless


(Photo: healthblog/Flickr)

Even under the guidance of a specialist trainer, computer-based brain exercises have only modest benefits, a new analysis shows.

Maybe the scariest part of growing old is the possibility of cognitive decline—forgetfulness, difficulty thinking clearly, and, in the worst cases, full-on dementia. It’s therefore natural that researchers and entrepreneurs hoped that specialized brain training could make a difference, just as daily walks might keep an aging body fit.

Unfortunately, that hope remains for the most part unfulfilled, according to a study published Tuesday in PLoS Medicine. In healthy older adults, computer-based brain exercises have limited benefits, and then only when supervised by a trainer one to three times a week. And despite what Lumosity and BrainHQ will tell you, doing the training at home had no effect at all, at least in the short term. The meta-analysis and an accompanying commentary add to a growing chorus of scientists who argue that, while cognitive training may have value, there is as yet very little evidence to support that claim.


Amit Lampit, Harry Hallock, and Michael Valenzuela of the University of Sydney’s Brain and Mind Research Institute reached their conclusions following a meta-analysis of 51 studies that investigated the effects of computerized cognitive training, or CCT, on nearly 5,000 senior citizens. Lampit, Hallock, and Valenzuela focused specifically on experiments that used at least four hours of CCT and that tested cognitive abilities just before and just after training. Even with those criteria, that left a considerable range of CCT approaches, including both center- and home-based methods as well as measures of information processing speed, working memory, attention, and other skills.

Overall, the most important result was that center-based CCT guided by a specialist has a small but discernibly positive effect on cognitive abilities, much as you might expect a fitness trainer at the gym to have a small positive impact on your physical health. Home training, as the analogy might suggest, had essentially no effect.

Breaking the results down further, the researchers found that CCT had less impact on some skills than others. The largest effects, though still generally small, were on memory for images, working memory—the system that lets you keep track of different pieces of an idea you’re pondering, for example—and processing speed. CCT had little to no effect on attention or executive functions, the sorts of things involved in impulse control, planning, and generally avoiding bad spending decisions. And as with the big picture, at-home CCT had no effect on cognitive abilities.

The research is not without limitations, the team notes. The results do not necessarily apply to those already experiencing cognitive impairments, and it remains possible there are more substantial long-term benefits from computer-based brain training—though that mainly highlights the need for more study, the authors write.

In an accompanying perspective article, PLoS Medicine consulting editor Druin Burch writes that CCT’s modest effectiveness “is a conclusion of value to academics in the field and to those with interest in selling training programmes. The value to others depends on how well they understand the conclusion’s limits.” In particular, Burch warns against consumers interpreting the results with anything but caution.

Quick Studies

To Find Suspicious Travelers, Try Talking to Them


(Photo: jonathancohen/Flickr)

Brief, directed conversations are more effective at identifying liars than fancy behavioral analysis, an experiment suggests.

By now, it’s safe to say that the Transportation Security Administration’s behavioral detection officers—agents trained to detect suspicious behavior simply by watching people—aren’t very effective. Still, the TSA would like to have tools for detecting potential threats beyond current body scanners, which have their own problems. Now, a pair of English researchers report a new interview approach that could help tell the difference between liars and others.

Lie detection is a controversial subject historically, and a field perhaps dominated more by the hope that it’s possible than by particularly strong scientific research. Though a few prominent scientists think we can detect lies using physiological measurements or facial expressions, most think that interview techniques are more effective for identifying prevaricators. Interviews, the thinking goes, are more mentally taxing on liars than truth tellers, and they yield more opportunities for liars to contradict themselves. On the other hand, an interview must last long enough to set traps and spring them.


Thomas Ormerod and Coral Dando’s solution is to engage passengers in brief, friendly conversations that elicit fairly detailed accounts of individuals’ travel plans and backgrounds. Those conversations are meant to be quite flexible, so that officers can probe details of a passenger’s story as they come up. Key to the approach is letting the traveler do most of the talking, giving agents more information to go on when evaluating a passenger’s truthfulness. This contrasts with methods such as “suspicious signs,” which rely on a fixed set of questions with generally shorter answers and often emphasize supposed behavioral tells over information gathering.

To see if their approach worked, the pair went into London Heathrow Airport and a few others and trained 79 officers in their method, called Controlled Cognitive Engagement (CCE). Another 83 trained in the suspicious-signs method also took part. To test the methods, Ormerod and Dando recruited 204 people and gave them one goal: con their way past airport security agents using falsified boarding passes and false identities. The agents’ goal was to stop as many of the fakes as they could—a particularly difficult challenge since the fakes had blended in with legitimate air travelers who showed up simply to catch a flight.

The contrast between methods was stark. Agents trained in CCE stopped two-thirds of the mock passengers, compared with a dismal three percent stopped by agents using suspicious signs, which is standard protocol at many airports around the world. Meanwhile, agents using CCE stopped only three percent of real passengers who agreed afterwards to participate in the study—about the same false-positive rate as the suspicious signs method.

“Our results have implications for practitioners, both in security screening, and more generally for professional lie catchers such as police officers and court officials,” Ormerod and Dando write in the Journal of Experimental Psychology: General. “In contrast to current practice, we propose that security agents should not be trained to identify specific behaviors associated with deception.” Instead, agents should work to draw out potential inconsistencies through conversation, they argue.

Quick Studies

Coastal Cognizance of Climate Change


Santa Barbara, California, home to Pacific Standard. (Photo: S.Borisov/Shutterstock)

People who live closer to the shore are more likely to believe in climate change and to support regulation of carbon emissions.

If you can feel the sea breeze on your face when you walk out of your house, you’re more cognizant of climate change.

That’s the conclusion of a new study of 5,815 New Zealanders, which finds “people living in closer proximity to the shoreline expressed greater belief that climate change is real, and greater support for government regulation of carbon emissions.” This held true even after taking into account the respondents’ age, gender, education, personal wealth, and political leanings.


The researchers, led by psychologist Taciano Milfont of Victoria University of Wellington, can’t definitively say why residents of coastal communities hold views more in line with the scientific consensus. But they suspect predictions of such disasters as flooding and sea level rise hit home for seaside dwellers in a more immediate, psychologically impactful way.

The ocean, they write in the online journal PLoS One, “may inspire a sense of respect for the power of nature and its changeability.” If so, the challenge for policymakers is to inspire similar reverence among the landlocked.

We at Pacific Standard are already convinced—but then, our offices are only about a mile from the ocean.



Quick Studies

Kids Don’t Really Mind an Inflated Ego—Unless They’re Its Target


(Photo: departmentofed/Flickr)

A new survey of eighth graders suggests that an unjustifiably high opinion of oneself has subtler effects on relationships than previously thought.

Nobody likes a know-it-all or a snob, or so the conventional wisdom goes. But in a new study, psychologists argue that students with unjustifiably high opinions of their own academic abilities don’t actually engender their peers’ contempt. It takes a sense of superiority targeted at one person to do that, and the chilly feeling that results is often mutual.

Much has been made in years past of the effects of self-esteem, both in academia and the popular press, but the conclusions are often inconsistent with one another. Some studies find enhanced or even inflated self-perceptions can be good for you and lead others to perceive you more positively. Others suggest that an enhanced self-image alienates others and leads you to a life of narcissism and apathy. But goals and methods often vary in these kinds of experiments. In particular, some studies examine a general sense of superiority to others, while some look at what happens when individuals feel superior to specific colleagues or peers. That led German psychologists Katrin Rentzsch and Michela Schröder-Abé to wonder whether there really is a difference between Johnny thinking he’s the smartest kid in the room, and Johnny thinking he’s smarter than Jenny.


To find out, Rentzsch and Schröder-Abé brought their science to that bastion of fraught social politics, eighth grade. They surveyed 330 eighth-grade boys and girls in eight schools in southeast Germany about personality traits, academic self-esteem, whether they felt academically superior to each of their classmates, and whether they liked each fellow student. They also calculated the average of each student’s scores in math, physics, German, and English, a measure that allowed them to determine whether students harbored unrealistically high opinions of themselves relative to specific others.

Analyzing their data, the psychologists found that students neither liked nor disliked kids with unrealistically high opinions of themselves any more than anyone else, as long as they weren’t being singled out as the target of a big-headed peer’s feelings of superiority. When they were—when one student had an inflated sense of academic ability relative to a specific classmate—targeted students disliked the kids targeting them. Big-egoed students didn’t entertain such subtleties, though—they just disliked everybody.

“Our findings may help to explain previous controversial findings on the interpersonal consequences of self-enhancement in that they reveal different effects at two levels of analysis,” the authors write in Social Psychological and Personality Science. “Although in our study, students high in habitual self-enhancement tended to dislike others, they were not disliked by others in return; whereas at the relationship level, feeling superior to a specific other was not so easily forgiven.”

Quick Studies

Controlling Genes With Your Mind


(Photo: 125992663@N02/Flickr)

It's not as surprising as you think.

Scientists have figured out how to control genes with their minds.

You read that right. A team of bioengineers has developed a proof-of-concept system with which a person can regulate simple gene functions using electrical signals in his or her brain. Odd though it seems, it might one day be a useful medical tool, the team reports in Nature Communications.

Actually, it shouldn’t be that surprising. The biology and neuroscience behind their technique isn’t all that new or even complicated by modern standards. Biologists first began to understand how to control gene expression—the process that allows organisms to produce different kinds of cells from the same DNA—in E. coli during the 1970s. More recently, bioengineers have devised ways to regulate gene expression in mice and humans. Theoretically, doctors could use gene expression to treat disease through various relatively non-invasive techniques—for example, illuminating light-sensitive proteins that bind to particular, targeted genes in the brain could help treat depression.


At the same time, brain scientists have stretched the boundaries of what we can do with our minds alone. Motivated in part by a desire to help those who’ve lost limbs, researchers have designed robotic arms a person can control using brain signals alone, and you can buy similar, though somewhat less sophisticated, devices online.

Still, it is something of a novelty to combine the two areas of technology into one. To do so, researchers at ETH Zurich’s Department of Biosystems Science and Engineering first designed implants to be placed inside a group of mice. Each had three main parts: a wireless receiver used to power the device, a near-infrared light-emitting diode, and a semi-permeable chamber containing a variant of the bacterium Rhodobacter sphaeroides. The bacteria had been modified so that when near-infrared light shone on them, they would release secreted alkaline phosphatase, a protein that plays a number of roles in humans, including regulating the immune-system protein interferon.

The power source is where mind control comes in. Using a commercially available headset that measures electrical signals on the scalp, a group of human test subjects trained themselves to control a brain-computer interface. The researchers then hooked the interface up to the implant’s wireless power source, allowing humans to control gene expression in mice.
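
In outline, that control chain is a simple feedback loop: the headset reduces scalp signals to a scalar feature, and when the wearer learns to push that feature past a threshold, the interface switches on the implant's field generator. The sketch below is entirely schematic—the signal source, threshold, and device interface are all hypothetical stand-ins, since the article doesn't detail the actual hardware or signal processing:

```python
# Schematic of the control loop described above: an EEG-derived scalar is
# thresholded to switch the implant's wireless power on and off. The
# signal source, threshold, and device interface are all hypothetical;
# the study's actual hardware and processing are not described here.
import random

def read_eeg_feature() -> float:
    """Stand-in for a headset reading, e.g. a normalized band-power value."""
    return random.random()

class FieldGenerator:
    """Toy stand-in for the implant's wireless power source."""
    def __init__(self) -> None:
        self.on = False

    def set(self, on: bool) -> None:
        if on != self.on:
            self.on = on
            state = "ON: LED lights, cells respond" if on else "OFF"
            print(f"field generator {state}")

THRESHOLD = 0.7  # wearers train, via feedback, to push the feature past this
power = FieldGenerator()
for _ in range(10):  # one reading per control cycle
    power.set(read_eeg_feature() > THRESHOLD)
```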

You’d be forgiven at this point for wondering whether the work is the product of “because we can” thinking or even a mad scientist, but in the long term it might have practical medical value, writes senior author Martin Fussenegger in an email. Doctors could use devices like the one his team designed to manage gene therapy through thoughts. Farther down the line, “it may become possible to capture brain wave signatures associated with chronic pain and epileptic seizures” ahead of time, and those signals might be used to trigger an implant to provide treatment before pain or a seizure strikes.

All indications suggest that’s a long way off, however. For one thing, there remain ethical questions about using such implants, let alone having patients control them.

Quick Studies

Tough Weather Makes for Moralistic Gods


(Photo: kingray/Flickr)

Climate variability and the availability of natural resources help shape religious beliefs, scientists find.

A tough climate might make for tough men and women, but with history and social forces on its side, it will also most likely make for a pretty tough god.

That’s according to a study out today in Proceedings of the National Academy of Sciences, which suggests one can predict with a high level of accuracy whether a society has a moralizing high god—one thought to govern reality, intervene in our affairs, and enforce or at least support moral behavior—using ecological data in conjunction with just a few political, economic, and agricultural measures.

Scientists have been debating for some time what influence ecology might have on the religious aspects of our culture. It shouldn’t come as a big surprise that culture and the natural environment interact, and there’s reason to believe that belief in a god might increase cooperation even in anonymous interactions. Those two factors suggest that societies that most need to cooperate—for example, groups living in places with few resources or unreliable agricultural conditions—might have the sort of gods that would encourage cooperation. Yet no one’s quite sure whether that argument works out in practice.


To find out, a team of biologists, linguists, and others collected data on 583 societies listed in the Ethnographic Atlas. The atlas itself includes information on each ethnic group’s location, religion, agriculture, and the extent to which societies organized themselves into political units beyond the local level, among other factors. To take account of potential outside influence on a culture, the team also recorded the religious beliefs of each society’s 10 nearest neighboring societies. To bring climate into the mix, the team gathered historical data on rainfall, temperature, biodiversity, and primary production—roughly, a measure of how much solar energy plants convert into forms they can store for later.

Data in hand, the researchers first boiled the ecological data down into two variables, resource abundance and climate stability. Taking those two factors as well as the ethnographic data into account, the team examined a subset of 389 societies and found that political complexity and proximity to societies with moralistic high gods increased the probability a society had a moralistic deity, while societies with the most resources tended to be less likely to have such a god. Increasing climate stability for the most part had the same effect as increasing resources: In most societies they considered, climate stability meant predictable, generally good living conditions, making deity-encouraged cooperation less necessary.

Not only did climate help shape religious beliefs, but including it in the team’s model also led to remarkably accurate predictions. After fitting the model to the first two-thirds of the data, the researchers tested it on the remaining third and found they could correctly classify 91 percent of those cultures as having a moralistic deity or not.
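
The article doesn't name the authors' statistical model, but the evaluation it describes—fit on two-thirds of the societies, then score predictions on the held-out third—is a standard hold-out test. Here is a sketch of that procedure on synthetic data, with logistic regression standing in for whatever model the authors actually fit:

```python
# Hold-out evaluation as described above: fit a classifier on two-thirds
# of the societies, then measure accuracy on the remaining third.
# All data here are synthetic, and logistic regression is only a
# stand-in for the authors' actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 583  # societies in the Ethnographic Atlas sample

# Hypothetical predictors: resource abundance, climate stability,
# political complexity, and neighbors' beliefs.
X = rng.normal(size=(n, 4))
# Synthetic outcome (1 = moralistic high god), loosely mimicking the
# reported directions: more resources/stability -> less likely;
# more political complexity / god-believing neighbors -> more likely.
score = -1.0 * X[:, 0] - 0.8 * X[:, 1] + 1.4 * X[:, 2] + 1.1 * X[:, 3]
y = (score + rng.logistic(size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1 / 3, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.0%}")
```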

Quick Studies

If It’s Good Enough for the Supreme Court…


(Photo: Portokalis/Shutterstock)

A research team looks into how Iowa's legalization of gay marriage in 2009 affected the views of registered voters.

The same-sex-marriage battle is adding new evidence to an age-old debate: Do the courts tend to follow public opinion, or do they shape it?


A research team led by University of Iowa political scientist Caroline Tolbert interviewed 503 registered Iowa voters just before and just after the state’s supreme court effectively legalized gay marriage in April 2009. They found that while hard-core social conservatives were unmoved by the ruling, “Democrats, non-religious, non-evangelical, educated, and younger respondents were more likely to change their opinions to increased support, as did those who had gay or lesbian friends and family.”

Writing in Political Research Quarterly, the researchers conclude that “the signaling of new social norms pressured some respondents to modify their expressed attitudes.” This suggests a positive feedback loop for same-sex marriage, in which legalization leads to increased social acceptance, and helps explain why attitudes on the issue have shifted with such remarkable rapidity.



Quick Studies

High School Is a Rude Awakening


Five more minutes. (Photo: danielfoster/Flickr)

Researchers find—yet again—that teens really do need to sleep in.

Remember dragging yourself out of bed before dawn to get to your high school classes on time? Remember how much easier it seemed in grade school? Yeah, you weren’t just getting lazy, a new study shows.

The research, out today in the journal PLoS One, is further confirmation of what sleep researchers have suspected for a while: As they get older, kids’ and teens’ circadian rhythms shift, meaning they really should go to bed later at night and wake up later in the morning. The new study lends support to the American Academy of Pediatrics’ recent statement that middle and high schools shouldn’t start earlier than 8:30 a.m.—doing otherwise, the pediatricians’ group argues, could threaten kids’ health and academic performance.


In their study, Stephanie Crowley and researchers from five other institutions followed 38 kids aged nine or 10 initially and 56 teens aged 15 or 16 initially, all from Providence, Rhode Island, for about two and a half years. About every six months, those 94 participants underwent a week-long sleep assessment, during which they kept daily sleep diaries and wore activity monitors on their wrists. The resulting data gave the research team an idea of their subjects’ sleep patterns, but to investigate what sleep their bodies actually wanted, as opposed to what they got, the team had each participant come into the lab to measure something called dim light melatonin onset. Basically, that’s the time when the body starts producing more melatonin in preparation for sleep.

As both the younger and older cohorts aged, the team found, they went to bed later and later, and on weekends they woke up at correspondingly later times, typically around 8 or 8:30 a.m. On weekdays, kids under 18 all got up before 7 a.m.—suggesting schools set the de facto wake-up time for most adolescents—while those 18 or older woke up closer to 8 a.m., in line with their weekend habits. Regardless of when they got up, the study participants’ melatonin rhythms shifted later by one or two hours as they aged. That suggests that regardless of school policy, adolescents really ought to go to bed later.

“The consistent early weekday sleep offset [waking] times across 9 to 17 years … indicates that the school schedule may suppress a biologically-driven behavior to sleep later,” a result bolstered by the facts that weekend waking times grew later over time and that the difference between weekday and weekend waking times declined only after age 17, near or after the end of high school, the authors write. The conflict between school start times and biology could pose health risks as an apparently natural desire to stay in bed gets rudely awakened by the morning bell.

Quick Studies

Poorly Chosen Headlines Can Hurt Your Memory


Hot headlines. (Photo: stevenritzer/Flickr)

Experiments show people remember the main points less when a headline emphasizes something else.

The surprising thing about weapons of mass destruction in Iraq wasn’t that they never turned up. It was that vast numbers of Americans continued to believe there were WMDs over there long after Bush administration officials acknowledged there weren’t. Psychologists and political scientists sort of expected that—it doesn’t take much to sway your beliefs about the news, really, and attempts to correct inaccurate beliefs can actually backfire. But could a carelessly (or nefariously) placed headline be enough to mess with your memory of the facts in a news story? It can, according to a new study.


Building on past research on the spread and persistence of manifestly false beliefs, Ullrich Ecker and collaborators at the University of Western Australia wondered whether misleading, though not exactly incorrect, information might have an impact similar to that of outright falsehoods. In the first of two experiments, 51 UWA undergraduates read fake news stories concerning natural disaster-related deaths or burglaries. In both cases, students read there was a slight, short-term uptick but a more substantial long-term decline in the rates overall. Half the students saw an accompanying headline that emphasized the main point (“Downward Trend in Burglary Rate,” for example) while the other half read a somewhat misleading headline (“Number of Burglaries Going Up”). After reading those stories, the students got a pop quiz on what they’d read, and those who’d read the misleading headline scored 12 percentage points worse—46 percent on average, compared with 58 percent for those who’d read a more congruent headline.

Those results extended beyond the facts to students’ impressions of the people in them. In the second experiment, Ecker and team showed 47 more students made-up news reports of a crime, such as a financial scam. This time, a photo of one of the players—a victim, a culprit, or a prosecutor, for example—accompanied the story. The headline and the article’s first paragraph likewise referred to one of those people, though not necessarily the same one. When the researchers later asked students how positively they felt about the people depicted in the photos, they did generally favor the “good guys,” such as the victim or prosecutor. Still, the headline had an effect. When headlines focused on the perpetrators, for example, students rated pictures of victims and others on the side of justice more negatively than otherwise.

“There can be little doubt that misleading headlines result in misconceptions in readers who do not read beyond the headlines,” the authors write in the Journal of Experimental Psychology: Applied. “The present research suggests that misleading headlines affect readers’ memory for news articles.” In part, they argue, that’s because the facts of the story will always be interpreted in the context of what’s already been read—namely, the headline. In addition, readers may not be watching out for incongruities and therefore do nothing to correct for them. This may explain another of the researchers’ findings: When the team replaced news stories with opinion pieces, headlines had no effect on memory.

Quick Studies

No Matter Your Age, Making Mistakes Can Help You Learn


(Photo: xavitalleda/Flickr)

An experiment suggests seniors benefit from trial-and-error learning just as much, and in the same ways, as young adults.

Making mistakes is good for learning concepts regardless of age, according to a new study. Psychologists have been debating that issue for a few years, especially in light of experiments suggesting the value of trial-and-error learning declines with age—an effect that, the new work suggests, likely reflects differences in experimental procedures rather than in the learners themselves.

Much of the research psychologists do on how we learn uses a variation on the same simple paradigm: present pairs of words, such as “green tree,” let people practice those pairs in some way, then present the first words in each pair and see how many of the second words people remember. In a variation, experimental subjects learn lexical, rather than conceptual, associations—instead of “green” and “tree,” “qu” and “quote.”

In the former case, researchers have found that eliciting mistakes actually helps people learn. For instance, asking people to guess which word related to “green” the experimenters have in mind before explaining it’s “tree” reduces errors later on. In the real world, that’s why pre-tests, in which teachers test students on material they haven’t yet studied, can help students learn. But trial and error has the opposite effect on lexical learning.


When they dug into the details of those experiments, Andrée-Ann Cyr and Nicole Anderson found that psychologists had been giving more conceptual memory tests to younger people than older ones, which might explain why they found trial-and-error learning was good for young adults and bad for seniors. Perhaps it was the test, and not the person taking it.

To see if their hypothesis was right, Cyr and Anderson gave 32 young people and 32 older people, aged 72 on average, a conceptual memory test, and they gave another 32 young and 32 older adults a lexical test. Within each of the four groups, the researchers gave half of the participants instructions that elicited errors: first presenting one word, such as “fruit,” and then asking each person to guess which fruit the experimenters wanted them to learn.

Consistent with their argument, Cyr and Anderson found that trial-and-error learning—guessing and making mistakes—translated into about 10 percent fewer errors for conceptual learning and 10 percent more mistakes for lexical learning at test time. While seniors did worse overall on lexical memory tasks, they did just as well as young people on conceptual ones. An additional analysis showed that among those cases where participants had correctly recalled the answers, young and old alike remembered more of the guesses they’d made on conceptual versus lexical word pairs, suggesting that guessing worked by highlighting a set of related ideas—for example, things that are green—which reinforces memory. But precisely because the guesses one would make with a prompt like “qu” aren’t already related in our minds, guessing doesn’t help with the lexical tests.

“When learning emphasizes conceptual processing, error generation creates a richer memory trace” that aids recall, the authors write in the Journal of Experimental Psychology: Learning, Memory, and Cognition. “By contrast, lexical errors, in addition to [target words] are recalled significantly less, and are best forgotten in service of older adults’ memory for correct information.”

Quick Studies

How to Avoid Choking at Your Next Big Game


Brick! (Photo: georgeparrilla/Flickr)

A team of neuroscientists tried to figure out why we choke and, in the process, stumbled on a practical tip.

Maybe you’ve been there. At the free throw line attempting the game-winning shot, or making a presentation in a key business meeting. It’s up to you to make the save or blow the win, and now the fear comes over you. You’re about to choke. Fear not, for neuroscientists may have a surprising solution: if you’re someone who feels the agony of defeat most strongly, embrace it and think about what you have to lose.

Scientists used to think choking was a figment of athletes’ imaginations, though more recent research suggests it’s a real thing. But buzzer shots weren’t what motivated researchers at the California Institute of Technology to study the effect.


“We’re interested in studying how incentives influence performance,” and what that can reveal about the limits of our decision-making, says neuroscientist Vikram Chib, now an assistant professor at Johns Hopkins University. Back at Caltech in 2012, Chib and colleagues looked at what happened when people attempted tricky Wii- or Xbox Kinect-style motor-skills tests for up to $100 in cash. Scanning each person’s brain using fMRI, the researchers found that decreasing activity in the ventral striatum, part of the brain’s reward-processing circuit, was a good indicator of the likelihood people were about to choke when the stakes were highest. And loss aversion—how much more strongly a person feels the sting of loss versus the pleasure of gains—was correlated with both the drop in ventral striatum activity and the likelihood of choking.

That got them wondering, Chib says. Reduced ventral striatum activation suggested that when it came time to do the motor-skills test, their subjects were thinking in terms of how much they might lose, despite the fact they had something to gain. What would happen if they re-framed the experiment in terms of losses rather than gains?

To find out, Chib, Shinsuke Shimojo, and John O’Doherty went back to the lab and this time gave 26 people $100 up front and told them they could keep it if they did well on the tests—otherwise, they’d lose money. That change flipped the results. While the most loss-averse people were most likely to choke originally, now it was the least loss-averse who choked, the authors write in the Journal of Neuroscience.

“Overall, participants that were very loss averse performed better when acting to avoid a loss, and those that were of low loss aversion performed better when acting to obtain a gain,” the team writes. Exactly why that happened is a bit unclear, Chib says. Their original hypothesis that ventral striatum activity would flip, just as behavior had, turned out to be false—in fact, that brain region responded just the same to prospective losses as it did to prospective gains. Chib says the team is working to understand that observation, but in the meantime, their results could be of practical value to those who choke. Tailoring a task’s frame as a gain or loss depending on a person’s loss aversion “could potentially mitigate decreases in performance for large incentives,” the authors write.

Quick Studies

Marriage Records Reveal Patterns of Korean Migration Through the Centuries


Seoul. (Photo: clintsharp/Flickr)

Centuries-old genealogies suggest diffusion away from a clan's point of origin, with a general flow toward Seoul.

Scientists have learned a lot about human mobility—from traffic to migration patterns—in recent years, yet there’s a significant limitation. Most data concerns how we move over the course of hours or days—months or years if we’re lucky. Now, researchers using an unusual data set—Korean marriage records in conjunction with clan place names—have opened the door to studying migration over the course of centuries.

Modern life comes with many ways to track our movements in the short term. Traffic cameras can measure how many cars pass through different intersections, and researchers have managed to trace short-term migration using cell phone data. But if you want to follow migration patterns over a few hundred years or so—say, the rural-to-urban migration that has taken place in the United States over the last century—you’re generally out of luck. In most places, there’s just not enough data.


Korea is an exception, according to Sang Hoon Lee and other researchers in Korea, Sweden, the United Kingdom, and the United States. Families there keep genealogical records, called jokbo, that describe births and, more importantly, marriages dating back hundreds of years (though the records contain no details about where someone was born or lived). That makes marriage records important for two reasons. First, families are subdivided into clans according to their geographic origins—Lee, for example, is a member of the “Lee from Hakseong” clan, and fellow author Beom Jun Kim is a member of the “Kim from Gimhae” clan. Second, brides customarily moved from their homes to join grooms in theirs.

Assuming a kind of gravitational pull between clans—brides should be more likely to marry grooms from larger clans and also clans closer to their own homes—the researchers could get a better handle on where the clans were and how they moved over time. Fitting that model to data, the team discovered that physical distance actually had little to do with migration. Instead, clans seemed to diffuse outward from their place of origin.
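
That "gravitational pull" has a compact form: the attraction between a bride's clan and a groom's clan grows with the groom clan's size and decays with the distance between their home regions, and normalizing those attractions gives marriage probabilities. Here is a toy version with invented sizes, distances, and decay exponent—the paper fits its own parameters to the jokbo records:

```python
# Toy gravity model of marriage ties: the pull of groom clan j on a bride
# of clan i grows with clan j's size and decays with distance. Sizes,
# distances, and the decay exponent are invented for illustration; the
# paper fits its own parameters to the jokbo marriage records.
import numpy as np

clan_sizes = np.array([5000, 1200, 300])   # groom clans j
distances = np.array([[ 10, 150, 400],     # km from bride clan i (rows)
                      [150,  10, 260],     # to groom clan j (columns)
                      [400, 260,  10]])
alpha = 1.0  # hypothetical distance-decay exponent

pull = clan_sizes / distances ** alpha            # gravity attraction
prob = pull / pull.sum(axis=1, keepdims=True)     # normalize each row
print(np.round(prob, 2))  # row i: where brides of clan i tend to marry
```

In the fit the authors describe, distance turned out to matter little—an exponent near zero would reproduce that, making the pull depend on clan size alone.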

To complement that analysis, the team next looked at modern census records, which still record clan names and, therefore, how different clans originally from one area are distributed around the country today. That follow-up made it clear that diffusion alone couldn’t explain Korea’s migration patterns—based on their estimates from the jokbo data, it would have taken about 67,000 years for clans to be as geographically mixed as they are today. Rather, clans seem to have flowed toward Seoul, a pattern that appears in the data as a correlation between how spread out a clan is and the distance from the capital to its original location.

Writing in the journal Physical Review X (increasingly a home for studies of social structure), the researchers suggest that diffusion and directed flow, known as convection, could be valuable tools for understanding human migration and especially for comparing migration patterns across countries, times, and cultures.

Quick Studies

Levels of Depression Could Be Evaluated Through Acoustic Measurements of Speech


(Photo: goldilockphotography/Flickr)

Engineers find tell-tale signs in speech patterns of the depressed.

Diagnosing depression can be a fairly subjective endeavor, as it requires physicians and psychiatrists to rely on patients’ reports of symptoms including changes in sleep and appetite, low self-esteem, and a loss of interest in things that used to be enjoyable. Now, researchers report some more quantitative measures based on speech that could aid in diagnosing depression and measuring its severity.

Around one in 10 Americans suffers from depression at any time, according to Centers for Disease Control and Prevention statistics, and, in the worst cases, it can leave people with the illness unable to work, sleep, and enjoy life. Depression also has physical consequences in the form of impaired motor skills, coordination, and a general feeling of sluggishness. In recent years, that’s motivated a wide range of researchers to study different aspects of depression, including experts from disciplines as far afield as electrical engineering.


Yes, electrical engineers. Building on the observation that depression interferes with our motor skills, Saurabh Sahu and Carol Espy-Wilson hypothesized that depression might affect our speech in fundamental ways. The pair focused on four basic acoustic properties: speaking rate and three less-familiar quantities—breathiness, jitter, and shimmer. In speech acoustics, breathiness is relatively high-frequency noise that results from the vocal cords being a bit too relaxed when speaking. Jitter tracks the average variation in the frequency of sound, while shimmer tracks variation in its amplitude—roughly speaking, its volume. The latter three traits are “source traits,” meaning that they’re related to muscles in the vocal cords, and they haven’t been studied much before, Espy-Wilson writes in an email.
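
In their most common "local" form, jitter and shimmer reduce to the same simple statistic: the average absolute difference between consecutive vocal-fold cycles, normalized by the mean. A minimal sketch, assuming per-cycle pitch periods and peak amplitudes have already been extracted from a recording (the values below are invented for illustration):

```python
# Local jitter and shimmer: mean absolute difference between consecutive
# glottal cycles, normalized by the mean. Assumes per-cycle periods (in
# seconds) and peak amplitudes have already been extracted from the
# waveform; the toy values below are invented.
import numpy as np

def local_jitter(periods: np.ndarray) -> float:
    """Cycle-to-cycle variation in pitch period, as a fraction of the mean."""
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods)

def local_shimmer(amplitudes: np.ndarray) -> float:
    """Cycle-to-cycle variation in amplitude, as a fraction of the mean."""
    return np.mean(np.abs(np.diff(amplitudes))) / np.mean(amplitudes)

periods = np.array([0.0100, 0.0102, 0.0099, 0.0101])
amps = np.array([0.80, 0.78, 0.82, 0.79])
print(f"jitter:  {local_jitter(periods):.2%}")
print(f"shimmer: {local_shimmer(amps):.2%}")
```

On this definition, the study's finding that jitter and shimmer fell as depression eased means the tone and volume of successive cycles varied less from one to the next.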

Sahu and Espy-Wilson measured those four properties in samples of people talking about their depression and focused on six individuals in particular whose depression had unambiguously subsided. (The audio samples came from a set of 35 that had been recorded by a separate lab for a related 2007 study of other, more readily apparent speech patterns, such as the number and duration of pauses between words and phrases.)

In keeping with other research on speech and depression, Sahu and Espy-Wilson found that four of those six people spoke a bit faster when their condition had improved. In addition, they found that jitter and shimmer went down—that is, the tone and volume of speech changed less frequently from moment to moment—in five of the six people as their depression eased. Breathiness declined in just three of the six.

Based on those results, Sahu and Espy-Wilson conclude, jitter and shimmer could be valuable indicators of a patient’s level of depression, though it will take a larger study and additional tests to see how well jitter and shimmer predict depression independent of a clinical diagnosis. “We have just shown that these parameters are relevant for the distinction. Our next step will be to build a classifier to see how well we are able to detect whether a speaker is depressed or not,” Espy-Wilson says.

The research will be presented Friday at the Acoustical Society of America’s fall meeting in Indianapolis.

Quick Studies

We’re Not So Great at Rejecting Each Other


(Photo: seranyaphotography/Flickr)

And it's probably something we should work on.

What we want in a relationship and what we end up with are often not the same thing, and the reason is pretty simple, according to a new study. We overestimate our ability to reject people, and we do that because when it comes down to it, we don’t want to hurt anyone’s feelings.

It might not always seem like it, but people generally don’t like being mean to each other—it’s what psychologists call “other-regarding preferences.” But those preferences can have negative consequences. We tend to be more satisfied in relationships with people who come closer to our ideals, and focusing on others’ feelings could keep us from seeking what we truly want.


To test this theory, psychologists at the University of Toronto and Yale University conducted two dating experiments. In the first, the team sat down 132 undergraduates and had them fill out a dating profile, after which they perused three profiles of potential dates. The researchers then randomly selected about half of their experimental subjects and told them that all three people in the profiles were in the lab and available for a meet-up. The rest were told those potential dates weren’t available right then, but they should nonetheless imagine they were. Next, each undergrad selected one person they’d most like to meet, at which point the team showed each participant “a photo of an unattractive person,” as they put it, who they said depicted the person they’d chosen.

Finally, they asked whether each undergrad wanted to go through with trading contact information, and it made a difference whether they’d be rejecting someone in the next room or somewhere far away. When they’d been told their potential date wasn’t around, just 16 percent wanted to get digits. When they thought that the person in the unappealing photo was hanging around outside, the number jumped to 37 percent. In other words, the researchers suggest, people were on average more than twice as willing to go on a date with the unattractive person when they were nearby.

In a second version of the experiment with 99 new students, the team replaced the unattractive photo with additional information, tailored to each subject based on a prior questionnaire. It indicated the person in their favorite profile had a deal-breaking trait or habit—for example, diametrically opposed political beliefs. This time, 46 percent wanted to pursue a date when they thought the person wasn’t around, and a whopping 74 percent wanted one when they thought the person was nearby.

The reason for these discrepancies, post-experiment surveys showed, was that students didn’t want to hurt anyone’s feelings, and that concern was stronger when they thought their possible dates were nearby. That could have consequences down the line, the researchers argue. As flaws become more grating over time, one partner may finally call it quits, causing more hurt than if they’d never gone out in the first place. Alternatively, a desire not to hurt a boyfriend or girlfriend could lead them to stay in a strained relationship longer despite the incompatibility.

Quick Studies

Chronic Fatigue Syndrome and the Brain

fatigue

(Photo: codedragon/Flickr)

Neuroscientists find less—but potentially stronger—white matter in the brains of patients with CFS.

Chronic fatigue syndrome affects some four in a thousand people in the United States—perhaps more. Despite that, progress in understanding the disease has been slow, and researchers still aren’t exactly sure what causes it. Now, a small new study hints that subtle differences in the brain’s white matter might have something to do with the disease.

CFS has a controversial past. For years, health officials denied the disease even existed, dismissing it instead as a manifestation of mental illness. In the last few years, though, more and more researchers have begun taking it seriously. The latest research points to mold-produced toxins as a likely cause—or at least trigger—of CFS, the symptoms of which include impaired memory and concentration, extreme fatigue after exercise, muscle and joint pain, and unrefreshing sleep. Yet exactly how CFS works remains something of a mystery.

Using standard fMRI, the researchers discovered that CFS patients’ brains generally had less white matter—the long, fiber-like nerves that transmit electrical signals between different parts of the brain—than those of control subjects.

One avenue worth exploring is brain imaging, Stanford researcher Michael Zeineh and colleagues write today in the journal Radiology, though previous brain studies of patients with CFS have yielded inconsistent results. To probe deeper, Zeineh and company used standard functional magnetic resonance imaging, or fMRI, along with a technique called diffusion tensor imaging, which helps researchers and doctors examine microscopic properties of brain tissues. Using those methods, the team compared the brains of 15 patients with CFS, identified using the so-called Fukuda definition, and a control group of 14 healthy people who’d been chosen to match the CFS group on traits such as age and gender.

Using standard fMRI, the researchers discovered that CFS patients’ brains generally had less white matter—the long, fiber-like nerves that transmit electrical signals between different parts of the brain—than those of control subjects. On its own, that’s not especially surprising.

What was truly odd was what went on in a white-matter tract called the right arcuate fasciculus, which connects the frontal and temporal lobes of the brain. There, diffusion tensor imaging revealed signs of stronger nerve fibers running along parts of the right arcuate fasciculus, or possibly weaker nerve fibers crossing it—in theory, a sign of a better-connected brain. Odder still, that effect was strongest in patients with the most severe CFS symptoms.
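Though the article doesn’t name the team’s exact metric, diffusion findings like this are typically summarized with fractional anisotropy (FA), a standard diffusion-tensor statistic that rises when water diffuses more readily along one axis of a tract than across it, consistent with the stronger-fibers-along or weaker-fibers-across reading above. For the tensor’s eigenvalues $\lambda_1, \lambda_2, \lambda_3$ and their mean $\bar{\lambda}$:

$$\mathrm{FA} = \sqrt{\frac{3}{2}}\,\frac{\sqrt{(\lambda_1-\bar{\lambda})^2+(\lambda_2-\bar{\lambda})^2+(\lambda_3-\bar{\lambda})^2}}{\sqrt{\lambda_1^2+\lambda_2^2+\lambda_3^2}}$$

FA runs from 0 (diffusion equal in all directions) to 1 (diffusion along a single axis), which is why it serves as a rough proxy for how organized a white-matter tract is.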

It was “an unexpected finding for a disorder characterized by reduced cognitive abilities,” the authors write, though they point to an intriguing recent study suggesting something similar happens in some patients with Alzheimer’s disease.

These findings could help doctors better diagnose severe cases of CFS, and they may also help researchers trying to understand the syndrome’s origins. Still, the team suggests caution. “Overall, this study has a small number of subjects, so all the findings in this study require replication and exploration in a larger group of subjects,” they write.

Quick Studies

Incumbents, Pray for Rain

storm

(Photo: chrisirmo/Flickr)

Come next Tuesday, rain could push voters toward safer, more predictable candidates.

Bad weather can change the course of political history. According to one account, a particularly nasty storm in 1960 kept rural, primarily Republican voters home on Election Day, tipping the balance in favor of John F. Kennedy. News reports disagree on which political party benefits most from bad weather, but they all agree on the cause: Inclement conditions keep people home.

But weather affects more than our ability to make it to the polls. In a recent paper, University of North Carolina political scientist Anna Bassi argues that depressing weather leads to bad moods, and those bad moods lead us to prefer safer, more predictable candidates—namely, incumbents.

While there’s no consensus among researchers about the overall effect of inclement weather on an election, her experiment suggests that a storm or heavy rain really could change the political landscape.

To test that hypothesis, Bassi had 166 participants choose between two hypothetical candidates, whom she dubbed Mr. C, for challenger, and Mr. I, for incumbent. Selecting Mr. C was risky: There was an equal chance of earning either $8.40 or $13.20 (independent of the experimental condition), and which one a subject got would be determined only after he or she had chosen the challenger. Meanwhile, Mr. I was a safe bet: While the actual amount earned varied across experimental conditions, participants always knew beforehand what choosing the incumbent would net them.
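A quick expected-value check, implied by the design though not spelled out in the article, shows what the gamble was worth:

$$\mathbb{E}[\text{Mr. C}] = \tfrac{1}{2}(\$8.40) + \tfrac{1}{2}(\$13.20) = \$10.80$$

So whenever Mr. I’s guaranteed payoff was set below $10.80, picking the incumbent meant trading expected earnings for certainty, the sort of safety premium the experiment was designed to detect.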

Where does weather come in? Before approaching potential subjects, Bassi chose dates for the experimental sessions based on the forecast—one set of sunny days, and one set of cloudy ones. To ensure that she tested for the effects of actual weather rather than forecasts, she constructed two indicators of good weather: whether the day was predominantly sunny and whether rainfall that day was less than the local daily average of about 0.12 inches. Finally, Bassi gauged each participant’s subjective assessment of the weather using a seven-point scale ranging from “Terrible” to “Awesome.”
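As a minimal sketch of how those two binary indicators work (the function and field names here are hypothetical, not from Bassi’s materials; only the sunny/rainfall criteria and the 0.12-inch threshold come from the paper):

```python
# Hypothetical sketch of Bassi's two good-weather indicators.
# Only the criteria and the 0.12-inch threshold come from the paper;
# the names are illustrative.
AVG_DAILY_RAIN_INCHES = 0.12  # local daily average rainfall

def weather_indicators(predominantly_sunny: bool, rainfall_inches: float) -> dict:
    """Return the two binary good-weather indicators for one session day."""
    return {
        "good_weather_sunny": predominantly_sunny,
        "good_weather_low_rain": rainfall_inches < AVG_DAILY_RAIN_INCHES,
    }

# A cloudy day with 0.30 inches of rain fails both indicators.
print(weather_indicators(False, 0.30))
```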

In most cases, bad weather gave the incumbent, Mr. I, a 10 to 20 percent boost, depending on which of the metrics Bassi used to define good and bad weather. Those results held up, Bassi found, when controlling for other factors such as race, gender, and political leanings. A detailed follow-up survey suggested that much of the weather-based difference in choice could be accounted for by mood: When bad weather made for more negative moods, participants chose the safe incumbent more often.

Those results are at odds with media reports, which generally argue that bad weather suppresses turnout, in turn favoring one party or the other, Bassi writes. While there’s no consensus among researchers about the overall effect of inclement weather on an election, her experiment suggests that a storm or heavy rain really could change the political landscape—and in different ways than anyone had previously thought.

Quick Studies

Could Economics Benefit From Computer Science Thinking?

computation

(Photo: 101332430@N03/Flickr)

Computational complexity could offer new insight into old ideas in biology and, yes, even the dismal science.

Economists are sometimes content to ask whether a banking system could be stable or a market could continue to grow. But they and other scientists could benefit from a computational view that asks not just whether the right conditions exist but also how hard it is to find them, according to a commentary published today in Proceedings of the National Academy of Sciences.

The “how hard?” question is about computational complexity, says Christos Papadimitriou, a University of California-Berkeley computer scientist and the commentary’s author. “Nature, [people]—they are doing some kind of computation,” he says, but some computations are easier than others. For nature to compute the best possible kind of life for every environment on Earth is profoundly complex, an observation that informs biologists’ understanding of evolution. In fact, biologists don’t think nature actually finds the optimal kinds of life—it’s far too difficult a problem—an observation that helps them understand why life is so diverse.

Under certain assumptions about the economy, free markets produce stable, socially optimal outcomes, in the sense that no one person can improve his or her lot without hurting someone else.

Societies face a similar problem. For example, under certain assumptions about the economy, free markets produce stable, socially optimal outcomes, in the sense that no one person can improve his or her lot without hurting someone else. Politicians and the occasional novelist have used that claim to promote an unregulated free market.

That makes sense if you don’t contemplate the problem any further, but thinking about markets in terms of computational complexity puts the problem in a different light. Finding an economic outcome that’s stable and benefits everyone is a lot like the evolution problem. It’s not the hardest problem to solve, but as the number of economic players grows, the problem gets exponentially harder—tough even for a computer to deal with.
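To make the “exponentially harder” intuition concrete, here is a toy illustration (mine, not Papadimitriou’s): a brute-force search for a stable outcome must, in the worst case, consider every combination of the players’ choices, and that count explodes as players are added.

```python
# Toy illustration: with s strategies per player and n players,
# brute-force search over joint outcomes examines s**n combinations.
s = 4  # strategies per player (hypothetical)
for n in (2, 5, 10, 20):
    print(f"{n:>2} players -> {s**n:,} joint outcomes")
# 20 players already yields about 1.1 trillion combinations.
```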

That has an important consequence. “You can’t expect a market to get there because you can’t expect a computer to get there,” Papadimitriou says. And if a market can’t get to a stable, socially optimal solution, whether a solution exists becomes a less interesting—or at least quite different—question.

Ben Golub, a Harvard economist who studies social and financial networks, says that’s an important perspective, though it may not always be the most valuable one. “Much of complexity theory is focused on worst-case complexity,” he writes in an email. “So ‘hardness’ results that at first seem very sweeping” might not always apply. For example, real-world markets might be set up—intentionally or otherwise—to make solving certain economic problems computationally easier.

Still, “whatever it is that markets do, they are doing a sort of computation,” Golub says, and Papadimitriou and other computer scientists pose “a provocative, invigorating challenge for economists.” In a way, it’s a return to economists’ roots, too: In the 1950s and ’60s, economists thought long and hard about how societies could reach optimal solutions, or at least an equilibrium. Now, Golub says, “computer science has reinvigorated this hugely important area.”

Quick Studies

Politicians Really Aren’t Better Decision Makers

WH

(Photo: bigberto/Flickr)

Politicians took part in a classic choice experiment but failed to do better than the rest of us.

When it comes to risky and uncertain decisions, politicians have the same basic shortcomings as the rest of us, according to an experimental study presented earlier this month at the 2014 Behavioral Models of Politics Conference. That result undermines a core tenet of representative democracy, namely that our leaders are better at making political decisions than the rest of us.

As a species, we are not particularly good at decision making. Among our foibles, we will often make different choices based on a problem’s wording rather than its underlying structure. Daniel Kahneman and Amos Tversky’s “Asian disease” experiment, a particularly well-known example, goes like this: An exotic disease is coming, and it’ll kill 600 people. You have two options. Choose the first, and 400 people will die for certain. Choose the second, and you take a risk: There’s a one-third chance that no one dies and a two-thirds chance that everyone does.

“Democratic government relies on the delegation of decision making to agents acting under strong incentives. These actors, however, remain just as human as those who elect them.”

In the original experiment, 22 percent of respondents chose the first option while 78 percent chose the second, but that’s not the interesting part. Given the same scenario framed as a choice between saving 200 lives with certainty or taking a one-third chance of saving everyone, Kahneman and Tversky found, the numbers flipped: 72 percent chose the sure thing and only 28 percent took the gamble—a near reversal, even though the options are exactly the same as before.
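The equivalence is easy to verify; this is a worked check rather than part of the original survey text. With 600 lives at stake:

$$\text{Certain: } 200 \text{ saved} \iff 400 \text{ die}; \qquad \text{Risky: } \mathbb{E}[\text{saved}] = \tfrac{1}{3}(600) = 200 \iff \mathbb{E}[\text{deaths}] = \tfrac{2}{3}(600) = 400$$

Both framings describe the same two options; only the wording shifts between lives saved and lives lost.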

That’s a bit troubling when it comes to the average citizen choosing whom to vote for, but it’d be worse if our political leaders were susceptible to the same effect. Alas, they are, according to a team of political scientists led by Peter Loewen. The team reached that conclusion with a straightforward test: They put the Asian disease question to 154 Belgian, Canadian, and Israeli members of parliament. In the loss frame, where subjects decided between 400 certain deaths and a two-thirds chance that everyone dies, 82 percent of Belgian, 68 percent of Israeli, and 79 percent of Canadian MPs chose the risky option, compared with 40, 53, and 34 percent, respectively, when the researchers presented MPs with the less gloomily phrased version.

For comparison, the experimenters posed the same problem to 515 Canadian citizens, who, if anything, were less susceptible to framing effects. “The overall patterns observed for MPs and for citizens is strikingly similar. However, the effect size observed in Canadian MPs … is larger than that estimated among Canadian citizens,” the team writes. It was also larger than estimates of the framing effect in average people.

It’s all a bit of a problem for a common line of reasoning among political scientists and political economists, many of whom assume that re-election concerns or political acumen will render politicians more strategic and also more rational than average Joes. Loewen and company’s results suggest otherwise. “Democratic government relies on the delegation of decision making to agents acting under strong incentives,” they write. “These actors, however, remain just as human as those who elect them.”

Quick Studies

Earliest High-Altitude Settlements Found in Peru

basin1

The Pucuncho Basin. (Photo: Kurt Rademaker)

Discovery suggests humans adapted to high altitude faster than previously thought.

Living at high altitude isn’t easy. The thinner air above 4,000 meters makes for colder temperatures, less oxygen, and less protection from the sun’s harmful ultraviolet rays. Yet humans occupied sites that high and higher in the Peruvian Andes as early as 12,800 years ago, according to a new study. The result could change how archaeologists think about the earliest human inhabitants in South America and how they managed to adapt to extreme environments.

Traveling to 4,000 meters and higher isn’t the big deal it once was. Mountaineers regularly climb 4,392-meter-high Mount Rainier, and miners work just outside the highest city in the world, La Rinconada, Peru, which stands at 5,100 meters. India and Pakistan have even fought battles at 6,100 meters on the disputed Siachen glacier.

Traveling to 4,000 meters and higher isn’t the big deal it once was. Mountaineers regularly climb 4,392-meter-high Mount Rainier, and miners work just outside the highest city in the world, La Rinconada, Peru.

But how, and how early, people actually lived in such extraordinary places is less clear. To some researchers, permanent settlement in the high Andes didn’t make sense: Even if settlers could survive the freezing temperatures and limited oxygen, high altitude raises the body’s metabolism, meaning they’d need to eat more in a place where travel was difficult and food was scarce.

Regardless, Kurt Rademaker and colleagues report they’ve found evidence of two high-altitude settlements at sites in southern Peru. Members of the team had been on the trail of obsidian that turned up in the earliest coastal villages in the region, which were dated to between 12,000 and 13,500 years ago. But the obsidian didn’t originate there: Archaeologists have known for some time that it came from Alca in the Peruvian highlands, strongly suggesting contemporaneous outposts or base camps in the Andes.

Eventually, a combination of obsidian surveys, mapping of likely settlement locations, and reconnaissance led the team to 4,355-meter-high Pucuncho and 4,445-meter-high Cuncaicha. There, researchers found tools, animal and plant remains, and other signs of habitation. Using a carbon-dating variant called accelerator mass spectrometry, the team dated Pucuncho to between 12,800 and 11,500 years ago and Cuncaicha to between 12,400 and 11,800 years ago, roughly a millennium earlier than previously discovered settlements at similar altitudes.
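Dates like these rest on the textbook radiocarbon relation, which converts the fraction of carbon-14 surviving in an organic sample into an age (the team’s published dates would additionally be calibrated against records such as tree rings, a step omitted here):

$$t = -\frac{T_{1/2}}{\ln 2}\,\ln\frac{N}{N_0}, \qquad T_{1/2} \approx 5{,}730 \text{ years}$$

Accelerator mass spectrometry simply measures the surviving fraction $N/N_0$ directly, by counting carbon-14 atoms rather than waiting for them to decay, which is why it works on very small samples.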

The results may help scientists understand the genetic adaptations particular to high-altitude dwellers, especially with regard to how quickly humans were able to adjust biologically to harsh environments. “Our data do not support previous hypotheses, which suggested that climatic amelioration and a lengthy period of human adaptation were necessary for successful human colonization of the high Andes,” the team writes in Science. “As new studies identify potential genetic signatures of high-altitude adaptation in modern Andean populations, comparative genomic, physiologic, and archaeological research will be needed to understand when and how these adaptations evolved.”

“This research assists in finally explaining some of the key archaeological questions regarding early South American occupation,” Washington State University archaeologist Louis Fortin, who has worked with Rademaker in the past but was not involved in the present research, writes in an email. The work, he says, “has brought to light a significant discovery for South American archaeology and specifically high-altitude adaptation and the peopling of South America.”

Quick Studies

My Politicians Are Better Looking Than Yours

clinton

Hotter, if you're a Democrat. (Photo: veni/Flickr)

A new study finds we judge the cover by the book—or at least the party.

Beauty, they say, is in the eyes of the beholdee’s in-group.

At least, that’s what they say if “they” means researchers interested in how we perceive political leaders. According to researchers at Cornell University’s Lab for Experimental Economics and Decision Research, people seem to be judging the cover in part by the content of the book: Democrats find their political heroes more attractive than Republican leaders, and vice versa.

Curious to know, essentially, how hot partisans and average citizens were for their leaders, the lab’s co-director, Kevin Kniffin, and colleagues conducted a simple test—they asked people to say how attractive sets of familiar and unfamiliar political figures were. In theory, if a person’s beauty or handsomeness were a fixed, objective trait—something we all agreed on—a beholder’s partisan leanings ought to have no impact.

Republican aides rated GOP leaders as more attractive than their donkey counterparts, but only by less than half a point.

But that is not what Kniffin and company found. In one version of the experiment, the researchers asked a total of 49 aides working for Wisconsin state legislators—38 Democrats and 11 Republicans, owing to the balance of power in the state—to rate the attractiveness of 24 politicians. That total comprised 16 familiar leaders, including recent Wisconsin gubernatorial and United States Senate candidates, and eight relatively unfamiliar ones from New York.

The aides rated familiar politicians as more attractive than unfamiliar ones overall, but, more importantly, they thought leaders of their own party were more appealing than others. Democratic aides, for example, rated their own leaders at about 5.5 on a nine-point scale, on average, and Republican leaders at about 4.5. For Republican aides, those ratings were 4.2 and 5.2, respectively. Those results depended on aides being familiar with the politicians, though. When they were ogling low-profile politicians from New York, Wisconsin legislative aides found them a point or two less attractive overall, and Democrats rated Republican and Democratic leaders as equally attractive. Republican aides rated GOP leaders as more attractive than their donkey counterparts, but only by less than half a point. These results suggest the aides had to actually know something about whom they were rating for there to be a partisanship-attractiveness effect.

Those findings are at odds with studies that presume physical attractiveness is a “static personal characteristic that influences how people perceive each other,” the authors write in the Leadership Quarterly. “In effect, we find evidence that people are capable—for better or worse—of judging covers by their books, whereby the cover of physical attractiveness is viewed partly and significantly through the lens of organizational membership.”

Quick Studies

That Cigarette Would Make a Great Water Filter

cig

(Photo: 42787780@N04/Flickr)

Clean out the ashtray, add some aluminum oxide, and you've (almost) got yourself a low-cost way to remove arsenic from drinking water.

In further evidence that one person’s trash is another’s treasure—and perhaps lifesaver—researchers in China and Saudi Arabia have devised a way to use cigarette ash to filter arsenic from water. The technique could prove to be a cost-effective way to deal with contaminated drinking water, especially in the developing world.

Odorless and tasteless, arsenic is more than just the stuff of Agatha Christie novels. It’s also a serious public health threat in some parts of the world, notably Bangladesh, where naturally occurring arsenic compounds are abundant in the soil. Even in wealthy countries such as the United States, a mix of natural and industrial sources poses a threat to public health if it goes undetected and unmanaged. Regardless of the source, long-term exposure through drinking water and from crops irrigated with contaminated water can lead to skin lesions and cancer. Fortunately, richer nations have a number of options for dealing with arsenic, including adsorption treatments and methods based on chemical oxidation.

Odorless and tasteless, arsenic is more than just the stuff of Agatha Christie novels. It’s also a serious public health threat in some parts of the world, notably Bangladesh, where naturally occurring arsenic compounds are abundant in the soil.

But in the developing world, finding the money for a state-of-the-art treatment facility isn’t an easy job. Apart from collecting rainwater and boiling it, the simplest and most cost-effective way to treat arsenic-laced water is adsorption: A standard water filter just passes water through a material that attracts and holds arsenic compounds but lets water molecules flow by.

Here’s where cigarette ash comes in. Tobacco is grown throughout the world, and millions of cigarettes are made and smoked every day—a public-health concern in its own right. But cigarette ash is also a good source of water-filtering carbon.

“When people smoke, incomplete combustion emerges as air is sucked through the tobacco within a short time. Thus, a certain amount of activated carbon”—that’s the porous, absorbent stuff in your water filter—“is formed and incorporated into the cigarette soot,” write He Chen and colleagues in Industrial & Engineering Chemistry Research. The team combined that with another material for arsenic removal, aluminum oxide, to create a low-cost, relatively easy-to-make filter.

Neither ash nor aluminum oxide is ideal as a filtering material on its own—ash has to be heat treated to adsorb efficiently, while aluminum oxide tends to clump up or form gels when exposed to water. To get around that, the researchers treated cigarette soot with hydrochloric and nitric acid before mixing the resulting powder with aluminum nitrate, producing an aluminum oxide-carbon composite. The team then tested their concoction on a groundwater sample from Mongolia. With about two grams of aluminum oxide to one gram of cigarette-soot carbon, they removed about 96 percent of the arsenic in the sample, as well as 98 percent of its fluoride ions, and found they could reuse the same mix six times without losing filtering capacity. Finally, something good about smoking cigarettes.
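For scale, here is an illustrative calculation; the 96 percent removal figure comes from the study, but the inlet concentration is hypothetical, and the 10-microgram-per-liter threshold is the World Health Organization’s drinking-water guideline for arsenic:

```python
# Illustrative only: ~96 percent removal is the study's figure;
# the 100 µg/L inlet concentration is a hypothetical example.
WHO_GUIDELINE_UG_PER_L = 10.0  # WHO drinking-water guideline for arsenic

inlet = 100.0                  # hypothetical arsenic level, µg/L
outlet = inlet * (1 - 0.96)    # after one pass through the filter
print(f"{outlet:.1f} µg/L, safe: {outlet <= WHO_GUIDELINE_UG_PER_L}")
# -> 4.0 µg/L, safe: True
```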
