Quick Studies

Levels of Depression Could Be Evaluated Through Acoustic Measurements of Speech

(Photo: goldilockphotography/Flickr)

Engineers find tell-tale signs in speech patterns of the depressed.

Diagnosing depression can be a fairly subjective endeavor, as it requires physicians and psychiatrists to rely on patients’ reports of symptoms including changes in sleep and appetite, low self-esteem, and a loss of interest in things that used to be enjoyable. Now, researchers report more quantitative, speech-based measures that could aid in diagnosing depression and measuring its severity.

Around one in 10 Americans suffers from depression at any time, according to Centers for Disease Control and Prevention statistics, and, in the worst cases, it can leave people with the illness unable to work, sleep, and enjoy life. Depression also has physical consequences in the form of impaired motor skills, coordination, and a general feeling of sluggishness. In recent years, that’s motivated a wide range of researchers to study different aspects of depression, including experts from disciplines as far afield as electrical engineering.

Yes, electrical engineers. Building on the observation that depression interferes with our motor skills, Saurabh Sahu and Carol Espy-Wilson hypothesized that depression might affect our speech in fundamental ways. The pair focused on four basic acoustic properties: speaking rate, and three less-familiar quantities, breathiness, jitter and shimmer. In speech acoustics, breathiness is relatively high-frequency noise that results from the vocal cords being a bit too relaxed when speaking. Jitter tracks the average variation in the frequency of sound, while shimmer tracks variation in its amplitude—roughly speaking, its volume. The latter three traits are “source traits,” meaning that they’re related to muscles in the vocal cords, and haven’t been studied much before, Espy-Wilson writes in an email.
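
For a concrete sense of what those last two measures capture, here is a minimal sketch of how “local” jitter and shimmer are commonly computed, assuming the per-cycle pitch periods and peak amplitudes have already been extracted from a voiced stretch of speech (for instance, with a phonetics tool such as Praat); the exact formulas Sahu and Espy-Wilson used may differ.

```python
# Illustrative sketch only: common "local" jitter and shimmer definitions.
# Assumes `periods` (seconds per glottal cycle) and `amplitudes` (peak amplitude
# per cycle) were already extracted from a voiced stretch of speech.

def local_jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    as a fraction of the mean period (cycle-to-cycle frequency variation)."""
    diffs = [abs(a - b) for a, b in zip(periods[1:], periods[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

def local_shimmer(amplitudes):
    """Mean absolute difference between consecutive cycle amplitudes,
    as a fraction of the mean amplitude (cycle-to-cycle loudness variation)."""
    diffs = [abs(a - b) for a, b in zip(amplitudes[1:], amplitudes[:-1])]
    return (sum(diffs) / len(diffs)) / (sum(amplitudes) / len(amplitudes))

# Toy example: a steadier voice shows lower jitter than a shakier one.
steady = [0.0100, 0.0101, 0.0100, 0.0099, 0.0100]
shaky = [0.0100, 0.0108, 0.0094, 0.0105, 0.0097]
print(local_jitter(steady), local_jitter(shaky))
```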

Sahu and Espy-Wilson measured those four properties in samples of people talking about their depression and focused on six individuals in particular whose depression had unambiguously subsided. (The audio samples came from a set of 35 that had been recorded by a separate lab for a related 2007 study of other, more readily apparent speech patterns, such as the number and duration of pauses between words and phrases.)

In keeping with other research on speech and depression, Sahu and Espy-Wilson found that four of those six people spoke a bit faster when their condition had improved. In addition, they found that jitter and shimmer went down—that is, the tone and volume of speech changed less frequently from moment to moment—in five of the six people as their depression eased. Breathiness declined in just three of the six.

Based on those results, Sahu and Espy-Wilson conclude, jitter and shimmer could be valuable indicators of a patient’s level of depression, though it will take a larger study and additional tests to see how well jitter and shimmer predict depression independent of a clinical diagnosis. “We have just shown that these parameters are relevant for the distinction. Our next step will be to build a classifier to see how well we are able to detect whether a speaker is depressed or not,” Espy-Wilson says.

The research will be presented Friday at the Acoustical Society of America’s fall meeting in Indianapolis.

Quick Studies

We’re Not So Great at Rejecting Each Other

(Photo: seranyaphotography/Flickr)

And it's probably something we should work on.

What we want in a relationship and what we end up with are often not the same thing, and the reason is pretty simple, according to a new study. We overestimate our ability to reject people, and we do that because when it comes down to it, we don’t want to hurt anyone’s feelings.

It might not always seem like it, but people generally don’t like being mean to each other—it’s what psychologists call “other-regarding preferences.” But those preferences can have negative consequences. We tend to be more satisfied in relationships with people who come closer to our ideals, and focusing on others’ feelings could keep us from seeking what we truly want.

To test out this theory, psychologists at the University of Toronto and Yale University conducted two dating experiments. In the first, the team sat down 132 undergraduates and had them fill out a dating profile, after which they perused three profiles of potential dates. The researchers then randomly selected about half of their experimental subjects and told them that all three people in the profiles were in the lab and available for a meet-up. The rest were told those potential dates weren’t available right then, but they should nonetheless imagine they were. Next, each undergrad selected one person they’d most like to meet, at which point the team showed each participant “a photo of an unattractive person,” as they put it, which they said depicted the person they’d chosen.

Finally, they asked whether each undergrad wanted to go through with trading contact information, and it made a difference whether they’d be rejecting someone in the next room or somewhere far away. When they’d been told their potential date wasn’t around, just 16 percent wanted to get digits. When they thought that the person in the unappealing photo was hanging around outside, the number jumped to 37 percent. In other words, the researchers suggest, people were on average more than twice as willing to go on a date with the unattractive person when they were nearby.

In a second version of the experiment with 99 new students, the team replaced the unattractive photo with additional information, tailored to each subject based on a prior questionnaire. It indicated the person in their favorite profile had a deal-breaking trait or habit—for example, diametrically opposed political beliefs. This time, 46 percent wanted to pursue a date when they thought the person wasn’t around, and a whopping 74 percent wanted one when they thought the person was nearby.

The reason for these discrepancies, post-experiment surveys showed, was that students didn’t want to hurt anyone’s feelings, and that concern was stronger when they thought their possible dates were nearby. That could have consequences down the line, the researchers argue. As flaws become more grating over time, one partner may finally call it quits, causing more hurt than if they’d never gone out in the first place. Alternatively, a desire not to hurt a boyfriend or girlfriend could lead them to stay in a strained relationship longer despite the incompatibility.

Quick Studies

Chronic Fatigue Syndrome and the Brain

(Photo: codedragon/Flickr)

Neuroscientists find less—but potentially stronger—white matter in the brains of patients with CFS.

Chronic fatigue syndrome affects as many as four in a thousand people in the United States—perhaps more. Despite that, there’s been slow progress in understanding the disease, and researchers still aren’t exactly sure what causes it. Now, a small new study hints that subtle differences in the brain’s white matter might have something to do with the disease.

CFS has a controversial past. For years, health officials denied it even existed, ironically dismissing it as a sign of mental illness. But in the last few years, more and more researchers have begun taking it seriously. The latest research points to mold-produced toxins as a likely cause—or at least trigger—of CFS, the symptoms of which include impaired memory and concentration, extreme fatigue after exercise, muscle and joint pain, and unrefreshing sleep. Yet exactly how CFS works remains something of a mystery.

One avenue worth exploring is brain imaging, Stanford researcher Michael Zeineh and colleagues write today in the journal Radiology, though previous brain studies of patients with CFS have yielded inconsistent results. To probe deeper, Zeineh and company used standard functional magnetic resonance imaging, or fMRI, along with a technique called diffusion tensor imaging, which helps researchers and doctors examine microscopic properties of brain tissues. Using those methods, the team compared the brains of 15 patients with CFS, identified using the so-called Fukuda definition, and a control group of 14 healthy people who’d been chosen to match the CFS group on traits such as age and gender.

Using standard fMRI, the researchers discovered that CFS patients’ brains generally had less white matter—the long, fiber-like nerves that transmit electrical signals between different parts of the brain—than those of control subjects. On its own, that’s not really that surprising.

What was truly odd was what went on in a white-matter tract called the right arcuate fasciculus, which connects the frontal and temporal lobes of the brain. There, diffusion tensor imaging revealed signs of stronger nerve fibers running along parts of the right arcuate fasciculus, or possibly weaker nerve fibers crossing it—in theory, a sign of a better-connected brain. Odder still, that effect was strongest in patients with the most severe CFS symptoms.

It was “an unexpected finding for a disorder characterized by reduced cognitive abilities,” the authors write, though they point out an intriguing recent study suggesting something similar happening in some patients with Alzheimer’s disease.

These findings could help doctors better diagnose severe cases of CFS, and they may also help researchers trying to understand the syndrome’s origins. Still, the team suggests caution. “Overall, this study has a small number of subjects, so all the findings in this study require replication and exploration in a larger group of subjects,” they write.

Quick Studies

Incumbents, Pray for Rain

(Photo: chrisirmo/Flickr)

Come next Tuesday, rain could push voters toward safer, more predictable candidates.

Bad weather can change the course of political history. According to one account, a particularly nasty storm in 1960 kept rural, primarily Republican voters home on Election Day, tipping the balance in favor of John F. Kennedy. News reports disagree on which political party benefits most from bad weather, but they all agree on the cause: Inclement conditions keep people home.

But weather affects more than our ability to make it to the polls. In a recent paper, University of North Carolina political scientist Anna Bassi argues that depressing weather leads to bad moods, and those bad moods lead us to prefer safer, more predictable candidates—namely, incumbents.

To test that hypothesis, Bassi had 166 participants choose between two hypothetical candidates, whom she dubbed Mr. C, for challenger, and Mr. I, for incumbent. Selecting Mr. C was risky: There was an equal chance of earning either $8.40 or $13.20 (independent of the experimental condition), and which one a subject got would be determined only after he or she had chosen the challenger. Meanwhile, Mr. I was a safe bet: While the actual amount earned varied across experimental conditions, participants always knew beforehand what choosing the incumbent would net them.

Where does weather come in? Before approaching potential subjects, Bassi chose dates for the experimental sessions based on the forecast—one set of sunny days, and one set of cloudy ones. To ensure that she tested for the effects of actual weather rather than forecasts, she constructed two indicators of good weather: whether the day was predominantly sunny and whether rainfall that day was less than the local daily average of about 0.12 inches. Finally, Bassi gauged each participant’s subjective assessment of the weather using a seven-point scale, ranging from “Terrible” to “Awesome.”

In most cases, bad weather gave the incumbent, Mr. I, a 10 to 20 percent boost, depending on which of the metrics Bassi used to define good and bad weather. Those results held up, Bassi found, when controlling for other factors such as race, gender, and political leanings. A detailed follow-up survey suggested that much of the weather-based difference in choice could be accounted for by mood—when bad weather made for more negative moods, that led participants to choose the safe incumbent more often.

Those results are at odds with media reports, which generally argue that bad weather suppresses turnout, in turn, favoring one party or the other, Bassi writes. While there’s no consensus among researchers about the overall effect of inclement weather on an election, her experiment suggests that a storm or heavy rain really could change the political landscape—and in different ways than anyone had previously thought.

Quick Studies

Could Economics Benefit From Computer Science Thinking?

(Photo: 101332430@N03/Flickr)

Computational complexity could offer new insight into old ideas in biology and, yes, even the dismal science.

Economists are sometimes content asking whether or not a banking system could be stable or a market could continue to grow. But they and other scientists could benefit from a computational view that asks not just whether the right conditions exist but also how hard it is to find them, according to a commentary published today in Proceedings of the National Academy of Sciences.

The “how hard?” question is about computational complexity, says Christos Papadimitriou, a University of California-Berkeley computer scientist and the commentary’s author. “Nature, [people]—they are doing some kind of computation,” he says, but some computations are easier than others. For nature to compute the best possible kind of life for every environment on Earth is profoundly complex, an observation that informs biologists’ understanding of evolution. In fact, biologists don’t think nature actually finds the optimal kinds of life—it’s far too difficult a problem—an observation that helps them understand why life is so diverse.

Societies face a similar problem. For example, under certain assumptions about the economy, free markets produce stable, socially optimal outcomes, in the sense that no one person can improve his or her lot without hurting someone else. Politicians and the occasional novelist have used that claim to promote an unregulated free market.

That makes sense if you don’t contemplate the problem any further, but thinking about markets in terms of computational complexity puts the problem in a different light. Finding an economic outcome that’s stable and benefits everyone is a lot like the evolution problem. It’s not the hardest problem to solve, but as the number of economic players grows, the problem gets exponentially harder—tough even for a computer to deal with.
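
To see why the problem blows up, consider a hypothetical back-of-the-envelope count (an illustration of the combinatorial growth, not Papadimitriou’s formal argument): if each participant in a market chooses among just a handful of options, the number of joint outcomes a brute-force search would have to sift through grows exponentially with the number of participants.

```python
# Hypothetical illustration of combinatorial blow-up, not the paper's formal argument.
# If each of n participants picks one of k options, a brute-force search over
# joint outcomes has k**n possibilities to examine.

def joint_outcomes(n_players, k_options):
    return k_options ** n_players

for n in (5, 20, 50, 100):
    print(f"{n:>3} players, 4 options each: {joint_outcomes(n, 4):.2e} joint outcomes")
```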

That has an important consequence. “You can’t expect a market to get there because you can’t expect a computer to get there,” Papadimitriou says. And if a market can’t get to a stable, socially optimal solution, whether or not a solution exists becomes a less interesting—or at least quite different—question.

Ben Golub, a Harvard economist who studies social and financial networks, says that’s an important perspective, though it may not always be the most valuable one. “Much of complexity theory is focused on worst-case complexity,” he writes in an email. “So ‘hardness’ results that at first seem very sweeping” might not always apply. For example, real-world markets might be set up—intentionally or otherwise—to make solving certain economic problems computationally easier.

Still, “whatever it is that markets do, they are doing a sort of computation,” Golub says, and Papadimitriou and other computer scientists pose “a provocative, invigorating challenge for economists.” In a way, it’s a return to economists’ roots, too: In the 1950s and ’60s, economists thought long and hard about how societies could reach optimal solutions, or at least an equilibrium. Now, Golub says, “computer science has reinvigorated this hugely important area.”

Quick Studies

Politicians Really Aren’t Better Decision Makers

(Photo: bigberto/Flickr)

Politicians took part in a classic choice experiment but failed to do better than the rest of us.

When it comes to risky and uncertain decisions, politicians have the same basic shortcomings as the rest of us, according to an experimental study presented earlier this month at the 2014 Behavioral Models of Politics Conference. That result undermines a core tenet of representative democracy, namely that our leaders are better at making political decisions than the rest of us.

As a species, we are not particularly good at decision making. Among our foibles, we will often make different choices based on a problem’s wording rather than its underlying structure. Daniel Kahneman and Amos Tversky’s “Asian disease” experiment, a particularly well-known example, goes like this: An exotic disease is coming, and it’s expected to kill 600 people. You have two options. Choose the first, and 400 people will die. Choose the second, and you take a risk: There’s a two-thirds chance that everyone dies and a one-third chance that no one does.

In the original experiment, 22 percent of people surveyed chose the first option while 78 percent chose the second, but that’s not the interesting part. Given a choice between saving 200 lives with certainty or a one-third chance of saving everyone, Kahneman and Tversky found, 72 percent chose the first option while 28 percent chose the second—nearly the reverse, even though, in terms of outcomes, the choice is exactly the same as before.
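
The surprise is easy to verify with a little arithmetic (a quick check, not part of the study): in expected-value terms, the two framings describe exactly the same pair of options.

```python
# The two framings of the "Asian disease" problem are numerically equivalent.
TOTAL = 600

# Gain frame: save 200 for sure, or take a 1/3 chance of saving all 600.
sure_saved = 200
expected_saved_gamble = (1 / 3) * TOTAL + (2 / 3) * 0   # = 200 lives saved on average

# Loss frame: 400 die for sure, or take a 2/3 chance that all 600 die.
sure_dead = 400
expected_dead_gamble = (2 / 3) * TOTAL + (1 / 3) * 0    # = 400 deaths on average

print(sure_saved, expected_saved_gamble)   # 200 200.0
print(sure_dead, expected_dead_gamble)     # 400 400.0
```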

That’s a bit troubling when it comes to the average citizen choosing whom to vote for, but it’d be worse if our political leaders were susceptible to the same effect. Alas, they are, according to a team of political scientists led by Peter Loewen. The team reached that conclusion with a straightforward test: They put the Asian disease question to 154 Belgian, Canadian, and Israeli members of parliament. In the loss frame, where subjects decided between 400 certain deaths and a two-thirds chance that everyone dies, 82 percent of Belgian, 68 percent of Israeli, and 79 percent of Canadian MPs chose the risky option, compared with 40, 53, and 34 percent, respectively, when the researchers presented MPs with the less gloomily phrased version.

For comparison, the experimenters posed the same problem to 515 Canadian citizens, who, if anything, were less susceptible to framing effects. “The overall patterns observed for MPs and for citizens is strikingly similar. However, the effect size observed in Canadian MPs … is larger than that estimated among Canadian citizens,” the team writes. It was also larger than estimates of the framing effect in average people.

It’s all a bit of a problem for a common line of reasoning among political scientists and political economists, many of whom assume that re-election concerns or political acumen will render politicians more strategic and also more rational than average Joes. Loewen and company’s results suggest otherwise. “Democratic government relies on the delegation of decision making to agents acting under strong incentives,” they write. “These actors, however, remain just as human as those who elect them.”

Quick Studies

Earliest High-Altitude Settlements Found in Peru

The Pucuncho Basin. (Photo: Kurt Rademaker)

Discovery suggests humans adapted to high altitude faster than previously thought.

Living at high altitude isn’t easy. The thinner air above 4,000 meters makes for colder temperatures, less oxygen, and less protection from the sun’s harmful ultraviolet rays. Yet humans occupied sites that high and higher in the Peruvian Andes as early as 12,800 years ago, according to a new study. The result could change how archaeologists think about the earliest human inhabitants in South America and how they managed to adapt to extreme environments.

Traveling to 4,000 meters and higher isn’t such a big deal as it once was. Mountaineers regularly climb 4,392-meter-high Mount Rainier, and miners work just outside of the highest city in the world, La Rinconada, Peru, which stands at 5,100 meters. India and Pakistan have even fought battles at 6,100 meters on the disputed Siachen glacier.

But how and how early people actually lived in such extraordinary places is less clear. For some, human occupation in the Andes didn’t make any sense. Even if settlers could survive freezing temperatures and limited oxygen, altitude increases metabolism, meaning they’d need to eat more in a place where travel was difficult and food was scarce.

Regardless, Kurt Rademaker and colleagues report they’ve found evidence of two high-altitude settlements at sites in southern Peru. Members of the team had been on the trail of obsidian that turned up in the earliest coastal villages in the region, which were dated to between 12,000 and 13,500 years ago. But the obsidian didn’t originate there. Archaeologists have known for some time that it came from Alca in the Peruvian highlands, strongly suggesting contemporary outposts or base camps in the Andes.

Eventually, a combination of obsidian surveys, mapping of likely settlement locations, and reconnaissance led the team to 4,355-meter-high Pucuncho and 4,445-meter-high Cuncaicha. There, researchers found tools, animal and plant remains, and other signs of habitation. Using a carbon-dating variant called accelerator mass spectrometry, the team dated Pucuncho to between 12,800 and 11,500 years ago and Cuncaicha to between 12,400 and 11,800 years ago, roughly a millennium earlier than previously discovered settlements at similar altitudes.
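
For readers curious how a date comes out of such a measurement: AMS essentially counts how much carbon-14 survives in a sample, and the age follows from the isotope’s known decay rate. The sketch below is a rough, uncalibrated back-of-the-envelope version (the published figures are calibrated calendar ages, which require an extra step not shown here, and the numbers in the example are hypothetical).

```python
import math

# Back-of-the-envelope radiocarbon age from the surviving carbon-14 fraction.
# Real dates like those for Pucuncho and Cuncaicha also go through a calibration
# step (and, by convention, use a slightly different half-life), omitted here.

HALF_LIFE_YEARS = 5730.0  # physical half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    """Age implied by the measured C-14 fraction relative to a modern standard."""
    return (HALF_LIFE_YEARS / math.log(2)) * math.log(1.0 / fraction_remaining)

# A hypothetical sample retaining about 23 percent of its original carbon-14
# works out to roughly 12,000 years old.
print(round(radiocarbon_age(0.23)))
```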

The results may help scientists understand the genetic adaptations particular to high-altitude dwellers, especially with regard to how quickly humans were able to adjust biologically to harsh environments. “Our data do not support previous hypotheses, which suggested that climatic amelioration and a lengthy period of human adaptation were necessary for successful human colonization of the high Andes,” the team writes in Science. “As new studies identify potential genetic signatures of high-altitude adaptation in modern Andean populations, comparative genomic, physiologic, and archaeological research will be needed to understand when and how these adaptations evolved.”

“This research assists in finally explaining some of the key archaeological questions regarding early South American occupation,” Washington State University archaeologist Louis Fortin, who has worked with Rademaker in the past but was not involved in the present research, writes in an email. The work, he says, “has brought to light a significant discovery for South American archaeology and specifically high-altitude adaptation and the peopling of South America.”

Quick Studies

My Politicians Are Better Looking Than Yours

Hotter, if you're a Democrat. (Photo: veni/Flickr)

A new study finds we judge the cover by the book—or at least the party.

Beauty, they say, is in the eyes of the beholdee’s in-group.

At least, that’s what they say if “they” means researchers interested in how we perceive political leaders. According to researchers at Cornell University’s Lab for Experimental Economics and Decision Research, people seem to be judging the cover in part by the content of the book: Democrats find their political heroes more attractive than Republican leaders, and vice versa.

Curious to know, essentially, how hot for their leaders partisans and average citizens were, the lab’s co-director, Kevin Kniffin, and colleagues conducted a simple test—they asked people to say how attractive sets of familiar and unfamiliar political figures were. In theory, if a person’s beauty or handsomeness were a fixed, objective trait of an individual—something we all agreed on—a beholder’s partisan leanings ought to have no impact.

But that is not what Kniffin and company found. In one version of the experiment, the researchers asked a total of 49 aides working for Wisconsin state legislators—38 Democrats and 11 Republicans, owing to the balance of power in the state—to rate the attractiveness of 24 politicians. That total comprised 16 familiar leaders, including recent Wisconsin gubernatorial and United States Senate candidates, and eight relatively unfamiliar ones from New York.

The aides rated familiar politicians as more attractive than unfamiliar ones overall, but, more importantly, they thought leaders of their own party were more appealing than others. Democratic aides, for example, rated their leaders on average about a 5.5 on a nine-point scale and rated Republican leaders about 4.5. For Republican aides, those ratings were 4.2 and 5.2, respectively. Those results depended on aides being familiar with those politicians, though. When they were ogling low-profile politicians from New York, Wisconsin legislative aides found them a point or two less attractive overall, and Democrats rated Republican and Democratic leaders as equally attractive. Republican aides rated GOP leaders as more attractive than their donkey counterparts, but only by less than half a point. These results suggest that the aides had to actually know something about who they were rating for there to be a partisanship-attractiveness effect.

Those findings are at odds with studies that presume physical attractiveness is a “static personal characteristic that influences how people perceive each other,” the authors write in the Leadership Quarterly. “In effect, we find evidence that people are capable—for better or worse—of judging covers by their books, whereby the cover of physical attractiveness is viewed partly and significantly through the lens of organizational membership.”

Quick Studies

That Cigarette Would Make a Great Water Filter

(Photo: 42787780@N04/Flickr)

Clean out the ashtray, add some aluminum oxide, and you've (almost) got yourself a low-cost way to remove arsenic from drinking water.

In further evidence that one person’s trash is another’s treasure—and perhaps life saver—researchers in China and Saudi Arabia have devised a way to use cigarette ash to filter arsenic from water. The technique could prove to be a cost-effective way to deal with contaminated drinking water, especially in the developing world.

Odorless and tasteless, arsenic is more than just the stuff of Agatha Christie novels. It’s also a serious public health threat in some parts of the world, notably Bangladesh, where naturally occurring arsenic compounds are abundant in the soil. Even in wealthy countries such as the United States, a mix of natural and industrial sources poses a threat to public health if it goes undetected and unmanaged. Regardless of the source, long-term exposure through drinking water and from crops irrigated with contaminated water can lead to skin lesions and cancer. Fortunately, richer nations have a number of options for dealing with arsenic, including adsorption treatments and methods based on chemical oxidation.

But in the developing world, finding the money for a state-of-the-art treatment facility isn’t an easy job. Apart from collecting rain water and boiling it, the simplest and most cost-effective way to treat arsenic-laced water is adsorption. A standard water filter just passes water through a material that attracts arsenic compounds but lets water molecules flow by.

Here’s where cigarette ash comes in. Tobacco is grown throughout the world, and millions of cigarettes are made and smoked every day—a public-health concern in its own right. But it’s also a good source of water-filtering carbon.

“When people smoke, incomplete combustion emerges as air is sucked through the tobacco within a short time. Thus, a certain amount of activated carbon”—that’s the porous, absorbent stuff in your water filter—“is formed and incorporated into the cigarette soot,” write He Chen and colleagues in Industrial & Engineering Chemistry Research. The team combined that with another material for arsenic removal, aluminum oxide, to create a low-cost, relatively easy-to-make filter.

Neither ash nor aluminum oxide is ideal as a filtering material—ash has to be heat treated to be an efficient water filter, while aluminum oxide tends to clump up or form gels when exposed to water. To get around that, the researchers treated cigarette soot with hydrochloric and nitric acid before mixing the resulting powder with aluminum nitrate to produce an aluminum oxide-carbon mix. The team then tested their concoction on a groundwater sample from Mongolia. With about two grams of aluminum oxide to one gram of cigarette-soot carbon, the team removed about 96 percent of the arsenic in the sample, as well as 98 percent of fluoride ions. They also found that they could use the same mix six times without losing filtering capacity. Finally, something good about smoking cigarettes.

Quick Studies

Love and Hate in Israel and Palestine

A warehouse destroyed by the Israeli army and Hamas. (Photo: un_photo/Flickr)

Psychologists find that parties to a conflict think they're motivated by love while their enemies are motivated by hate.

Not long after the September 11th attacks, a Newsweek cover story famously purported to explain “why they hate us,” they being militant Muslim extremists. But there might be a problem with that thinking. According to a new study, it’s not hatred of outsiders that motivates opposing sides in a conflict. To some extent, it’s love for each other.

Psychologists have known for quite a while now that we interpret others’ actions rather differently than our own, even if they’re the very same actions. There’s a simple reason for that difference, variously called the fundamental attribution error and correspondence bias. While we experience our own internal responses to the situations we encounter, we can only see the external actions that others take. It’s not that we’re incapable of empathy—who hasn’t heard the aphorism that you can’t know someone until you’ve walked a mile in their shoes?—but it’s harder when we don’t know what others are thinking and feeling. It’s harder still when political or military conflict is involved: That idea is often illustrated by the hostile media effect, in which both sides in a dispute view media coverage as biased against them.

That’s all fairly well understood, but psychologists Adam Waytz, Liane Young, and Jeremy Ginges wondered whether they could get at the specific emotions that conflicting parties felt toward their comrades and their enemies. To do so, they first asked 285 Americans to rate, on seven-point scales, whether either their political party or the opposing one was motivated by love (empathy, compassion, and kindness) or hate (dislike, indifference, or hatred toward those in the other party). On average, study participants rated their own parties as being 23 percent more motivated by love than hate, while they rated those in other parties as being 29 percent more motivated by hate than love.

Things got a bit more interesting when the team asked similar questions of 497 Israelis and 1,266 Palestinians. Asked why some of their fellow citizens supported bombing in Gaza, Israelis reported they were 35 percent more motivated by love for fellow Israelis than hate, while they thought just about the reverse for Palestinians’ motivations for firing rockets into Israel. Palestinians, meanwhile, ascribed more hate than love to Israelis, though they thought fellow Palestinians were about equally motivated by love and hate. An additional survey of 498 Israelis found that the more they perceived differences in the two parties’ motivations, the less likely they were to support negotiations, vote for a peace deal, or believe that Palestinians would support such a deal.

Such perceptions are “a significant barrier to resolution of intergroup conflict,” the authors write in a paper published today in Proceedings of the National Academy of Sciences. From an additional study of Republicans and Democrats, the team concludes that monetary incentives might ameliorate the problem, though “the strength of this particular intervention might vary for conflicts of a more violent and volatile nature.”

Quick Studies

How to Water a Farm in Sandy Ground

(Photo: angeloangelo/Flickr)

Physicists investigate how to grow food more efficiently in fine-grained soil.

Sand is probably not what you think of when you think of growing food—it’s supposed to be at the beach or in the desert, not in your garden. But as our population grows and the demand for both food and water rises, farmers will likely have to find ways to grow crops efficiently in less-than-ideal soil, while conserving as much water as possible. Now, physicists have developed some recommendations for dealing with one challenge of growing in sand—water retention—including pre-wetting the soil and mixing absorbent particles into it.

The problem with sandy soil is well known to gardeners and farmers alike: Unlike soils made up of smaller particles and more varied, irregular shapes, sand doesn’t hold water and nutrients well. Instead, water falling on sand tends to form a shallow, uniform top layer that collects into narrow vertical channels as water travels deeper—much like water droplets forming and then falling from the edge of a wet roof. As a result, most of the sand never gets wet, and the parts that do drain quickly, making it difficult for plants to take in water.

Searching for a solution, Yuli Wei and colleagues at the University of Pennsylvania and the Complex Assemblies of Soft Matter lab first performed a series of experiments designed to probe the effects of soil particle size and water flow rate on soil irrigation. Controlling those factors in real soil, however, is difficult to say the least, so the team used boxes of tiny glass beads, ranging in size from 180 micrometers to a millimeter in diameter, as a stand-in for sandy soil, and devised a sprinkler system that would allow them to control both how much water fell and how fast the droplets were moving when they hit the soil. While channels formed regardless of particle size, irrigation flow rate, and droplet speed, the team found that using larger beads resulted in narrower water channels that formed closer to the surface, while soils comprising smaller particles led to much wider channels and a deeper layer of water near the surface—a consequence of increased capillary forces in finer soils. The team found similar results when increasing the irrigation rate, though droplet speed had no effect.
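
The capillary point is easy to put numbers on with the textbook Young-Laplace relation (an illustration from basic physics, not a calculation from the study): the suction pulling water into the gaps between grains scales inversely with pore size, so halving the grain size roughly doubles the capillary pull.

```python
import math

# Why finer grains exert stronger capillary forces, via the Young-Laplace relation:
# capillary pressure = 2 * surface_tension * cos(contact_angle) / pore_radius.
# The constants and the pore-size rule of thumb below are rough assumptions for
# water on glass beads, not values taken from the study.

SURFACE_TENSION = 0.072  # N/m, water at room temperature
CONTACT_ANGLE = 0.0      # radians; water wets clean glass almost perfectly

def capillary_pressure(pore_radius_m):
    return 2 * SURFACE_TENSION * math.cos(CONTACT_ANGLE) / pore_radius_m

for bead_diameter_um in (180, 500, 1000):
    # Rough rule of thumb: pore radius is on the order of a fifth of the bead radius.
    pore_radius_m = 0.2 * (bead_diameter_um * 1e-6) / 2
    print(f"{bead_diameter_um:>4} um beads -> ~{capillary_pressure(pore_radius_m):,.0f} Pa of capillary suction")
```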

Next, the experimenters turned to controlling the formation of water channels and improving overall irrigation. One technique that worked was thoroughly mixing a small amount of water into the soil before turning on the sprinklers (or before the rain came). Even in small amounts, pre-wetting was enough to encourage water sprinkled on the surface to diffuse through the soil rather than form narrow channels. A potential alternative to pre-wetting is to add a layer of super-absorbent hydrogel particles—a mix of potassium acrylate and acrylamide—underneath the surface. As the hydrogel wets, its particles swell and form a kind of dam, so that the soil above slowly fills as water falls. Either way, there’s more water in the right places for crops to grow.

Quick Studies

Unlocking Consciousness

(Photo: 59898141@N06/Flickr)

A study of vegetative patients closes in on the nature of consciousness.

You wake up in a hospital, eyes open but completely unable to move. You can’t even blink. How’s anyone to know you’re conscious, let alone aware?

While that’s an extreme case, there’s a real-world need to understand whether patients with more common disorders of consciousness, such as those in a vegetative state, are aware and thinking. Now, researchers report they’ve taken a step in that direction by comparing patterns of electrical activity in the brains of healthy adults with those of patients suffering from a consciousness disorder.

Inspired by research indicating a small number of patients might be at least marginally aware and able to control their thoughts despite showing no outward signs of consciousness, Srivas Chennu and an international team of neuroscientists decided to see whether they could identify consciousness using electroencephalography, or EEG, which tracks the brain’s oscillating electrical signals as measured on the scalp. The team collected 10 minutes of EEG data from 91 points on the heads of 32 patients in a vegetative or minimally conscious state as well as a control group of 26 healthy men and women. Next, they broke the data down according to the electrical signals’ frequency bands, commonly known as delta (0–4 Hertz, or cycles per second), theta (4–8 Hz), and alpha (8–13 Hz).

On the first pass through the data, the team noticed that their patients tended to have stronger delta-band and weaker alpha-band signals compared to those of healthy people, but really getting a handle on the data required a more sophisticated approach. First, the researchers computed correlations between delta, theta, and alpha-band signals from each pair of the 91 EEG measurement points. From those correlations, they next built a connectivity network, a graph showing the strongest correlations—hence the strongest connections—between different parts of the brain. Finally, they compared graphs from the patients to those of healthy people using measures such as clustering, which describes how dense the connections are between a subset of points in the brain, and modularity, a measure of how easily one could break the graph down into smaller components by cutting individual links.
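
A minimal sketch of that pipeline, assuming the band-filtered EEG is already in hand as a channels-by-samples array and leaning on off-the-shelf graph measures; the study’s actual connectivity metric, thresholding, and statistics are considerably more involved.

```python
import numpy as np
import networkx as nx

# Sketch: correlate band-limited EEG channels, keep the strongest links,
# then summarize the resulting network with clustering and modularity.
# `alpha` below is toy stand-in data for 91 channels of alpha-band-filtered EEG.

rng = np.random.default_rng(0)
alpha = rng.standard_normal((91, 5000))

corr = np.abs(np.corrcoef(alpha))      # channel-by-channel correlation matrix
np.fill_diagonal(corr, 0.0)

threshold = np.quantile(corr, 0.90)    # keep only the strongest ~10% of links
adjacency = (corr >= threshold).astype(int)

graph = nx.from_numpy_array(adjacency)
clustering = nx.average_clustering(graph)  # how densely a node's neighbors interconnect
communities = nx.algorithms.community.greedy_modularity_communities(graph)
modularity = nx.algorithms.community.modularity(graph, communities)  # how cleanly the graph splits apart

print(f"clustering: {clustering:.3f}, modularity: {modularity:.3f}")
```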

Healthy subjects, the neuroscientists found, had more clustered and less modular alpha-band networks than patients, and alpha networks also spanned a greater physical distance in healthy people than in patients. (Though it’s a bit counter-intuitive, clustering and modularity don’t actually take physical distance into account.) Much the opposite was true of delta- and theta-band networks: These were more clustered and less modular in patients than in healthy controls, although they didn’t extend as far across the brain as alpha networks did in control subjects. Finally, behavioral and EEG data combined suggested that the more responsive a patient was, the more that patient’s alpha-band connectivity resembled a healthy person’s.

Those last two points could be key, the authors argue today in PLoS Computational Biology. The shift to delta- and theta-band networks in patients with consciousness disorders doesn’t bring quite the same pattern of connectivity as healthy alpha-band networks, suggesting that it’s the long-range connections that underlie consciousness.

Quick Studies

Advice for Emergency Alert Systems: Don’t Cry Wolf

(Photo: thompsonrivers/Flickr)

A survey finds college students don't always take alerts seriously.

Text and email-based campus emergency alert systems seem like a great idea, and in the worst circumstances they might help save lives. But a new experiment suggests a serious downside: If administrators aren’t careful, students, faculty, and staff might think they’re crying wolf.

Sending countless alerts about every misplaced backpack or suspicious character “is a huge issue. If people don’t like the system, they’re not going to trust it,” says Daphne Kopel, lead author of a new study on students’ perceptions of emergency alert systems and a graduate student at the University of Central Florida. “Overexposure can really do harm.”

Technically, the Department of Education has required alert systems since the Clery Act of 1990, but those systems came into sharper focus—and under tighter scrutiny—after a mass shooting at Virginia Tech in 2007. Amendments to the Clery Act in 2008 led some schools to develop elaborate warning systems, complete with text alerts, remote-controlled locks, and more.

But that technology won’t make a difference if no one takes it seriously, an issue that first occurred to Kopel when she was thinking about her response to a fire in her building, she says. Kopel had set to work on a survey of University of Central Florida students’ attitudes toward their school’s alert system—“we were getting so many alerts … people were kind of laughing about it”—when fears that a serious attack might be underway shut the campus down.

In the wake of the attack, Kopel worked with Valerie Sims and Matthew Chin to probe students’ thoughts about their alert system and how the planned attack had changed them. Surveying 148 UCF students, the team found modest but nonetheless important changes. Students agreed they liked the alert system a bit more after the attack than they had before, and they were less likely to feel that UCF was a safe campus than they had been beforehand. Tellingly, survey takers were 15 percent less likely to report hearing others openly mock the alert system, suggesting that students took the system more seriously after the interrupted attack. Perceptions also varied based on gender—the alert system made women feel safer than it did men, for example—as well as personality traits such as agreeableness and imagination.

Kopel says the team is planning follow-up surveys at the beginning and end of each semester. That should address one drawback of the study—students’ recollections of how they felt months ago or prior to an emergency aren’t the most reliable—as well as provide feedback to administrators. Those surveys will also help with Kopel’s broader goal of understanding how students and others categorize and respond to different kinds of emergencies.

The team will present their research later this month at the Human Factors and Ergonomics Society Annual Meeting in Chicago.

Quick Studies

Brain’s Reward Center Does More Than Manage Rewards

The nucleus accumbens is highlighted in red. (Photo: Wikimedia Commons)

Nucleus accumbens tracks many different connections in the world, a new rat study suggests.

One of the keys to modern thinking about choices and values is a small part of the brain called the nucleus accumbens, which is part of the ventral striatum, itself part of the basal ganglia. It’s sometimes called the reward center of the brain, and neuroeconomists generally believe the nucleus accumbens is responsible for recognizing and processing the rewards and punishments that follow from our actions.

Like much of what you read about neuroscience these days, that’s only partly right. Nucleus accumbens isn’t just a reward processor, according to a recent study. It’s more like a coincidence processor.

Interest in nucleus accumbens, or NAc, grew in the 1990s, when monkey studies suggested that getting a sip of juice as a reward for a correct response to a problem caused dopamine neurons in and around monkeys’ NAc to fire. FMRI studies showed something similar happening in our brains, too, leading theorists to suggest that NAc was doing some kind of reinforcement learning: When a reward follows an action, that action gets reinforced, and we’re more likely to take that action in the future. But humans and other animals can learn all kinds of associations—things drop when we let go, October is crunch-time in baseball, and the better your Skee Ball score, the more prize tickets you get. Even dogs can learn food’s coming when a bell rings. Curiously, there are few studies that look at whether NAc might be recording all of these associations, not just the action-reward ones.

To press the question, Dominic Cerri, Michael Saddoris, and Regina Carelli conducted a standard experiment. First, they taught 20 rats a variety of second-order stimulus associations. For example, a rat might first learn that white noise followed right after a light flashed, and in a second session, they’d learn that a food pellet would be available following white noise. Finally, the team tested the rats—if they went looking for food after a flashing light, but not other signals, they’d learned the light-noise-food pattern. All the while, the researchers monitored NAc activity using electric probes implanted in the rats’ brains.*

The team found that NAc neurons in the rats fired not only in response to the food pellets in the second session, but also during the first training session when there were no rewards of any kind. Next, the team divided the animals into groups of good and poor learners based on how well they’d performed during the test phase, a process by which they found that good learners’ NAc neurons fired more during learning than either poor learners’ brains or those of a control group. In other words, NAc does more than just encode rewards—it tracks other sorts of connections in the world, too. Though questions remain, that insight might throw a small but intriguing wrench into our understanding of how rat—and maybe human—choices work.*


*UPDATE — October 14, 2014: We originally wrote that the experiment was conducted with mice. That language, in both the body of the post and subheadline, has been corrected to rats.

Quick Studies

A City’s Fingerprints Lie in Its Streets and Alleyways

(Photo: robgross/Flickr)

Researchers propose another way to analyze the character and evolution of cities.

Cities touch you, each in its own special way. Walk its streets, and Seattle feels different from Berlin or Johannesburg or Tokyo. Each has its own fingerprint.

Still, those fingerprints have just four types, exemplified by Buenos Aires, Athens, New Orleans, and Mogadishu, argue researchers Rémi Louf and Marc Barthelemy.

Louf and Barthelemy trained in physics but have an ongoing interest in how one part of a city’s core infrastructure—its streets—evolves and how that evolution relates to, say, where people live in relation to work. It’s part of an emerging science of cities aimed at understanding how an urban environment’s physical, social, and economic networks evolve over time, though much of the current research is somewhat abstract. In a typical model, street networks are just that—abstract representations that could be Paris or a map of the brain. But real streets are grounded in real cities, which take on attributes like size and shape. How should those factors be taken into account? How could different street plans change the way cities work?

No one’s quite prepared to answer those questions yet, but Louf and Barthelemy decided to take a first step by at least categorizing what was out there in the maps of 131 cities on six continents. Rather than study the street network itself, they considered the shapes and sizes of the blocks that streets form. Following academic geographers’ lead, they computed shape factors: the ratio of a block’s area to that of the smallest circle that could fit around it. Then, Louf and Barthelemy categorized cities based on their blocks’ distribution of size and shape.
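
For rectangular blocks the shape factor can be worked out by hand, which makes for a quick illustrative sketch (the smallest circle around a rectangle passes through its corners; irregular real-world blocks would need a general minimum-enclosing-circle routine, and the block dimensions below are made up):

```python
import math

# Shape factor: block area divided by the area of the smallest enclosing circle.
# For a rectangle, that circle's diameter is the rectangle's diagonal.

def rectangle_shape_factor(width_m, height_m):
    block_area = width_m * height_m
    circle_radius = math.hypot(width_m, height_m) / 2
    return block_area / (math.pi * circle_radius ** 2)

print(round(rectangle_shape_factor(100, 100), 2))  # square block: ~0.64
print(round(rectangle_shape_factor(250, 40), 2))   # long, skinny block: ~0.20
```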

The analysis revealed four main city fingerprints. The largest group, comprising 102 cities, consisted of New Orleans-like cities with the largest city blocks in a wide range of shapes. Athens-like cities made up another 27 and contained generally smaller blocks—less than about 10,000 square meters, or, for a square block, about 300 feet on a side—but a wide variety of shapes. Only two cities remained: Buenos Aires, with medium-sized square and rectangular blocks, and Mogadishu, which features almost entirely small, square blocks.

Interestingly, every major city the team looked at in North America and Europe fell into the NOLA category except Vancouver and Athens. Europe and the U.S. have their own particular subtypes, but a few U.S. cities, including Portland, Oregon, and Washington, D.C., fall more into the European mold.

Those sorts of differences, the authors suggest, could be used to better understand how cities are born and evolve. Uniform block sizes, such as those found in New York, “could be the result of planning” while a city like Paris reflects a continual process of building and rebuilding that produces a range of block shapes and sizes.

Quick Studies

When Violins Meet Leaf Analysis

(Photo: land_camera/Flickr)

Techniques used to analyze leaf shapes reveal the subtle evolution of the violin.

Violins are kind of like leaves. They’ve changed over time, driven in part by their designers’ tastes. Violins fall into distinct lineages, recognizable by their shapes, just as leaves from one or another plant would be. And they show signs of a sort of natural selection: Violins look more and more like the ones first created by Antonio Stradivari.

This is according to Dan Chitwood, a biologist who normally studies leaves. Specifically, he studies how leaf shapes have evolved over time and the genetic basis of that evolution. Doing that research means quantifying and tracking shape changes over time, something just as easily applied to Chitwood’s other avocation, the viola.

Chitwood’s first question was whether he could tell the difference between different kinds of string instruments based on their shape while taking overall size out of the equation. To do that, he drew on an auction house’s database of more than 9,000 instruments in the violin family—the viola, cello, bass, and the violin itself—from prominent luthiers over a range of 400 years. Chitwood used those to construct instrument outlines, which he could then compare using a method called linear discriminant analysis.

To understand the idea, imagine constructing a shadow puppet. You could cut the puppet out of a single piece of cardboard or wood, or you could assemble it using a set of elementary shapes. In a similar way, you could describe a violin’s shape as a whole, or you could break it down into more abstract shapes. A double bass looks like an especially broad-bottomed pear, while a lute looks like a squash with some triangle thrown in. While Chitwood’s study uses a more sophisticated set of basic shapes, the goal is the same. By quantifying how much pear, squash, and triangle a shape has, you can quantify the similarities and differences between different instruments’ shapes.
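
Here is a minimal sketch of the discrimination step itself, assuming each instrument’s outline has already been reduced to a row of shape coefficients (the “how much pear, squash, and triangle” numbers); scikit-learn’s linear discriminant analysis and the toy data stand in for whatever implementation and coefficients Chitwood actually used.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Toy stand-in data: one row of outline-shape coefficients per instrument,
# with cellos and basses nudged away from violins and violas so the classes
# differ the way the real outlines do.
rng = np.random.default_rng(1)
n_per_class, n_coeffs = 50, 20
offsets = {"violin": 0.0, "viola": 0.05, "cello": 0.6, "bass": 1.0}
shape_coeffs = np.vstack([rng.standard_normal((n_per_class, n_coeffs)) + off
                          for off in offsets.values()])
labels = np.repeat(list(offsets.keys()), n_per_class)

lda = LinearDiscriminantAnalysis(n_components=2)
projected = lda.fit_transform(shape_coeffs, labels)  # 2-D view that best separates the classes

# Heavily overlapping classes (violin vs. viola) are hard to tell apart;
# well-separated ones (bass vs. everything else) are easy.
print(f"training accuracy: {lda.score(shape_coeffs, labels):.2f}")
```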

Violas and violins, it turns out, are hard to discriminate, but cellos and double basses are distinct. While all four are pear-shaped to some extent, the double bass takes that to another level, complete with a bit of stem where the instrument’s neck attaches to the main body. When Chitwood turned his attention specifically to the roughly 7,000 violins in his database, he found that they fell into four families, each represented by an archetype designed by an actual human family—Maggini, Amati, Stainer, and, of course, Stradivari, whose violins were slightly more bass-like in shape. What’s more, other violins became more like these four over time, and especially more like Strads.

In that respect, Chitwood compares violins to living, evolving organisms, complete with mutations and a sort of survival of the fittest. “Despite using molds, Antonio Stradivari nonetheless innovated new shapes, using a method both faithful to the previous outlines but with the potential to change,” he wrote in the paper. Meanwhile, luthier Jean-Baptiste Vuillaume purposely copied Stradivari’s designs, because those were the ones customers selected.

Quick Studies

Drill Sergeant Bosses Don’t Get the Job Done

(Photo: Marines/Flickr)

Even if they think it's meant to motivate, workers respond badly to workplace abuse.

“I done some checkin’. I looked through your files. I know about your mama. Hey, don’t you eyeball me, boy! I know your father’s a alcoholic and a whore chaser. That’s why you don’t mesh, Mayo, because deep down—don’t you eyeball me boy!—deep down inside, you know the others are better than you. Isn’t that right, Mayo?” That’s Gunnery Sergeant Emil Foley, in an Oscar-winning performance by Louis Gossett Jr. in the 1982 film An Officer and a Gentleman. It’s unclear whether Foley, who came to define the hard and mean yet loving drill sergeant stereotype, is trying to motivate Richard Gere’s Zack Mayo or simply humiliate him into quitting.

In the movie, Mayo completes his training. But in real life, a team of researchers argue, Foley’s behavior could backfire, whatever the intentions: Employees with abusive supervisors are more likely to slack off and more likely to return their bosses’ favor by publicly ridiculing them.

It’s clear that workplace abuse doesn’t make for happy workers, and that can cost businesses quite a lot of money—as much as $24 billion a year in the United States, according to one estimate. But maybe the costs aren’t so bad if employees believe their bosses mean well. Kevin Eschleman and colleagues put the question to 268 people through the StudyResponse Project. Specifically, their survey asked participants how often their supervisors abused them—putting them down in front of others or ridiculing them, for example—as well as how often they’d engaged in counterproductive actions, such as making fun of their bosses at work or simply putting little effort into work. Crucially, they also asked how workers perceived the abuse. Perhaps, the team thought, put-downs and ridicule could work if they were coming from the right place. Indeed, Eschleman says “we thought motivational intent would soften the blow” of abuse.

But it did not. Workers reported about the same frequency of counterproductive behavior at work when they felt abused regardless of what they thought their bosses were thinking. In fact, it mattered more whether abuse was intentional than whether it was hostile or motivational. When employees didn’t think bosses were being particularly hostile or motivational, the amount of abuse had little effect on their behavior. That could be because without intent, workers could reasonably dismiss it. “Maybe it was accident, maybe they were having a bad day,” Eschleman says.

Eschleman says the results could be important for manager training programs and for human resources departments trying to settle disputes related to workplace abuse. Mentioning any kind of intent, positive or negative, he says, “might be the wrong strategy.” Still, Eschleman cautions that his team’s study was small and might not apply in every working environment. “It might be interesting to explore this in the military or in other industries” where lives are on the line, such as medicine, he says.

Quick Studies

Too Hot to Hire

shutterstock_113453149

Don't hate her because she's beautiful. (Photo: Zoom Team/Shutterstock)

Are you an attractive woman looking for a job? Acknowledging your beauty could keep potential employers from discriminating against you.

Beautiful people, as a rule, have it pretty good. They tend to have more self-confidence, higher incomes, and better financial well-being than plain-faced or downright ugly people. They’re also rated as more socially and intellectually competent. But one way attractive women, in particular, suffer is when they apply for jobs that are stereotypically masculine.

While this phenomenon (known as the “beauty is beastly” effect) was first discovered 30 years ago, researchers didn’t exactly scramble to find solutions for this not-so-oppressed minority. Luckily, the modern sexy woman need not despair—a new study shows that simply acknowledging your attractiveness can hold this type of discrimination at bay.

Researchers discovered that acknowledging one’s sex and attractiveness causes evaluators to rate women as more masculine.

It may sound trite, but the “beauty is beastly” effect “demonstrates a subtle form of sex discrimination,” according to a paper currently in press at Organizational Behavior and Human Decision Processes. The researchers used a technique that has been shown to mitigate discrimination against people with disabilities: having candidates simply acknowledge the stereotypes that evaluators may hold.

In the first study, undergraduate students rated the “employment suitability” of four different candidates for a construction job, one of whom was a woman, based on a photo and an interview transcript. Some saw a photo of a beautiful woman, while others saw one of a not-so-beautiful woman. In the interview transcript, the woman said “I know that I don’t look like our typical construction worker,” “I know that there are not a lot of women in this industry,” or neither. In the control condition, where the woman did not acknowledge any stereotypes, the unattractive woman was rated as more suitable for the job than the beautiful woman. However, the attractive woman received significantly higher ratings if she acknowledged either her appearance or her sex than if she didn’t.

The unattractive woman received significantly lower ratings if she acknowledged her appearance, compared to the control condition.


These results were replicated in a similar study with construction workers as participants. So, why does mentioning stereotypes have an effect on evaluators? In a subsequent study, the researchers discovered that acknowledging one’s sex and attractiveness causes evaluators to rate women as more masculine—and presumably, a better fit for the manly construction job. Those who acknowledge stereotypes are also, surprisingly, rated as less counter-communal. Women who are counter-communal (a nicer way to say “bitchy,” as ambitious women are often perceived) violate gender norms and are evaluated less favorably because of it. This double standard is just another example of how women on the job market must tread a fine line between feminine and masculine.

If you’re an attractive woman gunning for a job in construction, engineering, tech, or another male-dominated industry, you might consider being upfront about your beauty. But beware, researchers write: “People often have unrealistic views of their physical attractiveness … so acknowledgment could result in negative repercussions.” Of course, we could also hope for a world where people aren’t as rigid and stodgy about their gender beliefs, but that’s little solace for a woman who’s afraid to be too hot to hire.

Quick Studies

Twitter’s No Beacon of Democracy, But It’s Better Than Expected

twitter

(Photo: 30032901@N04/Flickr)

It's pretty bad, but it's less status-conscious and less insult-prone than you'd think.

It’s doubtful that anyone really thinks of Twitter as a good example of democratic discourse. Sure, there are plenty of good, interesting, even important things to read, but the Internet in general isn’t known for being a safe place for everyone to express their opinions. Still, in some respects, Twitter might be a touch more democratic than you’d expect.

Democracy means many things to many people, but for the purposes of their study, Zhe Liu and Ingmar Weber turned to Jürgen Habermas and his idea of the public sphere, which placed the highest value on interpersonal equality, inclusiveness, and discussion that focused on common concerns rather than those of one social class. In theory, at least, Twitter could be such an institution, albeit in 140-character form.

The top 50 words in inter-ideology chats included “kill,” “murder,” and “hate,” compared with words such as “love,” “thank,” and “great” for discussions among allies.

Before they could figure out whether Twitter met Habermas’ conditions, of course, they had to make those conditions concrete and define groups that might engage in some sort of political discourse. On the latter question, they decided to focus on tweeted conversations involving Democrats and Republicans, the Hamas-associated military group Izz ad-Din al-Qassam Brigades (whose original Twitter account was suspended, though others have since popped up) and the Israeli Defense Forces, and—why not?—Real Madrid and FC Barcelona. To address Habermas’ criteria, Liu and Weber sampled a total of 226,239 Twitter accounts that had re-tweeted one of these groups’ tweets and examined how they interacted with allies and foes alike.

The pair’s analysis showed that Twitter isn’t an idealized Enlightenment salon, but it might not be quite so bad as we all thought. First, the bad news: Across social groups, defined by how many followers users had, tweeters were more likely to engage in conversation with those on their own side, just as political scientists have found in other contexts. Meanwhile, cross-ideology conversations were shorter than others and more often than not initiated by mentioning high-status users. They weren’t the most pleasant chats, either. The top 50 words in inter-ideology chats included “kill,” “murder,” and “hate,” compared with words such as “love,” “thank,” and “great” for discussions among allies.

There is a bright side, though. Users participated in cross-ideology conversations at rates largely independent of their Twitter social status. And, despite the less-than-pleasant word use, a qualitative analysis of conversations between foes suggested tweeters avoided insults and made generally logical arguments, although they rarely cited any references.

Liu and Weber will present their results at the Sixth International Conference on Social Informatics in Barcelona this November.

Quick Studies

How Moms Change Brains

mom

(Photo: 80502454@N00/Flickr)

Seeing mom makes young children's brains function more like those of adolescents.

For little kids, seeing mom or dad nearby is a calming influence, maybe the difference between perfect calm and a full-bore freakout. It’s as if having a trusted caregiver nearby transforms children from scared toddlers into confident adolescents. And in a way, a new report suggests, that’s what having mom around does to a kid’s brain.

When they’re first born and for years after, infants and young children can’t do a whole lot by themselves. They can’t eat on their own, they aren’t very good at managing their emotions, and it takes a while for them to learn how to dress themselves. Most children figure it out eventually, but in the meantime they need their parents to do a lot of that stuff for them. All the while, their brains are changing, too. Well into adolescence, kids’ brains undergo anatomical and physiological changes that affect the way they think and act.

Young children made around 20 percent fewer errors when their mothers were present than when they weren’t, while there was no difference for adolescents.

That observation led Nim Tottenham and her lab at the University of California-Los Angeles to wonder whether a child’s brain might function differently depending on whether the child can see his or her mother. In particular, Tottenham wanted to know whether being able to see mom would change connections between the amygdala—an area of the brain that’s been linked to emotional responses, among other things—and the prefrontal cortex (PFC), the part of the brain thought to be responsible for integrating and processing information before turning it into action.

To find out, the team used functional magnetic resonance imaging, or fMRI, to scan the brains of 23 children between the ages of four and 10 and another 30 kids aged 11 to 17 while they viewed a series of photographs. For 28 seconds at a time, each child viewed a picture of his or her mother or a stranger, both of whom might be smiling or wearing a neutral expression. In a companion experiment outside the scanner, children viewed a series of pictures of faces and were told to press a button when they saw a particular expression, such as a happy face.

Young children’s brains responded differently based on whether they were looking at their mothers or strangers. In particular, their brains showed signs of positive amygdala-PFC connections when viewing pictures of strangers, but negative connections when viewing pictures of their mothers, suggesting more mature and stable brain function—and likely more mature and stable behavior, at least when moms were around. In contrast, tweens and teens had negative connections whether they were looking at their mothers or strangers. In other words, looking at pictures of their mothers made young children’s brains look a little more like those of adolescents.

The companion behavioral experiment backed up that thinking—young children made around 20 percent fewer errors when their mothers were present than when they weren’t, while there was no difference for adolescents. That combined with the fMRI results to suggest that mothers—and likely other caregivers—can provide an external source of mental regulation that young children won’t develop until later in life, the authors write in Psychological Science.

Quick Studies

A Poor Sense of Smell Might Mean Death Is Near

smell

(Photo: 60213635@N07/Flickr)

You probably won't smell Death before he knocks at your door.

Here’s what we know from a study of senior citizens sniffing scented markers for science: Your sense of smell is as good an indicator of your five-year risk of death as heart failure, lung disease, and cancer, and perhaps a better one.

A good sense of smell isn’t the usual thing doctors look for when predicting how long a patient has to live. That’s a role usually reserved for factors like mental state, disease, and mobility. After all, a patient with dementia, congestive heart failure, and difficulty getting out of bed probably isn’t going to live a whole lot longer. On the other hand, a decline in the ability to detect different smells often precedes neurodegenerative diseases such as Parkinson’s and Alzheimer’s, one of the leading causes of death in the United States. And that, Jayant Pinto and colleagues at the University of Chicago observed, might mean they could assess not just the risk of neurological disease but also the risk of death, based on simple tests of smell.

“We believe olfaction is the canary in the coal mine of human health, not that its decline directly causes death.”

To find out, Pinto and his team included as part of the National Social Life, Health, and Aging Project a test that your stoner friends from high school might have enjoyed. In the first wave of the project, conducted in 2005 and 2006, NSHAP staff conducted in-person interviews with 3,005 men and women between the ages of 57 and 85. As part of that interview, interviewees sniffed five Burghart Sniffin’ Sticks imbued with the perfumes of rose, leather, orange, peppermint, and fish and then tried to identify each aroma from a multiple-choice list. The second wave of the project, conducted five years later, was less colorful—researchers checked with interviewees and their families and consulted news reports and public records to see whether smell-test participants were still alive.

As the researchers suspected, they found that smell was a solid predictor of mortality. Only about 10 percent of those who scored 100 percent on the test died within the following five years, while a third of those who couldn’t correctly identify a single smell died. That held up even after controlling for age, gender, race, education, and serious medical conditions like heart disease, cancer, liver failure, or stroke. After taking all those into account, older adults who couldn’t identify common odors were nearly two and a half times more likely to die within five years than those who could smell just fine.

Still, the researchers aren’t saying that people will die of a failing nose. “We believe olfaction is the canary in the coal mine of human health, not that its decline directly causes death. Olfactory dysfunction is a harbinger of either fundamental mechanisms of aging, environmental exposure [to pollution, toxins, or pathogens], or interactions between the two,” the authors write in PLoS One.

Quick Studies

Why That Guy Keeps Reminding You He Went to an Ivy League School

harvard 3

(Photo: 360b/Shutterstock)

It's sometimes the people least secure in their place who really, really want us to know they belong.

Unlike kids at Harvard or Princeton, students at the University of Pennsylvania are in the awkward position of being Ivy League-educated but not always instantly recognized as smart. Harvard is known around the world as the pinnacle of Western intellectual life. Though the University of Pennsylvania ranks alongside the other Ivies at the nation’s top, its unpretentious name, for those unfamiliar with East Coast schools, sometimes conjures frat boys and cows. When Penn psychologist Paul Rozin recently asked 204 Americans to free-associate words with “Ivy League,” 40 percent mentioned Harvard; less than two percent mentioned Penn.

Rozin conducted his experiment as part of a larger study, because he had a hunch that Penn’s tenuous perceived connection to the Ivy League changes how its students identify themselves with the elite circle of universities. His next experiment had research assistants ask 53 students at Penn and 54 students at Harvard to write down words or phrases they associated with their schools. Sixteen Penn students wrote “Ivy.” Only four Harvard students did the same.

The need to be recognized as part of a prestigious or desirable group is fundamental to anyone who just makes that group’s cut.

According to Rozin, the Penn students were showing a tendency to form what he calls “asymmetrical social Mach bands.” This means that because of their school’s marginal status, the students felt compelled to play up their Ivy League affiliation. It’s a common impulse, Rozin says: “Individuals generally prefer to be in higher-status or more positively valenced groups, both to enhance their self-esteem and to project a more impressive self to others,” he writes in the study, which was published in Psychological Science in August.

But how far does this impulse go? While it’s not surprising high-achieving Ivy Leaguers would want to be sure their credentials are known, Rozin speculates that the need to be recognized as part of a prestigious or desirable group is fundamental to anyone who just makes that group’s cut. Lieutenants may brag more about their officer status than colonels do; junior varsity players may boast about their team more than varsity players; the nouveau riche may flaunt their wealth more than old money.

Rozin ran two other experiments to explore this possibility further, both of which compare how institutions market themselves. In the first, he looked at the websites of about 200 highly ranked national and regional universities, and found that the regional universities—which offer fewer graduate programs—refer to themselves as universities 15.8 percent more than the national universities do. In the second, he found that small international airports include the word “international” when writing about themselves online 36.8 percent more than large (and thus better-known) international hubs.

While these results are far from proof of a universal human tendency, they still hint at a less-than-flattering element of human vanity, Rozin suggests, because they underscore our deep-seated concern about impressing others. Insecurities may be good for compelling institutions to market themselves better than their competitors, sure. But the study’s a reminder that you might want to refrain from telling your friends yet again that you played on your high school’s freshman football team.

Quick Studies

Does Cramming for a Math Test Help You Graduate High School?

math

(Photo: billselak/Flickr)

A study of Norwegian students suggests it might.

Math tests are good for you, according to a new study of more than 155,000 Norwegian 16-year-olds who took mathematics or language exit exams between 2002 and 2004. The intense preparations that precede those exams, and the exams’ high-stakes nature, reduce the dropout rate and increase enrollment in higher education.

Researchers agree that there’s a connection between math test scores and educational attainment, income, and other individual outcomes, but it remains unclear if math scores are simply an indicator of good things to come or if something about taking a math test and doing well actually brings those good things about. That’s something only a randomized experiment could possibly tell you.

Preparing for and taking a math exit exam increased boys’ probability of graduating high school within five years by 0.3 percent, while there was even less impact, if any at all, on girls.

Fortunately for economists Torberg Falch, Ole Henning Nyhus, and Bjarne Strøm, Norway’s education system set up just such an experiment.

Throughout Norway, students have 10 years of compulsory education beginning at age six and ending at age 16. When that time is up, they take an exit examination that’s a bit unusual by American standards. First, the test helps determine whether a student will go on to further academic or vocational training. Second, students take one of three tests, and which one they take is chosen at random. About 40 percent take a mathematics exam, another 40 percent take a Norwegian language test, and the rest take an English language test, but they don’t know which one they’ll get until a few days before taking it.

Those few days, in other words, turn into a nation-wide cram session for students and a perfect experiment to test the effects of intense preparation, testing, and math. Falch, Nyhus, and Strøm’s analysis of test data and other education records confirms what researchers had thought—sort of. Each additional day of preparation for the math exam increased the probability of completing high school within five years by about 0.2 percent relative to others and upped the probability of going on to study at a university by a similar amount.

Boys, it turns out, account for most of that effect: Preparing for and taking a math exit exam increased boys’ probability of graduating high school within five years by 0.3 percent, while there was even less impact, if any at all, on girls. Exactly why that is, the researchers write in the journal Labour Economics, is unclear, but it may be related to prior math skills. When the researchers broke things down further, they found that girls with low math skills benefited as much as, if not more than, boys with either low or high math skills. Girls who’d already been good at math didn’t seem to benefit from taking the exit exam.

Quick Studies

The Bitter Taste of Hostility

bitter_hostility

(Photo: Syda Productions/Shutterstock)

Swallowing a bitter pill isn't just a metaphor for an unpleasant experience—research shows bitter tastes can cause outright hostility.

Here’s an easy question: If a person feels bitter, is he more likely to act with hostility toward others? Here’s a tougher question: If a person tastes something bitter, is she more likely to be hostile?

The idea is less outlandish than you might think. A wide swath of previous research has shown how taste is linked to emotions, and therefore, behavior. Sweet tastes, for instance, have been shown to reduce stress and increase agreeableness, and watching happy video clips can make a sweet drink taste more pleasant than watching sad ones does.

Bitter foods, on the other hand, have been linked with emotional reactivity and threat. And since bitterness has such strongly negative metaphoric meaning (think: bitter enemies, leaving a bitter taste in your mouth), researchers from the University of Innsbruck, Austria, predicted that bitter tastes might alter a person’s emotions for the worse. In a paper published this week in Personality and Social Psychology Bulletin, they describe three experiments that showed how bitter tastes may cause people to act in aggressive and hostile ways.

A wide swath of previous research has shown how taste is linked to emotions, and therefore, behavior.

The first experiment showed that participants who drank “the bitterest natural substance currently known” (gentian root tea, in case you were wondering) felt significantly more hostile than those who drank sugary water, according to a self-reported mood survey. This effect remained even after accounting for participants’ perceived enjoyment of the beverage.

Next, researchers tested a more common bitter beverage, grapefruit juice, as compared to a neutral drink, water. After a taste test, participants completed a questionnaire that they were told was for another study. They read through a series of vignettes in which they could be provoked to anger, then rated how angry, frustrated, or irritated they would hypothetically be, and chose from five potential actions—one of which was directly aggressive. Those who drank grapefruit juice reported more anger and irritation and were more than twice as likely to choose the “direct aggression” option as those who drank water. Among the grapefruit juice drinkers, “when feeling angry, the option of acting out the anger was preferred over holding it back,” the researchers write.

In the last experiment, researchers elicited actual aggressive behavior from participants, instead of using mood surveys or hypotheticals. Students in the study drank a shot of water or bitter gentian root tea. Then, as part of a supposed effort to determine a link between taste and creativity, an experimenter tested them on a filler task. Afterward, the subjects rated their experimenter. Those who drank the bitter tea were significantly more likely to give poor ratings to the experimenters—saying they were less competent, friendly, or good at their job—than those who drank water.

Taken together, these three experiments provide fairly strong evidence that bitter tastes can lead to hostility. And while you probably don’t plan to drink gentian root tea on a regular basis, other bitter foods and drinks have become more popular lately—hoppy IPA beers and kale, for instance. Better watch out for aggressive behavior at your local organic bar.

Quick Studies

When and Where HIV Began, and How It Spread

hiv

Railways helped spread HIV throughout central Africa as early as the 1920s. (Photo: Carl Gierstorfer/Science)

HIV spread by train from 1920s Kinshasa, researchers say.

Most of us know how HIV, the virus that causes AIDS, spreads, and most of us know the tremendous toll it’s taken on communities worldwide.

Here are some things you might not know. Different versions of the virus almost certainly hopped from chimpanzees, where it’s known as simian immunodeficiency virus, to humans several times in southern Cameroon and elsewhere in the Congo River basin. It first spread in humans in the 1920s in what was then called Belgian Congo, today’s Democratic Republic of Congo, or DRC. And it was likely transportation networks combined with changing medical practices and social conditions that took main-group HIV strains, or HIV-1 M, from an outbreak to a global pandemic, while others remain confined largely to central Africa.

It’s likely that unsterilized injections in sexually transmitted disease clinics in the 1950s combined with an increase in the number of sex workers’ clients in the early 1960s helped spread group M throughout DRC and Africa.

“We suggest a number of factors contributed to a ‘perfect storm’ for HIV emergence from Kinshasa and successful epidemic and eventually pandemic spread,” Philippe Lemey, co-author of a new study on the earliest origins of the HIV crisis, says in an email. Those factors include urban growth, the development of new transportation infrastructure, and changes in commercial sex work after the DRC’s independence from Belgium.

To reach that conclusion, the team first had to sort out where and when HIV originated and how different groups of the virus spread over time. While researchers were confident humans had contracted HIV by around the 1930s, hypotheses about where it came from were based largely on circumstantial evidence, such as HIV’s high genetic diversity in DRC, Cameroon, Gabon, and the neighboring Republic of Congo. Using a large database of HIV samples from central Africa and a statistical method called phylogeography, which uses samples of different strains, along with when and where they were collected, to reconstruct a disease’s genetic and geographic history, the researchers put HIV-1 M’s origins in Kinshasa, Democratic Republic of Congo, in about 1920. From there, the disease spread first by train and later by river to cities throughout DRC as well as the Republic of Congo, including Pointe-Noire and Brazzaville, just across the Congo River from Kinshasa.

That left open questions about why HIV-1 M and not others, such as the “outlier” group HIV-1 O, managed to spread throughout Africa and the world. Both M and O groups, the team found, spread at about the same rate until 1960, at which point group M expansion exploded. It’s likely, the authors argue, that unsterilized injections in sexually transmitted disease clinics in the 1950s combined with an increase in the number of sex workers’ clients in the early 1960s helped spread group M throughout DRC and Africa. The prevalence of Haitian professionals in Kinshasa around that time lends support to the idea that this group spread the disease to the Americas, they write.

Unfortunately, that’s not great news for the future. The range of factors involved “makes it difficult to make the connection to particular intervention policies at present,” since those are specific to the particular disease and outbreak, Lemey says.

