
Quick Studies

Politicians Really Aren’t Better Decision Makers


(Photo: bigberto/Flickr)

Politicians took part in a classic choice experiment but failed to do better than the rest of us.

When it comes to risky and uncertain decisions, politicians have the same basic shortcomings as the rest of us, according to an experimental study presented earlier this month at the 2014 Behavioral Models of Politics Conference. That result undermines a core tenet of representative democracy, namely that our leaders are better at making political decisions than the rest of us.

As a species, we are not particularly good at decision making. Among our foibles, we will often make different choices based on a problem’s wording rather than its underlying structure. Daniel Kahneman and Amos Tversky’s “Asian disease” experiment, a particularly well-known example, goes like this: An exotic disease is coming, and it’s expected to kill 600 people. You have two options. Choose the first, and 400 people will die. Choose the second, and you take a risk: There’s a one-third chance that no one dies and a two-thirds chance that everyone dies.


In the original experiment, 22 percent of people surveyed chose the first option while 78 percent chose the second, but that’s not the interesting part. Given a choice between saving 200 lives with certainty or a one-third chance of saving everyone, Kahneman and Tversky found, 72 percent chose the first option while 28 percent chose the second—nearly the reverse proportion, even though the options are exactly the same as before; only the wording has changed.
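
The equivalence is easy to check. Here is a minimal Python sketch that computes the expected death toll for each option in both framings, using the numbers from the experiment above:

```python
# Expected deaths for each option of the "Asian disease" problem,
# in both framings, using the numbers from the experiment above.
TOTAL = 600

# Loss frame: 400 people die for sure, or a 2/3 chance everyone dies.
certain_loss = 400
risky_loss = (2 / 3) * TOTAL + (1 / 3) * 0

# Gain frame: 200 people saved for sure, or a 1/3 chance everyone is saved
# (which means a 2/3 chance everyone dies).
certain_gain = TOTAL - 200
risky_gain = (2 / 3) * TOTAL + (1 / 3) * 0

print(certain_loss, risky_loss)  # 400 400.0
print(certain_gain, risky_gain)  # 400 400.0
# All four options carry the same expected death toll; only the wording differs.
```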

That’s a bit troubling when it comes to the average citizen choosing whom to vote for, but it’d be worse if our political leaders were susceptible to the same effect. Alas, they are, according to a team of political scientists led by Peter Loewen. The team reached that conclusion with a straightforward test: they put the Asian disease question to 154 Belgian, Canadian, and Israeli members of parliament. In the loss frame, where subjects decided between 400 certain deaths and a two-thirds chance that everyone dies, 82 percent of Belgian, 68 percent of Israeli, and 79 percent of Canadian MPs chose the risky option, compared with 40, 53, and 34 percent, respectively, when the researchers presented MPs with the less gloomily phrased version.

For comparison, the experimenters posed the same problem to 515 Canadian citizens, who, if anything, were less susceptible to framing effects. “The overall patterns observed for MPs and for citizens is strikingly similar. However, the effect size observed in Canadian MPs … is larger than that estimated among Canadian citizens,” the team writes. It was also larger than estimates of the framing effect in average people.

It’s all a bit of a problem for a common line of reasoning among political scientists and political economists, many of whom assume that re-election concerns or political acumen will render politicians more strategic and also more rational than average Joes. Loewen and company’s results suggest otherwise. “Democratic government relies on the delegation of decision making to agents acting under strong incentives,” they write. “These actors, however, remain just as human as those who elect them.”

Quick Studies

Earliest High-Altitude Settlements Found in Peru


The Pucuncho Basin. (Photo: Kurt Rademaker)

Discovery suggests humans adapted to high altitude faster than previously thought.

Living at high altitude isn’t easy. The thinner air above 4,000 meters makes for colder temperatures, less oxygen, and less protection from the sun’s harmful ultraviolet rays. Yet humans occupied sites that high and higher in the Peruvian Andes as early as 12,800 years ago, according to a new study. The result could change how archaeologists think about the earliest human inhabitants in South America and how they managed to adapt to extreme environments.

Traveling to 4,000 meters and higher isn’t such a big deal as it once was. Mountaineers regularly climb 4,392-meter-high Mount Rainier, and miners work just outside of the highest city in the world, La Rinconada, Peru, which stands at 5,100 meters. India and Pakistan have even fought battles at 6,100 meters on the disputed Siachen glacier.


But how, and how early, people actually lived in such extraordinary places is less clear. For some, human occupation in the Andes didn’t make any sense. Even if settlers could survive freezing temperatures and limited oxygen, altitude increases metabolism, meaning they’d need to eat more in a place where travel was difficult and food was scarce.

Regardless, Kurt Rademaker and colleagues report they’ve found evidence of two high-altitude settlements at sites in southern Peru. Members of the team had been on the trail of obsidian that turned up in the earliest coastal villages in the region, which were dated to between 12,000 and 13,500 years ago. But the obsidian didn’t originate there. Archaeologists have known for some time that it came from Alca in the Peruvian highlands, strongly suggesting contemporaneous outposts or base camps in the Andes.

Eventually, a combination of obsidian surveys, mapping of likely settlement locations, and reconnaissance led the team to 4,355-meter-high Pucuncho and 4,445-meter-high Cuncaicha. There, researchers found tools, animal and plant remains, and other signs of habitation. Using a radiocarbon-dating technique called accelerator mass spectrometry, the team dated Pucuncho to between 12,800 and 11,500 years ago and Cuncaicha to between 12,400 and 11,800 years ago, roughly a millennium earlier than previously discovered settlements at similar altitudes.

The results may help scientists understand the genetic adaptations particular to high-altitude dwellers, especially with regard to how quickly humans were able to adjust biologically to harsh environments. “Our data do not support previous hypotheses, which suggested that climatic amelioration and a lengthy period of human adaptation were necessary for successful human colonization of the high Andes,” the team writes in Science. “As new studies identify potential genetic signatures of high-altitude adaptation in modern Andean populations, comparative genomic, physiologic, and archaeological research will be needed to understand when and how these adaptations evolved.”

“This research assists in finally explaining some of the key archaeological questions regarding early South American occupation,” Washington State University archaeologist Louis Fortin, who has worked with Rademaker in the past but was not involved in the present research, writes in an email. The work, he says, “has brought to light a significant discovery for South American archaeology and specifically high-altitude adaptation and the peopling of South America.”

Quick Studies

My Politicians Are Better Looking Than Yours


Hotter, if you're a Democrat. (Photo: veni/Flickr)

A new study finds we judge the cover by the book—or at least the party.

Beauty, they say, is in the eyes of the beholdee’s in-group.

At least, that’s what they say if “they” means researchers interested in how we perceive political leaders. According to researchers at Cornell University’s Lab for Experimental Economics and Decision Research, people seem to be judging the cover in part by the content of the book: Democrats find their political heroes more attractive than Republican leaders, and vice versa.

Curious to know, essentially, how hot partisans and average citizens were for their leaders, the lab’s co-director, Kevin Kniffin, and colleagues conducted a simple test—they asked people to say how attractive sets of familiar and unfamiliar political figures were. In theory, if a person’s beauty or handsomeness were a fixed, objective trait of an individual—something we all agreed on—a beholder’s partisan leanings ought to have no impact.


But that is not what Kniffin and company found. In one version of the experiment, the researchers asked a total of 49 aides working for Wisconsin state legislators—38 Democrats and 11 Republicans, owing to the balance of power in the state—to rate the attractiveness of 24 politicians. That total included 16 familiar leaders, including recent Wisconsin gubernatorial and United States Senate candidates, and eight relatively unfamiliar ones who came from New York.

The aides rated familiar politicians as more attractive than unfamiliar ones overall, but, more importantly, they thought leaders of their own party were more appealing than others. Democratic aides, for example, rated their leaders on average about a 5.5 on a nine-point scale and rated Republican leaders about 4.5. For Republican aides, those ratings were 4.2 and 5.2, respectively. Those results depended on aides being familiar with those politicians, though. When they were ogling low-profile politicians from New York, Wisconsin legislative aides found them a point or two less attractive overall, and Democrats rated Republican and Democratic leaders as equally attractive. Republican aides rated GOP leaders as more attractive than their donkey counterparts, but only by less than half a point. These results suggest that the aides had to actually know something about who they were rating for there to be a partisanship-attractiveness effect.

Those findings are at odds with studies that presume physical attractiveness is a “static personal characteristic that influences how people perceive each other,” the authors write in the Leadership Quarterly. “In effect, we find evidence that people are capable—for better or worse—of judging covers by their books, whereby the cover of physical attractiveness is viewed partly and significantly through the lens of organizational membership.”

Quick Studies

That Cigarette Would Make a Great Water Filter


(Photo: 42787780@N04/Flickr)

Clean out the ashtray, add some aluminum oxide, and you've (almost) got yourself a low-cost way to remove arsenic from drinking water.

In further evidence that one person’s trash is another’s treasure—and perhaps life saver—researchers in China and Saudi Arabia have devised a way to use cigarette ash to filter arsenic from water. The technique could prove to be a cost-effective way to deal with contaminated drinking water, especially in the developing world.

Odorless and tasteless, arsenic is more than just the stuff of Agatha Christie novels. It’s also a serious public health threat in some parts of the world, notably Bangladesh, where naturally occurring arsenic compounds are abundant in the soil. Even in wealthy countries such as the United States, a mix of natural and industrial sources poses a threat to public health if it goes undetected and unmanaged. Regardless of the source, long-term exposure through drinking water and from crops irrigated with contaminated water can lead to skin lesions and cancer. Fortunately, richer nations have a number of options for dealing with arsenic, including adsorption treatments and methods based on chemical oxidation.


But in the developing world, finding the money for a state-of-the-art treatment facility isn’t an easy job. Apart from collecting rain water and boiling it, the simplest and most cost-effective way to treat arsenic-laced water is adsorption. A standard water filter just passes water through a material that attracts and holds arsenic compounds but lets water molecules flow by.

Here’s where cigarette ash comes in. Tobacco is grown throughout the world, and millions of cigarettes are made and smoked every day—a public-health concern in its own right. But it’s also a good source of water-filtering carbon.

“When people smoke, incomplete combustion emerges as air is sucked through the tobacco within a short time. Thus, a certain amount of activated carbon”—that’s the porous, adsorbent stuff in your water filter—“is formed and incorporated into the cigarette soot,” write He Chen and colleagues in Industrial & Engineering Chemistry Research. The team combined that with another material for arsenic removal, aluminum oxide, to create a low-cost, relatively easy-to-make filter.

Neither ash nor aluminum oxide is ideal as a filtering material—ash has to be heat treated to be an efficient water filter, while aluminum oxide tends to clump up or form gels when exposed to water. To get around that, the researchers treated cigarette soot with hydrochloric and nitric acid before mixing the resulting powder with aluminum nitrate, finally producing an aluminum oxide-carbon mix. The team then tested their concoction on a groundwater sample from Mongolia. With about two grams of aluminum oxide to one gram of cigarette-soot carbon, the team removed about 96 percent of the arsenic in the sample, as well as 98 percent of fluoride ions. They also found that they could use the same mix six times without losing filtering capacity. Finally, something good about smoking cigarettes.
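
To put that 96 percent figure in context, here’s a back-of-the-envelope sketch. The starting concentrations below are hypothetical, not from the study, but the World Health Organization’s 10-microgram-per-liter guideline for arsenic in drinking water is real:

```python
# Hypothetical illustration of what the reported 96 percent arsenic
# removal would mean in practice. Input concentrations are made up;
# the WHO guideline for arsenic in drinking water is 10 ug/L.
WHO_LIMIT_UG_PER_L = 10.0
REMOVAL_EFFICIENCY = 0.96  # reported for the 2:1 aluminum oxide/carbon mix

def after_filtering(initial_ug_per_l):
    """Arsenic concentration remaining after one pass through the filter."""
    return initial_ug_per_l * (1 - REMOVAL_EFFICIENCY)

for initial in (50.0, 150.0, 500.0):  # hypothetical groundwater samples
    remaining = after_filtering(initial)
    verdict = "meets" if remaining <= WHO_LIMIT_UG_PER_L else "exceeds"
    print(f"{initial:6.1f} ug/L -> {remaining:5.1f} ug/L ({verdict} WHO guideline)")
```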

Quick Studies

Love and Hate in Israel and Palestine


A warehouse destroyed by the Israeli army and Hamas. (Photo: un_photo/Flickr)

Psychologists find that parties to a conflict think they're motivated by love while their enemies are motivated by hate.

Not long after the September 11th attacks, a Newsweek cover story famously purported to explain “why they hate us,” they being militant Muslim extremists. But there might be a problem with that thinking. According to a new study, it’s not hatred of outsiders that motivates opposing sides in a conflict. To some extent, it’s love for each other.

Psychologists have known for quite a while now that we interpret others’ actions rather differently than our own, even if they’re the very same actions. There’s a simple reason for that difference, variously called the fundamental attribution error and correspondence bias. While we experience our own internal responses to the situations we encounter, we can only see the external actions that others take. It’s not that we’re incapable of empathy—who hasn’t heard the aphorism that you can’t know someone until you’ve walked a mile in their shoes?—but it’s harder when we don’t know what others are thinking and feeling. It’s harder still when political or military conflict is involved: That idea is often illustrated by the hostile media effect, in which both sides in a dispute view media coverage as biased against them.


That’s all fairly well understood, but psychologists Adam Waytz, Liane Young, and Jeremy Ginges wondered whether they could get at the specific emotions that conflicting parties felt toward their comrades and their enemies. To do so, they first asked 285 Americans to rate, on seven-point scales, whether either their political party or the opposing one was motivated by love (empathy, compassion, and kindness toward those in their own party) or hate (dislike, indifference, or hatred toward those in the other party). On average, study participants rated their own parties as being 23 percent more motivated by love than hate, while they rated those in other parties as being 29 percent more motivated by hate than love.

Things got a bit more interesting when the team asked similar questions of 497 Israelis and 1,266 Palestinians. Asked why some of their fellow citizens supported bombing in Gaza, Israelis reported they were 35 percent more motivated by love for fellow Israelis than hate, while they thought just about the reverse for Palestinians’ motivations for firing rockets into Israel. Palestinians, meanwhile, ascribed more hate than love to Israelis, though they thought fellow Palestinians were about equally motivated by love and hate. An additional survey of 498 Israelis found that the more they perceived differences in the two parties’ motivations, the less likely they were to support negotiations, vote for a peace deal, or believe that Palestinians would support such a deal.

Such perceptions are “a significant barrier to resolution of intergroup conflict,” the authors write in a paper published today in Proceedings of the National Academy of Sciences. From an additional study of Republicans and Democrats, the team concludes that monetary incentives might ameliorate the problem, though “the strength of this particular intervention might vary for conflicts of a more violent and volatile nature.”

Quick Studies

How to Water a Farm in Sandy Ground


(Photo: angeloangelo/Flickr)

Physicists investigate how to grow food more efficiently in fine-grained soil.

Sand is probably not what you think of when you think of growing food—it’s supposed to be at the beach or in the desert, not in your garden. But as our population and the demand for both food and water grow, farmers will likely have to find ways to grow crops efficiently in less-than-ideal soil, while conserving as much water as possible. Now, physicists have developed some recommendations for dealing with one challenge of growing in sand—water retention—including pre-wetting the soil and mixing in absorbent particles.

The problem with sandy soil is well known to gardeners and farmers alike: Unlike soils made up of smaller particles and more varied, irregular shapes, sand doesn’t hold water and nutrients well. Instead, water falling on sand tends to form a shallow, uniform top layer that collects into narrow vertical channels as water travels deeper—much like water droplets forming and then falling from the edge of a wet roof. As a result, most of the sand never gets wet, and the parts that do drain quickly, making it difficult for plants to take in water.


Searching for a solution, Yuli Wei and colleagues at the University of Pennsylvania and the Complex Assemblies of Soft Matter lab first performed a series of experiments designed to probe the effects of soil particle size and water flow rate on soil irrigation. Controlling those factors in real soil, however, is difficult to say the least, so the team used boxes of tiny glass beads, ranging in size from 180 micrometers to a millimeter in diameter, as a stand-in for sandy soil, and they devised a sprinkler system that would allow them to control both how much water fell and how fast the droplets were moving when they hit the soil. While channels formed regardless of particle size, irrigation flow rate, and droplet speed, the team found that using larger beads resulted in narrower water channels that formed closer to the surface, while soils comprising smaller particles led to much wider channels and a deeper layer of water near the surface—a consequence of increased capillary forces in finer soils. The team found similar results when increasing the irrigation rate, though droplet speed had no effect.

Next, the experimenters turned to controlling the formation of water channels and improving overall irrigation. One technique that worked was thoroughly mixing a small amount of water into the soil before turning on the sprinklers (or before the rain came). Even in small amounts, pre-wetting was enough to encourage water sprinkled on the surface to diffuse through the soil rather than form narrow channels. A potential alternative to pre-wetting is to add a layer of super-absorbent hydrogel particles—a mix of potassium acrylate and acrylamide—underneath the surface. As the hydrogel wets, its particles swell and form a kind of dam, so that the soil above slowly fills as water falls. Either way, there’s more water in the right places for crops to grow.

Quick Studies

Unlocking Consciousness


(Photo: 59898141@N06/Flickr)

A study of vegetative patients closes in on the nature of consciousness.

You wake up in a hospital, eyes open but completely unable to move. You can’t even blink. How’s anyone to know you’re conscious, let alone aware?

While that’s an extreme case, there’s a real-world need to understand whether patients with more common disorders of consciousness, such as those in a vegetative state, are aware and thinking. Now, researchers report they’ve taken a step in that direction by comparing patterns of electrical activity in the brains of healthy adults with those of patients suffering from a consciousness disorder.

Inspired by research indicating a small number of patients might be at least marginally aware and able to control their thoughts despite showing no outward signs of consciousness, Srivas Chennu and an international team of neuroscientists decided to see whether they could identify consciousness using electroencephalography, or EEG, which tracks the brain’s oscillating electrical signals as measured on the scalp. The team collected 10 minutes of EEG data from 91 points on the heads of 32 patients in a vegetative or minimally conscious state as well as a control group of 26 healthy men and women. Next, they broke the data down according to the electrical signals’ frequency bands, commonly known as delta (0–4 Hertz, or cycles per second), theta (4–8 Hz), and alpha (8–13 Hz).


On the first pass through the data, the team noticed that their patients tended to have stronger delta-band and weaker alpha-band signals compared to those of healthy people, but really getting a handle on the data required a more sophisticated approach. First, the researchers computed correlations between delta, theta, and alpha-band signals from each pair of the 91 EEG measurement points. From those correlations, they next built a connectivity network, a graph showing the strongest correlations—hence the strongest connections—between different parts of the brain. Finally, they compared graphs from the patients to those of healthy people using measures such as clustering, which describes how dense the connections are between a subset of points in the brain, and modularity, a measure of how easily one could break the graph down into smaller components by cutting individual links.
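
For readers who want the gist of that pipeline in code, here is a minimal sketch on synthetic data. It is not the authors’ analysis, just the same general recipe: correlate band-limited channels, keep the strongest links, and compute clustering and modularity with standard graph tools:

```python
# Sketch of the pipeline described above, on fake data: correlate
# band-limited EEG channels, keep only the strongest correlations,
# then measure clustering and modularity of the resulting network.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)
signals = rng.standard_normal((91, 2500))  # 91 channels of synthetic "alpha-band" data

corr = np.corrcoef(signals)  # 91 x 91 channel-by-channel correlations
np.fill_diagonal(corr, 0)

threshold = np.quantile(np.abs(corr), 0.95)  # keep the top 5% of links
adjacency = (np.abs(corr) >= threshold).astype(int)
G = nx.from_numpy_array(adjacency)

clustering = nx.average_clustering(G)
parts = community.greedy_modularity_communities(G)
modularity = community.modularity(G, parts)
print(f"clustering={clustering:.3f}, modularity={modularity:.3f}")
```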

Healthy subjects, the neuroscientists found, had more clustered and less modular alpha-band networks than patients, and alpha networks also spanned a greater physical distance in healthy people than in patients. (Though it’s a bit counter-intuitive, clustering and modularity don’t actually take physical distance into account.)  Much the opposite was true of delta- and theta-band networks: These were more clustered and less modular in patients than in healthy controls, although they didn’t extend as far across the brain as alpha networks did in control subjects. Finally, behavioral and EEG data combined suggested that the more responsive a patient was, the more that patient’s alpha-band connectivity resembled a healthy person’s.

Those last two points could be key, the authors argue today in PLoS Computational Biology. The shift to delta- and theta-band networks in patients with consciousness disorders doesn’t bring quite the same pattern of connectivity as healthy alpha-band networks, suggesting that it’s the long-range connections that underlie consciousness.

Quick Studies

Advice for Emergency Alert Systems: Don’t Cry Wolf


(Photo: thompsonrivers/Flickr)

A survey finds college students don't always take alerts seriously.

Text and email-based campus emergency alert systems seem like a great idea, and in the worst circumstances they might help save lives. But a new experiment suggests a serious downside: If administrators aren’t careful, students, faculty, and staff might think they’re crying wolf.

Sending countless alerts about every misplaced backpack or suspicious character “is a huge issue. If people don’t like the system, they’re not going to trust it,” says Daphne Kopel, lead author of a new study on students’ perceptions of emergency alert systems and a graduate student at the University of Central Florida. “Overexposure can really do harm.”


Technically, the Department of Education has required alert systems since the Clery Act of 1990, but those systems came into sharper focus—and under tighter scrutiny—after a mass shooting at Virginia Tech in 2007. Amendments to the Clery Act in 2008 led some schools to develop elaborate warning systems, complete with text alerts, remote-controlled locks, and more.

But that technology won’t make a difference if no one takes it seriously, an issue that first occurred to Kopel when thinking about her response to a fire in her building, she says. Kopel set to work on a survey of University of Central Florida students’ attitudes toward their school’s alert system—“we were getting so many alerts … people were kind of laughing about it”—when fears that a serious attack might be underway shut the campus down.

In the wake of the attack, Kopel worked with Valerie Sims and Matthew Chin to probe students’ thoughts about their alert system and how the planned attack had changed them. Surveying 148 UCF students, the team found modest but nonetheless important changes. Students liked the alert system a bit more after the attack than before, and they were less likely to feel that UCF was a safe campus. Tellingly, survey takers were 15 percent less likely to report hearing others openly mock the alert system, suggesting that students took it more seriously after the interrupted attack. Perceptions also varied based on gender—the alert system made women feel safer than it did men, for example—as well as personality traits such as agreeableness and imagination.

Kopel says the team is planning follow-up surveys at the beginning and end of each semester. That should address one drawback of the study—students’ recollections of how they felt months ago or prior to an emergency aren’t the most reliable—as well as provide feedback to administrators. Those surveys will also help with Kopel’s broader goal of understanding how students and others categorize and respond to different kinds of emergencies.

The team will present their research later this month at the Human Factors and Ergonomics Society Annual Meeting in Chicago.

Quick Studies

Brain’s Reward Center Does More Than Manage Rewards


The nucleus accumbens is highlighted in red. (Photo: Wikimedia Commons)

Nucleus accumbens tracks many different connections in the world, a new rat study suggests.

One of the keys to modern thinking about choices and values is a small part of the brain called the nucleus accumbens, which is part of the ventral striatum, itself part of the basal ganglia. It’s sometimes called the reward center of the brain, and neuroeconomists generally believe the nucleus accumbens is responsible for recognizing and processing the rewards and punishments that follow from our actions.

Like much of what you read about neuroscience these days, that’s only partly right. Nucleus accumbens isn’t just a reward processor, according to a recent study. It’s more like a coincidence processor.


Interest in nucleus accumbens, or NAc, grew in the 1990s, when monkey studies suggested that getting a sip of juice as a reward for a correct response to a problem caused dopamine neurons in and around monkeys’ NAc to fire. FMRI studies showed something similar happening in our brains, too, leading theorists to suggest that NAc was doing some kind of reinforcement learning: When a reward follows an action, that action gets reinforced, and we’re more likely to take that action in the future. But humans and other animals can learn all kinds of associations—things drop when we let go, October is crunch-time in baseball, and the better your Skee Ball score, the more prize tickets you get. Even dogs can learn food’s coming when a bell rings. Curiously, there are few studies that look at whether NAc might be recording all of these associations, not just the action-reward ones.
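
The reinforcement-learning idea those theorists had in mind boils down to a textbook value update. This sketch is the generic version, not the model from the study:

```python
# Textbook reinforcement-learning update: an action's estimated value
# moves toward the reward that follows it, so rewarded actions are
# chosen more often. A generic sketch, not the study's model.
def update(value, reward, learning_rate=0.1):
    return value + learning_rate * (reward - value)

value = 0.0  # the animal starts with no expectation
for trial in range(20):
    value = update(value, reward=1.0)  # press lever -> food pellet
print(f"learned value after 20 rewarded trials: {value:.2f}")  # ~0.88
```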

To press the question, Dominic Cerri, Michael Saddoris, and Regina Carelli conducted a standard experiment. First, they taught 20 rats a variety of second-order stimulus associations. For example, a rat might first learn that white noise followed right after a light flashed; in a second session, it would learn that a food pellet would be available following the white noise. Finally, the team tested the rats—if they went looking for food after a flashing light, but not after other signals, they’d learned the light-noise-food pattern. All the while, the researchers monitored NAc activity using electric probes implanted in the rats’ brains.*

The team found that NAc neurons in the rats fired not only in response to the food pellets in the second session, but also during the first training session when there were no rewards of any kind. Next, the team divided the animals into groups of good and poor learners based on how well they’d performed during the test phase, a process by which they found that good learners’ NAc neurons fired more during learning than either poor learners’ brains or those of a control group. In other words, NAc does more than just encode rewards—it tracks other sorts of connections in the world, too. Though questions remain, that insight might throw a small but intriguing wrench into our understanding of how rat—and maybe human—choices work.*


*UPDATE — October 14, 2014: We originally wrote that the experiment was conducted with mice. That language, in both the body of the post and subheadline, has been corrected to rats.

Quick Studies

A City’s Fingerprints Lie in Its Streets and Alleyways


(Photo: robgross/Flickr)

Researchers propose another way to analyze the character and evolution of cities.

Cities touch you, each in its own special way. Walk their streets, and Seattle feels different from Berlin or Johannesburg or Tokyo. Each has its own fingerprint.

Still, those fingerprints come in just four types, exemplified by Buenos Aires, Athens, New Orleans, and Mogadishu, argue researchers Rémi Louf and Marc Barthelemy.

Louf and Barthelemy trained in physics but have an ongoing interest in how one part of a city’s core infrastructure—its streets—evolves and how that evolution relates to, say, where people live in relation to work. It’s part of an emerging science of cities aimed at understanding how an urban environment’s physical, social, and economic networks evolve over time, though much of the current research is somewhat abstract. In a typical model, street networks are just that—abstract representations that could be Paris or a map of the brain. But real streets are grounded in real cities, which take on attributes like size and shape. How should those factors be taken into account? How could different street plans change the way cities work?


No one’s quite prepared to answer those questions yet, but Louf and Barthelemy decided to take a first step by at least categorizing what was out there in the maps of 131 cities on six continents. Rather than study the street network itself, they considered the shapes and sizes of the blocks that streets form. Following academic geographers’ lead, they computed shape factors: the ratio of a block’s area to that of the smallest circle that could fit around it. Then, Louf and Barthelemy categorized cities based on their blocks’ distribution of size and shape.
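
The shape factor itself is simple arithmetic. Here is a minimal sketch for rectangular blocks, whose smallest enclosing circle has the half-diagonal as its radius:

```python
# Shape factor as described above: a block's area divided by the area
# of the smallest circle that fits around it. A circle scores 1.0;
# the more elongated the block, the lower the score.
import math

def rectangle_shape_factor(width, height):
    area = width * height
    radius = math.hypot(width, height) / 2  # half the rectangle's diagonal
    return area / (math.pi * radius ** 2)

print(rectangle_shape_factor(100, 100))  # square block: ~0.64
print(rectangle_shape_factor(200, 50))   # long, thin block: ~0.30
```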

The analysis revealed four main city fingerprints. The largest group, comprising 102 cities, was made up of New Orleans-like cities, with the largest city blocks in a wide range of shapes. Another 27 were Athens-like cities, with generally smaller blocks—less than about 10,000 square meters, or, for a square block, about 300 feet on a side—but a wide variety of shapes. Only two cities remained: Buenos Aires, with medium-sized square and rectangular blocks, and Mogadishu, which features almost entirely small, square blocks.

Interestingly, every major North American and European city the team looked at fell into the NOLA category except Vancouver and Athens. Europe and the U.S. have their own particular subtypes, but a few U.S. cities, including Portland, Oregon, and Washington, D.C., fall more into the European mold.

Those sorts of differences, the authors suggest, could be used to better understand how cities are born and evolve. Uniform block sizes, such as those found in New York, “could be the result of planning” while a city like Paris reflects a continual process of building and rebuilding that produces a range of block shapes and sizes.

Quick Studies

When Violins Meet Leaf Analysis


(Photo: land_camera/Flickr)

Techniques used to analyze leaf shapes reveal the subtle evolution of the violin.

Violins are kind of like leaves. They’ve changed over time, driven in part by their designers’ tastes. Violins fall into distinct lineages, recognizable by their shapes, just as leaves from one or another plant would be. And they show signs of a sort of natural selection: Violins look more and more like the ones first created by Antonio Stradivari.

This is according to Dan Chitwood, a biologist who normally studies leaves. Specifically, he studies how leaf shapes have evolved over time and the genetic basis of that evolution. Doing that research means quantifying and tracking shape changes over time, something just as easily applied to Chitwood’s other avocation, the viola.


Chitwood’s first question was whether he could tell the difference between different kinds of string instruments based on their shape while taking overall size out of the equation. To do that, he drew on an auction house’s database of more than 9,000 instruments in the violin family—the viola, cello, bass, and the violin itself—from prominent luthiers over a range of 400 years. Chitwood used those to construct instrument outlines, which he could then compare using a method called linear discriminant analysis.

To understand the idea, imagine constructing a shadow puppet. You could cut the puppet out of a single piece of cardboard or wood, or you could assemble it using a set of elementary shapes. In a similar way, you could describe a violin’s shape as a whole, or you could break it down into more abstract shapes. A double bass looks like an especially broad-bottomed pear, while a lute looks like a squash with some triangle thrown in. While Chitwood’s study uses a more sophisticated set of basic shapes, the goal is the same. By quantifying how much pear, squash, and triangle a shape has, you can quantify the similarities and differences between different instruments’ shapes.
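
As an illustration of the method, rather than of Chitwood’s actual data, here is a sketch that runs linear discriminant analysis on synthetic shape coefficients; the shift between the fake “cello” and “violin” clouds stands in for genuinely distinct outlines:

```python
# Linear discriminant analysis on synthetic shape descriptors, standing
# in for the "how much pear, squash, and triangle" coefficients above.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
violins = rng.normal(0.0, 1.0, (200, 10))  # 200 instruments, 10 coefficients each
violas = rng.normal(0.1, 1.0, (200, 10))   # barely shifted: hard to tell from violins
cellos = rng.normal(0.8, 1.0, (200, 10))   # clearly shifted: easy to discriminate

X = np.vstack([violins, violas, cellos])
y = np.array([0] * 200 + [1] * 200 + [2] * 200)

lda = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {lda.score(X, y):.2f}")
# Overlapping classes (violins vs. violas) drag accuracy down, much as
# the real violin and viola outlines were hard to discriminate.
```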

Violas and violins, it turns out, are hard to discriminate, but cellos and double basses are distinct. While all four are pear-shaped to some extent, the double bass takes that to another level, complete with a bit of stem where the instrument’s neck attaches to the main body. When Chitwood turned his attention specifically to the roughly 7,000 violins in his database, he found that they fell into four families, each represented by an archetype designed by an actual human family—Maggini, Amati, Stainer, and, of course, Stradivari, whose violins were slightly more bass-like in shape. What’s more, other violins became more like these four over time, and especially more like Strads.

In that respect, Chitwood compares violins to living, evolving organisms, complete with mutations and a sort of survival of the fittest. “Despite using molds, Antonio Stradivari nonetheless innovated new shapes, using a method both faithful to the previous outlines but with the potential to change,” he wrote in the paper. Meanwhile, luthier Jean-Baptiste Vuillaume purposely copied Stradivari’s designs, because those were the ones customers selected.

Quick Studies

Drill Sergeant Bosses Don’t Get the Job Done

drill

(Photo: Marines/Flickr)

Even if they think it's meant to motivate, workers respond badly to workplace abuse.

“I done some checkin’. I looked through your files. I know about your mama. Hey, don’t you eyeball me, boy! I know your father’s a alcoholic and a whore chaser. That’s why you don’t mesh, Mayo, because deep down—don’t you eyeball me boy!—deep down inside, you know the others are better than you. Isn’t that right, Mayo?” That’s Gunnery Sergeant Emil Foley, in an Oscar-winning performance by Louis Gossett Jr. in the 1982 film An Officer and a Gentleman. It’s unclear whether Foley, who came to define the hard and mean yet loving drill sergeant stereotype, is trying to motivate Richard Gere’s Zack Mayo or simply humiliate him into quitting.

In the movie, Mayo completes his training. But in real life, a team of researchers argue, Foley’s behavior could backfire, whatever the intentions: Employees with abusive supervisors are more likely to slack off and more likely to return their bosses’ favor by publicly ridiculing them.


It’s clear that workplace abuse doesn’t make for happy workers, and that can cost businesses quite a lot of money—as much as $24 billion a year in the United States according to one estimate. But maybe the costs aren’t so bad if employees believe their bosses mean well. Kevin Eschleman and colleagues put the question to 268 people through the StudyResponse Project. Specifically, their survey asked participants how often their supervisors abused them—putting them down in front of others or ridiculing them, for example—as well as how often they’d engaged in counterproductive actions, such as making fun of their bosses at work or simply putting little effort into work. Crucially, they also asked how workers perceived the abuse. Perhaps, the team thought, put-downs and ridicule could work if they were coming from the right place. Indeed, Eschleman says “we thought motivational intent would soften the blow” of abuse.

But it did not. Workers reported about the same frequency of counterproductive behavior at work when they felt abused regardless of what they thought their bosses were thinking. In fact, it mattered more whether abuse was intentional than whether it was hostile or motivational. When employees didn’t think bosses were being particularly hostile or motivational, the amount of abuse had little effect on their behavior. That could be because without intent, workers could reasonably dismiss it. “Maybe it was accident, maybe they were having a bad day,” Eschleman says.

Eschleman says the results could be important for manager training programs and for human resources departments trying to settle disputes related to workplace abuse. Mentioning any kind of intent, positive or negative, he says, “might be the wrong strategy.” Still, Eschleman cautions that his team’s study was small and might not apply in every working environment. “It might be interesting to explore this in the military or in other industries” where lives are on the line, such as medicine, he says.

Quick Studies

Too Hot to Hire


Don't hate her because she's beautiful. (Photo: Zoom Team/Shutterstock)

Are you an attractive woman looking for a job? Acknowledging your beauty could keep potential employers from discriminating against you.

Beautiful people, as a rule, have it pretty good. They tend to have more self-confidence, higher income, and better financial well-being than plain-faced or downright ugly people. They’re also rated as more socially and intellectually competent. But one way attractive women, in particular, suffer is when applying for jobs that are stereotypically masculine.

While this phenomenon (known as the “beauty is beastly” effect) was first discovered 30 years ago, researchers didn’t exactly scramble to find solutions for this not-so-oppressed minority. Luckily, the modern sexy woman need not despair—a new study shows that simply acknowledging your attractiveness can hold this type of discrimination at bay.


It may sound trite, but the “beauty is beastly” effect “demonstrates a subtle form of sex discrimination,” according to a paper currently in press, to be published in Organizational Behavior and Human Decision Processes. Researchers used a technique that has been shown to mitigate discrimination against those with disabilities—by simply acknowledging the stereotypes that evaluators may hold.

In the first study, undergraduate students rated the “employment suitability” of four different candidates for a construction job, one of whom was a woman, based on a photo and an interview transcript. Some saw the photo of a beautiful woman, while others saw one of a not-so-beautiful woman. In the interview transcript, the woman said “I know that I don’t look like our typical construction worker,” “I know that there are not a lot of women in this industry,” or neither. In the control condition, where the women did not acknowledge any stereotypes, the unattractive woman was rated as more suitable for the job than the beautiful woman. However, the attractive woman received significantly higher ratings if she acknowledged either her appearance or her sex than if she didn’t.

The unattractive woman received significantly lower ratings if she acknowledged her appearance, compared to the control condition.


These results were replicated in a similar study with construction workers as participants. So, why does mentioning stereotypes have an effect on evaluators? In a subsequent study, the researchers discovered that acknowledging one’s sex and attractiveness causes evaluators to rate women as more masculine—and presumably, a better fit for the manly construction job. Those who acknowledge stereotypes are also, surprisingly, rated as less counter-communal. Women who are counter-communal (a nicer way to say “bitchy,” as ambitious women are often perceived) violate gender norms and are evaluated less favorably because of it. This double standard is just another example of how women on the job market must tread a fine line between feminine and masculine.

If you’re an attractive woman gunning for a job in construction, engineering, tech, or another male-dominated industry, you might consider being upfront about your beauty. But beware, researchers write: “People often have unrealistic views of their physical attractiveness … so acknowledgment could result in negative repercussions.” Of course, we could also hope for a world where people aren’t as rigid and stodgy about their gender beliefs, but that’s little solace for a woman who’s afraid to be too hot to hire.

Quick Studies

Twitter’s No Beacon of Democracy, But It’s Better Than Expected


(Photo: 30032901@N04/Flickr)

It's pretty bad, but it's less status-conscious and less insult-prone than you'd think.

It’s doubtful that anyone really thinks of Twitter as a good example of democratic discourse. Sure, there are plenty of good, interesting, even important things to read, but the Internet in general isn’t known as a safe place for everyone to express their opinions. Still, in some respects, Twitter might be a touch more democratic than you’d expect.

Democracy means many things to many people, but for the purposes of their study, Zhe Liu and Ingmar Weber turned to Jürgen Habermas and his idea of the public sphere, which placed the highest value on interpersonal equality, inclusiveness, and discussion that focused on common concerns rather than those of one social class. In theory, at least, Twitter could be such an institution, albeit in 140-character form.


Before they could figure out whether Twitter met Habermas’ conditions, of course, they had to make those conditions concrete and define groups that might engage in some sort of political discourse. On the latter question, they decided to focus on tweeted conversations involving Democrats and Republicans, the Hamas-associated military group Izz ad-Din al-Qassam Brigades (whose original Twitter account was suspended, though others have since popped up) and the Israel Defense Forces, and—why not?—Real Madrid and FC Barcelona. To address Habermas’ criteria, Liu and Weber sampled a total of 226,239 Twitter accounts that had retweeted one of the groups’ tweets and examined how they interacted with allies and foes alike.

The pair’s analysis showed that Twitter isn’t an idealized Enlightenment salon, but it might not be quite so bad as we all thought. First, the bad news: Across social groups, defined by how many followers users had, tweeters were more likely to engage in conversation with those on their own side, just as political scientists have found in other contexts. Meanwhile, cross-ideology conversations were shorter than others and more often than not initiated by mentioning high-status users. They weren’t the most pleasant chats, either. The top 50 words in inter-ideology chats included “kill,” “murder,” and “hate,” compared with words such as “love,” “thank,” and “great” for discussions among allies.
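
The word-level comparison amounts to counting word frequencies within each conversation type. Here is the idea in miniature, with toy conversations rather than the authors’ corpus:

```python
# Toy version of the word-frequency comparison described above.
from collections import Counter

cross_ideology = ["you people hate everything", "why so much hate here"]
same_ideology = ["love this thank you", "great point thank you"]

def top_words(conversations, n=3):
    words = (word for text in conversations for word in text.split())
    return Counter(words).most_common(n)

print("cross-ideology:", top_words(cross_ideology))
print("same-ideology: ", top_words(same_ideology))
```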

There is a bright side, though. Users participated in cross-ideology conversation at rates largely independent of their Twitter social status. And, despite the less-than-pleasant word use, a qualitative analysis of conversations between foes suggested tweeters avoided insults and made generally logical arguments, although they rarely cited any references.

Liu and Weber will present their results at the Sixth International Conference on Social Informatics in Barcelona this November.

Quick Studies

How Moms Change Brains


(Photo: 80502454@N00/Flickr)

Seeing mom makes young children's brains function more like those of adolescents.

For little kids, seeing mom or dad nearby is a calming influence, maybe the difference between perfect calm and a full-bore freakout. It’s as if having a trusted caregiver nearby transforms children from scared toddlers into confident adolescents. And in a way, a new report suggests, that’s what having mom around does to a kid’s brain.

When they’re first born and for years after, infants and young children can’t do a whole lot by themselves. They can’t eat on their own, they aren’t very good at managing their emotions, and it takes a while for them to learn how to dress themselves. Most children figure it out eventually, but in the meantime they need their parents to do a lot of that stuff for them. All the while, their brains are changing, too. Well into adolescence, kids’ brains undergo anatomical and physiological changes that affect the way they think and act.


That observation led Nim Tottenham and her lab at the University of California-Los Angeles to wonder whether a child’s brain might function differently depending on whether the child can see his or her mother. In particular, Tottenham wanted to know whether being able to see mom would change connections between the amygdala—an area of the brain that’s been linked to emotional responses, among other things—and the prefrontal cortex, the part of the brain thought to be responsible for integrating and processing information before turning it into action.

To find out, the team used functional magnetic resonance imaging, or fMRI, to scan the brains of 23 children between the ages of four and 10 and another 30 kids aged 11 to 17 while they viewed a series of photographs. For 28 seconds at a time, each child viewed a picture of his or her mother or a stranger, both of whom might be smiling or wearing a neutral expression. In a companion experiment outside the scanner, children viewed a series of pictures of faces and were told to press a button when they saw a particular expression, such as a happy face.

Young children’s brains responded differently based on whether they were looking at their mothers or strangers. In particular, their brains showed signs of positive amygdala-PFC connections when viewing pictures of strangers, but negative connections when viewing pictures of their mothers, suggesting more mature and stable brain function—and likely more mature and stable behavior, at least when moms were around. In contrast, tweens and teens had negative connections whether they were looking at their mothers or strangers. In other words, looking at pictures of their mothers made young children’s brains look a little more like those of adolescents.

The companion behavioral experiment backed up that thinking—young children made around 20 percent fewer errors when their mothers were present than when they weren’t, while there was no difference for adolescents. That combined with the fMRI results to suggest that mothers—and likely other caregivers—can provide an external source of mental regulation that young children won’t develop until later in life, the authors write in Psychological Science.

Quick Studies

A Poor Sense of Smell Might Mean Death Is Near


(Photo: 60213635@N07/Flickr)

You probably won't smell Death before he knocks at your door.

Here’s what we know from a study of senior citizens sniffing scented markers for science: Your sense of smell is as good an indicator of your five-year risk of death as heart failure, lung disease, and cancer, and perhaps a better one.

A good sense of smell isn’t the usual thing doctors look for when predicting how long a patient has to live. That’s a role usually reserved for factors like mental state, disease, and mobility. After all, a patient with dementia, congestive heart failure, and difficulty getting out of bed probably isn’t going to live a whole lot longer. On the other hand, a decline in the ability to detect different smells often precedes neurodegenerative diseases such as Parkinson’s and Alzheimer’s, one of the leading causes of death in the United States. And that, Jayant Pinto and colleagues at the University of Chicago observed, might mean they could assess not just the risk of neurological disease but also overall mortality risk based on simple tests of smell.


To find out, Pinto and team included as part of the National Social Life, Health, and Aging Project a test that your stoner friends from high school might have enjoyed. In the first wave of the project, conducted in 2005 and 2006, NSHAP staff conducted in-person interviews with 3,005 men and women between the ages of 57 and 85. As part of that interview, interviewees sniffed five Burghart Sniffin’ Sticks imbued with the perfumes of rose, leather, orange, peppermint, and fish and then tried to identify each aroma from a multiple-choice list. The second wave of the project, conducted five years later, was less colorful—researchers checked with interviewees, their families, news reports, and public records to see whether smell-test participants were still alive.

As the researchers suspected, they found that smell was a solid predictor of mortality. Only about 10 percent of those who scored 100 percent on the test died within the following five years, while a third of those who couldn’t correctly identify a single smell died. That held up even after controlling for age, gender, race, education, and even serious medical conditions like heart disease, cancer, liver failure, or stroke. After taking all those into account, older adults who couldn’t identify common odors were nearly two and a half times more likely to die within five years than those who could smell just fine.
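
A bit of arithmetic shows how stark the raw gap is. The sketch below converts the reported proportions into an unadjusted risk ratio and odds ratio; the “nearly two and a half times” figure is the covariate-adjusted estimate, which this simple calculation won’t reproduce:

```python
# Unadjusted comparison from the proportions reported above: ~10% of
# perfect scorers died within five years versus about a third of those
# who identified no odors correctly.
p_perfect_score = 0.10
p_no_correct = 1 / 3

risk_ratio = p_no_correct / p_perfect_score
odds_ratio = (p_no_correct / (1 - p_no_correct)) / (
    p_perfect_score / (1 - p_perfect_score)
)
print(f"risk ratio: {risk_ratio:.1f}, odds ratio: {odds_ratio:.1f}")
# risk ratio: 3.3, odds ratio: 4.5 -- in the same ballpark as the
# adjusted estimate, but not the same quantity.
```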

Still, the researchers aren’t saying that people will die of a failing nose. “We believe olfaction is the canary in the coal mine of human health, not that its decline directly causes death. Olfactory dysfunction is a harbinger of either fundamental mechanisms of aging, environmental exposure [to pollution, toxins, or pathogens], or interactions between the two,” the authors write in PLoS One.

Quick Studies

Why That Guy Keeps Reminding You He Went to an Ivy League School


(Photo: 360b/Shutterstock)

It's sometimes the people least secure in their place who really, really want us to know they belong.

Unlike kids at Harvard or Princeton, students at the University of Pennsylvania are in the awkward position of being Ivy League-educated but not always instantly recognized as smart. Harvard is known around the world as the pinnacle of Western intellectual life. Though the University of Pennsylvania ranks alongside the other Ivies among the nation’s top schools, its unpretentious name, for those unfamiliar with East Coast schools, sometimes conjures frat boys and cows. When Penn psychologist Paul Rozin recently asked 204 Americans to free-associate words with “Ivy League,” 40 percent mentioned Harvard; fewer than two percent mentioned Penn.

Rozin conducted his experiment as part of a larger study, because he had a hunch that Penn’s tenuous perceived connection to the Ivy League changes how its students identify themselves with the elite circle of universities. His next experiment had research assistants ask 53 students at Penn and 54 students at Harvard to write down words or phrases they associated with their schools. Sixteen Penn students wrote “Ivy.” Only four Harvard students did the same.


According to Rozin, the Penn students were showing a tendency to form what he calls “asymmetrical social Mach bands.” This means that because of their school’s marginal status, the students felt compelled to play up their Ivy League affiliation. It’s a common impulse, Rozin says: “Individuals generally prefer to be in higher-status or more positively valenced groups, both to enhance their self-esteem and to project a more impressive self to others,” he writes in the study, which was published in Psychological Science in August.

But how far does this impulse go? While it’s not surprising high-achieving Ivy Leaguers would want to be sure their credentials are known, Rozin speculates that the need to be recognized as part of a prestigious or desirable group is fundamental to anyone who just makes that group’s cut. Lieutenants may brag more about their officer status than colonels do; junior varsity players may boast about their team more than varsity players; the nouveau riche may flaunt their wealth more than old money.

Rozin ran two other experiments to explore this possibility further, both of which compare how institutions market themselves. In the first, he looked at the websites of about 200 highly ranked national and regional universities, and found that the regional universities—which offer fewer graduate programs—refer to themselves as universities 15.8 percent more often than the national universities do. In the second, he found that small international airports include the word “international” when writing about themselves online 36.8 percent more often than large (and thus better-known) international hubs.

While these results are far from proof of a universal human tendency, they still hint at a less-than-flattering element of human vanity, Rozin suggests, because they underscore our deep-seated concern about impressing others. Insecurities may be good for compelling institutions to market themselves better than their competitors, sure. But the study’s a reminder that you might want to refrain from telling your friends yet again that you played on your high school’s freshman football team.

Quick Studies

Does Cramming for a Math Test Help You Graduate High School?

(Photo: billselak/Flickr)

A study of Norwegian students suggests it might.

Math tests are good for you, according to a new study of more than 155,000 Norwegian 16-year-olds who took mathematics or language exit exams between 2002 and 2004. The intense preparations that precede those exams, along with their high-stakes nature, reduce the dropout rate and increase enrollment in higher education.

Researchers agree that there’s a connection between math test scores and educational attainment, income, and other individual outcomes, but it remains unclear whether math scores are simply an indicator of good things to come or whether something about taking a math test and doing well actually brings those good things about. That’s something only a randomized experiment could tell you.

Preparing for and taking a math exit exam increased boys’ probability of graduating high school within five years by 0.3 percent, while there was even less impact, if any at all, on girls.

Fortunately for economists Torberg Falch, Ole Henning Nyhus, and Bjarne Strøm, Norway’s education system set up just such an experiment.

Throughout Norway, students have 10 years of compulsory education, beginning at age six and ending at age 16. When that time is up, they take an exit examination that’s a bit unusual by American standards. First, the test helps determine whether a student will go on to further academic or vocational training. Second, each student takes one of three tests, chosen at random: About 40 percent take a mathematics exam, another 40 percent take a Norwegian language test, and the rest take an English language test, but they don’t know which one until a few days before taking it.

Those few days, in other words, turn into a nationwide cram session for students and a perfect experiment for testing the effects of intense preparation, testing, and math. Falch, Nyhus, and Strøm’s analysis of test data and other education records confirms what researchers had thought—sort of. Each additional day of preparation for the math exam increased the probability of completing high school within five years by about 0.2 percent relative to others and upped the probability of going on to study at a university by a similar amount.

Boys, it turns out, account for most of that effect: Preparing for and taking a math exit exam increased boys’ probability of graduating high school within five years by 0.3 percent, while there was even less impact, if any at all, on girls. Exactly why that is, the researchers write in the journal Labour Economics, is unclear, but it may be related to prior math skills. When the researchers broke things down further, they found that girls with low math skills benefited as much as, if not more than, boys with either low or high math skills. Girls who’d already been good at math didn’t seem to benefit from taking the exit exam.

Quick Studies

The Bitter Taste of Hostility

(Photo: Syda Productions/Shutterstock)

Swallowing a bitter pill isn't just a metaphor for an unpleasant experience—research shows bitter tastes can cause outright hostility.

Here’s an easy question: If a person feels bitter, is he more likely to act with hostility toward others? Here’s a tougher question: If a person tastes something bitter, is she more likely to be hostile?

The idea is less outlandish than you might think. A wide swath of previous research has shown how taste is linked to emotions and, therefore, behavior. Studies show that sweet tastes reduce stress and increase agreeableness, and that watching happy video clips can make a sweet drink taste more pleasant than watching sad clips does.

Bitter foods, on the other hand, have been linked with emotional reactivity and threat. And since bitterness has such strongly negative metaphoric meaning (think: bitter enemies, leaving a bitter taste in your mouth), researchers from the University of Innsbruck, Austria, predicted that bitter tastes might alter a person’s emotions for the worse. In a paper published this week in Personality and Social Psychology Bulletin, they describe three experiments that showed how bitter tastes may cause people to act in aggressive and hostile ways.

A wide swath of previous research has shown how taste is linked to emotions and, therefore, behavior.

The first experiment showed that participants who drank “the bitterest natural substance currently known” (gentian root tea, in case you were wondering) felt significantly more hostile than those who drank sugary water, according to a self-reported mood survey. The effect remained even after accounting for participants’ enjoyment of the beverage.

Next, researchers tested a more common bitter beverage, grapefruit juice, against a neutral one, water. After a taste test, participants completed a questionnaire that they were told was for another study. They read through a series of vignettes in which they could be provoked to anger, then rated how angry, frustrated, or irritated they would hypothetically be, and chose from five potential actions—one of which was directly aggressive. Those who drank grapefruit juice reported more anger and irritation and were more than twice as likely as those who drank water to choose the “direct aggression” option. For the grapefruit-juice drinkers, “when feeling angry, the option of acting out the anger was preferred over holding it back,” the researchers write.

In the last experiment, researchers elicited actual aggressive behavior from participants instead of relying on mood surveys or hypotheticals. Students in the study drank a shot of water or of bitter gentian root tea. Then, as part of a supposed effort to find a link between taste and creativity, an experimenter tested them on a filler task. Afterward, the subjects rated their experimenter. Those who drank the bitter tea gave the experimenters significantly poorer ratings—judging them less competent, friendly, or good at their jobs—than those who drank water.

Taken together, these three experiments provide fairly strong evidence that bitter tastes can lead to hostility. And while you might not plan to drink gentian root tea on a regular basis, other bitter foods and drinks have become more popular lately—hoppy IPA beers and kale, for instance. Better watch out for aggressive behavior at your local organic bar.

Quick Studies

When and Where HIV Began, and How It Spread

Railways helped spread HIV throughout central Africa as early as the 1920s. (Photo: Carl Gierstorfer/Science)

HIV spread by train from 1920s Kinshasa, researchers say.

Most of us know how HIV, the virus that causes AIDS, spreads, and most of us know the tremendous toll it’s taken on communities worldwide.

Here are some things you might not know. Different versions of the virus almost certainly hopped from chimpanzees, where it’s known as simian immunodeficiency virus, to humans several times in southern Cameroon and elsewhere in the Congo River basin. It first spread in humans in the 1920s in what was then called the Belgian Congo, today’s Democratic Republic of Congo, or DRC. And it was likely transportation networks, combined with changing medical practices and social conditions, that took the main group of HIV strains, HIV-1 M, from an outbreak to a global pandemic, while other groups remain largely confined to central Africa.

It’s likely that unsterilized injections in sexually transmitted disease clinics in the 1950s combined with an increase in the number of sex workers’ clients in the early 1960s helped spread group M throughout DRC and Africa.

“We suggest a number of factors contributed to a ‘perfect storm’ for HIV emergence from Kinshasa and successful epidemic and eventually pandemic spread,” Philippe Lemey, co-author of a new study on the earliest origins of the HIV crisis, says in an email. Those factors include urban growth, the development of new transportation infrastructure, and changes in commercial sex work after the DRC’s independence from Belgium.

To reach that conclusion, the team first had to sort out where and when HIV originated and how different groups of the virus spread over time. While researchers were confident humans had contracted HIV by around the 1930s, hypotheses about where it came from were based largely on circumstantial evidence, such as HIV’s high genetic diversity in DRC, Cameroon, Gabon, and the neighboring Republic of Congo. Using a large database of HIV samples from central Africa, the researchers applied a statistical method called phylogeography, which uses different strains of HIV, along with when and where they were collected, to reconstruct the disease’s genetic and geographic history. That analysis put HIV-1 M’s origins in Kinshasa, Democratic Republic of Congo, in about 1920. From there, the disease spread first by train and later by river to cities throughout DRC as well as the Republic of Congo, including Pointe-Noire and Brazzaville, just across the Congo River from Kinshasa.
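To get a feel for the dating logic, if not the study’s actual Bayesian machinery, consider the simpler method of root-to-tip regression: if mutations accumulate at a roughly steady rate, each sample’s genetic distance from the tree’s root grows linearly with its collection date, and extending that line back to zero distance estimates the year of the common ancestor. A toy sketch in Python, with entirely made-up numbers:

```python
# A toy illustration, not the study's Bayesian phylogeography: root-to-tip
# regression dates an origin by extrapolating genetic divergence back to zero.
# Every number below is hypothetical.
import numpy as np

years = np.array([1959, 1960, 1976, 1985, 1997, 2003])        # sample collection dates
dists = np.array([0.039, 0.040, 0.056, 0.065, 0.077, 0.083])  # divergence from the root

slope, intercept = np.polyfit(years, dists, 1)  # divergence accumulated per year
origin = -intercept / slope                     # year at which divergence was zero
print(f"estimated origin: ~{origin:.0f}")       # ~1920 for these made-up numbers
```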

That left open questions about why HIV-1 M and not others, such as the “outlier” group HIV-1 O, managed to spread throughout Africa and the world. Both M and O groups, the team found, spread at about the same rate until 1960, at which point group M expansion exploded. It’s likely, the authors argue, that unsterilized injections in sexually transmitted disease clinics in the 1950s combined with an increase in the number of sex workers’ clients in the early 1960s helped spread group M throughout DRC and Africa. The prevalence of Haitian professionals in Kinshasa around that time lends support to the idea that this group spread the disease to the Americas, they write.

Unfortunately, that’s not great news for the future. The range of factors involved “makes it difficult to make the connection to particular intervention policies at present,” since those factors are specific to the particular disease and outbreak, Lemey says.

Quick Studies

For Memory, Curiosity Is Its Own Reward

(Photo: fabolous/Flickr)

A new study suggests a neural link between curiosity, motivation, and memory.

Perhaps curiosity is its own reward. It’s a lot easier to learn something when you’re interested in it, and there’s certainly something inherently motivating about being curious. Now, a team of researchers think they know why: The same brain circuits seem to control both curiosity and our responses to money, tasty food, and other sorts of external motivation.

Neuroscientists Matthias Gruber, Bernard Gelman, and Charan Ranganath developed that hypothesis after contemplating a 2009 study that connected curiosity to the caudate nucleus, a brain region that neuroeconomists believe plays a role in learning and processing rewards (among many other things). To Gruber’s team, there remained unanswered questions. How was curiosity related to motivation? Why does curiosity aid learning? And could curiosity about one topic help someone learn about another?

The authors suggest their study may have implications for learning and memory in seniors or patients with psychiatric disorders that affect the brain’s motivational circuits.

To answer those questions, the trio posed a series of trivia questions to experimental participants who, after viewing each, rated both how likely they were to know the answer and how curious they were to learn that answer. The team didn’t, however, reveal those answers right away. Instead, in the second phase of the experiment, they had each of their subjects hop into an fMRI brain scanner, where they once again saw each trivia question, followed by a picture of a face four seconds later, and the question’s answer six seconds after that. Unbeknownst to participants at the time, they were about to be tested on both the faces and the trivia answers. In the meantime, the researchers trained their attention on three brain regions of interest: the substantia nigra/ventral tegmental area complex, or SN/VTA, the hippocampus, and the nucleus accumbens. Those three, previous studies suggest, work together to assist memory formation, especially when we’re expecting a coming reward.

The results? When participants were more curious to know a question’s answer, there was more activity in both the nucleus accumbens and the SN/VTA, and the more curious someone was, the stronger that activity. The hippocampus seemed to respond most when an individual was both curious about an answer and remembered that answer in the post-scanner test. That, the researchers argue, points to a neural connection between curiosity, motivation, and memory: the curiosity and memory elements were directly observed, and previous studies had implicated all three brain regions in motivation.

The team next looked at what participants had actually learned. Participants, they found, correctly recalled 70.6 percent of the answers they were more curious about, compared with 54.1 percent of those they found less interesting. What’s more, curiosity about the trivia had a smaller but noticeable effect on memory for the faces, too. When their curiosity was piqued, experimental subjects recognized 42.4 percent of the faces they saw, compared with 38.2 percent when they were less curious.

Writing in the journal Neuron, the authors suggest their study may have implications for learning and memory in seniors or patients with psychiatric disorders that affect the brain’s motivational circuits. Stimulating curiosity, they argue, may be one way to help patients learn and hold on to new memories.

Quick Studies

Mysterious Resting State Networks Might Be What Allow Different Brain Therapies to Work

FMRI scans from another study. (Photo: Public Domain)

Deep brain stimulation and similar treatments target the hubs of larger resting-state networks in the brain, researchers find.

More and more, doctors and patients dealing with severe depression, obsessive-compulsive disorder, or even Parkinson’s disease turn to techniques such as deep brain stimulation and transcranial magnetic stimulation. While those treatments have proven effective in some cases, it has been unclear why the hodgepodge of stimulation sites and techniques all seem to work. A new study suggests one possibility: the different methods each activate parts of the brain that belong to the same resting-state network.

For a few decades now, neuroscientists who specialize in functional magnetic resonance imaging, or fMRI, have focused on what our brains do when we do math problems, play games, choose between politicians, and much more. But as early as the mid-1990s, researchers realized they’d been missing something: What happens when we’re not doing anything at all? With that question, they began to explore what’s called the default mode network and other resting-state networks (RSNs), collections of brain regions that are active and working together specifically as we let our minds and senses wander. But no one is quite sure what exactly these networks do.

As early as the mid-1990s, researchers realized they’d been missing something: What happens when we’re not doing anything at all?

Around the same time as some researchers were exploring RSNs, others were pioneering the next generation of brain stimulation techniques, methods somewhat less crude than early forms of electroconvulsive therapy. Some of the new methods are invasive—deep brain stimulation, for example, requires an electrical implant in the brain—and some, such as transcranial magnetic stimulation, which delivers a targeted magnetic pulse from outside the brain, aren’t. They have one thing in common, though: Different techniques applied in different parts of the brain often achieve the same goals.

It works that way, Michael Fox and five colleagues argue, because of resting-state networks. To figure that out, the team reviewed clinical studies that had used deep brain stimulation (DBS), transcranial magnetic stimulation (TMS), and a third method, transcranial direct current stimulation (tDCS), to treat 14 disorders, including anorexia, depression, and Tourette syndrome. Across all 14 disorders except one, epilepsy, they found correlations between resting-state activity at sites where DBS was effective and at others where TMS and tDCS were effective, indicating that such sites were all part of the same resting-state network. Backing that conclusion up was the observation that there was little, if any, connection between DBS regions that worked and regions where other kinds of stimulation had failed.
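The underlying logic is simpler than the data work: two sites count as belonging to the same resting-state network when their spontaneous fMRI signals rise and fall together. A bare-bones sketch, with simulated time courses rather than anything from the study:

```python
# A bare-bones sketch of the connectivity logic, not the study's pipeline: two
# sites count as part of the same resting-state network when their spontaneous
# fMRI time courses rise and fall together. All signals below are simulated.
import numpy as np

rng = np.random.default_rng(0)
network = rng.normal(size=200)                   # shared network fluctuation
dbs_site = network + 0.5 * rng.normal(size=200)  # effective DBS site
tms_site = network + 0.5 * rng.normal(size=200)  # effective TMS site
outside = rng.normal(size=200)                   # a site outside the network

print("DBS site vs. TMS site:", round(np.corrcoef(dbs_site, tms_site)[0, 1], 2))  # high
print("DBS site vs. outside: ", round(np.corrcoef(dbs_site, outside)[0, 1], 2))   # near zero
```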

“Sites effective for the same disease tend to fall within the same brain network [and] ineffective sites fall outside this network,” the authors write in the Proceedings of the National Academy of Sciences. Researchers who study psychiatric disorders had already started thinking in network terms, and now they have an even better reason to.

Quick Studies

Trust Is Waning, and Inequality May Be to Blame

(Photo: gregorywass/Flickr)

Trust in others and confidence in institutions are declining while economic inequality creeps up, a new study shows.

Trust is on the decline in America. Between 1972 and 2012, Americans became less trusting of and less confident in not only government and the media, but also churches, doctors, business, and each other. And, according to a new report, increasing income inequality may be to blame.

Political scientists and sociologists have long wondered how, why, and even whether trust in government and other institutions changes over time. One theory, still taught today, is that the dramatic process of entering adulthood shapes a person’s social and political traits in a lasting way. Thus, citizens born during the Great Depression tend to embrace a more frugal lifestyle, and often the welfare state as well, or so the theory goes. Scholars have argued more recently that traits like frugality or trust in government are a matter of the zeitgeist, or perhaps of one’s age. Whichever is true, questions remain. If it’s the times that affect trust, what is it about a particular period that makes people more or less willing to believe what others say?

When the researchers separated out the effects of age, birth year—cohort, it’s usually called—and survey year, it became clear that trust in others and confidence in institutions declined because of the times we were and are living in.

To sort it out, psychologists Jean Twenge, Keith Campbell, and Nathan Carter looked to data from the General Social Survey, or GSS, which since the 1970s has asked a total of 37,493 Americans about just about everything, including a range of questions about trust and confidence in other people and groups. In the early ’70s, 46 percent of Americans agreed that “most people can be trusted,” as the GSS posed the question. Between 2010 and 2012, however, those surveyed agreed with that statement just 33 percent of the time. Confidence declined by a similar amount. Only 16 percent of GSS respondents said they had “hardly any” confidence in the press when surveyed between 1972 and 1974, but that number nearly tripled by the 2010-12 survey.

More interesting than the raw numbers is the deeper story the data tell. When the researchers separated out the effects of age, birth year—cohort, it’s usually called—and survey year, it became clear that trust in others and confidence in institutions declined because of the times we were and are living in. Cohort had some effects on confidence, and trust increased with age, but the data indicated that something about the zeitgeist was powering the decline in trust and confidence.
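Separating those three effects is statistically delicate, because a respondent’s birth cohort is, by definition, the survey year minus their age, making the three perfectly collinear. The sketch below illustrates the problem and one common workaround, binning two of the variables, on simulated data; it is not the hierarchical model Twenge, Campbell, and Carter actually used:

```python
# A toy illustration, not the authors' actual hierarchical model: because
# cohort = survey year - age, the three effects are perfectly collinear, so
# one common workaround is to bin two of them (here, into decades) before
# regressing. All data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
period = rng.integers(1972, 2013, n)      # hypothetical survey years
age = rng.integers(18, 90, n)             # hypothetical respondent ages
cohort = period - age                     # birth year
# Simulated trust: rises a little with age, falls with survey year ("the times")
trust = 0.5 + 0.002 * (age - 50) - 0.004 * (period - 1990) + rng.normal(0, 0.2, n)

df = pd.DataFrame({
    "trust": trust,
    "age": age,
    "period_decade": (period // 10) * 10,
    "cohort_decade": (cohort // 10) * 10,
})
fit = smf.ols("trust ~ age + C(period_decade) + C(cohort_decade)", data=df).fit()
print(fit.params.filter(like="period_decade"))  # the period ("zeitgeist") effects
```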

That something, the team argues, is the economy. Greater income inequality, the team found, was correlated with lower trust in others, while greater poverty, more violent crime, and an improving stock market were linked with less confidence in institutions.

In an email, Carter told Pacific Standard that the team is “very interested” in how psychology and economics interact through what Depression-era economist John Maynard Keynes called “animal spirits,” spontaneous, sometimes irrational drives to economic or financial action, “which have unfortunately seen very little serious attention from either psychologists or economists.”

“I really think it will take a concerted effort for collaboration across economics and psychology to get a handle on how psychological states impact economies and vice versa,” Carter says.

Quick Studies

Dopamine Might Be Behind Impulsive Behavior

A micrograph of dopamine crystals. (Photo: wellcomeimages/Flickr)

A monkey study suggests the brain chemical makes what's new and different more attractive.

A certain amount of exploration is a good thing—without it, we’d have a hard time figuring out what we like and don’t like—but it has a dark side, too, in the form of impulsivity and behavioral addictions. A small new study suggests a brain chemical called dopamine helps regulate that trade-off, with consequences for how we understand our own preferences and decisions.

Dopamine has a funny history in the burgeoning field of neuroeconomics. Though it’s hardly dopamine’s only job, researchers believe the chemical is a key player in human and animal decision making. In particular, dopamine neurons in the brain’s limbic system—itself thought to support emotion, motivation, and other functions—seem to keep track of how much pleasure we expect when making a given choice and what we actually get. That, theorists say, is how we learn what we like and what we don’t. Still, dopamine’s function is less than clear, in particular with regard to why we seek out new experiences, sometimes in excess.

The results “do seem to imply that dopamine specifically heightens novelty seeking without affecting overall learning or general exploration.”

Thus motivated, Vincent Costa, Bruno Averbeck, and colleagues at the National Institute of Mental Health’s Laboratory of Neuroscience in Bethesda, Maryland, decided to see how a little extra dopamine affected the decision making of three of their rhesus monkeys. Across six sessions, the monkeys got injections of a dopamine transporter (DAT) inhibitor, a drug that increases the effect of dopamine. Then the monkeys made a series of choices among three pictures, each of which had a different chance of rewarding a monkey with a small amount of juice. Over the course of several hundred such choices, the monkeys would have ample chance to learn about those rewards, but Costa and company set things up so that every so often one of the pictures changed to something new. Crucially, the team followed each session with another one two days later that used saline—salt water—in the drug’s place.

The team found that their monkeys chose the novel picture around 60 percent of the time when they’d administered the dopamine transport inhibitor, compared with about 55 percent of the time with saline. That difference, they discovered with the help of a computer model of learning and decision making, corresponded to the drug roughly doubling the value of a novel picture, compared to the other pictures. Though they’re hesitant to draw any strong conclusions yet, the results “do seem to imply that dopamine specifically heightens novelty seeking without affecting overall learning or general exploration,” in which monkeys might sample from different options but wouldn’t be unusually drawn to brand-new options, Costa says in an email.
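The article doesn’t spell out that model, but one standard way to capture the idea is a delta-rule learner choosing via a softmax rule, with a “novelty bonus” added to each new picture’s starting value as a stand-in for dopamine’s proposed effect. A rough sketch, with every parameter hypothetical:

```python
# A minimal sketch, not the authors' actual model: a delta-rule learner with a
# softmax choice rule, in which each newly introduced picture starts with an
# extra "novelty bonus" on its value estimate. A larger bonus (one hypothetical
# way extra dopamine might act) makes the learner pick novel pictures more often.
import numpy as np

def novel_choice_rate(novelty_bonus, n_trials=3000, alpha=0.1, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    true_p = np.array([0.3, 0.5, 0.7])    # hypothetical juice probabilities
    values = np.full(3, 0.5)              # learned value estimates
    novel_idx, swap_t = -1, -1
    picks = trials = 0
    for t in range(n_trials):
        if t % 30 == 0:                   # every so often, swap in a new picture
            novel_idx = int(rng.integers(3))
            true_p[novel_idx] = rng.uniform(0.2, 0.8)
            values[novel_idx] = 0.5 + novelty_bonus
            swap_t = t
        probs = np.exp(beta * values)
        probs /= probs.sum()
        choice = rng.choice(3, p=probs)   # softmax choice among the three pictures
        reward = rng.random() < true_p[choice]
        values[choice] += alpha * (reward - values[choice])  # delta-rule update
        if t - swap_t < 10:               # look at choices made soon after a swap
            trials += 1
            picks += int(choice == novel_idx)
    return picks / trials

print("novel-choice rate, saline-like:", round(novel_choice_rate(0.0), 2))
print("novel-choice rate, drug-like:  ", round(novel_choice_rate(0.5), 2))
```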

Next up, Costa writes, the team is trying to identify what neural circuits are behind the choice to seek out what’s new and different, which will involve recording the activity of individual neurons in a number of brain regions thought to be related to reward learning and decision making. They’ll also be working on other ways of modeling their data, with an eye toward how rational or irrational novelty-seeking behavior really is.

Quick Studies

School Counselors Do More Than You’d Think

(Photo: construct/Flickr)

Adding just one counselor to a school has an enormous impact on discipline and test scores, according to a new study.

Hiring just one additional school counselor in an average American school could have about a third of the effect of recruiting all the school’s teachers from a pool of candidates in the top 15 percent of their profession, according to a new analysis. That’s also about the effect you’d expect from lowering class sizes by adding two teachers to a school of around 500—either way, not too shabby.

School counselors do a lot more than help kids figure out what classes to take. Their primary role, in fact, is to help students work through behavioral problems, mental health concerns, and other issues that might hamper kids’ success in school and in life. But despite considerable recent attention to factors that might improve education in underperforming schools, researchers have largely ignored how much of an impact counselors have on academic performance.

An increase of one standard deviation in teacher quality is thought to result in an improvement of about one-tenth of a standard deviation in test scores.

Economists Scott Carrell and Mark Hoekstra took a stab at figuring out that impact by studying third, fourth, and fifth graders at 22 elementary schools in and around Gainesville, Florida, which happens to be the home of the University of Florida’s Department of Counselor Education. As part of their training, graduate student counselors intern in the area, providing extra support to the lone staff counselor in schools where they’re placed. In addition to that data, Carrell and Hoekstra knew students’ percentile ranks on the Iowa Test of Basic Skills and the Stanford 9 test, both standard measures of academic progress, as well as students’ disciplinary records—most everything they needed to see what a counselor could do for a school’s academic performance.

Surprisingly, just one extra counselor can do quite a lot. After controlling for factors including school size, the proportion of students qualifying for free or reduced-price lunch, and median family income in the neighborhood—all shown to be correlated with academic achievement—Carrell and Hoekstra estimated that each additional counselor intern in a school reduced the number of reports of disruptive behavior by 20 percent for boys and 29 percent for girls. Test scores also rose by a little less than one percentile point—a little more for boys, and a little less for girls.

That little one percentile point is something to write home about, too. Education researchers usually report their results in terms of standard deviations, or the amount of variation, in student test scores or in teacher quality, which is typically measured by teachers’ students’ test scores. For instance, an increase of one standard deviation in teacher quality is thought to result in an improvement of about one-tenth of a standard deviation in test scores—about two and a half percentile points in Carrell and Hoekstra’s study.
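To see why a fixed standard-deviation gain doesn’t map onto a fixed number of percentile points, assume for illustration that scores are normally distributed (the study used percentile ranks, so this is only an approximation). A 0.1 standard-deviation bump then moves a median student about four percentile points, and a student higher in the distribution somewhat fewer, bracketing the study’s two-and-a-half-point figure:

```python
# A rough illustration, assuming normally distributed scores (the study itself
# used percentile ranks, so this is only an approximation): how far a 0.1
# standard-deviation improvement moves a student depends on the starting point.
from scipy.stats import norm

for start in (0.50, 0.70, 0.85):        # starting percentile ranks
    z = norm.ppf(start)                 # starting z-score
    new = norm.cdf(z + 0.1)             # percentile after a 0.1 SD improvement
    print(f"{start:.0%} -> {new:.1%} ({100 * (new - start):+.1f} points)")
```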

Put it all together, and it would take a massive, widespread improvement in teacher quality to improve test scores more than a single additional counselor would. Or, as the authors write in Economics Letters, “this suggests that hiring counselors may be an effective alternative to other education policies aimed at increasing academic achievement.”
