New research says offices that encourage talk of religion actually make for happier workplaces.
Just last night, a few of us Pacific Standard folk were indulging in some post-work beers, soaking in the conversation, and the alcohol. Suddenly the talk shifted to religion, usually a contentious topic. Maybe it was the bar’s fluorescent red lighting, or the somber live jazz—one co-worker said it sounded like an episode of Louie. Or it was just the beer. Either way, atheism, religious schooling, Christmas: it was all fair game. And craziest of all, nobody left in a huff.
That may be because religion and the office actually mix quite well, says one upcoming study. Hell (or no hell), they even make for a better workplace.
Brent Lyons, assistant professor of management and organization studies at Simon Fraser University, in Canada, led a team of researchers who found that employees who discuss their religious beliefs at work are oftentimes happier.
“Being able to openly express important aspects of one’s life at work can positively influence job satisfaction,” Lyons says. “However, sometimes individuals feel that their workplace is not open to expressing religion.”
Lyons and his peers sampled some 592 employees from the United States and South Korea, as both countries have a sizable Christian base, but differ in workplace reputation: Americans are stereotypically known as more self-expressive, and South Koreans more reserved.
The employees in both countries were all Christian—Lyons says they were all surveyed in churches—but fell into varying denominations. The researchers asked each participant to describe how important their religion was, to what degree it formed their identity, and how open their workplace was about employees’ faiths. Lyons then measured variables like job satisfaction, well-being, and pressure to assimilate.
Lyons found that, despite Americans’ and South Koreans’ respective stereotypes of self-expression and suppression, there was very little cross-cultural difference in terms of religious expression in the workplace. “We thought this difference may affect religious self-expression. However, we found no differences in the benefits of openly expressing one’s religion across culture,” he adds.
And the benefits were very real. In both countries, religiously open employees reported higher job satisfaction and overall mental wellness, while those who preferred to hide their beliefs were less satisfied at work. Feelings of secrecy and the exhibition of a fake self often stressed out the religiously “secretive,” which at times manifested negatively in office relationships.
“If religion is important to you, and you are not open about it, it may mean that you are hiding aspects of yourself from your co-workers,” Lyons says. “Keeping secrets or presenting a false self can be stressful and can negatively impact relationships you develop with your co-workers.”
In a separate press release, another of the study’s researchers suggests decorating the workspace with a religious object, or, not surprisingly, simply talking with office-mates about religious customs.
Another option, as we at Pacific Standard all found out: Get drunk with them.
Bootleg cigarette sales could be leading Canadian teens to more serious drugs, a recent study finds.
Tough to believe, given a middling smoking rate, but Canada has a real cigarette problem on its hands. And it runs deeper than stained teeth or bad lungs.
In a study published last month, Dr. Mesbah Sharaf, a health economics professor at the University of Alberta, revealed that 31 percent of Canadian smokers between grades nine and 12 use contraband tobacco at least once per week—frequently the cigarettes are smuggled in from the United States. Worse still, Sharaf and his team of researchers discovered a link between illegal tobacco and drug use.
Working with a national sample of 2,136 smokers, as well as data from the 2010–2011 Youth Smoking Survey, the researchers asked high schoolers to assess, over a one-year period, their use of amphetamines, cocaine, hallucinogens, heroin, MDMA, and ketamine. Sharaf and his researchers found that contraband smokers were three times more likely to abuse ketamine and amphetamines than non-contraband smokers (teen smokers who score their fix through, say, an older family member) and six times more likely to use heroin.
This issue is part of a larger problem for Canada: Spurred by high tobacco taxes, contraband cigarettes have soared in popularity, taking up about 30 percent of the overall market, according to Sharaf.
But in the case of these young students, the correlation with drug use could in part fall on the black market, as cigarette dealers might often peddle other drugs too. “Maybe through selling contraband tobacco, they are also advertising other drugs,” Sharaf says. “In this case, we can say that contraband tobacco may be a gateway into the use of illicit drugs.”
Sharaf couldn’t guarantee a direct association between contraband tobacco and drug use; his work, he explains, found a link, but not an explanation. But previous studies have said as much. And, given the potential criminal ring revolving around this cigarette market, it’s not difficult to imagine a scenario where an impressionable kid buying a cigarette is then offered an addictive—and illegal—drug.
Canada’s legal smoking age is 19 in most provinces, and 18 in Alberta, Saskatchewan, Manitoba, and Quebec, so that contraband market has obvious benefits to the tobacco-needy.
“There is no age restriction [on contraband cigarettes]. They are not subject to taxes,” Sharaf says. “They are just sold in a plastic bag.”
Sharaf calls for a collective response to this problem, urging the Canadian government, law enforcement, and the tobacco companies themselves to put forth a more intensive effort to cut off these illegal suppliers.
That old myth of home field bias isn’t a myth at all; it’s a statistical fact.
In the midst of a tumultuous 3-11 campaign, the Washington football team’s loss yesterday to the New York Giants at MetLife Stadium wasn’t all that noteworthy. There was the usual display of failed offensive execution, defensive ineptitude, and questionable coaching—the stuff that invariably leads to a 3-11 record. For a franchise that’s been steeped in failure for the better part of this past decade, this was just another day of the same old, nothing more.
Except, of course, for that would-be touchdown.
Leading 10-7 in the waning seconds of the first half, beloved Washington quarterback Robert Griffin III eluded several Giants defenders to dive into the end zone for a dazzling touchdown. But then, in a controversial call, the referees ruled that Griffin III had momentarily lost possession of the ball, resulting not in a touchdown, but in a turnover that gave the Giants possession of the ball and kept the score close.
An argument can be made either way for that particular hometown call. But, according to a recent study conducted some 3,000 miles away, that old myth of home field bias isn’t a myth at all; it’s a statistical fact.
Three economists—Abhinav Sacheti, University of Nottingham professor David Paton, and University of Sheffield professor Ian Gregory-Smith—oversaw a review of 1,000 professional cricket matches between 1986 and 2012 to determine whether officials more often ruled in favor of the home team. Focusing on the Leg Before Wicket (LBW) rule, which is akin to an obstruction foul on the batter, Sacheti, Paton, and Gregory-Smith found a clear officiating bias for the home team—even by allegedly impartial referees. Specifically, away teams fell victim to LBW calls between 10 and 16 percent more frequently than their hosting counterparts. For the home team, that’s a sizable advantage, with potentially game-altering consequences.
That drop to a 10 percent bias came gradually, first with the league’s 1994 mandate that one of the two umpires hail from a neutral site (prior to that, both were from the home team) and then an updated provision in 2002 that called for both umpires to be neutral.
Sure, Sacheti, Paton, and Gregory-Smith’s study deals with cricket, not American football. But people are people. In fact, in their 2012 book, Scorecasting, Toby Moskowitz and Jon Wertheim calculate a 57.3 percent win rate for home teams in the NFL, an advantage they attribute in good part to officiating. Moskowitz and Wertheim explain that refs might let the home crowd’s vigor impact their flag throwing, basically as a crowd-pleasing mechanism.
Whether that Robert Griffin III call was correct or not, it sure did seem pretty noisy at MetLife Stadium.
Repeat customers—with higher return rates and real bargain-hunting prowess—can have negative effects on a company’s net earnings.
Conventional wisdom holds that loyal customers are good for business. But not in every respect: according to a recent study in the Journal of Marketing Research, many repeat customers—with higher return rates and real bargain-hunting prowess—can have negative effects on a company’s net earnings.
“Retailers are typically interested in cultivating customer relationships over time. They engage in marketing practices such as price mark-downs, promotions, loyalty programs, liberal return policies, and so on,” says lead author Denish Shah, a Georgia State University business professor. “However, as customers repeatedly transact with the firm, can these repetitive transactions develop into a habitual behavior?”
Shah and his associates identify four shopping habits: repeat purchase habits (which they believe are indicative of brand loyalty), promotion habits, low-margin habits, and product return habits. Looking at a data set of one Fortune 500 retailer’s 1.3 million customers over a four-year period, they found that sale-obsessed shoppers really do impact a company’s profit margins—often for the worse.
“Repeat purchase and promotion purchase habits positively affect the firm’s bottom line by $53.5 million and $3.9 million, respectively,” the authors write, “whereas product return and low-margin purchase habits negatively affect the firm’s bottom line by $58.9 million and $61 million.”
In other words, customers who return a lot of their purchases, and have a knack for bargain-hunting, can cost a store serious money.
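Taken together, the four figures imply a net drag on the retailer's earnings. A quick back-of-the-envelope tally, using the per-habit effects in millions of dollars as quoted above:

```python
# Per-habit effects on the retailer's bottom line, in millions of dollars,
# as reported by Shah et al. for the Fortune 500 retailer studied.
effects = {
    "repeat purchase": +53.5,
    "promotion purchase": +3.9,
    "product return": -58.9,
    "low-margin purchase": -61.0,
}

net = sum(effects.values())
print(f"Net effect: ${net:.1f} million")  # the costly habits outweigh the profitable ones
```

On these numbers, the two costly habits outweigh the two profitable ones by roughly $62.5 million.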
Customers in the study fell into certain routines, like using the same checkout counter and shopping at the same time of day. Some also developed more financially impactful tendencies, like returning items and seeking out the best deals. The problem for stores, Shah explains, is that repeat customers often become repeat customers precisely because they want the lowest prices. More specifically, they want the clearance items and the markdowns, as opposed to the advertised sales. (It’s common knowledge by now that the marketed sales very often aren’t really all that great.) Practically put, who’s going to shop at a clothing store twice a week without looking for the cheapest stuff?
These two sides of value hunting have been theorized before; a 1990 study distinguishes between the “deal prone,” who tend to seek out the marketed sale, and the value-conscious customers, who simply want to pay the lowest price.
As to exactly how much stuff people actually return, Associated Press reporter Jennifer Kerr’s 2013 report shows that about nine percent of all sales—that’s $264 billion—are returned.
Knowing this, Shah argues, retailers need to develop marketing strategies that better benefit their companies, while still acknowledging people’s love of “the deal.”
“Retailers need to seriously think about consumers’ shopping habits,” Shah says.
Scientists just need to put forth some effort.
Many scientists aren’t very good at sharing their research with the public. As a 2012 study on science communication points out, they often face considerable professional pressure not to waste time “dumbing down” their work for mass consumption. Sometimes, also, they’re just too busy with what’s under their microscopes to care.
But better outreach, says Jarrett Byrnes, a University of Massachusetts-Boston biologist, may be exactly what scientists need to keep their work afloat. He recently published a study about the role of public outreach in crowdfunding for scientific research—a source of income increasingly considered a possible counterweight to tightening federal support.
While arts and technology projects make out handsomely from crowdfunding websites like Kickstarter—they raked in close to $200 million in six months last year alone, Byrnes notes—science funding has yet to take off. Byrnes contends this is largely due to a misconception of how projects actually make money: scientists tend to regard crowdfunding as “magic money” that comes from the luck of a viral hit, not effort.
To see if he could dispel this belief, Byrnes and his collaborators ran a one-and-a-half-year crowdfunding experiment through a program called #SciFund Challenge. The experiment put 159 actual, peer-vetted research proposals through campaigns on the crowdfunding platform RocketHub. The fundraisers had standardized durations, and the researchers were taught how to run a campaign. Byrnes’ aim was to test which factors and solicitation techniques brought in the most donations.
Overall, the experiment raised $252,811 for the various projects from 3,904 donors. Projects received an average of $2,000 each, ranging from a few hundred dollars to $10,000. Byrnes’ analysis traced patterns of Web traffic from social media sites and other referral networks, as well as how active the scientists were in promoting their work. The secret to success, he found, was indeed the participating scientists’ abilities to tap their networks and draw more eyes to their pages, even when their proposal didn’t have mass appeal.
“Twitter and email, which get passed on to other people or organizations, had a huge impact on bringing people in to look at projects,” Byrnes says in a press release. “We learned that in order to raise more money for a project you need to build an audience for your work and engage that audience.”
Sure, this might be common knowledge to anyone with a basic understanding of public relations. But for science, it’s important: Previous researchers have proposed good crowdfunding practices, but Byrnes’ experiment provides concrete evidence that science, like anything else, can rely on everyday people for drumming up support. Scientists don’t need to worry about sacrificing integrity to get better funding, the results suggest; they’ve just got to step away from the microscope and try to get the word out.
“To create a crowdfunding proposal, scientists must talk about their work in a way that appeals to people outside of the academy,” Byrnes writes. “They must be good science communicators, and then are rewarded for their efforts with money for their research.”
Mathematical ability isn’t one single skill set; there are indeed many ways to be “good at math,” research shows.
As a kid, I was acutely aware of my strengths and my limitations. In the classroom, that meant avoiding math. I was afraid of numbers, and there’s something downright ominous about a parabola.
Maybe I let my fear get the best of me: New research has found that roughly one in five people who identify as being bad at math actually scored in the top half of a given math test. One-third of those who claimed to be good at math, meanwhile, scored in that test’s bottom half. More significantly, it seems mathematical ability isn’t one single skill set; there are indeed many ways to be “good at math.”
Ellen Peters, a psychology professor at The Ohio State University, led a team of researchers that gave 130 students three separate numeric competency assessments: objective numeracy, or the ability to work with numbers in a traditional sense; subjective numeracy, a self-evaluation of math abilities; and symbolic-number mapping, the prediction and understanding of numeric relationships (think: a carpenter estimating how much wood is needed for a room). Participants were asked to evaluate, among other things, the attractiveness of several different bets, both risky and simple. They were also tested on their ability to remember numbers paired to different objects.
“We’ve been interested in how math skills influence judgments and choices for a number of years now,” Peters says. “What we’ve realized is that we have multiple, inter-related skills with numbers, and each skill has different influences on how we think and decide in our everyday lives.”
Peters’ results show that mathematical ability isn’t such a black-and-white quality. Those who scored higher in objective numeracy were most likely to use traditional calculations to determine the attractiveness of the different bets, while confident participants who tested highly in subjective numeracy were prone to finding all bets appealing. The people who scored higher in symbolic-number mapping used a rough estimation system in analyzing the different bets, producing fairly accurate results.
Peters also found that those with high opinions of their skills were more likely to stick with tougher test questions, while those who scored low in that department would more often give up on the problem.
As for the memory test, those who scored higher in subjective numeracy fared best—a result of greater guessing confidence.
Peters’ research shows that some of us are having a tough time diagnosing our strengths and weaknesses, which may, in part, explain America’s slow academic decline into mathematical mediocrity. Her study also shows that there could be more than one way to approach mathematical problems.
“The study points out that other non-traditional ways of assessing math skills—not just solving equations and memorizing figures, but also beliefs about number ability and number intuitions—appear to influence how everyday people go about making choices,” Peters adds. “As a result, they suggest that we should assess math skills in ways that include them.”
Still—I don’t really care if I’m some kind of unknown mathematical prodigy. Keep that parabola away from me.
We tolerate jerks in the workplace because we value their creativity. Maybe it's time we stopped.
We’ve all known the type: that manic, frustrated genius, whose creativity seems contingent on an even greater ability for being an absolute ass. In the office, they are the ones thinking outside the box—and they’ll berate and belittle you for failing to understand their genius. We allow these individuals to be … well, jerks, because they are, after all, the workplace spark plug, capable of coming up with that next big idea, of creating the next great thing. We tolerate the jerkiness because it’s accompanied by genius, which always benefits the workplace.
Maybe it’s time we stopped.
In a recently published study in the Journal of Business and Psychology, professors Samuel Hunter, of Pennsylvania State University, and Lily Cushenbery, of Stony Brook University, determined that these creative bullies can actually harm their companies—by hurting their co-workers’ feelings.
“It never made sense to me or Lily Cushenbery why being a jerk would be linked to actually coming up with original ideas,” Hunter says. “Instead, it made sense that being a bit pushy may help in getting your ideas heard and used by others.”
To test their theory, Hunter and Cushenbery applied a process view of creativity, looking not just at idea generation but also at idea testing, evaluation, and the ability to convince peers of an idea’s usefulness. In their first experiment, 201 students, having first taken a personality quiz, were asked to develop their own unique marketing plan for an online university. Afterwards, they were placed into groups of three and told to do the same thing. This, Hunter explains, allowed them to see how each individual’s idea was utilized within a group setting.
The results of this first test show that indeed the “jerk-ish” quality—indicated by lower levels of the agreeableness trait on the personality test—does result in idea utilization. The jerk quality is not, however, an indicator for innovative thinking. Being a jerk is good for pushing an idea, but not necessarily for creating a good one.
To determine whether this jerk quality was useful in all social contexts, Hunter and Cushenbery ran a second test—online—with 291 individual participants. Here, subjects were told to come up with a solution to a problem and propose it to two other members of a small chat room. The catch: Those two other chatters were actually actors following a script, either offering support for the participant’s idea, or being more confrontational. The results from this second test showed again that the jerk trait helps push through an idea in a more hostile environment, but proves to be harmful to creative thinking in milder settings.
So jerks aren’t necessarily all bad—if, that is, you’re in an office full of other bozos. In this case, Hunter says, “make sure there are ‘jerks’ on hand to push their ideas forward.”
But, Hunter cautions, in an office that really wants to push creative thinking, avoid the pompous windbags.
A new study finds that punishment can actually make our kids lie more.
Like most other kids, I was afraid of lying to my parents. As a result, I lied to them often.
I don’t think I’m alone in saying that my parents embedded a sense of right and wrong in me. Included in that framework was an understanding of how important it is to always tell the truth. But that wasn’t just because telling the truth is the objectively right thing to do. It’s also because not telling the truth is wrong—and there would be consequences for it. But according to a new study by McGill researchers, punishment is actually an ineffective way to deal with lying kids. It might just make them lie more.
The experiment, led by child psychology professor Victoria Talwar, first placed a child in a room with his or her back to a sound-enabled toy, like a stuffed animal. A researcher, also in the room, twice asked the child to guess which toy was making the sound. Next, a new toy was placed on the table, this time with a decidedly unrelated sound playing. The researcher explained that he or she would leave the room and return momentarily, at which point the game would resume. But while the researcher was out, the child was very clearly instructed not to peek at the toy. The children were given a specific set of consequences for peeking, ranging from the “punishment-no appeal”—saying that looking at the toy most certainly gets the child into trouble—to the “no punishment-internal appeal”—it’s important to tell the truth about peeking because that’s the right thing to do.
While the researchers were out of the room, a video camera monitored the children—there were 372 in all, ages four to eight. When the researchers returned, they asked each child whether he or she had disobeyed the instruction and looked at the toy.
There must have been a Tickle Me Elmo involved at some point, because two-thirds of the kids broke the rule. Talwar found that two-thirds of the “peekers” lied about having done so, the majority of whom were promised consequences for lying. Interestingly, the “appeals” methods—stressing honesty as being the objectively right thing, or at least a way of pleasing the adult—proved to be much more effective in promoting honesty than focusing purely on punishment. The external appeals, stressing how happy a child’s honesty would make the researcher, proved to be most effective. “Because children at a young age are most concerned about pleasing adults, external appeals may have the greatest potency in motivating children to tell the truth,” the authors write.
So, don’t threaten to punish your kid for lying. Instead, focus on how happy you’ll be when they tell the truth.
“The bottom line is that punishment does not promote truth-telling,” Talwar says in a press release. “In fact, the threat of punishment can have the reverse effect by reducing the likelihood that children will tell the truth when encouraged to do so.”
So yes, mom and dad, I lied sometimes. But that’s all your fault.
An experiment tracks how a new medical test spreads among doctors.
It’s appealing, the notion that ideas can spread like viruses, but even those fond of the analogy acknowledge it’s not necessarily a perfect one. A recent experiment might help clarify the matter, though. Studying the spread of a lab test at Northwestern Memorial Hospital’s intensive care unit suggests that, in some contexts, it takes some persuasion on top of exposure for doctors to adopt a new idea.
Although interest in how ideas spread has exploded in recent years, tracking that process in the real world is generally pretty tough. Most of the time, researchers focus on tweets and other social media to study how relatively simple memes spread. But a collaboration led by Curtis Weiss and Julia Poncela-Casasnovas took an unusual approach. Weiss and another physician co-author knew about a new, faster test for bacteria in critically ill patients and asked the NMH lab to supply the new test—without actually telling anyone about it. On a randomly selected day the week after the team received approval for this experiment, the researchers told two other critical-care doctors about the pros and cons of the test, also mentioning that it was already available at the hospital.
“This happened in one independent informal talk with each one. After that, we just sat back and collected the data of who was using which test, without any further interference,” Poncela-Casasnovas says in an email.
Over the next eight months, Weiss, Poncela-Casasnovas, and their team tracked the schedules of 36 doctors on the ICU—so they knew who had worked with whom—as well as lab orders for both the new and older tests. Data in hand, they tested two kinds of models: contagion models based on epidemiology and models of persuasion. In contagion models, influence goes one way, from one infected person to an uninfected one, or from someone who has adopted an idea to one who hasn’t. In persuasion models, on the other hand, influence is a two-way street, and adoption isn’t an either/or condition. Instead, people have some belief in a new idea’s value, and those who believe in it more are in turn more likely to adopt it.
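The distinction between the two model families can be made concrete with a toy simulation. This is not the authors' actual model—just a minimal sketch, with made-up parameters (`p`, `rate`, `threshold`), of one-way, all-or-nothing contagion versus two-way, belief-based persuasion:

```python
import random

def contagion_step(adopted, contacts, p=0.3):
    """SI-style spread: an adopter 'infects' a non-adopter with probability p.
    Influence flows one way, and adoption is all-or-nothing."""
    new = set(adopted)
    for a, b in contacts:
        if a in adopted and b not in adopted and random.random() < p:
            new.add(b)
        if b in adopted and a not in adopted and random.random() < p:
            new.add(a)
    return new

def persuasion_step(belief, contacts, rate=0.2, threshold=0.5):
    """Belief-based model: each contact nudges both parties' beliefs toward
    each other (two-way influence); a doctor adopts once belief > threshold."""
    new = dict(belief)
    for a, b in contacts:
        avg = (belief[a] + belief[b]) / 2
        new[a] += rate * (avg - belief[a])
        new[b] += rate * (avg - belief[b])
    adopters = {d for d, v in new.items() if v > threshold}
    return new, adopters
```

In the contagion sketch, one exposure can flip a doctor from non-adopter to adopter outright; in the persuasion sketch, a skeptical doctor can also pull an enthusiast's belief back down, and adoption only follows once accumulated belief clears a threshold.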
Contagion models didn’t fit the empirical data very well, either in terms of the final number of doctors who adopted the new test or how that number had changed over time. “However, the persuasion model not only replicates very well the whole empirical dataset … but it is also good at predicting the future evolution of the process,” Poncela-Casasnovas says. That is, when the team fine-tuned the persuasion model with the first several months of data and kept it running, its predictions matched the real-world observations well.
While the results probably don’t apply everywhere—Poncela-Casasnovas says she’d expect different findings if doctors’ interactions had been random, as opposed to being structured by a schedule—they do suggest there’s more at work than infectious-disease models capture. The findings, she adds, may also help researchers design interventions aimed at boosting adoption of new medical tests or other ideas.
A new study suggests intriguing structural differences between the brains of Type I and Type II bipolar disorder sufferers.
While people with Type I and the less-severe Type II bipolar disorder share some of the same symptoms, there are significant differences in the physical structure of their brains. Type I sufferers have somewhat smaller brain volume, researchers report in the Journal of Affective Disorders, while those with Type II appear to have less robust white matter.
As brain imaging technologies have advanced and matured over the past few decades, there’s been considerable interest in understanding whether and how there are differences between the brains of people with mental illness and those without. In particular, neuroscientists studying depression have been interested in structural variation, such as differences in total brain volume. Still, the various forms of bipolar disorder have received somewhat less attention than others, such as major depression, schizophrenia, or autism.
That led Jerome Maller and colleagues at Monash University in Melbourne, Australia, to look into whether there were structural differences among the brains of people with different sorts of bipolar disorder. Using standard MRI scans—much the same as you would get if you’d had a concussion or bleeding in the brain—on 16 Type I and 15 Type II bipolar patients along with 31 healthy control subjects, the team examined whether there were differences in gray matter, white matter, and cerebrospinal fluid. The team also used a relatively new technique called diffusion tensor imaging (DTI) to measure the integrity of the brains’ white matter, the long nerves that connect different brain regions to each other.
Overall, there was less total brain volume—gray and white matter volume added together—and more cerebrospinal fluid volume in bipolar patients than in healthy controls, consistent with other recent studies suggesting a connection between brain volume and depression. After controlling for total brain volume, however, Type II patients’ brains were essentially the same as controls’ brains, while Type I patients had relatively higher volume in the caudate nucleus and other areas associated with reward processing and decision making. DTI studies, meanwhile, revealed that while patients with Type I and II bipolar disorder had reduced white matter integrity relative to controls, the effect was stronger among those with Type II, particularly in the frontal and prefrontal cortex, suggesting that Type II bipolar disorder is in some way a cognitive dysfunction.
Though the results are intriguing, the authors point out that their study is just the start. The team didn’t have access to data on how long patients had been diagnosed with bipolar disorder, let alone how long they’d actually had the disease, which often goes undiagnosed for years or even decades. In addition to addressing those issues in future studies, the researchers also hope to improve sample sizes and gather additional data about factors such as medications, family history, and genetics.
Paradoxically, insects and other animals eat more junk food in low-diversity median strips than in parks.
Ants ate food waste at a rate of up to 975 kilograms a year at sites in the medians of Manhattan’s Broadway and West Street, roughly the equivalent of 60,000 hot dogs, 200,000 Nilla wafers, or 600,000 Ruffles potato chips, according to a study just published in Global Change Biology.
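A quick sanity check of those equivalences: dividing 975 kilograms by each item count gives the per-item weight each comparison implies (the item counts are from the study; the per-item weights are my inference):

```python
total_g = 975 * 1000  # 975 kilograms of food waste per year, in grams

# Per-item weight implied by each equivalence quoted in the study
for item, count in [("hot dog", 60_000),
                    ("Nilla wafer", 200_000),
                    ("Ruffles chip", 600_000)]:
    print(f"{item}: {total_g / count} g each")
```

That works out to about 16 grams per hot dog, 5 grams per wafer, and 1.6 grams per chip.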
The point of the study wasn’t just to see how much junk food ants could get behind their chompers, though. What was of particular interest was how much the ants on Broadway and West, mostly a variety known as pavement ant (Tetramorium Species 5), scarfed compared to the much more diverse ants in Central Park and 13 other parks in Manhattan.
To reach that conclusion, entomologist Elsa Youngsteadt and colleagues at North Carolina State University deliberately littered, placing Ruffles, Nilla Wafers, and Oscar Mayer Extra Lean Franks out in 21 park sites and 24 grassy street medians. There were two samples at each site, one open and one caged, so only ants and other insects could get in—that allowed the researchers to study what insects would consume in the absence of vertebrates such as rats.
Medians, the team found, were generally less ecologically diverse places, hosting two fewer ant species and fewer arthropod families as well. In particular, pavement ants, which came to the United States from Europe about a century ago, were much more common in medians, showing up in nearly all median sites and only around a third of park sites. That’s significant, because the researchers’ analysis shows that in sites with pavement ants, animals including insects and other arthropods ate more than in sites without. Independent of that result, streetwise animals ate about two to three times as much in medians as their park-dining relatives. Environmental factors such as temperature and the depth of leaves on the ground also influenced consumption.
Overall, the results suggest that ants and other insects eat enough food waste to help keep populations of less desirable scavengers such as rats in check. But the more interesting aspect of the results is what they say about biodiversity, the authors explain—namely, that how well an ecosystem functions is not simply a matter of biodiversity. Usually, increasing biodiversity goes along with more efficient use of resources, but that seems not to be the case with urban-dwelling ants in New York City.
“We expected that the more diverse arthropod assemblages in parks should consume more food waste. Although we confirmed that park sites supported more ant species and more hexapod families than did median sites, park arthropods ate 2-3 times less food than those in medians,” the authors write. “Our analyses point to the importance of species identity and habitat characteristics, rather than diversity, as predictors of food removal,” and likely other ecological functions as well.
Researchers use fMRI and computer algorithms to identify neural markers of autism.
Like many neurological and cognitive disorders, autism spectrum disorder can be tricky to diagnose. As the Centers for Disease Control and Prevention puts it, there’s no blood test for it. Fortunately, it might not be too long before reading words describing social interactions during a quick brain scan could diagnose autism with remarkable accuracy, researchers report today in the journal PLoS One.
Some kind of autism affects about one in 68 children in the United States, according to CDC statistics, though exactly what causes autism remains a bit of a mystery. Genetic, neurological, and other explanations abound, and there’s no particularly clear consensus on which are correct. Still, there’s an intriguing common thread. Somehow, people with autism have an abnormal sense of self.
“This is a very long-standing idea, that the representation of self is altered,” says Marcel Just, the new study’s lead author. It goes back to early studies of autism in the 1940s, when researchers noticed that autistic children “referred to themselves as ‘you.'” Most kids learn that while others use second or third person to describe them, they’re supposed to use first person. “In autism, that somehow doesn’t work.”
The fact that autism is connected to mental representations of the self fits in nicely with another of Just’s research interests: algorithms for decoding which words a person is thinking about based on their brain activity. Since autism involves atypical representations of the self, Just says, it made sense to try out these methods as a way of diagnosing the disorder. First, the team used functional magnetic resonance imaging to track brain activity in 17 high-functioning autistic people and 17 healthy controls as they read eight words referring to social interactions—compliment, insult, adore, hate, hug, kick, encourage, and humiliate—while thinking about each interaction from their own perspective or another person’s. Using that data, the team extracted a series of fine-grained activation patterns correlated with reading the social-interaction words, first at the individual level and then at the group level. Sorting those patterns, or “features,” with a standard classifier algorithm, Just and his colleagues could identify which were associated with autism—that is, which showed up more or less often in people with autism.
While there’s something to learn from which features worked best, the more interesting thing may be how well the researchers could tell the difference between people with and without autism. As a test of their method, the team used data from 33 participants to identify the most useful features and then used the presence or absence of those features in the remaining person to predict whether he or she had autism. They got the answer right 33 out of 34 times.
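The validation scheme described here is leave-one-out cross-validation: train on 33 participants, predict the held-out 34th, and repeat for every participant. Here is a minimal sketch of that loop, using made-up two-dimensional “feature” vectors and a simple nearest-centroid classifier as stand-ins (the study’s actual fMRI features and classifier are not reproduced here):

```python
import statistics

# Hypothetical illustration only: toy "activation feature" vectors in which
# group 0 clusters near 0.0 and group 1 clusters near 1.0.
data = [([0.1 + 0.02 * i, 0.2], 0) for i in range(5)] + \
       [([1.0 + 0.02 * i, 1.1], 1) for i in range(5)]

def nearest_centroid_predict(train, x):
    """Predict the label whose training centroid lies closest to x."""
    dists = {}
    for label in {y for _, y in train}:
        pts = [v for v, y in train if y == label]
        centroid = [statistics.mean(col) for col in zip(*pts)]
        dists[label] = sum((a - b) ** 2 for a, b in zip(x, centroid))
    return min(dists, key=dists.get)

correct = 0
for i, (x, y) in enumerate(data):
    train = data[:i] + data[i + 1:]   # leave participant i out
    if nearest_centroid_predict(train, x) == y:
        correct += 1

print(f"leave-one-out accuracy: {correct}/{len(data)}")  # → 10/10 on this toy data
```

Because each prediction is made for a person the classifier never saw during training, the 33-out-of-34 figure reflects generalization rather than memorization.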
That suggests that fMRI-based autism diagnosis—and perhaps diagnosis of myriad psychiatric diseases as well—could be just around the corner, Just says. When combined with additional results on differences between white matter in autistic brains and others, the findings may also help sort out the disorder’s underlying causes, he says.
Researchers uncover a brain-chemistry connection between smoking and alcohol dependence.
Alcoholics are usually smokers, too, and that presents something of a problem for someone trying to get back on the wagon. It seems that smoking makes it harder to quit drinking. Puzzlingly, it’s not nicotine but rather an as yet unknown component of tobacco smoke that’s to blame, according to research published today.
Both alcohol and nicotine dependence result from complex processes in the brain, involving a number of different chemicals, known as neurotransmitters, that help regulate signals sent from neuron to neuron and across regions of the brain. At the same time, tobacco smoke has many different components, making it tough to sort out how alcohol and nicotine interact. One known point of overlap between alcohol and nicotine addiction, however, is the neurotransmitter gamma-aminobutyric acid, or GABA, which acts through receptors known as GABA-A. GABA actually slows down signals as they spread through the brain, but it’s thought to underlie the particular kind of high one gets when drinking, and nicotine similarly stimulates its production.
Complicated brain chemistry aside, the connection led Yale University’s Kelly Cosgrove and her team to wonder what would happen if someone went through alcohol withdrawal while either continuing or quitting smoking. They studied 22 alcohol-dependent men and five alcohol-dependent women along with 20 men and five women who weren’t addicted to alcohol. About three-fifths of both groups were smokers. Doctors admitted the alcoholic group into the Department of Veterans Affairs’ Clinical Neuroscience Research Unit for a treatment program, during which time they underwent PET scans to measure the availability of GABA-A receptors—basically, the number of spots where a GABA molecule could attach and do its work—in different parts of the brain.
Over a four-week period, alcohol withdrawal led to higher levels of GABA-A receptor availability in both smokers and non-smokers, and at the same time, cravings for alcohol declined in both groups. However, alcohol-dependent non-smokers’ levels returned to those of the non-smoking controls by the end of the four weeks, while alcoholic smokers’ levels stayed elevated throughout the treatment period. Meanwhile, although both alcoholic smokers and non-smokers displayed less desire to drink as the treatment went on, smokers generally had about twice as much desire to drink as non-smokers. An additional analysis suggested a connection between GABA-A receptor availability and cravings for alcohol, but only among smokers. Finally, a study using rhesus monkeys found that nicotine didn’t play a role in GABA-A receptor availability, leaving exactly how smoking affects GABA-A unclear.
“Continued smoking during withdrawal interfered with the subsequent normalization of the GABA-A receptors and was associated with higher levels of craving, which may increase relapse risk,” the authors write in Proceedings of the National Academy of Sciences. The team also concluded that researchers looking to treat alcoholism should focus on the GABA-A system in future studies.
Tiny effects of attitudes on individuals' actions pile up quickly.
Even when attitudes have only a small impact on actions, individual prejudices can easily turn into racist institutions, according to a recent study in the Journal of Personality and Social Psychology.
That conclusion was one of several mostly technical results in a paper about the real-world value of something called the Implicit Associations Test. The IAT uses a special categorization task and translates the time it takes people to perform that task into a measure of a person’s underlying, perhaps even unconscious, attitudes about anything from names to racial and ethnic groups. And, as dozens of studies show, most of us, regardless of who we are and where we come from, show signs of underlying racial and other sorts of prejudice, at least as measured by the IAT. Sad, but apparently true.
So far, none of this is precisely what authors Anthony Greenwald, Mahzarin Banaji, and Brian Nosek—all key figures in the development of the IAT—were after. Mainly, they focused on a fairly technical debate about the IAT’s ability to predict behavior in the real world. Some argue there’s only a weak link between how a person does on an IAT and how they act day to day—for example, between scores on an IAT for race and police shootings of unarmed black teens and preteens.
Greenwald, Banaji, and Nosek dispute that contention, but even then that debate misses a broader point, they argue: Even if the connection between an individual’s implicit attitudes and explicit acts is very weak, that connection can have big consequences for society.
Here’s an example, based in part on the idea that interactions with employers or police happen over and over again. Let’s say the effect of attitudes on actions is tiny. To be concrete, suppose differences across police officers on an IAT measure of race attitudes—or any attitude measure, really—predict just 0.17 percent of the difference between the fractions of blacks and whites an officer stops. By any reasonable standard, that’s a very weak relationship. Yet if whites are stopped in one percent of their encounters with police, blacks will be stopped twice as often. And over 25 such encounters, blacks will be stopped at a rate 17.4 percent higher than whites.
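The compounding at work here can be sketched numerically. The per-encounter stop probabilities below are illustrative assumptions, not the study’s figures; the point is only that a modest per-encounter gap widens once you ask about the chance of being stopped at least once over many encounters:

```python
# Illustrative only: assumed per-encounter stop probabilities, not study data.
p_white, p_black = 0.010, 0.012
n = 25  # repeated encounters with police

def at_least_once(p):
    """Probability of being stopped at least once in n independent encounters."""
    return 1 - (1 - p) ** n

ratio = at_least_once(p_black) / at_least_once(p_white)
print(f"relative stop rate over {n} encounters: {ratio:.3f}")  # → about 1.173
```

With these made-up numbers, a 0.2-percentage-point per-encounter gap becomes roughly a 17 percent gap in the chance of ever being stopped, illustrating how small individual-level effects accumulate into institutional-level disparities.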
Setting aside for the moment the inside-baseball debate about whether the IAT is the most useful measure of prejudiced attitudes, the big picture is that it doesn’t take much at the level of individual attitudes to make society a little—or a lot—less equal. On the statistics side, it’s not even a particularly new argument. But in this case, it makes a big difference: From a nearly non-existent connection between attitudes and actions, institutional racism is born.
The online lexicon spreads through racial and ethnic groups as much as it does through geography and other traditional linguistic measures.
Everyone who has driven across the United States or even across a state knows that not everybody speaks the same way. There are regional dialects and slang, and within regions, there are demographic differences as well. Something similar is true of emoticons, abbreviations, and other forms of online shorthand, except that online, race plays an outsized role compared with other more traditional measures like geographical distance.
With the global reach of the Internet and communication systems like Twitter, you might reasonably think that at least some of its neologisms would have similar global reach. And you’d be right in many cases. Others you might not be so familiar with. The abbreviation ikr—short for “I know, right?”—is much more common in Detroit than other parts of the U.S., and that’s just one of many examples. Exactly what determines how such words spread, however, is an open question.
So Jacob Eisenstein and colleagues at Georgia Tech, University of Massachusetts-Amherst, and Carnegie Mellon University decided to collect three years of Twitter data—107 million tweets sent by 2.7 million users between 2009 and 2012, complete with the users’ locations—to see what kinds of patterns they could find. They focused on 2,603 words, which vary from the obvious (“crzy”) to the more cryptic (“ion,” meaning “I don’t,” as in “ion even care”) and beyond (“;3,” which apparently has something to do with anime cats).
But Eisenstein and company weren’t interested in merely cataloging Twitter dialects. Instead, they built a mathematical model that describes how words like “ikr,” “ion,” and “;3” spread between different parts of the United States over time. Using that model, they constructed a network of connections between cities—technically, Metropolitan Statistical Areas—and, finally, looked at demographic and geographic factors that could explain those connections.
The single most important predictor of whether words would spread between two cities was the difference in the percentage of African Americans living in each. That was closely followed by the difference in the percentage of Hispanics. More traditional predictors of lexical spread, such as geographic distance, played important but somewhat smaller roles. Boston and Seattle share similar demographics, for example, as do Los Angeles and Miami, and Washington, D.C., and New Orleans—and those pairs of cities were quite likely to share similar Twitter lexicons despite the thousands of miles between them.
The particularly strong effects of racial similarity between cities, the authors write in PLoS One, suggest that at least one pattern prevalent in the physical world is retained—perhaps even amplified—in the online world. “In spoken language, African American English differs more substantially from other American varieties than any regional dialect; our analysis suggests that such differences persist in the virtual and disembodied realm of social media.”
A new study suggests it's relative wealth that leads people to oppose taxing the rich and giving to the poor.
Psychologists and political scientists have puzzled for some time about why the same Americans who favor greater economic equality don’t also support the kinds of redistributive economic policies that would get them there. Perhaps, some say, it’s political ideology. Perhaps it’s some kind of twisted self-interest. Or perhaps it’s because of how easy it is to make someone feel relatively poor or relatively well-off.
Conventional wisdom holds that much of the opposition to redistributive economic policies—policies that emphasize taking some money from the wealthiest people and using it to provide services for everyone, including the poor—stems either from ideology or economic interest. Unfortunately for the conventional wisdom, neither is a very good match to reality. Political scientists have known for decades that most people don’t know enough about politics to have anything like an internally consistent ideology—in fact, one influential school of thought argues that most Americans’ political opinions are essentially random. Meanwhile, economic theory suggests that in a society with such economic inequality as ours, nearly everyone should support redistribution, yet that clearly isn’t the case.
So what is it then? While it’s not a complete answer, Jazmin Brown-Iannuzi and colleagues at the University of North Carolina-Chapel Hill say that subjective social status, rather than actual financial well-being, is likely playing a role. They backed that up with a series of studies they conducted online in which they carefully manipulated how people thought about their own social status.
In one experiment, the researchers asked 152 participants to answer a series of questions about their income, spending habits, and so on. After answering the questions, the subjects got feedback about how their discretionary income compared with that of other similar people using a “Comparative Discretionary Income Index.” The team next asked participants to place themselves on a 10-step ladder indicating what they thought their social status was. Finally, each person answered a series of questions gauging their support for redistributive economic policies.
The CDI Index feedback was, of course, fake, but it had a real impact. People who got scores indicating they were higher status than their peers placed themselves a rung higher on the social-status ladder, and those who placed themselves higher on the ladder expressed less support for redistributive policies, even after controlling for political leanings and actual indicators of social status, such as education and income.
“We suggest that social comparisons are critical for understanding attitudes toward economic inequality, as differences in relative status can contribute to differences in political preferences,” the authors write in Psychological Science. “H.L. Mencken once quipped that a wealthy man was one who earns $100 a year more than his wife’s sister’s husband. Attitudes toward the distribution of wealth in society may follow the brother-in-law rule as well.”
The first study of friends' perceptions suggests they know something's off with their pals but like them just the same.
Social anxiety disorder (SAD) can be devastating. In the worst cases, sufferers of the illness struggle with basic tasks, like signing a check in front of another person, and the majority routinely report less satisfying friendships and other relationships. A new study, however, suggests people with SAD might be underestimating how much others like them: While friends perceived the relationships somewhat differently, they reported higher levels of friendship intimacy and satisfaction than their SAD friends.
Researchers have known for a while that social anxiety disorder leads people to perceive their situations more negatively than others, specifically with regard to friendships. However, previous studies have generally relied on patients’ own impressions of their friendships, and typically those impressions concern friendships in general rather than any specific relationship in particular. As a result, scientists who study SAD don’t really know how bad patients’ friendships are—maybe the disease really does harm friendships, or maybe it just harms one’s perceptions of those friendships.
Sorting that out requires something no one seems to have done before: finding friends of people with SAD and asking them how they felt about their chums. That’s precisely what Thomas Rodebaugh and a team at Washington University in St. Louis did. They asked 77 people diagnosed with generalized social anxiety disorder—meaning they experienced social anxiety in a number of different situations—and a control group of 63 people without SAD symptoms to bring a pal into the lab. There, both the primary participants and their friends filled out surveys concerning how much they liked each other, how close they felt to each other, and how satisfying they felt the friendship was.
The survey results indicated that people with SAD viewed their relationships more pessimistically compared with people in the control group. And, as Rodebaugh and team suspected they might, SAD sufferers reported feeling less close to their friends than their friends did to them. People with SAD also reported liking their friends less than the other way around and being slightly less satisfied with the relationship than their friends.
That’s not to say that friends of those with SAD couldn’t tell the difference. Compared with the control group’s friends, friends of SAD sufferers perceived their friends to be less dominant in the relationship and also less well adjusted.
“We found clear evidence that SAD is related to self-report of impairment in specific friendships, consistent with the hypothesis that SAD is a fundamentally interpersonal disorder,” the authors write in the Journal of Abnormal Psychology. “However, we found little evidence that friends experienced the same level of friendship impairment, despite them seeing differences” between those with and without social anxiety disorder. That, the authors explain, provides support for treatments that focus on helping people with the disorder see that they’ll come across better than they think they will.
Members of a minority ethnic group are less likely to express support for gay equality if they believe their own group suffers from discrimination.
A common-cause coalition of oppressed minority groups was one of those 1960s fantasies that failed to materialize. A new study published in the Journal of Experimental Social Psychology suggests one reason why.
In two large surveys and a lab experiment, African Americans, Asian Americans, and Latinos were less likely to express support for gay equality if they believed their ethnic group suffered from discrimination.
Maureen Craig and Jennifer Richeson of Northwestern University attribute this to the psychological phenomenon known as social identity threat, in which the self-esteem of a devalued group is bolstered by derogating other groups.
While that’s a disheartening dynamic, the researchers found, to their surprise, that members of one racial minority—Asian Americans—who had personally experienced discrimination expressed more positive attitudes toward homosexuality.
Individually hurtful experiences, as opposed to a general sense that one’s entire race has been wronged, “may better promote sympathy and/or perceived commonality with other disadvantaged groups.” But absent direct experience with intolerance, group solidarity trumps empathy for outsiders.
A new study shows that the neural plasticity needed for learning doesn't vanish as we age—it just moves.
Turns out you can teach an old dog new tricks—the dog just needs to use a different kind of nerve cell to learn them. That’s the thrust of a study out today that presents perhaps the first clear evidence that aging people’s brains still undergo physical changes as they learn, just not the way their youthful counterparts do.
At issue is the idea that brains don’t change much in adulthood. A host of magnetic resonance imaging (MRI) studies have proved that wrong, but those same studies seemed to confirm another intuition that, as we age, we lose the capacity to grow, repair, and modify connections between nerve cells, a capacity called neural plasticity.
“It has been said that old people are less plastic, meaning the effect of learning is much less,” says Takeo Watanabe, a psychologist at Brown University and one of the authors of the new study. But, he says, behavioral experiments “have shown that is not necessarily the case.” In visual learning experiments in which participants must remember sets of images or look for minute changes in an image, older experimental subjects can learn at about the same rate as younger people, Watanabe says. But if older brains are less plastic, how are older people still able to learn so well?
The key turns out to be which kinds of brain tissue are plastic in younger and older people. Standard MRI techniques are really only designed to study the brain’s gray matter, which mainly comprises neuron cell bodies. White matter, on the other hand, is made up of long fibers called axons that can stretch far across the brain. White matter is just as essential as gray matter, but it doesn’t show up in much detail on a standard MRI. For that, the team needed a relatively new technique called diffusion tensor imaging.
After using standard MRI and DTI to scan the brains of 18 adults aged 65 to 80 and 21 others aged 19 to 32 before and after several days practicing a visual learning task, the researchers found that both age groups learned at similar rates—but their brains responded differently. Gray matter changed only in younger adults, while white matter changed only in older ones—there, DTI results suggested that axons had grown thicker and developed more robust myelin shells, which can help prevent crosstalk between the brain’s electrical connections.
That sort of “double dissociation” between changes in younger and older people’s brains is a clear sign that the two age groups’ brains change in different ways, despite their similar learning abilities. Perhaps, Watanabe says, increased axon plasticity in older people helps compensate for a degradation in the efficiency of synapses—the junctions where nerve cells take in information from others—though he is quick to point out that’s just one guess. Our brains may have to age a bit more before we actually figure it out.
But it's not in the rainbow and sing-along way you'd hope for. We just don't trust outsiders' judgments.
Ethnic diversity could help prevent stock market and housing bubbles, according to new experiments, though the reason might be a little bit depressing. Basically, we’re less likely to trust others’ judgment, and therefore less likely to follow their leads, when they come from ethnic groups different from our own.
That’s the conclusion of a paper just out in Proceedings of the National Academy of Sciences that reports the results of two stock-trading experiments conducted in Singapore and Kingsville, Texas. The project was motivated in part by a desire to understand how the housing bubble followed so closely on the heels of the 1990s tech bubble, lead author and Columbia University economist Sheen Levine writes in an email. “In 1999, everybody I knew was starting an Internet company, and in 2005 the same people assured me that real estate prices can only go up,” he says. “I wondered how intelligent people, versed in economics and finance, can all ignore reality so well.”
One hypothesis is a kind of groupthink. If somebody’s buying one stock and I’m not, the groupthink goes, he must know something I don’t, and I should follow suit—while in truth the buyer might actually need a reality check.
Ethnic diversity, some suggest, could be a solution to this conundrum. University presidents have defended programs aimed at racial and ethnic minorities on those grounds, and research seems to back up the idea that a wider range of viewpoints leads to more balanced, groupthink-free decisions. Yet ethnic diversity has a dark side too, Levine and co-authors point out. Sometimes, it leads to more conflict than progress.
To see whether diversity could improve stock-market decisions—and if so, why—the researchers divided 180 people with backgrounds in business or finance into groups of six. Those groups played a 10-round stock-market game in which players traded a dividend-paying stock. Half the groups were ethnically homogeneous, while the other half had at least one ethnic minority—say, five Chinese players and one ethnically Malay player. While traders knew the ethnic make-up of their groups, they couldn’t communicate with each other, and all trades were anonymous.
As expected, homogeneous groups set inflated selling prices, yet traders in those groups still bought the stock, and the stock price climbed over 10 rounds. Just the opposite happened in ethnically diverse groups: Traders refused inflated selling prices, and over time the stock price fell to roughly the price it would have in an idealized market with rational traders.
It would have been nice if that had happened because traders in diverse groups took others’ views into account when setting prices, but with anonymity and a lack of communication, it’s more likely they simply didn’t trust others’ judgments when it came to setting reasonable buying and selling prices.
“Homogeneity, we suggest, imbues people with false confidence in the judgment of coethnics, discouraging them from scrutinizing behavior,” the authors write.
Even under the guidance of a specialist trainer, computer-based brain exercises have only modest benefits, a new analysis shows.
Maybe the scariest part of growing old is the possibility of cognitive decline—forgetfulness, difficulty thinking clearly, and, in the worst cases, full-on dementia. It’s therefore natural that researchers and entrepreneurs hoped that specialized brain training could make a difference, just as daily walks might keep an aging body fit.
Unfortunately, that hope remains for the most part unfulfilled, according to a study published Tuesday in PLoS Medicine. In healthy older adults, computer-based brain exercises have limited benefits, and then only when supervised by a trainer one to three times a week. And despite what Lumosity and BrainHQ will tell you, doing the training at home had no effect at all, at least in the short term. The meta-analysis and an accompanying commentary add to a growing chorus of scientists who argue that cognitive training may have value, but that as yet there is very little evidence to support that claim.
Amit Lampit, Harry Hallock, and Michael Valenzuela of the University of Sydney’s Brain and Mind Research Institute reached their conclusions following a meta-analysis of 51 studies that investigated the effects of computerized cognitive training, or CCT, on nearly 5,000 senior citizens. Lampit, Hallock, and Valenzuela focused specifically on experiments that used at least four hours of CCT and that tested cognitive abilities just before and just after training. Even with those criteria, that left a considerable range of CCT approaches, including both center- and home-based methods, as well as measures of information-processing speed, working memory, attention, and other skills.
Overall, the most important result was that center-based CCT guided by a specialist has a small but discernibly positive effect on cognitive abilities, much as you might expect a fitness trainer at the gym to have a small positive impact on your physical health. Home training, as the analogy might suggest, had essentially no effect.
Breaking the results down further, the researchers found that CCT had less impact on some skills than others. The largest effects, though still generally small, were on memory for images, working memory—the system that lets you keep track of different pieces of an idea you’re pondering, for example—and processing speed. CCT had little to no effect on attention or executive functions, the sorts of things involved in impulse control, planning, and generally avoiding bad spending decisions. And as with the big picture, at-home CCT had no effect on cognitive abilities.
The research is not without limitations, the team notes. The results do not necessarily apply to those already experiencing cognitive impairments, and it remains possible there are more substantial long-term benefits from computer-based brain training—though that mainly highlights the need for more study, the authors write.
In an accompanying perspective article, PLoS Medicine consulting editor Druin Burch writes that CCT’s modest effectiveness “is a conclusion of value to academics in the field and to those with interest in selling training programmes. The value to others depends on how well they understand the conclusion’s limits.” In particular, Burch warns against consumers interpreting the results with anything but caution.
Brief, directed conversations are more effective at identifying liars than fancy behavioral analysis, experiment suggests.
By now, it’s safe to say that the Transportation Security Administration’s behavioral detection officers—agents trained to detect suspicious behavior simply by watching people—aren’t very effective. Still, the TSA would like to have tools for detecting potential threats beyond current body scanners, which have their own problems. Now, a pair of English researchers report a new interview approach that could help tell the difference between liars and others.
Lie detection is a historically controversial subject, and a field perhaps dominated more by the hope that it’s possible than by particularly strong scientific research. Though a few prominent scientists think we can detect lies using physiological measurements or facial expressions, most think interview techniques are more effective for identifying prevaricators. Interviews, the thinking goes, are more mentally taxing on liars than on truth tellers, and they yield more opportunities for liars to contradict themselves. On the other hand, an interview must last long enough to set those traps and spring them.
Thomas Ormerod and Coral Dando‘s solution is to engage passengers in brief, friendly conversations that elicit fairly detailed accounts of individuals’ travel plans and backgrounds. Those conversations are meant to be quite flexible, so that officers can probe details of a passenger’s story as they come up. Key to the approach is to let the traveler do most of the talking, giving agents more information to go on when evaluating a passenger’s truthfulness. This contrasts with methods such as “suspicious signs,” which rely on a fixed set of questions with generally shorter answers and which often emphasize supposed behavioral tells over information gathering.
To see if their approach worked, the pair went into London Heathrow Airport and a few others and trained 79 officers in their method, called Controlled Cognitive Engagement (CCE). Another 83 trained in the suspicious-signs method also took part. To test the methods, Ormerod and Dando recruited 204 people and gave them one goal: con their way past airport security agents using falsified boarding passes and false identities. The agents’ goal was to stop as many of the fakes as they could—a particularly difficult challenge since the fakes had blended in with legitimate air travelers who showed up simply to catch a flight.
The contrast between methods was stark. Agents trained in CCE stopped two-thirds of the mock passengers, compared with a dismal three percent stopped by agents using suspicious signs, which is standard protocol at many airports around the world. Meanwhile, agents using CCE stopped only three percent of real passengers who agreed afterwards to participate in the study—about the same false-positive rate as the suspicious-signs method.
“Our results have implications for practitioners, both in security screening, and more generally for professional lie catchers such as police officers and court officials,” Ormerod and Dando write in the Journal of Experimental Psychology: General. “In contrast to current practice, we propose that security agents should not be trained to identify specific behaviors associated with deception.” Instead, agents should work to draw out potential inconsistencies through conversation, they argue.
People who live closer to the shore are more likely to believe in climate change and to support regulation of carbon emissions.
If you can feel the sea breeze on your face when you walk out of your house, you’re more cognizant of climate change.
That’s the conclusion of a new study of 5,815 New Zealanders, which finds “people living in closer proximity to the shoreline expressed greater belief that climate change is real, and greater support for government regulation of carbon emissions.” This held true even after taking into account the respondents’ age, gender, education, personal wealth, and political leanings.
The researchers, led by psychologist Taciano Milfont of Victoria University of Wellington, can’t definitively say why residents of coastal communities hold views more in line with the scientific consensus. But they suspect predictions of such disasters as flooding and sea level rise hit home for seaside dwellers in a more immediate, psychologically impactful way.
The ocean, they write in the online journal PLoS ONE, “may inspire a sense of respect for the power of nature and its changeability.” If so, the challenge for policymakers is to inspire similar reverence among the landlocked.
We at Pacific Standard are already convinced—but then, our offices are only about a mile from the ocean.
A new survey of eighth graders suggests that an unjustifiably high opinion of oneself has subtler effects on relationships than previously thought.
Nobody likes a know-it-all or a snob, or so the conventional wisdom goes. But in a new study, psychologists argue that students who feel unreasonably high on themselves academically don’t actually engender their peers’ contempt. It takes a sense of superiority targeted at one person to do that, and the chilly feeling that results is often mutual.
Much has been made in years past about the effects of self-esteem, both in academia and the popular press, but the conclusions are often inconsistent with each other. Some studies find enhanced or even inflated self-perceptions can be good for you and lead others to perceive you more positively. Others suggest that an enhanced self-image alienates others and leads you to a life of narcissism and apathy. But goals and methods often vary in these kinds of experiments. In particular, some studies examine a general sense of superiority to others, while some look at what happens when individuals feel superior to specific colleagues or peers. That led German psychologists Katrin Rentzsch and Michela Schröder-Abé to wonder whether there really is a difference between Johnny thinking he’s the smartest kid in the room, and Johnny thinking he’s smarter than Jenny.
To find out, Rentzsch and Schröder-Abé brought their science to that bastion of fraught social politics, eighth grade. They surveyed 330 eighth-grade boys and girls in eight schools in southeast Germany about personality traits, academic self-esteem, whether they felt academically superior to each of their classmates, and whether they liked each fellow student. They also calculated the average of each student’s scores in math, physics, German, and English, a measure that allowed them to determine whether students harbored unrealistically high opinions of themselves relative to specific others.
Analyzing their data, the psychologists found that students neither liked nor disliked kids with unrealistically high opinions of themselves any more than anyone else, as long as they weren’t singled out as the target of a big-headed peer’s feelings of superiority. When they were—when one student had an inflated sense of academic ability relative to a specific classmate—the targeted students disliked the kids targeting them. Big-egoed students didn’t entertain such subtleties, though—they just disliked everybody.
“Our findings may help to explain previous controversial findings on the interpersonal consequences of self-enhancement in that they reveal different effects at two levels of analysis,” the authors write in Social Psychological and Personality Science. “Although in our study, students high in habitual self-enhancement tended to dislike others, they were not disliked by others in return; whereas at the relationship level, feeling superior to a specific other was not so easily forgiven.”
It's not as surprising as you think.
Scientists have figured out how to control genes with their minds.
You read that right. A team of bioengineers has developed a proof-of-concept system with which a person can regulate simple gene functions using electrical signals in his or her brain. Odd though it seems, it might one day be a useful medical tool, the team reports in Nature Communications.
Actually, it shouldn’t be that surprising. The biology and neuroscience behind their technique isn’t all that new or even complicated by modern standards. Biologists first began to understand how to control gene expression—the process that allows organisms to produce different kinds of cells from the same DNA—in E. coli during the 1970s. More recently, bioengineers have devised ways to regulate gene expression in mice and humans. Theoretically, doctors could use gene expression to treat disease through various relatively non-invasive techniques—for example, illuminating light-sensitive proteins that bind to particular, targeted genes in the brain could help treat depression.
At the same time, brain scientists have stretched the boundaries of what we can do with our minds alone. Motivated in part by a desire to help those who’ve lost limbs, researchers have designed robotic arms a person can control using brain signals alone, and you can buy similar, though somewhat less sophisticated, devices online.
Still, it is something of a novelty to combine the two areas of technology into one. To do so, researchers at ETH Zurich’s Department of Biosystems Science and Engineering first designed implants to be placed inside a group of mice. Each had three main parts: a wireless receiver used to power the device, a near-infrared light-emitting diode, and a semi-permeable chamber containing a variant of the bacterium Rhodobacter sphaeroides. The bacterium had been modified so that, when near-infrared light shone on it, it would release a protein, secreted alkaline phosphatase, that plays a number of roles in humans, including regulating the immune-system protein interferon.
The power source is where mind control comes in. Using a commercially available headset that measures electrical signals on the scalp, a group of human test subjects trained themselves to control a brain-computer interface. The researchers then hooked the interface up to the implant’s wireless power source, allowing humans to control gene expression in mice.
You’d be forgiven at this point for wondering whether the work is the product of “because we can” thinking or even a mad scientist, but in the long term it might have practical medical value, writes senior author Martin Fussenegger in an email. Doctors could use devices like the one his team designed to manage gene therapy through thoughts. Farther down the line, “it may become possible to capture brain wave signatures associated with chronic pain and epileptic seizures” ahead of time, and those signals might be used to trigger an implant to provide treatment before pain or a seizure strikes.
All indications suggest that’s a long way off, however. For one thing, there remain ethical questions about using such implants, let alone having patients control them.