Parents, not educational setting, may be the key.
Girls taught in single-sex schools are no more competitive than their co-ed counterparts, according to a new study. That’s bad news for proponents of single-sex schools, and suggests it might be harder than we thought for women to break into competitive, male-dominated college majors and careers.
Though there’s no shortage of opinions on education in general and girls’ education in particular, reliable studies are hard to come by. Even if there were a reliable measure of success, or at least an interesting one, the wide range of factors and the web of their interactions make it difficult to sort out the effect of any one variable. Researchers would rather run experiments to see what works and what doesn’t, though parents and teachers aren’t so sanguine about their kids becoming lab rats.
But sometimes parents and teachers don’t decide, and that’s where economists Soohyung Lee and Muriel Niederle and Korean cultural researcher Namwook Kang got lucky. Middle school students in Seoul, South Korea, were randomly assigned to single-sex or co-educational schools, setting up a natural experiment for the researchers to analyze. Six hundred forty students, boys and girls from 21 single-sex and co-ed schools, took part in an experiment the team designed to test competitiveness.
First, students solved arithmetic problems—what’s 70+84+61+27+87, for example—earning 50 cents for each correct answer. That was followed by a tournament version, in which whoever solved the most problems earned two dollars for each correct answer. Crucially, the third task gave students a choice: do another tournament, in which case their performance would be compared with their competitors’ performances from the second task, or work on their own. Choosing the tournament, the team reasoned, was a sign of competitiveness. (After the fact, the researchers chose at random which round each subject would be paid for.)
After controlling for how well students had actually done on the first two tasks, Lee and colleagues found that single-sex schooling didn’t increase girls’ competitiveness—if anything, it magnified the gender gap. Though the difference was not statistically significant, girls in co-ed schools chose the tournament option eight and a half percent less often than boys, while girls in single-sex schools chose it 15 percent less often. Though the numbers changed slightly, the results held up even after controlling for confidence—how well boys and girls thought they’d done in the second-task tournament—and for risk aversion—whether students said they’d rather be paid for the second task as if they’d done it on their own.
The results contrast with observational studies, which have suggested that girls in single-sex schools are more competitive than their co-ed counterparts. The trouble with such studies is that the girls—or their parents—may have been more competitive to begin with, perhaps explaining how they ended up in single-sex schools in the first place.
“Girls are less competitive than boys,” the authors write this month in Economics Letters, and, “this gender gap can be reduced not by expanding single-sex schooling, but by altering parental inputs” to their children’s education.
Sexy women may turn heads, but for pro-social and charitable products, they won't change minds.
Imagine you walk into a liquor store, and a sexy woman is pouring sample shots of a new brand of whiskey. She urges you to buy a bottle, and you find that idea pretty appealing. Now, instead, imagine that that same woman is asking you to donate to charity.
Moment ruined, right?
That’s the gist of the results from a series of experiments conducted at universities in Hong Kong and Singapore, to be published in the December issue of the Journal of Consumer Research. Researchers Xiuping Li and Meng Zhang argue that when people have heightened physiological needs (when they see someone sexually attractive or when they are hungry), they feel less connected to other people, and thus, are less likely to care about others’ well-being, to share resources, or otherwise try to help.
Most of the experiments started the same way. Male participants were given a series of photos and asked to choose which one should be the cover of a new magazine. Some were given photos of sexy women (for a fashion magazine, ostensibly), and others were given photos to serve as a control condition—in some cases, natural landscapes (for a travel magazine), or regular-looking women (“the life edition of a magazine,” which required “an average person” on the cover). Some did not look at photos at all.
Directly afterwards, the participants were told they were completing a totally different experiment in which they had to make a series of decisions that tested their “psychological connectedness” to other people and their willingness to help others.
Among their findings, Li and Zhang report that men who looked at sexy pictures of women consistently rated themselves as less “connected” to their best friends, acquaintances, and even their future selves than men who looked at landscape pictures. The photos also appeared to influence decision-making—men who looked at sexy photos were less likely to donate money, to buy a T-shirt promoting a pro-social cause, and to value products described as beneficial to others (versus beneficial to themselves) than men who looked at neutral photos. Statistical analysis showed, at least in the case of buying a T-shirt, that reduced feelings of connectedness were the underlying reason that sexy photos led to uncharitable behavior.
In a different but related experiment, both male and female participants either entering or leaving a cafeteria rated how connected they felt to other people—as the researchers expected, those who had high physiological needs (in this case, hunger) reported feeling less connected to other people.
Li and Zhang write that these findings contribute to work on the “narrowing effect,” in which visceral factors (such as cravings, moods, and emotions) may “narrow the focus of attention, both toward the present over the future and toward the self over others.”
While there’s a time-worn tradition of advertisers using scantily clad women to attract consumers’ attention, a marketer for a pro-social cause might consider steering away from attractive models and sticking with the stranded polar bear.
A new study suggests that taxing carbon dioxide emissions can reduce greenhouse gases without significant negative effects on employment or revenue.
How can we reduce greenhouse gases without intrusive government regulations? Economists have long advocated taxing carbon dioxide emissions, and a new study by British researchers suggests the approach can actually work.
In 2001, the U.K. government introduced a new carbon tax amounting to 15 percent of a manufacturing plant’s energy bill. The legislation also, importantly, included a less onerous option under which facilities in certain energy-intensive industries would have the tax reduced by 80 percent in return for adopting “a specific target for energy consumption or carbon emissions.”
A research team led by Ralf Martin of Imperial College London examined energy usage at U.K. plants over the first three years of the plan, and found far greater reductions in electricity use and carbon dioxide emissions among those that were taxed at the higher rate. What’s more, the reduced emissions had no significant impact on employment, revenue, or overall productivity.
When it comes to cutting greenhouse gases, this study strongly suggests taxes are more effective than targets.
For more on the science of society, and to support our work, sign up for our free email newsletters and subscribe to our bimonthly magazine. Digital editions are available in the App Store (iPad) and on Google Play (Android) and Zinio (Android, iPad, PC/MAC, iPhone, and Win8).
Ticking off a category of things to do can feel like progress or a fun time coming to an end.
All good things must come to an end, while business meetings seem to go on forever, and we’re pretty much powerless to change either. Not so, according to new research: Our tendency to categorize things helps us savor what’s good and push through what’s not.
Imagine you’re working your way through a list of chores—you might have to dust some bookshelves, vacuum the rugs, and sweep the floors. Probably you’ll check off one group—one category—of chores at a time. If you dust one shelf, vacuum part of a rug, and sweep a bit over there, you’re still left with more dusting, more vacuuming, and more sweeping, making your to-do list feel interminable.
But maybe your list comprises the rivers, museums, and sights you want to see in Berlin. In that case, you might want the list to be interminable. Sail the Spree, the Havel, and the other rivers before hitting Museum Island, and that’s it. No more boat rides—just the inexorable march of time. But if you sail the Havel, hit the Neues Museum, and visit Mauerpark, there are still three categories to sample from—more rivers, more museums, and more sights.
That intuition turns out to be correct, according to Anuj Shah and Adam Alter. In a series of seven experiments, they show that the average person will try to finish one category of unpleasant tasks before moving on to the next, and will do the opposite for more enjoyable things, leaving more categories to sample from. Even if the number of things in those categories is the same, that makes it feel like the good times last a little longer.
In one of their studies, Shah and Alter asked 40 undergraduates to taste a series of chocolates from two different brands. In one round, they tasted six milk chocolates, three from each brand, and in another they tasted a series of six dark chocolates made with 99 percent cacao—a task the students generally considered unpleasant. Within each round, the students tasted two from one brand and one from the other and then made a choice: finish off the samples from the first company, or sample from the other, leaving one more chocolate from each brand.
When tasting milk chocolate, it was a toss-up—48 percent of the students chose to finish off the first company’s samples. That number jumped to 79 percent when they were trudging through the dark chocolate taste testing. The researchers argue that by getting one company out of the way in the dark chocolate testing, tasters felt as if they were making more progress, even though they had three bitter pills to go either way.
Businesses might want to take notice. Dentists, the authors write, might want to get the drilling and the shots out of the way up front before moving to the fun, or at least less unpleasant, stuff. Similarly, an amusement park planner may consider mixing rides and games—that way, visitors will naturally sample different categories rather than finishing up one first, which could make the fun seem more fleeting.
Elimination-style voting is harder to fiddle with than majority rule.
Americans might be surprised to learn that majority rule is a terrible way to decide most elections. Among other things, shrewd voters can manipulate majority rule by voting against their true preferences. Fortunately, computer scientists have figured out that there are better ways—among them, the approaches known as the Nanson and Baldwin methods.
There’s a universe of voting systems out there. Popular among social scientists is the Borda count, in which voters rank candidates, the ranks are converted to points, and the candidate with the best (lowest) point total wins. (Nanson and Baldwin are variations on Borda in which the Borda tallies are used to eliminate low-ranking candidates before a final tally is computed.) No one system is perfect, though, and nearly every one is susceptible to what’s known as manipulation or, more politely, strategic voting. That’s what you call it when someone votes for a second-choice candidate who stands a better chance of winning than her first choice. (The exceptions to this rule are dictatorships and systems that bar pre-selected candidates from winning, even if voters would want them to.) You don’t need to look far afield for examples: Strategic voting was a hot topic in the 2000 United States presidential election, when some viewed Ralph Nader as a spoiler candidate and urged liberals to vote for Al Gore instead.
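As a rough illustration, here’s a minimal Python sketch of the Borda count and Nanson’s elimination step—the ballot format and tie handling are assumptions for illustration, not details from the study:

```python
from statistics import mean

def borda_tallies(ballots, candidates):
    # Sum each candidate's rank positions (1 = first choice) across
    # ballots; in this convention, the lowest total is the best score.
    tallies = {c: 0 for c in candidates}
    for ballot in ballots:
        ranking = [c for c in ballot if c in candidates]
        for position, c in enumerate(ranking, start=1):
            tallies[c] += position
    return tallies

def nanson_winners(ballots, candidates):
    # Nanson's method: repeatedly drop every candidate whose Borda
    # tally is worse (higher) than the round's average, then re-tally.
    # Baldwin's variant instead drops only the single worst candidate.
    remaining = set(candidates)
    while len(remaining) > 1:
        tallies = borda_tallies(ballots, remaining)
        average = mean(tallies.values())
        losers = {c for c in remaining if tallies[c] > average}
        if not losers:  # all remaining candidates tied
            break
        remaining -= losers
    return remaining
```

With the ballots [A, B, C], [A, C, B], and [B, C, A], for instance, C is eliminated in the first round and A wins the runoff against B.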
Strategic voting isn’t obviously a bad thing, but a system like majority rule that’s susceptible to manipulation also encourages people to suppress their true political preferences. As a result, political philosophers say, elections and subsequent debates do little to generate fresh ideas. But if there’s no manipulation-proof system, what can we do?
As the saying goes, the perfect is the enemy of the good, so University of Toronto computer scientist Jessica Davies and her colleagues set out to discover what’s good—at least, whether the additional steps in the Nanson and Baldwin methods help make them harder to game than Borda.
The team proved that manipulating all three methods is so complex that, in the toughest cases, even verifying the efficiency of a proposed strategy would take an impractically long time. But the real world is rarely that tough, so they also tested several more practical algorithms for influencing elections. In one, a voter who knows others’ preferences ranks her favorite candidate first and ranks all the others opposite to the rest of the electorate—if enough people do that, it can cancel out the other voters’ choices. Although Borda was theoretically as hard to manipulate as the others, in practice it was very easy to influence: In a series of computer simulations, the team’s algorithms could manipulate Borda voting more than 99 percent of the time, compared with about 75 percent for Nanson and Baldwin voting.
Davies cautions that the results really only apply in a world in which potential manipulators know how everyone else has voted. Still, “I do think that they would be less likely to try to manipulate a Borda election,” she says in an email, “and after that, even less likely to try to manipulate elections where there are elimination rounds.”
Not literally, but debunkers and satirists do fuel conspiracy theorists' appetites.
You know who you are. Somebody posts some daft claim about chemtrails, faked moon landings, and a supposed connection between vaccines and autism. You step in, trying valiantly to show them the error of their ways.
Well, your plan won’t work. If anything, it’ll make things worse.
That’s the conclusion of a new study by a team of Italian computer scientists, physicists, and, yes, social scientists. They scoured data from Italian Facebook—acquired through the publicly available Graph API—that showed how users had interacted with Facebook pages devoted to science news, conspiracy theories, conspiracy debunkers, and satirists and trolls.
Sorting through 1.2 million users in all, the team first identified individuals who had used 95 percent of their likes on either science or conspiracy pages. Then, they turned to how often science and conspiracy aficionados liked, shared, and commented on posts from their favorite pages and how long they stuck around—that is, the time between their first and last posts on a page.
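That filtering step amounts to a simple threshold rule. Here’s a minimal sketch in Python, assuming a toy data layout of per-user like counts; the paper’s actual pipeline surely differs:

```python
def polarized_users(user_likes, threshold=0.95):
    # user_likes maps each user to like counts per page type,
    # e.g. {"science": 19, "conspiracy": 1}.
    groups = {"science": [], "conspiracy": []}
    for user, counts in user_likes.items():
        total = sum(counts.values())
        if total == 0:
            continue
        for camp in groups:
            if counts.get(camp, 0) / total >= threshold:
                groups[camp].append(user)
    return groups
```

A user with 19 science likes and one conspiracy like (95 percent) lands in the science group; a 50/50 user lands in neither.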
Generally speaking, fans of actual science news and fans of conspiracy theories were pretty similar. Each group posted about as often as the other, and they followed their preferred pages for about as long. In other words, they were about as enthusiastic in their beliefs—it’s just that one group’s ideas were demonstrably false.
So is there anything to be done about it? Basically, no—at least, the researchers found, there’s not much the average person can do on Facebook. Most of the time, beliefs drive consumption, so we tend not to pay much attention to information we don’t already agree with, and much of what we read or watch reinforces what we already believe.
Italian Facebook users who followed conspiracy pages were no exception. When the team looked at how conspiracy theorists reacted to counter-arguments or to trolls openly mocking them, they found it had little to no effect on the least engaged conspiracy theorists. Those who didn’t like, share, and comment very frequently stuck around on conspiracy pages for just as long, whether or not they’d been exposed to intentionally false information from trolls or logical counter-arguments. On the other hand, the most devoted conspiracy theorists reacted to that information by sticking around even longer than they would have otherwise.
In other words, you really are wasting your time trying to change their minds.
Psychologists find that 3-D doesn't have any extra emotional impact.
In theory, three-dimensional video heightens the emotional impact of a movie scene—imagine if that guy from My Bloody Valentine appeared to be swinging his pickaxe straight at you, or if The Polar Express swooped out of the screen and just narrowly avoided running over your popcorn.
Pretty exciting, huh? Perhaps, but psychologists—who are a bit more even-tempered about the latest technology and a bit less interested in your wallet, compared with movie producers—say otherwise in a recent study.
Though movie makers might well take note, Daniel Bride, Sheila Crowell, and five others at the University of Utah wanted to find out whether 3-D influences our emotions more than 2-D, in large part because it might affect their own research. Film clips, it turns out, are one of academic psychology’s main tools for eliciting emotions. If you want to study the effects of happiness on someone, show them some stand-up comedy. If you want to study fear, show them a horror movie. And if 3-D makes you feel more of those feelings, then psychologists need to know.
With 408 Utah undergrads as their subjects, the team showed each one five-minute clips from the most easily accessible library of movies that come in both 2-D and 3-D versions—namely, feature films, including My Bloody Valentine, Despicable Me, Tangled, and The Polar Express, each of which is available on Blu-ray and Blu-ray 3-D. To measure emotional responses in an unbiased way, the researchers hooked their subjects up to a series of electrodes and measured their heart rates, skin conductance—a gauge of minute changes in perspiration, often used to measure overall emotional intensity—and other physiological responses to the film scenes.
Fortunately for those whose job it is to manipulate emotions, 3-D’s extra production costs don’t buy much additional impact. As far as the physiological measures go, 2-D and 3-D produced the same emotional results, with one exception: The Polar Express managed to get more people going, at least as far as skin conductance was concerned. That may have been a chance result, though the authors point out that the 3-D effects in that clip were of higher quality, were more varied, and lasted longer overall, essentially filling the entire five minutes.
“The results should be encouraging for researchers who lack the resources to incorporate 3-D technology into their laboratory,” the authors write in PLoS One. While the subject pool—mostly young adults—and The Polar Express results highlight the need to replicate the study, “our results suggest that participants respond to the content and novelty of film more strongly than to the visual technology.” Chin up, Christopher Nolan.
A new model suggests looking beyond balance sheets, studying the network of investment as well.
How do you prevent a cascade of collapsing firms, banks, or seaside European states? A new model suggests taking a holistic approach, one that looks beyond balance sheets to understand the complex network of financial interactions at play.
At first glance, preventing those cascades isn’t so difficult. Step one: Increase diversification, the number of firms any particular entity owns shares in. Then, one firm’s failure isn’t enough to bring down others. Step two: Decrease integration—that is, the extent to which firms, rather than private investors, own each other. That way, firms are less exposed to each other’s foibles.
And that’s not necessarily bad advice—it’s just that it ignores the complex web of interconnections between financial organizations, argue economists Matthew Elliott, Benjamin Golub, and Matthew Jackson in a study forthcoming in the American Economic Review. While some fear that globalization and interdependency lead to financial danger, the network of interactions means “there’re two ways to go,” Golub says.
To see why, start with diversification. If businesses own shares in just a few others, they might be very sensitive to failures among those few, but the economy is not so interconnected that the failures spread very far. As diversification increases, connectivity grows, and, with it, the potential for cascading failures. It’s only at higher levels that the standard intuition works—there, firms’ investments are finally diversified enough that no one failure is likely to bring any other firm down.
In other words, whether more diversification is good depends on how diversified financial institutions already are. Integration works similarly. At low levels, bumping it up a bit makes potential cascades worse, but when organizations are highly integrated, it has a way of spreading out the consequences of collapsing, Golub says. “They all share the shock enough that they survive.”
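The contagion mechanism behind those cascades can be illustrated with a toy threshold model in Python. This is a deliberately simplified sketch, not the Elliott-Golub-Jackson cross-holdings model, and the parameters are invented for illustration:

```python
import random

def cascade_size(n=100, k=4, loss_tolerance=0.5, seed=0):
    # Each firm holds equal stakes in k randomly chosen others; a firm
    # fails once more than `loss_tolerance` of its counterparties fail.
    rng = random.Random(seed)
    holdings = {i: rng.sample([j for j in range(n) if j != i], k)
                for i in range(n)}
    failed = {0}  # a single initial failure
    changed = True
    while changed:
        changed = False
        for i in range(n):
            if i in failed:
                continue
            losses = sum(1 for j in holdings[i] if j in failed)
            if losses / k > loss_tolerance:
                failed.add(i)
                changed = True
    return len(failed)
```

With robust firms (a loss tolerance of one-half), the single failure stays contained; lower the tolerance and failures can propagate through the network.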
So what are central bankers and other economic decision makers to do? For one thing, it may not be a good idea to back off on their current policies. “It’s more complicated than, ‘This is bad, let’s undo it,’” Golub says.
“They have to take a more holistic view of things, and I think people realize that,” Jackson adds. To understand how the system works—and how to intervene when necessary—means mapping out the web of connections between organizations. “It provides a base that they can begin to work from.”
You may like to talk about how much happier you'd be if the government didn't interfere with your life, but that's not what the research shows.
Finding people who insist they’d be happier if the government would stay out of their lives is not difficult. But new research suggests those people may be fooling themselves.
Using data from surveys conducted in numerous nations between 1981 and 2007, a team led by Baylor University political scientist Patrick Flavin focused on the question: “All things considered, how satisfied are you with your life these days?” Respondents answered on a 10-point scale. Their ratings were then juxtaposed with four key indicators of government involvement in the economy, including the generosity of welfare benefits and the extent to which labor markets are regulated.
“Our results,” the researchers write, “firmly and robustly point to one conclusion: At least in the advanced industrial democracies in question, government intervention increases the likelihood that citizens find their lives to be satisfying.”
Their data suggests the impact of activist government on personal happiness is “quite substantial,” and benefits the rich and poor alike.
Perhaps enjoying life is easier when you know there’s a safety net.
Psychologists discover that we underestimate the value of looking back.
Don’t feel like you have the time to keep a diary or bury a time capsule? You might be missing out, according to psychologists at Harvard Business School: The joy of rediscovering something even a few months old is greater than you might think.
In case you weren’t aware, we’re pretty bad at predicting our future choices and emotions. Economists find over and over that we’ll choose to invest money as long as we make the choice well before we actually see the money: If you get it today, you’ll probably head for the mall. Meanwhile, we’re also fairly bad at predicting how we’ll respond emotionally to future events.
It follows, HBS graduate student Ting Zhang and her colleagues reasoned, that we might well underestimate the value of rediscovery—though that’s not where they got the idea.
“The project actually started from a realization I had as I was going through old family photos. Most of the photos we had were of extraordinary occasions, such as vacations, birthdays, and holidays,” Zhang writes in an email. “On the rare occasion we came across those photos, we had a lot of fun rediscovering the little things that reminded us of what life was like.” That led Zhang and her collaborators to wonder whether people might overlook the value of ordinary moments, she writes.
To find out, they asked 135 undergrads to make time capsules including recent photos, Facebook statuses, and—how’s this for mundane?—final exam questions. The students next rated how curious they’d be to see those glimpses of their recent past in a few months’ time, how interesting they’d find them, how surprising, and so on, using a seven-point scale. Generally, participants didn’t think they’d be particularly curious, interested, or surprised. Indeed, the 106 participants who followed up three months later weren’t very curious, interested, or surprised—but they were about nine percent more curious, eight percent more interested, and 14 percent more surprised than they’d thought they would be.
What’s more, people might come to regret those choices. Using Amazon Mechanical Turk, the team asked 81 people to choose between writing about a recent conversation they’d had or watching a video—though afterward everyone did both tasks—and later say whether they’d rather revisit what they’d written or watch another video. While only 27 percent chose the writing assignment and only 28 percent said they’d want to take a second look at it, a month later 58 percent chose to revisit what they’d written.
Ultimately, encouraging people to write down their experiences can make a real difference in a person’s day, the authors report online in Psychological Science. One participant, they write, took “incredible joy” in re-reading her description of mundane things she’d done with her daughter. “By recording ordinary moments today,” Zhang and her co-authors write, “one can make the present a ‘present’ for the future.”
A new study suggests a way to quantitatively measure a team’s style through its pass flow. It may become another metric used to evaluate potential recruits.
While baseball has the most storied quantitative tradition, plenty of other sports now use sophisticated statistical analysis in the pursuit of success. It’s the latest thing in hockey, and some particularly die-hard fans of professional cycling dug into the data in 2010 to see if changes in the Union Cycliste Internationale’s doping rules had an impact on that year’s Tour de France.
Soccer might be next if Laszlo Gyarmati, Haewoon Kwak, and Pablo Rodriguez get their way, though there are technical challenges. Discovering what leads to success in soccer is tricky business, they argued in research presented last month in a workshop at the Association for Computing Machinery’s Knowledge Discovery and Data Mining conference in New York. As plenty of Americans learned for the first time this past summer, soccer is not a high-scoring game, so much so that matches are frequently scoreless after 90 minutes of play. And although one person gets official credit for a goal, it’s usually a sequence of well-chosen, well-timed passes rather than a single player’s heroic charge that wins games.
That latter observation inspired Gyarmati and his colleagues’ methodological approach. Drawing on detailed, publicly available data, the trio looked for “flow motifs,” patterns of three passes between players that could be used to identify a team’s footballing style. The ABAB motif, for example, indicates that two players passed the ball back and forth a couple of times. ABCA, on the other hand, implies that one player passed to a second, who passed to a third, who finally passed back to the first.
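Counting those motifs is straightforward once you have a team’s sequence of ball touches. A minimal sketch in Python—the data format is assumed here, and the researchers’ definition may differ in details such as requiring uninterrupted possession:

```python
from collections import Counter

def motif(window):
    # Relabel players by order of first appearance: first -> "A", etc.,
    # so any back-and-forth between two players canonicalizes to ABAB.
    labels = {}
    letters = []
    for player in window:
        if player not in labels:
            labels[player] = chr(ord("A") + len(labels))
        letters.append(labels[player])
    return "".join(letters)

def count_motifs(touches, window_size=4):
    # Slide a window of four consecutive touches (three passes) over
    # the possession sequence and tally each canonical pattern.
    counts = Counter()
    for i in range(len(touches) - window_size + 1):
        counts[motif(touches[i:i + window_size])] += 1
    return counts
```

The sequence Xavi → Iniesta → Xavi → Iniesta → Messi, for example, yields one ABAB motif and one ABAC motif.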
In an age when every team can watch tape of every other team’s games, the researchers expected that teams would adopt similar strategies and similar styles of play, and to some extent that’s true. Across the Spanish first division, known as La Liga, nearly all teams used the various motifs in roughly, though not perfectly, equal proportions. While there were different styles—the researchers identified three other more or less distinct ones, including a style marked by heavier use of the ABCA motif, as played by Atletico Madrid and others—they were generally matters of degree.
The exception, they found, was the perennially successful FC Barcelona. Despite being known for its fast-paced “tiki-taka” style built on groups of three players passing to one another, it was actually back-and-forth passing—ABAB, ABAC, and so on—that characterized Barca’s play.
Though they’ve yet to analyze whether one style of play wins more matches, or how individual players contribute to a team’s style, the researchers write that their analysis could help football clubs select players who better fit within their larger line-ups. Top players, they note, don’t always work out, and statistics that help identify whether an individual can work well as part of a particular team could make a crucial difference.
Experimenters use text messages to study morality beyond the lab.
Psychologists at the University of Cologne in Germany and Tilburg University in the Netherlands were perhaps a bit frustrated by the often artificial nature of experiments on human morality. In an effort to collect more realistic data, they did what anyone would do these days: They texted.
Social science researchers often worry about what they call external validity. Sure, you can get people to do some pretty weird things in the lab—giving other subjects electric shocks, ignoring someone in need while on the way to give a sermon on the Good Samaritan, etc. And sometimes the experiments focus on abstract philosophical matters, like whether you’d flip a switch to save people on a runaway train if it meant killing a person walking the tracks. But do any of these experiments apply in the world outside the lab? Are they, in the vernacular, externally valid?
Wilhelm Hofmann and colleagues figured the best way to find out was to make some observations out in the real world, using what they call “ecological momentary assessment.” In plain English, they recruited 1,252 Canadian and American adults and texted them a link to a survey five times a day for three days. Randomly timed between 9 a.m. and 9 p.m., the surveys asked a series of basic questions: Had they done, been the victim of, witnessed, or learned about some moral or immoral act within the last hour? For each such event, participants were to describe what had happened and how they felt about it—for example, how happy they were, or whether they had a sense of purpose.
Some of what the team found was predictable, though there were some surprises. Liberals—the researchers had asked about political ideology as well as religion in a preliminary survey—were more likely to mention moral and immoral acts related to fairness and honesty, while conservatives were more likely to point out events related to loyalty, authority, and sanctity. Meanwhile, religious participants were no more likely to report having taken part in or otherwise experienced a positive moral action. And while they reported fewer immoral events, that was largely because they had learned about fewer such events from others.
The surveys also uncovered real-world evidence that experiencing and doing good affects one’s own actions later on, for better and for worse. Having done something good for someone else, they found, decreased the probability of doing good later in the day by about five percent and increased the probability of doing something bad by about four percent relative to the average person, a phenomenon known as moral licensing. Meanwhile, having someone do something good for you upped your chance of doing good by about 11 percent.
“By tracking people’s everyday moral experiences, we corroborated well-controlled artificial laboratory research, refined prior predictions, and made illuminating discoveries about how people experience and structure morality,” the authors conclude in the journal Science. The research could also inspire new models of what a good or bad life really looks like.
You might be doing it wrong.
Sex and lower back pain might be the perfect recipe for a screwball comedy, but both the pain and the fear of exacerbating it are very real downers for a couple’s sex life. Take heart, though: A new guide to sexual positions could help improve the mood.
Somewhere around four in five people will experience serious back pain at least once in their lifetimes, and a third or more of those report that pain affects their sex lives, says Natalie Sidorkewitz, a doctoral student at the University of Waterloo’s Spine Biomechanics Laboratory and lead author of a new study that takes a look at how men’s backs move during sex.
After working with Waterloo kinesiology professor Stuart McGill as an undergraduate, Sidorkewitz worked for several years treating patients. Over time, those patients began to ask about the troubles their back pain was causing in the sack. “That was the first point where I started thinking about it,” Sidorkewitz says, and soon she returned to Waterloo to pursue better options for those suffering lower back pain.
Specifically, better positions. Doctors and quite a few websites recommend having sex in a spooning or side-by-side position, but, Sidorkewitz and McGill write in Spine, that’s not actually a very good idea.
To figure out what might be a boon in the bedroom, Sidorkewitz and McGill mounted motion-capture sensors, similar to those used in special-effects and video game production, to the backs of 10 men. With those in place, the pair recorded the men and their female partners in the act, albeit in an experimentally controlled way. Each couple cycled through five positions—spooning, two varieties of missionary, and two varieties of what the researchers referred to as “quadruped,” in reference to the woman’s position during sex.
Men who find bending forward painful, the team found, might benefit from the quadruped pose, which typically leads to a fair amount of back arching. Missionary—particularly if the man supports himself on his hands—and spooning followed quadruped in ease. But the ranking reverses for men who have trouble arching their backs: Spooning is probably best, while quadruped might be a back killer.
Sidorkewitz says that the study does have some significant limitations. For one thing, the sexual positions they studied were all male-centric, meaning that the man was always on top—mainly because the sensors were on the man’s back and wouldn’t show up otherwise. For another, their published work so far has focused on ways to alleviate men’s back pain during sex.
Studies already in the works should address those concerns. They’ve already collected data on women, Sidorkewitz says, which points to quite different results from men. They’re looking at expanding the number of positions they study as well.
Even a mildly happy mood can make men overconfident in their abilities.
Hey guys, check it out: you’re being a tad overconfident. Watching Robin Williams do stand-up is not helping. Perhaps unsurprisingly, your women friends do not have this problem.
By the way, this is not good news for the economy.
This is the conclusion, more or less, of a new study by John Ifcher and Homa Zarghamee, researchers who’ve spent a fair amount of time thinking about how happiness might affect everyday economic decision making. They already knew that people are more confident in themselves than they probably should be, and that men are generally more so than women. They also knew that happy people tend to view themselves more positively than others and do better on quizzes and other tasks.
But, Ifcher and Zarghamee wondered, what happens if you give them a good reason—money, that is—to judge their abilities correctly? It’s not an idle question, given the possibility that a few extra happy Wall Street traders might translate into overconfident buying and selling of the kind that creates stock market bubbles.
To find out, the pair first had 107 Santa Clara University undergraduates, including 50 women, take a 30-minute quiz featuring 20 trivia questions—for example, “Who ruled Iraq before Saddam Hussein?”—and 10 arithmetic problems. Quiz takers then watched either tranquil scenes from Alaska’s Denali National Park or selections from Robin Williams Live on Broadway—the experimenters chose at random which one. (The experiments were conducted in 2010, well before the comedian’s untimely death.)
Finally, each participant estimated how well they’d done on the quiz, and they had an incentive to get it right: $5 for an on-the-nose estimate, $3 for getting it within three points, and $1 for getting it within six points of their true score. That way, being overconfident in your abilities meant that “you’re actually making yourself worse off,” Zarghamee says.
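The incentive scheme is simple enough to sketch. The tier boundaries below are our reading of “within three points” and “within six points,” not the study’s exact rule:

```python
def payout(estimate, true_score):
    """Incentive scheme as the article describes it: the closer the
    self-estimate to the true quiz score, the bigger the payment."""
    diff = abs(estimate - true_score)
    if diff == 0:
        return 5  # on-the-nose estimate
    if diff <= 3:
        return 3  # within three points
    if diff <= 6:
        return 1  # within six points
    return 0      # too far off to earn anything

# Overestimating by a few points literally costs money:
print([payout(guess, 20) for guess in (20, 22, 25, 27)])  # [5, 3, 1, 0]
```

Under a rule like this, an honest self-assessment is the profit-maximizing strategy, which is exactly what makes persistent overestimation informative.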
Surprising no one, men were more confident than women—men overestimated their scores by about four points on average, while women overestimated by about two points. But men were also more susceptible to a few minutes’ entertainment. After controlling for actual quiz scores, men in the stand-up comedy group overestimated their scores by about two more points than men in the tranquil-mountain-scene condition. But there was no statistically distinguishable effect for women—no effect at all, in fact.
Zarghamee says they can’t be sure why there’s a difference between men and women’s responses to being put in a positive mood, though, in conversations with other researchers, an interesting possibility emerged. “The extent to which men regulate their emotions is different than women,” she says. In other words, when men get excited about funny videos, they’re more likely to get excited about their test-taking abilities as well.
More importantly, the experiments suggest that “good times … can prop up the notion that we’re better at things than we really are,” Zarghamee says, and even expert decision makers are susceptible. If that’s right, good times could spell trouble for us all.
We get started faster when deadlines feel like they're in the present.
The hardest part of getting things done, it seems, is getting them started. If that sounds like you—or more to the point, the people you depend on—here’s a tip: Don’t let your deadlines drift into the future. Set them for right now.
To be clear, “right now” doesn’t mean right this second, but it does mean now, as in this month, this year, or whatever time-frame feels most like the present rather than the future.
Yanping Tu, a Ph.D. candidate at the University of Chicago’s Booth School of Business, and Dilip Soman reached that conclusion after conducting a series of experiments that manipulated the way that people thought about time—in particular, how they categorized a particular event as being in the present or in the future. Deadlines set in the present ought to encourage people to get going with their work, the researchers hypothesized, while setting them in the future—early next month, say, or early next year—could encourage procrastination.
To test that idea, Tu and Soman first recruited 295 farmers in rural India who had attended a lecture on financial literacy in June and July of 2010. After the lecture, researchers approached the farmers one-on-one and encouraged them to open a savings account and deposit 5,000 rupees in it. The account came with an additional incentive: a matching fund that would add 20 percent to whatever the farmers deposited, but only if the deposit occurred within six months. Remarkably, the farmers approached in June were four times as likely to open an account on the spot—32 percent did, compared with just eight percent of those contacted in July. They were also more than six times as likely to open and fund the account by the deadline: 28 percent of those contacted in June did so, compared with just 4.5 percent of those contacted in July.
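A quick back-of-the-envelope check of those ratios, using only the percentages quoted above (the comparison is of shares of farmers, not odds):

```python
# Shares reported in the article
june_opened, july_opened = 0.32, 0.08    # opened an account on the spot
june_funded, july_funded = 0.28, 0.045   # opened *and* funded by the deadline

print(june_opened / july_opened)   # four times as likely
print(june_funded / july_funded)   # a bit more than six times as likely
```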
The difference, Tu and Soman argue, is how the deadlines were framed: Six months from June 2010 is still 2010, but six months from July moves the end date into early 2011—in an abstract way, the future. They replicated those results using University of Toronto business students, whom they offered consulting work with deadlines either before or after a traditional formal student dinner. Those who had to complete the work before the dinner were much more likely to say they started sooner than those with a due date later in the year.
The implications are clear for professors, managers, and others who set deadlines, Tu explains. “They should definitely think about time categories,” she writes in an email. It isn’t hard to do—it could be as simple as setting a due date to “this Friday” rather than “next Monday” or color-coding a calendar to emphasize a connection between a project’s start and end dates.
A new study looks at the effects of access to a home computer on the test scores of middle school students.
Concerned that poor youngsters aren’t learning basic computer skills, some school districts have begun purchasing laptops and distributing them to every high school and middle school student.
A new study suggests the policy may be doing more harm than good. It finds public school students in North Carolina who gain their first regular access to a home computer between the fifth and eighth grades experience “a persistent decline in reading and math test scores.”
In their analysis of data from 2000 to 2005, economist Jacob Vigdor and his colleagues warn that, for disadvantaged youngsters, the positive impact of having access to online instruction “may be negated by counterproductive use of computers, particularly by students in unsupervised home environments.”
In other words, low-income 12- and 13-year-old latchkey kids who finally have their own laptops are playing games, watching videos—and neglecting their homework. It seems bridging the digital divide may actually widen the achievement gap.
The problem: Most American mothers don’t meet their breastfeeding goals. The solution: Well, there are many.
If there were a pill that could harness all the benefits breast milk bestows upon a child, it would basically be a miracle drug: Breastfeeding has the power to prevent heart disease, cancer, diabetes, obesity, multiple sclerosis, respiratory infections, ear infections, SIDS, allergies, and psychological maladies—and that’s just in the child. As for mothers, those who nurse babies are less prone to osteoporosis, obesity, anxiety, and several types of cancer.
The case for breastfeeding is so overwhelming, in fact, that the World Health Organization urges mothers around the world to exclusively breastfeed their babies for the first six months of life.
In the United States, at least, that is proving to be unrealistic. Only 25 percent of American moms are still breastfeeding six-month-old babies, down from 50 percent a decade ago. Even among women who aim to breastfeed for three months, 60 percent don’t make it that far.
In response, researchers at the Centers for Disease Control and Prevention set out to figure out why mothers are having difficulty meeting their breastfeeding goals. The problem, it turns out, is work. Women who return to their full-time jobs before their infant is three months old are much less likely to be able to keep breastfeeding, the study found.
The CDC researchers looked at data collected via questionnaire from almost 1,200 women who were employed while pregnant and intended to breastfeed for at least three months. Their average age was 29, they were predominantly white and married, and nearly half (48.6 percent) were college-educated.
The majority of them, facing employment commitments and financial obligations, had to switch their babies to formula, solid foods, and water to ease childcare as they returned to work before their baby’s six-month birthday. (Fifty-seven percent of U.S. mothers with infants younger than a year old work, most of them more than 35 hours per week.)
“The end of the boob,” as other studies have called it, also spells the end of the aforementioned bonanza of health benefits—creating a real burden on the American health care system.
The good news is that there are lots of solutions to this problem. National support for a longer-than-three-months maternity leave might mean that millions more babies could have access to breast milk (we can look to Scandinavia for inspiration). Employers, for their part, can support flexible scheduling or telecommuting so that women achieve their breastfeeding goals. And hospitals and health care providers could be more adamant about training new mothers how to breastfeed—yes, it’s a learned skill—which would allow more mothers to succeed at it.
“Support for a mother’s delayed return to paid employment, or return at part-time hours, may help more mothers achieve their breastfeeding intentions,” the authors conclude. “This may increase breastfeeding rates and have important public health implications for U.S. mothers and infants.”
Chelsea Hawkins contributed reporting.
A new analysis shows only three percent of surgical studies conducted on animals and cells include both male and female subjects.
The push to encourage women to enter STEM fields has become so ubiquitous that there are op-eds written seemingly every week, a dedicated page on the White House website, and even a line of interactive dolls. A lesser-known gender discrepancy in science, however, is the lack of female research subjects. Despite a 1993 law requiring women and minorities to be included as subjects in clinical research funded by the National Institutes of Health, women continue to be under-represented.
Of course, clinical trials (testing medical interventions on human subjects) are only a small subset of medical studies. Other types of research that inform health care practices are conducted on animals and cells, and those studies, researchers at Northwestern University say, are also vulnerable to sex bias.
In a new study published in September’s issue of Surgery, vascular surgeon Melina R. Kibbe and her colleagues examine studies from 2011 to 2012 from the top five general surgery journals. They found a total of 618 publications using animals and/or cells.
Of those studies, 32 percent did not report the sex of animals or cells at all. Among those that did, 80 percent of publications studied only males, 17 percent had only females, and three percent included both sexes.
Of that three percent (13 papers total), only seven separated their data by sex.
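The denominators are worth spelling out: the 80/17/3 split applies only to the papers that reported sex at all. A quick consistency check against the article’s figures:

```python
total_papers = 618
no_sex_reported = 0.32                                       # share that never stated sex
reported_sex = round(total_papers * (1 - no_sex_reported))   # ~420 papers did report it
both_sexes = round(reported_sex * 0.03)                      # 3% of those used both sexes

print(reported_sex, both_sexes)  # about 420 and 13, matching "13 papers total"
```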
Cell-based papers were far less likely to report the sex of their subjects than animal-based papers: 76 percent of cell studies omitted it, compared with 22 percent of animal studies. International papers were more likely to state the sex of subjects (80 percent) than American papers (53 percent), though international studies skewed even more heavily toward male-only research.
Why is there such a bias? Kibbe and her colleagues write that including both sexes for animal/cell subjects can be expensive. And, female subjects may add unwanted variability due to hormonal changes. Still, including female subjects is essential because a growing body of research shows that “women manifest, progress, and react differently than men” when it comes to disease response.
The results are especially alarming for female-prevalent disorders like thyroid and cardiovascular disease—only 12 percent of these studies included female subjects. While there is “robust and surmounting evidence” that women’s experience with heart disease differs from men’s, less than 25 percent of research on the topic includes consideration of sex. This has immediate, scary implications: “Even as mortality [from cardiovascular disease] has decreased in most counties in the United States from 1992 to 2006, female mortality increased in 42.8 percent of these counties.”
The researchers hope this study will be a wake-up call for the scientific community, including both researchers and practicing physicians, who are often naïve when it comes to gender differences in treatment plans. Since the completion of this study, the editors of each of the top five surgical journals have agreed to change their author guidelines to require indication of the sex of animals or cells used, and a written justification if researchers do not include both sexes.
“We believe that industry should set the example for sex equality in research,” the authors conclude. “This approach will impact positively the delivery of health care to both men and women.”
While math skills improve, proficiency in reading and writing remains the same.
The good news, according to a recent study, is that what works in charter schools also works in struggling public schools. The bad news? Charters’ best practices have little impact on reading and writing, and even if they did, they might not be scalable.
While the details differ from state to state and from program to program, charter schools all share a key trait: the freedom to try new ideas in the hope that they’ll close the achievement gap between wealthy, predominantly white schools and their poor, mostly black and Hispanic counterparts in rural America and the inner city.
In theory, their flexibility makes them ideal laboratories for discovering what works and what doesn’t, but few if any studies have actually tested whether charters’ most successful ideas could work in ordinary public schools. It’s not enough to see charters getting results—to really find out whether their strategies work, researchers need to do a randomized experiment in ordinary public schools.
Generally speaking, parents and school officials aren’t crazy about researchers performing randomized experiments on their kids’ education.
All the same, Harvard University economist Roland Fryer got an extraordinary opportunity to do just that. Taking advantage of aggressive Texas education laws, the Houston Independent School District allowed Fryer to take over strategy for 16 of their lowest-performing elementary schools. Fryer and company chose eight schools at random as a control group, and began implementing a series of reforms in the others. Fryer’s changes, based on work he’d done with Princeton economist Will Dobbie, ranged from the relatively uncontroversial—like increasing instructional time by 21 percent, for example, and upping time spent with tutors—to the drastic: In consultation with HISD officials, Fryer replaced nearly every principal in the test schools along with more than a third of the teaching staff.
Those interventions worked, up to a point. Compared with those in the control group, students in the test group improved their test scores in math enough that they could close the racial achievement gap in about three years if they kept up the pace. Meanwhile, the interventions had essentially no effect on the achievement gap in reading. Non-randomized studies in Denver and Chicago public schools, along with Houston secondary schools, backed that up: The reforms work for math, but not for reading.
Beyond the reading challenge, Fryer writes that nationwide reform may not be practical. Compared with other states, Texas law makes it much easier to shake things up in schools where test scores aren’t up to snuff. Even where reforms are possible, they’re expensive: Annual tutoring costs alone were about $2,500 per student. But the biggest challenge may be finding teachers and principals to work in inner-city schools. Much of the new staff HISD hired came from other schools, Fryer writes, and it took 300 interviews to hire 19 principals.
While the average contribution increases, the number of donors falls.
You would think that offering a crowdfunding website’s contributors more privacy options earlier in the process would increase trust and bring in more money.
You would be wrong.
While giving visitors the option to hide their names and how much they’re about to give before they seal the deal does indeed increase the average individual contribution, it also decreases the number of people who actually go through with one. The result is a net loss of $3.55 per visitor, according to a forthcoming paper in the journal Management Science.
As visibility and traceability increase online, so does the demand for more privacy, and websites of all stripes are responding, says Gordon Burtch, lead author of the study and a professor of information and decision sciences at the University of Minnesota’s Carlson School of Management. Facebook, for example, now gives users the ability to control exactly who sees any given post, and crowdfunding sites allow users varying degrees of control over their anonymity.
From the point of view of musicians, entrepreneurs, or potato-salad makers, Burtch says, giving potential supporters more control up front makes sense—they’d feel more secure and therefore be more comfortable giving. On the other hand, seeing those options before they decide on how much to give might prompt some users to think harder about online privacy fears, perhaps to the point that they don’t contribute anything at all.
To find out, Burtch and colleagues Anindya Ghose and Sunil Wattal went to “one of the world’s largest online crowdfunding platforms,” as they describe it in their paper, and proposed a simple experiment. They would give each of the website’s actual visitors the same privacy options—whether to post their names and whether to post their contributions—but choose at random whether each visitor saw those options before or after they’d finished making the donation.
The researchers got the go-ahead, and as expected, they found a privacy effect: About five percent more people gave when they had to pay first and select privacy options later. But the authors also found what they termed a publicity effect: When users saw the privacy options last, those who went through with a contribution gave $5.81 less on average, the net result of fewer very large or very small (but still non-zero) amounts. Still, the larger donor base was enough to make the “pay first, privacy later” formula worth it: Despite smaller individual contributions, the average earnings per visitor—counting both those who gave money and those who didn’t—went up by $3.55.
“There is demand” for more privacy online, Burtch says, but the results point to a kind of reverse psychology. We want control over our information online, but actually having the choice stokes fears of identity theft or, more mundanely, being exposed as a cheapskate or a wastrel—enough that we end up giving less.
Researchers are much less likely to report null results, and that’s not a good thing.
Science’s failures are sometimes just as important as its successes, especially when “failure” means failing to replicate a well-known result or failing to find evidence for an important hypothesis. But many of these so-called null results are going unreported, largely because researchers never bother to write them up and get them published.
The situation is a bit like finding Snoopy in the clouds or Jesus in a watermelon. In reality, those are the result of random chance and our remarkable ability to find patterns where there are none. But imagine if most of us never saw plain old fluffy clouds or secular melons. We’d be forgiven for thinking that nature was dominated by comic strip characters and prophets. Likewise, if 20 teams test a hypothesis, 19 find null results, but only one positive result gets published while the others go unnoticed, a statistical fluke starts looking like a real effect.
“It’s important that the community is aware of the nulls,” says Neil Malhotra, co-author of a new study on publication bias and a professor at Stanford’s Graduate School of Business. Even if the results aren’t published in top journals, science benefits from the context that null results provide, he says.
So how big a problem is this? In the past, studies demonstrating a bias against publishing null results were largely indirect. For example, scholars know there’s an overabundance of published papers that just barely meet certain statistical criteria. While that’s consistent with publication bias, there could be other explanations.
To search for direct evidence, Stanford political science graduate students Annie Franco and Gabor Simonovits worked with Malhotra to analyze projects run through the National Science Foundation’s Timeshare Experiments in the Social Sciences, or TESS. Since TESS keeps track of the experiments it runs, the team could track down whether they yielded positive or null results and whether they had been published, written but never published, or never written up at all.
Publication bias is real, the researchers found, but more importantly the source of that bias seems to be researchers themselves. Of 47 null results in the TESS data, 29 were never put down on paper, let alone submitted to a journal. Only 12 out of 170 positive results met the same fate. But when researchers submitted null results for review, they were just as likely to be published as statistically significant ones.
“The published literature is much more likely to contain significant findings,” the authors wrote last week in Science. “Yet null findings are not being summarily rejected, even at top-tier outlets.” Researchers simply aren’t submitting them, they write.
Malhotra says he hopes the study leads more researchers to at least write up their null results so that other scientists are aware of them. Still, he says, it will probably take institutional changes, such as new journals that encourage publishing null findings, before scientists are comfortable reporting those results.
Young men who take abstinence pledges have trouble adjusting to sexual norms when they become husbands.
In 2008, 15 evangelical Christian men took an abstinence pledge. To cement their commitment, the unmarried men in their late teens and early twenties attended a weekly support group. So did Sarah Diefendorf, a University of Washington sociologist. Four years later, Diefendorf reunited with these same guys—14 of them are husbands now—and interviewed them to find out whether marriage had changed their views on sexuality. It hadn’t.
At the support groups, the men agreed that sex is a sacred gift from God that needs to be controlled; that pornography, masturbation, and homosexuality are dangerous; and that premarital sex is “beastly.” They had “accountability partners” who sent them text messages (“Are you behaving?”), tracked their Internet search histories, and took other measures to remind them not to stray.
When they got married, the men no longer met with their group. According to the church, they no longer needed the support and, as one participant said, “Having a wife acts as its own accountability.” But they were still tempted by things like pornography and extra-marital sex.
They also found it difficult to talk about sex with anyone—including their wives—and had an immature understanding of their own sexuality. “The men I interviewed still think of sex as something that needs to be controlled in married life—and something that they no longer have the tools to control,” Diefendorf says.
“While sex is framed as ‘sacred,’ ‘wonderful,’ and a ‘gift from God’ post-marriage,” the paper explains, “these married men still think of sex in its ‘beastly’ terms. In focusing solely on the goal of abstinence until marriage, conversations on healthy sexuality within marriage were never part of the discussion.”
The study, which is still under peer review (Diefendorf has already presented it to the American Sociological Association), also found that men who abstain are more likely to think women lack sexuality, and to believe that Christian women never talk about sex (evangelical Christian women do talk about sex, the study clarifies, just not in mixed company).
The key problem, Diefendorf says, is that “abstinence-only education is pervasive in the U.S., and works off of a shame-based, non-evidence-based model that does not provide individuals with the tools to understand sex as something that is healthy when they are ready for it.”
“It’s fine to wait until marriage to have sex if that’s what you want,” Diefendorf says, “but how can we encourage people to make the decision that is best for them while simultaneously providing accurate, positive information about sex and sexuality? The answer is in much more comprehensive, inclusive, accessible sex education.”
Chelsea Hawkins contributed reporting.
We can determine trustworthiness even when we’re only subliminally aware of the other person.
Say you’re in the supermarket parking lot, holding your infant and bags of groceries while fumbling to open your car door. A stranger walks up and says, “Here, let me hold your baby.” Should you let him?
According to a new New York University study, knowing whether or not to trust someone is so critically important that we can tell whether a face is trustworthy before we even consciously know it’s there.
The NYU researchers knew from previous studies that people are fairly similar when it comes to how they judge a face’s trustworthiness. They wanted to find out whether that would hold true if people only saw a face for a quick moment—an amount of time so short, in fact, that it would prevent making a conscious assessment.
They found that the “human amygdala is automatically responsive to a face’s trustworthiness in the absence of perceptual awareness.” In short, we don’t have to think logically about whether we should trust someone—our brains know the instant we encounter them.
To carry out their study, the researchers monitored the amygdalae of 37 volunteers (28 female) ages 18 to 35 while showing them 300 faces for 33 milliseconds each. Those faces had already been shown, for much longer, to a separate set of 10 subjects, who rated how trustworthy each one looked. In those earlier tests, opinions about whom to trust were largely uniform. Faces with “higher inner eyebrows and pronounced cheekbones,” the paper explains, “are seen as trustworthy and lower inner eyebrows and shallower cheekbones are seen as untrustworthy.” Other studies suggest that we can detect trustworthiness in someone else’s face thanks to subtle cues like the amount of white showing in the eyes.
After the new subjects “saw” each face for 33 milliseconds, the image was replaced by “a neutral face mask for 167 milliseconds that disrupted further visual processing of the target.” Effectively, the researchers were letting their subjects see each face only subliminally.
Fascinatingly, different parts of the amygdala lit up depending on whether a subject saw an untrustworthy or a trustworthy face—and activity was stronger when the face in question looked suspicious.
“Faces that appear more untrustworthy and likely to inflict harm,” says Jon Freeman, the study’s senior author, “are spontaneously tracked by the amygdala, so the amygdala could then quickly alter other brain processes and coordinate fast, appropriate responses to people—approach or avoid.”
His research suggests that the amygdala’s role in instantly interpreting social cues is more important than previously thought, which makes sense when you consider human history. Aggression and conflict, the study says, “have had a substantial impact on human evolution…. Automatic evaluation of another’s likelihood to harm or help via facial trustworthiness would facilitate survival.”
Our talents for making snap judgments “could either be hard-wired or learned from the social environment,” Freeman says. “Our results can’t speak to that issue.”
So should you trust the guy in the parking lot? Your brain already knows.
Rosie Spinks contributed reporting.
While young migrant workers struggle under poor working conditions, U.S. policy has done little to help.
In the United States, we like to congratulate ourselves on being a developed country. But according to a new study, plenty of young people within our borders labor “for up to 11 hours per day, six days per week, and report a median of $350 in wages per week.”
“My research is motivated by the painful experiences of poverty, violence, discrimination, and isolation that unaccompanied Central American young adults face,” says Stephanie Lynnette Canizales, a sociologist at the University of Southern California.
Recently, she conducted nearly 200 hours of group interviews in Los Angeles, plus 15 in-depth conversations with “unauthorized youth who came to the U.S. as unaccompanied minors in order to support their families abroad.” The people she spoke to mostly do garment work, service and domestic jobs, construction, and maintenance; range in age from 18 to 35; and are all from indigenous Mayan villages in Guatemala. They had lived in the U.S. between four and 19 years, and most didn’t have more than a fourth-grade education.
Canizales framed the questions in a way that would help her better understand the experiences of migrant youth working in the U.S., with the hope that she could influence future policy decisions that might affect them. After analyzing the interview data, she concluded that at-risk migrant youth are all but ignored when it comes to U.S. policymaking.
There are five million undocumented youth in the U.S. (a fifth of them live in California), and the majority of them—62 percent—are overlooked by both the educational and military-service systems, according to Canizales’ study. “What hit me hardest,” she says, “and poses the greatest challenge in doing this work, is that these young people have no one to turn to for support, guidance, or to shoulder the burden.”
That burden, Canizales learned, includes depression, stress, and anxiety due to poor working conditions and the pressure to provide for their families back home. Interview subjects also said they were unable to pursue professional training or education, let alone leisure or community opportunities—a reflection, Canizales says, of “what is lost when unauthorized youth learn to be illegal.”
To address the constrained lives of these undocumented youth, Canizales suggests ideas like “financial literacy courses, workers’ rights education, health clinics, and support groups,” things that “might serve to bolster these youths’ health, well-being, and sense of inclusion.”
The current policies meant to serve them, such as the Deferred Action for Childhood Arrivals program, are based, she says, on a “narrow understanding of undocumented working youth.” What’s really needed, Canizales concludes in the study, is immigration reform that addresses earnings and hours, and “eliminates the fear of being fired or deported for reporting unfair treatment and work conditions.”
“These children leave their homes hoping to escape poverty and violence,” Canizales says, “but face conditions no different in the U.S. And any aspiration for the future is stifled by the immediate need to make ends meet. It’s a vicious cycle.”
Rosie Spinks contributed reporting.
When it comes to educational access, young Syrian refugees are becoming a “lost generation.”
War is perhaps humanity’s most disruptive force, and among the first things it disrupts is a young person’s education. That’s happening on a mass scale right now in Syria, as well as across the diaspora of Syrian refugees, according to a recent study out of the University of California-Davis and the Institute of International Education.
Almost three million Syrians have been displaced to other countries, and Lebanon has become the top destination for those trying to escape the mayhem, which so far has killed more than 190,000 people. While there’s a perception that refugees come from impoverished parts of society, it’s not just poor, rural Syrians who are fleeing their homeland’s civil war: “Middle-class families have joined the exodus as well,” the study explains, “including a high number of 14- to 24-year-olds whose education and training have been disrupted by the war.”
The researchers wanted to gauge how three years of violent conflict is affecting young Syrian refugees in terms of a lost education. To find out, they conducted focus groups with 75 college-age Syrian refugees in Lebanon, including enrolled students, as well as those hoping to enroll. The study’s subjects were in three of Lebanon’s major areas—Beirut, Tripoli, and Biqa’a Valley—and represented Syria’s ethnic and religious diversity. The team also interviewed Lebanese university administrators and policymakers.
After assessing the data from their interviews, the researchers concluded that Syrians between the ages of 18 and 24 are at highest risk of losing access to education. They also found that the vast majority of Syrian youth in Lebanon are not pursuing higher education at all. And for those who are, many are facing serious threats, including poverty, unemployment, language barriers, stigmatization, fears of violence, and feelings of isolation. These problems also affect older Syrian scholars, who “are unable to secure academic work at Lebanese universities without external support from international organizations.”
Encouragingly, the study also profiled non-profits working to improve Syrian students’ prospects, like the Lebanese Association for Scientific Research’s scholarship program for Syrian refugees. These projects succeed on a local level, the study found, because they can respond to problems nimbly and creatively. Still, the authors note, larger-scale support is desperately needed, especially in the form of more merit- and need-based scholarships for Syrian refugees who want to go to college.
It’s worth noting that being displaced by war may be more likely to ruin a woman’s chance at an education than a man’s: “Despite pre-conflict Syria’s rough gender parity between females and male attendance at universities,” the authors write, “male enrollment in Lebanon post conflict appears to stand at a much higher rate.” This might be in part because young men “indicated that they were pursuing a university education because it grants them the added benefit of deferment. They would otherwise be required to serve in the Syrian military.”
Whatever the motivation is for Syrians to go to college abroad, the paper’s authors argue, it’s important that they have the opportunity to do so.
Rosie Spinks contributed reporting.