
Genes Are Us

Why Statistically Significant Studies Aren’t Necessarily Significant

June 6, 2014 • 9:52 AM

(Photo: bloomua/Shutterstock)

Modern statistics have made it easier than ever for us to fool ourselves.

Scientific results often defy common sense. Sometimes this is because science deals with phenomena that occur on scales we don’t experience directly, like evolution over billions of years or molecules that span billionths of meters. Even when it comes to things that happen on scales we’re familiar with, scientists often draw counter-intuitive conclusions from subtle patterns in the data. Because these patterns are not obvious, researchers rely on statistics to distinguish the signal from the noise. Without the aid of statistics, it would be difficult to convincingly show that smoking causes cancer, that drugged bees can still find their way home, that hurricanes with female names are deadlier than ones with male names, or that some people have a precognitive sense for porn.

OK, very few scientists accept the existence of precognition. But Cornell psychologist Daryl Bem’s widely reported porn precognition study illustrates the thorny relationship between science, statistics, and common sense. While many criticisms were leveled against Bem’s study, in the end it became clear that the study did not suffer from an obvious killer flaw. If it hadn’t dealt with the paranormal, it’s unlikely that Bem’s work would have drawn much criticism. As one psychologist put it after explaining how the study went wrong, “I think Bem’s actually been relatively careful. The thing to remember is that this type of fudging isn’t unusual; to the contrary, it’s rampant—everyone does it. And that’s because it’s very difficult, and often outright impossible, to avoid.”


That you can lie with statistics is well known; what is less commonly appreciated is how much scientists still struggle to define proper statistical procedures for handling the noisy data we collect in the real world. In an exchange published last month in the Proceedings of the National Academy of Sciences, statisticians argued over how to address the problem of false positives: statistically significant findings that do not hold up under further investigation. Non-reproducible results are a growing concern in science; do researchers need to change their approach to statistics?

Valen Johnson, at Texas A&M University, argued that the commonly used threshold for statistical significance isn’t as stringent as scientists think it is, and therefore researchers should adopt a tighter threshold to better filter out spurious results. In reply, statisticians Andrew Gelman and Christian Robert argued that tighter thresholds won’t solve the problem; they simply “dodge the essential nature of any such rule, which is that it expresses a tradeoff between the risks of publishing misleading results and of important results being left unpublished.” The acceptable level of statistical significance should vary with the nature of the study. Another team of statisticians raised a similar point, arguing that a more stringent significance threshold would exacerbate the worrying publishing bias against negative results. Ultimately, good statistical decision making “depends on the magnitude of effects, the plausibility of scientific explanations of the mechanism, and the reproducibility of the findings by others.”
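The tradeoff Gelman and Robert describe can be made concrete with a short simulation. The sketch below (plain Python, with invented sample sizes and effect sizes, using a normal approximation rather than a proper t-test) compares a loose and a tight significance threshold on the same simulated experiments: tightening the threshold from 0.05 to 0.005 does filter out most false positives, but it also leaves many real effects undetected.

```python
import random
import math

def two_sample_p(n, effect, rng):
    """Approximate two-sided p-value for a two-sample comparison.

    Uses a normal approximation for the difference of sample means,
    which is reasonable for n around 30 or more (both groups have sd = 1).
    """
    a = [rng.gauss(0, 1) for _ in range(n)]
    b = [rng.gauss(effect, 1) for _ in range(n)]
    mean_diff = sum(b) / n - sum(a) / n
    se = math.sqrt(2 / n)  # standard error of the mean difference
    z = abs(mean_diff / se)
    # two-sided p-value from the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

rng = random.Random(0)
trials = 2000
null_ps = [two_sample_p(30, 0.0, rng) for _ in range(trials)]  # no real effect
real_ps = [two_sample_p(30, 0.5, rng) for _ in range(trials)]  # a modest real effect
for alpha in (0.05, 0.005):
    false_pos = sum(p < alpha for p in null_ps) / trials
    power = sum(p < alpha for p in real_ps) / trials
    print(f"alpha={alpha}: false positive rate ~{false_pos:.3f}, power ~{power:.3f}")
```

With these made-up numbers, the stricter threshold cuts the false positive rate roughly tenfold, but also cuts the chance of detecting the real effect by more than half — which is exactly the tradeoff the statisticians argue cannot be dodged by any fixed rule.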

Arguments over statistics arise because it is not always obvious how to make good statistical decisions. Some bad decisions are easy to spot. As xkcd’s Randall Munroe illustrated in his comic on the spurious link between green jelly beans and acne, most people understand that if you keep testing slightly different versions of a hypothesis on the same set of data, sooner or later you’re likely to get a statistically significant result by chance alone. This kind of statistical malpractice is called fishing, or p-hacking, and most scientists know how to avoid it.
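Munroe’s jelly-bean scenario is easy to reproduce numerically. In this sketch (plain Python, with hypothetical sample sizes), each “study” tests 20 flavors against acne when no flavor has any real effect; with a 0.05 threshold applied per flavor, roughly two studies in three still find at least one “significant” flavor, close to the theoretical 1 − 0.95²⁰ ≈ 64 percent.

```python
import random
import math

def null_flavor_p(n, rng):
    """p-value for one flavor-vs.-acne comparison where the true effect is zero."""
    eaters = [rng.gauss(0, 1) for _ in range(n)]
    controls = [rng.gauss(0, 1) for _ in range(n)]
    z = (sum(eaters) - sum(controls)) / n / math.sqrt(2 / n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

rng = random.Random(0)
studies, flavors = 1000, 20
# Count null studies in which at least one of the 20 flavors comes out "significant"
lucky = sum(
    any(null_flavor_p(30, rng) < 0.05 for _ in range(flavors))
    for _ in range(studies)
)
print(f"Fraction of null studies with a 'significant' flavor: {lucky / studies:.2f}")
print(f"Theoretical expectation: {1 - 0.95 ** flavors:.2f}")
```

Each individual test behaves perfectly well; it is only the repeated testing that manufactures the headline result.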

But there are more subtle forms of the problem that pervade the scientific literature. In an unpublished paper (PDF), statisticians Andrew Gelman, at Columbia University, and Eric Loken, at Penn State, argue that researchers who deliberately avoid p-hacking still unknowingly engage in a similar practice. The problem is that one scientific hypothesis can be translated into many different statistical hypotheses, with many chances for a spuriously significant result. After looking at their data, researchers decide which statistical hypothesis to test, but that decision is skewed by the data itself.

To see how this might happen, imagine a study designed to test the idea that green jelly beans cause acne. There are many ways the results could come out statistically significant in favor of the researchers’ hypothesis. Green jelly beans could cause acne in men but not in women, or in women but not in men. The results may be statistically significant if the jelly beans you call “green” include Lemon Lime, Kiwi, and Margarita but not Sour Apple. Gelman and Loken write that “researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.” In the end, the researchers may explicitly test only one or a few statistical hypotheses, but their decision-making process has already biased them toward the hypotheses most likely to be supported by their data. The result is “a sort of machine for producing and publicizing random patterns.”
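Gelman and Loken’s point, that the choice of test (not just the number of tests) can be data-dependent, can also be simulated. In the sketch below (plain Python, with invented numbers), only one subgroup test is ever formally run per study. But because the researcher looks at the data first and tests the subgroup with the larger apparent effect, the false positive rate roughly doubles from the nominal 5 percent, even though no explicit p-hacking occurred.

```python
import random
import math

def null_subgroup(n, rng):
    """Return (|z|, p) for a jelly-bean comparison in one subgroup; true effect is zero."""
    diff = (sum(rng.gauss(0, 1) for _ in range(n))
            - sum(rng.gauss(0, 1) for _ in range(n))) / n
    z = abs(diff / math.sqrt(2 / n))
    return z, 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

rng = random.Random(0)
trials = 4000
hits = 0
for _ in range(trials):
    men = null_subgroup(25, rng)
    women = null_subgroup(25, rng)
    # The researcher "looks at the data," then formally tests only the
    # subgroup with the larger apparent effect (larger |z| wins the max).
    _, p = max(men, women)
    if p < 0.05:
        hits += 1
print(f"One reported test per study, nominal alpha 0.05; actual rate ~{hits / trials:.3f}")
```

The reported analysis looks clean — one hypothesis, one p-value — yet across many such studies the error rate is that of the best-looking of several possible tests, which is the “machine for producing random patterns” in miniature.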

Gelman and Loken are not alone in their concern. Last year Daniele Fanelli, at the University of Edinburgh, and John Ioannidis, at Stanford University, reported that many U.S. studies, particularly in the social sciences, may overestimate the effect sizes of their results. As they write, “All scientists have to make choices throughout a research project, from formulating the question to submitting results for publication.” These choices can be swayed “consciously or unconsciously, by scientists’ own beliefs, expectations, and wishes, and the most basic scientific desire is that of producing an important research finding.”

What is the solution? Part of the answer is to not let measures of statistical significance override our common sense—not our naïve common sense, but our scientifically informed common sense. We shouldn’t put much stock in one statistically significant precognition result that defies everything we know about the physical world. Studies with small, unrepresentative samples can be valuable, but we should treat them cautiously until they are replicated with other samples. As Gelman and Loken put it, without modern statistics most people would not believe a remarkable claim about general human behavior “based on two survey questions asked to 100 volunteers on the internet and 24 college students. But with the p-value, a result can be declared significant and deemed worth publishing in a leading journal in psychology.”

Michael White
Michael White is a systems biologist at the Department of Genetics and the Center for Genome Sciences and Systems Biology at the Washington University School of Medicine in St. Louis, where he studies how DNA encodes information for gene regulation. He co-founded the online science pub The Finch and Pea. Follow him on Twitter @genologos.


Copyright © 2014 by Pacific Standard and The Miller-McCune Center for Research, Media, and Public Policy. All Rights Reserved.