
Why Statistically Significant Studies Aren’t Necessarily Significant

June 06, 2014 • 9:52 AM

(Photo: bloomua/Shutterstock)

Modern statistics have made it easier than ever for us to fool ourselves.

Scientific results often defy common sense. Sometimes this is because science deals with phenomena that occur on scales we don’t experience directly, like evolution over billions of years or molecules that span billionths of a meter. Even when it comes to things that happen on scales we’re familiar with, scientists often draw counter-intuitive conclusions from subtle patterns in the data. Because these patterns are not obvious, researchers rely on statistics to distinguish the signal from the noise. Without the aid of statistics, it would be difficult to convincingly show that smoking causes cancer, that drugged bees can still find their way home, that hurricanes with female names are deadlier than ones with male names, or that some people have a precognitive sense for porn.

OK, very few scientists accept the existence of precognition. But Cornell psychologist Daryl Bem’s widely reported porn precognition study illustrates the thorny relationship between science, statistics, and common sense. While many criticisms were leveled against Bem’s study, in the end it became clear that the study did not suffer from an obvious killer flaw. If it hadn’t dealt with the paranormal, it’s unlikely that Bem’s work would have drawn much criticism. As one psychologist put it after explaining how the study went wrong, “I think Bem’s actually been relatively careful. The thing to remember is that this type of fudging isn’t unusual; to the contrary, it’s rampant–everyone does it. And that’s because it’s very difficult, and often outright impossible, to avoid.”

That you can lie with statistics is well known; what is less commonly noted is how much scientists still struggle to define proper statistical procedures for handling the noisy data we collect in the real world. In an exchange published last month in the Proceedings of the National Academy of Sciences, statisticians argued over how to address the problem of false positive results: statistically significant findings that, on further investigation, don’t hold up. Non-reproducible results are a growing concern in science. Do researchers need to change their approach to statistics?

Valen Johnson, at Texas A&M University, argued that the commonly used threshold for statistical significance isn’t as stringent as scientists think it is, and therefore researchers should adopt a tighter threshold to better filter out spurious results. In reply, statisticians Andrew Gelman and Christian Robert argued that tighter thresholds won’t solve the problem; they simply “dodge the essential nature of any such rule, which is that it expresses a tradeoff between the risks of publishing misleading results and of important results being left unpublished.” The acceptable level of statistical significance should vary with the nature of the study. Another team of statisticians raised a similar point, arguing that a more stringent significance threshold would exacerbate the worrying publishing bias against negative results. Ultimately, good statistical decision making “depends on the magnitude of effects, the plausibility of scientific explanations of the mechanism, and the reproducibility of the findings by others.”
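
A back-of-the-envelope calculation shows why a 0.05 threshold is less stringent than it sounds, and what tightening it trades away. The sketch below uses assumed numbers for illustration (a 10 percent base rate of true effects and 80 percent power; these figures are not from the PNAS exchange):

```python
# Illustrative false-discovery arithmetic with assumed inputs: suppose
# only 10% of tested hypotheses describe real effects, and studies have
# 80% power to detect an effect when one exists.
def false_discovery_rate(prior_true, power, alpha):
    true_hits = prior_true * power           # real effects that reach significance
    false_hits = (1 - prior_true) * alpha    # null effects that reach it by chance
    return false_hits / (true_hits + false_hits)

for alpha in (0.05, 0.005):
    fdr = false_discovery_rate(prior_true=0.10, power=0.80, alpha=alpha)
    print(f"alpha = {alpha}: {fdr:.0%} of significant findings are false positives")

# alpha = 0.05: 36% of significant findings are false positives
# alpha = 0.005: 5% of significant findings are false positives
# (Power is held fixed here for simplicity; in practice a stricter
# threshold also means more true effects go undetected and unpublished,
# the tradeoff Gelman and Robert describe.)
```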

However, arguments over statistics usually occur because it is not always obvious how to make good statistical decisions. Some bad decisions are clear. As xkcd’s Randall Munroe illustrated in his comic on the spurious link between green jelly beans and acne, most people understand that if you keep testing slightly different versions of a hypothesis on the same set of data, sooner or later you’re likely to get a statistically significant result just by chance. This kind of statistical malpractice is called fishing or p-hacking, and most scientists know how to avoid it.
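
The comic’s arithmetic is easy to verify with a quick simulation. Here is a minimal sketch (the 20 colors, sample sizes, and acne scores are invented for illustration): when no color has any real effect, a 0.05 threshold still flags at least one of the 20 comparisons roughly 64 percent of the time, since 1 − 0.95^20 ≈ 0.64.

```python
# Hypothetical simulation of the xkcd jelly bean experiment: 20 colors,
# none of which actually affects acne, each tested at p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_colors, n_subjects, n_experiments = 20, 50, 2_000

false_alarms = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_colors):
        # Treated and control acne scores come from the same distribution,
        # so any "significant" difference is pure chance.
        treated = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
        control = rng.normal(loc=0.0, scale=1.0, size=n_subjects)
        p_values.append(stats.ttest_ind(treated, control).pvalue)
    if min(p_values) < 0.05:
        false_alarms += 1

print(f"At least one 'significant' color in {false_alarms / n_experiments:.0%} "
      f"of experiments (theory: {1 - 0.95**20:.0%})")
```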

But there are more subtle forms of the problem that pervade the scientific literature. In an unpublished paper (PDF), statisticians Andrew Gelman, at Columbia University, and Eric Loken, at Penn State, argue that researchers who deliberately avoid p-hacking still unknowingly engage in a similar practice. The problem is that one scientific hypothesis can be translated into many different statistical hypotheses, with many chances for a spuriously significant result. After looking at their data, researchers decide which statistical hypothesis to test, but that decision is skewed by the data itself.

To see how this might happen, imagine a study designed to test the idea that green jelly beans cause acne. There are many ways the results could come out statistically significant in favor of the researchers’ hypothesis. Green jelly beans could cause acne in men, but not in women, or in women but not men. The results may be statistically significant if the jelly beans you call “green” include Lemon Lime, Kiwi, and Margarita but not Sour Apple. Gelman and Loken write that “researchers can perform a reasonable analysis given their assumptions and their data, but had the data turned out differently, they could have done other analyses that were just as reasonable in those circumstances.” In the end, the researchers may explicitly test only one or a few statistical hypotheses, but their decision-making process has already biased them toward the hypotheses most likely to be supported by their data. The result is “a sort of machine for producing and publicizing random patterns.”
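
A simulation makes the worry concrete. In the hypothetical sketch below (the group sizes and the menu of analyses are invented for illustration), the simulated researcher performs only one t-test per experiment, exactly as Gelman and Loken describe, but chooses whether to test men, women, or everyone after seeing which comparison looks most promising. Even though every dataset is pure noise, the false positive rate climbs well above the nominal 5 percent.

```python
# Hypothetical sketch of the "garden of forking paths": one scientific
# hypothesis (jelly beans cause acne), several reasonable statistical
# hypotheses (in men, in women, in everyone), and a data-dependent choice.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group, n_experiments = 40, 5_000

significant = 0
for _ in range(n_experiments):
    # Pure noise: jelly beans affect no one.
    men_treated = rng.normal(size=n_per_group)
    men_control = rng.normal(size=n_per_group)
    women_treated = rng.normal(size=n_per_group)
    women_control = rng.normal(size=n_per_group)

    candidates = [
        (men_treated, men_control),        # effect in men only
        (women_treated, women_control),    # effect in women only
        (np.concatenate([men_treated, women_treated]),
         np.concatenate([men_control, women_control])),  # effect in everyone
    ]
    # The forking path: pick the comparison with the biggest apparent gap...
    treated, control = max(
        candidates, key=lambda tc: abs(tc[0].mean() - tc[1].mean())
    )
    # ...then run a single, perfectly "honest" t-test on it.
    if stats.ttest_ind(treated, control).pvalue < 0.05:
        significant += 1

print(f"False positive rate: {significant / n_experiments:.1%} (nominal: 5.0%)")
```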

Gelman and Loken are not alone in their concern. Last year Daniele Fanelli, at the University of Edinburgh, and John Ioannidis, at Stanford University, reported that many U.S. studies, particularly in the social sciences, may overestimate the effect sizes of their results. As they note, “All scientists have to make choices throughout a research project, from formulating the question to submitting results for publication.” These choices can be swayed “consciously or unconsciously, by scientists’ own beliefs, expectations, and wishes, and the most basic scientific desire is that of producing an important research finding.”

What is the solution? Part of the answer is to not let measures of statistical significance override our common sense—not our naïve common sense, but our scientifically informed common sense. We shouldn’t put much stock in one statistically significant precognition result that defies everything we know about the physical world. Studies with small, unrepresentative samples can be valuable, but we should treat them cautiously until they have been replicated in other samples. As Gelman and Loken put it, without modern statistics most people would not believe a remarkable claim about general human behavior “based on two survey questions asked to 100 volunteers on the internet and 24 college students. But with the p-value, a result can be declared significant and deemed worth publishing in a leading journal in psychology.”

Michael White
Michael White is a systems biologist at the Department of Genetics and the Center for Genome Sciences and Systems Biology at the Washington University School of Medicine in St. Louis, where he studies how DNA encodes information for gene regulation. He co-founded the online science pub The Finch and Pea. Follow him on Twitter @genologos.
