Reproducibility Project Attacked By Researchers

A 2015 report famously made waves when it suggested that dozens of published psychology studies were deeply flawed and could not be reproduced by an “independent” group of researchers. The report, known as the Reproducibility Project, all but cleaved the psychology community into two groups. Now a number of researchers have fought back, accusing the authors of that skeptical project of getting the statistics badly wrong.

Reproducibility Project’s statistical methodology flawed?

Yesterday saw a group of four researchers firing back, in what counts as strong language in academia, at the group that questioned the ability to reproduce psychology studies. The paper in question was published in September and essentially said that fewer than 40 studies in a sample of 100 psychology papers published in leading journals actually held water when retested by an independent group of researchers.

The critique, published Thursday, argued that once the original report’s statistical methodology was corrected, the reproducibility of the studies tested was considerably closer to 100%.

Let’s be clear: neither the authors of the September paper nor those of yesterday’s critique made any suggestion of malfeasance, data manipulation or fraud.

“That study got so much press, and the wrong conclusions were drawn from it,” said Timothy D. Wilson, a professor of psychology at the University of Virginia and an author of the new critique. “It’s a mistake to make generalizations from something that was done poorly, and this we think was done poorly.”

Brian Nosek, a colleague of Wilson’s in the same department at the University of Virginia, coordinated the Reproducibility Project and spent a number of years on the work with his team. It would be interesting to take a look at the psychology behind this obvious rift between the two.

“They are making assumptions based on selectively interpreting data and ignoring data that’s antagonistic to their point of view,” said Nosek in rebuttal of the critique.

The fight between the two groups represents a spat within the “old” camp of psychology. In recent years, younger researchers have shown a willingness to share their studies with colleagues prior to publication in order to give them more credence through transparency. Rather than risking dissent and accusations of data manipulation, they publicly share their study designs and data before approaching journals in their field.

The fight will go on

The authors of the critique, while not suggesting data manipulation or fraud, did take issue with how faithful the replications were to the original studies being tested. The critique suggests that even the smallest alteration to an original study’s design could substantially change the results. However, Dr. Nosek and his team of independent researchers say they consulted the original studies’ authors before running the replications.

Following the publication of the report last year, a number of the authors who were essentially “called out” by the findings were understandably upset and called for Nosek to replicate their studies once again. Dr. Nosek, in response, said that he would retest 11 of the studies that failed to replicate, to ensure that the original study designs were adhered to in the replication process.

When the Reproducibility Project began, Nosek and his team had not agreed on a statistical method ahead of time, something the critique took issue with from the start. In the end, Nosek’s team settled on five ways to compare each replication with its original study, including the size of the effect and what happened when the results of the two studies were combined.

The critique strongly took issue with this, believing that only one measure should have been the focus: whether the gap between an original result and its replication is small enough to be explained by design differences and, essentially, chance.
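To make the dispute over measures a little more concrete, here is a minimal sketch, not the project’s or the critique’s actual analysis, of two of the kinds of comparisons described above: whether a replication’s effect estimate falls within the original study’s 95% confidence interval, and what a combined estimate of the two studies looks like. The effect sizes, sample sizes and helper functions below are hypothetical illustrations.

```python
# Simplified illustration of two replication measures (not the Reproducibility
# Project's actual code). Effect sizes here are correlations (r); all numbers
# are made up for illustration.

import math


def fisher_z(r: float) -> float:
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))


def inverse_fisher_z(z: float) -> float:
    """Back-transform a Fisher z value to a correlation."""
    return math.tanh(z)


def confidence_interval(r: float, n: int, z_crit: float = 1.96) -> tuple[float, float]:
    """Approximate 95% CI for a correlation, via the Fisher z-transform."""
    z = fisher_z(r)
    se = 1 / math.sqrt(n - 3)  # standard error of z
    return (inverse_fisher_z(z - z_crit * se),
            inverse_fisher_z(z + z_crit * se))


def combined_estimate(r1: float, n1: int, r2: float, n2: int) -> float:
    """Inverse-variance-weighted (fixed-effect) combination of two correlations."""
    z1, z2 = fisher_z(r1), fisher_z(r2)
    w1, w2 = n1 - 3, n2 - 3  # weights are the inverse variances of z
    return inverse_fisher_z((w1 * z1 + w2 * z2) / (w1 + w2))


# Hypothetical original study: r = 0.35, n = 80; replication: r = 0.18, n = 120.
lo, hi = confidence_interval(0.35, 80)
inside = lo <= 0.18 <= hi
print(f"Original 95% CI: ({lo:.2f}, {hi:.2f}); replication estimate inside? {inside}")
print(f"Combined estimate: r = {combined_estimate(0.35, 80, 0.18, 120):.2f}")
```

On these made-up numbers the replication estimate falls inside the original’s interval, so a difference of that size could be chalked up to chance; the argument between the two camps is, in large part, about how much weight a single check like this should carry compared with the project’s other four measures.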

Uri Simonsohn, a researcher at the Wharton School of the University of Pennsylvania, has repeatedly criticized the replication process and the resulting paper, calling the group’s statistical methodology “predictably imperfect” and questioning the whole project even without invoking the critique’s arguments about chance and design differences in the replicated studies.

But Simonsohn isn’t choosing sides. He points out that the Reproducibility Project’s paper essentially describes the psychology researchers’ work as a glass of water that is 40% full, while noting that the critique is effectively saying, “well, it could also be 100% full.” He thinks both teams have it wrong.

“State-of-the-art techniques designed to evaluate replications say it is 40 percent full, 30 percent empty, and the remaining 30 percent could be full or empty, we can’t tell till we get more data.”

More transparency before publication is clearly part of the answer here, but don’t expect this fight between camps to go away any time soon. The results of Dr. Nosek’s re-replication of the eleven studies will be eagerly awaited by the psychology community.
