Dworkin discusses replication

Kayla Trail

Psychology professor Steven Dworkin gave a lecture to students and faculty in Waggoner Hall on Friday, Oct. 7.

 Dworkin, who teaches behavioral analysis, behavioral pharmacology, behavioral neuroscience, and history and systems of psychology, focused his lecture on replication in psychology and why it deserves more attention.

 The lecture, titled “Replication: The Hallmark of Science,” focused on the problems with replication in scientific research as well as recent developments in medicine and research surrounding it.

 “I’ll start off with some highlighting of what has been going on in the past years and why there is a problem, and there is indeed a problem,” Dworkin said. “It’s continuing to grow with regards to replicating scientific research in general and psychological research specifically, although other areas of sciences are not immune.”

 Dworkin then showed news clippings of articles from outlets such as The Economist and discussed how coverage that casts replication in a negative light affects how psychologists’ studies are viewed.

 “Of course when things like this hit the New York Times, Congress sees that,” Dworkin said. “(They) want to know what’s going on, is there a problem, what is the problem and can it be fixed?”

 Science is a peer-reviewed journal of the American Association for the Advancement of Science (AAAS) and one of the world’s top academic journals.

 “Science is probably the most highly rated journal magazine in the world so it has a very high impact,” Dworkin said. “Very highly ranked, very prestigious paper that’s published, so what happened was a group got together and decided that they would replicate 100 studies that were published. In order to replicate those studies, they contacted the original authors to get copies of the methods that were used and any other information that those authors would share in regards to results and data analysis. That narrowed down essentially a larger number of articles that they took to 100 to try and replicate those studies. The results of this are that replication effects turn out to be half the magnitude, the effects essentially that they found were less.”

 In a PowerPoint presentation, Dworkin showed an animated clip sourced from The Economist that illustrated how misleading false positives can be.

 “Scientific findings are considered sound when they’re unlikely to have happened by chance,” the animated video explains. “But statistical logic shows that errors are rampant. Consider 1,000 hypotheses to be tested. Not all of them are true; perhaps 10 percent are true, in this case 100 hypotheses. But sometimes random errors make a hypothesis that is really false look true, called a false positive. Most disciplines accept the possibility that this happens 1 in 20 times, so 900 negatives produce 45 false positives. If there were 100 real positives and 45 false positives, then almost a third of the results that look true would be wrong.”
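
 The arithmetic in the clip can be checked directly. Here is a minimal sketch, using only the figures quoted above and assuming, as the video does, that every true hypothesis tests positive:

```python
# Worked check of the false-positive arithmetic quoted above.
total_hypotheses = 1000
true_hypotheses = int(total_hypotheses * 0.10)          # 100 are actually true
false_hypotheses = total_hypotheses - true_hypotheses   # 900 are actually false

false_positive_rate = 1 / 20                            # the accepted 1-in-20 error rate
false_positives = false_hypotheses * false_positive_rate  # 900 * 0.05 = 45

# Assumption from the quote: all 100 true hypotheses come out positive.
true_positives = true_hypotheses
positives = true_positives + false_positives            # 145 results "look true"

print(false_positives / positives)                      # ~0.31, almost a third are wrong
```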

 A student in Dworkin’s Foundations of Learning and Behavior psychology class had asked him whether chewing gum while studying for a test and while taking the test would improve test performance.

 “I don’t know if the studies have been done,” Dworkin said. “But it would be pretty marginal, so maybe it would improve from 96 percent to 96.5 percent, maybe even statistically significant. It really depends on the context; if you are, say, a runner in the Olympics, that kind of increase is very important. Essentially that’s talking about effect size, so things can be significant but not relevant.”
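
 To make the effect-size point concrete, here is a minimal sketch, not from the lecture, using hypothetical numbers built around Dworkin’s 96-versus-96.5-percent example: with a large enough sample, even that tiny difference comes out statistically significant, while the effect size stays small.

```python
# Illustrative only: a tiny difference in means becomes "significant" with a huge sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000                                          # hypothetical, very large samples
no_gum = rng.normal(loc=96.0, scale=5.0, size=n)     # scores without gum (assumed spread)
with_gum = rng.normal(loc=96.5, scale=5.0, size=n)   # scores with gum

t, p = stats.ttest_ind(with_gum, no_gum)
cohens_d = (with_gum.mean() - no_gum.mean()) / np.sqrt(
    (with_gum.std(ddof=1) ** 2 + no_gum.std(ddof=1) ** 2) / 2
)

print(f"p-value: {p:.2e}")               # far below 0.05: statistically significant
print(f"effect size d: {cohens_d:.2f}")  # ~0.1: a very small, arguably irrelevant effect
```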

 Dworkin then turned to data on preclinical research in new drug development.

 “It takes about $3 billion to bring a new drug to the market,” Dworkin said. “Where does all that money go? This is before it ever gets to humans. What’s happening in the lab to synthesize it? What happens in preclinical studies? One review of reproducibility covered 53 studies, and of those only 10 percent of the significant findings could be replicated. You can see that it’s hovering at 50 percent or below, and the estimate for replication in this type of work is between 38 percent and 64 percent.”

 An article published by Forbes elaborates on where that money goes when producing a new drug.

 “It can take 10-12 years for the new drug to get through the Food and Drug Administration’s (FDA) approval process and hit the market,” the article read. “Moreover, once the drug has made it to market, there is often post-approval research and tests to evaluate dosing strength and a host of other factors. DiMasi et al estimate those efforts can add an extra $312 million to the cost of a drug, for a grand total of $2.87 billion.”

 According to Dworkin, another aspect of reproducibility is its cost, and he shared data collected by Science (AAAS).

 “According to Science, about $56.4 billion is spent on research in the United States a year,” Dworkin said. “If reproducibility is at 50 percent on average, then essentially $28.2 billion is wasted because half of those studies can’t be reproduced. Some of the reasons: in preclinical research, biological agents and materials aren’t correct. Cell lines have been contaminated, and samples that were supposed to be pure from one species are not so pure and have problems.”
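
 The waste figure follows directly from the numbers Dworkin cited; a minimal sketch, assuming a flat 50 percent irreproducibility rate:

```python
# Rough waste estimate from the figures quoted above.
annual_us_research_spending = 56.4e9   # dollars per year, as cited from Science/AAAS
irreproducible_fraction = 0.5          # assumption: roughly half of studies fail to reproduce

wasted = annual_us_research_spending * irreproducible_fraction
print(f"${wasted / 1e9:.1f} billion")  # $28.2 billion
```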

 According to Dworkin, one reason scientific research cannot prove anything is a logical fallacy known as affirming the consequent.

 “Essentially, we say ‘if A, then B,’ and we start looking for B. If we find B, we say ‘Aha, A was there,’” Dworkin said. “If my car doesn’t start, then my battery is dead. I look and my battery’s dead, I replace it and the car starts. But let’s say my battery is not dead; there are other reasons why my car won’t start. The logical fallacy is in the premise that if the car doesn’t start, it’s the battery.”
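
 The invalidity of that inference can be shown with a small truth-table check; here is a minimal sketch, not from the lecture, that searches for a case where both premises hold but A is false:

```python
# Affirming the consequent: "if A then B; B; therefore A" is invalid because
# there is a case where both premises are true while A is false.
from itertools import product

for a, b in product([True, False], repeat=2):
    implies = (not a) or b           # "if A, then B"
    premises_hold = implies and b    # both premises are true
    if premises_hold and not a:
        print(f"Counterexample: A={a}, B={b}")   # A=False, B=True
```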

 There are two kinds of replication, direct and indirect, which differ in how the replication study is carried out.

 “Direct replication is when you try to repeat the study; the same person in the same lab tries to repeat it, or someone else tries to repeat it doing everything they are aware of that the original authors did,” Dworkin said. “Indirect replication is starting off with a direct replication and then tweaking it to add something new to the field. These are real, true tests of the significance of the study: can it be replicated? Will it be replicated with direct replication or will it be replicated with indirect replication?”

 At the end, the lecture was opened to questions, and Dworkin closed by saying that students should be aware of replication and the issues it can raise.

 “I think it’s important that people understand the process of science,” Dworkin said. “That there are many reasons for a failure to replicate, and not to go after the scientist and think it’s something they did wrong.”
