So you’ve found research fraud. Now what?

April 26, 2024

When it is alleged that a scientist has manipulated data behind their published papers, there’s an important but miserable project ahead: looking through the rest of their published work to see if any of that is fabricated as well.

After dishonesty researcher Francesca Gino was placed on leave at Harvard Business School last fall following allegations that four of her papers contained manipulated data, the people who’d co-authored other papers with her scrambled to start double-checking their published works.

Gino was a prolific researcher, and with 138 papers now called into question and more than 143 people who had co-authored with her, it proved a challenge to determine who had handled which data. So six co-authors began working through each paper to systematically make public how the data was collected and who had custody of it. Their work was organized as the Many Co-Authors Project.

The group was undeterred by Gino suing all of her accusers last summer, as well as by her condemnation of the project as unfair (“it inadvertently creates an opportunity for others to pin their own flawed studies or data anomalies on me,” she wrote). But their work provides a window into the kinds of manipulations and errors that can slip past peer review until a paper comes under heightened scrutiny, and it points, in its own way, to a broader problem with our current research system.

Based on the group’s work, it looks plausible that the data manipulation for which Gino is under fire is not confined to the four papers that have already been retracted. For example, in this 2019 paper, many participants were disqualified for not paying attention to the instructions, but the disqualified participants were overwhelmingly ones whose results ran contrary to the hypothesis. (Likely because of the litigation surrounding the charges against Gino, the authors are careful not to say outright that what they’ve seen is a surefire sign of fraud.)

But papers like the 2019 one — where the data is available — are the exception, not the rule. For most of the papers, no one has access to the data, which leaves no way to determine whether manipulation occurred.

In some cases, co-authors are wary of participating in the effort to find other sketchy studies, worried that their names will be tarnished by association if a paper they worked on turns out to be fraudulent. But with systematic fraud, transparency is the only way through. Without a serious reckoning, the discovery of data manipulation doesn’t undo the harm it has done to our understanding of the world. Even when a paper is retracted, the other research that relied on its findings is rarely amended. Instead, new studies keep getting built atop the flawed ones.

That’s a problem for scientific inquiry.

We need to do something more systematic about fraud

There’s something simultaneously heartwarming and exasperating about stories of researchers across the globe coming together to check whether their published research was actually faked.

Why is basic information such as “which co-author collected the data?” and “who has access to the raw data?” not collected as part of the process of publishing papers? Why isn’t the data itself available by default, which would make it possible to catch mistakes as well as fraud? And after many researchers have been accused of systematic fraud, why is there still no systematic process for finding problems in the rest of an accused researcher’s published work?

This is one of Gino’s complaints about the Many Co-Authors Project. “Like all scholars, I am interested in the truth. But auditing only my papers actively ignores a deeper reflection for the field,” she wrote. “Why is it that the focus of these efforts is solely on me?”

The focus is on her for a good reason, but I do think that the Many Co-Authors Project is a symptom of a broken system. Even once a researcher is suspected of fraud, no institution is responsible for reviewing the work they’ve published and how it might affect the literature.

Richard Van Noorden reported in Nature last year about what happens when a researcher is well-known to have fabricated data: “A more recent example is that of Yoshihiro Sato, a Japanese bone-health researcher. Sato, who died in 2016, fabricated data in dozens of trials of drugs or supplements that might prevent bone fracture. He has 113 retracted papers, according to a list compiled by the website Retraction Watch.”

So what happened to the other work that relied on Sato’s? For the most part, the retractions haven’t propagated, and the research built on his results is still standing: “His work has had a wide impact: researchers found that 27 of Sato’s retracted RCTs had been cited by 88 systematic reviews and clinical guidelines, some of which had informed Japan’s recommended treatments for osteoporosis. Some of the findings in about half of these reviews would have changed had Sato’s trials been excluded.”

Journals do not consider themselves responsible, when they retract a paper, for checking whether other papers that cite it are affected, or whether other papers by the same author have similar problems. Harvard doesn’t consider itself to have this responsibility. Co-authors may or may not consider themselves to have it.

It’s as if we treat every case of fraud in isolation, instead of acknowledging that science builds on other science and that fraud rots those foundations.

Some easy principles for reform

I’ve written before that we should do a lot more about scientific fraud in general. But it seems like a particularly low bar to say that, when a person is shown to have manipulated data, we should check the rest of their work and get it retracted if needed. Even this low bar, though, is only being met thanks to the unpaid and unrewarded work of people who happened to notice the problem, some of whom have been sued for it.

Here’s what could happen instead:

Information about which co-author conducted the research and who has access to the raw data should be collected as a matter of course during paper submission. This information is crucial to evaluating any problems with a paper, and it would be easy for journals to ask for it with every submission. Then a project like the Many Co-Authors Project wouldn’t be needed; the data its members are trying to reconstruct would already be available to everyone.

Nonprofits, the government, or concerned citizens could fund an institution that follows up on evidence of data manipulation and makes sure manipulated results no longer poison the literature they’re part of, especially in areas like medical research where people’s lives are at stake.

And the law could protect the people who do this essential work by making it faster to dismiss lawsuits over legitimate scientific criticism. Gino sued her critics, which is likely contributing to the slow pace of reevaluating her other work. But she was only able to do that because she lived in Massachusetts: in many states, so-called anti-SLAPP provisions allow for the quick dismissal of lawsuits that suppress protected speech. Part of the Francesca Gino saga is that Massachusetts has a very weak anti-SLAPP law, so all of the work to correct the scientific record is taking place under the looming threat of such a lawsuit. In a state with stronger anti-SLAPP protections, she would have had to make the case for her research to her colleagues instead of trying to silence her critics in court.

It is very much possible to do better when it comes to scientific fraud. The irony is that Gino’s research and the controversy surrounding it may well still end up having a long-lasting legacy in teaching us about dishonesty and how to combat it.

A version of this story originally appeared in the Future Perfect newsletter.
