At Rutgers, Vitaly Podzorov leads an effort to maintain research integrity worldwide


For the past several years, Vitaly Podzorov, a professor in the Department of Physics and Astronomy in the Rutgers School of Arts and Sciences, has worked voluntarily as part of a research integrity community to detect signs of potential scientific misconduct in peer-reviewed research papers. 

Working via PubPeer, an online platform established in 2012 to allow scientists to discuss research after publication, Podzorov and postdoctoral associate Vladimir Bruevich have posted detailed analyses and post-publication reviews on studies covering materials science, solid-state physics and electronics. 

Physicist Vitaly Podzorov has been recognized for his work in the research integrity community.

In January, the founders of PubPeer, which has evolved into a central platform for researchers addressing doubts about the validity or integrity of scientific publications, recognized Podzorov with an award for “outstanding comments” that provide “significant, original contributions to scientific discussions.” Podzorov discusses how and why he became involved in the effort to discern errors in published research. 

What inspired you to focus on the detection of irregularities in scientific papers?

My motivation primarily stems from the realization that the whole spectrum of scientific misconduct, ranging from seemingly harmless exaggerations to outright data fabrication, is typically committed by individual researchers to gain an unfair competitive edge over their peers. 

Such behavior may deliver short-term gains to those individuals, but in the long run it harms the research community. The harm comes in various forms: misinforming the public and misleading other researchers, robbing the rest of the research community of precious funding, and – very importantly – eroding public trust in science. The realization of the importance of this problem and, specifically, the understanding that the research community itself is the primary victim were the main drivers that motivated me.
 
An additional factor that may have contributed was an early encounter I had with what is now considered one of the biggest cases of fraud in science. From 2002 to 2003, while working at Rutgers as a postdoctoral researcher with Professor Michael Gershenson, now an emeritus, we were involved in a global race to reproduce Jan Hendrik Schön’s so-called breakthrough devices: single-crystal organic transistors. 

A transistor is a primary building block of electronics. Those revolutionary “breakthroughs” by Schön were eventually found to be fraudulent. None of his devices existed. 

I was not among the whistleblowers who raised initial concerns about the Bell Labs papers authored by Schön. But my work at Rutgers led to the first true experimental demonstration of high-performance organic single-crystal transistors that actually worked. This was a real breakthrough that played a pivotal role in eventually salvaging the field and led to several other major advances. That case, encountered early in my career, showed clearly how damaging fraud can be for a research field. 

In a recent interview on scientific fraud in the journal Nature Communications, you mentioned the terms “reproducibility” and “transparency” as key ideas in conducting ethical research. What do you mean by those words?

Reproducibility simply refers to the ability of independent research groups to repeat, within a reasonable time frame, important results published by others. If an important claim is not reproduced by the community for some time, chances are there are issues with the original paper that can range from honest scientific mistakes to intentional cheating. In my experience, most cases we have encountered appear to be intentional data misinterpretation or manipulation to make results look more attractive. 

Transparency is a practice of ensuring that all the technical details necessary to reproduce the work are disclosed in original publications. It is a necessary condition for reproducibility.  
   
What are common signs of fraudulent data? Do you employ specialized tools or techniques to analyze the reproducibility of scientific studies? 

What we most frequently encounter in scientific literature is not utterly fraudulent research, but data misinterpretation to hype up one’s results. Our work at PubPeer shows this is sometimes mixed with data manipulation as well. 

Unlike biomedical research, where microscopy images are a common type of data, my field of electronic materials and devices typically features electrical device characteristics, such as current-voltage curves, as its central data. 

To critically assess publications, it is not enough simply to be good at pattern recognition or at spotting abnormalities in images. To investigate, we need a solid understanding of device physics and must know which behaviors are possible or impossible based on the fundamental principles of device operation. In addition, we need to do serious number crunching to analyze published data, which typically involves fitting the data with established models to challenge or verify the authors’ claims. It is time-consuming, tedious work that requires expertise. 

However, one can be tipped off quickly by the following red flags: claims that are too good to be true; data and curves that are too good-looking or, on the contrary, too bad-looking for a typical experiment; unusual features in experimental data that we refer to as “kinks,” “knees,” or “notches” on plots; unexpected device behaviors; and a lack of basic information about the devices or measurement conditions. 

We do not use any specialized tools in our analysis, except for data plotting and analysis software, such as Origin, which is commonly used by other researchers.

How do you ensure that your investigations are conducted ethically and fairly?

First, we only include considerations based on publicly verifiable information, including data, claims and conclusions published in a paper in question, as well as well-known scientific texts on the subject matter or fundamental physical principles involved. 

Second, our reports are 100% focused on the facts and the science, with all the expressed concerns presented as a factual description and analysis of the published work. 

Third, each point we raise is supported by a detailed and explicit analysis, so each can be verified by independent experts. 

Finally, our reports do not contain any allegations of misconduct, fraud or any other wrongdoing by the authors; they just present facts. Neither do they contain any personal comments or speculations about the authors. 

What impact do you hope your work will have on the scientific community, and, more broadly, on the public?

Given that deception is deeply rooted in human nature and thus cannot be eradicated, I think the best practical outcome of our efforts is raising awareness in the research community of specific publications or research groups engaging in bad science. 

I also hope that the culture of critical thinking, a common element of old-school scientific culture, will be actively cultivated again in young generations.

Furthermore, I hope science policy makers will introduce financial incentives and deterrents to stimulate the research community’s vigilance and address the lack of accountability of those generating bad science, as recently proposed in our Q&A in Nature Communications.