Wednesday, October 12, 2011

Scientific Safeguards Not Working?


Scientific Safeguards Insufficient to Detect Fakes
Daniel S. Greenberg

Since the scientific community prides itself on openness and mathematical precision, how is it possible for scientists to pull off stunning fakeries? 

It turns out to be astonishingly easy, something on the order of passing a bad check to trusting friends -- and with far less chance of eventually getting caught. That realization is sinking into the scientific community following recent revelations of fraudulent research at several major institutions. The result has been some unusual introspection about the seamy side of competition and career-building in the profession.

Until the most recent revelations, which concerned a long string of faked research papers by a heart researcher at a Harvard-affiliated hospital, the elders of science offered a simple response when outsiders expressed interest in reports of scientific fakery: It is rare, they insisted, because the internal workings of science virtually guarantee exposure.

How do they guarantee exposure? The answer, offered at congressional hearings and in defensive articles, invariably centers on two traditional checks of quality in the scientific process: peer review, whereby scientists examine applications for grants and also review research papers submitted for publication; and replication, the process of repeating another scientist's experiments.

These barriers turn out to be porous, as was acknowledged recently by the editor of the prestigious New England Journal of Medicine in an editorial inspired by the author of the faked Harvard studies, Dr. John R. Darsee. "At Harvard," the editorial noted, "Darsee's chicanery compromised the integrity of nine published papers, necessitating the withdrawal of each, in part or in toto. In addition, 21 abstracts had to be withdrawn."

Prior to his Harvard appointment, Darsee was at Emory University, where, the editorial points out, he "compromised the integrity of at least eight published papers. … In addition, he appears to have manipulated or invented the data published in at least 32 abstracts …."

Exposure, inspired by the suspicions of a laboratory worker who saw Darsee changing entries in a notebook, came after Darsee had been at it for two years at Harvard. It is doubtful the fakeries at Emory would have come to light without the inquiry inspired by the Harvard fakery. How did he get away with it for so long?

Part of the answer is that he was a skilled faker who exploited a trustful atmosphere. But, as has been noted after fakeries have been exposed, the culprits - commonly ambitious beginners on the way up - functioned in an apprentice system where they did the lab work with little or no supervision but were required to share the credit with their established bosses. As Britain's leading scientific journal, Nature, recently observed: "In at least three of the proven falsifications in the United States, a relatively senior scientist has been in effect a front man for a junior colleague, acting as a general provider of services and funds and putting his name on an occasional published paper. In both these roles, such people have helped make fraud possible, yet it is the junior colleague whose career has been ruined. The natural justice in that arrangement is not easily discerned."

Why did not peer review and replication promptly detect these fakeries? The New England Journal of Medicine, which published two of Darsee's papers upon recommendations of peer reviewers, concludes that "unless a maladroit cheat fabricates results that are manifestly impossible or inherently contradictory, even the most rigorous peer review is not likely to uncover fraud."

The reason is that peer reviewers, who are busy scientists themselves, assume that the author of a research paper is reporting his observations honestly. Their role is to check for originality, logic and clarity.

As for replication, for very good reasons it rarely is conducted. Agencies that provide funds for research often will not finance a rerun of completed research, particularly in times of budgetary strains. And a scientist looking to make his mark knows that distinction is achieved through originality, not by copycat practices.

Could fakery be thwarted with additional safeguards? Yes, but it would be deadly for science if it were to develop a system akin to bank examiner audits.

What is needed, as several of the introspectionists are arguing, is a restoration of close working relationships between junior and senior scientists - something that has suffered in the era of lucrative consulting opportunities for many specialists. High standards should be established for collecting and storing research data. And when a big name is included on a research paper, its presence should signify participation in the project.

One of the most shocking aspects of the Darsee episode is that many of his co-authors knew nothing of the research in which they were presumed to have participated and, in some cases, claimed they did not know their names were on papers until after publication.

The kindest thing to be said about this system is that it is unwholesome, and that if the scientific community does not clean up its affairs, the non-scientific community and Congress, which pays most of the bills for science, are likely to lumber into the picture.
Daniel S. Greenberg is editor and publisher of Science & Government Reports, an independent newsletter based in Washington, D.C. (THE OREGONIAN, FRIDAY, JULY 8, 1983)