Fraud in Science: A Look Behind the Scenes
PEERING down the microscope, the scientist jumped at what he saw. “Eureka!” he shouted. And another great scientific discovery was made.
That is the sort of thing we are taught to believe about the triumphs of science. Recall your elementary-school science class for a moment. Remember the great heroes in science’s hall of fame? Men like Galileo, Newton, Darwin and Einstein are extolled not only for their scientific achievements but also for their virtues—objectivity, dedication, honesty, humility, and so forth. The impression given was that, before the sheer force of their superior intelligence and rational minds, the mysteries of nature simply unveiled themselves and the truth popped out before them.
In reality, however, things are not quite that simple. In most cases, scientists must spend months or years laboring in the laboratory, struggling with results that are often confusing, puzzling and even contradictory.
Ideally, one might expect that the dedicated scientist would press forward undaunted until the truth is found. The fact of the matter, though, is that we generally know very little about what goes on behind closed laboratory doors. Is there reason to believe that those engaged in scientific pursuits are any less influenced by such baser human traits as prejudice, rivalry, ambition and greed?
“Personal preferences and human emotions are said to be suppressed by the scientist in the interest of securing truth,” wrote Michael Mahoney in Psychology Today. “However, the annals of both early and contemporary science suggest that this portrayal is less than accurate.”
In a similar vein, columnist Alan Lightman wrote in the magazine Science 83: “The history of science is replete with personal prejudices, misleading philosophical themes, miscast players. . . . I suspect all scientists have been guilty of prejudice at times in their research.”
Do these comments surprise you? Have they at least tainted, if not shaken, the image you had of science and scientists? Recent studies on the subject have revealed that even scientific luminaries of the past were not above using unethical methods to advance their own ideas or theories.
Isaac Newton is often called the father of modern physics for his pioneering work on the theory of universal gravitation. The idea, when published in his famous treatise Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), was strongly opposed by some contemporary scientists, including the German mathematician Gottfried Leibniz. This resulted in an extended feud between them that was not put to rest until the end of their lives.
Writing in Science, Richard S. Westfall asserted that, to strengthen his position, Newton made some “adjustments” in the Principia so that his calculations and measurements would more closely support his theory, making it more convincing. In one example an accuracy of one part in 3,000 was claimed, and in another his computations were carried to seven decimal places, something quite unheard of in those days. “If the Principia established the quantitative pattern of modern science,” wrote Westfall, “it equally suggested a less sublime truth—that no one can manipulate the fudge factor so effectively as the master mathematician himself.”
Newton allowed himself to be drawn into another controversy that eventually got the better of him. To claim priority over Leibniz for the invention of calculus, according to the Encyclopædia Britannica, Newton, as president of the esteemed Royal Society, “appointed an ‘impartial’ committee [made up mostly of his adherents] to investigate the issue, secretly wrote the report officially published by the society, and reviewed it anonymously in the Philosophical Transactions,” thus crediting himself with the honor.
That a man of Newton’s stature would resort to such tactics is indeed a paradox. It clearly shows that conscientious and honorable though a scientist, or anyone else, may be in other matters, when his own reputation or interests are at stake, he can become quite dogmatic, irrational, even reckless, and resort to shortcuts.
“It seems a reasonable, not to say trite, thought that scientists are human, subject to the same frailties as we all are, heroic, cowardly, honest and sly, silly and sensible in about the same measure, expert in some fields, but not in many,” writes consultant Roy Herbert in New Scientist. Though this view may not be held universally in the world of science, he adds, “I find no difficulty in accepting that.”
What, though, about the supposedly close-knit, self-correcting and self-policing structure of science—the processes of review, refereeing and replication?
In the wake of the recent, widely publicized series of frauds at prestigious research institutes, the Association of American Medical Colleges issued a report setting out guidelines on how to deal with fraud in research. The report, in essence, maintained that “the overwhelming probability that fraudulent data will be detected soon after their presentation” is a safeguard against unethical practices.
This assessment, however, did not sit well with many others, both inside and outside the scientific community. For example, a New York Times editorial, calling the report “a shallow diagnosis of science fraud,” pointed out that “none of the frauds was originally brought to light through the standard mechanisms by which scientists check each other’s work.”
In fact, a member of the report committee, Dr. Arnold S. Relman, editor of The New England Journal of Medicine, likewise disagreed with the report’s conclusion. “What kind of protection against fraud does peer review offer?” he asked. “Little or none.” To back up his argument, Relman continued: “Fraudulent work was published in peer-reviewed journals, some with very exacting standards. In the case of the two papers we published, no suggestion of dishonesty was raised by any of the referees or editors.”
As for the effectiveness of replication in spotting fraud, there appears to be a vast gap between theory and practice. In today’s highly competitive field of scientific research, scientists are more concerned with breaking new ground than with repeating what someone else has done. Even if a scientist’s work is based on an experiment done by someone else, the experiment is rarely repeated in exactly the same form.
The problem of replication is further compounded by what is sometimes called salami science. Some researchers deliberately ‘slice up’ their experimental findings into small bits in order to multiply the number of publishable works. This “affords an opportunity for dishonesty,” says a Harvard committee, “because such reports are less likely to be verified by others.” Researchers well know that unless an experiment is really important, it is unlikely that anyone will try to repeat it. It has been estimated that as many as half of all published papers are “unchecked, unreplicated, and maybe even unread.”
This does not mean, however, that science, as an institution, is failing or is not working. Quite to the contrary, a great deal of important research is being done, and many useful discoveries are being made. All of this is a credit to what is essentially an honor system—the ideal that scientific advancement is based on mutual trust and the sharing of knowledge within the scientific community.
What the recent cases of fraud in research have demonstrated is the simple fact that this ideal has its limitations and that not all members of the scientific community are equally ready to abide by it. The facts show that within the self-policing and self-correcting mechanism of science there are enough loopholes that anyone bent on beating the system who knows his way around it can do so.
As in everything else, economics plays a large role in the world of science. The days of the self-supporting, inventive tinkerer are apparently over. Scientific research today is big money, and much of it is funded by government, industry, foundations, and other institutions. Yet the economic crunch and budget cuts have made grants harder and harder to get. According to the National Institutes of Health, which funds some 40 percent of all biomedical research done in the United States on a yearly budget of about $4 billion, only about 30 percent of applicants for NIH grants now receive them, whereas in the 1950’s the figure was about 70 percent.
What this means for the researchers is that the emphasis has been shifted from quality to quantity—the ‘publish or perish’ mentality. Even established scientists often find themselves more occupied with raising funds to keep their expensive laboratories going than with working in them. This was what led to the downfall of a doctor who was receiving over half a million dollars in grants.
This man was asked to check a paper that had been sent to his busy supervisor for prepublication review. The paper happened to deal with a subject on which he himself was working. Rather than give an honest appraisal of the paper and risk losing his claim to priority, and perhaps his grant along with it, the doctor hurriedly touched up his own experiment, plagiarized some material from the other paper, and submitted his own work for publication.
Actually, the pressure to succeed is felt early in the lives of aspiring scientists, especially those in the medical field. “Stories of cheating among premedical students are common,” said Robert Ebert, former dean of Harvard Medical School, “and the race for high grades so as to insure admission to medical school is hardly designed to encourage ethical and humanitarian behavior.”
This early conditioning is easily carried over into the professional career where the pressure is even more intense. “In an environment which can ever permit success to become a more coveted commodity than ethical conduct, even the angels may fall,” lamented Ebert.
The current situation was well summarized by Stephen Toulmin of the University of Chicago, when he said: “You can’t change something into a highly paid, highly competitive, highly structured activity without creating occasions for people to do things they never would do in the earlier, amateur stage.”
Our brief excursion into the world of scientific research has provided us with a glimpse of the scientist at work. We have seen that, despite their training, scientists are just as subject to human frailties as they are imbued with virtues. Donning the white lab coat does little to change the picture. In fact, if anything, the pressures and competition in today’s world of science may well make it all the more tempting to seek out shady shortcuts.
The phenomenon of fraud in science is a reminder to all of us that science, too, has its skeletons in the closet. Though they are usually kept well out of sight, they are there, nonetheless. Their occasional exposure ought to make us realize that though science and scientists are often put on a pedestal, their place on it should be carefully reevaluated.
[Blurb on page 6]
“I suspect all scientists have been guilty of prejudice at times in their research”
[Blurb on page 6]
“What kind of protection against fraud does peer review offer?”
[Blurb on page 8]
Science, too, has its skeletons in the closet
[Box on page 7]
The Craft of Fraudulent Science
In 1830 the English mathematician Charles Babbage published a book entitled Reflections on the Decline of Science in England, summarizing what he saw as the state of scientific affairs of his day. In it, Babbage listed what he thought some scientists might be doing, or might be tempted to do, when their results did not turn out the way they expected:
“Trimming,” wherein irregularities were smoothed out to make the data look extremely accurate and precise.
“Cooking,” wherein only those results that best suited one’s theory were selected and the rest discarded.
“Forging,” the worst of all, wherein some or all of the data, in experiments one might or might not have performed, was fabricated.
[Picture on page 5]
Even Isaac Newton adjusted his data to support his theory