lection

fraud in the lab

4 september 2019

I recently published my 35th essay – let's call them all "essays," those journal articles, book chapters, and magazine pieces that make up the bulk of many an English professor's résumé. Thirty-five essays is an OK total, about one per year since I earned my PhD, middling for a career in the humanities. Of course, most of the time I feel lazy and unproductive. Most STEM professors would probably agree. A lot of them have published hundreds of articles in far shorter careers. How do they do it?

Hard work and smarts in most cases, no doubt, but Nicolas Chevassus-au-Louis has his suspicions about some reaches of the research professoriate. More and more research publications are being retracted for outright fraud. This core percentage of really bad science is thankfully small, in the range of a fraction of one percent in some disciplines to perhaps 1% or 2% in the worst cases. But many more studies present work that, while perhaps conscientiously done, cannot be replicated. Many more on top of that are hastily assembled, perfunctorily peer-reviewed, and rushed into publication. Much of the data presented is "beautified"; in fact, the prevailing conventions of publication almost demand such beautification, because un-made-up data is messy and unsightly. Scads more papers appear without any review, and for a fee paid by their authors, in near-worthless "open access" journals – especially in China and Korea, where the prevailing cultures can at times, according to Chevassus-au-Louis, show a cavalier attitude toward reality. The problem is burgeoning, breaking into public awareness recently as the "crisis of replication."

Human nature probably hasn't changed much since the low-productivity days of a few decades ago. The number of Jan Hendrik Schöns in the world, its egregious counterfeiters of knowledge, is likely constant. What's changed, Chevassus-au-Louis argues, is the nature of scientific assessment and reward. Researchers are measured by sheer number of articles, the "impact" of the journals they place them in, and the number of citations they receive in other articles. The tyranny of metrics, in Jerry Muller's phrase, takes over, and scientists who want any kind of career must publish more, faster, and less carefully.

The scientific community's extreme emphasis on the originality of a discovery incites researchers to use forbidden but often effective practices such as fraud, falsification, and plagiarism to attain at least a simulacrum of success: the publication of articles in the most prestigious journals. (158)

You'd think the most prestigious journals would publish only gold-plated stuff, but almost the opposite effect can apply, where titans like Nature, Science, and Cell seek out cutting-edge material that may sometimes be simply edgy. There's a sex-appeal factor at work in academic publication, as in any form of journalism, and the big players in the sciences are hardly immune.

Not just academic tenure and promotion, but funding considerations, drive some of the dubiousness that surrounds current scientific publication practices. Corporate funders in particular do not like to pay for negative or neutral results. Thus, Chevassus-au-Louis says, inabilities to replicate and failures to demonstrate the effect of a new discovery (particularly in biomedical fields) rarely see publication. The picture a Martian might get of 21st-century Earth science is one of remarkably prescient hypotheses invariably confirmed by experiment.

Not all STEM fields find themselves in crisis. Mathematics seems to be exempt. This is in part because mathematical advances either check out or they don't; experimental conditions are irrelevant. But it's in part too because of the social structure of the field. A lot of mathematical research these days is done collaboratively and in public. As Vicky Neale notes, you can watch progress on prime-number research in real time on the Internet these days. When you have to show every line of your work, and peer review is instantaneous and global, it's really hard to cook your numbers.

For similar reasons, physics has not had much of a fraud problem. Again, it's a mathematical field. But it's also structured uniquely, according to Chevassus-au-Louis. Major physics research tends to be done by huge teams that span many institutions. Checks and balances are everywhere, and replication is both straightforward and essential to innovation: if a physics discovery is unconfirmed, it basically doesn't yet exist. Also, says Chevassus-au-Louis, "let's not be naïve" (165): there's relatively little grant money at stake in math or physics, so inducements to be secretive and competitive – or fraudulent – are far less.

Biomedical fields, psychology, social sciences, though: these disciplines have a problem, and Chevassus-au-Louis argues they've been slow to address it. And the problem, I'd add, is far from academic. Much of the desultory and contradictory "knowledge" about bodies, diets, disease, and human nature that floats around in the media relies on reporting about some rather dubious "science" generated by the rush-to-publish culture.

Chevassus-au-Louis takes the reader through some famous cases, and then into more widespread, structural problems in the "softer" sciences. "To tackle the problem at the root," he argues, "we must modify the social structures of science" (164). He argues that more disciplines should emulate math and physics, and encourage sharing of raw data, no matter how ugly, something the digital age makes a doddle. The tyranny of "bibliometrics" is another of those faulty social structures. Giving journals "impact scores" and rating researchers on their "h-index" give a spurious legitimacy to quick-and-dirty citation-index tabulations. (Your h-index is the largest number h such that h of your articles have each been cited at least h times: so if five of your articles have been cited five times or more, but six haven't been cited six, yours is five. Doesn't matter if you've got another hundred that have been cited four times, or one that's been cited 400 times, your h-index is five. It's fun and easy to figure and it's stupid.)
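The arithmetic really is as easy as the essay says – which is part of the indictment. A minimal sketch (the function name and the sample citation counts are my own, chosen to mirror the examples above):

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have each been cited at least h times."""
    h = 0
    # Rank papers from most- to least-cited; h is the last rank
    # at which the paper's citation count still meets its rank.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited five times each: h-index is 5, and a hundred
# more papers cited four times apiece wouldn't raise it.
print(h_index([5, 5, 5, 5, 5] + [4] * 100))  # 5

# One paper cited 400 times yields an h-index of just 1.
print(h_index([400]))  # 1
```

The two sample calls reproduce the essay's point: the metric rewards a plateau of moderately cited papers and is blind both to the long tail and to a single blockbuster.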

Certainly rationalists in the sciences would want to abandon such outright dumb practices. But Chevassus-au-Louis tells a story that seems emblematic if perhaps apocryphal: a bunch of scientists squabbling over an airport cab, and one of them suggesting it should go to the one with the highest h-index.

In any case, the greatest danger to intellectual life may not even lie in fabrication and plagiarism. The greater problem is in the penumbra of fraudulent practices, the mass of undigested, unread, barely-reviewed, shakily significant work that passes as the common currency of so many disciplines. The humanities are certainly not exempt, even if they are less prone to quantification. As more and more academic life is run on the STEM template, pressures for "production" inevitably mean that humanists aim to satisfy metrics rather than understand artifacts and cultural phenomena. And it's a difficult pressure to deflect. If humanists adopt Chevassus-au-Louis' advocacy of "slow science" – really just a return to our ars longa roots, where one essay a year is a pretty good clip – we seem to be excusing our own indolence. You need a thick bark, in the academy, to resist accusations that you're dead wood.

Chevassus-au-Louis, Nicolas. Fraud in the Lab: The High Stakes of Scientific Research. [Malscience: De la fraude dans les labos, 2016.] Translated by Nicholas Elliott. Cambridge, MA: Harvard University Press, 2019. Q 175.37 .C4413