Saturday, May 07, 2016

Why I rejected your scientific paper

A little while back I stumbled across a post by Matt Welsh entitled "Why I gave your paper a Strong Reject," and it has had me thinking ever since about my own pet peeves as a reviewer.  One in particular has been frustrating me most of late, and since it strikes at the heart of one of the most difficult yet simple aspects of science, I thought that I might write about it here.

Ultimately, all of the scientific method boils down to one simple question: "How do you know?"  This is the core of every scientific paper, and when I do not find it, I am guaranteed to recommend rejection rather strongly.  Recently I've had a whole string of papers to review that fail in this regard, and it frustrates me to the point where I have changed the way in which I read papers I am reviewing.  When I begin to review a paper these days, I start by looking at the title and abstract, then immediately flip to the end of the paper to see if there are any results.  Whether it be real-world evidence, lab experiments, simulations, or theorems, I want to know that the authors have delivered something that gives substance to the discussions and assertions presented in the rest of the paper.  If so, then I will read the paper with much more enthusiasm and give it a careful review; if not, then I will skim quickly to make sure that I have not missed anything fundamental, but with the pre-judgement that the paper is almost certainly going to fail and be rejected.  Even a review paper would not completely escape this judgement of mine: after all, there should be some synthesis that brings all of the different pieces together and differentiates a good review paper from a mere keyword search.

So far, it's simple.  You can't claim to know something if you don't present some sort of information to justify that knowledge.  Where it gets difficult is that justifying knowledge is often quite a different matter than people---even scientists---seem to think it is.  For example, people often present a number in isolation as though it is meaningful, without giving any context to compare it to.  If I tell you a current car gets 60 miles/gallon, that's a meaningful number, because you probably know what other cars tend to get; if I told you that a certain type of car in the 1930s got 12 miles/gallon, however, would that be a lot or a little by comparison?  I certainly don't know, and I get frustrated when people drop context-free numbers like this.

This applies not just to obscure corners of scientific inquiry, but to our ordinary lives as well.  News articles, for example, are full of assertions based on numbers without comparison, as are the labels on the foods we eat.  Is $10 million for a bridge a crazy boondoggle or remarkably efficient?  Does "low fat" just mean more sugar and salt, or is the food actually made in a healthier way?

At the same time, this doesn't mean one should fetishize any particular classical mode of experimental design or demand "absolute proof" of anything.  There are a lot of ways to lay one's hands on information, and most of them do not need to be double-blind randomized controlled trials.  My own scientific work, for example, almost never involves blind trials, because I work with strong signals and can use computer programs to ensure regularity in how each piece of data is treated.

But dammit, at least give me something to support your castles in the air!
