What is #reviewdouchery? The short answer: #reviewdouchery is the collection of comments and habits of reviewers that we love to hate. A few random examples (not in order of douchiness):

1. “It would be ‘nice’ if the authors would do these 25 (very expensive and unnecessary to the thesis) experiments.”

I mean, WTF, people. I don’t give a rat’s ass about what a reviewer thinks would be “nice.” What I do care about is whether or not a given experiment is essential to proving or disproving the hypothesis being addressed.

2. Three-page, thirty-point reviews accompanying a decision to reject.

Double WTF. If you think a paper should be rejected, a skillful reviewer shouldn’t need three pages and thirty detailed points to justify that decision. You should be able to find the fatal flaw and lay it out in a single brief paragraph. Sometimes I think we have evolved reviewing into proving to the authors, the other reviewers, the editors, and ourselves that we really ARE smart. And well, that’s just messed up, as my 15-year-old would say.

3. Asking for the next obvious experiment… the one that might make Figure 14.

Data inflation, people. Do we really think the authors, who are living and breathing this work, didn’t think of that? How much data do we REALLY need in a single paper? Do I need to say more? Not a substantive comment.

4. “Why don’t you JUST repeat this experiment in elephants!”

I mean, the use of the word “just,” coupled with something that is REALLY, REALLY difficult to do, is, well, “just” kind of frustrating.

5. Picky comments about terminology that are actually incorrect.

K. If you are going to be type A about terminology, at least get it right. No one likes a know-it-all, and know-it-alls who reek of authority but don’t know shit from Shinola are just kind of bothersome… albeit easy to rebut.

I’m sure I’ve done many of these myself… please add your particular favorite bit of #reviewdouchery in the comments…


4 thoughts on “#Reviewdouchery”

  1. Here’s one: a rejection, plus five pages telling us how everything we did is *utterly wrong* and that we should instead do it the way the reviewer did it back in the 1970s. Because it is just so very trivial. Of course, this comes with no reference to any literature that would support the claim, and no lit search produces anything remotely similar.

    Or, another one, a reviewer suggesting we’re missing literature, then listing 7 multiple-author papers we absolutely have to cite in order to have the paper accepted, which 1) are not related to our work but 2) all have exactly one author in common. Now, I wonder, why would they do this… /sarcasm off

  2. Conversely, I’ve seen many cases of minimal comment (as if the reviewer couldn’t be bothered) accompanying either an accept or a reject. Accepts were gladly, well, accepted, while rejects provoked much anger. Just what was the reason for rejection? It sure as hell wasn’t in the comments!

  3. I usually give longer reviews to manuscripts for which I recommend rejection. The decision is, ultimately, the editor’s. While in most cases I am likely just wasting my effort and infuriating the authors, I believe that a rejection must be much better supported than an acceptance. Even if I had a single killer argument, reviewing is not just a filter. The goal is to put our effort into making the manuscript better. The authors may choose to ignore my suggestions, but they may also appreciate that someone took the time and effort to read their manuscript carefully and offer them.
