Cell, Science, Nature… and… eLife?

My better half returned from his travels yesterday with some interesting nuggets of information. Among those was this:

eLife

Wow. Consider me informed. As the new guy in the OA slate of journals, eLife wants your:

“outstanding research in the life sciences and biomedicine, which ranges from the most fundamental and theoretical work, through to translational, applied, and clinical research.”

HHMI, Max Planck and Wellcome are funding this effort, publishing in eLife is free for now, and content is open access – an improvement over S/C/N. The editorial leadership is very strong, led by EIC Randy Schekman, even if a little XY heavy (only 5/21 senior editors are women – just sayin’). The editors, all working scientists, appear to look at every submitted manuscript and determine its suitability for peer review – as opposed to those other single-word journals, which use professional editors who are not working scientists to determine the impact of your work (for starters). Haven’t we all complained about that at one time or another?

The emphasis in eLife’s peer review seems to be on rigor, brevity, and a generally less painful review/revision process… and they have a nifty little video (sorry, WP won’t let me embed) about how it works:

eLife: Changing the review process from eLife on Vimeo.

Oh how I have longed for concise guidance on revisions, for limiting revisions to those that are essential to the point of the paper, and for limited rounds of review. Is it too good to be true that you could get a paper into a very selective journal in less than the two years it takes you to do three pages of additional experiments that may or may not be relevant to the conclusion of the paper?

Decisions and responses for manuscripts accepted to eLife are published alongside the article (with the author’s OK). And… they keep track of the mean time from submission to acceptance on their homepage… which is info that many journals don’t share (Cell, for example, publishes its mean time from submission to first decision as 21 days… but you could still go through a year of painful revisions after that).

Could eLife be the open access answer to the glamormagz? Maybe. They certainly set themselves up that way. I only learned about this yesterday (and judging by the papers published in eLife from my field – only 23 so far – not many of my colleagues know about it either)… but I’m interested to see how this journal will evolve. I’m reading this little gem with interest right now.

#Reviewdouchery

What is #reviewdouchery? The short answer is that reviewdouchery is the set of comments and habits of reviewers that we love to hate. A few random examples (not in order of douchiness):

1. “It would be ‘nice’ if the authors would do these 25 (very expensive and unnecessary to the thesis) experiments.”

I mean, WTF, people. I don’t give a rat’s ass about what a reviewer thinks would be “nice” – what I do care about is whether a given experiment is essential to proving or disproving the hypothesis being addressed.

2. Three-page, 30-point reviews accompanying a decision to reject.

Double WTF. If you think a paper should be rejected, you shouldn’t need three pages and 30 detailed points to justify that decision; a skillful reviewer should be able to find the fatal flaw and lay it out in a single brief paragraph. Sometimes I think we have evolved reviewing into proving to the author, the other reviewers, the editors and ourselves that we really ARE smart. And well, that’s just messed up, as my 15-year-old would say.

3. Asking for the next obvious experiment… that might make FIGURE 14.

Data inflation people, I mean do we really think the authors who are living and breathing that work didn’t think of that?? How much data do we REALLY need in a single paper? Do I need to say more. Not a substantive comment.

4. “Why don’t you JUST repeat this experiment in elephants?”

I mean, the use of the word “just,” coupled with a request for something that is REALLY REALLY difficult to do, is, well, ‘just’ kind of frustrating.

5. Picky comments about terminology that are actually incorrect.

K. If you are going to be type A about terminology, at least get it right. No one likes a know-it-all, and know-it-alls who reek of authority but don’t know shit from shinola are just kind of bothersome… albeit easy to rebut.

I’m sure I’ve done many of these myself… please add your particular favorite bit of #reviewdouchery in the comments…

Did your student write the paper, or did you?

Hmmm. Not sure I’m happy with that title; my students probably could have done better.

I was talking with a colleague about paper writing the other day. Comparing notes, we had completely different experiences of how our papers got written as graduate students. I wrote mine. Well, that is a lie. I wrote the complete first draft. That first draft came back to me absolutely bathed in red pen. Yeah, that’s right, PEN. These were the days before you could just hit ‘accept all’ on the track changes function and have a nicely edited draft with the touch of a few buttons. I alternately hated (not really, only figuratively of course), and loved, and hated (only figuratively), and loved my mentor as rounds of drafts were turned in and handed back, sometimes with words edited back to read exactly as they had been written in some earlier version. In the end I admired my mentor’s technique with this whole thing, because he/she made paper writing a very valuable learning experience for me, the trainee.

I try to torture teach my students this way now, and I see both the strengths and weaknesses of this approach. The biggest strength I have already mentioned: the learning experience of becoming a better writer and learning the thought process of putting data – a paper, a story really – together. To build a case, to make an argument on paper. The weakness of this approach is that it can take what feels like forever and a day, as green students cobble together their idea of what constitutes something publishable (with anxious advisers prodding them along). The difficulty, time and effort involved for the adviser depend on the language, writing ability, and background knowledge of the student. When English is the second language, having the student write the first draft can mean that the mentor re-writes practically every.single.word. Re-writing at this scale can be incredibly labor intensive for the mentor.

My colleague, on the other hand, never had the experience of putting a paper together and doing the crazy amounts of editing during their graduate training. Oh, they may have lightly edited some draft, but the bulk of the text was written at the outset by the mentor themselves. I’m sure that this approach ultimately brings the paper to submission status faster, and students may still learn what pieces of data are needed to put a paper together, but I bet a lot of the learning of scientific writing is lost when papers are written this way. There are clear benefits to the mentor, the student, the lab and the project in being able to publish quickly. What happens, though, when the student has to put together a thesis? What happens when they move on to their postdoc and haven’t yet written a whole manuscript from start to finish?

And finally, I wonder how the career stage of the PI plays into this… are more seasoned PIs more secure ($s, papers) and not as needy of quick pubs, and thus able to let newbie paper writers flounder a little? Does the need for as many pubs as quickly as possible make early-career PIs more prone to do the paper writing for their trainees? Or are these factors irrelevant… are we bound to repeat what our mentors trained us to do, and do it like they did it? That seems to be how I do it… then again, I think I had a stellar mentor in this respect.

I fell off my horse… say what?? “Nature One”??

Via Martin Fenner and Bjorn Brembs comes this news… a press release from Nature Publishing Group announcing “Scientific Reports”:

An online, open access, peer-reviewed publication, Scientific Reports will publish research covering the natural sciences – biology, chemistry, earth sciences and physics. Scientific Reports is accepting submissions from today, and will publish its first articles in June 2011. More information is available on the Scientific Reports website (www.nature.com/scientificreports).

Well, I’ll be a monkey’s uncle. Is this the same Nature Publishing Group that last year crapped all over PLoS One in an editorial by Declan Butler provocatively entitled ‘PLoS One Stays Afloat with Bulk Publishing’, echoing ugly whispers in the scientific establishment that PLoS One would be a dumping ground for data that couldn’t get published anywhere else for a myriad of reasons? Guess the release of PLoS One’s impact factor (somewhere above 4… 4.4 or 4.3, I think I read) in early 2010 probably made some people re-think their assumption that it was a final resting place where people would pay to deposit their trash data. NPG’s new model appears to be nearly identical to that of PLoS One, publication fee and all. (BTW, I now hear grad students going around saying stuff in the vein of ‘we shouldn’t publish in Journal XYZ… it’s OPEN ACCESS.’ They picked that drivel up somewhere, and are repeating it without even knowing what open access means or why they think it is bad.)

Anyway, about the Nature “Scientific Reports” thing: I’d say this is not just a good indication that PLoS One is doing something right (a la Martin Fenner) – it is an indication that PLoS, and PLoS One* in particular, have changed the paradigm of scientific publishing in a rather radical way. I hope the champagne corks are popping in San Francisco this Friday. Congratulations, PLoS One, and welcome, NPG, to the future of scientific publishing – just follow PLoS One and they will show you the way.

*Disclaimer: I’ve been an out-of-the-closet lover of the PLoS One publishing model for several years: I have papers there, I know some editorial board members, and I’ve been an unabashed groupie of Pete Binfield since I saw him speak at scio10.

Cranky Reviewers

It is always that third reviewer (well, actually, in my case it was the second reviewer). The one that can just kill ya.

You know the one I mean. The one that said you did the assay ALL wrong – the assay you’ve been doing for 20 years, for which you can produce at least 15 references from top labs in your field supporting the method you used as perfectly correct. Uh huh. Or the one that uses clearly condescending language, like… THANK GOD they decided to do XYZ (implying… at least one of the authors over there knows what they are doing!). Or how about the… you didn’t cite my work… disguised as ‘the authors should correct an egregious omission of the work referencing bla bla bla by famous scientist X. These references should be cited in the relevant section’. OK, sometimes that one is for real. Or better yet, you didn’t cite the biology I work on, even though it is only peripheral to the biology that you put in this paper. Or how about the reviewer that seems to have trouble integrating panel A with the controls in panel B, and keeps claiming that your image is an artifact of your technique, even though your experimental sample and your control sample use the same technique and the results have been quantified and are clearly statistically significantly different. Finally… there is the reviewer that complains endlessly about the poor grammar and spelling… in a review that is filled with spelling and grammatical errors. (And just so you know, I may have made any or all of these points at one time or another… although I hope that I did not.)

Name your favorite cranky reviewer stock review points.

Article-Level Metrics Debut at PLOS

I am SO into this – did anyone else notice it?? I just discovered a couple of days ago that PLOS now has metrics on all of its articles in all of its journals. When you pull up a given article, there are several tabs just below the title.

One of these tabs is the ‘metrics’ tab. If you click on it, it takes you to a page that shows the metrics – things like article views and downloads for that particular article. Here is an example from that article I posted on the other day. That article was just published, but you can also see metrics collected for older articles from before this feature appeared… like this article, for example. I love this feature because it reflects the actual readership of a given article better and more immediately than traditional, pre-electronic-media-age measures such as citation rate or total citation number could. And I’ve gotten quite used to looking at readership data in terms of hits and page views from running this blog and whatnot… so I’ve kind of got a feeling for this kind of data anyway.
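(For the data-minded: here’s a rough little sketch of how you might pull those view counts programmatically rather than clicking through the metrics tab. This is only my guess at the PLOS search API – the endpoint, the ‘counter_total_all’ field name, and the placeholder DOI are all assumptions on my part, so check PLOS’s own API documentation before trusting any of it.)

```python
# Rough sketch only: fetching article-level usage numbers from the PLOS search API.
# The endpoint, the 'counter_total_all' field, and the placeholder DOI are assumptions,
# not official documentation.
import json
import urllib.parse
import urllib.request


def plos_article_views(doi):
    """Return the total view count for a PLOS article, if the API exposes one."""
    params = urllib.parse.urlencode({
        "q": 'id:"{}"'.format(doi),
        "fl": "id,title,counter_total_all",  # assumed total-usage field
        "wt": "json",
    })
    url = "https://api.plos.org/search?" + params
    with urllib.request.urlopen(url) as resp:
        data = json.load(resp)
    docs = data.get("response", {}).get("docs", [])
    return docs[0].get("counter_total_all") if docs else None


if __name__ == "__main__":
    # Placeholder DOI for illustration -- substitute a real PLOS article DOI.
    print(plos_article_views("10.1371/journal.pone.0000000"))
```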

And see that ‘Related Content’ tab up at the top there, too. From that page you can quickly search for related articles, bookmark things in CiteULike (which I need to become more savvy with), AND LOOK FOR RELATED BLOG POSTS!!! How awesome is that!? Now you are immediately connected to related scientific literature, and to the immediate response to a given article in the blogosphere, with all the commentary that brings with it.

The only things that need work, in my humble opinion, are the ratings and comment tools. You have to log into PLOS to be able to use both, which I suppose is fine, but if you are like me, you have like a billion accounts here and there, and each journal or network with these features requires a new one – I find that cumbersome (though I realize there is probably a security reason for it). Still, whenever I look at an article in a PLOS journal, I make a concerted effort to check whether there are comments and what they say – and I’m always disappointed. The comment tool doesn’t seem to get used very much, and even the ratings tool I haven’t seen used very frequently.

I guess I’ve gotten accustomed to the kinds of honest and immediate conversations we have on this blog and on those that I read, and I learn so much from interacting with the wider audience that comes here. Wider discussion of the actual science that I do, or of related articles that I read, is still pretty much limited to once or twice a year at meetings (so infrequent), journal club/lab meeting (can be hit or miss, and the same audience every week!), email or phone with colleagues (slow, and one-on-one), in-person discussion with colleagues at my institution (also slow and one-on-one), or with DrMrA while brushing my teeth before bedtime (actually, we’ve got other stuff to talk about and can barely keep our eyes open at that time of day). The absence of blog-like discussion on the science issues that interest me just doesn’t feel right anymore.

Who should be an author?

Sometimes I like to write about history – i.e., things I’ve learned that you might be able to benefit from. Other times, I like to gather your collective opinions about one topic or another, because it helps me firm up my own. This is going to be one of those posts.

I always thought I had pretty firmly set up in my head the guidelines I follow for including someone as an author on a paper. Lately, circumstances have forced me to confront (again!) the fact that different people have different – and sometimes vastly, vastly different – ideas about what contributions merit authorship on a scientific paper. This has been an uncomfortable examination of my own standards and those of others.

Let’s just start with my own view.