Damn you reviewer #3! And #1 and #2 ...


Charles Greenberg made the following comment on my previous post comparing the open science movement to the open-source software movement and Wikipedia:

“put everything out there and let the record correct itself” in an age of search engine optimization and offshore pay-per-click opportunities is really not a well thought out proposition. All the places where you can see alt-metric assessment of “quality” continue to rely upon a first pass of editorial oversight and basic peer review. This co-exists with the explosion of dirt cheap and even predatory publishing that increases the quantity of the record. Where’s the incentive to correct the record? Perhaps providing new vehicles of post-peer review that in themselves offer academic status or credit, rather than mechanically derived metrics that offer something, yet nothing explicit. Faculty of 1000 is (was) an attempt to do this, yet do we end up with a credible record? Does anyone get good or useful credit for being a Faculty of 1000 reviewer?

Perhaps the language "let the record correct itself" was a bit flippant. I'm not advocating a laissez-faire approach to peer review. I'm simply saying that with publication as prevalent as it is, the ability to publish is no longer the bottleneck it once was, so bad papers will get published. The question now is how we sort the bad papers from the good. Altmetrics and open review aren't just possible now, they're necessary, because any other method of assessing quality depends on trusting decisions that are made in secret.

I just got reviews back on my last paper, and they are terrible. Of the three reviewers, two hated the paper and the third thought it was OK. Since I've left my postdoc lab, the actual content of the paper is not going to change much.

And you know what? Nobody doubts that it will eventually be published: not the editor, not my former boss, who has done this sort of thing many times before, and probably not the reviewers themselves. And not me. It will go to a different journal, a "lesser journal", in more or less its current form, after we spend days wrangling with the text to see what we can keep and what we have to ditch. Since it will be published anyway, this process is a stupid waste of time. It's not a great piece of work: there are things we could have done if we'd had more time, but it contains data and insights that might be useful. It should be published, or the time I spent working on it was just wasted. We delivered the best we could with the time and resources we had.

And now we have to spend more time on something that is already done, and that will be published anyway, just to make sure it gets the right number of points so the people keeping score are satisfied. There is literally no other purpose to this exercise.