What is wrong with scientific publishing and how to fix it.
Randy Schekman’s recent decision to boycott the so-called glam mags Cell, Nature & Science (CNS) made me realize that I have never expressed on this blog my view of the problems with scientific publishing. Here it comes. First, one consideration: there are two distinct problems that have nothing to do with each other. One is the Open Access (#OA) issue, the other is the procedural issue. My solution addresses both, but for the sake of argument let’s start with the latter:
1. Peer review is not working fairly. A semi-random selection of two or three reviewers is too unrepresentative of an entire field, and more often than not papers will be poorly reviewed.
2. As a result of 1, the same journal will end up publishing papers ranging anywhere on the scale of quality, from fantastic to disastrous.
3. As a result of 2, the Impact Factor (IF) of the journal cannot be used as a proxy for the quality of single papers, nor even of the average paper, because the distribution is too skewed (the famous 80/20 problem).
4. As a result of 3, there is no statistical correlation between a paper being published in a high-IF journal and its actual value, and this is a problem because it is somehow commonly accepted that there should be one.
5. As a result of 4, careers and grants are built on a faulty proxy.
6. As a result of 5, postdocs tend to wait years in the lab hoping to get that one CNS paper that will help them get the job – and, obviously, for the same reason there are great incentives to publish fraudulent data.
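The skewness claim above can be illustrated with a quick simulation (a minimal sketch: the lognormal parameters below are invented for illustration, not fitted to real citation data). It shows how an IF-like average of a heavy-tailed citation distribution sits well above what the typical paper in the journal actually gets, with a small fraction of papers capturing most of the citations:

```python
import random
import statistics

random.seed(42)

# Simulate per-paper citation counts for one journal with a heavy-tailed
# (lognormal) distribution, a common stand-in for citation data.
citations = [int(random.lognormvariate(1.5, 1.2)) for _ in range(1000)]

mean = statistics.mean(citations)      # what an IF-like average reflects
median = statistics.median(citations)  # what a typical paper actually gets

# Share of all citations captured by the top 20% most-cited papers
top20 = sorted(citations, reverse=True)[: len(citations) // 5]
share = sum(top20) / sum(citations)

print(f"mean={mean:.1f} median={median:.1f} top-20% share={share:.0%}")
```

With these (assumed) parameters the mean lands far above the median and the top fifth of papers collects well over half of all citations, which is why a journal-level average says little about any individual paper.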
Ok, so let’s assume tomorrow morning CNS cease to exist. They close down. How does this solve the issue? It doesn’t.
CNS are not damaging science. They are simply sitting at the very top of the ladder of scientific publishing, and they receive more attention than any other journal. Remove them from the top and we have just moved the problem a bit down the ladder, to whatever journal comes next. Some people criticise CNS for being great pushers of the IF system; “to start”, they say, “CNS could help by making public the citation data of individual papers and not just the IF as a journal aggregate”. This would be an interesting move (scientists love all kinds of data) but meaningless as a solution to the problem. Paper quality would still be skewed, and knowing the citation count of a single paper will not necessarily be representative of its value, because bad papers, fake papers and sexy papers can end up being extremely cited anyway. Also, it takes time for papers to accumulate citations.
So what is the solution? The solution is to abolish pre-publication peer review as we know it. Just publish anything, and get an optional peer review as a service (PRaaS) if you think your colleagues may help you get a better paper out. This could create peer-reviewing companies on the free market, and scientists would get paid for professional peer review. When you are ready to submit, you send the paper to a public repository. The repository has no editing service and no printing fees. It’s free and Open Access because costs are minimal anyway. What happens to journals in this model? They still exist, but their role is now different. Nature, Cell and Science no longer deal with the editorial process. Instead, they constantly look through the pool of papers published in the repository and they pick and highlight the ones they think are the best, similarly to how a music or videogame magazine would pick and review for you the latest releases on the market. They still do their video abstracts, their podcasts, their interviews with the authors, their news and views. They still sell copies, but ONLY if they factually add value.
This system solves so many problems:
- The random lottery of the peer-review process is gone.
- Nobody will tell you how you have to format your paper or what words you can use in your discussion.
- Everything that gets published is automatically OA.
- There are no publication fees.
- There is still an opportunity for making money, only this time in a fair way: scientists make money when they enrol for peer review as a service; journals still continue to exist.
- Only genuinely useful journals continue to exist: all those thousands of parasitic journals that exist now just because they are easy to publish with will perish.
Now, this is my solution. Comments are welcome.
The alternative is: I can publish 46 papers in CNS, win the Nobel Prize using those papers, become editor of a journal (eLife) that does the very same thing that CNS do, and then go out in the Guardian and deliver my j’accuse.
This proposal is almost identical to one I have had in mind for a few years. I love it.
However, while it solves *a lot* of problems, it would not solve the fashionable-journals topic: your publication record would still be evaluated on the basis of which journals picked up your papers (or none); on citation metrics, etc.
The problem is the quality metrics. For example in an article that I wrote for Wired Italy (still unpublished) I proposed that, for experimental studies, *statistical power* and *reproducibility* should be additional metrics to add to the evaluation. What is needed is some evaluation of *quality*.
It is an interesting proposal, but here comes a problem: those scientists working as professional peer reviewers would become extremely specialized in their field – and their field wouldn’t be plasma physics, or genomics, or polymer chemistry: it would be peer reviewing. Meaning they could well get very skilled at spotting fraud, but they probably wouldn’t be able to easily keep up to date with the actual science that they’re trying to judge (not that most peer reviewers now have such a good knowledge of the matters they review, though).
There also is an issue with re-educating the scientific community to take everything that they see in this hypothetical repository with way more than a grain of salt. Of course, knowing that there was no peer review at all should keep you on your toes, as opposed to the current “it’s been published, it MUST be good” state of mind. But there’s a risk that the old habits would linger on, with papers that now would be even less controlled than the old ones were.
>>>There also is an issue with re-educating the scientific community to take everything that they see in this hypothetical repository with way more than a grain of salt.<<<
Physicists do that already with arXiv, as far as I know.
A public repository seems like a good idea and solves the OA problem. Peer-review companies on the “free market” (let’s assume there is such a thing) would not solve the procedural problem. The danger is that a select few peer-review companies emerge claiming sovereignty, similar to the CNS+ journals – and they would be embraced, because we rely on filters to help us through the haystack that a public repository would be. One step in the right direction for the procedural problem: why not make peer reviewing a professional activity that every scientist engages in and can build a public profile on (as with public engagement or papers), by stating reviewers publicly on the article itself – thus making them accountable, but also giving their time and effort some value?
Love the points 1-6 above outlining the problem. Most succinct description I’ve seen. Ever!
The solution is pretty much how I see it and have been promoting it for a few years now, so no objections there 🙂
I think this is a good proposal and introduces, for a change, the idea of companies (peer review, publishing) competing for scientists’ services (specialized review, knowledge).
However, I do not see how journals could survive at all in this scenario. What added value could they possibly add? A fancy font? Design? Citation tracking (not really)? Would a paper be ‘off the market’ once it’s published by one journal? Would it be first come, first served, or could several journals enter into negotiation with the authors to decide where it’s published? (This is not very important to clear up right now, but it is a maliciously entertaining thought :)
More importantly, how could journals be convinced/arm-twisted into agreeing with this?
I think this is the perfect solution for, alas, perhaps too perfect a world…
Journals’ main added value would be to actually find the papers that they think would be interesting for their readers and to feature them. Some of them already have a “hidden gem” section doing exactly this.
But then wouldn’t we just be switching from “I got a paper accepted in Cell” to “I got a paper highlighted in Cell”?
I think this would certainly overcome the peer review problem, but not career assessment.
I do like the proposal though !
Yep, exactly. The proposal is interesting, but it does not fix the problem – it fixes a different, unrelated set of problems.
I would not be so dismissive so quickly. Taking peer review out of the hands of the publishers would certainly introduce much sanity into how the value of a paper is perceived.
And it would allow everybody to pick and highlight papers – Scientific Societies, Patients Organizations, etc (see Jan Jensen’s comment above).
And anyway solving peer review alone would be worthwhile.
It wasn’t meant to be dismissive: I wrote below that I actually love the proposal and I thought on the very same lines some time ago. I agree solving peer review alone would be a huge improvement, and that’s why I like it.
But still, the value of a paper would be linked to which journals/organizations/blogs feature it and which do not. Which perhaps is better – after all, we would have a more collective assessment. But perhaps it is even worse – it would shift value from citations in other papers (thus, at least hopefully, scientific impact of some sort) to fashionableness for journals’ readerships.
Therefore the original problem in the post -“bling bling” vs scientific value- is not solved.
Sorry if I attributed too much meaning!
I agree that this proposal would not eliminate the perceived value of a paper. But it would certainly democratize the way value is assigned.
There are other points to this issue:
– “perceived value” is probably something human mind cannot get rid of.
– citations can still be used as a proxy for quality/impact.
– “actual value” is only available in retrospect (even beyond citations).
– we need to ‘rank’ science anyway to allocate funds.
It is difficult to say if it would democratize it. It would make peer review better, so it would allow a better assignment of value, overall.
However, again: ranking would be *still* based on *which* and *how many* journals pick the paper up. If Nature picks a paper, it will be different than if a blog on insect brains does and Nature doesn’t. We’re still ranking based on publishers’ prestige.
The question is, instead, perhaps: do we need journals *at all*? Or could we just have a single database, like F1000, where all literature is poured and with transparent peer review?
It will clash, however, with the fact that commentary such as that provided by Nature and Science is indeed good added value (there’s a reason I had a personal Nature subscription, after all: its features, commentaries, news etc. are often excellent reads). And as someone moonlighting as a science journalist, I *love* that there is a space for such commentary…
I think this is right on the mark; I’ve discussed a similar set-up with colleagues in the past. I think the issue is to get journals out of the way of publication rather than dispense with them entirely: the way they run a ‘submit, reject, submit elsewhere’ process is what causes the most delay and lost productivity. Having them simply ‘highlight’ things that get published means they stop doing that.
Interesting proposal although the only way it would work would be if the business model allows journals to make more money than they do now. Love your last paragraph.
Much of what you suggest corresponds to the publishing model of F1000Research, of which Randy Schekman is on the Advisory Panel (and, disclaimer, I’m the Outreach Director).
We publish (open access) before peer review, but after an in-house editorial check.
Formal, invited, peer review is entirely transparent and reports and referee names are included with each paper. We also show views/downloads and other metrics with each individual paper, so articles can be judged on their own merit.
Articles that pass peer review are then indexed in Embase/PubMed/Scopus. The citation for each paper also includes the referee/indexing status, so you can see if a paper has passed peer review or not.
As incentive for referees, they get a significant discount on their own submission, and referee reports are citable so they can take credit for their work.
It’s not entirely the same as you suggest – we do charge article processing charges, because we do operate a formal peer review model with invited expert referees. (And it’s that model that allows the articles that pass review to be indexed in external databases, where they’re easy to find.)
Which is precisely the reason I support F1000 Research and will send our next manuscript (maybe even the next two) there.
Another advantage of your solution, is that such “overlay” journals can be set up by anyone including scientists. We don’t need commercial publishers to highlight interesting papers. Example: http://proteinsandwavefunctions.blogspot.dk/2012/02/computational-chemistry-highlights-new.html
I prefer your system over the current one too but like to think that we can improve things even further if we do away with the idea that publications have to come in stand-alone 10-page articles following an IMRAD scheme.
What if we would start instead with openly licensed reviews (PubMed Central has on the order of 20k of these), making them editable and then updating them on the go as we move along our respective research cycles?
Those updates would be much shorter than the current 10-pager and thus more easy to write and more easy to read and review (even in cases when multiple topics are affected).
With a policy that any claims in such an editable article have to be linked to data or some other form of evidence (could be a simulation or a thought experiment, for instance, as detailed in your lab notebook – which is public and subject to long-term archiving), quality issues should at least not get worse than they are now.
Peer reviewers’ ratings of individual updates would feed into altmetrics, and journals could make a living by writing news articles about individual diffs (or sets thereof), just like news reporters in other fields do, or they could dig deeper by following the links to the lab notes, data, code and other auxiliary materials.
For grant and promotion committees, the task of actually reading all the most relevant updates of the candidates would become feasible, thereby lessening the pressure of using proxies.
If an article gets too complex, new articles on subtopics thereof can be created, and if there is no consensus on certain updates, one could just fork some version and work on it independently (Wegener and Einstein might have chosen this route) until things clear up.
Further thoughts along these lines are laid out in https://en.wikiversity.org/wiki/User:OpenScientist/Open_grant_writing/Beethoven%27s_open_repository_of_research , and we are still working on the components for a test run of such a system.
Hi, just a few quick remarks on your solutions:

- “The random lottery of the peer-review process is gone.”
  -> The editorial process at NCS is far from random.
- “Nobody will tell you how you have to format your paper or what words you can use in your discussion.”
  -> For format, fine, and online is the way to go, but you need rules or otherwise it will be a mess more often than not.
  -> Referees, not editors, tell you what words to use; editors make you do it. And no, you can’t say whatever you want when you publish scientific work.
- “Everything that gets published is automatically OA.”
  -> Agreed, making information available to all sounds good. Still, let’s not forget that someone needs to pay for the publishing process – be it authors, readers or non-profit publishers. Nothing is free in this world, and many things happen between submission and publication, with many people involved.
- “There are no publication fees.”
  -> See above. Also, this is misleading and probably needs to be corrected. Please note that at many high-profile journals there are no publication fees. This is inherent to the subscription-based model… In general, you only pay fees for color pages.
- “There is still an opportunity for making money, only this time in a fair way: scientists make money when they enrol for peer review as a service; journals still continue to exist.”
  -> Seems fair. Will this impact free access? How much is
- “Only genuinely useful journals continue to exist: all those thousands of parasitic journals that exist now just because they are easy to publish with will perish.”
  -> Time will expunge the bad seeds. It is happening now, and it happened in the past, even before the OA era.
In physics, most papers are posted on the preprint server arXiv (http://arxiv.org) and then submitted to a journal for peer review. Hence, your solution has essentially been working for years: arXiv is the repository and the journal is the peer-review service.
By the way, peer review works more often than not, in particular in journals that focus on quality.
And there actually *is* statistical correlation between a paper published in a high IF journal and its actual value, provided that cites are a measure of value.
I don’t understand what the difference in this system is. In this model, if I understand correctly, all journals have the same access to the paper pool, which helps prevent bad papers being published/highlighted (not exactly, though: parasitic journals can still fish them out). But what actually changes? Optional (!) commercial peer review would be an even bigger joke than the current system, and it would allow twice as many crazy papers to appear in the repository, making it impossible for normal people to fish out the good ones by themselves. So we will still look for some criterion – like who highlighted the paper (a.k.a. who published it). Apparently, in your system the “best” journal will be the one with the most aggressive editor-journalists, who can immediately find sexy papers and boost the credibility of the journal, and we will be back at the beginning, only now we get access for free. Let’s be realistic: a journal’s IF has its meaning, and no one reads a publication from the IF bottom without keeping in mind that something may be very wrong with it. And what a mess happens when (it’s not even if, it will happen) several journals highlight the same paper – it’s like Facebook for scientists (“my status has so many likes”), making journals incredibly similar and destroying any credibility value (because parasitic journals will highlight everything, including outstanding papers, especially those from well-known scientists who would otherwise choose where to publish). So renaming publishing to highlighting is not enough, if we are going to keep journals at all.