
Challenges of open post-publication review

Commentary Created on 09 Jun 2015 by Alen Piljić



Open post-publication review is seen by many as a critical component of the open science concept. But how do we transition from the current system to the next?

Nowadays, most publishers use closed, anonymous peer review to filter good research from bad and decide what to publish. But the current system has failed, and many have already reflected on this issue in articles and blog posts. New forms of open post-publication review are now being introduced by publishers. In this article, I will discuss one of these systems, introduced by F1000 as part of their F1000 Research platform.

Recently, I attended a lecture at the European Molecular Biology Laboratory (EMBL) in Heidelberg, where a member of F1000 staff presented their review system. It is worth distinguishing three models: the most common one, which is closed, anonymous and invitation based; the most open post-publication model, such as the one featured here at lifescience.net (LSN), which is open, non-anonymous and not invitation based; and the F1000 model, which is open, non-anonymous and invited.

It has been shown previously that scientists are less likely to engage in reviewing scientific articles (and take longer to submit their reviews) if their name is to be revealed rather than hidden (1). Such behavior is certainly in line with our own experience trying to promote the post-publication review module of LSN.

In reality, scientists need incentives to engage and review papers, and a review system implemented without incentives is simply not going to work. F1000 solved this problem by offering referees discounts on publishing services. In other words, scientists who review F1000 articles pay much less to publish their own work. It is a win-win situation: the scientists are grateful because they get more work published, and the publisher is glad because the positive feedback loop generates more volume, revenue and higher profits.

A problem arises, however, when you post a negative review. As a scientist, you certainly did your job well, but the authors of the publication won't appreciate it, and neither will the publisher, especially if you post negative reviews repeatedly. The authors might decide to publish their next work elsewhere (the publisher loses money), and when another author publishes at F1000, you (the negative reviewer) are less likely to be invited to review again. So you lose your incentive as well, and it will now cost you more to publish with F1000. Suddenly you realize you would have been better off posting a milder review, keeping those invitations coming in and continuing to publish cheaply.

The F1000 post-publication system in its current form is badly implemented. It creates a positive bias in reviews and, while it might be in the interest of the publisher, it certainly can't be in the interest of the scientific community to have poorly reviewed knowledge pumped into the system.

Further, there is no reason a post-publication review system shouldn't be open to all scientists, including those who have never published an article. Why shouldn't PhD students who have worked four or five long years on a project, and are just about to publish their work, be considered qualified to review research related to their topics? If a system is created in which the reviews themselves can be scrutinized, there is no reason anyone should be barred from posting a review.

This is why the scientific community shouldn't look to the F1000 solution as a new standard or a future option (which was the message I felt was being communicated to the audience during the lecture I attended). Instead, novel solutions should be considered that are provided by organisations that do not publish themselves and are therefore not subject to a direct conflict of interest. The system should be fully open (not invitation based) to all researchers, include incentives provided by a third party (such as funding agencies), and involve the creation of a larger reputation system that measures scientific output beyond research publications.

At lifescience.net we have tried to make a leap in this direction. However, this is just an intermediate step toward a new kind of digital infrastructure that would allow sharing and review of knowledge in a web-native format. In the articles to come, we will present and discuss some of those concepts.

Sign up with lifescience.net and follow our organisation to get notified when we post new content about science policy, open science and similar issues.

Do you have comments or questions? Please post them in the comment box below. You can also share your own thoughts about any life science related issues through our News and Views module. Reviews of institutions, companies or academic research groups can also be shared through the Institution reviews module.

Cover image: Got Credit (Flickr)

References

1. http://dx.doi.org/10.1136/bmj.318.7175.23


Comments

  • Alen Piljić Tuesday, 16 June 2015 - 11:53 UTC

    Thanks Hernan :)

  • Hernán Biava Thursday, 11 June 2015 - 14:22 UTC

    Excellent article. There's no doubt that a post-publication review system is absolutely required within the scientific community.
