Recently, the peer review process in academic publishing has come under increased scrutiny. While many issues have been raised, one ongoing source of complaint among colleagues is the amount of time the process can take and the lack of publicly accessible information about journal turnaround times. As a first step towards addressing this lack of information in the disciplines of political science and international relations, Justin Greenwood has established reviewmyreview.eu, a blog that compiles journals' turnaround times for a first decision and their acceptance rates.
As someone who is the co-editor of Politics, an associate editor of Critical Studies on Security, and a member of an editorial team for the Popular Culture and World Politics Book Series, I am in favour of greater transparency regarding turnaround times and acceptance rates. Access to better information is particularly important for early career researchers who are under considerable time pressure to have publications in hand when entering the job market. I also hope that being aware of this data might improve the performance of editorial teams and peer reviewers. So, for me, the principle is generally sound.
At the same time, I do have some reservations about how this information is being interpreted by colleagues and how it is shaping perceptions of the peer review process.
First, means are not particularly good at capturing the full spectrum of experience across a process. Beyond the distorting influence of outliers at either end, a focus on the mean, as compared to, say, a disaggregation of the review process with turnaround targets at each stage, does not tell people much about how the review process is actually operating. Are there desk rejections, and what percentage of submissions are rejected in this way? How many reviewers are selected to review each submission? What percentage of submissions are accepted, rejected, or given a revise and resubmit through the peer review process? Who makes the final decision on the basis of the reports?
Second, I fear that colleagues--as reviewees--are increasingly viewing the peer review process as a burden to be endured rather than as a process that can potentially make their work better. Yes, it doesn't always work out that way. I have had papers rejected where the feedback was completely unhelpful, and in some cases brutally so. I have also had papers accepted where the feedback was equally banal. But, in the vast majority of my experiences, regardless of the final outcome, peer review made my work better. Reducing the peer review process to mean turnaround times risks obscuring the benefits it can provide beyond the speed of a decision.
Third, editing a journal, like any other process, takes place within a context. Most journal editors are volunteers who take on the responsibility of shepherding a significant volume of submissions through the peer review process. For major disciplinary journals in political science and IR with high ISI rankings, the numbers can range from around 250 to 1,000 submissions a year. Some institutions may grant a reduction in workload for editing a journal, but many do not. Very few editors--if any--receive any direct compensation for undertaking this role. In some cases, a publisher and/or disciplinary association may provide resources to hire an editorial assistant to help with the day-to-day operations of the journal. Even then, this is most often a part-time position. While the under-resourced and volunteer nature of editing a journal doesn't excuse waiting nine months to acknowledge receipt of a submission--or other forms of unprofessional behaviour--it does help to explain why things may not always go as planned in terms of turnaround times. Other aspects of one's working life sometimes do interfere.
Finally, there is an assumption that editors have far more control over the review process than we often do. The primary consumer of time in any review process is finding appropriate reviewers and receiving their reviews. This is not exactly a secret. There are good reasons why it can take time to find suitable reviewers (e.g., difficulties in securing the services of specialists who are able to review within the proposed deadline). There are also good reasons why it may take a few weeks for reviewers to review; as unpaid volunteers, they too have other commitments and responsibilities. And, despite what some colleagues seem to think, journal editors are not passive when it comes to enlisting reviewers. If a reviewer does not respond promptly to a request to review, we do not sit around for two months waiting for a response. What slows down the process more than anything else are the following two scenarios. The first is when a reviewer provides a review--often one or two lines--that is completely unusable because the judgement offered is not grounded in the text itself. The second is when a reviewer agrees to review a submission, a deadline is agreed, the reviewer never delivers, and then does not respond to any further attempts at communication. In both of these instances, editors can find themselves, through no fault of their own, back at square one in the review process after a considerable amount of time has passed.
So, in sum, the temporal dynamics of peer review involve several variables that are not always under the direct control of journal editors. There are things that editors can do to mitigate some of the risks when given the proper resources to do so, but there are also problems that arise to which one can only react. When things do go wrong, it is important to stay in communication with authors.
Hopefully, increased transparency about all aspects of peer review will help to improve the process. For colleagues who would like to see improved turnaround times for journals, taking heed of the best practices for peer review is a good place to start.
Photo credit: Alan Cleaver