A guest post by Ellen Finnie Duranceau, Program Manager, Scholarly Publishing and Licensing, Massachusetts Institute of Technology
In a recent blog post, prompted by a listserv thread, Joseph Esposito argues that
the better job Green OA does, the more it will be resisted [by publishers]. To keep Green OA programs going, they have to be imperfectly implemented.
In the real world, though, “perfect implementation” is about as likely as, well, perfect anything.
More importantly, I don’t think it’s feasible for a library to design a process that would allow it to know, on an ongoing basis and at a reasonable cost, whether the authors in any particular journal have implemented Green OA thoroughly enough that the library could afford to cancel its subscription.
Indeed, I can’t define a scenario that seems solid enough to even experiment with, let alone deploy, in a research library in the real world.
For this exercise, I’m leaving aside any broader goals of wider distribution of publicly funded research, etc., or any other philosophical commitment one might have to OA, and am just focusing on providing sufficient service to one’s own community. I’m being completely pragmatic.
The initial study to determine whether to cancel is cumbersome and expensive
First, we have the problem that a wide sampling from any given journal would be required, since author practices in self-archiving vary. This sampling would also have to be repeated regularly, and take in several sample years, since practices will vary over time.
Whoever performs this sampling would have to be trained in recognizing which version of a particular article is posted online, since presumably one wants the peer-reviewed version available to one’s faculty, researchers, and students. This would require, in many cases, comparing the manuscript with the version of record (which, please note, is only available to you if you subscribe).
After all the sampling is done and a spreadsheet created, one would have to calculate what percentage of the journal was openly available (and whether that percentage was acceptable; presumably it would have to be very high), and after what time period. This would not be an easy feat, as one needs the total number of articles published in order to make the comparison, and as far as I’m aware, that would involve manually tabulating the number of articles in each issue (again possibly through sampling).
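To make the scale of that bookkeeping concrete, here is a minimal sketch, in Python, of the arithmetic for a single journal in a single year. Every number, name, and the 95% threshold below are hypothetical, invented purely for illustration; the sketch also ignores the harder work of verifying article versions and repeating the sampling over time.

```python
# Hypothetical illustration only: estimating OA coverage for one journal from sampled issues.
from dataclasses import dataclass

@dataclass
class SampledIssue:
    total_articles: int          # counted by hand from the issue's table of contents
    oa_peer_reviewed_found: int  # self-archived copies confirmed to be the peer-reviewed version

def estimated_oa_coverage(samples: list[SampledIssue]) -> float:
    """Fraction of sampled articles for which an acceptable open copy was found."""
    total = sum(s.total_articles for s in samples)
    found = sum(s.oa_peer_reviewed_found for s in samples)
    return found / total if total else 0.0

# Invented numbers: three sampled issues from one publication year.
samples = [SampledIssue(24, 15), SampledIssue(30, 19), SampledIssue(27, 12)]
coverage = estimated_oa_coverage(samples)

# Any cancellation threshold would have to be set locally and kept very high.
THRESHOLD = 0.95
print(f"Estimated OA coverage: {coverage:.0%}; meets threshold: {coverage >= THRESHOLD}")
```

Even this toy version presumes someone has already done the hard part: hand-counting articles and confirming, copy by copy, which version is posted.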
Then this information would have to be used in conjunction with other important data such as usage level, faculty interest and feedback, cost, etc. Of course, this whole approach would only be responsible if one had buy-in from the community one is serving. That community would have to believe that this process is reasonable and that the end goal of replacing library journal subscriptions with reliance on authors’ self-archived articles is a good one.
The cumbersome, expensive survey would have to be repeated, year after year, and would get harder and harder to administer
If the decision were taken to cancel the journal, assuming here that the decision rested in significant part on the availability of OA manuscripts, then one would also have to have a cycle of returning to those titles to be sure a certain acceptable percentage was still available. This would be necessary because author practices vary, and there is no reason at all to assume that because a good percentage of a journal was OA one year, the same will be true the next. So continuous sampling would likely be required. We are now talking about a dramatic impact on staff resources, so some other work would need to be stopped or slowed. (And by the way, this assumes the cancellation is likely to free up funds, which, in our package-driven purchasing world, is not always the case.)
But let’s assume one does cancel. Then, if one wants to continue to sample post-cancellation (as would seem to be necessary), in many cases one would need the version of record to compare with, to be sure one is looking at the peer-reviewed version. Yet this version would not be available once the cancellation had taken place. So staff would be operating without solid information when carrying out future sampling, as it can be difficult to tell a preprint from a postprint without the version of record as a comparison point.
A self-defeating workflow: publishers would respond, making any cancellation at best temporary and guaranteeing that follow-up surveys would be necessary
If any significant number of libraries followed this labor-intensive workflow and reassigned staff from other tasks to do it, within a year or two the affected publishers would simply change their Green OA policy for authors, removing it entirely or adding an embargo. The library would then have to track these publisher policy changes, yet another labor-intensive workflow I won’t attempt to lay out, as there is no reliable and targeted signaling process for such changes.
Resubscribing would probably be difficult
If the journal jettisons Green OA, or its authors stop self-archiving in a reliable manner, the library will want to resubscribe. That could be tricky, as the necessary funds may already have been diverted. Even if funds were available, it would be exceedingly labor-intensive to resubscribe, decide how to fill any gaps in access and act on that decision, and update relevant metadata to support useful services like SFX linking. Perhaps one would fill the gaps and restore access via pay-per-view, but now we are talking about having to do yet another analysis to determine whether that is cost-effective.
Links would be broken in the meantime
When a library cancels a journal, the buttons in library OpenURL linking software no longer take users from discovery resources like Compendex, Inspec, and Web of Knowledge to the journal’s articles. Known-article searches may still function, but index-based searching that links through to the actual documents, which helps those new to a topic area, would be limited to subscribed titles.
When looking at how to operate in this evolving ecosystem, I imagine we all agree it’s important to use funds and staff resources wisely, and to look beyond a quarter or a year in thinking about the impact of our decisions. Without considering any philosophical or social goals (no matter how mission-relevant, or noble), and looking just at the practical need of providing key research articles to a community, I do not see a viable workflow that is worth testing even on a trial basis.
This is probably part of the reason you do not hear about libraries canceling journals based on availability of OA manuscripts. I would also guess that if the numbers were run, there would not be any journals to cancel, as author practices in this area are not consistent, and are likely to stay that way for the foreseeable future.
(Adapted from a post to the SPARC OA Forum Listserv.)