[T]he price of textbooks has risen more than 800% over the past 30 years, a rate faster than medical services (575%), new home prices (325%), and the consumer price index (250%).
There is no doubt that the public interests vested in funding agencies, universities, libraries, and authors, together with the power and reach of the Internet, have created a compelling and necessary momentum for open access. It won’t be easy, and it won’t be inexpensive, but it is only a matter of time.
A guest post by Ellen Finnie Duranceau, Program Manager, Scholarly Publishing and Licensing, Massachusetts Institute of Technology
the better job Green OA does, the more it will be resisted [by publishers]. To keep Green OA programs going, they have to be imperfectly implemented.
In the real world, though, “perfect implementation” is about as likely as, well, perfect anything.
More importantly, I don’t think it’s feasible for a library to design a process that would allow it to know, on an ongoing basis, at a reasonable cost, whether Green OA has been implemented sufficiently by the authors in any particular journal that the library could afford to cancel its subscription.
Indeed, I can’t define a scenario that seems solid enough to even experiment with, let alone deploy, in a research library in the real world.
For this exercise, I’m leaving aside any broader goals of wider distribution of publicly funded research, etc., or any other philosophical commitment one might have to OA, and am just focusing on providing sufficient service to one’s own community. I’m being completely pragmatic.
The initial study to determine whether to cancel is cumbersome, expensive
First, we have the problem that a wide sampling from any given journal would be required, since author practices in self-archiving vary. This sampling would also have to be repeated regularly, and take in several sample years, since practices will vary over time.
Whoever performs this sampling would have to be trained in recognizing which version of a particular article is posted online, since presumably one wants the peer-reviewed version available to one’s faculty, researchers, and students. This would require, in many cases, comparing the manuscript with the version of record (which, please note, is only available to you if you subscribe).
After all the sampling is done and a spreadsheet created, one would have to calculate what percentage of the journal was openly available (and whether that percentage was acceptable; this would have to be a very high number, presumably), and after what time period. This would not be an easy feat, as one has to have numbers representing the total number of articles in order to make the comparison, and as far as I’m aware, this would involve manually tabulating the number of articles in each issue (again possibly through sampling).
Then this information would have to be used in conjunction with other important data such as usage level, faculty interest and feedback, cost, etc. Of course, this whole approach would only be responsible if one had buy-in from the community one is serving. That community would have to believe that this process is reasonable and that the end goal of replacing library journal subscriptions with reliance on authors’ self-archived articles is a good one.
The cumbersome, expensive survey would have to be repeated, year after year, and would get harder and harder to administer
If the decision were taken to cancel the journal, assuming here that the decision rested in significant part on the availability of OA manuscripts, then one would also have to have a cycle of returning to those titles to be sure a certain acceptable percentage was still available. This would be necessary because author practices vary, and there is no reason at all to assume that because a good percentage of a journal was OA one year, the same will be true the next. So continuous sampling would likely be required. We are now talking about a dramatic impact on staff resources, so some other work would need to be stopped or slowed. (And by the way, this assumes the cancellation is likely to free up funds, which, in our package-driven purchasing world, is not always the case.)
But let’s assume one does cancel. Then, if one wants to continue to sample post-cancellation (as would seem to be necessary), in many cases one would need the version of record to compare with, to be sure one is looking at the peer-reviewed version. Yet this version would not be available once the cancellation had taken place. So staff would be operating without solid information when carrying out future sampling, as it can be difficult to tell a preprint from a postprint without the version of record as a comparison point.
Self-Defeating Workflow: Publishers would respond, making any cancellation at best temporary, guaranteeing that follow-up surveys would be necessary
If any significant number of libraries followed this labor-intensive workflow and reassigned staff from other tasks to do it, within a year or two the affected publishers would simply change their green OA policy for authors, removing it entirely or adding an embargo. The library would have to track these publisher policy changes—another labor-intensive workflow I won’t attempt to lay out, as there is no reliable and targeted signaling process for such changes.
Resubscribing would probably be difficult
If the journal jettisons Green OA, or its authors stop self-archiving in a reliable manner, the library will want to resubscribe. That could be tricky, as the necessary funds may already have been diverted. Even if funds were available, it would be exceedingly labor intensive to resubscribe, and to decide about and act upon filling any gaps in access, as well as updating relevant metadata to facilitate useful services like SFX linking. Perhaps one would fill the gaps and restore the access via pay-per-view, but now we are talking about having to do another analysis to determine whether that is cost-effective.
Links would be broken in the meantime
When a library cancels a journal, the buttons generated by the library’s OpenURL linking software no longer take users from discovery resources like Compendex, Inspec, Web of Knowledge, etc., to journal articles. Known-article searches may still function, but index-based searching that links to the actual documents to assist those new to a topic area would be limited to subscribed titles.
When looking at how to operate in this evolving ecosystem, I imagine we all agree it’s important to use funds and staff resources wisely, and to look beyond a quarter or a year in thinking about the impact of our decisions. Without considering any philosophical or social goals (no matter how mission-relevant, or noble), and looking just at the practical need of providing key research articles to a community, I do not see a viable workflow that is worth testing even on a trial basis.
This is probably part of the reason you do not hear about libraries canceling journals based on availability of OA manuscripts. I would also guess that if the numbers were run, there would not be any journals to cancel, as author practices in this area are not consistent, and are likely to stay that way for the foreseeable future.
(Adapted from a post to the SPARC OA Forum Listserv.)
…the content scholarly publishers create…
The Board believes that the licensing terms in the Taylor & Francis author agreement are too restrictive and out-of-step with the expectations of authors in the LIS community.
By Greg Cram, Rights Clearance Analyst, The New York Public Library
On March 4, 2013, Maria Pallante, the 12th United States Register of Copyrights, delivered “The Next Great Copyright Act” at Columbia Law School. In the lecture, Register Pallante reflected on the history of other major comprehensive revisions to United States copyright law. Noting the complexity of current copyright law and its failure, in some areas, to stay current, she argued that the time has come for the next general revision to begin. She highlighted the work the Copyright Office has already undertaken in preparation for the next act, including reports on Digital First Sale, Orphan Works, Pre-1972 Sound Recordings, Mass Digitization, and others. Finally, she laid out a number of issues that are on the table for consideration in the next round of comprehensive revision.
The content of the next comprehensive copyright act is important to libraries and library patrons. Copyright law impacts library services at all levels, from the basics of making unsupervised copiers available to patrons to the complicated digitization of works in library collections. In the lecture, Register Pallante highlighted a few issues important to libraries, including the first sale doctrine, the libraries and archives exception, the blind and print disabled exception, and the length of copyright protection. The next copyright act is certain to implicate many library services, not to mention the general flow of content in modern society.
Because of the importance of this lecture, I am sharing my notes below. The lecture was recorded, but is not yet available on the Kernochan Center’s website. I strongly recommend watching the recording when it is available. I labored to take accurate notes and do not intend to misrepresent the content of the lecture. Even with my diligence, these notes should not be understood to be an official record or transcript of the lecture.
My notes on “The Next Great Copyright Act”
The next comprehensive review should begin soon. A comprehensive review is needed for two main reasons. First, courts are asking Congress to fix copyright law (see, e.g., Golan, Google Books, Tenenbaum). Second, more people need help navigating a complex law and shouldn’t need an army of lawyers to understand copyright law.
There should be two main themes for the next great copyright act. First, it should be forward thinking, but flexible. Second, authors’ rights to enjoy, control, and exploit works need to be meaningful. Authors are not the counterweight to the public interest because protecting authors is in the public interest. A copyright act that did not protect authors would be illogical. But, the law needs to recognize that some authors are different by giving weight to Creative Commons licenses and public domain declarations.
The issues on the table for the next comprehensive review include:
—Not all copies are the same
—Perhaps there could be discrete exceptions for certain incidental copies
—For more information on this issue, see the Copyright Office’s 2001 study on the Digital Millennium Copyright Act
Public Performance Right for Sound Recordings
—Copyright Office is a “strong supporter” of a public performance right for sound recordings
—Disparities between terrestrial radio and internet radio royalty rates are hampering new business models
—The new law must respect the integrity of the internet, including free speech
—There needs to be, however, a mix of legislative and voluntary efforts to combat infringement online
—One solution may be to increase criminal penalties for streaming, or at least bring them in line with the penalties for distribution through downloads
—The Copyright Office is studying this issue
—Small claims may be a way for rights holders to enforce rights when federal litigation may be too expensive
—The Copyright Office could, potentially, take a lead role in administering small claims
—Review registration requirements
—Look at statutory damages from all angles
—Statutory damages are an important part of the copyright act and should be retained
—Need to provide guidance to courts about how statutory damages should be applied
The Digital Millennium Copyright Act
—The Internet has evolved since DMCA passage in 1998
—Congress should review the § 512 safe harbors
Registration and Deposit of Published Works
—The deposit requirements for registration should remain in the next copyright act
—Congress should review the legal incentives for registration
—How can the Library of Congress add born digital works to its collection through this process?
—The policies surrounding mandatory deposit should not be driven by the collection building activities of the Library of Congress (see the ACCORD Report for more information)
—Digital first sale will be an issue on the table
—Physical first sale may also need to be reviewed, depending on the outcome of the Kirtsaeng v. John Wiley & Sons case currently before the Supreme Court
—The libraries and archives exception in § 108 should be updated
—Update exceptions for the blind and print disabled in § 121 for the digital world
—Explore new exception for higher education institutions
—Need to review growth of licensing schemes
—Review mechanical licenses
Now the “bold” issues:
Term of 50 years, renewable for an additional 20
—The Supreme Court decision in Golan v. Holder is the last word on whether life plus 70 years is constitutional
—However, the term of copyright protection could be modified to 50 years after the death of the author, renewable for another 20 years
—This would put the burden on the copyright owner to renew copyright term at the end of 50 years after death
—Modeled after § 108(h), something the Copyright Office is very fond of
—This proposal would be acceptable under various international treaties, including the Berne Convention
Opt-Out v. Opt-In
—Extended collective licensing could potentially solve many problems
Finally, Congress should expand the role of the Copyright Office. The Office could help to resolve questions of law or fact through advisory opinions. The Office could also help to establish best practices on a number of topics, including searching for copyright owners. If an extended collective licensing scheme is devised by Congress, then the Office could provide transparency to that system.
But fewer than 20 percent of the American institutions that have formed partnerships with Coursera are also members of Coapi. That seems downright hypocritical to me, as opening access to faculty research would help level hierarchies and tear down boundaries between academics and the public, between major research universities and less-wealthy institutions, and between the developed and developing worlds.
Both the new directive and the N.I.H. policy allow delays of a year before making papers freely available. That may be too long. Federal agencies should keep any delays as short as possible.
The copyright in most of these works is owned by our faculty members, and it is well past time that we just refused to transfer those rights to commercial entities that undermine our best interests.
Kevin Smith at Duke draws the right conclusion from the ongoing outrage of the lawsuit against GSU.