Author Archives: Krista Cox

Eleventh Circuit Reverses and Remands Georgia State E-Reserves Case (Again)

The long saga of the Georgia State University (GSU) e-reserves case continues as the Court of Appeals for the Eleventh Circuit reversed the district court’s ruling which had found that the vast majority of GSU’s use of works in its e-reserves constituted a fair use. This is the second time the Eleventh Circuit has reviewed the case, and the second time it has reversed.

In 2008, publishers sued GSU for copyright infringement, arguing that the use of unlicensed excerpts of copyrighted works in the e-reserves constituted infringement. GSU defended itself, relying on the right of fair use. In the first bench trial, the district court ruled in favor of fair use in 43 of the 48 instances of alleged infringement. The Eleventh Circuit reversed and remanded the case in 2014, directing the lower court to re-examine the weight it gave to market substitution and to re-evaluate the four fair use factors holistically, rather than taking an arithmetic approach (i.e., assuming that fair use applies whenever three of the four factors favor the use and only one disfavors it). On remand, the district court re-evaluated the four factors and found that 44 of the 48 instances constituted fair use. In her analysis, Judge Evans assigned each factor a weight: “The Court estimates the initial, approximate respective weights of the four factors as follows: 25% for factor one, 5% for factor two, 30% for factor three, and 40% for factor four.” The publishers again appealed to the Eleventh Circuit, which heard the case in 2017. (Here’s a link to ARL’s amicus brief in the second appeal.)

On October 19, 2018, the Eleventh Circuit released its 25-page opinion—more than a year after hearing oral arguments in the case—finding that the district court had again erred in its evaluation of fair use. The Eleventh Circuit suggests that the district court was mandated to re-evaluate only its analysis of the second and third factors, but had instead also revisited its analysis of factor four (which, in the first trial, the district court had found weighed against fair use in 31 cases).

Additionally, the Eleventh Circuit points out that “The district court again applied a mathematical formula in its overall analysis of fair use,” which it had been instructed against. Although the district court couched the given weights as “initial” and “approximate,” the Eleventh Circuit found that the district court adjusted those weights in only four instances and did not otherwise adjust them in its overall analysis. Thus, “We conclude that the district court’s quantitative rubric was an improper substitute for a qualitative consideration of each instance of copying in the light of its particular facts.” The Eleventh Circuit has remanded the case, directing the district court to take a holistic approach to fair use and to avoid any mathematical approach to the four factors.
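For illustration only, here is a minimal sketch of the kind of weighted, arithmetic tally the district court’s rubric implies. The 25/5/30/40 weights come from the opinion quoted above; scoring each factor as simply favoring or disfavoring fair use and summing the weighted results is an assumption made for this example, and this mechanical approach is precisely what the Eleventh Circuit rejected in favor of a qualitative, work-by-work analysis.

```python
# Illustrative only: an arithmetic fair use "rubric" of the sort the
# Eleventh Circuit rejected. The weights are taken from Judge Evans's
# opinion; the +1/-1 scoring of each factor is a simplifying assumption.

WEIGHTS = {"factor_1": 0.25, "factor_2": 0.05, "factor_3": 0.30, "factor_4": 0.40}

def arithmetic_rubric(findings: dict) -> bool:
    """findings maps each factor to +1 (favors fair use) or -1 (disfavors it)."""
    score = sum(WEIGHTS[f] * findings[f] for f in WEIGHTS)
    return score > 0  # mechanical cutoff, with no qualitative weighing of facts

# Example: factors 1-3 favor fair use; factor 4 (market harm) disfavors it.
print(arithmetic_rubric({"factor_1": 1, "factor_2": 1, "factor_3": 1, "factor_4": -1}))
# -> True (0.25 + 0.05 + 0.30 - 0.40 = 0.20), even though the most heavily
#    weighted factor cuts the other way -- which is why the appeals court
#    called this an improper substitute for a holistic, case-by-case analysis.
```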

Another issue the Eleventh Circuit opinion addresses is whether the cost of purchasing licenses affects the third factor; the district court in the second trial considered the price of use on two occasions. The Eleventh Circuit ruled that price should not be taken into account when evaluating the amount and substantiality of the portion of the work used.

While the Eleventh Circuit reversed and remanded on the above issues, it affirmed the district court’s decision not to reopen the record. Publishers in 2015 filed a motion to reopen, asserting the need to introduce “Evidence of GSU’s ongoing conduct (e.g. its use of E-Reserves during the most recent academic term)” as well as evidence of the availability of digital licenses. Here, the Eleventh Circuit notes that this decision is within the discretion of the trial court.

Kevin Smith posted about the GSU case on In the Open, with an excellent summary of what the Eleventh Circuit’s opinion (as well as its last opinion) does not do, and what, as a result, the publishers have already lost:

…But the big principles that the publishers were trying to gain are all lost. There will be no sweeping injunction, nor any broad assertion that e-reserves always require a license. The library community will still have learned that non-profit educational use is favored under the first fair use factor even when that use is not transformative. The best the publisher plaintiffs can hope for is a split decision, and maybe the chance to avoid paying GSU’s costs, but the real victories, for fair use and for libraries, have already been won.

Eleventh Circuit Finds Georgia’s Annotated State Laws Not Copyrightable

On Friday, October 19, the Court of Appeals for the Eleventh Circuit found that Georgia’s annotated laws are not protected by copyright, reversing the district court. In Georgia v. Public.Resource.Org, Georgia argued that its annotated state laws are protected by copyright. Public.Resource.Org posted these laws online—as it has done for laws and codes in several other jurisdictions—and was subsequently sued for copyright infringement. Public.Resource.Org argued that because only the annotated versions are considered the official versions, they should be freely readable by the public. As a policy matter, this outcome makes sense; people should be able to read, for free, the laws they must abide by. The Eleventh Circuit agreed with Public.Resource.Org.

The Eleventh Circuit did not state that all annotated laws are not copyrightable, but instead noted that in the present case, the annotations were done at the direction of state officials and intertwined with the law itself. The court sums up its conclusion: “the annotations in the OCGA are sufficiently law-like so as to be properly regarded as a sovereign work. Like the statutory text itself, the annotations are created by the duly constituted legislative authority of the State of Georgia. Moreover, the annotations clearly have authoritative weight in explicating and establishing the meaning and effect of Georgia’s laws. Furthermore, the procedures by which the annotations were incorporated bear the hallmarks of legislative process, namely bicameralism and presentment. In short, the annotations are legislative works created by Georgia’s legislators in the exercise of their legislative authority.”

The district court had ruled that the annotations were subject to copyright, then proceeded to reject the argument that Public.Resource.Org’s use was fair use. However, as the Eleventh Circuit notes, “Because we conclude that no copyright can be held in the annotations, we have no occasion to address the parties’ other arguments regarding originality and fair use.”

ARL submitted an amicus brief supporting Public.Resource.Org in this case—together with ALA, ACRL, Public Knowledge, and other groups and individuals—as well as in a related case, ASTM v. Public.Resource.Org.

What’s In (and Out) of the IP Chapter of the United States, Mexico, Canada Trade Agreement

Yesterday, Canada announced—just in time for the negotiating parties’ self-imposed deadline of September 30—that it would join the trade agreement with the United States and Mexico. This agreement, a renegotiation of NAFTA, which is apparently also being called the US-Mexico-Canada Agreement, or USMCA, includes much more prescriptive provisions on intellectual property than the original NAFTA did. The original NAFTA text on intellectual property, written in a different era of trade agreements, does not include language on copyright term or on issues covered by the WIPO Internet Treaties (NAFTA was negotiated before the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty).

Presumably any deal that Canada agreed to in the renegotiation was going to be more prescriptive, with greater rights for rightholders, than in the original NAFTA. However, it is also worse, at least in some respects, than what Canada, Mexico and the United States—and nine other countries—had agreed to in the Trans-Pacific Partnership Agreement (TPP) (see analysis of that text here), which the United States withdrew from after Trump became President. (Note: after the United States’ withdrawal from the TPP, the remaining 11 countries in the negotiations—Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore and Vietnam—renegotiated and formed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, or CPTPP, which suspended many of the United States’ demands on copyright and other IP provisions).

Here’s a look at what’s in—and out—of the renegotiated IP chapter, as compared to both the original NAFTA text and the TPP text:

Limitations and Exceptions 

Arguably the biggest disappointment in the recently released text is what the IP chapter does not include. The TPP had included language on limitations and exceptions based on a United States proposal from 2012, obligating parties to try to achieve balance in their copyright systems. Article 18.66 of the TPP read:

Each Party shall endeavour to achieve an appropriate balance in its copyright and related rights system, among other things by means of limitations or exceptions that are consistent with Article 18.65 (Limitations and Exceptions), including those for the digital environment, giving due consideration to legitimate purposes such as, but not limited to: criticism; comment; news reporting; teaching, scholarship, research, and other similar purposes; and facilitating access to published works for persons who are blind, visually impaired or otherwise print disabled.

While the language could have been stronger—for example, by mandating that parties achieve a balance rather than merely “endeavour[ing]” to do so—a provision on balanced copyright was seen as a success, recognizing the importance of limitations and exceptions in copyright. When trade agreements or laws include only provisions regarding the rights of rightholders, the rights of users get ignored. It is disappointing that the United States chose not to propose balancing language, and instead included limiting language with respect to limitations and exceptions, requiring parties to “confine” them to the three-step test: 1) certain special cases; 2) that do not conflict with the normal exploitation of the work; and 3) that do not unreasonably prejudice the legitimate interests of the right holder.

Copyright Term

Copyright term is one of the most significant areas in which Canada will be forced to change its copyright law. As noted above, NAFTA did not contain provisions dictating copyright term (and, of course, was negotiated prior to the United States’ own term extension). Canada currently has a copyright term of the life of the author plus fifty years but, under the USMCA text, will need to extend that term to life plus seventy. Perhaps this concession was to be expected, since the TPP parties also agreed to that term, yet the consequences for the public domain are significant. The United States has seen a moratorium on published works entering the public domain for the last twenty years due to the copyright term extension enacted in 1998. The public domain is critical for the creation of new knowledge and culture, and copyright term plays a significant role in closing it off. A life-plus-seventy term goes well beyond international standards.

Additionally, Canada agreed to further extension of copyright term for corporate works, beyond what had been agreed to in the TPP. While the TPP parties agreed to providing corporate works (works that are not measured on the life of the author) with 70 years of protection, the USMCA text requires 75 years.

Technological Protection Measures

Because NAFTA went into force in 1994, it did not include provisions that became standard after the WIPO Internet Treaties, such as anti-circumvention measures. The new provisions in the USMCA mirror the anti-circumvention text of several past bilateral trade agreements negotiated by the United States. The agreement requires parties to make it an offense to “knowingly, or having reasonable grounds to know,” circumvent technological protection measures, or to manufacture or distribute devices that are primarily designed for, or promoted for the purpose of, circumvention. This language is highly prescriptive and detailed. It also includes a closed-list set of seven limitations and exceptions to the anti-circumvention measures, plus a provision permitting “additional exceptions or limitations for noninfringing uses of a particular class of works, performances, or phonograms, when an actual or likely adverse impact on those noninfringing uses is demonstrated by substantial evidence in a legislative, regulatory or administrative proceeding in accordance with the Party’s law.” The text also makes circumvention an independent and separate cause of action, apart from any underlying copyright infringement.

On a positive note, the language regarding additional limitations and exceptions is not restricted to a three-year rulemaking cycle, as exists in the United States and several other trade agreements. From the agreed-to text, it appears that parties may provide for permanent limitations and exceptions, if permitted by domestic law.

While similar language regarding making circumvention an independent cause of action existed in the TPP, the TPP provision was potentially mitigated by a helpful footnote reading, “A Party may provide that the obligations described . . .with respect to manufacturing, importation and distribution apply only where such activities are undertaken for sale or rental, or where such activities prejudice the interests of the right holder of the copyright or related right.” Making circumvention a “separate and independent cause of action” is controversial and makes little sense, negatively impacting legitimate and non-infringing circumvention.

It is also disappointing to see the inclusion once more of a closed-list set of limitations and exceptions, mirroring those found in the United States’ copyright law, which have been criticized domestically as being overly-narrow and, in some cases, useless.

Objectives and Principles

The USMCA includes high-level objectives and principles that recognize at least some level of balance and mirror language found in the TPP. Article 20.A.2, for example, notes that intellectual property protection and enforcement “should contribute to the promotion of technological innovation and to the transfer and dissemination of technology, to the mutual advantage of producers and users of technological knowledge and in a manner conducive to social and economic welfare, and to a balance of rights and obligations.” Similarly, the principles provide that parties may “adopt measures necessary to protect public health and nutrition, and to promote the public interest in sectors of vital importance to their socio-economic and technological development, provided that such measures are consistent with the provisions of this Chapter.”

While this acknowledgement of balance is welcome, the lack of specific provisions regarding balance underscores the fact that the agreement strengthens the rights of rightholders, ratcheting up protections, without providing the same for users.

Remedies Allow for Judicial Discretion

Another welcome inclusion is language on proportionality that was also found in the TPP, requiring parties to “take into account the need for proportionality between the seriousness of the intellectual property infringement, and the applicable remedies and penalties, as well as the interests of third parties.”

ISP Liability

The USMCA language includes prescriptive provisions regarding safe harbors for Internet service providers. Like the TPP, it includes a carve-out to accommodate the Canadian system of notice-and-notice (as opposed to the United States’ notice-and-takedown). As noted previously on this blog, the flexibility to implement notice-and-notice is effectively limited to Canada because it is restricted to systems that existed as of “the date of agreement in principle” of the USMCA.

For additional reading, Michael Geist has a nice summary from a Canadian perspective.

Software Preservation Best Practices in Fair Use to Help Safeguard Cultural Record, Advance Research

*Cross-posted from ARL News*

*Edited to add links to blog posts by Patricia Aufderheide and Brandon Butler*

The new Code of Best Practices in Fair Use for Software Preservation provides clear guidance on the legality of archiving legacy software to ensure continued access to digital files of all kinds and to illuminate the history of technology.

This Code was made by and for the software preservation community, with the help of legal and technical experts. The publication provides librarians, archivists, curators, and others who work to preserve software with a tool to guide their reasoning about when and how to employ fair use—the legal doctrine that allows many value-added uses of copyrighted materials—in the most common situations they currently face.

Libraries, archives, and museums hold thousands of software titles that are no longer in commercial distribution, but institutions lack explicit authorization from the copyright holders to preserve these titles or make them available. Memory institutions also hold a wealth of electronic files (texts, images, data, and more) that are inaccessible without this legacy software. The preliminary report released by the project team in February documents high levels of concern among professionals worried that while seeking permission to archive software is time-consuming and usually fruitless, preserving and providing access to software without express authorization is risky. Meanwhile, digital materials languish, and the prospects for their effective preservation dim.

In interviews with the project team, software preservation professionals made it clear that users and uses for legacy software are as various as human inquiry, and will multiply over time. In the words of Jessica Meyerson, a founder of the Software Preservation Network, “our cultural record is increasingly made up of complex digital objects.” Another interviewee invoked technology-investor Marc Andreessen’s argument that “software is eating the world,” observing that access to the digital cultural record is itself dependent on software.

The Code of Best Practices in Fair Use for Software Preservation will help this community overcome legal uncertainty by documenting a consensus view of how fair use applies to core, recurring situations in software preservation. Fair use has become a powerful tool for cultural memory institutions and their users, allowing them to realize the potential of stored knowledge with due respect for the interests of copyright holders. (See the 2012 Code of Best Practices in Fair Use for Academic and Research Libraries.) Fair use holds the same potential where software preservation is concerned, particularly given the transformative nature of the uses described in the Code.

The Code of Best Practices in Fair Use for Software Preservation presents a series of five situations in which librarians, archivists, curators, and others working to preserve software can employ fair use. The Code describes the activities, states the principle informing the choice to employ fair use, and makes clear the limitations of such use—that is, the outer bounds of the community consensus at this time. The five situations covered are:

  • Accessioning, stabilizing, evaluating, and describing digital objects
  • Documenting software in operation, and making that documentation available
  • Providing access to software for use in research, teaching, and learning
  • Providing broader networked access to software maintained and shared across multiple collections or institutions
  • Preserving files expressed in source code and other human-readable formats

The Code also includes a brief introduction to software preservation and copyright, an epilogue on the future of software preservation, and two appendices on (1) the fair use doctrine and preservation practice in general and (2) other copyright-related issues related to preservation.

This Code is the result of a project funded by the Alfred P. Sloan Foundation. Co–principal investigators Patricia Aufderheide of the Center for Media & Social Impact at American University’s (AU) School of Communication, Brandon Butler of the University of Virginia Library, Krista Cox of the Association of Research Libraries, and Professor Emeritus Peter Jaszi of the AU Washington College of Law conducted extensive interviews and focus groups with software preservation experts and other stakeholders to produce this Code. The project was coordinated by the Association of Research Libraries (ARL), the Center for Media & Social Impact at AU, and the Program on Information Justice and Intellectual Property at AU Washington College of Law.

Download, read, and use the Code of Best Practices in Fair Use for Software Preservation. The Code will be supported by webinars, workshops, online discussions, and educational materials later this year and in 2019. To stay up to date on news about this project, watch the ARL website, follow us on Facebook or Twitter, or subscribe to our email news lists. For more information, contact Krista Cox, krista@arl.org.

See also:

Pat Aufderheide, “Fair Use and the Future of Digital Culture,” CMSImpact

Brandon Butler, “Introducing the Code of Best Practices in Fair Use for Software Preservation,” The Taper

Documentary “Paywall: The Business of Scholarship” Premieres in Washington, DC

*This is a guest blog post by Judy Ruttenberg, ARL program director for strategic initiatives.*
*Updated September 11, 2018, with quotation from Geneva Henry.*

The documentary film Paywall: The Business of Scholarship made its global premiere in Washington, DC, on September 5, 2018, the same week that 11 European countries proclaimed that all their publicly funded research would be open access by 2020. Paywall producer and director Jason Schmitt and director of photography Russell Stone welcomed the DC audience, which comprised many of the scientists, publishers, and open access advocates featured in the 65-minute film. With minimal narration and expertly sequenced interviews, the film weaves together two principal stories: the exorbitant financial cost to access for-profit academic journals and the associated, incalculable human cost when doctors, patients, students, and would-be innovators all over the world hit paywalls that deny them access to the latest research.

Schmitt, an associate professor of media and communication at Clarkson University, told the DC audience that the film was made not for them but for their neighbors, friends, and colleagues who are not immersed in the world of academic publishing. To the uninitiated, the system makes little sense. The labor of writing articles is unpaid, as is much of the editing, peer review, and curation. Taxpayers fund most scientific research, whether done within government agencies or through universities, and yet the results (until recently) have not been available to them. The top five academic publishers—which dominate the market—earn profit margins up to ten times those of top technology firms. While many of the film’s subjects acknowledged innovation and value within these publishing companies, Elsevier in particular, most were quick to say those contributions are outweighed by the costs to the scientific enterprise of excluding so many people from participating in it.

Some of Paywall’s most compelling interviews address the consequences of exclusion. Brian Nosek, executive director of the Center for Open Science (COS), described a meeting with a cohort of graduate students in Budapest who were all studying implicit cognition. Why so many students, in one sub-field? Because the papers are largely available on the open internet. Schmitt met with medical students and faculty in Africa and India who were unable to access the latest literature, and unable to contribute their own discoveries to it. Paywalls inhibit innovation because they minimize the chance that “the right person will be in the right place at the right time,” with respect to the literature, said Tom Callaway, from the open source software company Red Hat. And the audience laughed along with Sci-Hub creator Alexandra Elbakyan as, in a rare on-camera interview, she explained that Sci-Hub is targeting this exclusion by helping Elsevier fulfill its mission to make “uncommon knowledge common.”

Paywall is a celebration of the open access (OA) movement and its victories to level the playing field through preprint services like arXiv, and through policies mandating public access to government-funded research. The film is also a sober reflection on the OA movement’s progress, as for-profit academic publishers have both stalled and monetized open access while maintaining ever-increasing subscription revenue. The consortium of European national funders, called cOAlition S, announced their initiative this week with a set of principles addressing these exorbitant costs, including a cap on open access publication fees and a prohibition on publishing in hybrid journals (that charge a mix of subscription and open access fees). Peter Suber, director of the Harvard Office for Scholarly Communication, emphasized in Paywall the critical importance of authors retaining copyrights in order for a large-scale open access system to function.

Geneva Henry, dean of Libraries and Academic Innovation at The George Washington University, also attended the premiere and offered this reflection:

Academic library leaders have been raising the concern for years about the unsustainable rate of inflation with online journals, particularly those supporting the sciences. We have shown our faculty and university leadership the solid data that demonstrates this problem, have cut journals each year to fit our budgets and have been met with criticism by the researchers, have provided information about open access and its advantages, and have received polite nods and smiles from everyone. But little has changed and the high-impact (high-cost) journals are still the ones that remain a priority for faculty publications. Paywall has the opportunity to present these audiences with perspectives from a wide variety of scholars and professionals who identify the issues we’ve been trying to communicate for so long. Its format as a film will enable broader distribution and hopefully be that communication vehicle for bringing this issue to the forefront of academic leadership. We’ve known for a long time that something needs to change and this film will hopefully serve as a catalyst for turning the tide on commercial publishing practices that limit the distribution of knowledge in our society. Perhaps librarians will now be viewed as the canaries in the coal mine rather than a bunch of chicken littles.

SPARC Europe, LIBER (the Association of European Research Libraries), and Research Libraries UK (RLUK) have all issued statements in support of cOAlition S. Peter Suber has also blogged about the plan.

Funded by a grant from the Open Society Foundations, Paywall will be screened by more than 175 universities this fall, and is available to stream under a CC BY 4.0 license at www.paywallthemovie.com. SPARC, a global coalition committed to making open the default for research and education, helped organize the DC premiere.

Government Petitioners’ Brief Points Out Verizon Throttling of Fire Department Battling Largest Fire in California History

On August 20, 2018, petitioners challenging the FCC’s abandonment of net neutrality protections in Mozilla v. FCC filed their initial briefs. Coverage of Mozilla’s joint brief with other non-government petitioners (including companies and public interest groups) is available here and here. This blog post focuses on the brief filed by government petitioners, which include 22 states (New York, California, Connecticut, Delaware, Hawaii, Illinois, Iowa, Kentucky, Maine, Maryland, Massachusetts, Minnesota, Mississippi, New Jersey, New Mexico, North Carolina, Oregon, Pennsylvania, Rhode Island, Vermont, Virginia, and Washington), the District of Columbia, the County of Santa Clara, the Santa Clara County Central Fire Protection District, and the California Public Utilities Commission. These states represent over 165 million people, approximately half of the United States population.

The government petitioners’ brief makes two primary arguments: 1) that the 2017 Order is arbitrary and capricious and failed to take into account harm to consumers, including public safety issues; and 2) that the FCC did not have valid authority to preempt states and localities from enacting their own net neutrality protections.

The highlight of the government petitioners’ brief is its clear, concrete examples of the harms that the absence of net neutrality protections will have on safety, health, and the public interest. While the FCC’s 2017 reversal of net neutrality protections relies on voluntary commitments, Internet service providers have demonstrated that they will prioritize their own interests over the public’s:

BIAS [Broadband Internet Access Service] providers have shown every indication that they will prioritize economic interests, even in situations that implicate public safety. For example, a BIAS provider recently throttled the connection of a County Fire emergency response vehicle involved in the response to the largest wildfire in California history and did not cease throttling even when informed that this practice threatened public safety (emphasis added).

In this case, while the County was fighting the Mendocino Complex Fire—the largest fire in California’s state history—it experienced throttling by its ISP, Verizon. The addendum to the government petitioners’ brief includes a declaration by Santa Clara County Fire Chief, Anthony Bowden, who notes that the fire department relies on “Internet-based systems to provide crucial and time-sensitive public safety services. The Internet has become an essential tool in providing fire and emergency response, particularly for events like large fires, which require the rapid deployment and organization of thousands of personnel and hundreds of fire engines, aircraft, and bulldozers. During these events, resources are marshaled from across the state and country—in some cases even from other countries” and management of these resources depends on the Internet.

As Bowden explains, the unit facilitating resources “typically exchanges 5-10 gigabytes of data per day via the Internet using a mobile router and wireless connection. Near real-time information exchange is vital to proper function . . . Even small delays in response translate into devastating effects, including loss of property, and, in some cases, loss of life.” As a result, high-speed Internet is critical in addressing these fires.

Despite the fact that Santa Clara County Fire believed it had purchased an “unlimited” data plan, Verizon throttled the County’s usage “and data rates had been reduced to 1/200, or less, than the previous speeds.” When employees of Santa Clara County Fire e-mailed with Verizon, requesting the throttling be lifted for public safety purposes:

Verizon representatives confirmed the throttling, but rather than restoring us to an essential data transfer speed, they indicated that County Fire would have to switch to a new data plan at more than twice the cost, and they would only remove throttling after we contacted the Department that handles billing and switched to the new data plan.

Indeed, an e-mail exchange attached as an exhibit in the addendum reports a side-by-side comparison: “a crew members personal phone using Verizon was seeing speeds of 20MBps/7Mbps. The department Verizon device is experiencing speeds of 0.2Mbps/0.6MBps, meaning it has no meaningful functionality.”

In another e-mail exchange questioning why Verizon was throttling the Santa Clara County Fire when the County believed it had purchased unlimited data, a Verizon manager replied, “Verizon has always reserved the right to limit data throughput on unlimited plans. All unlimited data plans offered by Verizon have some sort of data throttling built-in.”

While Verizon’s treatment of the Santa Clara County Fire Department in the midst of fighting the largest fire in California history is an extreme example of an ISP acting in its own self-interest, the brief also raises concerns for other state and local governments seeking to serve the health and safety needs of their residents. For example, the government petitioners’ brief points to California’s efforts to modernize the management of its energy grid in order to balance load, manage congestion, and satisfy reliability standards.

Another example cited by the County of Santa Clara is its “web-based emergency operations center to facilitate coordination internally with other agencies and with first responders in case of emergency.” It uses a web-based public alert system to notify the public about emergencies such as evacuation orders or disease outbreaks and “Significant delays from blocking, throttling, or deprioritization could impede effective notification and jeopardize safety in public-health emergencies.” The County’s hospital also uses web-based systems that are latency-sensitive, including development of expanded telemedicine capabilities which will allow doctors to “perform triage and improve outcomes in time-sensitive situations (such as strokes or vehicular accidents) where immediate diagnosis can mean the difference between life and death.” In developing these improved systems for public health and safety, the County of Santa Clara notes that it invested substantial resources, including over a million dollars in its medical records system, and did so in reliance on the FCC’s protection of an open Internet.

Ultimately, the government petitioners’ brief highlights the ways that state and local governments rely on an open Internet to serve the public health and safety needs of their residents. As the brief notes, the FCC erred in assuming

that providers’ voluntary commitments coupled with existing consumer protection laws provide sufficient protection. The Commission offered no meaningful defense of its decision to uncritically accept industry promises that are untethered to any enforcement mechanism. Nothing in the order would stop a BIAS provider from abandoning its voluntary commitments, revising its Transparency Rule disclosures, and beginning to block, throttle, or engage in paid prioritization, subject only to the Transparency Rule’s limited disclosure requirements—leading to the very harms to consumer interests and public safety that the Commission’s long-standing commitment to protecting the open Internet was intended to prevent.

Mozilla, Internet Companies, Public Interest Groups and Other Petitioners File Brief in Net Neutrality Case

The litigation over the FCC’s 2017 decision to abandon net neutrality protections is currently before the D.C. Circuit in the case captioned Mozilla v. FCC. Briefs by petitioners challenging the FCC’s 2017 Order were filed on Monday, August 20. The first brief (“non-government petitioners”) was filed jointly by Mozilla, Vimeo, Public Knowledge, Open Technology Institute, National Hispanic Media Coalition, NTCH, Benton Foundation, Free Press, Coalition for Internet Openness, Etsy, the Ad Hoc Telecom Users Committee, Center for Democracy and Technology, and Encompass; a summary of its arguments is provided below. The second brief, which will be covered in a separate blog post, was filed by government petitioners, consisting of 22 states, the District of Columbia, the County of Santa Clara, the Santa Clara County Central Fire Protection District, and the California Public Utilities Commission.

The non-government petitioners include a wide range of affected stakeholders: Internet companies, broadband providers, Internet consumers and public interest groups.

Mozilla’s brief points out that the FCC’s 2015 Open Internet Order was the result of a lengthy notice of proposed rulemaking and careful consideration: “Yet in the aftermath of the 2016 presidential election, the FCC did an abrupt about-face, comprehensively embracing the BIAS [Broadband Internet Access Service] providers’ objections this Court rejected in USTA and Verizon, revoking the telecommunications service designation of fixed and mobile BIAS, repealing all the rules governing BIAS provider conduct, and disavowing every source of authority for such rules.” Indeed, as numerous critics have noted, the 2017 decision by the FCC reversing its earlier Open Internet Order seemed to be a predetermined outcome.

Mozilla’s brief makes several arguments: 1) the FCC’s Order mischaracterizes the way the Internet works; 2) the FCC impermissibly renounced its enforcement authority; and 3) the FCC’s repeal of the 2015 Open Internet Order was arbitrary and capricious, ignoring the reasoned decision-making required by an agency.

Pointedly, Mozilla’s brief notes: “In 2016, this Court upheld the rules in their entirety. In 2017, a new FCC undid them, again in their entirety, on a record that had changed little, if at all.” Additionally, “One after another, the FCC reversed virtually all of the 2015 Order’s hundred-plus factual findings, proclaiming wrong what had been found to be right in 2015 and upheld as right in 2016. The abrupt about-face was not adequately reasoned.”

In arguing the arbitrary and capricious nature of the FCC’s reversal of the 2015 Open Internet Order, Mozilla’s brief points out that the FCC “erroneously excluded consumer complaints”* resulting in “skewing the record in favor of its preferred outcome and subverting the rulemaking process.” Such behavior contravenes the Administrative Procedure Act (APA), which requires agencies to examine relevant data and provide reasoned explanations; “an agency cannot close its eyes to evidence in its possession on which it chooses not to rely.”

The FCC’s complete abandonment of net neutrality protections ignored not only the lengthy and detailed record in past proceedings, but also the comments submitted in its 2017 notice of proposed rulemaking. Various amici for the petitioners, whose briefs will be due on Monday, August 27, will also point to the arbitrary and capricious decision-making by the FCC.

*A representative (but not comprehensive) list of companies, organizations, and governments appears on the first several pages. Several library organizations (including ARL, ALA, and AALL), along with city governments, state governments, public interest groups, and companies, are included.

Richard Poynder Interview with UCLA University Librarian Ginny Steel on Open Access

A couple of weeks ago, Richard Poynder interviewed Virginia (“Ginny”) Steel, Norman and Armena Powell University Librarian at UCLA, on open access. Ginny Steel is also the past chair of ARL’s Advocacy and Policy Committee and chair of the SPARC Steering Committee. She is, of course, deeply knowledgeable and thoughtful about open access—Poynder notes in the introduction to the interview, “In contrast to many OA advocates in Europe, Steel’s views on open access are nuanced and undogmatic”—and the entire interview (the PDF of which runs 24 pages, including Poynder’s intro) is well worth reading. The interview covers a range of OA topics, from goals and current challenges to specifics of the University of California’s actions, publishers, and more.

While I do recommend reading the full interview, here are a few highlights:

Ginny notes that while there are numerous OA models, including ones currently under development, it is important to evaluate these models and determine how they serve the ultimate goals of OA:

What’s really important and needs to be carefully evaluated . . . is 1) who controls the copyright of the content, 2) to whom is reading access provided, and 3) is there equity in the opportunities to publish for researchers in institutions or parts of the world that are not able to provide the level of financial support available in Europe and North America.

The ultimate goal of OA is to allow open sharing of research results in a way that offers equal opportunities for researchers around the world to publish, preserves effective peer review, allows authors to retain control over their work, allows worldwide reading access, and provides a sustainable financial model that covers the costs of publishing . . . It’s still very much a work in progress, and there are competing interests that make these conversations difficult.

Ginny’s statement points to the important issue of copyright because it is the copyright owner who chooses to make a work OA. Additionally, while a publisher with copyright ownership over an article might consent to open access so that a reader can read the text itself, it could try to limit other uses (such as text-and-data mining), particularly for licensed, born-digital content.

Hence the importance of academy-controlled, rather than publisher-controlled, content in ensuring meaningful open access. Ginny points out that new business models need to be developed and that these models will vary based on disciplinary needs. In addition to referencing the UC’s “Pathways to OA” document, she notes that “a small group of members of the Association of Research Libraries is working on ‘Academy-owned OA’ (AO-OA) and is partnering with a handful of professional societies and disciplinary repositories to explore new models to move away from subscription-based models dominated by commercial publishers.”

In the final question, Poynder asks about preprint servers and Ginny responds with an “optimistic” view, while again emphasizing the importance of the academy retaining control:

Actually I’m optimistic about the potential of preprint servers becoming full-scale platforms that provide access to preprints, peer-reviewed content, and underlying datasets.

If the academy builds open tools that result in a ‘Sustainable Knowledge Commons’ and there is widespread collaboration with professional societies, I would hope that governance models would ensure that control is retained by the academy and the content creators.

But there will have to be a deep institutional commitment to not cede control.

Designing Open Science in a Decentralized World

*This is a guest blog post by Judy Ruttenberg, ARL Program Director for Strategic Initiatives*

In my past and current roles as a program officer, first in a regional library consortium and now at the Association of Research Libraries, I’ve had the privilege of visiting many libraries. I have observed that often, while explaining a (usually challenging) aspect of local culture or practice, librarians at research-intensive universities, both public and private, will characterize their campus as “highly decentralized.” The new consensus report from the National Academies of Sciences, Engineering, and Medicine, “Open Science by Design: Realizing a Vision for 21st Century Research,” recognizes that because institutions and the entire research enterprise are highly decentralized, the stewardship of research assets must likewise be coordinated across key stakeholders. If we, the stewardship community, get this coordination right, researchers will be able to practice and reap the benefits of open science with the confidence that their scholarly contributions will be supported, rewarded, and discoverable in the future.

“Open Science by Design” provides a high-level roadmap for the stewardship community in which research libraries embody a unique combination of mission and professional expertise. Research libraries provide enduring and barrier-free access to knowledge for current and future generations. The open science/open scholarship movement has expanded the research community’s definition of knowledge assets worthy of curation for long-term use to include software, data, code, and more. By practicing scholarship openly, researchers not only create knowledge assets across the lifecycle—from hypothesis and study design to data collection and narrative publication—they also generate a digital paper trail that contributes to our collective understanding of research dynamics and workflow. Given the report’s observation that “commercial publishers have undertaken significant horizontal and vertical integration in recent years, … acquiring important pieces of the scholarly communications infrastructure, such as preprint servers, institutional repositories, and expanding data archiving, and analytics services associated with their journals,” (p 118) decentralization is actually a strength against consolidation and enclosure of that workflow.

If research institutions and funders embrace the NAS recommendations to encourage, support, and reward openness across the scholarly workflow, librarians can contribute both information science and archival expertise early and often throughout that workflow, as well as preserve research environments to enable the study of science itself. For example, the preprint communities hosted by the Center for Open Science’s OSF Preprints now have an integrated annotation layer in hypothes.is. Librarians are embedded in the leadership of many of these preprint services—and in many research projects themselves—and can advise on the stewardship aspects of peer review as the hypothes.is service is implemented. Librarians will also continue to work in long-standing coalitions to influence the information policy environment to support openness. This fall, ARL will produce a Code of Best Practices in Fair Use for Software Preservation, funded by the Alfred P. Sloan Foundation, to ensure that the subjects, products, and tools of scholarship will continue to be accessible despite evolving technology.

Decentralization, often presented as a barrier to coordination, is an advantage and a goal in the context of open scholarship, provided that the stakeholder community adheres to the report’s implementation principles of interoperability, including:

  • Researchers choosing open repositories for their preprints, publications, and data
  • Research funders ensuring that research products are available in repositories that allow for bulk transfer of digital objects
  • Requirement of unique, persistent identifiers for digital objects identified for long-term preservation
  • Greater attention and investment in metadata schemas for improved discovery
  • Participation of professional societies and research funders in the networking and federation of existing repositories for improved discovery.

The SHARE project team has worked with many different types of open repositories (data repositories, institutional repositories, preprint servers, grant databases, etc.) on all of these issues, and the implementation of improved metadata schemas, persistent identifiers, and methods of bulk transfer is complex. What we’ve learned is that more decentralization, not less, is the answer. Rather than continuing the centralized harvest of metadata from many sources, the SHARE technical team is now developing easy-to-use tools that let institutions and repositories write their own harvesters to push their metadata out to the network and develop local frameworks for hosting the data they exchange with others (a minimal sketch of this push pattern appears below). The Data Curation Network, also funded by the Sloan Foundation, is leveraging the decentralization of expertise across more than ten institutions to improve the treatment, discoverability, and use of data.
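Purely as an illustration of that push pattern, the sketch below shows how an institution-side harvester might gather local metadata records and POST them to a network endpoint. The endpoint URL, token, and record fields are hypothetical placeholders; the actual SHARE tooling, API, and metadata schema differ.

```python
# Hypothetical sketch of an institution-side "push" harvester.
# The endpoint, token, and record structure are illustrative placeholders,
# not the real SHARE API or schema.
import json
import urllib.request

NETWORK_ENDPOINT = "https://share.example.org/api/records"  # placeholder URL
API_TOKEN = "INSTITUTION_TOKEN"                             # placeholder credential

def local_records():
    """Stand-in for harvesting metadata from a local repository."""
    return [
        {
            "identifier": "doi:10.9999/example.123",  # persistent identifier
            "title": "Example preprint",
            "creators": ["A. Researcher"],
            "repository": "Example Institutional Repository",
        }
    ]

def push(records):
    """POST each metadata record to the (hypothetical) network endpoint."""
    for record in records:
        request = urllib.request.Request(
            NETWORK_ENDPOINT,
            data=json.dumps(record).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {API_TOKEN}",
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            print(record["identifier"], response.status)

if __name__ == "__main__":
    push(local_records())
```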

Research libraries will be critical partners within their institutions and within the research enterprise in the implementation of NAS’s open science principles, standards, and business arrangements. ARL looks forward to continuing existing partnerships and developing new ones to support Open Science by Design.

ARL Urges US House of Representatives to Restore Net Neutrality

*Cross-posted from ARL News*

The Association of Research Libraries (ARL) is profoundly disappointed with the US Federal Communications Commission’s (FCC) repeal of the Open Internet Order, which takes effect today, June 11, 2018. ARL is calling on the House of Representatives to reverse the FCC’s decision and restore net neutrality, a bedrock of equitable access to information.

As of today, internet service providers (ISPs) can legally prioritize some voices—those willing and able to pay a premium—over others, such as nonprofit organizations or people holding minority viewpoints. Instead of ensuring that users can access the content of their choosing on an equitable basis, the FCC is now relying solely on market forces to regulate the flow of internet traffic. This will almost certainly lead to many blocking/paid-prioritization arrangements between ISPs and commercial entities.

One possible avenue to retain net neutrality is through the Congressional Review Act (CRA). Under CRA, Congress can overturn an agency’s decision with a simple majority vote in both houses within 60 legislative days of publication of the agency’s decision in the Federal Register. If both houses vote to overturn the decision, it will then require the signature of the President. The CRA resolution to reverse the FCC’s repeal of the Open Internet Order passed the Senate 52-47 on May 16. The House of Representatives can save net neutrality by taking up the issue and voting in favor of the similar CRA resolution introduced by Representative Doyle (D-PA). The House must act by mid-July if it is to pass a CRA resolution restoring the Open Internet Order.

“Net neutrality was essentially a nondiscrimination rule enabling the free and open exchange of ideas, thereby helping libraries fulfill their mission of advancing education, innovation, knowledge creation, and economic growth,” said Mary Ann Mavrinac, president of ARL and vice provost and the Andrew H. and Janet Dayton Neilly Dean of the University of Rochester Libraries. “We call on the House of Representatives to pass the CRA resolution restoring the open internet and we urge President Trump to sign it.”

Challenges to the FCC’s repeal of the Open Internet Order are also currently pending before the US Court of Appeals for the DC Circuit. ARL is working with other library and higher education associations to advocate for the restoration of strong net neutrality protections through submission of an amicus brief highlighting the importance of these rules for access to information, research, education, and freedom of speech.

Take action on this issue by emailing, calling, or tweeting to your Representatives and encouraging them to restore an open internet by voting for the CRA resolution. Battle for the Net provides an easy way to email, call, and tweet to your lawmakers.