ARL Files Comments in NTIA Request for Comment on Consumer Privacy

On Friday, November 9, ARL filed comments responding to the National Telecommunications and Information Administration’s (NTIA) request for comment on “Developing the Administration’s Approach to Consumer Privacy.”

In the submitted comments, ARL recognizes that strong privacy protections for users are necessary, but also that overly prescriptive requirements can cause difficulties in compliance. The comments point to several elements that are critical for meaningful privacy protection, including transparency and consent, while noting that other areas are more nuanced and that policymakers must consider the unintended consequences of particular regulations. For example, the right to deletion raises complex issues and requires a nuanced approach to avoid unnecessary alterations to the cultural and historical record. The comments also note that effective remedies and enforcement mechanisms are needed to make regulations meaningful.

All filed comments are available on the NTIA site.

Report from AAU-APLU Workshop on Accelerating Access to Research Data

*This is a guest blog post by Mary Lee Kennedy, Executive Director of ARL; Judy Ruttenberg, Program Director for Strategic Initiatives; and Cynthia Hudson-Vitale, Head, Digital Scholarship and Data Services, Penn State University Libraries*

Over the past two days we participated in the AAU-APLU workshop on Accelerating Access to Research Data, sponsored by the National Science Foundation (NSF). Eighteen of the thirty teams were ARL institutions from Canada and the United States.

This workshop followed directly from the November 2017 AAU-APLU Public Access Working Group Report and Recommendations, and was further informed by the National Academies’ recommendations in their 2018 consensus report, Open Science by Design: Realizing a Vision for 21st Century Research. Those who attended the Association meeting will remember the update from Alexa McCray, chair of the National Academies report committee, and Kacy Redd, Assistant Vice President for Science & Mathematics Education Policy at APLU, who staffed the AAU-APLU working group.

This workshop was a pivotal experience at a time when government agencies in the US, Canada, and the EU are focusing on open science, and when many institutions are figuring out how to apply and influence policies, practices, and infrastructure. Thirty institutional teams, some of whose members had never worked together before, grappled with the recommendations of the reports mentioned above and committed to a set of next steps. Most teams included someone from the research office, IT/high-performance or academic computing, and the library, while some included provosts and faculty.

The NSF, NIH, Department of Energy, National Institute of Standards and Technology, Department of Defense, OSTP, and National Academies actively participated. Alexa McCray and Sarah Nusser (chair of the AAU-APLU Public Access Working Group) set the context up front: agencies, institutions, and institutional teams, including their libraries, need to collaboratively design researcher-centered data services and support; research data management (RDM) is an integral part of good study design; and research data is a valuable institutional asset.

With this context in mind, the teams got to work, holding many conversations and committing to specific tasks back at their institutions and to continued collaboration as a whole. We all look forward to the workshop report and decisive next steps. In the meantime, please find below a sample of the priorities identified and an initial set of next steps for ARL, as well as steps to consider at your institutions.

Institutional Priorities for Public Access to Research Data

A number of themes emerged when institutions shared their priorities for accelerating public access to research data. A sample of these included:

  • Facilitating low-barrier, seamless support and services for public access to data at the institutional-level through:
    • Establishment of local “one-stop” services for research support, including data management and sharing stakeholder groups to coordinate faculty-centered research data services;
    • Development of training and workshops for public access to data and open science practices, specifically focused on graduate students.
  • Collecting and then mining data management plans (DMPs) of funded research to:
    • Plan for the deposit and curation of research data;
    • Work with faculty members earlier in the research process to facilitate good data management practices;
    • Identify high value data.
  • Leveraging existing partnerships, cross-institutional collaborations, resources, and tools to extend capabilities for research data services.

Initial Considerations on Community Next Steps

With our greatest impact at the intersection of the institutional, research and learning, and public policy communities, ARL will work with:

  • Our colleagues at AAU and APLU, including
    • Articulating a vision, a strategy, and a direction for accelerating public access to data, and
    • Collaborating to scope, and as appropriate participate in, additional workshops for the university and agency communities.
  • Our Advocacy and Public Policy Committee and Research Communications and Collections Working Group to seek ways to influence federal data management policies by representing the needs, capacity, and role of the research library.
  • National agencies, associations, and our ARL Academy (as appropriate) to support the membership in developing open science and open scholarship fluency—particularly as it relates to methods, tools, and data management practices across the institution, and with other research communities.
  • Scholarly and professional societies as potential partners in articulating disciplinary expectations around research data quality, value, and retention.

What can you do as an ARL member?

Please reach out to Mary Lee or Judy to discuss the workshop and its outcomes. ARL member directors James Hilton, Erik Mitchell, and Steve Mandeville-Gamble were also present, along with 15 additional ARL institutions, many of which included library staff on their teams.

The workshop provided a structured and focused opportunity for institution-based teams to meet and begin to map their assets—technology, policy, people, and more—as well as their challenges. Many institutional teams pledged to continue meeting. If you were not able to attend, consider circulating the agenda to your institutional colleagues (in the research office, IT, high-performance or academic computing, and elsewhere) and encouraging discussion along the same lines.

The workshop organizers at AAU and APLU are considering site visits beginning in early 2019 to include institutions that were not able to participate in the workshop. If this advances into a plan, please watch for an announcement of that opportunity.

This was a very engaging workshop, concluding with commitments to concrete deliverables. It sets an optimistic tone for the path ahead.

ARL Celebrates Open Access Week with Commitment to Open Scholarship

*This is a guest blog post by Judy Ruttenberg, ARL program director for strategic initiatives.*

ARL’s mission is to catalyze the collective efforts of research libraries to enable knowledge creation and to achieve enduring and barrier-free access to information. In celebration of Open Access Week 2018, “Designing Equitable Foundations for Open Knowledge,” we’re sharing ARL’s programmatic priorities in supporting open scholarship in the coming year.

With a new focus area around open scholarship, ARL aims to shift the balance of library strategy, staffing, and budgets in favor of open content and what we are calling academy-owned infrastructure (also known as scholar-owned or scholar-led). The Association is looking at initiatives in support of this big bet that would help member libraries in:

  • Increasing their purchasing and investment power to support the full range of their collections and research priorities
  • Making informed decisions about where to invest in new forms of open content and infrastructure based on shared criteria and local interests
  • Partnering in the research enterprise within their institutions in a range of activities from data curation and management to publishing

The past several years have seen a decisive global trend among funding bodies, government agencies, and research communities to accelerate scholarly discovery and improve its effectiveness through open practices (such as data sharing and large-scale collaboration) and digital technology. At the same time, some scholarly communities are pushing for greater experimentation and transparency in peer review (ASAPbio), sharing preprints (arXiv, bioRxiv, and many others), and deploying open annotation (hypothes.is), offering a glimpse of what scholarly communication could look like beyond journal formats—accessible, dynamic, and networked. Research libraries are positioned to lead in this transformation when they are deeply engaged in research and scholarly communities and when a shared understanding of, and commitment to, collectively stewarding the full scholarly record exists. ARL is positioned to broker a shared agenda with scholarly and learned societies and communities, along with academic leadership, US federal agencies, and partners in Canada, the EU, Australia, and the UK, in support of equitable and open knowledge.

More recently, US research library leaders participated in crafting the recommendations of the AAU-APLU Public Access Working Group and the National Academies’ consensus report, Open Science by Design. As an Association, ARL looks forward to working with the membership, partners, and stakeholder communities on implementing their recommendations. Our colleagues in Canada provided similar feedback through Portage to the Tri-Agency Research Data Management Policy, and CARL will be a key partner in these initiatives.

By strengthening open research practices, policies, and standards, we strengthen libraries’ ability to support local research, scholarship, and collections priorities of all kinds in order to meet their missions.

Eleventh Circuit Reverses and Remands Georgia State E-Reserves Case (Again)

The long saga of the Georgia State University (GSU) e-reserves case continues as the Court of Appeals for the Eleventh Circuit reversed the district court’s ruling, which had found that the vast majority of GSU’s uses of works in its e-reserves constituted fair use. This is the second time the Eleventh Circuit has reviewed the case, and the second time it has reversed.

In 2008, publishers sued GSU for copyright infringement, arguing that the use of unlicensed excerpts of copyrighted works in the e-reserves constituted infringement. GSU defended itself, relying on the right of fair use. In the first bench trial, the district court ruled in favor of fair use for 43 of the 48 cases of alleged infringement. The Eleventh Circuit reversed and remanded the case in 2014, directing the lower court to re-examine the weight it gave to market substitution and to re-evaluate the four fair use factors holistically rather than arithmetically (i.e., concluding that if three fair use factors favor the use but one disfavors it, fair use should always apply). On remand, the district court re-evaluated the four factors and found that 44 of the 48 cases constituted fair use. In her analysis, Judge Evans assigned each factor a weight: “The Court estimates the initial, approximate respective weights of the four factors as follows: 25% for factor one, 5% for factor two, 30% for factor three, and 40% for factor four.” The publishers again appealed to the Eleventh Circuit, which heard the case in 2017. (Here’s a link to ARL’s amicus brief in the second appeal.)
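
To make concrete what this kind of “mathematical formula” looks like, below is a minimal sketch in Python of a weighted rubric like the one the district court described. The weights are the ones Judge Evans stated; the +1/-1 scoring scale and the example values are hypothetical illustrations, not figures from the record.

```python
# Illustrative sketch only: the district court's approximate factor weights,
# applied mechanically to hypothetical per-excerpt scores. The Eleventh
# Circuit ultimately rejected exactly this kind of arithmetic approach.

# Weights stated by the district court on remand.
WEIGHTS = {"factor1": 0.25, "factor2": 0.05, "factor3": 0.30, "factor4": 0.40}

def weighted_fair_use(scores):
    """Hypothetical rubric: each factor scored from +1 (favors fair use)
    to -1 (disfavors fair use), combined using the court's stated weights."""
    total = sum(WEIGHTS[factor] * score for factor, score in scores.items())
    return total > 0  # positive total -> "fair use" under the rubric

# Example: three factors favor the use, factor four weighs against it.
example = {"factor1": 1.0, "factor2": 1.0, "factor3": 1.0, "factor4": -1.0}
print(weighted_fair_use(example))  # True: 0.25 + 0.05 + 0.30 - 0.40 = 0.20
```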

On October 19, 2018, the Eleventh Circuit released its 25-page opinion—more than a year after hearing oral arguments in the case—finding that the district court again erred in its evaluation of fair use. The Eleventh Circuit suggests that the district court was mandated to re-evaluate only its analysis of the second and third factors, but had instead also re-evaluated its analysis of factor four (on which the district court had found, in the first trial, that in 31 cases the fourth fair use factor weighed against fair use).

Additionally, the Eleventh Circuit points out that “The district court again applied a mathematical formula in its overall analysis of fair use,” which it had been instructed against. Although the district court couched the given weights as “initial” and “approximate,” the Eleventh Circuit found that the district court adjusted the weights in only four instances and did not otherwise adjust the factors in its overall analysis. Thus, “We conclude that the district court’s quantitative rubric was an improper substitute for a qualitative consideration of each instance of copying in the light of its particular facts.” The Eleventh Circuit has remanded the case, directing the district court to take a holistic approach to fair use and to avoid any mathematical approach to the four factors.

Another issue the Eleventh Circuit opinion addresses is whether the cost of purchasing licenses affects the third factor; the district court in the second trial considered the price of use on two occasions. The Eleventh Circuit rules that price should not be taken into account when evaluating the amount and substantiality of the portion of the work used.

While the Eleventh Circuit reversed and remanded on the above issues, it affirmed the district court’s decision not to reopen the record. In 2015, the publishers filed a motion to reopen, asserting the need to introduce “Evidence of GSU’s ongoing conduct (e.g. its use of E-Reserves during the most recent academic term)” as well as evidence of the availability of digital licenses. Here, the Eleventh Circuit notes that this decision is within the discretion of the trial court.

Kevin Smith posted about the GSU case on In the Open, with an excellent summary of what the Eleventh Circuit’s opinion (as well as its previous opinion) does not do, and what, as a result, the publishers have lost:

…But the big principles that the publishers were trying to gain are all lost. There will be no sweeping injunction, nor any broad assertion that e-reserves always require a license. The library community will still have learned that non-profit educational use is favored under the first fair use factor even when that use is not transformative. The best the publisher plaintiffs can hope for is a split decision, and maybe the chance to avoid paying GSU’s costs, but the real victories, for fair use and for libraries, have already been won.

Eleventh Circuit Finds Georgia’s Annotated State Laws Not Copyrightable

On Friday, October 19, the Court of Appeals for the Eleventh Circuit found that Georgia’s annotated laws are not protected by copyright, reversing the district court. In Georgia v. Public.Resource.Org, Georgia argued that its annotated state laws are protected by copyright. Public.Resource.Org posted these laws online—as it has done with laws and codes from several other jurisdictions—and was subsequently sued for copyright infringement. Public.Resource.Org argued that because only the annotated versions are considered the official versions of the law, they should be freely readable by the public. As a policy matter, this outcome makes sense; people should be able to read, for free, the laws they must abide by. The Eleventh Circuit agreed with Public.Resource.Org.

The Eleventh Circuit did not state that all annotated laws are not copyrightable, but instead noted that in the present case, the annotations were done at the direction of state officials and intertwined with the law itself. The court sums up its conclusion: “the annotations in the OCGA are sufficiently law-like so as to be properly regarded as a sovereign work. Like the statutory text itself, the annotations are created by the duly constituted legislative authority of the State of Georgia. Moreover, the annotations clearly have authoritative weight in explicating and establishing the meaning and effect of Georgia’s laws. Furthermore, the procedures by which the annotations were incorporated bear the hallmarks of legislative process, namely bicameralism and presentment. In short, the annotations are legislative works created by Georgia’s legislators in the exercise of their legislative authority.”

The district court had ruled that the annotations were subject to copyright, then proceeded to reject the argument that Public.Resource.Org’s use was fair use. However, as the Eleventh Circuit notes, “Because we conclude that no copyright can be held in the annotations, we have no occasion to address the parties’ other arguments regarding originality and fair use.”

ARL, together with ALA, ACRL, Public Knowledge, and other groups and individuals, submitted an amicus brief supporting Public.Resource.Org in this case—as well as in a related case, ASTM v. Public.Resource.Org.

What’s In (and Out) of the IP Chapter of the United States-Mexico-Canada Trade Agreement

Yesterday, Canada announced—just in time for the negotiating parties’ self-imposed deadline of September 30—that it would join the trade agreement with the United States and Mexico. This agreement, a renegotiation of NAFTA, which apparently is also being called the US-Mexico-Canada Agreement or USMCA, includes much more prescriptive provisions on intellectual property than what was included in the original NAFTA. The original NAFTA text on intellectual property, written in a different era of trade agreements, does not include language on copyright term or on issues covered by the WIPO Internet Treaties (NAFTA was negotiated before the WIPO Copyright Treaty and the WIPO Performances and Phonograms Treaty).

Presumably any deal that Canada agreed to in the renegotiation was going to be more prescriptive, with greater rights for rightholders, than in the original NAFTA. However, it is also worse, at least in some respects, than what Canada, Mexico and the United States—and nine other countries—had agreed to in the Trans-Pacific Partnership Agreement (TPP) (see analysis of that text here), which the United States withdrew from after Trump became President. (Note: after the United States’ withdrawal from the TPP, the remaining 11 countries in the negotiations—Australia, Brunei, Canada, Chile, Japan, Malaysia, Mexico, New Zealand, Peru, Singapore and Vietnam—renegotiated and formed the Comprehensive and Progressive Agreement for Trans-Pacific Partnership, or CPTPP, which suspended many of the United States’ demands on copyright and other IP provisions).

Here’s a look at what’s in—and out—of the renegotiated IP chapter, as compared to both the original NAFTA text and the TPP text:

Limitations and Exceptions 

Arguably the biggest disappointment in the recently released text is what the IP chapter does not include. The TPP had included language, based on a United States proposal from 2012, on limitations and exceptions; it obligated parties to try to achieve balance in their copyright systems. Article 18.66 of the TPP read:

Each Party shall endeavour to achieve an appropriate balance in its copyright and related rights system, among other things by means of limitations or exceptions that are consistent with Article 18.65 (Limitations and Exceptions), including those for the digital environment, giving due consideration to legitimate purposes such as, but not limited to: criticism; comment; news reporting; teaching, scholarship, research, and other similar purposes; and facilitating access to published works for persons who are blind, visually impaired or otherwise print disabled.

While the language could have been stronger—for example, by mandating that parties achieve balance rather than merely “endeavor[ing]” to do so—a provision on balanced copyright was seen as a success, recognizing the importance of limitations and exceptions in copyright. When trade agreements or laws only include provisions regarding the rights of rightholders, the rights of users get ignored. It is disappointing that the United States chose not to propose balancing language, but instead included limiting language with respect to limitations and exceptions, requiring parties to “confine” limitations and exceptions to the three-step test: (1) certain special cases (2) that do not conflict with the normal exploitation of the work and (3) do not unreasonably prejudice the legitimate interests of the right holder.

Copyright Term

Copyright term is one of the most significant areas in which Canada will be forced to change its law. As noted above, NAFTA did not contain provisions dictating copyright term (and, of course, was negotiated prior to the United States’ own term extension). Canada currently has a copyright term of the life of the author plus fifty years; under the USMCA text, it will need to extend that term to life plus seventy. Perhaps this concession was to be expected, since the TPP parties also agreed to this term, yet the consequences for the public domain are significant. The United States has seen a moratorium on published works entering the public domain for the last twenty years due to the copyright term extension enacted in 1998. The public domain is critical for the creation of new knowledge and culture, and copyright term plays a significant role in closing it off. A term of life plus seventy years goes well beyond international standards.

Additionally, Canada agreed to a further extension of copyright term for corporate works, beyond what had been agreed to in the TPP. While the TPP parties agreed to provide corporate works (works whose term is not measured by the life of the author) with 70 years of protection, the USMCA text requires 75 years.

Technological Protection Measures

Because NAFTA went into force in 1994, it did not include provisions that have become common in the era after the WIPO Internet Treaties, such as anti-circumvention measures. The new provisions in the USMCA mirror the anti-circumvention text of several past bilateral trade agreements negotiated by the United States. The agreement requires parties to make it an offense to circumvent technological protection measures “knowingly, or having reasonable grounds to know,” or to manufacture or distribute devices that are primarily designed, or promoted, for the purposes of circumvention. This language is highly prescriptive and detailed. It also includes a closed-list set of seven limitations and exceptions to the anti-circumvention measures, plus a provision permitting “additional exceptions or limitations for noninfringing uses of a particular class of works, performances, or phonograms, when an actual or likely adverse impact on those noninfringing uses is demonstrated by substantial evidence in a legislative, regulatory or administrative proceeding in accordance with the Party’s law.” The text also makes circumvention an independent and separate cause of action, apart from any underlying copyright infringement.

On a positive note, the language regarding additional limitations and exceptions is not restricted to a three-year rulemaking cycle, as exists in the United States and several other trade agreements. From the agreed-to text, it appears that parties may provide for permanent limitations and exceptions, if permitted by domestic law.

While similar language making circumvention an independent cause of action existed in the TPP, the TPP provision was potentially mitigated by a helpful footnote reading, “A Party may provide that the obligations described . . . with respect to manufacturing, importation and distribution apply only where such activities are undertaken for sale or rental, or where such activities prejudice the interests of the right holder of the copyright or related right.” Making circumvention a “separate and independent cause of action” is controversial and makes little sense, negatively impacting legitimate and non-infringing circumvention.

It is also disappointing to see the inclusion, once more, of a closed-list set of limitations and exceptions mirroring those found in United States copyright law, which have been criticized domestically as overly narrow and, in some cases, useless.

Objectives and Principles

The USMCA includes high-level objectives and principles that recognize at least some level of balance and mirror language found in the TPP. Article 20.A.2, for example, notes that intellectual property protection and enforcement “should contribute to the promotion of technological innovation and to the transfer and dissemination of technology, to the mutual advantage of producers and users of technological knowledge and in a manner conducive to social and economic welfare, and to a balance of rights and obligations.” Similarly, the principles provide that parties may “adopt measures necessary to protect public health and nutrition, and to promote the public interest in sectors of vital importance to their socio-economic and technological development, provided that such measures are consistent with the provisions of this Chapter.”

While this acknowledgement of balance is welcome, the lack of specific provisions regarding balance underscores the fact that the agreement strengthens the rights of rightholders, ratcheting up protections, without providing the same for users.

Remedies Allow for Judicial Discretion

Another welcome inclusion is language on proportionality that was also found in the TPP, requiring parties to “take into account the need for proportionality between the seriousness of the intellectual property infringement, and the applicable remedies and penalties, as well as the interests of third parties.”

ISP Liability

The USMCA language includes prescriptive provisions regarding safe harbors for Internet service providers. Like the TPP, it includes a carve-out to accommodate the Canadian system of notice-and-notice (as opposed to the United States’ notice-and-takedown). As noted on this blog previously, the flexibility to implement notice-and-notice is effectively limited to Canada because it is restricted to systems that existed as of “the date of agreement in principle” of the USMCA.

For additional reading, Michael Geist has a nice summary from a Canadian perspective.

Software Preservation Best Practices in Fair Use to Help Safeguard Cultural Record, Advance Research

*Cross-posted from ARL News*

*Edited to add links to blog posts by Patricia Aufderheide and Brandon Butler*

The new Code of Best Practices in Fair Use for Software Preservation provides clear guidance on the legality of archiving legacy software to ensure continued access to digital files of all kinds and to illuminate the history of technology.

This Code was made by and for the software preservation community, with the help of legal and technical experts. The publication provides librarians, archivists, curators, and others who work to preserve software with a tool to guide their reasoning about when and how to employ fair use—the legal doctrine that allows many value-added uses of copyrighted materials—in the most common situations they currently face.

Libraries, archives, and museums hold thousands of software titles that are no longer in commercial distribution, but institutions lack explicit authorization from the copyright holders to preserve these titles or make them available. Memory institutions also hold a wealth of electronic files (texts, images, data, and more) that are inaccessible without this legacy software. The preliminary report released by the project team in February documents high levels of concern among professionals, who worry that while seeking permission to archive software is time-consuming and usually fruitless, preserving and providing access to software without express authorization is risky. Meanwhile, digital materials languish, and the prospects for their effective preservation dim.

In interviews with the project team, software preservation professionals made it clear that users and uses for legacy software are as various as human inquiry, and will multiply over time. In the words of Jessica Meyerson, a founder of the Software Preservation Network, “our cultural record is increasingly made up of complex digital objects.” Another interviewee invoked technology-investor Marc Andreessen’s argument that “software is eating the world,” observing that access to the digital cultural record is itself dependent on software.

The Code of Best Practices in Fair Use for Software Preservation will help this community overcome legal uncertainty by documenting a consensus view of how fair use applies to core, recurring situations in software preservation. Fair use has become a powerful tool for cultural memory institutions and their users, allowing them to realize the potential of stored knowledge with due respect for the interests of copyright holders. (See the 2012 Code of Best Practices in Fair Use for Academic and Research Libraries.) Fair use holds the same potential where software preservation is concerned, particularly given the transformative nature of the uses described in the Code.

The Code of Best Practices in Fair Use for Software Preservation presents a series of five situations in which librarians, archivists, curators, and others working to preserve software can employ fair use. The Code describes the activities, states the principle informing the choice to employ fair use, and makes clear the limitations of such use—that is, the outer bounds of the community consensus at this time. The five situations covered are:

  • Accessioning, stabilizing, evaluating, and describing digital objects
  • Documenting software in operation, and making that documentation available
  • Providing access to software for use in research, teaching, and learning
  • Providing broader networked access to software maintained and shared across multiple collections or institutions
  • Preserving files expressed in source code and other human-readable formats

The Code also includes a brief introduction to software preservation and copyright, an epilogue on the future of software preservation, and two appendices on (1) the fair use doctrine and preservation practice in general and (2) other copyright-related issues related to preservation.

This Code is the result of a project funded by the Alfred P. Sloan Foundation. Co–principal investigators Patricia Aufderheide of the Center for Media & Social Impact at American University’s (AU) School of Communication, Brandon Butler of the University of Virginia Library, Krista Cox of the Association of Research Libraries, and Professor Emeritus Peter Jaszi of the AU Washington College of Law conducted extensive interviews and focus groups with software preservation experts and other stakeholders to produce this Code. The project was coordinated by the Association of Research Libraries (ARL), the Center for Media & Social Impact at AU, and the Program on Information Justice and Intellectual Property at AU Washington College of Law.

Download, read, and use the Code of Best Practices in Fair Use for Software Preservation. The Code will be supported by webinars, workshops, online discussions, and educational materials later this year and in 2019. To stay up to date on news about this project, watch the ARL website, follow us on Facebook or Twitter, or subscribe to our email news lists. For more information, contact Krista Cox, krista@arl.org.

See also:

Pat Aufderheide, “Fair Use and the Future of Digital Culture,” CMSImpact

Brandon Butler, “Introducing the Code of Best Practices in Fair Use for Software Preservation,” The Taper

Documentary “Paywall: The Business of Scholarship” Premieres in Washington, DC

*This is a guest blog post by Judy Ruttenberg, ARL program director for strategic initiatives.*
*Updated September 11, 2018, with quotation from Geneva Henry.*

The documentary film Paywall: The Business of Scholarship made its global premiere in Washington, DC, on September 5, 2018, the same week that 11 European countries proclaimed that all their publicly funded research would be open access by 2020. Paywall producer and director Jason Schmitt and director of photography Russell Stone welcomed the DC audience, which comprised many of the scientists, publishers, and open access advocates featured in the 65-minute film. With minimal narration and expertly sequenced interviews, the film weaves together two principal stories: the exorbitant financial cost to access for-profit academic journals and the associated, incalculable human cost when doctors, patients, students, and would-be innovators all over the world hit paywalls that deny them access to the latest research.

Schmitt, an associate professor of media and communication at Clarkson University, told the DC audience that the film was made not for them but for their neighbors, friends, and colleagues who are not immersed in the world of academic publishing. To the uninitiated, the system makes little sense. The labor of writing articles is unpaid, as is much of the editing, peer review, and curation. Taxpayers fund most scientific research, whether done within government agencies or through universities, and yet the results (until recently) have not been available to them. The top five academic publishers—which dominate the market—earn profit margins up to ten times those of top technology firms. While many of the film’s subjects acknowledged innovation and value within these publishing companies, Elsevier in particular, most were quick to say those contributions are outweighed by the costs to the scientific enterprise of excluding so many people from participating in it.

Some of Paywall’s most compelling interviews address the consequences of exclusion. Brian Nosek, executive director of the Center for Open Science (COS), described a meeting with a cohort of graduate students in Budapest who were all studying implicit cognition. Why so many students, in one sub-field? Because the papers are largely available on the open internet. Schmitt met with medical students and faculty in Africa and India who were unable to access the latest literature, and unable to contribute their own discoveries to it. Paywalls inhibit innovation because they minimize the chance that “the right person will be in the right place at the right time,” with respect to the literature, said Tom Callaway, from the open source software company Red Hat. And the audience laughed along with Sci-Hub creator Alexandra Elbakyan as, in a rare on-camera interview, she explained that Sci-Hub is targeting this exclusion by helping Elsevier fulfill its mission to make “uncommon knowledge common.”

Paywall is a celebration of the open access (OA) movement and its victories in leveling the playing field through preprint services like arXiv, and through policies mandating public access to government-funded research. The film is also a sober reflection on the OA movement’s progress, as for-profit academic publishers have both stalled and monetized open access while maintaining ever-increasing subscription revenue. The consortium of European national funders, called cOAlition S, announced its initiative this week with a set of principles addressing these exorbitant costs, including a cap on open access publication fees and a prohibition on publishing in hybrid journals (which charge a mix of subscription and open access fees). Peter Suber, director of the Harvard Office for Scholarly Communication, emphasized in Paywall the critical importance of authors retaining copyrights in order for a large-scale open access system to function.

Geneva Henry, dean of Libraries and Academic Innovation at The George Washington University, also attended the premiere and offered this reflection:

Academic library leaders have been raising the concern for years about the unsustainable rate of inflation with online journals, particularly those supporting the sciences. We have shown our faculty and university leadership the solid data that demonstrates this problem, have cut journals each year to fit our budgets and have been met with criticism by the researchers, have provided information about open access and its advantages, and have received polite nods and smiles from everyone. But little has changed and the high-impact (high-cost) journals are still the ones that remain a priority for faculty publications. Paywall has the opportunity to present these audiences with perspectives from a wide variety of scholars and professionals who identify the issues we’ve been trying to communicate for so long. Its format as a film will enable broader distribution and hopefully be that communication vehicle for bringing this issue to the forefront of academic leadership. We’ve known for a long time that something needs to change and this film will hopefully serve as a catalyst for turning the tide on commercial publishing practices that limit the distribution of knowledge in our society. Perhaps librarians will now be viewed as the canaries in the coal mine rather than a bunch of chicken littles.

SPARC Europe, LIBER (the Association of European Research Libraries), and Research Libraries UK (RLUK) have all issued statements in support of cOAlition S. Peter Suber has also blogged about the plan.

Funded by a grant from the Open Society Foundations, Paywall will be screened by more than 175 universities this fall, and is available to stream under a CC BY 4.0 license at www.paywallthemovie.com. SPARC, a global coalition committed to making open the default for research and education, helped organize the DC premiere.

Government Petitioners’ Brief Points Out Verizon Throttling of Fire Department Battling Largest Fire in California History

On August 20, 2018, petitioners challenging the FCC’s abandonment of net neutrality protections in Mozilla v. FCC filed their initial briefs. Coverage of Mozilla’s joint brief with other non-government petitioners (including companies and public interest groups) is available here and here. This blog post focuses on the brief filed by government petitioners, which include 22 states (New York, California, Connecticut, Delaware, Hawaii, Illinois, Iowa, Kentucky, Maine, Maryland, Massachusetts, Minnesota, Mississippi, New Jersey, New Mexico, North Carolina, Oregon, Pennsylvania, Rhode Island, Vermont, Virginia, and Washington), the District of Columbia, the County of Santa Clara, the Santa Clara County Central Fire Protection District, and the California Public Utilities Commission. These states represent over 165 million people, approximately half of the United States population.

The government petitioners’ brief makes two primary arguments: 1) the 2017 Order is arbitrary and capricious and failed to take into account harms to consumers, including public safety issues; and 2) the FCC did not have valid authority to preempt state and local governments from enacting their own net neutrality protections.

The highlight of the government petitioners’ brief is its focus on clear, real examples of the harms that the absence of net neutrality protections will have on safety, health, and the public interest. While the FCC’s 2017 reversal of net neutrality protections relies on voluntary commitments, Internet companies have demonstrated that they will prioritize their own interests over the public’s:

BIAS [Broadband Internet Access Service] providers have shown every indication that they will prioritize economic interests, even in situations that implicate public safety. For example, a BIAS provider recently throttled the connection of a County Fire emergency response vehicle involved in the response to the largest wildfire in California history and did not cease throttling even when informed that this practice threatened public safety (emphasis added).

In this case, while the County was fighting the Mendocino Complex Fire—the largest fire in California’s state history—it experienced throttling by its ISP, Verizon. The addendum to the government petitioners’ brief includes a declaration by Santa Clara County Fire Chief Anthony Bowden, who notes that the fire department relies on “Internet-based systems to provide crucial and time-sensitive public safety services. The Internet has become an essential tool in providing fire and emergency response, particularly for events like large fires, which require the rapid deployment and organization of thousands of personnel and hundreds of fire engines, aircraft, and bulldozers. During these events, resources are marshaled from across the state and country—in some cases even from other countries” and management of these resources depends on the Internet.

As Bowden explains, the unit facilitating resources “typically exchanges 5-10 gigabytes of data per day via the Internet using a mobile router and wireless connection. Near real-time information exchange is vital to proper function . . . Even small delays in response translate into devastating effects, including loss of property, and, in some cases, loss of life.” As a result, high-speed Internet is critical in addressing these fires.

Despite the fact that Santa Clara County Fire believed it had purchased an “unlimited” data plan, Verizon throttled the County’s usage “and data rates had been reduced to 1/200, or less, than the previous speeds.” When employees of Santa Clara County Fire e-mailed Verizon requesting that the throttling be lifted for public safety purposes:

Verizon representatives confirmed the throttling, but rather than restoring us to an essential data transfer speed, they indicated that County Fire would have to switch to a new data plan at more than twice the cost, and they would only remove throttling after we contacted the Department that handles billing and switched to the new data plan.

Indeed, the e-mail exchange attached as an exhibit in the addendum reports a side-by-side comparison: “a crew members personal phone using Verizon was seeing speeds of 20MBps/7Mbps. The department Verizon device is experiencing speeds of 0.2Mbps/0.6MBps, meaning it has no meaningful functionality.”
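
As a rough illustration of what those numbers imply, here is a back-of-the-envelope calculation in Python. The speeds and the 5-10 GB/day figure come from the quoted exhibit and declaration; treating every unit as Mbps is an assumption, since the capitalization in the quoted e-mail is inconsistent.

```python
# Rough, illustrative math on the throttling figures quoted above.
# Assumption: all speeds are in Mbps (the exhibit's unit capitalization
# is inconsistent), with download/upload pairs as reported.

normal_down, normal_up = 20.0, 7.0        # personal phone (Mbps)
throttled_down, throttled_up = 0.2, 0.6   # department device (Mbps)

print(f"download slowed ~{normal_down / throttled_down:.0f}x")  # ~100x
print(f"upload slowed ~{normal_up / throttled_up:.0f}x")        # ~12x

# The declaration says the unit typically exchanges 5-10 GB per day.
# At the throttled rate, moving 10 GB takes on the order of days, not hours.
seconds = 10 * 8000 / throttled_down  # 10 GB is roughly 80,000 megabits
print(f"10 GB at 0.2 Mbps: ~{seconds / 86400:.1f} days")  # ~4.6 days
```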

In another e-mail exchange questioning why Verizon was throttling the Santa Clara County Fire when the County believed it had purchased unlimited data, a Verizon manager replied, “Verizon has always reserved the right to limit data throughput on unlimited plans. All unlimited data plans offered by Verizon have some sort of data throttling built-in.”

While Verizon’s response to the Santa Clara County Fire Department in the midst of fighting the largest fire in California history is an extreme example of an ISP acting in its own self-interest, there are other examples of concern for state and local governments seeking to serve the health and safety needs of their residents. For example, the government petitioners’ brief points to California’s efforts to update and manage its energy grid to balance load, manage congestion, and satisfy reliability standards.

Another example cited by the County of Santa Clara is its “web-based emergency operations center to facilitate coordination internally with other agencies and with first responders in case of emergency.” It uses a web-based public alert system to notify the public about emergencies such as evacuation orders or disease outbreaks and “Significant delays from blocking, throttling, or deprioritization could impede effective notification and jeopardize safety in public-health emergencies.” The County’s hospital also uses web-based systems that are latency-sensitive, including development of expanded telemedicine capabilities which will allow doctors to “perform triage and improve outcomes in time-sensitive situations (such as strokes or vehicular accidents) where immediate diagnosis can mean the difference between life and death.” In developing these improved systems for public health and safety, the County of Santa Clara notes that it invested substantial resources, including over a million dollars in its medical records system, and did so in reliance on the FCC’s protection of an open Internet.

Ultimately, the government petitioners’ brief highlights the ways that state and local governments rely on an open Internet to serve the public interest and the health and safety needs of their residents. As the brief notes, the FCC erred in assuming

that providers’ voluntary commitments coupled with existing consumer protection laws provide sufficient protection. The Commission offered no meaningful defense of its decision to uncritically accept industry promises that are untethered to any enforcement mechanism. Nothing in the order would stop a BIAS provider from abandoning its voluntary commitments, revising its Transparency Rule disclosures, and beginning to block, throttle, or engage in paid prioritization, subject only to the Transparency Rule’s limited disclosure requirements—leading to the very harms to consumer interests and public safety that the Commission’s long-standing commitment to protecting the open Internet was intended to prevent.

Mozilla, Internet Companies, Public Interest Groups and Other Petitioners File Brief in Net Neutrality Case

The litigation over the FCC’s 2017 decision to abandon net neutrality protections is currently before the D.C. Circuit in the case captioned Mozilla v. FCC. Briefs by petitioners challenging the FCC’s 2017 Order were filed on Monday, August 20. The first brief (“non-government petitioners”) was filed jointly by Mozilla, Vimeo, Public Knowledge, Open Technology Institute, National Hispanic Media Coalition, NTCH, Benton Foundation, Free Press, Coalition for Internet Openness, Etsy, the Ad Hoc Telecom Users Committee, Center for Democracy and Technology, and INCOMPAS; a summary of its arguments is provided below. The second brief, which will be covered in a separate blog post, was filed by government petitioners, consisting of 22 states, the District of Columbia, the County of Santa Clara, the Santa Clara County Central Fire Protection District, and the California Public Utilities Commission.

The non-government petitioners include a wide range of affected stakeholders: Internet companies, broadband providers, Internet consumers and public interest groups.

Mozilla’s brief points out that the FCC’s 2015 Open Internet Order was the result of a lengthy notice-of-proposed-rulemaking process and careful consideration: “Yet in the aftermath of the 2016 presidential election, the FCC did an abrupt about-face, comprehensively embracing the BIAS [Broadband Internet Access Service] providers’ objections this Court rejected in USTA and Verizon, revoking the telecommunications service designation of fixed and mobile BIAS, repealing all the rules governing BIAS provider conduct, and disavowing every source of authority for such rules.” Indeed, as numerous critics have noted, the FCC’s 2017 decision reversing its earlier Open Internet Order seemed to be a predetermined outcome.

Mozilla’s brief makes several arguments: 1) the FCC’s Order mischaracterizes the way the Internet works; 2) the FCC impermissibly renounced its enforcement authority; and 3) the FCC’s repeal of the 2015 Open Internet Order was arbitrary and capricious, ignoring the reasoned decision-making required by an agency.

Pointedly, Mozilla’s brief notes: “In 2016, this Court upheld the rules in their entirety. In 2017, a new FCC undid them, again in their entirety, on a record that had changed little, if at all.” Additionally, “One after another, the FCC reversed virtually all of the 2015 Order’s hundred-plus factual findings, proclaiming wrong what had been found to be right in 2015 and upheld as right in 2016. The abrupt about-face was not adequately reasoned.”

In arguing that the FCC’s reversal of the 2015 Open Internet Order was arbitrary and capricious, Mozilla’s brief points out that the FCC “erroneously excluded consumer complaints,”* “skewing the record in favor of its preferred outcome and subverting the rulemaking process.” Such behavior contravenes the Administrative Procedure Act (APA), which requires agencies to examine relevant data and provide reasoned explanations; “an agency cannot close its eyes to evidence in its possession on which it chooses not to rely.”

The FCC’s complete abandonment of net neutrality protections ignored not only the lengthy and detailed record in past proceedings, but also the comments submitted in its 2017 notice of proposed rulemaking. Various amici for the petitioners, whose briefs will be due on Monday, August 27, will also point to the arbitrary and capricious decision-making by the FCC.

*A representative (but not comprehensive) list of the companies, organizations, and governments appears on the first several pages. Several library organizations (including ARL, ALA, and AALL), along with city governments, state governments, public interest groups, and companies, are included.