Demonstrating impact in the social sciences — and thinking like a social scientist

*Guest blog post by Judy Ruttenberg, ARL Director, Scholars and Scholarship*

“I am more than my H-Index,” is a popular laptop sticker in the open science community, where the index—a long-reigning measure of influence in a field—is the largest number h such that h of a scholar’s published articles have each been cited at least h times. Core to open science and open scholarship, however, is sharing research results in advance of formal publication—through preprints, auxiliary materials shared on platforms like GitHub and the Open Science Framework, and scholars’ direct engagement with the public through op-eds, blogs, and Twitter. Sage Publishing just released a white paper on “The Latest Thinking About Metrics for Research Impact in the Social Sciences,” and kicked off a public discussion of its findings last weekend at the Association for Psychological Science (APS) annual convention in Washington, DC.
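
To make that definition concrete, here is a minimal sketch of the computation (the function and the example citation counts are illustrative, not drawn from the Sage report):

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers have at least h citations each."""
    citations = sorted(citations, reverse=True)
    h = 0
    while h < len(citations) and citations[h] >= h + 1:
        h += 1
    return h

# Example: five papers cited [10, 8, 5, 4, 3] times have an h-index of 4:
# four papers have at least 4 citations each, but there are not five papers
# with at least 5 citations each.
print(h_index([10, 8, 5, 4, 3]))  # 4
```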

First, key findings of the Sage report include:

  1. The full scholarly community must believe that new impact metrics are useful, necessary, and beneficial to society.
  2. A robust new regime of impact measurement must transcend, but not necessarily supplant, current literature-based systems.
  3. A new regime of social science impact measures must integrate the experiences and expectations of how nonacademic stakeholders will define impact.
  4. All stakeholders must understand that although social science impact is measurable, social science is not STEM, and social science’s impact measurements may echo STEM’s but are unlikely to mirror them.
  5. Social science needs a global vocabulary, a global taxonomy, global metadata, and finally a global set of benchmarks for talking about impact measurement.

Camille Gamboa of Sage moderated a lively panel at APS composed of Altmetric founder Euan Adie, HuMetricsHSS team member Rebecca Kennison, and Simine Vazire, co-founder of the Society for the Improvement of Psychological Science (SIPS). While all are champions of impact as a greater aspiration than citation counts or paper quantity, the panel nonetheless raised deeply thoughtful questions about how impact is defined and measured, how new biases might be introduced and mitigated, and the challenge of maintaining scholarly integrity and accountability while making claims of impact in a direct-to-reader environment. “High-impact journals” and anonymous peer review, despite their known flaws, have traditionally provided that accountability. And, from an information-seeking perspective (I am a librarian, not a social scientist), they have provided filters.

So how does open scholarship in the social sciences advance, and how do scholars meet the demands placed on them by universities and funding agencies for public engagement and demonstration of impact? Do they, as one audience member asked, go directly to where readers are, turning their research papers into TED Talks and slick YouTube videos? Dr. Vazire was not so sure. Science journalists, she argued, play an important role: they pay a professional price for overstating claims of impact, while individual scientists may not. While Dr. Vazire said she flags overstated claims when she reviews papers, that may not be the norm among peer reviewers. Rebecca Kennison talked about a journal policy review project she recently undertook around publication ethics, and said such guidelines would be both novel and, from her perspective, welcome. In defining impact at the outset, all panelists noted that impact can be both positive and negative, recalling Andrew Wakefield’s highly influential (and since retracted) paper claiming that vaccines cause autism. Dr. Vazire wondered whether we can learn from professional sports, which use many different statistical indicators to assess a particular player’s impact, some of which are holistic: Does the team perform better or worse when the player is on the field?

Panelist Euan Adie founded Altmetric precisely to help address the vexing measure of scholarly impact, answering the question “Who’s talking about your research?” on social media, in the press, and in policy documents, the latter of which can be the grand prize for social scientists. He described a robust infrastructure of intermediaries, such as think tanks and policy institutes, that sit (with all their biases, flaws, and sometimes helpful filters) between social science researchers and policy-makers. Less formal networks of scholars, such as the Scholars Strategy Network and the Council on Contemporary Families, help social scientists get closer to readers outside of the academy, boosting their capacity for non-academic “impact” while providing the scholarly oversight to assess claims based on the evidence provided and methods applied. In a tongue-in-cheek backlash to scholars’ social media popularity, we now have the “Kardashian Index,” with the suggestion that if one’s Twitter following vastly outnumbers one’s formal citations, one’s public renown may have outrun one’s scholarly record.
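
For the curious, the index comes from Neil Hall’s satirical 2014 paper “The Kardashian index,” which fit expected Twitter followers to citation counts and defined K as actual over expected followers. A minimal sketch using the formula reported in that paper (the function name and example numbers are my own):

```python
def kardashian_index(followers: int, citations: int) -> float:
    """K-index from Neil Hall's satirical 2014 paper: actual Twitter
    followers divided by the expected count F(C) = 43.3 * C^0.32.
    A K above 5 jokingly marks a 'Science Kardashian.'
    """
    expected_followers = 43.3 * citations ** 0.32
    return followers / expected_followers

# Example: 2,000 followers on 100 citations gives K ≈ 10.6
print(round(kardashian_index(2_000, 100), 1))
```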

These issues hit very close to home for research libraries. ARL has decades of data on the outputs and transactions of its membership—number of volumes on the shelf, expenditures, number of reference questions answered—and is now making the exciting transition to measuring impact. Resources like “The Latest Thinking About Metrics for Research Impact in the Social Sciences,” and colleagues like Adie, Kennison, and Vazire, provide excellent food for thought for libraries undertaking this work and confronting its challenges along with the research and learning community.

Opportunities for Libraries in the AI Ecosystem

*Guest post by Cynthia Hudson-Vitale, Head of Research Informatics and Publishing, Penn State University Libraries*

Artificial intelligence (AI) for data discovery and reuse was the topic of a recent conference sponsored by the National Science Foundation (NSF) and hosted by Carnegie Mellon University (CMU), in cooperation with the Association for Computing Machinery (ACM). Beth Plale, Senior Advisor for Public Access at NSF, set the context: Harnessing the data revolution will require research, educational pathways, and advanced cyberinfrastructure.

Librarians, researchers across disciplines, computer scientists, industry representatives, and technologists came together at CMU to share practices and discuss methods for leveraging machine learning and artificial intelligence for metadata generation, data curation, data discovery, and data integration. Data privacy, data security, and mechanisms to limit algorithmic bias were prominent themes running through many of the papers.

While many institutions and researchers are exploring or developing AI models to solve complex issues, this conference was unique, both in the variety of perspectives it provided and the intentional focus on data discovery and reuse practices. Notable papers and presentations included:

  • Extracting key phrases from texts to aid in discovery (a minimal sketch of this idea follows the list)
  • Creating descriptive tags for images
  • Recognizing and transcribing handwriting from digitized assets
  • Finding and extracting dataset references from published articles
  • Protecting clinical patient privacy
  • Developing synthetic control arms for clinical trials
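
As a flavor of the first item above, here is a minimal, dependency-free sketch of RAKE-style key-phrase extraction, in which candidate phrases are runs of non-stopwords ranked by summed word frequency. The stopword list, scoring, and example are illustrative and not drawn from any particular conference paper:

```python
import re
from collections import Counter

# A tiny stopword list for illustration; real systems use much fuller ones.
STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "for", "on", "with",
             "is", "are", "that", "this", "from", "by", "as", "be", "or", "can"}

def extract_key_phrases(text: str, top_n: int = 5) -> list[str]:
    """Rank candidate phrases (maximal runs of non-stopwords) by the
    summed corpus frequency of their words, a RAKE-style heuristic."""
    words = re.findall(r"[a-z][a-z\-]*", text.lower())
    freq = Counter(w for w in words if w not in STOPWORDS)

    # Candidate phrases are maximal runs of non-stopwords.
    phrases, current = [], []
    for w in words:
        if w in STOPWORDS:
            if current:
                phrases.append(" ".join(current))
                current = []
        else:
            current.append(w)
    if current:
        phrases.append(" ".join(current))

    scored = {p: sum(freq[w] for w in p.split()) for p in set(phrases)}
    return sorted(scored, key=scored.get, reverse=True)[:top_n]

print(extract_key_phrases(
    "Machine learning models can generate metadata for data discovery, "
    "and metadata quality drives data discovery and data reuse."))
# ['metadata quality drives data discovery', ...]
```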

While much of the research focused on AI, many speakers emphasized human curation and intervention as a required component of workflows for model design and validation.

Keith Webster, Dean of CMU Libraries, summed up the takeaways and themes of the conference as demands for:

  • collaboration across disciplines and domains,
  • improved mechanisms for data discovery,
  • increased incentives for sharing data,
  • improved standards for data interoperability and adoption,
  • a better understanding and application of ethical guidelines,
  • research on the power of data reuse, and
  • enhanced tools for AI.

Huajin Wang, PhD, Research Liaison for Biology & Computer Science at CMU and Co-PI for the conference, said, “I am really excited and touched by the enthusiasm participants shared for moving forward as a unique and diverse community. I look forward to growing the community, and encourage everyone to keep the conversation going and join the mailing list aidr-all@lists.andrew.cmu.edu.”

For libraries, this conference surfaced a number of opportunities, including:

  • Delivering training and education around AI and data science topics
  • Providing expertise around metadata and controlled vocabularies
  • Acting as facilitators of local communities of practice for AI
  • Leveraging AI models to supplement human curation of datasets and enhance the discoverability of library digital assets (including digitized images, text, etc.)
  • Supporting and advocating for AI privacy initiatives

Discussions around data privacy and AI reinforced many of the ongoing conversations that libraries are having about protecting student and library patron privacy.

Presentation and poster abstracts may be found on the conference website; some are also published as an F1000Research collection. Full papers of selected presentations will be peer-reviewed and published shortly as AIDR ’19 in the ACM International Conference Proceeding Series (ICPS).