
Demonstrating Impact in the Social Sciences — and Thinking Like a Social Scientist

Last Updated on May 11, 2020, 1:08 pm ET

Guest blog post by Judy Ruttenberg, director, Scholars and Scholarship, ARL

“I am more than my h-index” is a popular laptop sticker in the open science community, where the index—a long-reigning measure of influence in a field—is the largest number h such that h of a scholar’s published articles have each been cited at least h times. Core to open science and open scholarship, however, is sharing research results in advance of formal publication—through preprints, auxiliary materials shared on platforms like GitHub and the Open Science Framework, and scholars’ direct engagement with the public through op-eds, blogs, and Twitter. Sage Publishing just released a white paper on “The Latest Thinking About Metrics for Research Impact in the Social Sciences,” and kicked off a public discussion of its findings last weekend at the Association for Psychological Science (APS) annual convention in Washington, DC.
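
For concreteness, here is a minimal Python sketch of that computation; the function name and the sample citation counts are illustrative, not drawn from the post.

    # A minimal sketch: the h-index is the largest h such that h papers
    # have at least h citations each. Sample counts are illustrative.
    def h_index(citations):
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4:
    # four papers have at least 4 citations, but not five with at least 5.
    print(h_index([10, 8, 5, 4, 3]))  # -> 4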

The key findings of the Sage report include:

  1. The full scholarly community must believe that new impact metrics are useful, necessary, and beneficial to society.
  2. A robust new regime of impact measurement must transcend, but not necessarily supplant, current literature-based systems.
  3. A new regime of social science impact measures must integrate the experiences and expectations of how nonacademic stakeholders will define impact.
  4. All stakeholders must understand that although social science impact is measurable, social science is not STEM, and social science’s impact measurements may echo STEM’s but are unlikely to mirror them.
  5. Social science needs a global vocabulary, a global taxonomy, global metadata, and finally a global set of benchmarks for talking about impact measurement.

Camille Gamboa of Sage moderated a lively panel at APS composed of Altmetric founder Euan Adie, HuMetricsHSS team member Rebecca Kennison, and Simine Vazire, co-founder of the Society for the Improvement of Psychological Science (SIPS). While all are champions of impact as a greater aspiration than citation counts or paper quantity, the panel nonetheless raised deeply thoughtful questions about how impact is defined and measured, how new biases might be introduced and mitigated, and how to maintain scholarly integrity and accountability while making claims of impact in a direct-to-reader environment. “High-impact journals” and anonymous peer review, despite their known flaws, have traditionally provided that accountability. And, from an information-seeking perspective (I am a librarian, not a social scientist), they have provided filters.

So how does open scholarship in the social sciences advance, and how do scholars meet the demands placed on them by universities and funding agencies for public engagement and demonstrated impact? Do they, as one audience member asked, go directly to where readers are, turning their research papers into TED Talks and slick YouTube videos? Dr. Vazire was not so sure. Science journalists, she argued, play an important role: they pay a professional price for overstating claims of impact, while individual scientists may not. Dr. Vazire said she flags overstated claims when she reviews papers, but that may not be the norm. Rebecca Kennison talked about a journal policy review project she recently undertook around publication ethics, and said that ethics guidelines addressing claims of impact would be both novel and, from her perspective, welcome. In defining impact at the outset, all panelists noted that impact can be both positive and negative—recalling Andrew Wakefield’s retracted but highly influential paper claiming that vaccines cause autism. Vazire wondered whether we can learn from professional sports, which use many different statistical indicators to assess a particular player’s impact—some of which are holistic: Does the team perform better or worse when the player is on the field?

Panelist Euan Adie founded Altmetric precisely to help address the vexing problem of measuring scholarly impact, answering the question “Who’s talking about your research?” on social media, in the press, and in policy documents, the last of which can be the grand prize for social scientists. He described a robust infrastructure of intermediaries, such as think tanks and policy institutes, that sit (with all their biases, flaws, and sometimes helpful filters) between social science researchers and policy-makers. Less formal networks of scholars, such as the Scholars Strategy Network and the Council on Contemporary Families, help social scientists get closer to readers outside the academy, boosting their capacity for non-academic “impact” while providing the scholarly oversight to assess claims based on the evidence provided and methods applied. In a tongue-in-cheek backlash to scholars’ social media popularity, we now have the “Kardashian Index,” built on the suggestion that if one’s Twitter followers vastly outnumber what one’s formal citations would predict, one’s public profile may be out of proportion to one’s scholarly record.
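
For the curious, here is a back-of-the-envelope Python sketch of the index as proposed in Neil Hall’s 2014 Genome Biology paper; the fitted constants (43.3 and 0.32) come from that paper, and the sample numbers are invented for illustration.

    # Hall's Kardashian Index: K = actual Twitter followers divided by
    # the follower count "predicted" from citations, F(c) = 43.3 * C**0.32.
    # Constants are from the 2014 paper; sample numbers are illustrative.
    def kardashian_index(followers, citations):
        expected = 43.3 * citations ** 0.32
        return followers / expected

    # A scholar with 40,000 followers and 500 citations scores roughly 126,
    # far above the K > 5 threshold Hall jokingly flagged.
    print(round(kardashian_index(40_000, 500)))  # -> 126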

These issues hit very close to home for research libraries. ARL has decades of data on the outputs and transactions of its membership—number of volumes on the shelf, expenditures, number of reference questions answered—and is now making the exciting transition to measuring impact. Resources like “The Latest Thinking About Metrics for Research Impact in the Social Sciences,” and colleagues like Adie, Kennison, and Vazire, provide excellent food for thought for libraries undertaking this work and confronting its challenges alongside the research and learning community.
