Metrics for ALL

This post originally appeared on the 2:AM Conference blog as a guest blog from Andrea Michalek, the Co-Founder and President of Plum Analytics. Andrea shares her thoughts about the current state of altmetrics and opportunities for the future.


The landscape around alternative metrics has been evolving rapidly. Having gone from a Twitter hashtag in 2011 to a key component of research evaluation at many institutions, metrics that move beyond citation counts and the Journal Impact Factor are here to stay.

When I was asked to write a blog post for the 2AM conference blog, my first thought was, “Let’s stop calling it altmetrics!” Mike Buschman and I co-founded Plum Analytics in January 2012, and joined the bustling altmetrics community to share best practices and discuss different approaches in the field.

In April 2013, we wrote an article titled “Are Altmetrics Still Alternative?” in which we made the claim that “it is our position that all these metrics are anything but alternative. They are readily available, abundant and essential.”

The main points in that article still hold true today:

  • As the pace of scholarly communication and science advancement has increased, citation analysis has become a lagging indicator of prestige. Citations can take 3-5 years to accrue the critical mass necessary for meaningful analysis.
  • Not all influences are cited in an article, leaving the measure incomplete. Research outputs other than journal articles are typically not cited at all.
  • Securing research funding is getting more competitive. When applying for grants, researchers’ most highly cited work will typically be several years old and not necessarily the most relevant to the grant application at hand. If researchers can show that their recent research is generating a lot of interaction in the scholarly community, that information can provide an advantage in this tight funding environment.

Two and a half years later, with over 250 universities, research institutes, funders, and corporations around the world using metrics from Plum Analytics to answer key questions about their research, we have learned even more from how real people, solving real problems, use these metrics on a daily basis. A few key areas of learning stand out:

Be Comprehensive

At Plum Analytics, we are not scholarly publishers. We did not start by looking at journal articles and trying to build better metrics around them. Instead, we began with a very different end in mind: using the data about how people interact with research to tell the full story behind their work.

When you start with the question, “What do you consider to be your research output?” and ask it across many different disciplines, you start to build a base that tells a more complete story of the outcomes of research. Working with librarians and others who support research, we now track over 40 separate types of research artifacts: articles, books, clinical trials, conference proceedings, datasets, figures, presentations, videos, and more. For example, when looking at the digital traces of how a book has been interacted with, you can find indicators of how many times the ebook has been viewed or downloaded online, how many libraries hold the book in their collections, which Wikipedia articles reference the book, how many online reviews have been written, and what they say.

Moreover, to get a full picture of research, you need to look beyond any single type of engagement with it. Beyond cited-by counts, there are four categories of metrics to consider:

  • Usage – The raw engagement with research: clicking a link, viewing an article, downloading data, playing a video, etc.
  • Captures – A user has indicated that they plan to return to an artifact by favoriting it, bookmarking it, becoming a reader, or otherwise digitally indicating their intent.
  • Mentions – Blog posts, book reviews, comments, and Wikipedia mentions
  • Social Media – Tweets, likes, +1s, shares

Metrics in each of these categories can now be harvested and applied to research, in addition to citations, giving a much more comprehensive and holistic view of impact. These new metrics are also much more timely than citation metrics and can keep pace with new formats much faster than entrenched, legacy practices.
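To make these categories concrete, here is a minimal sketch in Python of how per-artifact indicators might be bucketed into the five categories. The field names and counts are illustrative assumptions, not Plum Analytics’ actual data model.

```python
from dataclasses import dataclass, field

@dataclass
class ArtifactMetrics:
    """Illustrative per-artifact metrics, grouped by category (hypothetical schema)."""
    artifact_id: str
    artifact_type: str                                 # e.g. "article", "book", "dataset", "video"
    usage: dict = field(default_factory=dict)          # views, downloads, plays
    captures: dict = field(default_factory=dict)       # bookmarks, favorites, readers
    mentions: dict = field(default_factory=dict)       # blog posts, reviews, Wikipedia
    social_media: dict = field(default_factory=dict)   # tweets, likes, shares
    citations: dict = field(default_factory=dict)      # cited-by counts

    def totals_by_category(self) -> dict:
        """Summarize each category separately, without collapsing to a single score."""
        return {
            "usage": sum(self.usage.values()),
            "captures": sum(self.captures.values()),
            "mentions": sum(self.mentions.values()),
            "social_media": sum(self.social_media.values()),
            "citations": sum(self.citations.values()),
        }

# Example: an ebook tracked across several categories (made-up numbers).
book = ArtifactMetrics(
    artifact_id="isbn:978-0-00-000000-0",
    artifact_type="book",
    usage={"ebook_views": 1200, "downloads": 340},
    captures={"library_holdings": 87, "bookmarks": 15},
    mentions={"wikipedia_articles": 3, "online_reviews": 12},
    social_media={"tweets": 54, "likes": 21},
    citations={"cited_by": 9},
)
print(book.totals_by_category())
```

Keeping the categories separate in the data model is what makes it possible to report them separately later, rather than blending them into one number.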

Measure at the Artifact Level – Not the Journal

There are bad articles in high impact factor journals, and great articles in low impact factor journals. Even if Journal Impact Factor (JIF) were a perfect measure of the quality of a journal, it would still be an inappropriate measure of the quality of a particular article in that journal.

Many studies have been performed looking at the serious issues that are caused when only looking at JIF. For a regional example of the harm these practices can cause, see: The hidden factors in impact factors: a perspective from Brazilian science.

In the paper The Skewed Few: Does “Skew” Signal Quality Among Journals, Articles, and Academics?, Joel Baum writes, “The idea that a few ‘top’ authors from a few ‘top’ institutions publish a few ‘top’ articles in a few ‘top’ journals has a certain, orderly appeal to it. But this order is not without consequences.” His paper goes on to point out, for example, that:

  • 20% of the papers examined accounted for half of the citations
  • Fewer than 20 schools accounted for over half of all citations

He describes how this skew contributes to the Matthew Effect, where the rich get richer and the poor get poorer. (For those without access to the toll-access paper, you can view the preprint or a presentation related to this work.)
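As a quick illustration of what this kind of skew looks like in practice, the short sketch below computes the share of citations held by the most-cited 20% of papers in a made-up distribution; the numbers are hypothetical and not taken from Baum’s data.

```python
def top_share(citations, top_fraction=0.2):
    """Fraction of all citations held by the most-cited `top_fraction` of papers."""
    ranked = sorted(citations, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    return sum(ranked[:k]) / sum(ranked)

# Hypothetical, heavily skewed distribution: a few papers gather most of the citations.
citations = [120, 85, 60, 12, 9, 7, 5, 3, 2, 1, 1, 0, 0, 0, 0]
print(f"Top 20% of papers hold {top_share(citations):.0%} of the citations")
```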

The Journal Impact Factor has come under considerable scrutiny and criticism, notably from initiatives like the San Francisco Declaration on Research Assessment, where over 150 scientists and 75 research organizations stood against having a single score for a journal represent the quality of the articles it contains. Although Eugene Garfield never intended the JIF to be used to assess quality, this was invariably what happened.

Better Visualizations Lead to Better Understanding

As we look towards the future of more modern metrics, we believe that any single score (even at the article level) is overly simplistic in what it represents and cannot be used as a responsible indicator, especially when comparing across disciplines. It is therefore essential to deliver this metric data in ways that make it understandable, without resorting to a single score per document. The key to navigating complex data quickly and gaining insight from it is to use elegant, simple visualizations that do the hard work for you.
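As one possible illustration (a sketch, not a description of any particular product), a simple per-category chart can convey far more than a composite number. The snippet below plots the illustrative category totals from the earlier book example with matplotlib.

```python
import matplotlib.pyplot as plt

# Illustrative category totals for one artifact (the hypothetical book from above).
totals = {"Usage": 1540, "Captures": 102, "Mentions": 15,
          "Social Media": 75, "Citations": 9}

fig, ax = plt.subplots()
ax.barh(list(totals.keys()), list(totals.values()))  # one bar per category, no composite score
ax.set_xlabel("Count")
ax.set_title("Engagement with one artifact, by metric category")
plt.tight_layout()
plt.show()
```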

Article Level Metrics are just a Building Block

Article-level metrics, even when calculated comprehensively across all five categories, are just a building block. The power and the insight come from being able to pull them together to tell the stories of the people, the groups they are affiliated with, and the topics they care about.
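Here is a minimal sketch of what that rolling-up might look like: hypothetical per-artifact category totals are aggregated by the group that produced them, keeping the categories separate rather than collapsing them into one score. The records and owner names are invented for the example.

```python
from collections import defaultdict

# Hypothetical artifact-level records, each tagged with the group that produced it.
artifact_records = [
    {"owner": "Lab A", "totals": {"usage": 1540, "captures": 102,
                                  "mentions": 15, "social_media": 75, "citations": 9}},
    {"owner": "Lab A", "totals": {"usage": 310, "captures": 44,
                                  "mentions": 2, "social_media": 120, "citations": 31}},
    {"owner": "Lab B", "totals": {"usage": 980, "captures": 12,
                                  "mentions": 7, "social_media": 5, "citations": 3}},
]

def rollup(records):
    """Aggregate per-artifact category totals by owner, one sum per category."""
    by_owner = defaultdict(lambda: defaultdict(int))
    for record in records:
        for category, count in record["totals"].items():
            by_owner[record["owner"]][category] += count
    return {owner: dict(categories) for owner, categories in by_owner.items()}

print(rollup(artifact_records))
```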

Metrics that Keep Pace with Online Scholarly Communication

As we look at how scholars, and those who interact with research outputs, do so online, it is clear that the pace of communication, the amount of data produced, and the variety of mechanisms for consuming it will all continue to grow. Measurement instrumentation needs to function in near real time and at web scale. The technology infrastructure needs to be robust and enterprise-grade. And the data we collect, and the way we represent it, need to capture today’s interactions while remaining flexible for the future.