- Lauren B. Collister, PhD (Scholarly Communications Librarian, University Library System, University of Pittsburgh)
- Timothy S. Deliyannides, MSIS (Director, Office of Scholarly Communication and Publishing and Head, Information Technology, University Library System, University of Pittsburgh)
In the article, Collister and Deliyannides lay out:
- A brief history of altmetrics
- An overview of altmetrics providers
- Using Plum Analytics at the University of Pittsburgh
One of my favorite things that they said in their article was:
Rather than thinking of altmetrics as a simple complement or alternative to traditional metrics, the wide range of available metrics can be viewed as existing along a continuum from scholarly impact on one end (traditional citations and bookmarks in reference management databases) to popular and societal impact on the other (tweets and Facebook mentions).
The reason I like this quote so much is that it mirrors our view at Plum Analytics. We don’t think of altmetrics as better than, or an alternative to, citations; in fact, this is one reason we often shy away from the term “altmetrics.” Rather, we think of altmetrics as providing data and analysis to help you understand what is happening all over the world with your articles, your researchers, your research labs, your departments, your grants, your journals, your issues, and any other way you want to evaluate your institution or organization.
We think altmetrics should help you tell the stories of your research and answer questions you could not answer before.
This is precisely why we group our metrics into five categories and give you transparent metrics within each one:
- Usage – the most sought after metric after citations
- Captures – a leading indicator of citations
- Mentions – where people are truly engaging with the research
- Social Media – tracks the promotion and buzz of research
- Citations – the traditional measure of research impact
Actually, Collister and Deliyannides said it better than I can:
PlumX provides both altmetrics and traditional metrics from a variety of sources. Showing these side-by-side allows a researcher to see all of the different impacts of the article in one place. This presentation allows researchers to see traditional and alternative metrics side by side and evaluate them as they see fit. It does not place one metric above another as more valuable; instead it presents categories that may be of interest in different situations.
This position also goes to the heart of why we’ve never “scored” the research or the researchers. Without going into all of our beliefs on this subject, I’ll let what Collister and Deliyannides said speak for me right now:
PlumX provides numbers that can be evaluated according to the user’s needs; there are no “scores” like those present in Altmetric, ImpactStory, and many other outlets, which can obscure the meaning of the data below the score. Researchers at the University of Pittsburgh have expressed enjoyment of this versatile and data-driven approach, which has been viewed as a relief from increasing pressure to apply scores and ratios and rankings to the output of a researcher. Generally, researchers appreciate the ability to see the data behind their metrics without being compared to an unknown amount of others with a scoring system that may not be immediately transparent.
There is so much good content in this article that I could quote the entire piece. Instead, I encourage you to read it.
I will close this blog post with part of the conclusion from this article:
Whether altmetrics can or should replace citation counts is not the question that needs to be asked…The question that we should ask is how we can evaluate the different metrics available to us, how individual scholarly communities can best use the data, and how we can influence the consideration of many different impacts of research beyond traditional citation counts.
To read the whole article, go here.