When we started working with all of the metrics we could gather from the data exhaust created when people interact with research, we quickly realized three things:
- Not all metrics are created equal; a download is not the same as a tweet.
- Synthesizing all of the metric data into a single number dilutes the meaning.
- Categorizing the metrics into buckets gives you useful information.
For example, we have seen that people “capturing” work to save it for later is often an early indicator of future citations. Since citation counts lag, this is a great way to find work that other researchers are finding valuable. But we don’t want to “bury” those captures inside some grand composite number – you would lose this valuable information.
After a lot of experimentation and work with early customers, we sorted metrics into five useful categories. Here are examples of what we put into each:
- Usage – Downloads, views, book holdings
- Captures – Favorites, bookmarks, saves, readers, groups, watchers
- Mentions – blog posts, news stories, Wikipedia articles, comments, reviews
- Social media – Tweets, +1’s, likes, shares
- Citations – PubMed, Scopus, patents
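The bucketing above can be sketched as a simple lookup. This is a minimal illustration, not PlumX’s actual implementation – the metric names and the `categorize` function are hypothetical, chosen only to mirror the examples in the list:

```python
# Hypothetical sketch: map raw metric names to the five PlumX categories.
CATEGORIES = {
    "Usage": {"downloads", "views", "book_holdings"},
    "Captures": {"favorites", "bookmarks", "saves", "readers", "groups", "watchers"},
    "Mentions": {"blog_posts", "news_stories", "wikipedia_articles", "comments", "reviews"},
    "Social media": {"tweets", "plus_ones", "likes", "shares"},
    "Citations": {"pubmed", "scopus", "patents"},
}

def categorize(metric_name: str) -> str:
    """Return the category bucket for a raw metric name, or 'Unknown'."""
    for category, metrics in CATEGORIES.items():
        if metric_name in metrics:
            return category
    return "Unknown"

print(categorize("bookmarks"))  # -> Captures
print(categorize("tweets"))     # -> Social media
```

Keeping the buckets separate like this is the whole point: a reader can see at a glance that a work has, say, many Captures but few Citations yet, instead of one blended score hiding that signal.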
You can see PlumX in action, and how these categories work with real research, at the PlumX Demo Site.