Altmetrics, formally introduced in 2010 with an online manifesto, are alternatives to the traditional measures of scholarly impact. They are meant to supplement, not replace, existing metrics and filters such as peer review, citation counts, and Journal Impact Factor (JIF). Altmetrics are a response to some criticisms of these traditional metrics as well as the movement of scholarship to the web.
The scholarly environment and scholarly output have become more diverse and accessible, and old methods of measuring impact aren't keeping pace. Researchers now easily share not only their published research articles but also their presentation slide decks, posters, datasets, software code, personal blog posts, and other work online. These various research outputs are then shared and discussed through a variety of channels, including social media and the popular press. Unlike citation counts, altmetrics can measure attention to all types of research output, both within and beyond academic publications. For example:
How many people have stored the article in a citation manager like Zotero or Mendeley?
How many times has a dataset or a slide deck been viewed? Downloaded? Shared? Tweeted?
How many news stories in the popular press mentioned the research?
How many citations does it have in public policy documents?
How many views did a blog post get? How many comments, and what are they saying?
How many syllabi include it as a course reading?
These examples demonstrate some of the major advantages of altmetrics, including their speed and diversity. And because the source data behind some altmetrics is readily available — for instance, the actual text of the comments and tweets being counted — the best altmetrics provide context and qualitative assessment in addition to quantitative measures.
But altmetrics are still relatively new and, like all metrics, have their own limitations and potential for manipulation. There is still plenty of discussion and debate about how to define them and how they can and should be used. Do they measure impact or simply attention? How are they viewed by administrators, tenure and promotion (T&P) committees, or funding institutions? Right now there may be more questions than answers, so find out more and join the conversation!