At the annual National Council on Public History meeting, while arguing for open access and for new ways of assessing the impact of digital publications of all sorts (in journals or not), I noted that about 70% of academic journal articles were not being read. I am a very big fan of analytics.
I misspoke: the research has argued that 75% of all social science articles and 92% of humanities articles are never *CITED*, not never read. My error came from making my point too baldly. My friend Ann Whisnant tweeted this, and, of course, it was picked up in the Twitter stream, where it lost all nuance and was amplified and questioned, all in 140 characters, rapid-fire.
So, attached are a couple of references about citation, as promised to the twitterverse, along with a few notes about what this means.
You can see the original source of my statement, as well as a critical assessment of the original research by Charles Schwartz. That assessment includes citations, and a Google search reveals that much of the original research has been questioned (although I am not sure convincingly so). Even so, there appears to be a persistent strain of research suggesting a continuing (and perhaps growing) trend toward uncitedness, for example this essay in the Chronicle.
My point at NCPH was that we need to find alternative ways to measure impact and influence, including impact on *public* audiences (a dimension absent from the Twitter discussion of my statement). And although I misspoke about readers, my point was precisely about trying to gauge impact beyond the academy (which usually relies on citations).
Indeed, my misstatement is precisely the point. We *DON'T* know how many readers we have; for books, we know only how many purchasers. This is why citation emerged as a measure of influence: one tied narrowly to scholarly reuse, but a measure nonetheless. Moreover, as Tim McCormick noted in the Twitter exchange, what constitutes reading is not clear either. (I just realized I don't know how to embed a tweet on my blog. I will have to remedy that later.)
My point here and at NCPH is that we should care about engagement: the impact and influence of our work as measured by citation (surely one such measure), reading, reuse (linking), and other such metrics. And with digital humanities and digital analytics, we may finally have a way to do that.
Take an example of print versus digital. My book Eating Smoke has had about 750 purchasers (based on sales figures) and more than 15 citations (perhaps more if I were to do an exhaustive search for links). I have no clue how many folks have read all or part of it. By contrast, Cleveland Historical has over 4,000 unique visitors monthly (about 75,000 since the project originated), and about 50% of those folks stay beyond their initial entry (i.e., they have not "bounced"). Those who stay read the various interpretive narratives for an average of five minutes and delve deeper into the site with an average of three clicks (probably three stories). I could bore you with more details. The difference between the projects is interesting, as is the difference in my ability to observe these measures.
I would throw out some modest propositions.
1) Citation is not a good measure of impact. Of course, it is not a bad measure either. But it is very imprecise and shaped by many factors. (Many universities and fields use an even worse measure, Impact Factor, to assess research quality. Impact Factor measures the "quality" of a journal rather than of its contents, sort of like assessing a book by its cover. But I digress.) Even at its best, citation only measures the impact of a work on an academic subfield, hardly a model for public history.
2) Readership statistics via Google Analytics get us much closer to where we want to be, because they actually measure the number of readers, time spent, material explored, and so forth. This is a great step toward a richer analysis.
3) But readership figures need some context in order to be understood. We desperately need such measures across academic settings. Marin Dacos has done some interesting work in this area, which I will link to once I find it and write more about (in a later revision).
4) We (in the public history community, at NCPH and elsewhere) need to push toward something more like Peter Binfield's PLoS ONE, which makes that information public and which is open. Both of these qualities are goals worth emulating.
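To make proposition 2 concrete, here is a minimal sketch of how the engagement measures I cited for Cleveland Historical (unique visitors, bounce rate, clicks past the entry page, time on site) can be computed from raw page-view data. The visitor IDs, page paths, and timestamps below are invented for illustration; a real Google Analytics export would look different, but the definitions of the metrics are the same.

```python
from datetime import datetime

# Hypothetical page-view log: (visitor_id, page, timestamp).
pageviews = [
    ("v1", "/stories/terminal-tower", datetime(2012, 4, 1, 10, 0)),
    ("v1", "/stories/league-park",    datetime(2012, 4, 1, 10, 4)),
    ("v1", "/stories/euclid-beach",   datetime(2012, 4, 1, 10, 9)),
    ("v2", "/stories/terminal-tower", datetime(2012, 4, 1, 11, 0)),
    ("v3", "/stories/league-park",    datetime(2012, 4, 2, 9, 30)),
    ("v3", "/stories/terminal-tower", datetime(2012, 4, 2, 9, 36)),
]

# Group hits by visitor (one session per visitor, for simplicity).
sessions = {}
for visitor, page, ts in pageviews:
    sessions.setdefault(visitor, []).append((ts, page))

unique_visitors = len(sessions)

# Bounce rate: share of sessions consisting of a single page view.
bounces = sum(1 for hits in sessions.values() if len(hits) == 1)
bounce_rate = bounces / unique_visitors

# Depth and duration for the sessions that did NOT bounce.
engaged = [hits for hits in sessions.values() if len(hits) > 1]
avg_clicks = sum(len(h) - 1 for h in engaged) / len(engaged)
avg_minutes = sum(
    (max(t for t, _ in h) - min(t for t, _ in h)).total_seconds() / 60
    for h in engaged
) / len(engaged)

print(f"unique visitors: {unique_visitors}")
print(f"bounce rate: {bounce_rate:.0%}")
print(f"avg clicks past entry (engaged visitors): {avg_clicks:.1f}")
print(f"avg minutes on site (engaged visitors): {avg_minutes:.1f}")
```

Notice that none of these numbers has an equivalent for a printed book: sales figures tell us nothing about bounces, depth, or time spent, which is exactly the asymmetry between Eating Smoke and Cleveland Historical described above.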
Finally, and more broadly, I recommend that we make our work more widely read and more widely cited not only by making it more open, but also by creating a rewards infrastructure around the analytics of that work. BUT, to create such a structure, we need better analytics and better comparisons in our public historical work.