Saturday, April 5, 2008

The European Reference Index for the Humanities: Friend or Foe?

Trouble is brewing in the arcane world of humanities bibliometrics, and it looks as if a major debate on the measurement of what constitutes "quality" in scholarship in fields such as archaeology and classical studies is about to begin.

The catalyst for this is a European Science Foundation initiative to compile a European Reference Index for the Humanities (ERIH), defined (in its first iteration) as "a reference index of top journals in 15 areas of the Humanities, across the continent and beyond" but due to expand "to include book-form publications and non-traditional formats" with the aim of eventually operating "as a backbone to a fully-fledged research information system for the Humanities." Wow!

In 2007, with little fanfare, the first ERIH lists were published, presenting what aimed to be comprehensive catalogues of journals (ca. 1,300 in total, and almost 300 in Classical Studies alone), each with an A, B, or C classification. The committees of top scholars in each discipline who compiled them emphasized that "the lists are not a bibliometric tool" and therefore advised "against using the lists as the only basis for assessment of individual candidates for positions or promotions or of applicants for research grants." But the presumption that universities and other organizations would not start to use them in exactly this way was naive.

The reality is that funding agencies, university administrators, library acquisitions staff, and hiring committees alike have been desperate for years to find some objective way of measuring the quality of humanities research. Although subject to increasing criticism (and to attempts to replace it with a web-based metric using Google-like algorithms), the citation-based "impact factor" has been an accepted measure of article quality in the sciences for decades, ever since Eugene Garfield invented the measure and later institutionalized it by selling his Institute for Scientific Information (ISI) to the mighty Thomson Corporation. Journals in the humanities, in the meantime, tend to be ranked on the extremely qualitative and fuzzy scale of "peer perception," which understandably drives busy bureaucrats within the higher-education establishment wild. There is an Arts and Humanities Citation Index (AHCI), and sometimes I will get a panicked call from a junior scholar whose dean has asked what Hesperia's impact factor is, but the AHCI has never been widely accepted and ISI does not provide a means of extracting an impact factor from it. Among its other problems, the AHCI's coverage of journals is limited, and it does not acknowledge the important role books play in the humanities.
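For readers unfamiliar with how the sciences' measure works, the standard two-year impact factor is simple arithmetic: citations received in a census year to items a journal published in the two preceding years, divided by the number of citable items it published in those two years. The sketch below illustrates the calculation; the journal and all counts are hypothetical, invented purely for illustration.

```python
# Minimal sketch of Garfield's two-year journal impact factor:
#   IF(Y) = citations in year Y to items from years Y-1 and Y-2
#           ------------------------------------------------------
#           citable items published in years Y-1 and Y-2
# All figures below are made up for illustration.

def impact_factor(citations_to_prior_two_years: int,
                  citable_items_prior_two_years: int) -> float:
    """Two-year impact factor for a given census year."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 40 citable articles published in 2006-2007,
# cited 30 times in 2008.
print(round(impact_factor(30, 40), 3))  # 0.75
```

The low-and-slow citation habits of the humanities (and the weight carried by books, which the formula ignores entirely) are exactly why this number transfers so poorly outside the sciences.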

In an aside, related to previous discussions on this blog and elsewhere about the segmented and "tribal" nature of disciplines like archaeology, I once heard a rumor that ISI didn't produce an impact factor for the Arts and Humanities partly because their statistical analyses tended to find "odd clumping" when analyzing humanities journals, perhaps explained by the tendency of some sub-disciplines to almost exclusively cite themselves. However, I asked Eugene Garfield a question about this at a meeting of the Society for Scholarly Publishing a couple of years ago, and he claimed that his algorithm would work just as well for humanities journals as for sciences. Statistics is far from being my strong suit, so I didn't pursue this further.

In this context, ERIH sounds like an attempt to turn a qualitative measure of "peer perception" into something quantitative, and it undoubtedly has some good motivations.

Firstly, while the method by which the expert panels were chosen is obscure, the people on them are distinguished. For Archaeology, the panel consists of Lin Foxhall (Chair), University of Leicester (UK); Csanád Bálint, Hungarian Academy of Sciences, Budapest (HU); Serge Cleuziou, CNRS / Nanterre (FR); Kristian Kristiansen, Göteborgs Universitet (SE); and Jacek Lech, Polish Academy of Sciences, Warsaw (PL). For Classical Studies, it comprises Claudia Antonetti (Chair), Università Ca' Foscari, Venice (IT); Angelos Chaniotis, Universität Heidelberg (DE); Antonio Gonzales, Université de Franche-Comté, Besançon (FR); Richard Hunter, University of Cambridge (UK); and Paul Schubert, Université de Genève (CH).

Secondly, the compilation of lists that include important journals from developing as well as developed countries and new periodicals is extremely praiseworthy since almost any library catalogue is incomplete in this regard, and the high quality work of publishing colleagues in some of the new European countries is often unfairly ignored.

Thirdly, the humanities are probably losing out on funding by failing to cater to bureaucrats. ERIH proponents, thinking in European terms, argue that a quantitative measure of humanities research quality would enable the humanities to compete alongside the sciences for the €7bn in funding provided by the European Research Council each year. At a more provincial level, it is probably true that one reason why high-quality but independently published journals like Hesperia are seeing steadily declining numbers of institutional subscribers is that librarians lack a quantitative measure of quality to rely on when making their choices, and therefore tend to make scattershot decisions to subscribe to large commercial packages in the hope of hitting some of the "core" periodicals. For independent journals too, quantitative measures in the humanities would probably level the playing field.

On the other hand, a growing body of academics, especially in the traditionally Euro-sceptic UK, is spotting problems with ERIH. The critics seem to be led, one is somewhat proud to note, by the traditional "awkward squad" disciplines of archaeology and classics.

A good summary of the arguments against ERIH can be found in the PDF minutes of a meeting of the Arts and Humanities Research Council in the UK on February 27, 2008, subsequently reported in the Times Higher Education Supplement of March 19, 2008. Also worth watching may soon be the website of the Council of University Classical Departments (CUCD), which seems to be leading the opposition to ERIH under the leadership of its distinguished Chairman, Robin Osborne.

The main problems identified were:

1. Categorizing journals by discipline understates the importance of interdisciplinary journals. A single list of journals would be more useful than 15 separate ones.

2. The methodology for categorization (involving quantitative data on the percentage of authors from different countries, acceptance rates, and level of peer review, but also qualitative data about who sits on the advisory board, etc.) is obscure and unscientific. However good the expert panels are, their own research preferences and integration into particular networks are bound to show through.

3. The lists were incomplete, especially with regard to non-European periodicals, including North American ones, and some of the journals listed were defunct.

The biggest concern was that a system like ERIH, which even its proponents agreed was still in a beta phase, was already being used to make hiring and funding decisions. Although the evidence for this was anecdotal, the probability that, after such a long period of frustration over the lack of quantitative measures of humanities research quality, ERIH would not be exploited by data-starved administrators seemed low.

In recent weeks, the generators of ERIH have clearly been acting to head off its critics. For the first time last week, Hesperia received formal notification of the project and a feedback form with which to comment on our rankings. How did we do? Not too bad with category "A" (for "high ranking, international level publication") in Classical Studies and Archaeology and "B" (for "standard, international level publication") in History. The initial list in Art and Art History, another important field in which the journal publishes, is yet to be announced.

It's nice to get grade "A"s, so perhaps my decision to suspend judgement on ERIH for the moment is biased. Peer review works for the contents of journals, so why shouldn't it work for compiling lists of journals themselves? How else would an obvious gap in the market for information on humanities publications be filled than by a major international initiative? Will ERIH's promise to index publications in "non-traditional formats" in the future provide objective measures for the quality of electronic publications that have so far been poorly recognized by employers? However, I also see a lot of validity in the criticism of the project which seems to have been unduly secretive in its development and perhaps naive in its implementation. The important debate about how to measure the quality of publications in the humanities that ERIH has reopened is definitely one to watch.

1 comment:

AES said...

A couple of years ago, at an APA panel on scholarly publishing, a young (i.e. non-tenured) scholar reported that his chairman advised him, in regard to his evaluation for tenure: "If you aren't published by one of these [he handed him a list of 8 scholarly presses] don't even bother listing your book." At the time, I took this as evidence that the chairman was either too lazy or too unsure of his own judgment to read the book itself. The anecdote shows why I am suspicious of "objective" systems of evaluation.