Preliminary ideas for a TS journal ranking system

D. Gile - EST

September 2, 2008


An increasing number of university departments where TS scholars work consider publication in high-ranking journals an important criterion in academic promotion. This raises two issues: one is whether such emphasis on journals is justified in TS (and in other disciplines in the humanities), and the second is how TS journals are ranked. The first issue will be addressed in a separate text. This text focuses on journals.

            The European Science Foundation’s European Reference Index for the Humanities (ERIH) project has defined three ranks for journals:

 

   “Category A: high ranking international publications with a very strong reputation among researchers of the field in different countries, regularly cited all over the world.

   Category B: standard international publications with a good reputation among researchers of the field in different countries.

   Category C: research journals with an important local/regional significance in Europe, occasionally cited outside the publishing country though their main target group is the domestic academic community.”

 

            I will refrain from going into fundamental criticism here, on the assumption that serious scholars have given serious thought to the options and have not come up with a better solution. What may be worth pointing out, however, is that the situation in TS is rather different from that in established disciplines, that apparently only one TS scholar participates in the work of ERIH, and that the actual procedure leading to the ranking of journals is not clear. In fact, there have been some protests against ranking proposals, with colleagues asking other colleagues to write to ERIH and request changes. Such a procedure is far too subjective and ‘political’ for what should belong to the scientific conceptual world.

            European bodies such as the ESF have procedures and an institutional weight which make hopes of the TS community changing anything by protesting somewhat dim for the moment. One possible alternative would be for the community to establish its own parallel system, by consensus among as many scholars from as many academic TS centres as possible, in such a way that it can become a credible reference within the TS community. If it achieves such a status within TS, it may not be unrealistic to hope that ERIH will take it on board as well.

            Objective criteria should help the system achieve credibility and maximum acceptability. Ranking ‘quality’ objectively is difficult. Not only can individual opinions be challenged, but even the refereed vs. non-refereed distinction is rather weak in the field, as some refereed journals recruit authors by invitation and accept their publications even when referees have been very critical of them. The best one could hope for is a measure of influence rather than of quality. Influence could be measured by the number of citations from different authors (other than the author of the cited paper) and in publications other than the journal being ranked (i.e. a citation of a Target paper by authors X and Y would give two points to Target, unless the citing author is the author of the cited paper and/or the citing publication is Target itself). For each year, each journal would have a score representing the number of authors citing papers published in it (citing authors index - CAI), and perhaps another score representing the number of countries where these citing authors have their academic affiliation (citing countries index - CCI). This way, each journal could be ranked in terms of both its ‘individual influence’ and its ‘geographic territory of influence’. It would probably make sense to have separate scores for translation (all types of translation) and interpreting (all types of interpreting).
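The scoring rule just described can be sketched in a few lines of code. This is a minimal illustration only, assuming a flat list of citation records; the field layout and sample names are my own, not part of the proposal.

```python
from collections import defaultdict

# Each record: (citing_author, citing_country, citing_journal,
#               cited_author, cited_journal) - an assumed layout.
def score(records):
    citing_authors = defaultdict(set)    # cited journal -> citing authors
    citing_countries = defaultdict(set)  # cited journal -> citing countries
    for author, country, citing_j, cited_author, cited_j in records:
        # Exclude self-citations and citations appearing in the cited
        # journal itself, as the proposal requires.
        if author == cited_author or citing_j == cited_j:
            continue
        citing_authors[cited_j].add(author)
        citing_countries[cited_j].add(country)
    cai = {j: len(a) for j, a in citing_authors.items()}
    cci = {j: len(c) for j, c in citing_countries.items()}
    return cai, cci
```

With two valid citations of Meta papers by different authors in different countries, plus one same-journal citation and one self-citation (both excluded), Meta would receive a CAI of 2 and a CCI of 2.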

            Instead of A, B or C rankings, each journal would be given an ordinal ranking (1st, 2nd, 3rd rank etc.). For instance, for 2007, Interpreting could be given say a ranking of number 1 in terms of CAI and number 2 in terms of CCI for interpreting, and perhaps number 21 in terms of CAI and number 22 in terms of CCI for translation. For 2008, the rankings could be different, depending on citations during that year. Thus, an interpreting scholar applying for a promotion could say that s/he has published so many papers in a journal which was ranked 2nd in 2006, 1st in 2007 and 2nd in 2008 in terms of CAI and 3rd in 2006, 1st in 2007 and 4th in 2008 in terms of CCI, in other words, a journal which has been in the top 4 over the past 3 years in terms of both CAI and CCI.
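Converting one year's scores into such ordinal rankings is mechanical; the sketch below illustrates it. The tie-handling rule (tied journals share a rank) is my own assumption, as the proposal does not specify one.

```python
def ordinal_ranking(scores):
    """Map each journal to its rank, highest score first.
    Journals with equal scores share the same rank."""
    ordered = sorted(scores.items(), key=lambda item: -item[1])
    ranks, prev_score, prev_rank = {}, None, 0
    for position, (journal, s) in enumerate(ordered, start=1):
        if s != prev_score:
            prev_rank, prev_score = position, s
        ranks[journal] = prev_rank
    return ranks
```

For example, with hypothetical CAI scores {A: 10, B: 7, C: 7, D: 3}, the ranking would be A 1st, B and C tied 2nd, and D 4th.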

            The list of (citing) journals surveyed would be as large as possible, without excluding any. For every year, all the lists of references in all papers in every issue would be read, and every time a paper from another journal and by another author is cited (self-citations are excluded), the following data would be recorded: name of citing author, country of institutional affiliation of the citing author, and title of the journal in which the cited paper was published. For instance, for 2006, in Target 18:2, in Claire Yi-yi Shih’s paper “Revision from translators’ point of view: An interview study”, the following information can be found in the list of references (18 references in total):

 

Year | Citing journal | Issue | Citing author      | Country of affiliation | Cited journal
2006 | Target         | 18:2  | Shih, Yi-yi Claire | United Kingdom         | Across Languages and Cultures
2006 | Target         | 18:2  | Shih, Yi-yi Claire | United Kingdom         | The Translator
2006 | Target         | 18:2  | Shih, Yi-yi Claire | United Kingdom         | Meta
2006 | Target         | 18:2  | Shih, Yi-yi Claire | United Kingdom         | Terminology update
2006 | Target         | 18:2  | Shih, Yi-yi Claire | United Kingdom         | Babel

This citing paper thus provides one citing authors index (CAI) point and one citing countries index (CCI) point (for the United Kingdom) to Across Languages and Cultures, The Translator, Meta, Terminology update and Babel.
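The same tally can be performed programmatically. The sketch below mirrors the table columns above and simply confirms that each of the five cited journals receives one citing-author point and one citing-country point from this single citing paper; the record layout is an assumption for illustration.

```python
from collections import defaultdict

# Rows mirror the table: (year, citing journal, issue, citing author,
# country of affiliation, cited journal).
rows = [(2006, "Target", "18:2", "Shih, Yi-yi Claire", "United Kingdom", j)
        for j in ("Across Languages and Cultures", "The Translator",
                  "Meta", "Terminology update", "Babel")]

tally = defaultdict(lambda: {"authors": set(), "countries": set()})
for year, citing_journal, issue, author, country, cited_journal in rows:
    tally[cited_journal]["authors"].add(author)
    tally[cited_journal]["countries"].add(country)

for journal, t in sorted(tally.items()):
    print(journal, len(t["authors"]), len(t["countries"]))
```

Because all five citations come from one author in one country, every cited journal ends up with a CAI contribution of 1 and a CCI contribution of 1.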

 

Reading through the list of references and extracting the information is very fast, and considering that each journal has between one and four issues a year, establishing the ranking on the basis of a relatively large sample of citing journals (perhaps up to 50 or so) should not be too formidable a task, especially if the work is shared among several centres. The question is whether colleagues in various universities and journal editors are interested. If so, a specific body could be established, perhaps under EST, to improve the concept as presented here, work out a coordination procedure and produce results rapidly. Comments and suggestions are welcome.