Martin Ince, convener of the QS Global Academic Advisory Board, takes a look at the new Leiden Ranking of universities.
At the QS World University Rankings, we are always keen to see how other people go about looking at universities.
This month has been notable for the appearance of the Leiden Ranking, which offers a highly specific view of academic excellence as expressed through citations.
Paul Wouters, director of CWTS at Leiden University and of the Leiden Ranking, says that it uses a methodology that allows research at institutions of radically different size and subject mix to be compared fairly.
It is also intended to compensate for the different characteristics of English and non-English publishing, and for the potential distorting effects of a few much-cited outlier papers.
Methodology...
We asked Paul how the Leiden Ranking fits into the wider ecology of university rankings, especially alongside the HEEACT and Shanghai rankings, which are also designed to measure high-level research performance.
He said: “There are a number of fundamental differences. Most rankings combine performance on very different dimensions in a single number, in particular educational performance and scientific performance.
“HEEACT is similar to our ranking in that it focuses on scientific performance. Another issue is that Shanghai and HEEACT are strongly size-dependent. Larger universities will almost always outperform smaller ones in these rankings.”
The Leiden Ranking instead focuses on a university’s average performance per publication, so sheer size confers no automatic advantage, and the large differences in citation behaviour between scientific fields are separately corrected for.
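To make that concrete, here is a minimal sketch of field-normalised, per-publication averaging: each paper’s citation count is divided by the average for its field, and a university’s score is the mean of those ratios. The universities, fields and citation counts below are invented for illustration; this shows the general technique, not the CWTS implementation.

```python
# Minimal sketch of field-normalised, per-publication averaging
# (an illustration of the general technique, not the official CWTS code).
from collections import defaultdict

# Hypothetical records: (university, field, citations received)
papers = [
    ("Uni A", "medicine", 40),
    ("Uni A", "maths", 6),
    ("Uni B", "medicine", 20),
    ("Uni B", "maths", 12),
]

# 1. Average citations per field: the normalisation baseline.
totals = defaultdict(lambda: [0, 0])  # field -> [citation sum, paper count]
for _, field, cites in papers:
    totals[field][0] += cites
    totals[field][1] += 1
field_mean = {f: s / n for f, (s, n) in totals.items()}

# 2. Normalise each paper by its field mean, then average per university.
scores = defaultdict(list)
for uni, field, cites in papers:
    scores[uni].append(cites / field_mean[field])

for uni in sorted(scores):
    print(uni, round(sum(scores[uni]) / len(scores[uni]), 2))
```

In this toy data the medical papers attract far more raw citations than the maths papers, yet both universities come out with the same score of 1.0, which is the point: a strong department in a citation-poor field is no longer drowned out by citation-rich ones.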
Next, we asked how the system copes with the different publishing cultures of different subjects. Paul replied: “We apply a correction for the differences between citation behaviour in different scientific fields. In addition, our ranking uses fractional counting indicators.
“Publications co-authored by multiple universities are assigned fractionally to each of the universities involved. In the full counting approach used by most rankings, co-authored publications are fully assigned to every university involved, causing double counting of these publications.
The use of fractional counting is another way of making fields with different publication and citation cultures more comparable.”
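To see what that means in practice, here is a toy comparison of the two counting schemes; the universities and papers are hypothetical, and the real Leiden indicators are of course more elaborate.

```python
# Toy comparison of full vs fractional counting of co-authored papers
# (illustrative only; invented data, not the Leiden methodology code).
from collections import Counter

# Hypothetical papers, each listing the universities of the co-authors.
papers = [
    ["Uni A", "Uni B", "Uni C"],  # a three-way collaboration
    ["Uni A"],                    # a single-institution paper
]

full, fractional = Counter(), Counter()
for paper in papers:
    unis = set(paper)  # each university counted at most once per paper
    for uni in unis:
        full[uni] += 1                    # full counting: one credit each
        fractional[uni] += 1 / len(unis)  # fractional: credit is shared

print("full:      ", dict(full))
print("fractional:", {u: round(c, 2) for u, c in fractional.items()})
```

Full counting credits the three-way paper once to each partner, so two papers yield four counts; fractional counting keeps the total equal to the number of papers.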
And results...
Paul says that when the results came in, his team’s biggest surprise was the difference that the fractional counting approach makes. Universities with a strong medical orientation tend to fall in a ranking that uses fractional counting, presumably because medical research is heavily multi-institutional and the credit for each paper is shared among more partners, while universities with a technical focus do better.
In the Netherlands, for instance, differences between universities that are apparent under full counting disappear almost completely with fractional counting.
The Leiden methodology agrees with its competitors about the excellence of the top US universities. It has 42 of them in its top 50, led by MIT, Princeton and Harvard. Only two non-US universities make it into the top 20: EPFL and ETH, the Francophone and German-speaking Swiss federal institutes of technology, at 12 and 18 respectively.
Cambridge, top in the QS ranking, appears here in 31st place. It is one of four UK institutions in the top 50 along with the London School of Hygiene and Tropical Medicine (33), Oxford (36), and Durham (42).
The only other non-US institutions in the top 50 are the Weizmann Institute (Israel) at 25 and the Technical University of Denmark at 45. The ranking shows the top 500 universities, with Moscow and St Petersburg in the last two places.
The table also shows the volume of publications for each institution found in the Web of Science database and used for the ranking. Among the top 50, these counts range from 33,511 for Harvard to 1,652 for the LSHTM.