Nunzio Quacquarelli, managing director of QS, talks about how different rankings tackle the issue of measuring teaching quality.
In the past three months, a number of producers of global university rankings have published their results for 2010, and the students, employers, academics and governments who rely upon these rankings to make important decisions have rightly questioned why the results vary. In short, who is right? The answer is in fact fairly simple: it depends on which criteria and methodology make most sense to the user.
Globalization is transforming the higher education landscape. The 2010 edition of the OECD’s Education at a Glance estimates that there are 3.3 million students at universities outside their home country, almost double the number of a decade ago. The world has changed and an increasing desire for comparative information has emerged.
The methodology of each ranking provider needs to be examined carefully in order to understand its final results.
Aims behind the rankings
The QS World University Rankings seek to measure employability, research quality, commitment to teaching and internationalisation, and have consistently proven by far the most popular of all international rankings: this year over 5 million people have already visited TopUniversities.com, which hosts the QS rankings.
QS has been criticised for placing a 40% weighting on an academic reputation survey of research quality, whilst several commentators have recommended that QS apply a weighting higher than the current 10% to our corporate survey of student employability (an outcome of many factors including, but not limited to, teaching quality). Faculty-student ratio, with a 20% weighting, is an important indicator of access and commitment, but does not attempt to measure teaching quality.
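To see how such weightings translate into an overall score, the sketch below computes a composite as a weighted sum of indicator scores, each normalised to a 0-100 scale. The weights follow the QS figures quoted above; the allocation of the remaining 30%, the indicator names and the example scores are assumptions for illustration only, not actual QS data or the precise QS formula.

```python
# Illustrative sketch: a composite ranking score as a weighted sum of
# normalised (0-100) indicator scores. Weights follow the QS figures
# quoted above; "other_indicators" is an assumed bucket for the
# remaining 30% (e.g. citations and international measures).
QS_WEIGHTS = {
    "academic_reputation": 0.40,    # academic survey of research quality
    "employer_reputation": 0.10,    # corporate survey of employability
    "faculty_student_ratio": 0.20,  # access and teaching commitment
    "other_indicators": 0.30,       # assumed remainder, for illustration
}

def composite_score(indicator_scores, weights):
    """Return the weighted sum of normalised (0-100) indicator scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * indicator_scores[name] for name in weights)

# Invented scores for a hypothetical university, for illustration only.
example = {
    "academic_reputation": 92.0,
    "employer_reputation": 85.0,
    "faculty_student_ratio": 70.0,
    "other_indicators": 80.0,
}
print(f"Overall score: {composite_score(example, QS_WEIGHTS):.1f}")
# Overall score: 83.3
```

Whichever provider one considers, the same arithmetic applies, and moving even a few percentage points between indicators can reorder institutions whose scores are close, which is why these weighting choices attract such scrutiny.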
The Shanghai Academic Ranking seeks to measure excellence in published university research output, applying 60% of its weighting to citations and papers in the sciences and social sciences, and making no attempt to measure teaching quality. It has been criticised for placing a 30% weighting on Nobel Prizes and Fields Medals, which are historic measures (some winners date back 80 years); this makes the results extremely stable, but not necessarily a useful contemporary measure of university excellence.
It is perhaps not surprising that this academic ranking has established itself as the most widely referenced amongst the science-based academic research community, but is it a good guide to post-university student employability, and how can new centres of research excellence ever catch up with long-established universities?
Robust data?
Times Higher Education this year published a new ranking placing primary emphasis on research, but also emphasising its attempt to measure teaching quality. Research is given a weighting of 62.5%, with one measure, citations per paper, weighted at 32.5% and surprisingly placing Alexandria University 4th in the world, ahead of Harvard and Stanford. Five indicators attributed to teaching account for 30% of the weighting.
This has drawn the most stinging criticism from academia, with Malcolm Grant, Provost of University College London, amongst others, arguing in The Guardian that for teaching quality there are simply no robust data on which a global comparison can be made. When it comes to calculating the number of citations per academic publication, Prof. Sibrand Poppema, President of the Board of the University of Groningen, Netherlands, has described the method THE uses as “completely obscure”.
QS’s research asks experts to evaluate only what they know in depth. Employers are asked to evaluate the universities at which they recruit; academics are asked to evaluate research quality only in their subject and region of knowledge. In our opinion, teaching quality, as opposed to teaching commitment, cannot be effectively ranked, because there are no independent experts and no suitable surrogate metrics.
Global rankings are here to stay, and whenever rankings are published, controversy will follow close behind. The integrity of a rankings organisation requires a transparent, rigorous methodology which measures what is measurable. QS has 20 years of experience in conducting higher-education research and, though the market leader, is not complacent.
QS is committed to providing reliable and meaningful information. Our independent academic advisory board oversees the accuracy of our data and the evolution of our methodology. For example, in 2011 QS is launching a voluntary global student survey to provide comparative insights into the student experience. The results will be published alongside, but not included within, our overall rankings results.
With a high-quality university education becoming more expensive and having an ever-growing impact on career outcomes, making the right choice has never been more important. Potential students and their parents need all the help they can get, but they must dig beneath the rankings to understand the validity and relevance of each criterion and create their own personalised ranking, based on what matters to them.