Comparing University Rankings

There are plenty of university rankings out there. So what are the differences between them, and what are the consequences of these differences?

The world’s first system for ranking universities is not yet 30 years old: US News & World Report launched America’s Best Colleges in 1983. And if you pick up a copy of the 2010 edition this August, you will see one reason why university rankings exist.

Since 1983, university ranking has become a world industry. But before we ask how universities can be ranked, it is important to consider why.

Why rank universities?

The main reason for ranking universities is the vast growth in higher education across the world in recent decades. Universities are bigger than in the past. One with 10,000 students is no longer considered large. And there are more of them. The UK had 116 at the last count.

As the number of students has grown, so has their need for advice and information, because more of them come from families which are new to higher education. And in the US especially, the sheer cost of going to university also drives rankings. College can cost a six-figure sum, so you need to know you are going to the right one.

In addition, the courses and programs offered by universities are increasingly diverse. Observers such as Michael Porter of Harvard Business School point out that advanced economies produce increasingly varied and specialized employment. This means new forms of university education, which in turn means an increasing need for information.

Take a look at any national university ranking and you will see that it is mainly geared to the needs of potential undergraduate students, and of course their parents. Measures such as class size, staff/student ratio, completion rates, the proportion of students gaining high honours degrees, and graduates’ likelihood of employment are the sorts of things that get counted.

These all feature in The Times [of London] Good University Guide. The guide also uses the results of the UK’s Student Satisfaction Survey, although not all countries have anything comparable. In addition, the Guide uses research quality as measured by the UK Research Assessment Exercise. But it uses this measure more as a way of detecting universities and departments where teaching might be carried out by top figures in the field than as an attempt to identify top research institutions.

A similar logic applies to other national ranking systems. US News uses many of the same measures as The Times, but adds others such as library spending and alumni donations, based on the theory that satisfied graduates give more. It also uses course costs, in the belief that people are willing to pay more for a quality product.

These rankings, and others like them around the world, have become important to universities in the fight for students. In the US, colleges that slip a few places in the US News rankings will see the number and quality of their applicants fall.

According to the website of Shanghai Jiao Tong’s Academic Ranking of World Universities, 23 nations now have at least one university ranking system. Many have several, especially the US. Like The Times and US News, most are intended for students and use criteria that reflect this emphasis.

Global comparisons

Now think about an even knottier question: a world ranking of universities. Again, it is essential to think why one might want to rank universities globally before one can work out how to do it.

The first world ranking of universities was published by Shanghai Jiao Tong University in 2003. Its methodology is based on the insight that although teaching and learning at universities grow out of the society where they are carried out, research is more universal.

A discovery about physics made in France is news to physicists in Japan. Equally, governments all over the world, including China’s, are keen to know whether their countries are competitive in research, because of its role in driving innovation and economic growth. Of course, research in the social sciences, humanities and arts is more closely tied to the country it comes from than scientific research is, and this is a drawback of global ranking systems based on publishing patterns.

So Shanghai’s Academic Ranking of World Universities (ARWU) uses measures based on research to capture academic quality. It gives 20% of a university’s possible score for citations of research papers, the standard measure of whether research is regarded as interesting.

A further 30% is available for universities whose researchers or alumni win Nobel Prizes or Fields Medals. Another 20% is available for publishing in Science and Nature, the two boutique publications of international scientific research, and a further 20% for universities with very highly cited researchers. The final 10% adjusts for institutional size, so that smaller, less well-resourced universities are not penalized simply for their scale.
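Taken together, the ARWU components described above (20% + 30% + 20% + 20% + 10%) amount to a simple weighted sum of normalized indicator scores. A minimal sketch in Python — the indicator names and example values here are invented for illustration, and ARWU’s real normalization procedure is more involved than this:

```python
# ARWU-style weighted composite, following the weights described above.
# Indicator scores are assumed pre-normalized to a 0-100 scale;
# the example values are purely illustrative.
ARWU_WEIGHTS = {
    "citations": 0.20,        # citations of research papers
    "nobel_fields": 0.30,     # Nobel Prize / Fields Medal winners
    "science_nature": 0.20,   # papers in Science and Nature
    "highly_cited": 0.20,     # very highly cited researchers
    "final_component": 0.10,  # the remaining 10% component described above
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of normalized indicator scores (0-100 scale)."""
    return sum(w * indicators[name] for name, w in ARWU_WEIGHTS.items())

example = {
    "citations": 70.0,
    "nobel_fields": 40.0,
    "science_nature": 55.0,
    "highly_cited": 60.0,
    "final_component": 65.0,
}
print(composite_score(example))  # weighted composite on the 0-100 scale
```

Because the weights sum to 1, a university that scored 100 on every indicator would score 100 overall; in practice the heavy research weighting means a single strong indicator cannot compensate for weak ones.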

Suitable criteria?

There are many objections to these criteria. Most academic subjects have no Nobel Prize. Only 806 people have ever won one, and that figure includes the Peace and Literature prizes, so laureates are too small a sample to mean much. Fields Medals are even narrower, being awarded only in the small (if important) field of mathematics, and only to people under 40. In addition, Science and Nature have substantial biases towards particular areas of science.

Despite these issues, there is little doubt that ARWU does provide a snapshot of the world’s top universities for scientific research. The newer HEEACT ranking from Taiwan is a more sophisticated version of ARWU, intended to avoid its worst flaws by using more representative measures.

The QS World University Rankings has a different motivation and, as a result, uses different criteria. Our interest stems from globalization and the internationalization of knowledge and study. When our rankings were launched in 2005, there were already more than two million international students; the total now exceeds three million.

We took a decision at the start that our rankings (at that time a joint venture with the Times Higher Education Supplement) would try to capture everything that makes a university noteworthy on the world stage. This includes teaching as well as research, and also involves international commitment. And we decided to concentrate on a small number of robust measures. Measures that work fine in a single country, such as library spending, become impossibly fuzzy when they are applied on a global scale.

The QS rankings are designed to measure big, general universities. Unlike ARWU, we exclude specialist institutions and those that do not teach undergraduates.

Types of data

The cornerstone of our approach is our awareness that active academics know where the good universities are. We have developed a unique academic review process, with rigorous quality control, that asks them where the best work is being done in the fields they know. In 2009, we aggregated about 200,000 pieces of academic review data from just over 9,000 people. Because the respondents span all subject areas and all parts of the world, this approach counteracts the bias in favour of science subjects and of English-language publishing that citation measures cannot avoid.

This measure accounts for 40% of an institution’s possible score in our ranking. The next 10% is arrived at by a similar logic, but relating this time to teaching rather than research. We ask active recruiters where they like to recruit, partly because they are likely to know where the good graduates come from, and also to allow users of the rankings to see which universities are best-liked by recruiters.

The other half of our rankings comes from quantitative data. The first 20% is a straightforward measure of a university’s staff/student ratio. Subjects differ in the number of staff needed to teach them, but as we are mainly looking at big, general universities, that effect ought to even out.

The next 20% comes from citations, as held by the Scopus database, per member of academic staff. We do it this way to reduce the effect of having a medical school and to look for the concentration of cited researchers in each university.

Finally, we allot 5% each for the proportion of overseas staff and overseas students at each university. Here we are trying to find universities which are internationally attractive and which are serious about being global.
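Summing up, the six QS measures above account for 40% + 10% + 20% + 20% + 5% + 5% = 100% of an institution’s possible score. A minimal illustration of that arithmetic in Python, ranking two entirely hypothetical institutions — the indicator names and values are invented, and real QS scores go through normalization steps not shown here:

```python
# QS-style weighting: rank two hypothetical institutions by the six
# weighted measures described above. Scores are assumed pre-normalized
# to a 0-100 scale; all values below are illustrative.
QS_WEIGHTS = {
    "academic_review": 0.40,         # peer review by active academics
    "employer_review": 0.10,         # survey of graduate recruiters
    "staff_student_ratio": 0.20,     # teaching-capacity proxy
    "citations_per_staff": 0.20,     # Scopus citations per academic
    "international_staff": 0.05,     # proportion of overseas staff
    "international_students": 0.05,  # proportion of overseas students
}
assert abs(sum(QS_WEIGHTS.values()) - 1.0) < 1e-9  # weights cover 100%

def qs_score(indicators: dict) -> float:
    """Weighted sum of normalized indicator scores."""
    return sum(w * indicators[k] for k, w in QS_WEIGHTS.items())

universities = {
    "Hypothetical A": {"academic_review": 80, "employer_review": 70,
                       "staff_student_ratio": 60, "citations_per_staff": 50,
                       "international_staff": 90, "international_students": 85},
    "Hypothetical B": {"academic_review": 65, "employer_review": 85,
                       "staff_student_ratio": 75, "citations_per_staff": 70,
                       "international_staff": 40, "international_students": 45},
}
ranking = sorted(universities,
                 key=lambda u: qs_score(universities[u]), reverse=True)
print(ranking)
```

Note how the 40% academic-review weight dominates: institution A wins the ranking on peer esteem alone, despite B’s better staff/student ratio and citation rate.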

These measures capture how universities perform in their main activities of teaching and research, and how well-placed they are in the international flows of people, money and ideas that make up the globalized world. We like these measures, and we intend to change them only gradually.

New developments

But there are many other ways to rank universities internationally. The Organisation for Economic Co-operation and Development (OECD) is planning a system that will look at indicators of teaching. This is a promising addition to existing research-based rankings. It will collect information on the setting and context of university life – a measure, perhaps, that will build in a significant advantage for institutions in rich nations.

The European Commission, meanwhile, has contracted a continental European consortium to internationalize the German-based CHE university assessment system. This approach is likely to use a wide range of measures and may produce softer outcomes than a raw ranking table.

In addition, future developments are certain to include more global data on specific subjects. This would be helpful to students, but would also interest companies with research budgets, as well as governments. One incentive to develop this approach is the European Research Council, which is now distributing funds competitively across the 27 EU member states.

Even this list does not complete the story. CSIC, the Spanish national research council, is behind the Webometrics ranking, which is based on indicators of universities’ web presence. This is not itself a quality measure, but it does say something important about whether a university is visible and is planning for the future.

Also of interest is the SCImago Institutions Rankings, produced by the SCImago research group using data from Scopus, the database published by Elsevier. It lists publications from all research producers, not just universities. It finds that in 2009, the French research agency CNRS was the world’s biggest generator of research, with over 120,000 papers. The most productive university was Harvard, with half as many.

While there is continuing interest in ranking universities, an approach that also throws light on other knowledge creators is also valuable and we look forward to its further development.

Written by QS Staff Writer
