
Understanding the Methodology: QS World University Rankings


Staff Writer

Updated Jun 08, 2021




Ever wondered how the QS World University Rankings work, or the reasoning behind them? Here are answers to some common questions about the methodology we use.

QS has been producing university and business school research for the past 20 years. During this time, we have introduced innovations such as the use of global employer and academic surveys as a cornerstone of our approach.

In that time, QS research has become highly respected and is referenced every year in roughly 1,000 newspapers, journals and websites. The QS World University Rankings have become the most widely used basis for comparing universities across the world.

Why does QS produce the World University Rankings? 

QS has been producing the World University Rankings since 2004 to give students worldwide an independent, objective and data-informed tool to help them choose the right university.

There are thousands of universities around the world, and as higher education becomes more accessible, it’s vital to maintain a benchmarked comparison of them over time.  

We know that for many students, the choice of where to go to university is not a simple one and this may be especially true for international students. The rankings have been designed to make this decision easier and to help students make more informed choices. 

How has QS designed its methodology? 

The QS World University Rankings methodology has been designed to be accessible, globally relevant, and stable.  

QS originally began the process of ranking universities internationally by identifying the primary objectives of world-class universities: research quality, graduate employability, teaching experience and international outlook. It then sought ways of measuring each of these.

QS consulted with many university leaders in compiling the rankings methodology and identifying these core missions. We spoke with nearly 8,000 academics in face-to-face seminars, who showed strong support for our focus on these primary goals of world-class universities.

Next, QS sought to develop a tried and tested approach to conducting an expert Peer Review Survey of academic quality. It brought in statistical and technical experts to ensure that our survey design could not be ‘gamed’ and provided valid results. The survey design draws on the principles used by online political polls to anticipate election results.

What is the QS methodology? 

The QS methodology has six indicators looking at four broad categories: research reputation, the learning and teaching environment, research impact, and internationalisation.  

Our global academic reputation survey – the largest of its kind – asks academics to give their informed view on which institutions excel in the disciplines that they are familiar with. Academics are best placed to answer questions around research and academic excellence. They collaborate with other academics, attend international conferences, peer review work from other institutions and sit on advisory boards.  

Our Employer Reputation Survey asks global employers which institutions, in their experience, supply the best graduates into their workplaces.  We weight the views from our academic reputation survey at 40 percent, and the views from the employer reputation survey at 10 percent.  

In addition to reputation, we place a firm focus on research impact. In the QS World University Rankings, this takes the form of our citations per faculty indicator, weighted at 20 percent. Put simply, it measures the volume of citations being achieved on average by the academic staff at the institution. It's a reasonably safe assumption that a higher volume of citations means that the academics at those institutions are publishing in top quality journals, engaging in strong collaboration and working on topics that merit a wide readership. We collect research output data from Elsevier's Scopus database.

The faculty student ratio indicator measures the learning and teaching environment of the university. This is a simple measure, dividing the number of students by the number of faculty staff. The more academic staff that are available per student, the more we may assume that an institution has adequately funded and resourced their teaching commitments. Lecturing, supervision, curriculum design, marking, and pastoral care all require a strong staff headcount, so this metric allows students to see how well-resourced different universities are in this respect. 

Finally, we look at the internationalisation of the student experience. Two metrics, International Student Ratio and International Faculty Ratio, account for 5 percent each, bringing the final weights to 100 percent. QS’s focus has always been on fostering internationalisation in education, and by keeping this indicator, we allow students to assess which universities will give them that experience. It serves two broad purposes. Firstly, how diverse will the experience be on campus, in terms of making friends and being taught by staff from all over the world? Secondly, how attractive is that university to international students and staff?
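Taken together, the weights described above can be sketched as a simple weighted sum. The snippet below is an illustrative sketch only, not QS's actual scoring code: it assumes each indicator has already been normalised to a 0–100 scale, and it assigns the faculty-student ratio the remaining 20 percent weight implied by the stated weights summing to 100 percent.

```python
# Illustrative sketch of a weighted composite score; not QS's actual code.
# Assumes each indicator score has already been normalised to 0-100.
WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "citations_per_faculty": 0.20,
    "faculty_student_ratio": 0.20,  # assumed: the remainder needed to reach 100%
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def overall_score(indicator_scores: dict) -> float:
    """Weighted sum of normalised indicator scores (each on a 0-100 scale)."""
    return sum(WEIGHTS[name] * indicator_scores[name] for name in WEIGHTS)

# Example: a hypothetical institution's normalised indicator scores.
example = {
    "academic_reputation": 90.0,
    "employer_reputation": 80.0,
    "citations_per_faculty": 70.0,
    "faculty_student_ratio": 60.0,
    "international_faculty": 95.0,
    "international_students": 85.0,
}
print(overall_score(example))  # -> 79.0
```

Because the weights sum to one and every indicator is on the same 0–100 scale, the composite score is also bounded between 0 and 100, which is what makes institutions directly comparable in the final table.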

Are there plans to include any additional indicators in the future?  

We review our methodology annually and have a formal process for adjustments or additions. We seek advice from our Global Academic Advisory Board (currently around forty global academics who have expertise in their field) and assess the evidence and impact that any adjustments would have.  

It's important to remember that QS does not just provide rankings – we are a performance insights partner to the higher education sector, and it may be that there are other more natural platforms for additional insights rather than a ranking. 

Is the methodology accurate and reliable? 

QS ensures the QS World University Rankings are as accurate as possible. Each of the indicators is very carefully collated, using a mixture of surveys and data from QS research partners. 

To measure a school’s academic reputation, surveys are sent to 100,000 academic experts who are asked to assess the research excellence of other universities. These experts are not allowed to vote for their own school but instead must select another school they believe has excellent research. This is to remove any potential bias. Similarly, for the employer reputation indicator, 50,000 employers are asked to identify institutions from which they source the most competent, innovative, effective graduates in our QS Employer Reputation Survey.

Universities are asked how many faculty members and students they have. To some extent, this relies upon universities telling the truth. However, if the figures look drastically different from previous years, we validate them against regional averages and past submissions. We won’t accept the figures unless universities can explain the change.

The rest of the data is sourced from QS’s research partners at Elsevier. All citations data is sourced using Elsevier’s Scopus database, the world’s largest repository of academic journal data. This is completely unbiased, and universities have no control over this data. 

To ensure reliability, QS ensures stability in the ranking methodology. We are careful about making changes to indicators in order to ensure that the data is comparable year on year. Any changes we make usually involve our advisory boards and go through a long consultation process. This means that if an institution’s ranking changes in the index, it is because something has happened to affect its performance, not because QS has changed the methodology.

How recent is the data when the rankings are published?  

QS runs an annual data collection cycle. Institutional data is based on the most recent full reporting year for the institution.  

Our reputation surveys that fuel the Academic Reputation and Employer Reputation metrics are run annually but take into account responses from the previous five years, with lower weighting given to the data that is from four and five years ago. This means we can be confident in the reputation of the institution over time. 

The data on research and citations looks at a five-year period for research papers, and a six-year period for citations – after all, it takes time for a paper to be cited by other academics after it is published. We consider citations received up until the end of the calendar year prior to publication. These approaches ensure our evaluations are as up to date as possible.
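As a rough sketch of how those two windows interact, the snippet below counts citations per faculty member under the stated rules: papers from a five-year publication window, and citations received up to the end of the calendar year before the rankings are published. The data structures, field names and cut-off years are hypothetical, purely for illustration.

```python
# Hypothetical sketch of the citation windows described above; not QS's code.
RANKING_YEAR = 2021                                      # assumed publication year
PAPER_WINDOW = range(RANKING_YEAR - 6, RANKING_YEAR - 1)  # papers from 2015-2019
CITATION_CUTOFF = RANKING_YEAR - 1                        # citations up to end of 2020

def citations_per_faculty(papers: list, faculty_count: int) -> float:
    """papers: list of {'year': int, 'citations_by_year': {year: count}}.

    Counts citations only for papers published inside the five-year window,
    and only citations received up to the cut-off year (a six-year span).
    """
    total = 0
    for paper in papers:
        if paper["year"] in PAPER_WINDOW:
            total += sum(count for year, count in paper["citations_by_year"].items()
                         if year <= CITATION_CUTOFF)
    return total / faculty_count
```

A paper published in 2019 can therefore still accumulate citations through 2020, while citations arriving in the rankings' own publication year are held over to the next cycle.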

Does the methodology have any limitations? 

All rankings have their limitations. There is certainly no single right answer, which is why QS believes in competition in rankings to provide consumers with multiple points of view.  

We encourage people to look beyond the rankings to develop a broader perspective on the things that are important to them in understanding university performance.  

Our classifications provide different lenses for rankings users to explore the types of institutions important to them, as do our different rankings exploring dimensions such as particular subjects, regions and employability. 

Some of the most common criticisms include: 

  • Institution size. Rankings have been criticised for favouring large universities. The Shanghai Jiao Tong Academic Ranking (SJT) certainly favours large, well-funded institutions with an emphasis on science, and most of its indicators are not adjusted for size. In QS’s World University Rankings, by contrast, all the hard data indicators are adjusted for size. The unprompted nature of our survey design also means that respondents must think about the universities which they actively know produce great research, and employers have to actively think about the universities they seek to recruit from. 

  • Bias towards the natural and life sciences. Measures such as citation counts do favour universities which are strong in the fields of medicine and natural sciences, where there is a strong publishing and citation culture. Nevertheless, within our Peer Review, QS normalises across five broad subject areas – life sciences, natural sciences, IT and engineering, the social sciences, and the arts and humanities – as well as across geographies. This ensures that universities that are strong in the humanities or social sciences have almost as good an opportunity to feature in our results as those strong in the sciences. 

  • Anglo-Saxon bias. A common criticism of global rankings is that they favour universities which publish in the English language, because most journals counted by bibliometric databases (counts of papers and citations per university) are in English. The use of English in academia is widespread and many of the world’s most impactful pieces of research are produced in English. The dataset that we use from our research partners at Elsevier provides ever increasing coverage of non-English journals. The QS Peer Review is independent of any such language bias, and QS has gone to great lengths to produce our surveys in a range of languages, so as not to disadvantage non-native English-speaking academics.  

How do you verify the data you collect? 

QS goes to great lengths to verify all the data collected, using multiple sources. Citations data is collected from Scopus (provided by Elsevier) and together we ensure accurate grouping of the data. Each affiliated college of each university needs to be incorporated into the analysis. 

Most of the hard data criteria are collected directly from universities, using a system which records the time of the entry and the person making it. All data is verified against official government statistics, such as IPEDS and HESA, where available, as well as against university websites. Any inconsistencies are followed up directly with each university and resolved. This is a huge task, requiring a team of researchers speaking many different languages, but it sets us apart from many other rankings. 

Do you classify different types of university? 

QS classifies institutions across four different measures: the school’s number of students, its subject range, its age and its research productivity. 

Arts and humanities-focused universities, for example, produce far fewer research papers than science-focused ones, so sub-rankings can be drawn by classification. QS also publishes sub-rankings by the five broad subject categories defined earlier: life sciences, natural sciences, IT and engineering, the social sciences and the arts and humanities. 

How transparent is the QS World University Rankings methodology? 

QS prides itself on being transparent. Full details of the methods used to compile the QS World University Rankings can be found here. 

QS also runs frequent webinars throughout the year featuring a Q&A section at the end. This allows audience members to engage directly with QS experts and ask any specific questions they might have. Details of upcoming webinars can be found here.

How well accepted is the QS methodology? 

The QS methodology is well established and accepted. Government officials and policymakers all around the world are keen users of the QS World University Rankings. In a number of countries governments have established specific objectives for the development of their higher education systems based on QS rankings.

The rankings are an objective measure of a country’s university system. They are a powerful tool for spotting areas of strength and weaknesses that must be addressed, and for understanding how a system compares with the rest of the world. 

What are some of the benefits of the QS World University Rankings methodology? 

  • Strategic planning. QS World University Rankings provide a trusted and objective benchmark for university leaders to assess their efforts to develop the performance of their university. 

  • Benchmarking. Seven years of prior results are available to enable university leaders to benchmark the evolution of their university compared to other institutions. 

  • Measuring quality. Our use of citation data provides an indicator of the quality of research output. Universities can then go into greater depth by utilising tools provided by Elsevier to identify distinctive research competencies and research productivity at the individual faculty level. 

  • International recognition and brand awareness. QS reputation surveys provide the best measure available today of the strength of a university brand amongst other academics and employers. In a global education marketplace, this is a vital measure. 

  • Clarity. QS methodology is straightforward and transparent, making the rankings widely accessible and easy to understand. 

There are similar benefits for employers. The QS World University Rankings enable employers to plan their recruitment campaigns amongst highly rated universities anywhere in the world. In the past, a company expanding into a new market would have had to rely on local word of mouth when planning to recruit graduates. Today it has a reliable basis for its choice of target university. The rankings encourage employers to hire graduates from universities previously unknown to them. 

For students and parents, the benefits of the World University Rankings are obvious. Any candidate seeking to study abroad no longer has to rely on the advice of an agent who may be on commission to recommend a particular university. They can now use objective data, compare universities for their different strengths and make informed short lists.  
