
Rankings – Higher education systems vs universities

26 MAR 2014


International university rankings have become a familiar feature on the higher education scene. As their impact has grown, reactions have followed, ranging from enthusiastic adherence through passive resistance to outright criticism.

Thanks to this criticism, methodologies are improving: guidelines and safeguards are being developed (for example, the Berlin Principles) and their uptake monitored (for instance, by the International Ranking Expert Group).

Yet serious criticisms relate to the fact that, by definition, these rankings focus exclusively on individual institutions – the world-class universities – which are found only in a small cluster of countries.

Thus, university rankings ignore the vast majority of institutions worldwide that cannot compete on the same playing field as world-class universities. In turn, policy-makers tend to prioritise a small number of institutions in order to improve their country’s position in the rankings, often at the expense of the rest of the higher education system.

To counter these unexpected and perverse effects, attempts are being made to measure, rank and compare national higher education systems rather than individual institutions. To figure out whether these attempts are successful, this article compares their results with those obtained by university rankings.

Selecting and comparing rankings

As a first step in the comparison, university rankings and system rankings need to be selected. The Academic Ranking of World Universities – usually referred to as the Shanghai ranking – and the Times Higher Education and QS rankings are selected as the most popular and well-established league tables. Because of its innovative approach, the Webometrics ranking is added to the ‘big three’.

As far as system rankings are concerned, the choice is limited, and Universitas 21 – or U21, led by the University of Melbourne in Australia – stands out as an obvious pick, with currently no real competitor, even though earlier works have explored ways to assess entire systems.

U21 uses 22 measures – ‘desirable attributes’ – grouped into four categories or modules: resources, environment, connectivity and outputs, weighted at 25%, 20%, 15% and 40% respectively.
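Assuming each module score is expressed on a common scale, the overall score then amounts to a straightforward weighted sum:

Overall = 0.25 × Resources + 0.20 × Environment + 0.15 × Connectivity + 0.40 × Outputs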

Most measures draw from conventional and verifiable sources – the OECD, University Information Systems and SCImago data, among others – and they provide a comprehensive view of the most important facets of higher education systems.

Particularly interesting is the inclusion of the unemployment rates of university graduates to reflect external efficiency, even if the measure needs some fine-tuning.

Another welcome feature is the effort to reflect the regulatory environment of higher education systems. However, building an indicator for this dimension proves elusive and relies on a combination of sources – a survey of U21 institutions, and data from renowned institutions and from websites.

Finally, the use of an ‘overall’ indicator built on the four modules’ indicators is highly dependent on the weights of its components and, therefore, remains controversial because of the arbitrariness of such weights – a pitfall shared by university rankings.
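A minimal sketch in Python, using made-up module scores, shows how sensitive the overall ranking is to this choice: two hypothetical countries swap places when the emphasis shifts from outputs to resources.

```python
# Hypothetical module scores: (resources, environment, connectivity, outputs)
scores = {
    "Country A": (90, 80, 70, 60),
    "Country B": (60, 70, 80, 90),
}

def overall(modules, weights):
    """Weighted sum of the four module scores."""
    return sum(m * w for m, w in zip(modules, weights))

u21_weights = (0.25, 0.20, 0.15, 0.40)  # U21's published weights
alt_weights = (0.40, 0.25, 0.15, 0.20)  # an equally defensible alternative

for name, modules in scores.items():
    print(name, overall(modules, u21_weights), overall(modules, alt_weights))
# Country A: 73.0 under U21's weights, 78.5 under the alternative;
# Country B: 77.0 and 71.5. The two countries swap ranks purely
# because the weights changed.
```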

Then, the results of the four selected university rankings need to be normalised at the country level so that the size effect is neutralised. More specifically, the number of top universities in each country is weighted by the higher education-aged population of the country. This indicator can be seen as reflecting the ‘density’ of world-class universities in each nation.
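As a rough illustration of that normalisation, here is a Python sketch with invented figures (the real counts come from the rankings themselves and the population data from demographic statistics):

```python
# Invented figures: (number of top-ranked universities, higher
# education-aged population in millions). Real values come from the
# rankings and demographic sources.
countries = {
    "Denmark":     (5, 0.35),
    "Switzerland": (7, 0.45),
    "Largeland":   (40, 30.0),  # a hypothetical large country
}

for name, (top_unis, pop_millions) in countries.items():
    density = top_unis / pop_millions  # top universities per million people
    print(f"{name}: {density:.1f} top universities per million")
# A large country can field many top universities yet show a low
# density, which is why raw counts and densities need not correlate.
```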

Three observations emerge. First, there is no significant correlation between the number of top universities in a country and their density. Second, the normalised results of the four selected university rankings are very similar: their methodologies differ substantially on some points, but share enough common features to produce convergent outcomes.

Third, countries that can boast at least one top-400 university in each of the four rankings constitute a rather homogeneous club of fewer than 40 members, mostly high-income economies.

Across the four rankings, the density of top universities is highest in small, rich countries – Denmark, Switzerland, Sweden and Finland, followed by Ireland, the Netherlands and Hong Kong.

Similar outcomes

Comparing the four normalised university rankings with the results produced by U21 (2012 edition) leads to a clear conclusion: there is a strong and positive correlation between the two sets of results.

To double-check this finding, correlations are also examined for the 2013 editions of both the Shanghai and U21 rankings, and the results show an even stronger association.

A further test correlates the results of each of the four U21 categories with those of the major university leagues, as sketched below. The correlations are significant and largely positive, regardless of the university league considered (with the Shanghai ranking showing the strongest association) and the U21 category selected (with resources and outputs correlating most strongly).
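A sketch of that test in Python with scipy, using invented score vectors (the real inputs would be each country's U21 category scores and its normalised league results), computes a rank correlation for every category/league pair:

```python
from scipy.stats import spearmanr

# Invented five-country score vectors; the real data cover ~40 countries.
u21_categories = {
    "resources":    [9.1, 8.4, 7.7, 6.2, 5.5],
    "environment":  [8.0, 8.8, 6.9, 7.1, 5.0],
    "connectivity": [7.5, 8.1, 8.6, 5.9, 6.4],
    "outputs":      [9.4, 8.9, 7.2, 6.0, 5.1],
}
league_densities = {
    "Shanghai":    [3.2, 2.9, 2.1, 1.4, 0.8],
    "THE":         [3.0, 3.1, 1.9, 1.5, 0.9],
    "QS":          [2.8, 3.0, 2.2, 1.3, 1.0],
    "Webometrics": [3.1, 2.7, 2.0, 1.6, 0.7],
}

# Spearman's rho compares rank orders, so it tolerates the different
# scales used by the various rankings.
for cat, cat_scores in u21_categories.items():
    for league, densities in league_densities.items():
        rho, p = spearmanr(cat_scores, densities)
        print(f"{cat} vs {league}: rho={rho:.2f} (p={p:.3f})")
```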

The only noticeable exception to the convergence of the two types of rankings is the United States, which comes first under U21 but does not appear among the winners of the university leagues when analysed in terms of density.

 

Read full article: University World News