July 21
Q&A About Rankings for International Students Part 4 of 6

For international students considering a college education in the United States, media rankings of colleges are a leading source of information and are often relied upon when students explore their options. The following question is typically encountered by international admissions counselors and independent educational consultants working with international students. The answer may be useful in guiding students who want to know more about rankings.

There are so many rankings, including world rankings; I don’t know which ones are reliable. How do I know if some are more reliable than others in helping me do research on possible college options?  

There is no such thing as a reliable college ranking, since each ranking makes assumptions about which features of an institution matter most. College rankings are designed with specific goals in mind. In fact, rankings are not really rankings at all, because they often adjust the data to reinforce a pre-ordained order rather than letting the best institutions rise to the top. Moreover, many rankings measure things that affect undergraduates only in the most remote ways, such as the research activity of the faculty or how many faculty members have won a Nobel Prize.

 

Occasionally, searching for college rankings will lead you to unscrupulous "lead generation" sites. Lead generation sites are often owned by marketing companies that unaccredited and for-profit colleges hire to find new customers. These sites generally promise to help you find the "right" college or to give you access to special lists, rankings, and other information. A good way to tell whether a site is a lead generation site is if it asks you to provide your contact information before you can proceed. Giving out your personal contact information may subject you to aggressive sales calls and tactics from non-reputable colleges. These sites often feature colleges that have paid for a sponsored placement. When looking for colleges, always consider whether a site is offering something truly helpful. Ultimately, lead generation sites are more useful to the colleges that hire them than they are to prospective students.


July 21
Q&A About Rankings for International Students Part 3 of 6

For international students considering a college education in the United States, media rankings of colleges are a leading source of information and are often relied upon when students explore their options. The following question is typically encountered by international admissions counselors and independent educational consultants working with international students. The answer may be useful in guiding students who want to know more about rankings.

Given the 4,000 or so colleges in the U.S., I find it hard to know where to begin my research. Should I use rankings to help me narrow my search, and if so, how?

There are indeed many colleges in the United States. The first thing you should do is start with yourself. Consider the kind of college experience you want to have and what you hope to accomplish while you are there. Think about what you might like to study, what your favorite things about your current school are, and what size school you want to attend. You should also make sure that a school offers a few majors you are interested in and has a compelling academic philosophy. These are the considerations that help you find a good "fit" between you and an institution.

 

Rankings tend to correlate well with admission factors and institutional wealth. Many of the highly ranked schools are well thought of, at least in terms of what the rankings measure. However, you should never cross a school off your list simply because it is not highly ranked. The methodology of rankings tends to produce artificially large differences between institutions.


July 21
New Research Suggests That College Rankings Promote Pre-established Order Among Colleges

University Rankings In Critical Perspective

The Journal of Higher Education, Volume 84, Number 4, July/August 2013

Brian Pusser and Simon Marginson

In University Rankings In Critical Perspective, university rankings are recontextualized within a discourse of power that illuminates their far-reaching effects on the globalization of higher education. Brian Pusser and Simon Marginson argue that rankings contribute to a convergence of institutional missions that can undermine institutions' ability to meet localized educational needs. Moreover, rankings are a tool for powerful state actors to legitimize their standing atop the international prestige hierarchy by sanctioning global competition on terms most favorable to hegemonic universities. Overall, Pusser and Marginson conclude that university rankings contribute to a neo-imperial project that solidifies traditional forms of power and unequal competition between nation states.

Pusser and Marginson argue that a critical-theoretical perspective attentive to power is useful for the study of higher education. They see states as fulfilling a variety of ends related to the concentration of power for elite actors. Institutions of higher learning exist in a nexus of power relations and are often recruited for nationalistic or state-building agendas. As state agents, universities can find their goals becoming co-extensive with the interests of moneyed elites, lending legitimacy to neo-liberal values. Elite research universities, which are consistently found at the top of the rankings, are broadly responsive to the needs of powerful families, professions, and other authorities. Since the best-resourced institutions consistently sit at the top of most rankings, the rankings ultimately serve as a framework by which the hegemony of leading institutions is perpetuated across the postsecondary education landscape. Overall, a critical-power perspective reveals that rankings do not reflect objective ratings of universities so much as they express normative agendas that typically serve elite actors.

For Pusser and Marginson, rankings contribute to a neo-liberal meta-state project that confers power and prestige on English-speaking nations by engaging universities elsewhere in a contest that favors traditional elites. Given the diversity of countries, contexts, and educational needs around the world, rankings would not be expected to have such broad appeal; yet they have achieved exactly that, contributing to a convergence of educational goals across many disparate contexts. In short, college rankings are an efficient mechanism by which the template of highly selective, research-oriented institutions can be propagated around the world. Using a conception of power informed by Michel Foucault, the authors argue that rankings have socialized, or "disciplined," other universities into behaving in ways that serve the interests of the most powerful agents.

The disciplining feature of rankings is unfortunate because it undermines the ways in which global institutions could better serve their local contexts. Currently, no major ranking awards merit for promoting accessibility or other forms of social justice. In the data that Pusser and Marginson collected, none of the medical schools ranked highest for "Social Mission Score" also appeared in U.S. News & World Report's top research category, and only the Morehouse School of Medicine was ranked among the top ninety institutions in the primary care category for 2010. Presumably, there could be a variety of rankings that reward institutions serving a number of heterogeneous contexts, but the available data point strongly toward a convergence of interests most favorable to hegemonic universities. More information on the sociological and institutional effects of college rankings can be found in the report of the NACAC Ad Hoc Committee on U.S. News & World Report rankings, available here.

The full report can be found here.


June 26
Will U.S. News Rank Community Colleges in the Future?

The new Community College Directory published by U.S. News includes general information about 950 community colleges, drawing on data from the Integrated Postsecondary Education Data System (IPEDS) such as tuition and fees, financial aid, enrollment, gender distribution, and student-to-faculty ratios. A user can search schools by name, state, ZIP code, or degree/certificate program. Although the directory currently doesn't rank community colleges by specific factors, U.S. News says it is "studying the possibility of publishing evaluative information on community colleges and their programs in the future."

Read U.S. News' FAQ about the directory here.

June 19
Researchers Conclude Efforts to Change a University's Rankings are a "waste of time."

In response to rankings such as U.S. News & World Report's, many university administrators take actions such as setting strategic goals, allocating resources, or even tying performance to bonuses in a direct attempt to boost their institution's position in the rankings. However, it is unclear what it would really take to move up significantly in the rankings, or whether it is even possible to do so. Researchers in a new report examined these questions by creating a model that uses the same methodology described by U.S. News.

Gnolek, Falciano, & Kuncl (2014) conclude that "meaningful rank changes for top universities are difficult and would only occur after long-range and extraordinarily expensive changes, not through small adjustments." For example, it would be extremely difficult, if not impossible, for a university ranked in the mid-30s to move into the top 20 without a substantive change in its academic reputation score, the category U.S. News weights most heavily (a combined weight of 22.5 percent). This score is often based on long-held perceptions that change very little over time. Even if that change occurred, the university would also need to spend a very large amount of money to change other factors at the same time.

Furthermore, in asking how much change should simply be considered "noise," the researchers found that for universities ranked beyond the top 40, differences of +/-4 (or even greater) should be considered noise, while universities ranked in the top 40 should similarly treat differences of +/-2 as noise.
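
As a rough illustration (a minimal sketch, not the model the authors built), consider how a weighted composite score responds to a large improvement in a single factor. Only the 22.5 percent reputation weight comes from the post; every other factor name, weight, and score below is hypothetical:

```python
# Minimal sketch of a weighted composite score. Only the 22.5%
# reputation weight is taken from the post; all other names, weights,
# and scores here are hypothetical.

WEIGHTS = {
    "reputation": 0.225,           # from the post: the heaviest U.S. News factor
    "graduation": 0.300,           # hypothetical
    "faculty_resources": 0.200,    # hypothetical
    "selectivity": 0.150,          # hypothetical
    "financial_resources": 0.125,  # hypothetical
}

def composite(scores):
    """Weighted sum of 0-100 factor scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

before = {"reputation": 60, "graduation": 80, "faculty_resources": 70,
          "selectivity": 75, "financial_resources": 65}
after = dict(before, financial_resources=75)  # a costly 10-point gain

print(composite(before))  # 70.875
print(composite(after))   # 72.125: a 10-point gain in one factor moves
                          # the composite by only 10 * 0.125 = 1.25 points
```

Under these assumed weights, even an expensive ten-point improvement in a single factor shifts the composite by barely a point, which is roughly the size of the fluctuations the researchers classify as noise.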

So what's the advice to campuses? 

"Universities might be best served by focusing their efforts and resources on what they do best, not what is being measured by U.S. News, by making improvements that are in line with their own goals and are centered around the success of their unique student population."

The authors hope their findings will reduce the pressure on universities to constantly respond to small changes in the rankings or to spend money on factors aimed exclusively at changing rankings, as well as combat the tendency to manipulate numbers. Indeed, the researchers note that "the focus on wealth, fame, and exclusivity that comes from placing emphasis on U.S. News rankings creates very real issues and highlights some of the inherent problems with rankings themselves."

Listen to one of the authors talk about the findings in a panel recorded here.

June 16
Q&A About Rankings for International Students Part 2 of 6

For international students considering a college education in the United States, media rankings of colleges are a leading source of information and are often relied upon when students explore their options. The following question is typically encountered by international admissions counselors and independent educational consultants working with international students. The answer may be useful in guiding students who want to know more about rankings.

My parents say they will only pay to support my studies in America if I am accepted by a Top 100 college. I really want to study in America but I don’t know if I can reach that goal. How can I convince my parents to not care so much about the rankings?  

It is important to understand that there are no official rankings in the United States. The rankings your family is familiar with are "media rankings," designed primarily to sell magazines. The people who rank colleges are not authorities on higher education, nor do they even work in education; rather, they are editors and journalists. Most common rankings use a mixture of data that is biased toward a specific goal, and different rankings reward different qualities in colleges. Research has shown that your choice of major and your academic performance matter much more for your job placement and life satisfaction than attending a highly ranked college. Factors that do turn out to matter include small class sizes, a high percentage of faculty holding a terminal degree, and rigorous coursework.




June 10
Methodology Behind the Guardian University Rankings in the UK Now Available

The methodology behind the Guardian's rankings of universities in the UK has recently been posted online.

The Guardian's University League tables rank universities and their courses based on data collected by the Higher Education Statistics Agency and data from the annual National Student Survey in the UK (which is sent to final-year undergraduates and takes less than five minutes to complete). Scores are created from a formula combining several different factors and weights. Different results are common across rankings depending on the publication and the methodology used; a small sketch below illustrates why. What do you think of the Guardian's methodology?
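
To see why methodology drives the outcome, here is a minimal sketch with entirely hypothetical institutions, scores, and weights: the same underlying data produce different orders once the weighting scheme changes.

```python
# Minimal sketch (all data hypothetical): the same three institutions
# swap places when the weighting scheme changes.

schools = {
    "Univ A": {"satisfaction": 90, "spend": 60},
    "Univ B": {"satisfaction": 75, "spend": 85},
    "Univ C": {"satisfaction": 82, "spend": 72},
}

def rank(weights):
    score = lambda name: sum(weights[k] * v for k, v in schools[name].items())
    return sorted(schools, key=score, reverse=True)

# A satisfaction-heavy scheme vs. a resources-heavy scheme
# (both weightings are invented for illustration).
print(rank({"satisfaction": 0.7, "spend": 0.3}))  # ['Univ A', 'Univ C', 'Univ B']
print(rank({"satisfaction": 0.3, "spend": 0.7}))  # ['Univ B', 'Univ C', 'Univ A']
```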

For more resources on college rankings or to view other ranking methodologies, visit NACAC's webpage on College Rankings.

May 29
Many Questions Remain: APLU Forum Discusses the President's Rating Proposal, Alternatives, and Viability

In April, the Association of Public and Land-grant Universities (APLU) held a forum on the President's proposal to create a postsecondary institutional rating system. Panelists discussed important considerations in rating colleges, including alternative ideas, unintended consequences of proposed strategies, and possible ways to account for student inputs when looking at outcomes. Below are some of the highlights:

  • A general consensus framed the forum discussion: there is a serious problem with some very low-performing institutions that continue to receive Title IV money, and something needs to be done. However, agreeing on the best way to address this problem is where things get complicated.

  • Panelists generally agreed that the underlying goal(s) of the rating system remain unclear. Is the main goal to affect institutions by holding them accountable in order to get better outcomes, to affect student choices by offering comparisons of colleges, or both? As Sandy Baum pointed out, answering the "what are we really trying to do?"/"what is the problem we're trying to solve?" question is a critical first step before considering how to design such a system, whether the proposed suggestions would address the problem, or whether they would have unintended consequences.

  • Sandy also raised the following critical questions: "Are we devoting a lot of resources to a universal system in order to avoid singling out those that are causing the most serious problems?" and "What outcomes do we expect for under-prepared, at-risk students? What's the best way to achieve those outcomes?"

  • As an alternative to the ratings proposal, David Bergeron from the Center for American Progress suggested returning accreditation to a "community of practice" model that is about improving outcomes for students (with its connection to aid limited) and requiring institutions to have "skin in the game" on student loans (except at the highest levels of performance).

  • Christine Keller discussed APLU's alternative plan, which suggests enhancing consumer information tools and, separately, designing a stronger Title IV institutional eligibility process that could involve three performance tiers (with outcomes adjusted by a student readiness index).

  • Panelists who were asked to talk about using student input-adjustment measures to fairly compare outcomes of similar institutions agreed that the notion is essential; however, the viability of such a model based on the current proposals was debated.

To watch the whole forum, or parts of it, click here.


April 29
The Potential for Harmful Consequences: When Students Take Rankings at Face Value

Another study by Michael Sauder, an associate professor of sociology at the University of Iowa, and Wendy Espeland, professor of sociology at Northwestern University, outlines important implications of the way students interpret rankings. Strength in Numbers? The Advantages of Multiple Rankings compares the influence of U.S. News & World Report rankings in legal education (where U.S. News holds a virtual monopoly on rankings of law schools) to the use of multiple rankings in the realm of business schools, arguing that the latter is the lesser of two evils. One of the major criticisms of having one predominant source of rankings, law school administrators argue, is that prospective students tend to be "uncritical consumers" of rankings, accepting them at face value without considering the underlying methodology. Whereas the ambiguity created by multiple rankings at least encourages prospective students to be more cautious in their interpretation, having a single source makes it more likely that students will misinterpret the information as "objective" and overlook other factors of school quality that may be more important but are not reflected in the quantitative and subjective nature of rankings. Furthermore, the authors point out that even multiple rankings "incorporate only a limited range of the possible indicators of quality, typically including factors that are easy to measure, widely available and standardized."

Even though multiple rankings offer advantages over a single source of rankings in the short term, they do not eliminate the long-term risks of using rankings in general. Sauder and Espeland remind us to be mindful of "being seduced by the apparent objectivity and rigor of rankings" and of the dangers inherent in simplifying complex systems such as institutions of higher education.

April 25
Q&A About Rankings for International Students, Part 1 of 6

College rankings are controversial in the United States. To make sure prospective students from overseas understand the nature and role of rankings in the U.S., we offer the first in a series of Questions & Answers for international students:

Are the U.S. News & World Report rankings considered “official” rankings of U.S. colleges and universities?

NO. The U.S. News & World Report ranking, along with a growing list of other college ranking publications, is a commercial activity by a private media corporation. The rankings generate a great deal of revenue and visibility for U.S. News & World Report and should be interpreted as a private endeavor. Each media company that ranks colleges, including U.S. News & World Report, has a different approach to ranking colleges. There is no officially accepted way of judging college quality in the U.S. NACAC offers more information about how to interpret and use college rankings here.

Does the U.S. government have an official ranking of colleges and universities?

NO. At least not yet. President Obama has proposed a federal college ratings system, which is currently under development. NACAC offers more information here about the President’s proposal.
