November 19
Looking Beyond Methodologies: What is the Role of Firms in Driving Global Rankings?

A critical moment in the history of global rankings of universities occurred in 2003, when Shanghai Jiao Tong University in China developed the first global ranking, the Academic Ranking of World Universities (ARWU). Since then, the landscape of global rankings has continued to grow. In the paper “World university rankings: On the new arts of governing (quality),” authors Susan Robertson (University of Bristol) and Kris Olds (University of Wisconsin-Madison) discuss different explanations for the development and growing influence of global rankings. These explanations range from viewing rankings as accountability measures, to viewing them as part of international status competition, to viewing them as the basis of a new service industry.

This post will focus more specifically on a central point raised by Robertson and Olds: that while there is much discussion of different global ranking methodologies, “the role of firms, such as Elsevier and Thomson Reuters… in fueling the global rankings phenomenon, has received remarkably little attention.” The authors argue that current explanations of the influence of global rankings are missing a critical piece, “one that places many players driving the process on centre stage, with their interests in full view.” Let’s take a look at this issue more closely.

As discussed in previous posts on NACAC’s Counselor’s Corner, global ranking methodologies tend to focus predominantly on research publications, citations, and reputational scores as indicators of “quality,” a significant shift compared with rankings in the U.S., which rely more on publicly available data on indicators such as completion rates and class sizes. Both companies that Robertson and Olds highlight, Thomson Reuters and Elsevier, house databases (such as Web of Science, Scopus, and, more recently, the Global Institutional Profiles Project) that provide a huge portion of the data used in many global ranking formulas (see the table in NACAC’s previous blog post showing where such data are used within the QS World, Times Higher Education World, and U.S. News & World Report global university rankings). Publishers pay for this data to create the rankings and, in turn, to sell newspapers. However, Robertson and Olds also point out that the same data can “feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities” for the kinds of knowledge they think they need in order to move up in the rankings. And we have already seen instances in which some countries place enormous weight on the rankings, including countries that give more resources to “top rated” universities.

Another point the authors make regarding the influence of firms in the spread of global rankings is:

“One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly not stated) logic for such a frequency.” 

Indeed, it is very unusual for the top rankings to change significantly in the short term, let alone year to year without a shift in methodology. So beyond an “informative” ranking, what is the purpose behind firms collecting this data every year from institutions? Furthermore, what are the profits generated each year for firms and publishers? And beyond publishers of the rankings and firms selling the data, the authors also point out that we must consider “the role that universities themselves play in enabling rankings to not only continue, but expand in ambition and depth.” 

Placing the industries and people that fuel the rankings, as well as their interests and profits, in full view would certainly provide a more complete picture of the global rankings industry, as Robertson and Olds have argued. Indeed, the comparisons made by global rankings are fueled by many players and contexts that are important to consider alongside other explanations.

November 19
As Crazy As We Will Allow It to Get

​“What is the craziest ranking you have ever seen?” 

This question, recently submitted to the Counselor’s Corner, provoked several interesting lines of thought on formulating an answer. The first brought the following quote to mind:

“People are desperate to measure something, so they seize on the wrong things,” said Mark Edmundson, a professor of English at the University of Virginia, commenting on a PayScale report. “I’m not against people making a living or prospering. But if the objective of an education is to ‘know yourself,’ it’s going to be hard to measure that.”

Mr. Edmundson’s observation was prompted by yet another “return on investment (ROI)” study. But, more importantly for this blog post, he was calling attention to how, with college rankings in particular, the principal goal of finding the best possible match between student and institution can be obscured by erroneous evaluative criteria and misplaced emphases.

So it stands to reason that Mr. Edmundson’s initial point fits in well with a discussion of the “craziness” that can surface in the realm of college rankings. And, within this context, it is no stretch to freely associate “craziness” with several synonyms offered by Merriam-Webster, i.e., “impractical,” “absurd,” and “nonsensical,” to list but a few. For good measure, the term “irresponsible” will be included to describe a consequence, unintended or not, of this type of ranking.

Before proceeding, however, it should be noted that the purpose of this blog post is not to highlight or in any way endorse such rankings. To the contrary, it is hoped that raising awareness that they are out there will help minimize the potential for their misuse or misinterpretation.

That being said, the discussion will begin with a partial list of some of the craziest college rankings I have seen. They are:

“The Best Party Schools”

“Schools with the Most Beer Drinkers”

“Schools with the Most Hard Liquor Drinkers,” and

“Schools with the Most Potheads”

The rankings were posted on the website of an outfit named CollegeAtlas.org, which basically lifted the results of surveys conducted by Princeton Review. Whether this was a veiled attempt at gaining some legitimacy for the feature is unclear, but for good measure, Atlas also included the category rankings for 2014, 2013, and 2012, adding that:

“The 5 top party schools in the US from 2014 stayed the same, however their orders were rearranged, with the exception of West Virginia University, which stayed in fourth place. Syracuse University went from fifth place right to the top, showing the most change in the top 5. There are also 2 new schools added this year: 9th place, Bucknell University (Lewisburg, PA); and 20th place, University of Delaware (Newark, DE). These two schools replaced: 15th place, University of Texas at Austin (Austin, TX); and 17th place, University of Maryland (College Park, MD). Bucknell University didn’t waste any party time as they jumped into the top 10, especially since this is the first time they have shown up in the previous 3 years!”

At first impression I thought, “How nice of them (Atlas/Princeton Review) to keep us current on which schools are trending in this critical area of campus life … I will bet that Texas and Maryland are ratcheting up their toga party schedules after this.”

If the sarcasm appears a bit extreme, one need only consider the irony that prompted it: Atlas/Princeton Review notes West Virginia retaining its fourth-place rank among party schools, while the most recent suspected alcohol-related death of a student occurred at a fraternity house at West Virginia University. Further, only those who have been living in a hermetically sealed chamber for the past ten years or so would be surprised by the findings of the National Institute on Alcohol Abuse and Alcoholism. To quote from the Institute’s website:

“Virtually all college students experience the effects of college drinking – whether they drink or not... The problem with college drinking is not necessarily the drinking itself, but the negative consequences that result from excessive drinking.

College drinking problems

College drinking is extremely widespread:

  • ​About four out of five college students drink alcohol.
  • About half of college students who drink, also consume alcohol through binge drinking.

Each year, drinking affects college students, as well as college communities, and families. The consequences of drinking include:

  • Death: 1,825 college students between the ages of 18 and 24 die each year from alcohol-related unintentional injuries.
  • Assault: More than 690,000 students between the ages of 18 and 24 are assaulted by another student who has been drinking.
  • Sexual Abuse: More than 97,000 students between the ages of 18 and 24 are victims of alcohol-related sexual assault or date rape.
  • Injury: 599,000 students between the ages of 18 and 24 receive unintentional injuries while under the influence of alcohol.
  • Academic Problems: About 25 percent of college students report academic consequences of their drinking including missing class, falling behind, doing poorly on exams or papers, and receiving lower grades overall.
  • Health Problems/Suicide Attempts: More than 150,000 students develop an alcohol-related health problem and between 1.2 and 1.5 percent of students indicate that they tried to commit suicide within the past year due to drinking or drug use.”

Source: NIAAA, November 2014

While I did not originally intend to introduce the foregoing statistics as part of this discussion, labeling the work of a particular publishing enterprise “crazy” or “irresponsible” should, in fairness, have a factual basis. To be sure, the data reported are not only factual but, in my humble opinion, frighteningly so, which is why anything that appears to sanction or condone such destructive behavior, and the places where it appears to flourish (and we never even got to the pothead issue), leaves itself open to judgment.

All right now – are you ready for the “flipside”? To be more specific, because this blog has always sought to provide teachable moments for students and families, it may be possible, the foregoing criticism notwithstanding, that there is a positive use for the subject rankings featured herein.

Imagine, if you will, a best-case scenario in which a student has one or more of the “best party schools” on her list of preferred colleges. Her parents, aware of this and of the schools’ notoriety (according to Atlas/Princeton Review), incorporate this knowledge into a list of questions they will take on their campus visits. At a question-and-answer session where various campus officials sit as a panel, the parents present the following sample queries:

What is the institutional policy regarding the possession and consumption of alcoholic beverages by students living in the residence halls?

Who oversees the administration and implementation of this policy?

Does this policy extend to off-campus but university-affiliated student residences, most particularly, Greek houses? 

What is the protocol for dealing with a student who has been involved in a disciplinary, alcohol-related incident?

How are determinations made on whether a student has an alcohol-management problem?

What support services are available to students identified as having an alcohol-management problem? How are such services implemented, by whom and for how long?

This list is neither exhaustive nor fixed; it can be shaped according to parents’ individual concerns. It would also not be surprising if members of the panel felt a bit put on the spot. But when it comes to the health, welfare, and, yes, the safety of the young person parents are handing over to be educated through a multi-dimensional approach to life-coping skills, they do not want their “return on investment” to be measured in empty alcohol bottles.



Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.

November 13
Citations, Citations, and More Citations: A Closer Look at Global Rankings' Methodologies

As mentioned in our previous post, global ranking methodologies tend to focus primarily on research publications, citations, reputational scores, and international influence as indicators of “quality,” a significant difference from domestic ranking methodologies (which tend to focus more on indicators such as graduation rates, cost, and earnings, to name a few). To take a closer look at this subject, we put the methodologies of three prominent global rankings, QS World, Times Higher Education World, and U.S. News & World Report Best Global Universities, side by side so you can see how each publication factors in publications, citations, reputation surveys, and more.

[Image: side-by-side comparison of the QS World, Times Higher Education World, and U.S. News & World Report Best Global Universities methodologies]

November 10
U.S. News and World Report Releases Rankings of “Best Global Universities”

Last month, U.S. News and World Report (USNWR) released its inaugural “Best Global Universities” rankings. In announcing the company’s plans earlier, USNWR chief data strategist Bob Morse cited the fact that “an increasing number of students plan to enroll in universities outside of their own country” as a motivation. A 2013 report by the Institute of International Education, New Frontiers: U.S. Students Pursuing Degrees Abroad, provides pertinent data and analysis about this trend. Specifically, the report found that from 2010/2011 to 2011/2012, the number of U.S. students pursuing full degrees abroad increased from 44,403 to 46,571 (a 5% increase). The top host countries for U.S. students pursuing full degrees abroad were the United Kingdom (36%), Canada (20%), and France (10%).

The new global rankings differ from USNWR’s domestic lists in being focused entirely on research and reputational measures. As the Washington Post reported, “Factors such as undergraduate admissions selectivity and graduation rates…are absent from the global formula.” Instead, research reputations and citations are featured heavily (which tends to be the case with similar publications that attempt to rank universities globally).

For example, two of the ten indicators in the USNWR global rankings are research reputation scores: a global research reputation indicator (12.5%) and a regional research reputation indicator (12.5%), both of which are compiled using Thomson Reuters’ Academic Reputation Survey. Another 12.5% of a university’s ranking comes from the “number of highly cited papers.” Click here to see the complete list of indicators used and the full methodology behind the new rankings.

Last week, USNWR also released a list of “Best Arab Region Universities,” which uses a different methodology and different sources of data; these rankings rely even more heavily on citations and publications.

See here for more NACAC resources on rankings. And in 2015, look to the NACAC Journal of College Admission for analysis and insight in the forthcoming article, Rankings Go Global.
November 06
Changing Students' and Families' Mindset about Rankings

A recent question submitted to the Counselor’s Corner requests a brief discussion of “what many counselors would like to see change with the present rankings state of affairs”; a “wish list,” as some would put it. For this, we will first look at a synthesis of the comments offered by more than six hundred guidance professionals who responded to the previously cited NACAC survey.

An overview based on the compiled results included:

  • ​That USNWR (and most if not all college rankings enterprises) develop methodologies for measuring the value-added considerations of graduation and retention,
  • That the metrics used provide reasonably accurate indices of student engagement and satisfaction with their undergraduate experience,
  • That a wholesale revision of rankings methodologies would encourage students and their families to evaluate and interpret input, process and output variables according to their own needs, thereby enabling them to generate their own ranking scale.
These three points presume that the rankings are here to stay and that, therefore, we must find a way to live with them. However, there is ample evidence that if the guidance counselor wish list included the power to make rankings “go away forever,” it would be no surprise to see that power exercised.

To expand on this a bit, eliminating the need to undo or unravel the misunderstandings created by rankings would be a welcome change for guidance professionals. More specifically, it would enhance their efforts to help students and families focus first on the characteristics that define a quality program of study along with those which suggest a high quality of life at a given college or university. Shifting the emphasis toward how well post-secondary study will prepare a student to make a living and a life within a setting that nurtures and stimulates personal growth would transform college counseling sessions into something far more personal, meaningful and efficacious.

In the final analysis, a positive change in the college rankings state of affairs is probably not in the hands of the publishers but rather in the minds of students and their families. Understanding and accepting that data on teaching quality, character-developing student involvement, and effective avenues to job and career readiness are not amenable to any current publication-ready format is an essential mindset.

Further, it must be firmly understood, first, that rankings are subjective, limited-use tools that largely reflect what the editors, and not the student, feel is important, and second, that the most valuable resource in the college search and selection process sits in the guidance counselor’s office. There, students have a trained, knowledgeable advocate who keeps their best interests at heart. Most importantly, the counselor’s primary goal is to facilitate their personal empowerment through the acquisition of strong decision-making skills. Gaining mastery of this will enable students and their families to weather the storm and emerge into the sunlight.


Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.​

October 30
What Really Matters?

To continue our discussion, let’s now turn to a recent query asking “what rankings really tell us as opposed to what really matters for a successful well-being.” From this writer’s perspective, there are certain governing realities with regard to rankings, and particularly college rankings, that should be kept in mind.

Rankings appear to strongly appeal to our desire for “order” and, as such, seem to fulfill two demands of modern society.

  • ​They are seen as a method for determining value (the “J.D. Power Syndrome”, if you will) by a source that sorts out and provides an interpretation of voluminous amounts of information on something of popular interest. And,
  • The general public has a strong need to have an authoritative source tell them what is “best”, i.e., “Whom” or “What” is “Number One”. 

In my view, this might be perfectly fine for ranking toaster ovens and Buicks, but the reality remains that colleges and universities are complex institutions with multiple purposes which, to a great extent, rely on human interactions and individual determination in order to function at their best, i.e., to fulfill their mission.

This latter point compels us to acknowledge the content of two recent university job postings that interested candidates were encouraged to consider. The first stated,

“Humanity. Justice. Integrity. You know, wild-eyed San Francisco values. Change the world from here," (University of San Francisco, 2014).

The second read,

“Expectations for all Dominican Employees: To support the University's mission of preparing students to pursue truth, to give compassionate service, and to participate in the creation of a more just and humane world,” (Dominican University, 2014).

Unfortunately, none of these ideals, all of which arguably “really matter for students and their well-being,” translates neatly, if at all, into the current lexicon of college rankings. Rather, students and their families must face the equally unfortunate reality that identifying a list of “Best Colleges” uniquely suited to them cannot be readily done through either ordinal ranking or the sum total of a set of data points.


Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.

October 24
A New School Year, But the Same Old Issues with Rankings

Hello everyone, and welcome to NACAC’s Counselor’s Corner. Whether you are a returning or new reader, it is hoped you will find the content helpful in understanding and, most importantly, properly using some of the multitude of college rankings publications and services.

However, before continuing, it should be noted that the term “service,” as it relates to college rankings, is used with a very necessary reservation. Why? Merriam-Webster Online lists the following among the primary definitions of “service”:

(a)   “Contribution to the welfare of others.”

A point often made on this blog is that the connection between “rankings” and “service” is, in the eyes of many admissions and guidance professionals, tenuous at best. U.S. News and World Report (USNWR) initiated the drive to be recognized as the authority on higher educational quality with its first “Best Colleges” issue in 1983. But while USNWR may be given the benefit of the doubt that “service” was a large part of its original intent, few would agree that it is today.

A general explanation for this opinion can be found among the answers to several questions that have been submitted to the Counselor’s Corner. One in particular seeks reasons for a perceived “’love-hate’ relationship that colleges have with the rankings”. 

A past conversation with two long-time friends, both of whom have had lengthy careers as college admissions representatives, provides a good point of departure on this question. Reflecting on the impact of college rankings on their work, one observed (while the other agreed) that,

“If our [university’s] rank is ‘good’; i.e., higher/better than our main competitors, or if it has improved since the previous year, and/or if the administration is satisfied with our most recent rank, then the pressure is off, at least for the time being. That ranking can then be fully integrated into the institutional marketing plan. Under those circumstances, you could say that we ‘love’ the rankings.”  

Conversely, 

“Even when things remain relatively stable or God forbid, if our ranking slips, the downturn is quickly seen as the first reason why our school is not attracting more applicants and especially those who are among the brightest and wealthiest. Presidents and trustees then issue demands that the admissions office do whatever is necessary to right the ship and get our rank back to where they think it should be. There’s the ‘hate’ with regard to the rankings.”

On a slightly more benign note, my other colleague remarked that an uncomfortable scenario can result when a student, solely on the basis of what is in a college rankings publication, approaches the representative at a college fair and inquires about a program that either is not necessarily the strongest program or, worse, is not offered by the college. “If you tell the student the major they want is not offered, you risk them saying, ‘Well, how can you have such a high rank if you don’t offer this major?’ And there simply is no effective way to explain that (for example) USNWR ranks institutions as a whole and not by the quality of individual programs.”

Further evidence of a less than favorable regard for rankings is contained within the findings of a survey conducted by NACAC’s U.S. News and World Report Advisory Committee (2012) in hopes of gaining an assessment of viewpoints on college rankings. One survey question, responded to by nearly equal numbers of admissions and guidance professionals, asked,

“Are rankings a helpful resource for students and families interested in college information?” 

Only slightly more than ten percent of admissions officers and less than ten percent of the guidance counselors surveyed were in clear agreement that they are. 

A corresponding question on the survey was, “Do rankings create confusion for students and their families?” 

Nearly 50 percent of all respondents (60 percent of whom were guidance counselors) agreed that “Yes, they do create confusion.” In light of this, it is difficult to avoid concluding that the tabulated responses to both queries further underscore the low regard in which college rankings as a whole are held.

In spite of this, Princeton Review, The New York Times, Forbes, Business Insider, Money Magazine, USA Today, and still others publish their own rankings. With wide variations in the methodology used to compile them, it is hardly surprising that students and their families are overwhelmed by the mountain of divergent characteristics, data points, and quantitative formulae. Unless something has been overlooked here, this hardly connotes the “service” defined earlier.


Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012. 

October 07
A Quick Look at LinkedIn's New University Rankings

​Last week LinkedIn unveiled new university rankings based on career outcomes. Here’s a brief overview of what the site includes:

The LinkedIn rankings include a “top 25” list of schools in the United States for each of the following eight career fields: Accounting Professionals, Designers, Finance Professionals, Investment Bankers, Marketers, Media Professionals, Software Developers, and Software Developers at Startups (the rankings also include top 25 lists of schools in Canada and the United Kingdom for five career fields). 

How were the rankings developed? 

From LinkedIn’s blog, here’s what we know about the methodology: to develop the “top 25” lists, LinkedIn first identified top companies where professionals in a specific career choose to work (based on LinkedIn data about which companies attract and retain the most employees in that career). Next, LinkedIn looked at where members in that career went to school. Then, for each school, LinkedIn found the percentage of alumni who work at the top companies identified for that particular career field (only members who graduated in the past eight years were considered). A rough sketch of that last step appears below.
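To make that last step concrete, here is a minimal, hypothetical sketch of the percentage-of-alumni-at-top-companies calculation. Every school name, employer, and number in it is invented for illustration, and LinkedIn has not published its actual pipeline; this only mirrors the steps described above.

```python
# Hypothetical illustration of the "percentage of alumni at top companies"
# step described above. All names and numbers are invented.
from collections import defaultdict

# Step 1 (assumed already done): "top companies" identified for one career field.
top_accounting_companies = {"Deloitte", "KPMG", "EY", "PwC"}

# Step 2: recent graduates (past eight years) in that career, as
# (school, current employer) records.
recent_alumni = [
    ("School A", "Deloitte"),
    ("School A", "Local Firm"),
    ("School A", "PwC"),
    ("School B", "KPMG"),
    ("School B", "Regional CPA Office"),
]

# Step 3: for each school, the share of its alumni working at the top companies.
totals = defaultdict(int)
at_top = defaultdict(int)
for school, employer in recent_alumni:
    totals[school] += 1
    if employer in top_accounting_companies:
        at_top[school] += 1

for school in sorted(totals):
    pct = 100 * at_top[school] / totals[school]
    print(f"{school}: {pct:.0f}% of alumni at top companies")
# School A: 67% of alumni at top companies
# School B: 50% of alumni at top companies
```

Of course, this toy version glosses over exactly the kinds of details the questions below ask about.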

After reading through the limited explanation above, I was left with several questions. For example:

  • ​How many “top companies” were identified for each career field? I think this information would be helpful to know. For example, we can see that at least Deloitte, KPMG, EY, and PwC are identified as “top companies” for accounting professionals. However, we don’t know if and how many other companies were considered. Are the university rankings for each career only based on employment at a few companies? And similarly, are specific companies weighted more heavily in the rankings determination? 
  • How does LinkedIn account for the size of companies? It appears as if larger companies have an immediate advantage if they are able to employ more professionals.
  • What proportion of alumni in the sample from each school are on LinkedIn? Do the rankings control for schools that do not have an equal proportion of alumni on LinkedIn (i.e., how does LinkedIn compare a school where 60% of alumni are LinkedIn members with a school where only 20% of alumni are represented)?
  • Does LinkedIn account for location? If a company’s headquarters are in one or two major cities, what about professionals who do not live in those areas?
  • What if a college/university isn’t represented on LinkedIn?
  • Does the ranking methodology restrict the rankings to business-related fields? 
These and other questions are important to ask when evaluating a new ranking methodology, in order to really understand the value of the data as well as its potential limitations. LinkedIn’s rankings could be helpful for select students who wish to work at the big companies identified in the specific career fields analyzed. But for the majority of students, the rankings do not apply. Visit NACAC's website to learn more about rankings.
September 04
​Back to School Post: 5 Things to Know About Rankings

As high school students go back to school this week across the nation, juniors in particular will soon be applying to colleges. For students and parents searching for potential colleges and universities, as well as school counselors advising students throughout their search, NACAC has put together the following reminders to help you read between the lines when using college rankings.

#1: All Rankings Are NOT Created Equal.

Each private ranking publication relies on a formula, or methodology, to determine which colleges and universities it considers the "best." These formulas can be very different for each publication, resulting in very different rankings.

The inconsistencies in college rankings reveal their significant flaws, but they also demonstrate the real purpose of the rankings: some lists work well for some individuals, but not all. Take a look at the methodology of each ranking publication and pay special attention to the weight associated with each criterion. Rankings typically include factors that are easy to measure, widely available, and standardized. Do you agree that standardized test scores are a good measure of quality? Do the rankings provide information on opportunities to engage with faculty and participate in internships? Look for the factors that are important to you. Visit NACAC’s rankings website to see a list of links to the methodology for the most popular sources of rankings.

#2: One Size Does Not Fit All.

Just because a magazine says that a particular college is number one doesn't mean that it would be a good fit for you. “Institutional fit” describes the degree to which a student’s unique “profile” matches up with that of particular colleges or universities. This search for an appropriate “fit,” then, is directed by your own interests, abilities, and values. Rankings do not offer this answer. What these lists can do is provide a good starting point for researching different colleges and universities, and even help you discover schools you have not heard of before. But if you want to make the college selection process one that is driven by your priorities (and thereby substantially more meaningful), concentrate on your own academic interests, abilities, and personal values. Resist the temptation to seek only the “premier” brands in higher education. Someone else’s top five choices will likely be very different from your own.

#3: What’s the Difference Between #40 and #48? How to Sift Through the “Noise.”

Researchers examining U.S. News and World Report rankings found that for universities ranked beyond the top 40, differences of +/- 4 (or even greater) should be considered "noise," and that for universities ranked in the top 40, differences of +/- 2 should similarly be considered "noise." In addition, it would be extremely difficult, if not impossible, for a university in the mid-30s to move into the top 20 without a substantive change in its academic reputation score, the category U.S. News weights the most in its rankings (a combined weight of 22.5 percent). This score is often based on age-old perceptions that change very little over time. Therefore, it’s important not to get too caught up in the “numbers” game when conducting your search. Once a rankings publication selects its methodology, meaningful changes are unlikely to occur over a short period of time, or even at all.

#4: Don’t Rely on One Source.

When using college rankings, one of the major dangers is taking one source of rankings at “face value,” or in other words, believing that one source is completely objective and the be-all, end-all determination of a college’s quality. Consider multiple sources. Talk to alumni. Visit campus if you can. Discuss options with your school counselor. In addition, here are a couple of trusted sources for finding information and searching college options:

 

  • The U.S. Department of Education's College Navigator is a free consumer information tool designed to help students, parents, high school counselors, and others get information about more than 7,000 colleges from the U.S. Department of Education's database.
  • The College Scorecard allows you to search colleges by selecting factors that are important in your college search, such as programs or majors offered, location, and enrollment size. You can also use the College Scorecard to find out more about a college’s affordability and value so you can make more informed decisions about which college to attend.

 

#5: BEWARE of Lead Generator Sites.

There are some websites out there that look like other rankings websites and may claim to provide a free and trusted source of information about colleges, but in reality they exist only to get you to enter your personal contact information, which the company can then sell to others for a fee. These websites, known as "lead generators," generally direct students only to schools and programs that pay them and are in no way objective or a good source of information. To learn how to identify a lead generator website, check out NACAC’s online resource for more information.

Visit NACAC’s resource on College Rankings or resources for students and parents​ for more information.

July 30
College Rankings by Money Magazine

Earlier this week, Money magazine revealed new college rankings, including a complete list of 665 colleges. The rankings focus on several factors, all weighted differently, within three overarching areas:

 

  • Quality of education (weighted 33.3%) (i.e., graduation rates with a value-added measure, peer quality based on admitted students’ standardized test scores, quality of professors based on ratemyprofessors.com, and yield);
  • Affordability (weighted 33.3%) (i.e. net price of a degree, student and parent borrowing, loan default rates); and
  • Outcomes (weighted 33.3%) (i.e. early and mid-career earnings based on payscale.com). 

 

Data were converted into a single score on a five-point scale, and colleges were then ranked accordingly. For a complete look at how Money weighted each factor to rank colleges, view its methodology webpage.
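As a rough illustration of that kind of weighting, here is a minimal, hypothetical sketch of an equally weighted, three-area composite converted to a single five-point score. The school names and area scores are invented, and Money’s real formula combines many more sub-factors within each area; this only shows the general arithmetic.

```python
# Hypothetical sketch of an equally weighted, three-area composite score.
# The area scores below are made up; each is assumed to already sit on a
# common five-point scale, so the weighted sum is itself a score out of five.

WEIGHTS = {"quality": 1 / 3, "affordability": 1 / 3, "outcomes": 1 / 3}

schools = {
    "College X": {"quality": 4.1, "affordability": 3.2, "outcomes": 4.5},
    "College Y": {"quality": 3.8, "affordability": 4.6, "outcomes": 3.9},
}

# Weighted sum of the three area scores for each school.
composite = {
    name: sum(WEIGHTS[area] * score for area, score in areas.items())
    for name, areas in schools.items()
}

# Rank schools by composite score, highest first.
for name, score in sorted(composite.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {score:.2f} / 5")
# College Y: 4.10 / 5
# College X: 3.93 / 5
```

Even a toy version like this shows how sensitive the final order is to how each underlying factor is scaled and weighted, which is exactly why the methodology page is worth a close read.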

As always with rankings, there are several caveats and limitations. What do you think of Money’s ranking methodology?
