December 18
Not a Question of Win Some, Lose Some – But Rather Should You Play At All (Part 2 of 2)

Ah, and now back to those teachable moments I promised in my post earlier this week:

One is for students who factor the opportunity to play an intercollegiate sport into their evaluation of a college or university. In such cases, a final decision to enroll should be made independent of whether or not the student will ever take an at-bat, shoot a jumper, take a snap or spring from a starting block. While great disappointment might ensue from a lost chance for athletic glory, as in the case of UAB, or if, heaven forbid, you do not make the team, the true reason and purpose for getting a college education will not be derailed by unforeseen circumstances.

The second is to take the spurious notion of “happiest students” and transform it into something more personal and productive by tailoring it to your particular needs. Keep in mind that Princeton Review’s ranking of colleges with the “happiest students” only lists them one through twenty. If none of the schools on your list are among them, might you wonder whether that means their students are unhappy, or where they might rank on the happiness continuum among the two thousand or so other colleges in the country?

For starters, dismiss the highly subjective and hopelessly vague use of “happiness” as a rating gradient. That is for Miss America contestants to define. Seriously though, what truly matters is whether your intuition or sensory apparatus picks up on any evidence of either a positive campus atmosphere or affirmative student interaction during a tour or extended visit. It is a pleasure to illustrate several approaches that may be taken to this investigatory process.

One is most easily implemented where you know an already enrolled student and are able to give advance notice that you will be visiting campus. If arrangements can be made to have a brief talk, it is possible to gain very useful insights through a few basic questions. For example, with former students I would ask some or all of the following:

  • “How are things going for you?” or, “How has your experience been so far?”
  • “Please refresh my memory - where else did you apply for admission?”
  • “If you feel comfortable telling me, at which of those schools were you admitted?”
  • “What factor(s) was most influential in your decision to enroll here?”
  • “Did you have any second thoughts about your decision after you had been here for a while?”
  • “If you were able to make the choice all over again, would it be the same?” and,
  • “Do you have any advice or insight for someone thinking about attending here?”

Of course, the questions can be adjusted according to the situation and timeframe. And even though it can be very different if you venture to approach a stranger, once a wholly understandable hesitation is overcome, the exchange can turn out to be quite comfortable and satisfying. After all, you are making mental notes on how people react to a campus visitor who is reaching out to them. In my many experiences, pleasant exchanges have actually been the norm rather than the exception.

To illustrate, during a visit to Clemson, I noticed students belonging to an organization for business majors holding a bake sale just outside the doors to the union. I smiled, introduced myself, and explained that I was touring the campus in my role as a guidance counselor.  This prelude to your questions is a must in order to avoid (in the popular vernacular) “creeping someone out”. At any rate, I asked if they would answer a few questions in exchange for my buying some cookies and donuts. Their eagerness to make a sale was far exceeded by their eagerness to talk about their school. I still recall how, walking away nearly twenty minutes later, I thought, “What a happy bunch of kids.”

Similarly, during a quick pass through the University of Florida, I stopped for an early lunch in one of the campus cafeterias. It was 11 a.m. on a Sunday morning, so I was not surprised to see most of the tables empty. However, at one there were about ten students talking animatedly and laughing. After my requisite introduction was out of the way, they explained that they were working on a calculus problem set. They added that they preferred the cafeteria to the library because when someone came up with an answer, they could enjoy the moment without disturbing anyone. Even though I did not leave with cookies or donuts, I had some unquestionably good impressions of students at the flagship institution.

Lastly, when I visited Fordham, I joined students who were waiting in line for the cafeteria to open. They, too, were happy to share their feelings about attending school in one of the most intensely urban and multi-cultural areas of New York. But of equal importance were the results of a slightly divergent approach I had taken to get there. More specifically, I went to several different places on campus and at each, asked for directions to the cafeteria. Everyone I approached broke from their purposeful stride (this was, after all, New York) and explained patiently how I could best reach my destination. In fact, the last student insisted that I follow him, as the way involved a number of turns.

It is unlikely that anyone would regard the foregoing interactions as a “scientific” way of finding out if a campus has happy students. However, the outcomes, in my humble opinion, are at least as valid in seeing certain aspects of a given campus environment as the students themselves see them. Obviously, such episodes are more difficult to have within the context of a structured campus tour. Though led by an enthusiastic guide very skilled at walking backwards while reciting facts and figures, the sentient experience is, unfortunately, absent. Nevertheless, if given “free time” to visit the bookstore or the opportunity to enjoy lunch on your own, keep the questions outlined earlier in mind and “engage” the campus on your own terms. In other words, “Make it your number one.” You will find yourself all the happier for having done so.

Reference: Alter, M., & Reback, R. (2014). True for your school? How changing reputations alter demand for selective U.S. colleges. Educational Evaluation and Policy Analysis, 36, 346-370.

Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.
December 16
Not a Question of Win Some, Lose Some – But Rather Should You Play At All (Part 1)

Because the focus of this blog is customarily on some aspect of college rankings, some may at first see a discussion of a Division I football program as an unrelated topic. However, at the risk of it being regarded as such, what follows will show that the connection is neither misplaced nor a stretch.

Two weeks ago, the University of Alabama at Birmingham (UAB) revealed that it was closing down its football program. The stark headline in the December 10, 2014 edition of The Washington Post read, “An Alabama University Drops Football - Hard numbers challenge the national hysteria over sport.”

In announcing that the decision was made after a campus-wide study conducted by a consulting firm over the past year, University President Ray Watts explained,

“The fiscal realities we face -- both from an operating and a capital investment standpoint -- are starker than ever and demand that we take decisive action for the greater good of the athletic department and UAB… As we look at the evolving landscape of NCAA football, we see expenses only continuing to increase. When considering a model that best protects the financial future and prominence of the athletic department, football is simply not sustainable” (news services, December 3, 2014).

It would not be difficult to imagine how widespread the pain associated with such action is across the UAB campus. It is probably most acute for team members, some of whom have issued public protests stating that they were drawn to the school for the chance to play football, and how they must now find a way to finance their education. Given the “fiscal realities” to which President Watts alluded - UAB projected losses of nearly $49 million to subsidize football for the next few years alone - it is highly unlikely that the university would honor athletic scholarships for a program which no longer exists.

However, this may only be the tip of the iceberg.  The preceding post to this blog centered on the vagaries and patent unreliability of rankings due to inconsistent, inaccurate or truly unquantifiable evaluative criteria. This in turn makes the potential impact of what has happened at UAB, at least with regard to its place in national rankings, even more problematic.

To expand on this, we refer once again to previously cited research done by Molly Alter and Randall Reback in, “True for your school? How changing reputations alter demand for selective U.S. colleges” (2014). Their findings indicate that poor academic and quality-of-life reputations of a college, as purportedly shown in “Happiest Students” and “Best Quality of Life” rankings by The Princeton Review, negatively affect not only the number of applications received by the institution but also the academic competitiveness of its incoming class.

That the previous UAB administration acknowledged this linkage was reflected in a 2009 feature article on how Alabama universities fared in that year’s Princeton Review rankings. Stan Diel reported,

 “The University of Alabama at Birmingham is one of the nation's most racially inclusive colleges, and has some of the happiest students anywhere, according to the annual Princeton Review survey of students nationwide.

The survey, perhaps most well-known for picking the nation's biggest party school each year, ranked UAB No. 3 nationally for interaction between the races and social classes. It ranked 11th nationally for "happiest students."

UAB President Carol Garrison said the survey's results are widely used by parents and prospective students, and the inclusiveness ranking will help the school recruit students who might have dated perceptions about Birmingham.

"It says this is an institution where any student can come and feel comfortable," she said.

The high ranking in the happiness category likely was, in part, a result of efforts to improve the campus, including the recent addition of a campus green and a recreation center, she said.

"We're happy that they're happy," Garrison said of the students.”

But, as the saying goes, “that was then and this is now”. Translation? One can only hope that the student body at UAB continues to be happy for all the right reasons and that the absence of football will not result in a venting on the “quality of student life” or “student happiness” questions of a Princeton Review survey they may be asked to complete. It is further hoped that then-President Garrison greatly overestimated the weight prospective students and their families give to any such ranking constructs. In a perfect world, these two wishes would hold true. Unfortunately, this is not a perfect world.

Before bringing closure to this discussion, there are two teachable moments to be offered in Part 2 of this post. Stay tuned!
Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.
December 01
The Games Continue, With No End in Sight

As the holiday season gets into full swing I hope that most, if not all, seniors are preparing to bring closure to their college application efforts and that a good number of them have already received positive results from timely action on rolling admissions programs. I further hope that juniors, and especially the ones with whom I am currently working, are beginning to formulate preliminary lists of colleges in which they may have an interest.

This latter point serves as an introduction to our topic for this blog post – two questions which, due to their contiguous nature, are well-suited to a blended response. They are,

“When a college promotes that it is ‘nationally ranked,’ how should students and their families interpret that?” and,

“What is your baseline advice for students and families about rankings?”

An initial response that may appear too fast and easy, but which will be developed more fully throughout this text, would encourage students and families to view a “national ranking” as nothing more than a publisher’s formulaic conclusion. It is based solely on a narrowly drawn methodology that cannot possibly take into account infinite variances in the needs of the entire college-bound population. 

Moreover, there is significant potential for confusion in the fact that there seems to be no limit to the number of publishing enterprises claiming to be an “authority” on higher education – a circumstance The Princeton Review takes to the extreme, ranking colleges in no fewer than 62 categories, including financial aid and campus food.

This state of affairs was the subject of an article by Kevin Carey of The New York Times who, in “Building a Better College Ranking System. Wait, Babson Beats Harvard?” (July 28, 2014), wrote,

“For a long time, U.S. News & World Report had a monopoly on the college rankings game. Every August, the magazine would announce that, once again, Harvard was America’s best college, or Princeton, or, to shake things up, a tie between Harvard and Princeton. But in recent years, there has been a profusion of rankings competitors, each with a different perspective on what “best colleges” really means.”

In spite of this, U.S. News and World Report (USNWR) appears to be holding on as the preferred authority. Support for this can be found in what was appended to an e-mail sent out on the National Association for College Admission Counseling (NACAC) listserv last week. Appearing prominently below the sender’s name and contact information was the U.S. News and World Report “Best Colleges” logo and the statement,

“USC Aiken is AGAIN ranked #1 Public College in the South by U.S. News and World Report. To find out more, visit”

This may be seen as another reminder that, as much as many counseling and admissions professionals wish college rankings would fade away, higher education’s desire to be connected to and included among them seems to be gaining strength. 

But, before proceeding, my oft-repeated disclaimer remains - that it is not the position of this writer to dispute the right and good business sense of any enterprise calling widespread attention to favorable critical review. However, as also noted in previous postings, we are not talking about espresso machines or lawnmowers in this blog. We are talking about the nature, purpose and efficacy of life-shaping, life-enriching and life-determining entities that are our nation’s colleges and universities. Therefore the orientation, lexicon of analysis and import must be grounded in humanistic considerations rather than data points and ordinal rankings. 

Imagine then, if you will, the possible implications of colleges incorporating their respective ranking throughout their marketing outreach. For those just beginning their college search, seeing both the USNWR logo and promotion of the school’s ranking (as referenced above), might well give rise to several assumptions, none of which would be wholly correct. To list a few, the student might:

  • Completely overlook or misunderstand the sub-category designation, and assume the school is among the best, even beyond the region/category to which it is assigned;
  • Conclude that the high rank verifies program excellence throughout the institution’s curriculum;
  • Interpret the high rank as indicative of the quality of life, the level of instruction, the campus setting and resources, the financial aid program and post-graduate placement; and/or,
  • Assume the college is more selective than other similar institutions and thereby more desirable.

Compounding such well-intentioned but no less mistaken thought, are factors that weigh heavily against the few certainties in college rankings as a whole, but which nonetheless are used to cement their accuracy. One prime example is the suggested “selectivity quotient” of a particular institution and the factors that can impact its respective ranking and image.

Institutional selectivity accounts for 12.5 percent of the ranking methodology used by U.S. News and World Report.  It is based on the acceptance rate, or the ratio of students admitted to the total number of applicants. The vulnerability, if you will, of this ratio to any number of influences was once the subject of a conference lunch discussion among several highly experienced colleagues of mine. Their observations are summarized as follows:

  • “Has anyone else heard that (highly selective, Midwestern university) has significantly increased the size of its applicant pool by buying more names from College Board and ACT in order to generate more applications, but with no plans to increase the number of acceptances? As I understand, it is part of a strategy to make themselves appear even more selective and thereby, strengthen their ranking.” Which prompted someone to add,
  • “We (and others) achieved somewhat of the same by joining the Common Application. Streamlining the process made it much easier for students to send out more applications and we saw a sizeable jump in our numbers right away. We too, decided not to increase the number of accepted students – at least for the time being.” This prompted a third party to reflect,
  • “Things sure have changed since Hillary Clinton’s joining her husband Bill in the White House was said to have produced twenty percent more applications for Wellesley. It got more selective in one cycle. You still get that same benefit if your football team wins a national championship or your basketball team reaches the Final Four, but how many schools can do that on a regular basis?”
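The arithmetic behind my colleagues’ anecdotes is easy to make concrete. Acceptance rate is simply admitted divided by applicants, so growing the applicant pool while holding acceptances flat mechanically lowers the rate. A minimal sketch in Python, using purely hypothetical numbers:

```python
def acceptance_rate(admitted: int, applicants: int) -> float:
    """Selectivity as rankings use it: the share of applicants who are admitted."""
    return admitted / applicants

# Hypothetical school: 5,000 acceptances, held constant across cycles.
before = acceptance_rate(5_000, 20_000)  # original applicant pool
after = acceptance_rate(5_000, 24_000)   # pool grown 20% via bought names or Common App

print(f"before: {before:.1%}  after: {after:.1%}")  # before: 25.0%  after: 20.8%
```

The entering class itself is unchanged; only the denominator moved. That is precisely why a selectivity figure, weighted at 12.5 percent of the methodology, is so easy to nudge.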

A corresponding view of the manipulation and maneuvering related to a critical component of ranking methodology, Princeton Review’s in particular, was presented by Molly Alter and Randall Reback in, “True for your school? How changing reputations alter demand for selective U.S. colleges.” Educational Evaluation and Policy Analysis, 36, 346-370 (2014). As they reported,

“Our findings suggest that changes in academic and quality-of-life reputations affect the number of applications received by a college and the academic competitiveness and geographic diversity of the ensuing incoming freshman class. Colleges receive fewer applications when peer universities earn high academic ratings. However, unfavorable quality-of-life ratings for peers are followed by decreases in the college’s own application pool and the academic competitiveness of its incoming class. This suggests that potential applicants often begin their search process by shopping for groups of colleges where non-pecuniary benefits may be relatively high.”

In the final analysis, irrespective of the questionable legitimacy of academic ratings, the work of Ms. Alter and Mr. Reback leads them to conclude that, in some instances, such ratings can be trumped by something as nebulous as unfavorable “quality of life” marks. More unfortunately, the net effect of this is inaccurate and unreliable information for students and families, which also continues to be troublesome and problematic for many guidance counselors as well.

In summary, the foregoing was intended to illustrate some of the more unsettling aspects of college rankings. They start with, but are certainly not limited to, misunderstandings stemming from being highlighted in college and university image initiatives, and extend to their inherent volatility and unreliability in the area of institutional selectivity. 

With this being the case, the most prudent baseline advice to students and their families would be to proceed with caution and understand college rankings of any sort and from any source for what they are – one tool among many to incorporate into their college search and evaluation efforts. And, just as with a hammer, being aware of its limitations along with proper, judicious and appropriate use minimizes the risk of mishaps.

November 19
Looking Beyond Methodologies: What is the Role of Firms in Driving Global Rankings?

A critical moment in the history of global rankings of universities occurred in 2003 when Shanghai Jiao Tong University in China developed the first global rankings, the Academic Ranking of World Universities (ARWU). Since then, the landscape of global rankings has continued to grow. In the paper, “World university rankings: On the new arts of governing (quality),” authors Susan Robertson (University of Bristol) and Kris Olds (University of Wisconsin-Madison) discuss different explanations regarding the development and growing influence of global rankings. These explanations range from viewing rankings as accountability measures, to viewing rankings as part of international status competition, to viewing them as providers of a new service industry.

This post will focus more specifically on a central tenet raised by Robertson and Olds: that while there is much discussion around different global ranking methodologies, “the role of firms, such as Elsevier and Thomson Reuters… in fueling the global rankings phenomenon, has received remarkably little attention.” The authors argue that current explanations regarding the influence of global rankings are missing a critical piece, “one that places many players driving the process on centre stage, with their interests in full view.” Let’s take a look at this issue more closely.

As discussed in previous posts in NACAC’s Counselor’s Corner, global ranking methodologies tend to focus predominantly on research publications, citations, and reputational scores as indicators of “quality,” a significant shift compared to rankings in the U.S., which rely more on publicly available data on indicators such as completion rates and class sizes. Both companies that Robertson and Olds highlight, Thomson Reuters and Elsevier, house databases (such as Web of Science, Scopus, and more recently, the Global Institutional Profiles Project) that provide a huge portion of the data used in many global ranking formulas (see the table in NACAC’s previous blog post that shows where such data is used within the QS World, Times Higher Education World, and U.S. News & World Report global university rankings). Publishers pay for this data to create the rankings and, in turn, to sell newspapers. However, Robertson and Olds also point out that the same data can “feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities” for the kinds of knowledge they think they need in order to go up in the rankings. And we have already seen instances in which some countries place enormous weight on the rankings, including those that give more resources to “top rated” universities.

Another point the authors raise regarding the influence of firms in the spread of global rankings is,

“One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly not stated) logic for such a frequency.” 

Indeed, it is very unusual for the top rankings to change significantly in the short term, let alone year to year without a shift in methodology. So beyond an “informative” ranking, what is the purpose behind firms collecting this data every year from institutions? Furthermore, what are the profits generated each year for firms and publishers? And beyond publishers of the rankings and firms selling the data, the authors also point out that we must consider “the role that universities themselves play in enabling rankings to not only continue, but expand in ambition and depth.” 

Undoubtedly, placing the industries and people that fuel the rankings, as well as their interests and profits, in full view would provide a more complete picture of the global rankings industry, as Robertson and Olds have argued. Indeed, the comparisons made by global rankings are fueled by many players and contexts that are important to consider along with other explanations.

November 19
As Crazy As We Will Allow it to Get

“What is the craziest ranking you have ever seen?”

This question, recently submitted to the Counselor’s Corner, provoked several interesting lines of thought on formulating an answer. The first brought the following quote to mind:

“People are desperate to measure something, so they seize on the wrong things,” said Mark Edmundson, a professor of English at the University of Virginia. “I’m not against people making a living or prospering. But if the objective of an education is to ‘know yourself,’ it’s going to be hard to measure that.”

Mr. Edmundson’s observation was made in connection with yet another “return on investment” (ROI) study. But, more importantly for this blog post, he was calling attention to how, with college rankings in particular, the principal goal of finding the best possible match between student and institution can be obscured by erroneous evaluative criteria and misplaced emphases.

So it stands to reason that Mr. Edmundson’s initial point fits in well with a discussion of the “craziness” that can surface in the realm of college rankings. And, within this context, it is no stretch to freely associate “craziness” with several synonyms offered by Merriam-Webster; i.e., “impractical”, “absurd” and “nonsensical”, to list but a few. For good measure, the term “irresponsible” will be included as a consequence, unintended or not, of this type of ranking.

Before proceeding however, it should be carefully noted that the purpose of this present blog post is not to highlight or in any way endorse the existence of such rankings. To the contrary, it is hoped that raising awareness that they are out there will help minimize the potential for their misuse or misinterpretation. 

That being said, the discussion will begin with a partial list of some of the craziest college rankings I have seen. They are:

“The Best Party Schools”

“Schools with the Most Beer Drinkers”

“Schools with the Most Hard Liquor Drinkers,” and

“Schools with the Most Potheads”

The rankings were put on the website of an outfit named Atlas, which basically lifted the results of surveys conducted by Princeton Review. Whether this was a veiled attempt at gaining some legitimacy for the feature is unclear but, for good measure, Atlas also included the category rankings for 2014, 2013 and 2012 respectively, adding that,

“The 5 top party schools in the US from 2014 stayed the same, however their orders were rearranged, with the exception of West Virginia University, which stayed in fourth place. Syracuse University went from fifth place right to the top, showing the most change in the top 5. There are also 2 new schools added this year: 9th place, Bucknell University (Lewisburg, PA); and 20th place, University of Delaware (Newark, DE). These two schools replaced: 15th place, University of Texas at Austin (Austin, TX); and 17th place, University of Maryland (College Park, MD). Bucknell University didn’t waste any party time as they jumped into the top 10, especially since this is the first time they have shown up in the previous 3 years!”

At first impression I thought, “How nice of them (Atlas/Princeton Review) to keep us current on which schools are trending in this critical area of campus life … I will bet that Texas and Maryland are ratcheting up their toga party schedules after this.”

If the sarcasm appears a bit extreme, one need only consider the irony of Atlas/Princeton Review’s mention of West Virginia retaining its fourth-place rank among party schools as the flashpoint for my view. Indeed, the most recent suspected alcohol-related death of a student occurred at a fraternity house at West Virginia University. Further, only those who have been living in a hermetically sealed chamber for the past ten years or so would be surprised by the findings of the National Institute on Alcohol Abuse and Alcoholism. To quote from the Institute’s website:

“Virtually all college students experience the effects of college drinking – whether they drink or not... The problem with college drinking is not necessarily the drinking itself, but the negative consequences that result from excessive drinking.

College drinking problems

College drinking is extremely widespread:

  • About four out of five college students drink alcohol.
  • About half of college students who drink also consume alcohol through binge drinking.

Each year, drinking affects college students, as well as college communities, and families. The consequences of drinking include:

  • Death: 1,825 college students between the ages of 18 and 24 die each year from alcohol-related unintentional injuries.
  • Assault: More than 690,000 students between the ages of 18 and 24 are assaulted by another student who has been drinking.
  • Sexual Abuse: More than 97,000 students between the ages of 18 and 24 are victims of alcohol-related sexual assault or date rape.
  • Injury: 599,000 students between the ages of 18 and 24 receive unintentional injuries while under the influence of alcohol.
  • Academic Problems: About 25 percent of college students report academic consequences of their drinking including missing class, falling behind, doing poorly on exams or papers, and receiving lower grades overall.
  • Health Problems/Suicide Attempts: More than 150,000 students develop an alcohol-related health problem and between 1.2 and 1.5 percent of students indicate that they tried to commit suicide within the past year due to drinking or drug use.”

Source: NIAAA, November, 2014

Although I did not originally intend to introduce the foregoing statistics into this discussion, citing the work of a particular publishing enterprise as “crazy” or “irresponsible” should, in fairness, have a factual basis. To be sure, the data reported are not only factual but, in my humble opinion, frighteningly so. That is why anything that appears to sanction or condone such destructive behavior, and the places where it appears to flourish (and we never even got to the pothead issue), leaves itself open to judgment.

All right now – are you ready for the “flipside”? To be more specific, because this blog has always sought to provide teachable moments for students and families, it may be possible, the foregoing criticism notwithstanding, that there is a positive use for the subject rankings featured herein.

Imagine, if you will, a best case scenario where a student has one or more of the “best party schools” on her list of preferred colleges. Her parents, aware of this and of the school’s notoriety (according to Atlas/Princeton Review) incorporate this knowledge into a list of questions they will take on their campus visits. At a question and answer session where various campus officials sit as a panel, the parents present the following sample queries:

What is the institutional policy regarding the possession and consumption of alcoholic beverages by students living in the residence halls?

Who oversees the administration and implementation of this policy?

Does this policy extend to off-campus but university-affiliated student residences, most particularly, Greek houses? 

What is the protocol for dealing with a student who has been involved in a disciplinary, alcohol-related incident?

How are determinations made on whether a student has an alcohol-management problem?

What support services are available to students identified as having an alcohol-management problem? How are such services implemented, by whom and for how long?

This list is neither exhaustive nor fixed; parents can shape it according to their individual concerns. It would also not be surprising if members of the panel felt a bit put on the spot. But when it comes to the health, the welfare and, yes, the safety of the young person parents are handing over to be educated through a multi-dimensional approach to life-coping skills, they do not want their "return on investment" to be measured in empty alcohol bottles.

Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.

November 13
Citations, Citations, and More Citations: A Closer Look at Global Rankings' Methodologies

As mentioned in our previous post, global ranking methodologies tend to focus primarily on research publications, citations, reputational scores, and international influence as indicators of "quality," a significant difference from domestic ranking methodologies (which tend to focus more on indicators such as graduation rates, cost, and earnings, to name a few). To take a closer look at this subject, we put the methodologies of three prominent global rankings, QS World, Times Higher Education World, and U.S. News & World Report Best Global Universities, side by side so you can see how each publication factors in publications, citations, reputation surveys, and more.


November 10
U.S. News and World Report Releases Rankings of “Best Global Universities”

Last month, U.S. News and World Report (USNWR) released its inaugural “Best Global Universities” rankings. In previously announcing the company’s plans, USNWR chief data strategist Bob Morse cited as a motivation that “an increasing number of students plan to enroll in universities outside of their own country.” A 2013 report by the Institute of International Education, New Frontiers: U.S. Students Pursuing Degrees Abroad, provides pertinent data and analysis about this trend. Specifically, the report found that from 2010/2011 to 2011/2012, the number of U.S. students pursuing full degrees abroad increased from 44,403 to 46,571 (a 5% increase). The top host countries for U.S. students pursuing full degrees abroad were the United Kingdom (36%), Canada (20%), and France (10%).

The new global rankings differ from USNWR’s domestic lists in being focused entirely on research and reputational measures. As the Washington Post reported, “Factors such as undergraduate admissions selectivity and graduation rates…are absent from the global formula.” Instead, research reputations and citations are featured heavily (which tends to be the case with similar publications that attempt to rank universities globally).

For example, two of the ten indicators for the USNWR global rankings are research reputation scores: a global research reputation indicator (12.5%) and a regional research reputation indicator (12.5%), both of which are compiled using Thomson Reuters’ Academic Reputation Survey. Another 12.5% of a university’s ranking comes from the “number of highly cited papers.” Click here to see the complete list of indicators used and the full methodology behind the new rankings.

Last week, USNWR also released a list of “Best Arab Region Universities,” which uses a different methodology and different sources of data; those rankings focus even more exclusively on citations and publications.

See here for more NACAC resources on rankings. And in 2015, look to the NACAC Journal of College Admission for analysis and insight in the forthcoming article, Rankings Go Global.
November 06
Changing Students' and Families' Mindset about Rankings

A recent question submitted to the Counselor’s Corner requests a brief discussion of “what many counselors would like to see change with the present rankings state-of-affairs”; a “wish list,” as some would put it. For this we will first look at a synthesis of the comments offered by more than six hundred guidance professionals who responded to the previously cited NACAC survey.

An overview based on the compiled results included:

  • That USNWR (and most if not all college rankings enterprises) develop methodologies for measuring the value-added considerations of graduation and retention;
  • That the metrics used provide reasonably accurate indices of student engagement and satisfaction with their undergraduate experience;
  • That a wholesale revision of rankings methodologies would encourage students and their families to evaluate and interpret input, process and output variables according to their own needs, thereby enabling them to generate their own ranking scale.

These three points presume that the rankings are here to stay and that, therefore, we must find a way to live with them. However, there is ample evidence that if the power to make rankings “go away forever” were among the items on the guidance counselor wish list, it would be no surprise to see it exercised.

To expand on this a bit, eliminating the need to undo or unravel the misunderstandings created by rankings would be a welcome change for guidance professionals. More specifically, it would enhance their efforts to help students and families focus first on the characteristics that define a quality program of study along with those which suggest a high quality of life at a given college or university. Shifting the emphasis toward how well post-secondary study will prepare a student to make a living and a life within a setting that nurtures and stimulates personal growth would transform college counseling sessions into something far more personal, meaningful and efficacious.

To summarize, a positive change in the college rankings state-of-affairs is, in the final analysis, probably not in the hands of the publishers but rather in the minds of students and their families. An essential mindset is understanding and accepting that data on teaching quality, character-developing student involvement and effective avenues to job and career readiness are not amenable to any current publication-ready format.

Further, it must be firmly realized that, one, rankings are subjective, limited-use tools that largely reflect what the editors, and not the student, feel is important and, two, that the most valuable resource in the college search and selection processes sits in the guidance counselor’s office. There students have a trained, knowledgeable advocate who keeps their best interests at heart. Most importantly, the counselor’s primary goal is to facilitate their personal empowerment through the acquisition of strong decision-making skills. Gaining mastery of this will enable students and their families to weather the storm and emerge into the sunlight.

Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.​

October 30
What Really Matters?

To continue our discussion, let’s now turn to a recent query asking “what rankings really tell us as opposed to what really matters for a successful well-being.” From this writer’s perspective, there are certain governing realities with regard to rankings, and particularly college rankings, that should be kept in mind.

Rankings appear to strongly appeal to our desire for “order” and, as such, seem to fulfill two demands of modern society.

  • They are seen as a method for determining value (the “J.D. Power Syndrome,” if you will) by a source that sorts out and interprets voluminous amounts of information on something of popular interest. And,
  • The general public has a strong need to have an authoritative source tell them what is “best”, i.e., “Whom” or “What” is “Number One”. 

In my view, this might be perfectly fine for ranking toaster ovens and Buicks, but the reality remains that colleges and universities are complex institutions with multiple purposes which, to a great extent, rely on human interactions and individual determination in order to function at their best; i.e., to fulfill their missions.

This latter point compels us to acknowledge the content of two recent university job postings that interested candidates were encouraged to consider. The first stated,

“Humanity. Justice. Integrity. You know, wild-eyed San Francisco values. Change the world from here," (University of San Francisco, 2014).

The second read,

“Expectations for all Dominican Employees: To support the University's mission of preparing students to pursue truth, to give compassionate service, and to participate in the creation of a more just and humane world,” (Dominican University, 2014).

Unfortunately, none of these ideals, all of which arguably “really matter for students and their well-being,” translates neatly, if at all, into the current lexicon of college rankings. Rather, students and their families must face the equally unfortunate reality that identifying a list of “Best Colleges” uniquely suited to them cannot be readily done through either ordinal ranking or the sum total of a set of data points.

Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.

October 24
A New School Year, But the Same Old Issues with Rankings

Hello everyone and welcome to NACAC’s Counselor’s Corner. Whether you are a returning or a new reader, it is hoped you will find the content helpful in understanding and, most importantly, properly using some of the multitude of college rankings publications and services.

However, before continuing, it should be noted that the term “service,” as it relates to college rankings, is used with a very necessary reservation. Why? Merriam-Webster Online lists the following among the primary definitions of “service”:

(a)   “Contribution to the welfare of others.”

A point often made on this blog is that the connection between “rankings” and “service” is, in the eyes of many admissions and guidance professionals, tenuous at best. U.S. News and World Report (USNWR) initiated the drive to be recognized as the authority on higher educational quality with its first “Best Colleges” issue in 1983. But while USNWR may be given the benefit of the doubt that “service” was a large part of its original intent, few would agree that it is today.

A general explanation for this opinion can be found among the answers to several questions that have been submitted to the Counselor’s Corner. One in particular seeks reasons for a perceived “’love-hate’ relationship that colleges have with the rankings”. 

A past conversation with two long-time friends, both of whom have had lengthy careers as college admissions representatives, provides a good point of departure on this question. Reflecting on the impact of college rankings on their work, one observed (while the other agreed) that,

“If our [university’s] rank is ‘good’; i.e., higher/better than our main competitors, or if it has improved since the previous year, and/or if the administration is satisfied with our most recent rank, then the pressure is off, at least for the time being. That ranking can then be fully integrated into the institutional marketing plan. Under those circumstances, you could say that we ‘love’ the rankings.”  


“Even when things remain relatively stable or God forbid, if our ranking slips, the downturn is quickly seen as the first reason why our school is not attracting more applicants and especially those who are among the brightest and wealthiest. Presidents and trustees then issue demands that the admissions office do whatever is necessary to right the ship and get our rank back to where they think it should be. There’s the ‘hate’ with regard to the rankings.”

On a slightly more benign note, my other colleague remarked that an uncomfortable scenario can result when a student, solely on the basis of what is in a college rankings publication, approaches the representative at a college fair and inquires about a program that either is not necessarily the college’s strongest or, worse, is not offered at all. “If you tell the student the major they want is not offered, you risk them saying, ’Well, how can you have such a high rank if you don’t offer this major?’ And there simply is no effective way to explain that (for example) USNWR ranks institutions as a whole and not by the quality of individual programs.”

Further evidence of a less-than-favorable regard for rankings is contained within the findings of a survey conducted by NACAC’s U.S. News and World Report Advisory Committee (2012) in hopes of assessing viewpoints on college rankings. One survey question, responded to by nearly equal numbers of admissions and guidance professionals, asked,

“Are rankings a helpful resource for students and families interested in college information?” 

Only slightly more than ten percent of admissions officers and less than ten percent of the guidance counselors surveyed were in clear agreement that they are. 

A corresponding question on the survey was, “Do rankings create confusion for students and their families?” 

Nearly 50 percent of all respondents (60 percent of whom were guidance counselors) agreed that “Yes, they do create confusion.” In light of this, it is difficult to avoid concluding that the tabulated responses to both queries further underscore the low regard in which college rankings as a whole are held.

In spite of this, Princeton Review, The New York Times, Forbes, Business Insider, Money Magazine, USA Today and still others continue to publish their own rankings. With wide variations in the methodology used to compile them, it is hardly surprising that students and their families are overwhelmed by the mountain of divergent characteristics, data points and quantitative formulae. Unless something has been overlooked here, this hardly connotes the earlier-defined “service.”
Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012. 
