March 27
Studying the Effects of College Rankings

In the latest installment of their extensive research into the effects of college rankings on higher education, Michael Sauder, a University of Iowa associate professor of sociology, and Wendy Espeland, professor of sociology at Northwestern University, noted that “administrators cared deeply” about their respective institution’s ranking in publications like U.S. News and World Report. “Some become kind of obsessed with rankings … Some schools are more concerned about the numbers than the educational mission … For example, a school’s mission to enroll a diverse population of students might be overshadowed by a desire to admit only students with high test scores,” Sauder observed. This is exactly the kind of conundrum I described in my last blog post.

For anyone interested in Sauder and Espeland's work over the years on the subject of college rankings, take a look at:

"Rankings and Reactivity: How Public Measures Recreate Social Worlds" (2007)

"The Discipline of Rankings: Tight Coupling and Organizational Change" (2009)

"Rankings and Diversity" (2009)

It came as no surprise that Sauder and Espeland observed “gaming strategies” and “a level of cheating” in the data law schools report for purposes of improving their rank – practices which have also surfaced in the undergraduate rankings during the past several years. Sauder and Espeland plan to publish their findings in a book tentatively titled “Fear of Falling: How Media Rankings Changed Legal Education in America.” Their goal is to “make students and administrators more savvy about the rankings” in the realization that “it’s more important to choose a school that is a good fit instead of based on rankings.”

This book is sure to be on my reading list, as I hope it will be for the numerous college admission counseling professionals who share my concern about the outsized influence of rankings. I also hope the Administration is paying close attention to the work these scholars and others have conducted over the past three decades.​

March 19
Repost: What Do You Think About the Proposed Federal College Ratings System?

Originally posted on the NACAC Admitted Blog 3/10/14:

The Obama Administration’s proposal to construct a college ratings system has generated a great deal of discussion and more than a little concern in the higher education community. While NACAC has been supportive of the Administration’s efforts to promote consumer information and awareness to facilitate informed enrollment decisions (see College Navigator, Standardized Aid Award Letter, College Scorecard, to name a few), we have expressed concerns about institutionalizing a federal ratings system. We have dedicated some space on our College Rankings web page for this issue, and have submitted our official comments to the Administration. We would also encourage you to share your comments with us. We’ll post a summary of comments we receive on the Counselor’s Corner.​

March 10
The Rankings Conundrum

In January, Inside Higher Ed ran an article describing how the change in leadership at Syracuse University, a NACAC member institution, may change the way in which the institution relates to the U.S. News & World Report college rankings.

The article reveals a fundamental difficulty with ranking colleges and universities: they threaten to calcify existing, rigid hierarchical structures and discourage innovation and openness, heretofore the hallmarks of American higher education.  

As we correctly place a great deal of emphasis on greater access and success for under-represented populations, rankings seem to punish institutions for attempting to make good on that goal. The Syracuse example is telling.

"A decade ago, less than a fifth of Syracuse students were from minority groups and less than a fifth were eligible for Pell Grants -- a proxy for the number of low-income students. Now, about a third of students are minorities and about a quarter are Pell-eligible. Amid the effort, the percentage of applicants who got into Syracuse increased – which didn’t sit well with some students and faculty," and created downward pressure in the USNWR methodology.

The new administration at Syracuse has "asked university officials to pay attention to the rankings but cautioned them not to be driven by rankings."

Such directives can be quite a Catch-22 when metrics like acceptance rates and standardized admission test scores are among the factors on which a college is ranked. What choice does an admission office have if stakeholders, either internal or external, ask it to "pay attention" to rankings, and the levers at its disposal to affect the rankings serve only to make the institution less accessible?

This is a conundrum that plays itself out at scores of institutions every year. This is why, from our humble perspective, the NACAC Ad Hoc Committee on USNWR Rankings (on which I served) recommended that USNWR de-emphasize what we called "input" variables, like standardized test scores, in favor of variables that more accurately measure the value a college adds no matter which students it enrolls.

We think this constitutes good advice for anyone who is contemplating ways to make decisions about college quality, including (and especially) students.​

February 24
Rankings and Ratings: More Than Just Semantics?

As Rick Harrison notes at the beginning of each episode of Pawn Stars, “You never know what is going to come through that door.” After two years of writing this blog about the many and varied dimensions of college rankings, I can say much the same: you never know what form this controversial topic will take next.

A recent Inside Higher Ed article, “Feedback From the Field,” provided an overview of educators' reactions to President Obama’s plan to develop a federal rating system for colleges, the resulting initiative by the Education Department to solicit reactions to the President’s proposal, and a brief summary of responses submitted thus far by students, guidance counselors, professors, higher education associations and members of the general public.

The suggested “outcomes-based” component of the President’s proposed rating system, which focuses on graduates’ earnings, has caused some college leaders and faculty to express concern that doing so would distort the true goals and purposes of higher education. As one professor emphasized in her e-mail to the Education Department, “[this would treat education] as if it were a stock investment, making earnings after graduation a sign of the quality of one’s education.”

I hope graduates’ earnings do not become the controlling metric in any higher education rating system. In its commercial manifestation, this metric (as expressed in Forbes magazine's "return on investment" rankings) is even more problematic than the U.S. News rankings, which NACAC has critiqued over the years.

Many more voices need to be heard and much dialogue needs to occur to aid in developing a rating system of the type advocated by, for one, the Education Trust. Specifically, “a system that would, among other things, highlight the success gaps for low-income and minority students in higher education, and [thereby] help institutions better serve those students,” without allowing the emergence of “perverse incentives.” From this former school counselor's perspective, incorporating such measures would be an essential component of any attempt to classify colleges by the value they add to student experiences.

While the article didn't elaborate on the nature of the “perverse incentives,” it seems reasonable to assume that the reference is to college ranking metrics. At the same time, Jamie Studley, Deputy Under Secretary of Education, emphasizes that the goal is to develop a “nuanced and enriched idea of how you get value from education.” Ideally, then, the college rating system that emerges would eliminate most, if not all, of the misguided metrics used by rankings publications and, most importantly, the gamesmanship, manipulation and subterfuge to which they are vulnerable. NACAC's comments on the Administration's proposed college ratings system can be found here.

January 03
What's in a Name?

U.S. News and World Report (USNWR) attempts to evaluate colleges mostly on digits and dollars, like graduation rates, alumni gifts, GPA, test scores, and faculty resources. Because of all the attention on numerical data, it may come as a surprise that one of the most significant variables in the USNWR formula is also its most subjective and, some might say, unreliable. The reputational survey, a measure of what college representatives think about other institutions, represents 22.5 percent of a school's overall ranking, matched in weight only by graduation and retention rates.
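To make that weighting concrete, here is a minimal sketch (in Python) of how a weighted composite score of this kind is computed. The 22.5 percent figures for the peer assessment and for graduation and retention come from the discussion above; the remaining catch-all weight and all of the scores are hypothetical placeholders, not USNWR's actual data or methodology.

    # A minimal, illustrative sketch of a weighted composite score.
    # The two 22.5% weights are the figures cited above; everything else
    # here is a hypothetical placeholder, not USNWR's actual methodology.
    weights = {
        "peer_assessment": 0.225,
        "graduation_and_retention": 0.225,
        "all_other_factors": 0.55,  # hypothetical catch-all for the remaining categories
    }

    # Hypothetical 0-100 scores for a single school in each category.
    scores = {
        "peer_assessment": 62.0,
        "graduation_and_retention": 88.0,
        "all_other_factors": 75.0,
    }

    composite = sum(weights[k] * scores[k] for k in weights)
    print(f"Composite score: {composite:.1f}")  # 13.95 + 19.8 + 41.25 = 75.0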

On the reputational survey, college representatives are asked to rate colleges on a scale from one to five. That bears repeating. Survey participants are asked to label each college a 1, 2, 3, 4, or 5. If a college asked you to rate your high school experience along these lines, what would you say? Would a three suffice? What is the difference between a three and a four? 

The rankings are an attempt to measure the immeasurable, and the USNWR editorial staff themselves acknowledge as much (emphasis added):

The host of intangibles that make up the college experience can't be measured by a series of data points. But for families concerned with finding the best academic value for their money, the U.S. News Best Colleges rankings provide an excellent starting point for the search.

Students should take this warning from USNWR very seriously. The rankings, in fact, do provide valuable information on many postsecondary institutions across the country. There are over 4,000 institutions to choose from, so more information on particular colleges is always helpful.

But be wary of rankings that begin with adjectives like "best." What does that mean exactly? For USNWR it means a few things, but it largely means the most selective schools with the highest graduation/retention rates and the "best" reputations. Because of its heavy weight in the rankings, the reputational survey deserves a closer look.

NACAC surveyed college counselors and admission officers to find out what they thought of various factors within the USNWR ranking methodology. Here is what they thought of the reputational, or peer, survey:

[Chart: NACAC survey responses rating the peer assessment as an indicator of institutional quality]
Notice the meager number of respondents who think the assessments are good indicators. With over 60 percent calling the peer assessments either "poor" or simply "not an indicator," the assessments may not be such accurate indicators of quality after all.

Robert Morse, the rankings editor at USNWR, frequently takes a closer look at the data on his Morse Code Blog. Though many students may overlook this information, the blog is an important companion piece to the rankings.

In a recent entry, he articulates an important mismatch in the reputational survey results. According to Morse, responses from high school counselors and college representatives are sometimes very different. Several charts outline some jaw-dropping differences in the ways each of these groups view specific colleges. Institutions shift by as much as 104 places in the rankings according to whether they are viewed from the high school or college side.

Morse sums up his analysis thusly:

The National Universities that are more highly ranked by high school counselors are generally either smaller schools, universities that have recently expanded into research and granting doctoral degrees or schools whose curriculum is concentrated in science, technology, engineering and math, the so-called STEM fields. The universities that are more highly ranked by college officials than by high school counselors are all very large public universities, often the state's flagship school.

His analysis shows that the reputational data skews toward certain sectors, depending on who is answering the survey. It underscores the inherent problem with using reputations as benchmarks for quality; even on a scale from one to five, the answers are highly subjective.

In other words, opinions of school quality are just that. Opinions.

November 19
Choose Your Own Rankings

​Accompanying the warnings about college rankings on this blog have been strong recommendations that students turn first to their guidance counselors for help in developing a list of colleges that is subjective and personalized according to their unique needs. The unfortunate reality (and I accept responsibility for perhaps not giving due notice to this) is that not all students have either equal access to a guidance counselor or the full measure of support that one can provide.

Part of NACAC’s response to students in underserved communities is the “Students and Parents” link on its website which provides an excellent overview of the college search and admissions processes. In this post, the Counselor’s Corner will help you take advantage of these resources and use them as a springboard toward a self-designed and individualized list and ranking, in a manner of speaking, of colleges.

To begin, develop a preliminary list of schools. Several college search engines provide efficient, user-friendly tools for this task.

It should be noted at this point that you can use more than one search engine. Tip: each database is different, so run an identically framed search on all of them.


When looking at the data, it is crucial that you first determine which filters are most important to you, such as:

Majors offered
Enrollment size
Geographic location
College categories
College types

Even using a rating system as simplistic as a “plus” for certain preferred features will, in principle, mirror what the rankings editors do.
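For readers who like to see the idea spelled out, here is a minimal sketch of that "plus" system in Python. The college names, features, and preferences are made-up placeholders; the point is only that tallying pluses for the features you care about is, in miniature, the same weighting exercise the rankings editors perform.

    # A minimal sketch of the "plus" rating idea described above.
    # College names, features, and preferences are hypothetical examples.
    my_preferences = {"has_my_major", "small_enrollment", "in_my_region", "strong_aid"}

    colleges = {
        "College A": {"has_my_major", "small_enrollment", "strong_aid"},
        "College B": {"has_my_major", "in_my_region"},
        "College C": {"small_enrollment", "in_my_region", "strong_aid"},
    }

    # One "plus" for each preferred feature a college offers.
    pluses = {name: len(features & my_preferences) for name, features in colleges.items()}

    # Your personal "ranking": colleges sorted by how many pluses each earned.
    for name, score in sorted(pluses.items(), key=lambda item: item[1], reverse=True):
        print(f"{name}: {'+' * score} ({score} pluses)")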

Some quick thoughts on your filters:

Retention: The percentage of freshmen who return for their sophomore year can indicate the transitional and ongoing support that freshmen received throughout the year. Generally speaking, the smaller the school the higher the retention, so anything higher than 90% is a definite plus.

Graduation Rate: With the cost of attendance for one year of college being nearly the same as that of a mid-sized car, the percentage of students who graduate in four, five or six years merits a lot of scrutiny. Anything beyond four years not only costs you time and money, but also delays entry into the work force or the start of graduate school. Keep in mind, though, that the graduation rate for the institution as a whole may be very different from that of certain departments/programs of study. Tip: see if the rate for the major you are interested in is the higher of the two, as that would be a plus in your rating.

Financial Aid: Understanding that it is only a starting point, the “percentage of students who receive financial aid” is nonetheless perhaps the most misleading of all the data points to be researched. If a college guidebook reports cryptically that 60% of students receive aid, what is it actually telling you? First of all, it tells you that the remaining 40% of students can afford to pay the entire bill. It does not tell you what percentage of aid recipients qualified for federal or state aid, or how many received only an institutional grant or loans.

Perhaps the best way to research financial aid (short of an actual aid award letter) is the Net Price Calculator. All undergraduate institutions that award federal Title IV financial aid are required to offer a Net Price Calculator, which provides an estimated cost to attend the school by weighing tuition, fees, and housing charges against the financial aid for which you may qualify. It is available on each institution’s web site.
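For a rough sense of what such a calculator is doing behind the scenes, here is a minimal sketch of the underlying arithmetic. The dollar amounts are hypothetical; a real Net Price Calculator uses the institution's actual charges and your family's financial information.

    # A minimal sketch of the arithmetic behind a net price estimate.
    # All dollar figures are hypothetical placeholders.
    tuition_and_fees = 32_000
    housing_and_food = 12_000
    estimated_grants_and_scholarships = 18_000  # aid that does not have to be repaid

    cost_of_attendance = tuition_and_fees + housing_and_food
    net_price = cost_of_attendance - estimated_grants_and_scholarships

    print(f"Estimated cost of attendance: ${cost_of_attendance:,}")  # $44,000
    print(f"Estimated net price:          ${net_price:,}")           # $26,000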

While this blog entry serves as a kind of tutorial in creating your own ranking system, the results are uniquely your own. You are free and encouraged to expand on the types of data points, or to swap some of them for others according to what feels right. In the end, what really matters is that it is all of your own choosing, and that is what makes it right. So, set your course, pursue it with confidence, keep an open mind and have fun. To help you get started, try out NACAC’s College Comparison Worksheet.

October 18
Storm Brews After US News Release

​In a post that appeared in the Counselor’s Corner several weeks ago, the question posed with regard to the forthcoming college rankings and application season was “More Stormy Weather Ahead?”

Given the reaction, seemingly from all parts of the country, to the release of U.S. News and World Report’s “Best Colleges” issue, that title was prescient. I now have the opportunity to sort through and summarize an array of reactions, a critical firestorm of sorts, to USNWR’s latest product.

So here we go. Decisions, decisions. Among the more benign was the view expressed by Mr. Nathaniel Drake in the University of Arizona’s Daily Wildcat, wherein he stated, “The rankings aren’t much good, though, unless you’re interested in how wealthy, prestigious and exclusive a school is...The U.S. News and World Report methodology still heavily favors wealthy private institutions over public schools without demonstrating how these schools actually provide students with a better education.”

Mr. Drake’s article reminded me of an earlier comment from Graham Spanier, the president of Penn State, who once stated to Malcolm Gladwell in a New Yorker piece:

“If you look at the top twenty schools every year, forever, they are all wealthy private universities. Do you mean that even the most prestigious public universities in the United States, and you can take your pick of what you think they are—Berkeley, U.C.L.A., University of Michigan, University of Wisconsin, Illinois, Penn State, U.N.C.—do you mean to say that not one of those is in the top tier of institutions? It doesn’t really make sense, until you drill down into the rankings, and what do you find?"

For the answer to Mr. Spanier’s question, look no further than this year's rankings critics.
 
Consider the strident, nearly plaintive reaction from the Daily Californian’s Senior Editorial Board/Staff in a piece called "A Pointless Numbers Game:"
 
“Yet again, U.S. News awarded UC Berkeley the distinction of being the best public university in the United States. And proud as one might be of this achievement, the U.S. News rankings are really meaningless distinctions that primarily affirm northeast private universities’ status as the upper crust of American higher education...The diverse opportunities available to anyone and a commitment to building a healthy campus community inclusive of a wide variety of students are what create a meaningful college experience. Treating these colleges as prestige factories that are worth only as much as the degrees they award has noxious side effects, and it explains part of what makes applying to college such a universally loathed experience.”
 
*****
Perhaps the saddest part of all is the terminology being associated with a significant episode in the lives of students and their families – the college selection and application processes. “Bile,” “noxious” and “loathing” certainly do not connote the excitement that, the scope of the challenge notwithstanding, should accompany the process of discovering the best setting for a young person to build a foundation for her or his future personal and intellectual growth.

To be sure, this backlash of criticism did not just spring up out of nowhere. I can still recall an interaction with a well-meaning parent who, after driving more than two hours to our school, literally entered my office waving one of the earliest editions of U.S. News’ “Best Colleges.” With a big smile he said, “This is great! Have you seen this magazine? Someone has finally told us who is the best. This is where I want my son to go!”

After having him take a seat I calmly pointed out that, in light of the fact that his son’s long-term goal was to become an elementary school teacher, “number one” would not be an appropriate choice as that concentration was not among its curricular offerings. Bewilderment displaced exuberance as the parent asked, “Well, how can this school be the best in the country if it doesn’t have a major that all good schools should have? How can it still be called the best?” Grasping for an answer I replied, “I don’t know. I guess the people who put out that magazine don’t think it makes a school any less of a number one just because it doesn’t offer a concentration in education."


On another occasion one of my better students detailed a heated disagreement with her parents about one of her top college choices, an out-of-state flagship institution with an excellent reputation. “A friend of my dad’s told him that the school isn’t ranked very high and that that makes it second-rate. Does the rank these people [USNWR] gave it make that a fact?”

My overly simplistic answer at the time was “No.” Years later, the article from Mr. Gladwell would help provide a sufficient answer. No one, least of all U.S. News and World Report, has devised mechanisms for measuring the factors that define student engagement (i.e., a “quality experience,” which is also regarded by many as a critical factor in student growth, learning, persistence and, ultimately, graduation). Because of this, the editors of U.S. News substitute proxies for direct measures in assessing institutional excellence. And, as Mr. Gladwell notes, “the proxies for educational quality turn out to be flimsy at best.”

Valerie Strauss of the Washington Post expanded on this notion of “flimsiness” by looking at the survey on academic reputation (weighted at 22.5% by U.S. News) and asking, “[Are] top academics – presidents, provosts and deans of admissions – [truly able] to account for intangibles at [more than 200] peer institutions such as ‘faculty dedication to teaching?’ … Do you think they can do that accurately for all faculty even at their own schools?”


It is my fervent recommendation that the “Best Colleges” rankings be dismissed, regarded as just another flawed publication of its type, and perhaps be referenced cautiously only as a compilation of marginally accurate entering freshmen academic profiles at institutions that are grouped neatly by type.

October 02
Obama on Ratings, Rankings and Higher Education

It would have been interesting to see the reaction in the offices of U.S. News and World Report to President Obama’s recent speech at the University of Buffalo, the central points of which were college ratings, costs, access and accountability. After his remarks on the challenges facing those whose aspirations are the hallmarks of middle-class America (“a good job with good wages, a good education, a home of your own, affordable health care, a secure retirement”), Mr. Obama noted that at least a part of the problem is the influence of college rankings. He followed this by stating,

“Today, I'm directing Arne Duncan, our Secretary of Education, to lead an effort to develop a new rating system for America's colleges before the 2015 college year. Right now, private [companies] like U.S. News and World Report put out each year their rankings, and it encourages a lot of colleges to focus on ways to…game the numbers, and it actually rewards them, in some cases, for raising costs.”

While it is widely accepted that there is plenty of “wag the dog” syndrome stimulated by the college rankings industry, before proceeding further it should be understood that the President used the term “rating system”, as opposed to “rankings.” The critical difference is that the latter is based on an ordinal system to reflect which institutions are “the best” according to criteria that Mr. Obama sees as subject to manipulation; i.e., “gaming.”

The former would assign a qualitative rating to colleges which, as summarized by Scott Jaschik in the August 22, 2013 edition of Inside Higher Ed, “… is based on various outcomes (such as graduation rates and graduate earnings), on affordability and on access (measures such as the percentage of students receiving Pell Grants)."

As noted in the transcript of Mr. Obama’s speech, this translates into new metrics by which higher education institutions will be rated. Among them:

  • Is the institution placing higher education within the reach of all students through innovative financing and aid programs?
  • Does the institution have in place programs that encourage higher rates of student persistence and success without compromising the quality of the education delivered?
  • What percentage of the institution’s freshmen graduate within four years?
  • Do employment rates of graduates reflect the quality of the overall learning experience and the skill set acquired during study at the institution?
  • What is the average accumulated debt that a student has at graduation?
  • Is the repayment schedule manageable, given the graduate’s earnings?


In the President’s view, the answers to these questions, “will help parents and students figure out how much value a college truly offers…. [and ensure that our country is providing] a better bargain for the middle class and everybody who's working hard to get into the middle class.”

For a moment, consider the idyllic possibilities of such a proposal. Family conversations on potential college options would be less likely to center on the U.S. News and World Report, Peterson’s, Forbes, Newsweek or any other of the usual “best college” rankings publications.  Students referencing the Obama college value/rating system would be able to make more informed choices based on an improved assessment of what the institution delivers for their family’s tuition investment. Particularly for those from under-represented segments, college attendance would become more of a reality than ever before.

Moreover, colleges with progressive aid and support programs would benefit from federally-provided awards. Through an increase in mutual accountability, colleges would help students remain on track for graduation as the students would then meet requirements for a renewal of their federal aid. Students, confident of their chances to complete their degrees on time, would graduate at a higher rate, increasing the potential for a reduction in the loan default rate. The inherent advantages conferred by a degree would enhance opportunities for employment within a knowledge-based economy. Caps on percentage rates for student loan repayment would ease the burden on graduates with entry-level salaries.

To be sure, these are lofty, noble proposals and in a perfect world they would be implemented with all deliberate speed. However, as Mr. Obama pointed out, “some of these reforms will require action from Congress”, and as Mr. Jaschik wrote,

“The ideas in the plan are a mix of actions that the administration could take by itself and those that would require legislation. To date, there has been plenty of Republican enthusiasm (at least at the state level) for some of the ideas reflected in the proposal. But given Republican enthusiasm in Washington for not passing anything proposed by the president, it is unclear how much support the administration will find on the Hill.”

September 27
More Stormy Weather Ahead?
​​Generally speaking, high school seniors and their parents emerge from the relative quietude of summer break to a setting that starts fast and only intensifies as the new school year progresses. Together, with the guidance counselors who serve them, they are swept up in a funnel of activity that will slow only when final college decisions are mailed. In a past discussion with several colleagues, the phenomenon was likened to sitting in a sailboat one minute on a calm sea, and the next being shrouded in dense fog, listening to high winds approach and watching the waves as they got higher.


Drawing analogies to the senior year, one counselor saw the fog as the confusion some students may suddenly feel over their list of prospective colleges. Another felt the high winds might represent the need to decide, within the next several weeks, whether to apply early decision, early action, or both. And for various reasons, we all saw the waves as something that often only exacerbates the prior two circumstances—the college rankings industry.

So with regard to the latter, what might be expected as the 2013 college rankings season begins? U.S. News and World Report releases its annual “Best Colleges” issue on September 10th. It will be followed, in no particular order, by similar publications or in some instances, by feature articles from Princeton Review, Kiplinger, Forbes, Money, Business Insider, Newsweek and any other publishing enterprise that sees the potential to increase readership.

Accompanying this wide array of choices on college information is an equally wide range of criteria on which their respective rankings are based. Which institutions enroll the most talented applicants, and how are they defined as such? Which schools have the “best” campus life, and what does "best" mean? Which colleges graduate their students with the lowest amount of loan debt, and how was this data compiled? And, the list goes on.

As we saw during 2012 and again already this year, a magazine’s target audience determines and defines the orientation of the data presented. And it is not necessarily directed at students.

For example, publications that focus on college rankings are rather splashy, with large, bold, multi-colored print on covers announcing the content’s exclusivity and authoritativeness. In this respect, the August 18th edition of Forbes magazine was very atypical. First, only a portion of the issue was devoted to college rankings. Second, a very subdued, “America’s Top Colleges” appeared as the header in a font about half the size of the title of the main article.

At first impression I thought, “Pretty tame for a set of rankings that has Stanford as number one, instead of the usual Harvard/Princeton/Yale leaderboard.” As it turned out, there was an ordinal ranking of three hundred colleges, but there were also “financial grades” of A+ down through C- for each school (excepting public institutions) to indicate their “Balance Sheet Health” and “Operational Soundness,” which was the central emphasis of the article. So, students looking for information relevant to their search process, as suggested by the issue’s cover, would be disappointed by the curveball thrown them. The rankings are more reflective of traditional Forbes content, with a focus on the business side of higher education. To be sure, this is logical terrain for Forbes. However, it also underscores the simple truth that all rankings are different, and students are responsible for figuring out what drives the numbered list in front of them.

There is also something disturbing that students and families should not be surprised to encounter during the rankings and college application season this year: the reporting of inaccurate data by colleges, which has come to light with far greater frequency this past year. The practice, willful and deliberate or not, was the focus of an article accompanying Forbes’ institutional fiscal health feature. Entitled “Schools of Deception: Some Schools Will Do Anything to Improve Their Ranking,” it was written by Forbes staff writer Abram Brown, whose introductory remarks include the following:

Sometime in 2004 Richard C. Vos, the admission dean at Claremont McKenna College, a highly regarded liberal arts school outside Los Angeles, developed a novel way to meet the school president’s demands to improve the quality of incoming classes. He would simply lie.

Over the next seven years Vos provided falsified data–the numbers behind our ranking of Claremont McKenna in America’s Top Colleges–to the Education Department and others, artificially increasing SAT and ACT scores and lowering the admission rate, providing the illusion, if not the reality, that better students were coming to Claremont McKenna.

Mr. Brown goes on to identify three other institutions – Bucknell, Emory and Iona College – which, as he put it, have hosted “data-rigging scandals” for the purpose of improving their academic profiles of admitted students and, in turn, their rankings. Earlier this year, Scott Jaschik wrote in the January 2, 2013 edition of Inside Higher Ed, “Yet Another Rankings Fabrication,” reporting that Tulane and George Washington University had perpetrated similar misrepresentations with their admissions data.

While it may not be fair to lay the entire blame on publishers for the potentially erroneous content of their magazines, they do provide the stage for such practices to occur. And until they find a way to fix the problem, students’ and families’ expectation of reliable information will be lost in the fog, the wind and the waves of dishonesty. Thankfully, the guidance counseling profession will be there, as always, to serve as their bridge over troubled waters.

April 29
A Closer Look at 377 Best Colleges

​The cover of The Best 377 Colleges promises much from the contents, proclaiming its exclusivity through being, “The ONLY GUIDE with CANDID FEEDBACK from 122,000 students, 62 RANKING LISTS, UNIQUE RATINGS, [and] FINANCIAL GUIDANCE”. My first thought was “377? Why 377? Why not 250 or 400?” The bit about “candid feedback” caused me to wonder how they synthesized all that.

Paging forward quickly, I glanced at a few of what must have been the ranking lists, among them, “Best College Radio Station”, “Lots of Greek Life”, followed naturally by one titled, “Lots of Beer.” In fairness, I will point out that these appeared in the sub-categories on “Extracurriculars” and “Social Life,” respectively. Still, I wondered if these lists were founded on the “candid feedback” mentioned on the front cover and, as I read on, found out that they were.

Before continuing, a word on Princeton Review’s position on rankings is appropriate. In Part One under the sub-heading, “About Those College Ranking lists,” the editors of the Best 377 Colleges make a very direct criticism of the college rankings publications. As they state on page 33,

“Here you won’t find the colleges in this book ranked hierarchically, 1 to 377. We think such lists – particularly those driven by and perpetuating a ‘best academics’ mania – are not useful for the people they are supposed to serve (college applicants).”

With this in mind, I began reading the “School Rankings and Lists” in Part 2, which are based on the responses of the 122,000-plus students who completed Princeton Review’s anonymous survey. They were asked to “rate various aspects of their colleges’ offerings and what they report to us on their campus experiences.”

What I regard as the principal shortcoming of the 62 School Rankings and Lists is the absence of a way to cross-list features. That is, if a student was hoping to put together a list of schools where “Students Study the Most,” where “Professors Get High Marks,” that has the “Best Campus Food,” “Where Everyone Plays Intramural Sports,” and is a “Jock School,” each list would have to be arranged side by side to determine which schools appear on the most ratings lists. This task is further complicated by the additional lists of “Great Schools for 20 of the Most Popular Undergraduate Majors,” which, incidentally, are arranged alphabetically.
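To show what that side-by-side exercise amounts to, here is a minimal sketch in Python that counts how many of a student's chosen lists each school appears on. The list names echo those above, but the school names and list memberships are hypothetical placeholders, not Princeton Review's actual results.

    # A minimal sketch of the cross-listing exercise described above.
    # School names and list memberships are hypothetical placeholders.
    from collections import Counter

    my_lists = {
        "Students Study the Most": {"School A", "School B", "School C"},
        "Professors Get High Marks": {"School B", "School C", "School D"},
        "Best Campus Food": {"School C", "School E"},
        "Everyone Plays Intramural Sports": {"School B", "School C"},
    }

    appearances = Counter(school for schools in my_lists.values() for school in schools)

    # Schools that appear on the most of your chosen lists rise to the top.
    for school, count in appearances.most_common():
        print(f"{school}: on {count} of {len(my_lists)} lists")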

And either I am splitting hairs or Princeton Review wants readers to buy into the notion that rating categories such as “Administrators Get Low Marks,” “Best-Run College,” “Easiest Campus To Get Around,” and “Students Pack the Stadiums” are as inherently valid and compelling as the “best academics” rating criteria of the rankings publications.

Hierarchy is defined in Webster’s Seventh New Collegiate Dictionary as “arrangement into a graded series.” Interestingly, the schools on each of the 62 School Rankings and Lists are not ordered alphabetically but rather in “our ‘Top 20’ [i.e., numbered from 1 to 20] ranking lists in eight categories [based on the compiled results of the student surveys].” Is it reasonable, then, to ask: is this not a hierarchical ranking?
