As Rick Harrison notes at the beginning of each episode of Pawn Stars, “You never know what is going to come through that door.” After two years of centering this blog on the many and varied dimensions of college rankings, you also never know what forms this controversial topic might take.
A recent Inside Higher Ed article, “Feedback From the Field,” provided an overview of educators' reactions to President Obama’s plan to develop a federal rating system for colleges, the resulting initiative by the Education Department to solicit reactions to the President’s proposal, and a brief summary of responses submitted thus far by students, guidance counselors, professors, higher education associations and members of the general public.
The suggested “outcomes-based” component of the President’s proposed rating system focused on graduates’ earnings has caused some college leaders and faculty to express concern that doing so would distort the true goals and purposes of higher education. As one professor emphasized in her e-mail to the Education Department, “[this would treat education] as if it were a stock investment, making earnings after graduation a sign of the quality of one’s education.”
I hope graduate earnings do not become the controlling metric in any higher education rating system. In its commercial manifestation, this metric (as expressed in Forbes magazine's "return on investment" rankings) is even more problematic than the US News rankings, which NACAC has critiqued over the years.
Many more voices need to be heard and much dialogue needs to occur to aid in developing a rating system of the type advocated by, among others, the Education Trust. Specifically, “a system that would, among other things, highlight the success gaps for low-income and minority students in higher education, and [thereby] help institutions better serve those students,” without allowing the emergence of “perverse incentives.” From this former school counselor's perspective, incorporating such measures would be an essential component of any attempt to classify colleges by the value they add to student experiences.
While the article didn't elaborate on the nature of the “perverse incentives,” it seems reasonable to assume that the reference is to college ranking metrics. That reading accords with Jamie Studley, Deputy Under Secretary of Education, who emphasizes that the goal is to develop a “nuanced and enriched idea of how you get value from education.” Ideally, then, the college rating system that emerges would eliminate most, if not all, of the misguided metrics used by rankings publications and, most importantly, the gamesmanship, manipulation and subterfuge to which they are vulnerable. NACAC's comments on the Administration's proposed college ratings system can be found here.
US News and World Report (USNWR) attempts to evaluate colleges mostly on digits and dollars: graduation rates, alumni gifts, GPA, test scores and faculty resources. Given all that attention to numerical data, it may come as a surprise that one of the most significant variables in the USNWR formula is also its most subjective and, some might say, unreliable. The reputational survey, a measure of what college representatives think about other institutions, represents 22.5 percent of a school's overall ranking, matched in weight only by graduation and retention rates.
On the reputational survey, college representatives are asked to rate colleges on a scale from one to five. That bears repeating. Survey participants are asked to label each college a 1, 2, 3, 4, or 5. If a college asked you to rate your high school experience along these lines, what would you say? Would a three suffice? What is the difference between a three and a four?
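A short sketch may make concrete how much a single subjective input can move a weighted composite score. The weights below are illustrative stand-ins loosely patterned on the published percentages, not USNWR's actual formula, and the scores are invented:

```python
# Illustrative sketch of a weighted composite score, in the spirit of
# rankings formulas. Weights and inputs are hypothetical, NOT USNWR's.
WEIGHTS = {
    "peer_reputation": 0.225,      # the 1-to-5 survey, rescaled to 0-1
    "grad_retention": 0.225,
    "faculty_resources": 0.20,
    "selectivity": 0.15,
    "financial_resources": 0.10,
    "alumni_giving": 0.05,
    "grad_rate_performance": 0.05,
}

def composite(scores: dict) -> float:
    """Weighted sum of normalized (0-to-1) indicator scores."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# A one-point swing on the 1-to-5 reputation scale is 0.25 after
# rescaling, which by itself shifts the composite by 0.225 * 0.25,
# roughly 0.056 -- often enough to leapfrog many schools.
base = {k: 0.60 for k in WEIGHTS}
bumped = dict(base, peer_reputation=0.60 + 0.25)
print(composite(bumped) - composite(base))
```

The point of the sketch is simply that when one subjective answer carries 22.5 percent of the weight, small differences of opinion translate directly into large movements in the final ordering.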
The rankings are an attempt to measure the immeasurable, and the USNWR editorial staff themselves acknowledge as much (emphasis added):
The host of intangibles that make up the college experience can't be measured by a series of data points. But for families concerned with finding the best academic value for their money, the U.S. News Best Colleges rankings provide an excellent starting point for the search.
Students should take this warning from USNWR very seriously. The rankings, in fact, do provide valuable information on many postsecondary institutions across the country. There are over 4,000 institutions to choose from, so more information on particular colleges is always helpful.
But be wary of rankings that begin with adjectives like "best." What does that mean exactly? For USNWR it means a few things, but it largely means the most selective schools with the highest graduation/retention rates and the "best" reputations. Because of its heavy weight in the rankings, let's look closer at the reputational survey.
NACAC surveyed college counselors and admission officers to find out what they thought of various factors within the USNWR ranking methodology. Here is what they thought of the reputational, or peer, survey:
Notice the meager number of respondents who think the assessments are good indicators. With over 60 percent calling the peer assessments either "poor" or simply "not an indicator," the assessments may not be such accurate indicators of quality after all.
Robert Morse, the rankings editor at USNWR, frequently takes a closer look at the data on his Morse Code Blog. Though many students may overlook this information, the blog is an important companion piece to the rankings.
In a recent entry, he articulates an important mismatch in the reputational survey results. According to Morse, responses from high school counselors and college representatives are sometimes very different. Several charts outline some jaw-dropping differences in the ways each of these groups view specific colleges. Institutions shift by as much as 104 places in the rankings according to whether they are viewed from the high school or college side.
Morse sums up his analysis as follows:
The National Universities that are more highly ranked by high school counselors are generally either smaller schools, universities that have recently expanded into research and granting doctoral degrees or schools whose curriculum is concentrated in science, technology, engineering and math, the so-called STEM fields. The universities that are more highly ranked by college officials than by high school counselors are all very large public universities, often the state's flagship school.
His analysis shows that the reputational data skews toward certain sectors, depending on who is answering the survey. It underscores the inherent problem with using reputations as benchmarks for quality; even on a scale from one to five, the answers are highly subjective.
In other words, opinions of school quality are just that. Opinions.
Accompanying the warnings about college rankings on this blog have been strong recommendations that students turn first to their guidance counselors for help in developing a list of colleges that is subjective and personalized according to their unique needs. The unfortunate reality (and I accept responsibility for perhaps not giving due notice to this) is that not all students have either equal access to a guidance counselor or the full measure of support that one can provide.
Part of NACAC’s response to students in underserved communities is the “Students and Parents” link on its website which provides an excellent overview of the college search and admissions processes. In this post, the Counselor’s Corner will help you take advantage of these resources and use them as a springboard toward a self-designed and individualized list and ranking, in a manner of speaking, of colleges.
To begin, develop a preliminary list of schools. Some college search engines provide efficient, user-friendly links for this task. Among them are:
It should be noted at this point that you can use more than one search engine. Tip: each database is different, so run an identically framed search on all of them.
When looking at the data, it is crucial that you first determine which filters are most important to you, such as:
Even using a rating system as simplistic as a “plus” for certain preferred features will, in principle, mirror what the rankings editors do.
Some quick thoughts on your filters:
Retention: The percentage of freshmen who return for their sophomore year can indicate the transitional and ongoing support that they received throughout the year. Generally speaking, the smaller the school the higher the retention, so anything higher than 90% is a definite plus.
Graduation Rate: With the cost of attendance for one year of college being nearly the same as that of a mid-sized car, the percentage of students who graduate in four, five or six years merits a lot of scrutiny. Anything beyond four years not only costs you time and money, but also delays entry into the workforce or the start of graduate school. Keep in mind, though, that the graduation rate for the institution as a whole may be very different from that of certain departments/programs of study. Tip: see if the respective rate for the major you are interested in is the higher of the two, as that would be a plus in your rating.
Financial Aid: Understanding that it is only a starting point, the “percentage of students who receive financial aid” is nonetheless perhaps the most misleading of all the data points to be researched. If a college guidebook reports cryptically that 60% of students receive aid, what is it actually telling you? First of all, it tells you that the remaining 40% of students can afford to pay the entire bill. It does not tell you what percentage of aid recipients qualified for federal or state aid, or received only an institutional grant or loans.
Perhaps the best way to research financial aid (short of an actual aid award letter) is the Net Price Calculator. All undergraduate institutions that award federal Title IV financial aid are required to offer a Net Price Calculator that provides students an estimated cost to attend the school by calculating tuition, fees, and housing charges against financial aid awards for which you may qualify. This is available on every institution’s web site.
While this blog entry serves as a kind of tutorial in creating your own ranking system, the results are uniquely your own. You are free and encouraged to expand on the types of data points, or exchange some of them with others according to what feels right. In the end, what really matters is that it is all of your own choosing, and that is what makes it right. So, set your course, pursue it with confidence, keep an open mind and have fun. To help you get started, try out NACAC’s College Comparison Worksheet.
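For readers who like to see the arithmetic, the “plus” tally described above can be sketched in a few lines. The college names, data points, filters and thresholds here are hypothetical placeholders, not recommendations; substitute your own:

```python
# A minimal sketch of the DIY "plus" rating described above.
# All college names and data are hypothetical placeholders.
colleges = [
    {"name": "College A", "retention": 0.93, "four_year_grad": 0.82,
     "has_major": True},
    {"name": "College B", "retention": 0.85, "four_year_grad": 0.60,
     "has_major": True},
    {"name": "College C", "retention": 0.95, "four_year_grad": 0.88,
     "has_major": False},
]

# Each filter awards one "plus" when the college meets your threshold.
filters = {
    "retention above 90%": lambda c: c["retention"] > 0.90,
    "strong 4-year grad rate": lambda c: c["four_year_grad"] >= 0.80,
    "offers my intended major": lambda c: c["has_major"],
}

# Sort by total plusses, most first, and show which features earned them.
for college in sorted(colleges,
                      key=lambda c: sum(f(c) for f in filters.values()),
                      reverse=True):
    plusses = [name for name, f in filters.items() if f(college)]
    print(college["name"], len(plusses), plusses)
```

The tally is deliberately crude: every filter counts equally, just as the simple “plus” system above suggests. The real value lies in deciding which filters make the list in the first place.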
In a post that appeared in the Counselor’s Corner several weeks ago, the question posed with regard to the forthcoming college rankings and application season was “More Stormy Weather Ahead?”
Given the reaction, seemingly from all parts of the country, to the release of U.S. News and World Report’s “Best Colleges” issue, that title was prescient. I now have the opportunity to sort through and summarize an array of reactions, a critical firestorm of sorts, to USNWR’s latest product.
So here we go. Decisions, decisions. Among the more benign was the view expressed by Mr. Nathaniel Drake in the University of Arizona’s Daily Wildcat, wherein he stated, “The rankings aren’t much good, though, unless you’re interested in how wealthy, prestigious and exclusive a school is...The U.S. News and World Report methodology still heavily favors wealthy private institutions over public schools without demonstrating how these schools actually provide students with a better education.”
Mr. Drake’s article reminded me of an earlier comment from Graham Spanier, the president of Penn State, who once stated to Malcolm Gladwell in a New Yorker piece:
“If you look at the top twenty schools every year, forever, they are all wealthy private universities. Do you mean that even the most prestigious public universities in the United States, and you can take your pick of what you think they are—Berkeley, U.C.L.A., University of Michigan, University of Wisconsin, Illinois, Penn State, U.N.C.—do you mean to say that not one of those is in the top tier of institutions? It doesn’t really make sense, until you drill down into the rankings, and what do you find?"
For the answer to that question, look no further than this year's rankings critics.
Consider the strident, nearly plaintive reaction from the Daily Californian’s Senior Editorial Board in a piece called "A Pointless Numbers Game":
“Yet again, U.S. News awarded UC Berkeley the distinction of being the best public university in the United States. And proud as one might be of this achievement, the U.S. News rankings are really meaningless distinctions that primarily affirm northeast private universities’ status as the upper crust of American higher education...The diverse opportunities available to anyone and a commitment to building a healthy campus community inclusive of a wide variety of students are what create a meaningful college experience. Treating these colleges as prestige factories that are worth only as much as the degrees they award has noxious side effects, and it explains part of what makes applying to college such a universally loathed experience.”
Perhaps the saddest part of all is the terminology being associated with a significant episode in the lives of students and their families – the college selection and application processes. “Bile,” “noxious” and “loathing” certainly do not connote the excitement, the scope of the challenge notwithstanding, that should accompany the process of discovering the best setting for a young person to build a foundation upon which the direction of her or his future personal and intellectual energies will be based.
To be sure, this backlash of criticism did not just spring up out of nowhere. I can still recall an interaction with a well-meaning parent who, after driving more than two hours to our school, entered my office waving one of the earliest editions of U.S. News “Best Colleges.” With a big smile he said, “This is great! Have you seen this magazine? Someone has finally told us who is the best. This is where I want my son to go!”
After having him take a seat I calmly pointed out that, in light of the fact that his son’s long-term goal was to become an elementary school teacher, “number one” would not be an appropriate choice as that concentration was not among its curricular offerings. Bewilderment displaced exuberance as the parent asked, “Well, how can this school be the best in the country if it doesn’t have a major that all good schools should have? How can it still be called the best?” Grasping for an answer I replied, “I don’t know. I guess the people who put out that magazine don’t think it makes a school any less of a number one just because it doesn’t offer a concentration in education."
On another occasion one of my better students detailed a heated disagreement with her parents about one of her top college choices, an out-of-state flagship institution with an excellent reputation. “A friend of my dad’s told him that the school isn’t ranked very high and that that makes it second-rate. Does the rank these people [USNWR] gave it, make that a fact?”
My overly simplistic answer at the time was, ”No.” Years later, Mr. Gladwell's article would help provide a sufficient answer. No one, least of all U.S. News and World Report, has devised mechanisms for measuring the factors that define student engagement (i.e., a “quality experience,” which many also regard as a critical factor in student growth, learning, persistence and, ultimately, graduation). Because of this, the editors of U.S. News substitute proxies for direct measures to assess institutional excellence. And, as Mr. Gladwell notes, “the proxies for educational quality turn out to be flimsy at best.”
Valerie Strauss of the Washington Post expanded on this notion of “flimsiness” by looking at the survey on academic reputation (weighted at 22.5% by U.S. News) and asking, “[Are] top academics – presidents, provosts and deans of admissions – [truly able] to account for intangibles at [more than 200] peer institutions such as ‘faculty dedication to teaching?’ … Do you think they can do that accurately for all faculty even at their own schools?”
It is my fervent recommendation that the “Best Colleges” rankings be dismissed, regarded as just another flawed publication of its type, and perhaps be referenced cautiously only as a compilation of marginally accurate entering freshmen academic profiles at institutions that are grouped neatly by type.
It would have been interesting to see the reaction in the offices of U.S. News and World Report to President Obama’s recent speech at the University at Buffalo, of which the central points were college ratings, costs, access and accountability. After his remarks on the challenges facing those whose aspirations are the hallmarks of middle-class America (“a good job with good wages, a good education, a home of your own, affordable health care, a secure retirement"), Mr. Obama noted that at least a part of the problem is the influence of college rankings. He followed this by stating,
“Today, I'm directing Arne Duncan, our Secretary of Education, to lead an effort to develop a new rating system for America's colleges before the 2015 college year. Right now, private [companies] like U.S. News and World Report put out each year their rankings, and it encourages a lot of colleges to focus on ways to…game the numbers, and it actually rewards them, in some cases, for raising costs.”
While it is widely accepted that there is plenty of “wag the dog” syndrome stimulated by the college rankings industry, before proceeding further it should be understood that the President used the term “rating system”, as opposed to “rankings.” The critical difference is that the latter is based on an ordinal system to reflect which institutions are “the best” according to criteria that Mr. Obama sees as subject to manipulation; i.e., “gaming.”
The former would assign a qualitative rating to colleges which, as summarized by Scott Jaschik in the August 22, 2013 edition of Inside Higher Ed, “…is based on various outcomes (such as graduation rates and graduate earnings), on affordability and on access (measures such as the percentage of students receiving Pell Grants)."
As noted in the transcript of Mr. Obama’s speech, this translates into new metrics by which higher education institutions will be rated. Among them:
- Is the institution placing higher education within the reach of all students through innovative financing and aid programs?
- Does the institution have in place programs that encourage higher rates of student persistence and success without compromising the quality of the education delivered?
- What percentage of the institution’s freshmen graduate within four years?
- Do employment rates of graduates reflect the quality of the overall learning experience and the skill set acquired during study at the institution?
- What is the average accumulated debt that a student has at graduation?
- Is the repayment schedule manageable, given the graduate’s earnings?
In the President’s view, the answers to these questions, “will help parents and students figure out how much value a college truly offers…. [and ensure that our country is providing] a better bargain for the middle class and everybody who's working hard to get into the middle class.”
For a moment, consider the idyllic possibilities of such a proposal. Family conversations on potential college options would be less likely to center on the U.S. News and World Report, Peterson’s, Forbes, Newsweek or any other of the usual “best college” rankings publications. Students referencing the Obama college value/rating system would be able to make more informed choices based on an improved assessment of what the institution delivers for their family’s tuition investment. Particularly for those from under-represented segments, college attendance would become more of a reality than ever before.
Moreover, colleges with progressive aid and support programs would benefit from federally-provided awards. Through an increase in mutual accountability, colleges would help students remain on track for graduation as the students would then meet requirements for a renewal of their federal aid. Students, confident of their chances to complete their degrees on time, would graduate at a higher rate, increasing the potential for a reduction in the loan default rate. The inherent advantages conferred by a degree would enhance opportunities for employment within a knowledge-based economy. Caps on percentage rates for student loan repayment would ease the burden on graduates with entry-level salaries.
To be sure, these are lofty, noble proposals and in a perfect world they would be implemented with all deliberate speed. However, as Mr. Obama pointed out, “some of these reforms will require action from Congress”, and as Mr. Jaschik wrote,
“The ideas in the plan are a mix of actions that the administration could take by itself and those that would require legislation. To date, there has been plenty of Republican enthusiasm (at least at the state level) for some of the ideas reflected in the proposal. But given Republican enthusiasm in Washington for not passing anything proposed by the president, it is unclear how much support the administration will find on the Hill.”
Generally speaking, high school seniors and their parents emerge from the relative quietude of summer break to a setting that starts fast and only intensifies as the new school year progresses. Together with the guidance counselors who serve them, they are swept up in a funnel of activity that will slow only when final college decisions are mailed. In a past discussion with several colleagues, the phenomenon was likened to sitting in a sailboat one minute on a calm sea, and the next being shrouded in dense fog, listening to high winds approach and watching the waves as they got higher.
Drawing analogies to the senior year, one counselor saw the fog as the confusion some students may suddenly feel over their list of prospective colleges. Another felt the high winds might represent the need to choose, within the next several weeks, whether to apply early decision, early action or both. And for various reasons, we all saw the waves as something that often only exacerbates the prior two circumstances: the college rankings industry.
With regard to the latter, what might be expected as the 2013 college rankings season begins? U.S. News and World Report releases its annual “Best Colleges” issue on September 10th. It will be followed, in no particular order, by similar publications or, in some instances, by feature articles from Princeton Review, Kiplinger, Forbes, Money, Business Insider, Newsweek and any other publishing enterprise that sees the potential to increase readership.
Accompanying this wide array of choices on college information is an equally wide range of criteria on which their respective rankings are based. Which institutions enroll the most talented applicants, and how are they defined as such? Which schools have the “best” campus life, and what does "best" mean? Which colleges graduate their students with the lowest amount of loan debt, and how was this data compiled? And the list goes on.
As we saw during 2012, and again already this year, a magazine’s target audience determines and defines the orientation of the data presented. And it is not always necessarily directed at students.
For example, publications that focus on college rankings are rather splashy, with large, bold, multi-colored print on covers announcing the content’s exclusivity and authoritativeness. In this respect, the August 18th edition of Forbes magazine was very atypical. First, only a portion of the issue was devoted to college rankings. Second, a very subdued “America’s Top Colleges” appeared as the header in a font about half the size of the title of the main article.
At first impression I thought, “Pretty tame for a set of rankings that has Stanford as number one, instead of the usual Harvard/Princeton/Yale leaderboard.” As it turned out, there was an ordinal ranking of three hundred colleges, but there were also “financial grades” of A+ down through C- for each school (excepting public institutions) to indicate their “Balance Sheet Health” and “Operational Soundness,” which was the central emphasis of the article.

So, students looking for information relevant to their search process, as suggested by the issue’s cover, would be disappointed by the curveball thrown them. The rankings are more reflective of traditional Forbes content, with a focus on the business side of higher education. To be sure, this is logical terrain for Forbes. However, it also underscores the simple truth that all rankings are different, and students are responsible for figuring out what drives the numbered list in front of them.
There is also something disturbing that students and families should not be surprised to encounter during the rankings and college application season this year: the reporting of inaccurate data by colleges, which has occurred with far greater frequency this past year. The practice, willful and deliberate or not, was the focus of an article accompanying Forbes’ institutional fiscal health feature. Entitled “Schools of Deception: Some Schools Will Do Anything to Improve Their Ranking,” it was written by Forbes staff writer Abram Brown, whose introductory remarks include the following:
In 2004 Richard C. Vos, the admission dean at Claremont McKenna College, a highly regarded liberal arts school outside Los Angeles, developed a novel way to meet the school president’s demands to improve the quality of incoming classes. He would simply lie.

For the next seven years Vos provided falsified data – the numbers behind our ranking of Claremont McKenna in America’s Top Colleges – to the Education Department and others, artificially increasing SAT and ACT scores and lowering the admission rate, providing the illusion, if not the reality, that better students were coming to Claremont McKenna.
Mr. Brown goes on to identify three other institutions (Bucknell, Emory and Iona College) that, as he stated it, have hosted “data-rigging scandals” for the purpose of improving their respective schools’ academic profiles of admitted students and, in turn, improving their rankings. Last year, in the January 2, 2013 edition of Inside Higher Ed, Scott Jaschik wrote “Yet Another Rankings Fabrication,” identifying Tulane and George Washington University as having perpetrated similar misrepresentations with their institutions’ admissions data.
While it may not be fair to lay the entire blame on publishers for the potentially erroneous content of their magazines, they do provide the stage for such practices to occur. And until they find a way to fix the problem, students’ and families’ expectation of reliable information will be lost in the fog, the wind and the waves of dishonesty. Thankfully, the guidance counseling profession will be there, as always, to serve as their bridge over troubled waters.
The cover of The Best 377 Colleges promises much from the contents, proclaiming its exclusivity through being “The ONLY GUIDE with CANDID FEEDBACK from 122,000 students, 62 RANKING LISTS, UNIQUE RATINGS, [and] FINANCIAL GUIDANCE”. My first thought was, “377? Why 377? Why not 250 or 400?” The bit about “candid feedback” caused me to wonder how they synthesized all that.
Paging forward quickly, I glanced at a few of what must have been the ranking lists, among them “Best College Radio Station” and “Lots of Greek Life,” followed naturally by one titled “Lots of Beer.” In fairness, I will point out that these appeared in the sub-categories on “Extracurriculars” and “Social Life,” respectively. Still, I wondered if these lists were founded on the “candid feedback” mentioned on the front cover and, as I read on, found out that they were.
Before continuing, a word on Princeton Review’s position on rankings is appropriate. In Part One, under the sub-heading “About Those College Ranking Lists,” the editors of The Best 377 Colleges make a very direct criticism of the college rankings publications. As they state on page 33, “Here you won’t find the colleges in this book ranked hierarchically, 1 to 377. We think such lists – particularly those driven by and perpetuating a ‘best academics’ mania – are not useful for the people they are supposed to serve (college applicants).”
With this in mind, I began reading the “School Rankings and Lists” in Part 2, which are based on the responses of more than 122,000 students who completed Princeton Review’s anonymous survey, in which they were asked to “rate various aspects of their colleges’ offerings” and report on their campus experiences.
What I regard as the principal shortcoming of the 62 School Rankings and Lists is the absence of a way to cross-list features. That is, if a student were hoping to put together a list of schools where “Students Study the Most,” where “Professors Get High Marks,” that have the “Best Campus Food,” where “Everyone Plays Intramural Sports,” and that qualify as a “Jock School,” each list would have to be arranged side by side to determine which schools appear on the most ratings lists. This task is further complicated by the additional lists of “Great Schools for 20 of the Most Popular Undergraduate Majors,” which, incidentally, are arranged alphabetically.
Either I am splitting hairs, or Princeton Review wants readers to buy into the notion that rating categories such as “Administrators Get Low Marks,” “Best-Run College,” “Easiest Campus To Get Around,” and “Students Pack the Stadiums” are as inherently valid and compelling as the “best academics” rating criteria of the rankings publications.
Hierarchy is defined in Webster’s Seventh New Collegiate Dictionary as “arrangement into a graded series.” Interestingly, the schools on each of the 62 School Rankings and Lists are not ordered alphabetically, but rather appear in “Top 20” ranking lists in eight categories, numbered from 1 to 20 and based on the compiled results of the student surveys. Is it reasonable, then, to ask: is this not a hierarchical ranking?
While the blog posts up to this point have centered primarily on the annual rankings publications that are released each autumn, experience has shown that some students may bypass these altogether and use a particular guidebook or two in their place.
Regardless of which option they choose, my advice has always been the same: in attempting to verify their impressions of the institutions on their list, students should look for consistency in information from source to source about those particular schools and programs of study. In other words, is what you believe about an institution or program and its culture/personality borne out in statements by the admissions representative, brochures, the viewbook, the school’s website, the departmental homepage, guidebooks and, finally, during a campus visit?
I freely admit that it may have seemed overly simplistic to tell them, “If it looks like a duck, walks like a duck, sounds like a duck and smells like a duck, well, chances are good that it is a duck.”
There were usually nods of agreement from my students indicating their understanding of why all this was important, until I got to the point of “campus culture and institutional personality”. A puzzled facial expression would then be accompanied by their asking in so many words, “Campus culture? Campus personality? What is that and why is it a consideration?”
This was a great segue to a favorite anecdote that pretty much clarified the nature of these two institutional features. I related how during a campus visit where a former student was giving me a tour, she turned to me and said, “Before we begin Mr. Prieto, I want to give you some idea of what to expect. The best way to explain what this place is like is that at [this school] women shave their heads and some of the guys wear dresses. That’s just the way it is, and no one bats an eye. And this is just one of the things about [this school] that you might but probably won’t see in the viewbooks and guidebooks.”
In another instance, during a discussion over lunch with another former student, I asked my usual question of whether he had found what he expected at his new four-year home. He replied,
“With one important exception, yes. With the high level of diversity mentioned in everything I had read about [this school], I took it for granted that it would be just like high school, where the differences in our backgrounds didn’t stop strong friendships from forming. It was one of the things I valued most about my high school experience. Here, yes the student body is very diverse, but too many people choose to hang out with only those from the same background. So, I don’t have any Black friends or Asian friends or Hispanic friends anymore. Maybe I just need to wait a little bit longer to see if my impression of only three months is accurate.”
This leads us back to the principal distinction between the various rankings publications and their counterpart, the guidebooks. The latter attempt to identify and elaborate on what students on a particular campus can expect to experience from a sociological and interpersonal perspective – two dimensions of “institutional fit” that go beyond matching up academically with the program, features and benefits of a particular school.
Given this orientation, we will proceed with a review and discussion of the following guidebooks. In trying to place myself in the position of a student, I picked up a representative sampling. They are:
- The Best 377 Colleges, 2013 Edition, published by The Princeton Review
- 283 Great Colleges (no publication date given), published by Spark Publishing
- Fiske Guide to Colleges 2013, published by Sourcebooks EDU
- The K&W Guide to College Programs and Services for Students With Learning Disabilities or AD/HD, 11th Edition, published by The Princeton Review
Alright, let’s begin.
(Next week: The Best 377 Colleges)
In The Australian, Simon Marginson, a professor of higher education at the University of Melbourne, was among several academics who reacted strongly to a data-gathering maneuver used by Quacquarelli Symonds (hereinafter “QS”), a London-based enterprise that compiles the QS World University Rankings.
Mr. Marginson’s criticism was prompted by his learning that QS had enlisted, albeit on a one-time, highly limited basis, Opinion Outpost to collect a very small sample of survey responses. Opinion Outpost is a website that awards points, redeemable for cash, to users who complete surveys on a variety of items and subjects.
That an enterprise such as QS, seen as one of the three major international university ranking systems, would use a paid survey to collect responses caused Brian Leiter, a professor and director of the Center for Law, Philosophy, and Human Values at the University of Chicago, to go even beyond Professor Marginson and call the QS rankings “a fraud on the public.”
Heavy-duty rhetoric bordering on invective, you might think. To be sure, as Elizabeth Redden, author of the article, pointed out, “All three of the major global university rankers, the QS rankings, as well as the Times Higher Education’s (THE) World University Rankings and the Academic Ranking of World Universities (ARWU), are regularly criticized. Many educators question the value of rankings and argue that they can measure only a narrow slice of what quality higher education is all about.”
It is a pleasure to reiterate that this synthesizes the position taken in each post for this blog since its inception. You have to love affirmation…
This groundswell of scrutiny and multi-source criticism led Ben Sowter, head of the intelligence unit for QS, to send a responsive piece to The Australian entitled “10 Reasons Why the QS Academic Survey Cannot Be Effectively Manipulated”, in which he attempted to counter the views of Marginson, Leiter, Redden and others, and neutralize their impact. A brief sample of his defense follows.
- The QS Intelligence Unit enforces a strict policy prohibiting one respondent from either soliciting or coaching another in terms of how to respond to the survey.
- QS uses a screening process to determine the validity and authenticity of every request to participate in the Global Academic Survey.
- QS maintains that its “market-leading sample size” of more than 46,000 respondents minimizes the possibility of undue influence on the results of the surveys, because any such effort would have to be large in scale and, therefore, detectable.
- QS stands firmly on the trust it places in its respondents, “academics who place great value on their ‘academic integrity’” and who, as such, are highly resistant to undue or unscrupulous influence when completing the survey.
In light of the preceding, might the average independent reader be willing to back off a bit on QS? For those who have not yet read the previous posts to this blog regarding the whole international college rankings scheme (and indeed, had I not delved into and written about it myself), it would be reasonable to ask, “Isn’t this flap about QS’s methodology just a little bit over the top? After all, compare it to the rather benign potshots that the various ranking publications in the U.S. take at each other, and you have to wonder if there is not a lot more to THE, ARWU and other academics taking direct aim at QS.”
Well, to reflect back on what appeared in this blog last winter, there is much more to it and it has to do with not merely a rank, but perceived prestige and what appear to be hardline international economics. To re-quote an earlier post:
“Performance indicators like research, citations, and industry income reflect the cash flow into the institution…”
In summary, when students and their families venture into the realm of international college and university rankings, they would be well advised to remember that, to the rankers, the academic opportunities available are certainly a consideration, but only to the extent that they establish and maintain an ever-increasing flow of money to the institution.
The following is an editorial on college rankings submitted by Marc Priester, a sophomore economics and government and politics major at the University of Maryland. He recently published another editorial about college rankings in the UMD school newspaper titled “College Rankings Fail.” The views expressed in this article reflect the views of Mr. Priester, and not necessarily the views of the National Association for College Admission Counseling.
The perceived relationship between prestigious universities and the
“American Dream” has spawned an arms race of students wishing to enroll
at top schools, supported by massive student loans. Because I am only
human, I have been, albeit regrettably, also a part of this travesty. I,
too, had been conflicted over prestige and cost once upon a time.
Rewind two years. I was a high school senior. Various acceptance letters fell upon me like rain on a barren farmland. This rain was toxic, but the temptation to indulge was there.
At first my heart was unfailingly set on attending a private university in New York that was acclaimed for its political science and economics departments. It seemed so simple: punch my ticket to this university, do well, let the prestige carry me to a cushy investment banking firm or a top law school, then proceed to buy the mansion, boat and Maybach, and ball harder than Jay-Z or Kanye West.
I had also received acceptance to UMD, but I had only applied after
my mother continually pestered me because she knew for certain we could
afford it. Thank God she did.
When I received my financial aid package, seeing the dismal Stafford
loan valued at $5,500 set against an almost $60,000 a year cost, my
heart sank. I could only finance this academic expedition with private
lenders who charge criminally high interest rates. But I still thought
maybe it’s worth it. After all, it’s prestigious! So maybe the Maybach
has to wait, but I can still get that mansion right?
The unfortunate truth is that excessive loans are so engulfing that they absorb almost all of even the most lucrative paycheck, with no remorse. If you miss a payment, your credit score tanks; if you pay off the loan too quickly, your credit score can also drop. Paradoxical? Absolutely, but the way loan finances work is unfair. It’s a lose-lose scenario where we as people are robbed of our autonomy. Read up on the horror stories; they are true. And, unlike a home mortgage, bankruptcy does not absolve you of responsibility for student loans; and if you default, your school may sue you, as some have done recently.
I begrudgingly chose UMD with a mindset that a future of deskwork and
being a yes-man to an executive was imminent. Thankfully 18-year-old
Marc was mistaken.
Here at my cost-effective state institution, I have been inundated
with opportunity. Leadership positions in various professional
organizations, success on the debate team, journalistic opportunities
with the highly regarded Diamondback, and our proximity to DC for
interning reflect my experiences here. Even having the fortunate
opportunity to write this article is a byproduct of attending UMD.
Also, the nightlife ain’t half bad.
What I’ve learned is that it is the individual who determines
success, not the name on the degree. Those at Harvard, Stanford or NYU
find success because of work ethic, determination and insatiable
ambition. The same goes for Maryland. Statistically speaking, top students from state universities earn pay comparable to that of their counterparts at prestigious universities.
I am not saying attending superior universities isn’t a worthy investment. What I am saying is that attending those universities shouldn’t come at the cost of future financial solvency. Maybe $100,000 of debt is unfathomable to a teenager, and frankly it still is to me, but if I can reach the same ends by going to another university and pay substantially less, then I say bring on the state schools. Let no parent, GPA/SAT, or university define your life; you build the future.