Accompanying the warnings about college rankings on this blog have been strong recommendations that students turn first to their guidance counselors for help in developing a list of colleges that is subjective and personalized according to their unique needs. The unfortunate reality (and I accept responsibility for perhaps not giving due notice to this) is that not all students have either equal access to a guidance counselor or the full measure of support that one can provide.
Part of NACAC’s response to students in underserved communities is the “Students and Parents” link on its website, which provides an excellent overview of the college search and admissions processes. In this post, the Counselor’s Corner will help you take advantage of these resources and use them as a springboard toward a self-designed and individualized list and ranking, in a manner of speaking, of colleges.
To begin, develop a preliminary list of schools. Some college search engines provide efficient, user-friendly links for this task. Among them are:
It should be noted at this point that you can use more than one search engine. Tip: each database is different, so do an identically-framed search on all of them.
When looking at the data, it is crucial that you first determine which filters are most important to you, such as:
Even using a rating system as simplistic as a “plus” for certain preferred features will, in principle, mirror what the rankings editors do.
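As a sketch of what that plus-based rating might look like in practice, here is a minimal Python example. The school names, filter labels, and plus marks are hypothetical, invented purely for illustration, not data pulled from any real search engine:

```python
# A minimal sketch of a "plus"-based rating system. Every school name
# and feature below is a made-up example, not real data.
schools = {
    "College A": {"retention_90_plus": True,  "strong_major": True,  "net_price_ok": False},
    "College B": {"retention_90_plus": True,  "strong_major": True,  "net_price_ok": True},
    "College C": {"retention_90_plus": False, "strong_major": True,  "net_price_ok": False},
}

def plus_count(features):
    """Count one 'plus' for each preferred feature a school offers."""
    return sum(features.values())

# Rank schools by how many of *your* preferred features they carry.
ranked = sorted(schools, key=lambda name: plus_count(schools[name]), reverse=True)
print(ranked)
```

The point is simply that once you choose your own filters, the "ranking" falls out of your priorities, not an editor's.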
Some quick thoughts on your filters:
Retention: The percentage of freshmen who return for their sophomore year can indicate the transitional and ongoing support that first-year students receive throughout the year. Generally speaking, the smaller the school, the higher the retention, so anything higher than 90% is a definite plus.
Graduation Rate: With the cost of attendance for one year of college being nearly the same as that of a mid-sized car, the percentage of students who graduate in four, five, or six years merits close scrutiny. Anything beyond four years not only costs you time and money but also delays entry into the workforce or graduate school. Keep in mind, though, that the graduation rate for the institution as a whole may be very different from that of certain departments or programs of study. Tip: check whether the rate for the major you are interested in is higher than the institution-wide rate; if so, count it as a plus in your rating.
Financial Aid: Understanding that it is only a starting point, the “percentage of students who receive financial aid” is nonetheless perhaps the most misleading of all the data points to be researched. If a college guidebook reports cryptically that 60% of students receive aid, what is it actually telling you? First of all, it tells you that the remaining 40% of students can afford to pay the entire bill. It does not tell you what percentage of aid recipients qualified for federal or state aid, or how many received only an institutional grant or loans.
Perhaps the best way to research financial aid (short of an actual aid award letter) is the Net Price Calculator. All undergraduate institutions that award federal Title IV financial aid are required to offer a Net Price Calculator that provides students an estimated cost to attend the school by calculating tuition, fees, and housing charges against financial aid awards for which you may qualify. This is available on every institution’s web site.
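As a rough illustration of what a Net Price Calculator does behind the scenes, the core arithmetic is direct charges minus gift aid. All figures in this sketch are made up for the example and are not any school’s actual costs:

```python
# Toy illustration of a Net Price Calculator's core arithmetic:
# estimated net price = direct charges minus gift aid.
# All dollar figures are invented examples.
def estimated_net_price(tuition, fees, housing, grants_and_scholarships):
    """Sticker charges minus gift aid (loans excluded: they must be repaid)."""
    return tuition + fees + housing - grants_and_scholarships

price = estimated_net_price(
    tuition=32_000, fees=1_500, housing=11_000, grants_and_scholarships=18_000
)
print(price)  # → 26500
```

Note that a real calculator also asks about family finances to estimate which aid you may qualify for; the subtraction above is only the final step.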
While this blog entry serves as a kind of tutorial in creating your own ranking system, the results are uniquely your own. You are free and encouraged to expand on the types of data points, or exchange some of them with others according to what feels right. In the end, what really matters is that it is all of your own choosing, and that is what makes it right. So, set your course, pursue it with confidence, keep an open mind and have fun. To help you get started, try out NACAC’s College Comparison Worksheet.
In a post that appeared in the Counselor’s Corner several weeks ago, the question posed with regard to the forthcoming college rankings and application season was “More Stormy Weather Ahead?”
Given the reaction, seemingly from all parts of the country, to the release of U.S. News and World Report’s “Best Colleges” issue, that title was prescient. I now have the opportunity to sort through and summarize an array of reactions, a critical firestorm of sorts, to USNWR’s latest product.
So here we go. Decisions, decisions. Among the more benign was the view expressed by Mr. Nathaniel Drake in the University of Arizona’s Daily Wildcat, wherein he stated, “The rankings aren’t much good, though, unless you’re interested in how wealthy, prestigious and exclusive a school is...The U.S. News and World Report methodology still heavily favors wealthy private institutions over public schools without demonstrating how these schools actually provide students with a better education.”
Mr. Drake’s article reminded me of an earlier comment from Graham Spanier, the president of Penn State, who once stated to Malcolm Gladwell in a New Yorker piece:
“If you look at the top twenty schools every year, forever, they are all wealthy private universities. Do you mean that even the most prestigious public universities in the United States, and you can take your pick of what you think they are—Berkeley, U.C.L.A., University of Michigan, University of Wisconsin, Illinois, Penn State, U.N.C.—do you mean to say that not one of those is in the top tier of institutions? It doesn’t really make sense, until you drill down into the rankings, and what do you find?"
For the answer to Dr. Spanier’s question, look no further than this year’s rankings critics.
Consider the strident, nearly plaintive reaction from the Daily Californian’s Senior Editorial Board in a piece called “A Pointless Numbers Game”:
“Yet again, U.S. News awarded UC Berkeley the distinction of being the best public university in the United States. And proud as one might be of this achievement, the U.S. News rankings are really meaningless distinctions that primarily affirm northeast private universities’ status as the upper crust of American higher education...The diverse opportunities available to anyone and a commitment to building a healthy campus community inclusive of a wide variety of students are what create a meaningful college experience. Treating these colleges as prestige factories that are worth only as much as the degrees they award has noxious side effects, and it explains part of what makes applying to college such a universally loathed experience.”
Perhaps the saddest part of all is the terminology being associated with a significant episode in the lives of students and their families – the college selection and application processes. “Bile,” “noxious” and “loathing” certainly do not connote the excitement, the scope of the challenge notwithstanding, that should accompany the process of discovering the best setting for a young person to build a foundation upon which the direction of her or his future personal and intellectual energies will be based.
To be sure, this backlash of criticism did not just spring up out of nowhere. I can still recall an interaction with a well-meaning parent who, after driving more than two hours to our school, literally entered my office waving one of the earliest editions of U.S. News’ “Best Colleges.” With a big smile he said, “This is great! Have you seen this magazine? Someone has finally told us who is the best. This is where I want my son to go!”
After having him take a seat I calmly pointed out that, in light of the fact that his son’s long-term goal was to become an elementary school teacher, “number one” would not be an appropriate choice as that concentration was not among its curricular offerings. Bewilderment displaced exuberance as the parent asked, “Well, how can this school be the best in the country if it doesn’t have a major that all good schools should have? How can it still be called the best?” Grasping for an answer I replied, “I don’t know. I guess the people who put out that magazine don’t think it makes a school any less of a number one just because it doesn’t offer a concentration in education."
On another occasion one of my better students detailed a heated disagreement with her parents about one of her top college choices, an out-of-state flagship institution with an excellent reputation. “A friend of my dad’s told him that the school isn’t ranked very high and that that makes it second-rate. Does the rank these people [USNWR] gave it make that a fact?”
My overly simplistic answer at the time was, ”No.” Years later, the article from Mr. Gladwell would help provide a sufficient answer. No one, least of all U.S. News and World Report, has devised mechanisms for measuring the factors that define student engagement (i.e., a “quality experience,” which is also regarded by many as a critical factor in student growth, learning, persistence and, ultimately, graduation). Because of this, the editors of U.S. News substitute proxies for direct measures to assess institutional excellence. And, as Mr. Gladwell notes, “the proxies for educational quality turn out to be flimsy at best.”
Valerie Strauss of the Washington Post expanded on this notion of “flimsiness” by looking at the survey on academic reputation (weighted at 22.5% by U.S. News) and asking, “[Are] top academics – presidents, provosts and deans of admissions – [truly able] to account for intangibles at [more than 200] peer institutions such as ‘faculty dedication to teaching?’ … Do you think they can do that accurately for all faculty even at their own schools?”
It is my fervent recommendation that the “Best Colleges” rankings be dismissed, regarded as just another flawed publication of its type, and perhaps be referenced cautiously only as a compilation of marginally accurate entering freshmen academic profiles at institutions that are grouped neatly by type.
It would have been interesting to see the reaction in the offices of U.S. News and World Report to President Obama’s recent speech at the University of Buffalo, of which the central points were college ratings, costs, access and accountability. After his remarks on the challenges facing those whose aspirations are the hallmarks of middle class America (“a good job with good wages, a good education, a home of your own, affordable health care, a secure retirement") Mr. Obama noted that at least a part of the problem is the influence of college rankings. He followed this by stating,
“Today, I'm directing Arne Duncan, our Secretary of Education, to lead an effort to develop a new rating system for America's colleges before the 2015 college year. Right now, private [companies] like U.S. News and World Report put out each year their rankings, and it encourages a lot of colleges to focus on ways to…game the numbers, and it actually rewards them, in some cases, for raising costs.”
While it is widely accepted that there is plenty of “wag the dog” syndrome stimulated by the college rankings industry, before proceeding further it should be understood that the President used the term “rating system”, as opposed to “rankings.” The critical difference is that the latter is based on an ordinal system to reflect which institutions are “the best” according to criteria that Mr. Obama sees as subject to manipulation; i.e., “gaming.”
The former would assign a qualitative rating to colleges which, as summarized by Scott Jaschik in the August 22, 2013 edition of Inside Higher Education, “… is based on various outcomes (such as graduation rates and graduate earnings), on affordability and on access (measures such as the percentage of students receiving Pell Grants)."
As noted in the transcript of Mr. Obama’s speech, this translates into new metrics by which institutions of higher education will be rated. Among them:
- Is the institution placing higher education within the reach of all students through innovative financing and aid programs?
- Does the institution have in place programs that encourage higher rates of student persistence and success without compromising the quality of the education delivered?
- What percentage of the institution’s freshmen graduate within four years?
- Do employment rates of graduates reflect the quality of the overall learning experience and the skill set acquired during study at the institution?
- What is the average accumulated debt that a student has at graduation?
- Is the repayment schedule manageable, given the graduate’s earnings?
In the President’s view, the answers to these questions, “will help parents and students figure out how much value a college truly offers…. [and ensure that our country is providing] a better bargain for the middle class and everybody who's working hard to get into the middle class.”
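The last two questions in the list above, accumulated debt and manageable repayment, lend themselves to a back-of-the-envelope check. In this sketch, the 10-year term, the 6.8% interest rate, the $27,000 debt, the $45,000 salary, and the rule of thumb that payments should stay under roughly 10% of gross income are all illustrative assumptions on my part, not metrics from the President’s proposal:

```python
# Back-of-the-envelope "manageable repayment" check.
# Term, rate, debt, salary, and the 10%-of-income cap are all
# illustrative assumptions, not official metrics.
def monthly_payment(principal, annual_rate, years=10):
    """Standard amortized loan payment formula."""
    r = annual_rate / 12           # monthly interest rate
    n = years * 12                 # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

def is_manageable(principal, annual_rate, annual_salary, cap=0.10):
    """True if the payment stays under `cap` of gross monthly income."""
    return monthly_payment(principal, annual_rate) <= cap * annual_salary / 12

# Hypothetical graduate: $27,000 in loans at 6.8%, $45,000 starting salary.
payment = monthly_payment(27_000, 0.068)      # roughly $310/month
print(round(payment, 2), is_manageable(27_000, 0.068, 45_000))
```

Run with a larger debt load, say $100,000 at the same salary, and the check flips to unmanageable, which is precisely the situation the proposed ratings aim to surface.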
For a moment, consider the idyllic possibilities of such a proposal. Family conversations on potential college options would be less likely to center on the U.S. News and World Report, Peterson’s, Forbes, Newsweek or any other of the usual “best college” rankings publications. Students referencing the Obama college value/rating system would be able to make more informed choices based on an improved assessment of what the institution delivers for their family’s tuition investment. Particularly for those from under-represented segments, college attendance would become more of a reality than ever before.
Moreover, colleges with progressive aid and support programs would benefit from federally-provided awards. Through an increase in mutual accountability, colleges would help students remain on track for graduation as the students would then meet requirements for a renewal of their federal aid. Students, confident of their chances to complete their degrees on time, would graduate at a higher rate, increasing the potential for a reduction in the loan default rate. The inherent advantages conferred by a degree would enhance opportunities for employment within a knowledge-based economy. Caps on percentage rates for student loan repayment would ease the burden on graduates with entry-level salaries.
To be sure, these are lofty, noble proposals and in a perfect world they would be implemented with all deliberate speed. However, as Mr. Obama pointed out, “some of these reforms will require action from Congress”, and as Mr. Jaschik wrote,
“The ideas in the plan are a mix of actions that the administration could take by itself and those that would require legislation. To date, there has been plenty of Republican enthusiasm (at least at the state level) for some of the ideas reflected in the proposal. But given Republican enthusiasm in Washington for not passing anything proposed by the president, it is unclear how much support the administration will find on the Hill.”
Generally speaking, high school seniors and their parents emerge from the relative quietude of summer break into a setting that starts fast and only intensifies as the new school year progresses. Together with the guidance counselors who serve them, they are swept up in a funnel of activity that will slow only when final college decisions are mailed. In a past discussion with several colleagues, the phenomenon was likened to sitting in a sailboat one minute on a calm sea, and the next being shrouded in dense fog, listening to high winds approach and watching the waves get higher.
Drawing analogies to the senior year, one counselor saw the fog as the confusion some students may suddenly feel over their list of prospective colleges. Another felt the high winds might represent the need to choose, within the next several weeks, whether to apply early decision, early action, or both. And for various reasons, we all saw the waves as something that often only exacerbates the prior two circumstances: the college rankings industry.
With regard to the latter, what might be expected as the 2013 college rankings season begins? U.S. News and World Report releases its annual “Best Colleges” issue on September 10th. It will be followed, in no particular order, by similar publications or, in some instances, feature articles from Princeton Review, Kiplinger, Forbes, Money, Business Insider, Newsweek, and any other publishing enterprise that sees the potential to increase readership.
Accompanying this wide array of choices on college information is an equally wide range of criteria on which the respective rankings are based. Which institutions enroll the most talented applicants, and how are they defined as such? Which schools have the “best” campus life, and what does "best" mean? Which colleges graduate their students with the lowest amount of loan debt, and how was this data compiled? And the list goes on.
As we saw during 2012, and again already this year, a magazine’s target audience determines and defines the orientation of the data presented. And it is not always necessarily directed at students.
For example, publications that focus on college rankings are usually rather splashy, with large, bold, multi-colored print on covers announcing the content’s exclusivity and authoritativeness. In this respect, the August 18th edition of Forbes magazine was very atypical. First, only a portion of the issue was devoted to college rankings. Second, a very subdued “America’s Top Colleges” appeared as the header, in a font about half the size of the title of the main article.
At first impression I thought, “Pretty tame for a set of rankings that has Stanford as number one, instead of the usual Harvard/Princeton/Yale leaderboard.” As it turned out, there was an ordinal ranking of three hundred colleges, but there were also “financial grades” of A+ down through C- for each school (excepting public institutions) to indicate their “Balance Sheet Health” and “Operational Soundness,” which was the central emphasis of the article. So students looking for information relevant to their search process, as suggested by the issue’s cover, would be disappointed by the curveball thrown them. The rankings are more reflective of traditional Forbes content, with a focus on the business side of higher education. To be sure, this is logical terrain for Forbes. However, it also underscores the simple truth that all rankings are different, and students are responsible for figuring out what drives the numbered list in front of them.
There is also something disturbing that students and families should not be surprised to encounter during this year’s rankings and college application season: the reporting of inaccurate data by colleges, a practice exposed with far greater frequency this past year. The practice, willful and deliberate or not, was the focus of an article accompanying Forbes’ institutional fiscal health feature. Entitled “Schools of Deception: Some Schools Will Do Anything to Improve Their Ranking,” it was written by Forbes staff writer Abram Brown, whose introductory remarks include the following:
In 2004 Richard C. Vos, the admission dean at Claremont McKenna College, a highly regarded liberal arts school outside Los Angeles, developed a novel way to meet the school president’s demands to improve the quality of incoming classes. He would simply lie. For the next seven years Vos provided falsified data–the numbers behind our ranking of Claremont McKenna in America’s Top Colleges–to the Education Department and others, artificially increasing SAT and ACT scores and lowering the admission rate, providing the illusion, if not the reality, that better students were coming to Claremont McKenna.
Mr. Brown goes on to identify three other institutions: Bucknell, Emory, and Iona College, which, as he puts it, have hosted “data-rigging scandals” for the purpose of inflating the academic profile of their admitted students and, in turn, improving their rankings. In the January 2, 2013 edition of Inside Higher Education, Scott Jaschik wrote “Yet Another Rankings Fabrication,” reporting that Tulane and George Washington University had perpetrated similar misrepresentations with their admissions data.
While it may not be fair to lay the entire blame on publishers for the potentially erroneous content of their magazines, they do provide the stage for such practices to occur. And until they find a way to fix the problem, students’ and families’ expectations of reliable information will be lost in the fog, the wind, and the waves of dishonesty. Thankfully, the guidance counseling profession will be there, as always, to serve as their bridge over troubled waters.
The cover of The Best 377 Colleges promises much from the contents, proclaiming its exclusivity through being, “The ONLY GUIDE with CANDID FEEDBACK from 122,000 students, 62 RANKING LISTS, UNIQUE RATINGS, [and] FINANCIAL GUIDANCE.” My first thought was, “377? Why 377? Why not 250 or 400?” The bit about “candid feedback” caused me to wonder how they synthesized all that.
Paging forward quickly, I glanced at a few of what must have been the ranking lists, among them “Best College Radio Station” and “Lots of Greek Life,” followed naturally by one titled “Lots of Beer.” In fairness, I will point out that these appeared in the sub-categories on “Extracurriculars” and “Social Life,” respectively. Still, I wondered whether these lists were founded on the “candid feedback” mentioned on the front cover and, as I read on, found out that they were.
Before continuing, a word on Princeton Review’s position on rankings is appropriate. In Part One, under the sub-heading “About Those College Ranking Lists,” the editors of The Best 377 Colleges make a very direct criticism of the college rankings publications. As they state on page 33, “Here you won’t find the colleges in this book ranked hierarchically, 1 to 377. We think such lists – particularly those driven by and perpetuating a ‘best academics’ mania – are not useful for the people they are supposed to serve (college applicants).”
With this in mind, I began reading the “School Rankings and Lists” in Part Two, which are based on the responses of more than 122,000 students who completed Princeton Review’s anonymous survey. They were asked to “rate various aspects of their colleges’ offerings and what they report to us on their campus experiences.”
What I regard as the principal shortcoming of the 62 School Rankings and Lists is the absence of a way to cross-list features. That is, if a student was hoping to put together a list of schools where “Students Study the Most,” where “Professors Get High Marks,” that has the “Best Campus Food,” where “Everyone Plays Intramural Sports,” and that is a “Jock School,” each list would have to be arranged side by side to determine which schools appear on the most ratings lists. This task is further complicated by the additional lists of “Great Schools for 20 of the Most Popular Undergraduate Majors,” which, incidentally, are arranged alphabetically.
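For readers comfortable with a little scripting, the missing cross-listing is easy to improvise: tally how many of your chosen ranking lists each school appears on. In this sketch the list names echo the book’s categories, but the school entries are invented for illustration:

```python
from collections import Counter

# Improvised cross-listing: count how many of *your* chosen ranking
# lists each school appears on. School entries are invented examples.
lists = {
    "Students Study the Most":          ["College A", "College B"],
    "Professors Get High Marks":        ["College A", "College C"],
    "Best Campus Food":                 ["College B", "College A"],
    "Everyone Plays Intramural Sports": ["College C", "College A"],
}

tally = Counter(school for schools in lists.values() for school in schools)

# Schools appearing on the most of your chosen lists come first.
for school, count in tally.most_common():
    print(school, count)
```

A spreadsheet accomplishes the same thing: one column per list, one row per school, and a count across each row.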
And either I am splitting hairs, or Princeton Review wants readers to buy into the notion that rating categories such as “Administrators Get Low Marks,” “Best-Run College,” “Easiest Campus To Get Around,” and “Students Pack the Stadiums” are as inherently valid and compelling as the “best academics” rating criteria of the rankings publications.
Hierarchy is defined in Webster’s Seventh New Collegiate Dictionary as “arrangement into a graded series.” Interestingly, the schools on each of the 62 School Rankings and Lists are not ordered alphabetically but rather in “our ‘Top 20’ [i.e., numbered from 1 to 20] ranking lists in eight categories [based on the compiled results of the student surveys].” Is it reasonable, then, to ask, “Is this not a hierarchical ranking?”
While the blog posts up to this point have centered primarily on the annual rankings publications that are released each autumn, experience has shown that some students may bypass these altogether and use a particular guidebook or two in their place.
Regardless of which option they choose, my advice has always been the same: in attempting to verify the impressions they have of the institutions on their list, what they are looking for is consistency of information from source to source about those particular schools and programs of study. In other words, is what you believe about an institution or program and its culture/personality reflected in, say, statements by the admissions representative, brochures, the viewbook, the school’s website, the departmental homepage, guidebooks, and finally, a campus visit?
I freely admit that it may have seemed overly simplistic to tell them, “If it looks like a duck, walks like a duck, sounds like a duck and smells like a duck, well, chances are good that it is a duck.”
There were usually nods of agreement from my students indicating their understanding of why all this was important, until I got to the point of “campus culture and institutional personality”. A puzzled facial expression would then be accompanied by their asking in so many words, “Campus culture? Campus personality? What is that and why is it a consideration?”
This was a great segue to a favorite anecdote that pretty much clarified the nature of these two institutional features. I related how during a campus visit where a former student was giving me a tour, she turned to me and said, “Before we begin Mr. Prieto, I want to give you some idea of what to expect. The best way to explain what this place is like is that at [this school] women shave their heads and some of the guys wear dresses. That’s just the way it is, and no one bats an eye. And this is just one of the things about [this school] that you might but probably won’t see in the viewbooks and guidebooks.”
In another instance, during a discussion over lunch with another former student, I asked my usual question of whether he had found what he expected at his new four-year home. He replied:
“With one important exception, yes. With the high level of diversity mentioned in everything I had read about [this school], I took it for granted that it would be just like high school, where the differences in our backgrounds didn’t stop strong friendships from forming. It was one of the things I valued most about my high school experience. Here, yes the student body is very diverse, but too many people choose to hang out with only those from the same background. So, I don’t have any Black friends or Asian friends or Hispanic friends anymore. Maybe I just need to wait a little bit longer to see if my impression of only three months is accurate.”
This leads us back to the principal distinction between the various rankings publications and their corollary, the guidebooks. The latter attempt to identify and elaborate on what students on a particular campus can expect to experience from a sociological and interpersonal perspective – two dimensions of “institutional fit” that go beyond matching up academically with the program, features and benefits of a particular school.
Given this orientation, we will proceed with a review and discussion of the following guidebooks. In trying to place myself in the position of a student, I picked up a representative sampling. They are:
- The Best 377 Colleges, 2013 Edition, published by The Princeton Review
- 283 Great Colleges (no publication date given), Spark Publishing
- Fiske Guide to Colleges 2013, published by Sourcebooks EDU
- The K&W Guide To College Programs and Services for Students With Learning Disabilities or AD/HD, 11th Edition, published by The Princeton Review
Alright, let’s begin.
(Next week: The Best 377 Colleges)
In The Australian, Simon Marginson, a professor of higher education at the University of Melbourne, was among several academics who reacted strongly to a data-gathering maneuver used by Quacquarelli Symonds, hereinafter “QS”, a London-based enterprise that compiles the QS Global World University Rankings.
Mr. Marginson’s criticism was prompted by his learning that QS enlisted, albeit on a one-time highly limited basis, Opinion Outpost to collect a very small sample of survey responses. Opinion Outpost is a website that awards points that may be redeemed for cash to users who complete surveys on a variety of items and subjects.
The spectacle of an enterprise such as QS, seen as one of the three major international university ranking systems, using a paid survey to collect responses caused Brian Leiter, a professor and director of the Center for Law, Philosophy, and Human Values at the University of Chicago, to go even further than Professor Marginson, calling the QS rankings “a fraud on the public.”
Heavy-duty rhetoric bordering on invective, you might think. To be sure, as Elizabeth Redden, the author of the article, pointed out, “All three of the major global university rankers, the QS rankings, as well as the Times Higher Education’s (THE) World University Rankings and the Academic Ranking of World Universities (ARWU), are regularly criticized. Many educators question the value of rankings and argue that they can measure only a narrow slice of what quality higher education is all about.”
It is a pleasure to reiterate that this synthesizes the position taken in each post for this blog since its inception. You have to love affirmation…
This groundswell of scrutiny and multi-source criticism led Ben Sowter, head of the intelligence unit for QS, to send a responsive piece to The Australian entitled “10 Reasons Why the QS Academic Survey Cannot Be Effectively Manipulated”, in which he attempted to counter the views of Marginson, Leiter, Redden and others, and neutralize their impact. A brief sample of his defense follows.
- The QS Intelligence Unit enforces a strict policy prohibiting one respondent from either soliciting or coaching another in terms of how to respond to the survey.
- QS uses a screening process to determine the validity and authenticity of every request to participate in the Global Academic Survey.
- QS maintains that its “market-leading sample size” of more than 46,000 respondents minimizes the possibility of undue influence being exerted on the results of the surveys, because any such effort would have to be large in scale and, therefore, detectable.
- QS stands firmly on the trust they have in their respondents, “academics who place great value on their ‘academic integrity’” and as such, are highly resistant to undue or unscrupulous influence in the completion of their survey.
In light of the preceding, might the average independent reader be willing to back off a bit on QS? For those who have not yet read previous posts to this blog regarding the whole international college rankings scheme, it would be reasonable to ask, “Isn’t this flap about QS’s methodology just a little bit over the top? After all, compare it to the rather benign potshots that the various ranking publications in the U.S. take at each other, and you have to wonder if there is not a lot more to THE, ARWU, and other academics taking direct aim at QS.”
Well, to reflect back on what appeared in this blog last winter, there is much more to it and it has to do with not merely a rank, but perceived prestige and what appear to be hardline international economics. To re-quote an earlier post:
“Performance indicators like research, citations, and industry income reflect the cash flow into the institution…”
In summary, when students and their families venture into the realm of international college and university rankings, they would be well advised to remember that, in these rankings, the academic opportunities a school offers are certainly a consideration, but often only to the extent that they establish and maintain an ever-increasing flow of money into the institution.
The following is an editorial on college rankings submitted by Marc Priester, a sophomore
economics and government and politics major at the University of
Maryland. He recently published another editorial about college rankings
in the UMD school newspaper titled “College Rankings Fail.”
The views expressed in this article reflect the views of Mr. Priester,
and not necessarily the views of the National Association for College Admission Counseling (NACAC).
The perceived relationship between prestigious universities and the
“American Dream” has spawned an arms race of students wishing to enroll
at top schools, supported by massive student loans. Because I am only
human, I have, albeit regrettably, also been a part of this travesty.
Once upon a time, I too was conflicted over prestige and cost.
Rewind two years. I was a high school senior. Various acceptance
letters fell upon me like rain on a barren farmland. This rain was
toxic, but the temptation to indulge was there.
At first my heart was unfailingly set on attending a private
university in New York that was acclaimed for its political science
& economics departments. It seemed so simple: punch my ticket to
this university, do well, let the prestige carry me to a cushy
investment banking firm or a top law school, then proceed to buy the
mansion, boat, and Maybach and ball harder than Jay-Z or Kanye West.
I had also received acceptance to UMD, but I had only applied after
my mother continually pestered me because she knew for certain we could
afford it. Thank God she did.
When I received my financial aid package and saw the dismal Stafford
loan of $5,500 set against an almost $60,000-a-year cost, my heart
sank. I could finance this academic expedition only with private
lenders who charge criminally high interest rates. But I still thought,
maybe it’s worth it. After all, it’s prestigious! So maybe the Maybach
has to wait, but I can still get that mansion right?
The unfortunate truth is that excessive loans are so engulfing that
they absorb almost all of even the most lucrative paycheck, with no
remorse. Miss a payment and your credit score tanks; pay off the loan
too quickly and your credit score dips. Paradoxical?
Absolutely, but the way finances work is unfair. It’s a lose-lose
scenario where we as people are robbed of our autonomy. Read up on the
horror stories; they are true. And, unlike a mortgage, a student loan
is not discharged in bankruptcy; if you default, your school may sue
you, as some have done recently.
I begrudgingly chose UMD with a mindset that a future of deskwork and
being a yes-man to an executive was imminent. Thankfully 18-year-old
Marc was mistaken.
Here at my cost-effective state institution, I have been inundated
with opportunity. Leadership positions in various professional
organizations, success on the debate team, journalistic opportunities
with the highly regarded Diamondback, and our proximity to DC for
interning reflect my experiences here. Even the opportunity to write
this article is a byproduct of attending UMD.
Also, the nightlife ain’t half bad.
What I’ve learned is that it is the individual who determines
success, not the name on the degree. Those at Harvard, Stanford or NYU
find success because of work ethic, determination and insatiable
ambition. The same goes for Maryland. Statistically speaking, top students from state universities earn pay comparable to that of their peers from prestigious universities.
I am not saying attending superior universities isn’t a worthy
investment. What I am saying is attending those universities shouldn’t
come at the cost of future financial solvency. Maybe $100,000 of debt
is unfathomable to a teenager, and frankly it still is to me, but if I
can reach the same ends by going to another university and paying
substantially less, then I say bring on the state schools. Let no
parent, GPA/SAT, or university define your life; you build the future.
As the days of winter fade, many students will dig out one or more of the college rankings publications they acquired last autumn and pore over them again, looking for “the revelation.” This consistent annual tradition was the topic of a recent commentary, “College Rankings Fail,” that appeared in the University of Maryland’s independent student newspaper. The student author, Marc Priester, took direct aim at college rankings as a whole. As he pointed out,
“Our current obsessions with prestige and rankings border on fetishism…. There is a sad waltz between college rankings and how we value education. It compels individuals to irrationally worship universities, leading to the foolish economic decision to attend exorbitantly priced colleges because of the ‘promise’ [: the promise of the upper middle class, the pipe-dream future we’ve been fed since before we could even spell ‘Harvard’].”
Mr. Priester further attributes blame to the media, with whom students and parents have become willing partners. While I would not use the term “fetishism”, I do credit Mr. Priester for his astute recognition of the authority that college rankings have come to command. And, although the remainder of Mr. Priester’s quote is consistent with the spirit of his message, I feel it diverts attention from the overarching point he was making: that college rankings are inherently misleading and as such can lead to poor decision making.
A case in point is the media frenzy initiated each year by the various college rankings publication releases, with the U.S. News and World Report Best Colleges issue being the most recognizable. College administrators and admissions officers criticize and debate U.S. News for attempting to do the impossible: determine unequivocally who is Number One, or Number Ten, or Number 75. Unfortunately, some students and parents miss these criticisms.
Kiplinger recently released its Best Values in Public Colleges for 2013 and the corresponding Best Values in Private Colleges for 2013. The publication claims that its methodology measures "value," but that term is just as subjective as the term "best" used by U.S. News. Each student has a unique system of values, which cannot be standardized.
There is one factor in Kiplinger’s ranking formula that could easily be misinterpreted. While an institution that graduates its students within the traditional four-year timeframe does save them tuition dollars, the reality is that numerous legitimate factors delay graduation beyond four years for many students. Georgia Tech, for instance, has a 4-year graduation rate of only 31 percent. What Kiplinger fails to note is that a significant portion of the Georgia Tech student body is enrolled in the co-op program, where full-time study and full-time placement at a paying internship occur in alternating semesters. The end result is graduation delayed into a fifth or even a sixth year, but with considerably more real-life experience than most programs offer. In this context, Georgia Tech’s 4-year graduation rate clearly misrepresents the quality of its overall academic experience.
The all-encompassing point being made here is that “value," in economic terms, is just one of the many dimensions of the college selection process. Where students choose to prepare for their future and how much their family is willing to pay for it is a complex, at times an intensely emotional, and let us not forget, singularly courageous decision.
Imagine if you will, a college rankings publication that, as a matter of policy, excludes a certain group of institutions that do not have the huge endowments, vast and far-reaching research programs, large enrollment, and renowned faculties. The rationale for this practice would be a suggested “fairness.” After all, how can this group of schools compete with the larger, wealthier institutions?
Continue imagining that suddenly this rankings publication reverses direction and decides to offer this group of institutions an opportunity to have their relative strengths acknowledged, evaluated, and, where warranted, given due recognition in the form of “stars” that are awarded in designations from one to five. The cost for participation is a one-time “audit fee” of just under $10,000 and an annual “licensing fee” of just under $7,000. In contrast, the higher ed heavy-hitters and their peer institutions are never assessed any charges. How might this affect your confidence in rankings compiled in this manner?
This is exactly what Quacquarelli Symonds, the London-based company behind the QS World University Rankings, is doing. The company is inviting schools with strong local reputations, but which have been excluded from some of the top international rankings sites, to pay for the privilege of seeing their name in print. As Mr. Guttenplan points out,
“Today the QS list of the “top 700 universities” in the world is read by millions of prospective students, parents, academics and university administrators.”
It has been well noted that being listed in any of the three big international rankings publications (the QS Rankings, the Times Higher Education top 400 and Shanghai Jiaotong University’s top 500) is quite a big deal. Just last October, a post on this blog referenced commentary by interviewer Mishal Husain on the Times Higher Education World University Rankings (interestingly, a former partner of QS) in which she made the following observation.
“So why do these rankings matter? Well, increasingly they influence the choices that students and academics make. Researchers need them to look for new global collaborations. Often, they are also built into a university’s strategic plan. And beyond the academic walls, the rankings play a vital role at government levels with universities trying to drive academic growth through knowledge, innovation and skill.”
To which Mr. Guttenplan recently added,
“QS’s influence can also be felt at the highest levels of policy…. Experts say that some governments will not fund students who wish to study overseas at universities not on the list — sometimes those not in the top 100…. In an attempt to work their way up the ladder, other countries have engaged in programs of consolidation, forcing smaller schools to group together in an effort to emulate the large U.S. and British research universities that repeatedly dominate the top tier.”
Finally, Ellen Hazelkorn, Director of Research at the Dublin Institute of Technology, stated,
“You have to ask yourself, ‘Why are all the institutions so caught up in this?’ For a country like Ireland, where education and establishing an international presence are hugely important to economic recovery, not being ranked makes you invisible.”
What students and parents must realize is that by buying their way into the QS Rankings, institutions are not enhancing the overall quality of the academic experience they offer students. They are purchasing recognition and a spot on the landscape of the international elite. Therefore, it is absolutely necessary to conduct a personal evaluation of the institution and how well it meets the student’s needs.