January 12
News You Can Use

First things first – a hearty Happy New Year to everyone and best wishes to all members of the Class of 2015 for confident management of their college admissions offers, a prudent final choice, and solid closure to their high school careers. The same sentiment is extended to the juniors who on January 1st became “rising seniors” and whose college search process will now begin shifting into a higher gear.

Some in either of these two groups (the juniors, especially) may feel this has come about with a discomforting sense of immediacy. I have always found that understandable, which is why I make a point of sharing my empathy with juniors who, up to this point in their lives, have been told to “eat this, go there, wear that, go to bed now, walk this way, talk this way (sorry, I couldn’t resist),” ad infinitum and, for many, ad nauseam. Yet upon becoming second-semester juniors, they are suddenly expected to know where they want to go to college, what to study and what career to pursue.

Years of experience in guidance have shown me that students’ (and parents’) feelings of urgency in this regard are often heightened further by the annual release of college rankings publications in mid-September. Since that time last fall, I have been cataloguing posts that appear on the Yahoo internet search portal concerning college characteristics, features and benefits. After sorting through such posts and weighing their content, I believe some contain information that can be factored rather safely into a student’s analysis of college data. The list of posts and articles deemed “Recommended” reading, along with their sources, follows. Remember, accepting their content in its entirety is not the preferred end result. Rather, identifying and using what is most informative is.

 

Recommended (in no particular order or ranking, of course)

Haynie, D. (2014, December 9). 10 colleges where applicants are least likely to get in. US News and World Report.
  • Comments: A general and brief overview of where competition is the toughest.
Goldman, L. (2014, November 18). The 10 most expensive colleges in America. Business Insider.
  • ​Comments: A frame of reference and perspective-building report.
Snider, S. (2014, December 2). 10 most expensive universities for out-of-state students. US News and World Report.
  • Comments: A frame of reference and perspective-building report.
Mitchell, T. (2015, January 2). 10 public schools with the lowest in-state tuition. US News and World Report.
  • Comments: A frame of reference and perspective-building report.

Bertrand, N. (2014, October 24). Average SAT score for every college major. Business Insider.

  • Comments: College Board-sourced data establishes credibility.

Martin, E. (2014, October 8). Best college majors for landing a job. Business Insider.

  • Comments: Express Employment Professionals-sourced data lends credibility to this report.

Nisen, M. (2014, October 1). These are the undergraduate schools that land you the best jobs. Quartz. 

  • Comments: LinkedIn-sourced data limits scope of report, but still provides one more bit of information to factor in.

Hefling, K. (2014, November 13). College Board: College prices continue to go up. AP.

  • Comments: Good, succinct overview.

Pérez-Peña, R. (2014, December 27). Colleges reinvent classes to keep more students in science. The New York Times.

  • Comments: Focus on a significant institutional trend.

Korn, M. (2014, December 30). Colleges’ new aid target: The middle class. The Wall Street Journal.

  • Comments: College Board-sourced data isolates trend in institutional aid.

Haynie, D. (2014, October 14). 10 colleges with the highest 4-year graduation rates. US News and World Report.

  • Comments: Highly abbreviated, but no less useful introduction to a critical consideration in evaluating colleges.

Stafford, K. (2014, November 8). Top 20 degrees with the highest starting salaries. Detroit Free Press.

  • Comments: The title is misleading, as the data parameters are localized and defined by Michigan State University's Collegiate Employment Research Institute, but the report still provides useful insights.

Zeveloff, J. (2014, December 16). Thousands of high school seniors are finding out whether they got into top colleges- here’s what we know so far. Business Insider.

  • Comments: Brief, but informative summary.

Nelson, L. (2014, October 16). Early admissions to colleges help kids who don't need it. Vox.

  • Comments: Commendably objective discussion of a perennially provocative topic.

Saney, L. (2014, December 11). Elite college admissions. New York Times.

  • Comments: Criticism of an NYT reporter’s article [Carey, K. (2014, November 29). For accomplished students, reaching a good college isn’t as hard as it seems. New York Times.] is validated by first-hand experience.

Weismann, J. (2014, December 1). Do 80 percent of top students really get into an elite college? MoneyBox.

  • Comments: An interesting and well-done expansion on the discussion started by Saney and Carey (above).

Bielby, R., Posselt, J.R., Jaquette, O., & Bastedo, M. (2014, October 2). Why are women underrepresented in elite colleges and universities? A non-linear decomposition analysis. Springer Science and Business Media.

  • Comments: Yes, very scholarly, but an excellently structured analysis of a compelling issue in higher education that has been under the radar for far too long (this writer’s opinion).

Adams, C. (2014, December 29). As college deadlines loom, seniors urged to keep perspective on fit over selectivity. Education Week’s blog, College Bound.

  • Comments: An abbreviated note, as a subscription is required for the complete text. However, it offers useful reminders and key links to articles on post-college earnings and on keeping an open mind about college options (see next).

Mathews, J. (2014, December 28). Those football powers may be good for you. Washington Post.

  • Comments: Despite the title, the focus is on keeping realism and priorities at the forefront of the college evaluation process.

Snider, S. (2014, September 15). Colleges and universities that claim to meet full financial need. US News and World Report.

  • Comments: Beyond a useful alphabetical listing of 50 schools that purport to meet full financial need, there are links to “Best Value” national universities, national liberal arts colleges, and regional universities and colleges, respectively.

Leonhardt, D. (2014, September 8). Behind ivy walls: top colleges that enroll rich, middle class and poor. New York Times.

  • Comments: Encouraging report (with excellent supporting graphic/chart) that heightens awareness of schools that have incorporated a commendable spin on affirmative action into their enrollment strategy.


One last thought before you go:

This past summer, Forbes magazine sent its ranking of the colleges it believed to be the best in the land to newsstands in July. Apparently it mattered little to U.S. News and World Report (USNWR) that this was nearly two months before its own Best Colleges issue would come out, or that within a week of Forbes’ publication flashpoint, Business Insider, Money magazine and Princeton Review would follow suit. USNWR did release its Best Colleges issue about a week earlier than usual in September, but, as it has demonstrated since, its effort to keep students’ and their families’ attention on what it has to say beyond the rankings per se has followed a pattern not previously seen. More specifically, of the fifty-seven posts I have gathered and catalogued in a separate file since September 2014, eighteen have a USNWR link as their source. That is almost one in three of all such posts.

January 09
The Department of Education's Proposal to Rate Institutions of Higher Education Continues to Stir Debate

Since the Department of Education released a draft framework last month for how it would “rate” colleges, many stakeholders have weighed in. The George Washington University hosted a discussion yesterday with Drs. Sandy Baum and Ron Ehrenberg, who offered many insights similar to NACAC’s views on how rankings or ratings schemes affect institutional behavior. Below is a summary of some of the major points discussed.

Dr. Ehrenberg argued that US News and World Report (USNWR) rankings have led to all sorts of perverse behaviors by institutions at all levels - a concern shared in NACAC’s 2011 report of the Ad Hoc Committee on USNWR rankings - and, moreover, that any rankings or ratings system is subject to similar problems as institutions try to “game the system.” Some of the perverse behaviors seen over the years include:

  • Using merit scholarships instead of need-based scholarships to “buy” top test-score students.
  • Encouraging students who have no chance of being admitted to apply, in order to drive acceptance rates down and selectivity up.
  • Expanding early admission (when yield was a variable in the rankings analyses), which had the effect of forcing many students to make decisions with relatively imperfect information earlier in their lives.
  • Increasing expenditures year after year so the institution will not fall in the rankings.
  • Presenting data in ways that are most favorable to the institution.
  • Focusing on first-time, full-time students admitted in the fall over transfer students and those admitted in the spring semester, because only first-time, full-time freshman students are included in the graduation rate calculation.
  • Focusing very little on value-added measures, or measures relating to social diversity.
  • Unintentional, and often intentional, falsification of the data sent to rankings publications. In fact, due to increasing reports of this behavior, USNWR has had to develop a procedure for penalizing institutions when it finds that data have been falsified.

Although the Department of Education has proposed improving existing measures to rate colleges, all of them remain imperfect, and the fact remains that whether an institution is right for a particular student depends upon the student, not the weights any ranking or ratings system uses. Furthermore, Dr. Ehrenberg argues, grouping institutions into rating categories instead of using ordinal rankings is still concerning, specifically for those institutions on the margins. Dr. Baum agreed that just about any system that tries to give one label to every institution falls subject to these problems.

Baum raised another important consideration specific to the Department of Education’s stated accountability purpose for rating colleges,

“There are schools that aren’t that great for anybody- that serve no students well… Why does the federal government give student aid to [these] institutions?... In a way I think this whole system is happening because we haven’t politically figured out how to push these schools out. And if we could do that, maybe we wouldn’t have to go to these complicated lengths.”

In a similar vein, NACAC, in comments to the Department of Education last year, suggested allowing time for program integrity regulations to clean up waste, fraud, and abuse before considering a move toward a ratings system for this purpose.

Dr. Baum also raised the question of whether, even if the Department hypothetically got the ratings system exactly "right," students would make better decisions. She argues that they would not, and that simply putting information out there is not enough to have the impact the Department hopes for. In fact, a lot of good information already out there has not solved this problem for the students who need it the most - many require individualized guidance on how to interpret and use such information. It is important to get better data out of the conversation and make it public, Baum argued, but a ratings system is not necessary to do that.

NACAC similarly questioned whether such a system would benefit students more than existing resources, such as the College Scorecard, and recommended that the Administration focus on providing consumer information to allow students, families, and counselors to make decisions about best fit institutions rather than pursuing a ratings strategy. 

Read more about NACAC’s concerns with the Department of Education’s ratings proposal here.


Dr. Ron Ehrenberg is the Irving M. Ives professor of industrial and labor relations and economics at Cornell University. He’s also the director of the Cornell Higher Education Research Institute. 

Dr. Sandy Baum is a senior fellow at the Urban Institute and a research professor in the Graduate School of Education and Human Development at The George Washington University.

December 18
Not a Question of Win Some, Lose Some – But Rather Should You Play At All (Part 2 of 2)

Ah, and now back to those teachable moments I promised in my post earlier this week:

One is for students who factor the opportunity to play an intercollegiate sport into their evaluation of a college or university. In such cases, a final decision to enroll should be made independent of whether or not the student will ever take an at-bat, shoot a jumper, take a snap or spring from a starting block. While great disappointment might ensue from a lost chance for athletic glory, as in the case of UAB, or if, heaven forbid, you do not make the team, the true reason and purpose for getting a college education will not be derailed by unforeseen circumstances.

The second is to take the spurious notion of “happiest students” and transform it into something more personal and productive by tailoring it to your particular needs. Keep in mind that Princeton Review’s ranking of colleges with the “happiest students” only lists them one through twenty. If none of the schools on your list are among them, might you wonder whether that means their students are unhappy, or where they might rank on the happiness continuum among the two thousand or so other colleges in the country?

For starters, dismiss the highly subjective and hopelessly vague use of “happiness” as a rating gradient. That is for Miss America contestants to define. Seriously though, what truly matters is whether your intuition or sensory apparatus picks up on any evidence of either a positive campus atmosphere or affirmative student interaction during a tour or extended visit. It is a pleasure to illustrate several approaches that may be taken to this investigatory process.

One is most easily implemented where you know an already enrolled student and are able to give advance notice that you will be visiting campus. If arrangements can be made to have a brief talk, it is possible to gain very useful insights through a few basic questions. For example, with former students I would ask some or all of the following:

  • “How are things going for you?” or “How has your experience been so far?”
  • “Please refresh my memory - where else did you apply for admission?”
  • “If you feel comfortable telling me, at which of those schools were you admitted?”
  • “What factor(s) was most influential in your decision to enroll here?”
  • “Did you have any second thoughts about your decision after you had been here for a while?”
  • “If you were able to make the choice all over again, would it be the same?” and,
  • “Do you have any advice or insight to someone thinking about attending here?”

Of course, the questions can be adjusted according to the situation and timeframe. And even though it can be very different if you venture to approach a stranger, once a wholly understandable hesitation is overcome, it can turn out to be quite comfortable and satisfying. After all, you are making mental notes on how people react to a campus visitor who is reaching out to them.  In my many experiences, pleasant exchanges have actually been the norm rather than the exception.

To illustrate, during a visit to Clemson, I noticed students belonging to an organization for business majors holding a bake sale just outside the doors to the union. I smiled, introduced myself, and explained that I was touring the campus in my role as a guidance counselor.  This prelude to your questions is a must in order to avoid (in the popular vernacular) “creeping someone out”. At any rate, I asked if they would answer a few questions in exchange for my buying some cookies and donuts. Their eagerness to make a sale was far exceeded by their eagerness to talk about their school. I still recall how, walking away nearly twenty minutes later, I thought, “What a happy bunch of kids.”

Similarly, during a quick pass through the University of Florida, I stopped for an early lunch in one of the campus cafeterias. It was 11 a.m. on a Sunday morning, so I was not surprised to see most of the tables empty. However, at one there were about ten students talking animatedly and laughing. After my requisite introduction was out of the way, they explained that they were working on a calculus problem set. They added that they preferred the cafeteria to the library because when someone came up with an answer, they could enjoy the moment without disturbing anyone. Even though I did not leave with cookies or donuts, I had some unquestionably good impressions of students at the flagship institution.

Lastly, when I visited Fordham, I joined students who were waiting in line for the cafeteria to open. They were equally happy to share their feelings about attending school in one of the most intensely urban and multi-cultural areas of New York. But of equal importance were the results of a slightly divergent approach I had taken to get there. More specifically, I went to several different places on campus and at each, asked for directions to the cafeteria. Everyone I approached broke from their purposeful stride (this was, after all, New York) and explained patiently how I could best reach my destination. In fact, the last student insisted that I follow him, as the way involved a number of turns.

It is unlikely that anyone would regard the foregoing interactions as a “scientific” way of finding out if a campus has happy students. However, the outcomes, in my humble opinion, are at least as valid in seeing certain aspects of a given campus environment as the students themselves see them. Obviously, such episodes are more difficult to have within the context of a structured campus tour. Though such tours are led by an enthusiastic guide very skilled at walking backwards while reciting facts and figures, that sentient experience is, unfortunately, absent. Nevertheless, if given “free time” to visit the bookstore or the opportunity to enjoy lunch on your own, keep the questions outlined earlier in mind and “engage” the campus on your own terms. In other words, “Make it your number one.” You will find yourself all the happier for having done so.


Reference: Alter, M. and Reback, R. (2014). True for your school? How changing reputations alter demand for selective U.S. colleges. Educational Evaluation and Policy Analysis, 36, 346-370.



Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.
December 16
Not a Question of Win Some, Lose Some – But Rather Should You Play At All (Part 1)

Because the focus of this blog is customarily on some aspect of college rankings, some may at first see a discussion of a Division I football program as an unrelated topic. However, at the risk of it being regarded as such, what follows will show that the connection is neither misplaced nor a stretch.

Two weeks ago, the University of Alabama at Birmingham (UAB) revealed that it was closing down its football program. The stark headline in the December 10, 2014 edition of The Washington Post read, “An Alabama University Drops Football - Hard numbers challenge the national hysteria over sport.”

In announcing that the decision was made after a campus-wide study conducted by a consulting firm over the past year, University President Ray Watts explained,

"The fiscal realities we face -- both from an operating and a capital investment standpoint -- are starker than ever and demand that we take decisive action for the greater good of the athletic department and UAB… As we look at the evolving landscape of NCAA football, we see expenses only continuing to increase. When considering a model that best protects the financial future and prominence of the athletic department, football is simply not sustainable,” (ESPN.com news services December 3, 2014).

It would not be difficult to imagine how widespread the pain associated with such action is across the UAB campus. It is probably most acute for team members, some of whom have issued public protests stating that they were drawn to the school by the chance to play football and must now find a way to finance their education. Given the “fiscal realities” to which President Watts alluded - UAB projected losses of nearly $49 million to subsidize football for the next few years alone - it is highly unlikely that the university would honor athletic scholarships for a program that no longer exists.

However, this may only be the tip of the iceberg.  The preceding post to this blog centered on the vagaries and patent unreliability of rankings due to inconsistent, inaccurate or truly unquantifiable evaluative criteria. This in turn makes the potential impact of what has happened at UAB, at least with regard to its place in national rankings, even more problematic.

To expand on this, we refer once again to previously cited research done by Molly Alter and Randall Reback in “True for your school? How changing reputations alter demand for selective U.S. colleges” (2014). Their findings indicate that poor academic and quality-of-life reputations of a college, as purportedly shown in the “Happiest Students” and “Best Quality of Life” rankings by The Princeton Review, negatively affect not only the number of applications received by the institution but also the academic competitiveness of its incoming class.

That the previous UAB administration acknowledged this linkage was reflected in a 2009 feature article on how Alabama universities fared in that year’s Princeton Review rankings. Stan Diel, writing for AL.com, reported, 

 “The University of Alabama at Birmingham is one of the nation's most racially inclusive colleges, and has some of the happiest students anywhere, according to the annual Princeton Review survey of students nationwide.

The survey, perhaps most well-known for picking the nation's biggest party school each year, ranked UAB No. 3 nationally for interaction between the races and social classes. It ranked 11th nationally for "happiest students."

UAB President Carol Garrison said the survey's results are widely used by parents and prospective students, and the inclusiveness ranking will help the school recruit students who might have dated perceptions about Birmingham.

"It says this is an institution where any student can come and feel comfortable," she said.

The high ranking in the happiness category likely was, in part, a result of efforts to improve the campus, including the recent addition of a campus green and a recreation center, she said.

"We're happy that they're happy," Garrison said of the students.”

But, as the saying goes, “that was then and this is now.” Translation? One can only hope that the student body at UAB continues to be happy for all the right reasons and that the absence of football will not result in venting on the “quality of student life” or “student happiness” questions of a Princeton Review survey they may be asked to complete. It is further hoped that then-President Garrison greatly overestimated the weight prospective students and their families give to any such ranking constructs. In a perfect world, these two wishes would hold true. Unfortunately, this is not a perfect world.

Before bringing closure to this discussion, there are two teachable moments to be offered in Part 2 of this post- stay tuned! 


Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.
December 01
The Games Continue, With No End in Sight

As the holiday season gets into full swing, I hope that most, if not all, seniors are preparing to bring closure to their college application efforts and that a good number of them have already received positive results from timely action on rolling admissions programs. I further hope that juniors, and especially the ones with whom I am currently working, are beginning to formulate preliminary lists of colleges in which they may have an interest.

This latter point serves as an introduction to our topic for this blog post – two questions which, due to their contiguous nature, are well-suited to a blended response. They are,


“When a college promotes that it is ‘nationally ranked’, how should students and their families interpret that?” and,


“What is your baseline advice for students and families about rankings?”


An initial response that may appear too fast and easy, but which will be developed more fully throughout this text, would encourage students and families to view a “national ranking” as nothing more than a publisher’s formulaic conclusion. It is based solely on a narrowly drawn methodology that cannot possibly take into account infinite variances in the needs of the entire college-bound population. 

Moreover, there is significant potential for confusion in the fact that there seems to be no limit to the number of publishing enterprises claiming to be an “authority” on higher education – a circumstance The Princeton Review takes to the extreme, ranking colleges in no fewer than 62 categories, including financial aid and campus food.

This state of affairs was the subject of an article by Kevin Carey of The New York Times, who, in “Building a Better College Ranking System. Wait, Babson Beats Harvard?” (July 28, 2014), wrote,

“For a long time, U.S. News & World Report had a monopoly on the college rankings game. Every August, the magazine would announce that, once again, Harvard was America’s best college, or Princeton, or, to shake things up, a tie between Harvard and Princeton. But in recent years, there has been a profusion of rankings competitors, each with a different perspective on what “best colleges” really means.”

In spite of this, U.S. News and World Report (USNWR) appears to be holding on as the ranking with which colleges most prefer to be associated. Support for this can be found in what was appended to an e-mail sent out on the National Association for College Admission Counseling (NACAC) listserv last week. Appearing prominently below the sender’s name and contact information was the U.S. News and World Report “Best Colleges” logo and the statement,

“USC Aiken is AGAIN ranked #1 Public College in the South by U.S. News and World Report. To find out more, visit www.uscavisit.com.”

This may be seen as another reminder that, as much as many counseling and admissions professionals wish college rankings would fade away, higher education’s desire to be connected to and included among them seems to be gaining strength. 

But, before proceeding, my oft-repeated disclaimer remains - that it is not the position of this writer to dispute the right and good business sense of any enterprise calling widespread attention to favorable critical review. However, as also noted in previous postings, we are not talking about espresso machines or lawnmowers in this blog. We are talking about the nature, purpose and efficacy of life-shaping, life-enriching and life-determining entities that are our nation’s colleges and universities. Therefore the orientation, lexicon of analysis and import must be grounded in humanistic considerations rather than data points and ordinal rankings. 

Imagine then, if you will, the possible implications of colleges incorporating their respective ranking throughout their marketing outreach. For those just beginning their college search, seeing both the USNWR logo and promotion of the school’s ranking (as referenced above), might well give rise to several assumptions, none of which would be wholly correct. To list a few, the student might:

  • Completely overlook or misunderstand the sub-category designation, and assume the school is among the best, even beyond the region/category to which it is assigned;
  • Conclude that the high rank verifies program excellence throughout the institution’s curriculum;
  • Interpret the high rank as indicative of the quality of life, the level of instruction, the campus setting and resources, the financial aid program and post-graduate placement; and/or
  • Assume the college is more selective than other similar institutions and thereby more desirable.

Compounding such well-intentioned but no less mistaken thinking are factors that weigh heavily against the few certainties in college rankings as a whole, but which are nonetheless used to cement their accuracy. One prime example is the suggested “selectivity quotient” of a particular institution and the factors that can affect its ranking and image.

Institutional selectivity accounts for 12.5 percent of the ranking methodology used by U.S. News and World Report.  It is based on the acceptance rate, or the ratio of students admitted to the total number of applicants. The vulnerability, if you will, of this ratio to any number of influences was once the subject of a conference lunch discussion among several highly experienced colleagues of mine. Their observations are summarized as follows:

  • “Has anyone else heard that (highly selective, Midwestern university) has significantly increased the size of its applicant pool by buying more names from College Board and ACT in order to generate more applications, but with no plans to increase the number of acceptances? As I understand it, it is part of a strategy to make themselves appear even more selective and thereby strengthen their ranking.” Which prompted someone to add,
  • “We (and others) achieved somewhat of the same by joining the Common Application. Streamlining the process made it much easier for students to send out more applications and we saw a sizeable jump in our numbers right away. We too decided not to increase the number of accepted students – at least for the time being.” This prompted a third party to reflect,
  • “Things sure have changed since Hillary Clinton’s joining her husband Bill in the White House was said to have produced twenty percent more applications for Wellesley. It got more selective in one cycle. You still get that same benefit if your football team wins a national championship or your basketball team reaches the Final Four, but how many schools can do that on a regular basis?”
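To put rough numbers to the acceptance-rate arithmetic these colleagues were describing, here is a minimal illustrative sketch in Python; every figure in it is hypothetical and refers to no real institution. It simply shows that holding the number of admits constant while the applicant pool grows makes a school look markedly more selective, even though nothing about the school itself has changed.

```python
# Illustrative arithmetic only: all figures below are hypothetical and refer to no real institution.

def acceptance_rate(admitted: int, applicants: int) -> float:
    """Acceptance rate as described above: students admitted divided by total applicants."""
    return admitted / applicants

# Same number of admits, larger applicant pool (e.g., after buying more names or
# joining the Common Application): the published rate drops and the school
# appears more selective, although nothing about the institution has changed.
before = acceptance_rate(admitted=5_000, applicants=20_000)   # 25.0%
after = acceptance_rate(admitted=5_000, applicants=28_000)    # roughly 17.9%

print(f"Acceptance rate before: {before:.1%}, after: {after:.1%}")
```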

A corresponding view of the manipulation and maneuvering related to a critical component of ranking methodology, Princeton Review’s in particular, was presented by Molly Alter and Randall Reback in, “True for your school? How changing reputations alter demand for selective U.S. colleges.” Educational Evaluation and Policy Analysis, 36, 346-370 (2014). As they reported,

“Our findings suggest that changes in academic and quality-of-life reputations affect the number of applications received by a college and the academic competitiveness and geographic diversity of the ensuing incoming freshman class. Colleges receive fewer applications when peer universities earn high academic ratings. However, unfavorable quality-of-life ratings for peers are followed by decreases in the college’s own application pool and the academic competitiveness of its incoming class. This suggests that potential applicants often begin their search process by shopping for groups of colleges where non-pecuniary benefits may be relatively high.”

In the final analysis, irrespective of the questionable legitimacy of academic ratings, the work of Ms. Alter and Mr. Reback leads them to conclude that, in some instances, those ratings can be trumped by something as nebulous as unfavorable “quality of life” marks. More unfortunately, the net effect is inaccurate and unreliable information for students and families, which continues to be troublesome and problematic for many guidance counselors as well.

In summary, the foregoing was intended to illustrate some of the more unsettling aspects of college rankings. They start with, but are certainly not limited to, misunderstandings stemming from being highlighted in college and university image initiatives, and extend to their inherent volatility and unreliability in the area of institutional selectivity. 

With this being the case, the most prudent baseline advice to students and their families would be to proceed with caution and understand college rankings of any sort and from any source for what they are – one tool among many to incorporate into their college search and evaluation efforts. And, just as with a hammer, being aware of its limitations along with proper, judicious and appropriate use minimizes the risk of mishaps.

November 19
Looking Beyond Methodologies: What is the Role of Firms in Driving Global Rankings?

A critical moment in the history of global rankings of universities occurred in 2003, when Shanghai Jiao Tong University in China developed the first global ranking, the Academic Ranking of World Universities (ARWU). Since then, the landscape of global rankings has continued to grow. In the paper “World university rankings: On the new arts of governing (quality),” authors Susan Robertson (University of Bristol) and Kris Olds (University of Wisconsin-Madison) discuss different explanations for the development and growing influence of global rankings. These explanations range from viewing rankings as accountability measures, to viewing them as part of international status competition, to viewing them as providers of a new service industry.

This post will focus more specifically on a central tenet raised by Robertson and Olds, that while there is much discussion around different global ranking methodologies, ”the role of firms, such as Elsevier and Thomson Reuters… in fueling the global rankings phenomenon, has received remarkably little attention.” The authors argue that current explanations regarding the influence of global rankings are missing a critical piece, “one that places many players driving the process on centre stage, with their interests in full view.” Let’s take a look at this issue more closely.

As discussed in previous posts in NACAC’s Counselor’s Corner, global ranking methodologies tend to focus predominantly on research publications, citations, and reputational scores as indicators of “quality,” a significant shift compared to rankings in the U.S., which rely more on publicly available data on indicators such as completion rates and class sizes. Both companies that Robertson and Olds highlight - Thomson Reuters and Elsevier - house databases (such as Web of Science, Scopus, and more recently, the Global Institutional Profiles Project) that provide a huge portion of the data used in many global ranking formulas (see the table in NACAC’s previous blog post that shows where such data is used within the QS World, Times Higher Education World, and U.S. News & World Report global university rankings). Publishers pay for this data to create the rankings and, in turn, to sell newspapers. However, Robertson and Olds also point out that the same data can “feed into the development of ancillary services and benchmarking capabilities that can be sold back to universities” for the kinds of knowledge they think they need in order to go up in the rankings. And we have already seen instances in which some countries place enormous weight on the rankings, including those that give more resources to “top rated” universities.

Another point the authors make regarding the influence of firms in the spread of global rankings is,

“One of the interesting aspects of the involvement of these firms with the rankings phenomenon is that they have helped to create a normalized expectation that rankings happen once per year, even though there is no clear (and certainly not stated) logic for such a frequency.” 

Indeed, it is very unusual for the top rankings to change significantly from year to year - let alone over shorter periods - without a shift in methodology. So beyond producing an “informative” ranking, what is the purpose behind firms collecting this data from institutions every year? Furthermore, what are the profits generated each year for firms and publishers? And beyond the publishers of the rankings and the firms selling the data, the authors also point out that we must consider “the role that universities themselves play in enabling rankings to not only continue, but expand in ambition and depth.”

Undoubtedly, placing the industries and people that fuel the rankings, as well as their interests and profits, in full view would provide a more complete picture of the global rankings industry, as Robertson and Olds have argued. Indeed, the comparisons made by global rankings are fueled by many players and contexts that are important to consider along with other explanations.

November 19
As Crazy As We Will Allow it to Get

​“What is the craziest ranking you have ever seen?” 

This question, recently submitted to the Counselor’s Corner, provoked several interesting lines of thought on formulating an answer. The first brought the following quote to mind:

“People are desperate to measure something, so they seize on the wrong things,” said Mark Edmundson, a professor of English at the University of Virginia, commenting on PayScale’s rankings. “I’m not against people making a living or prospering. But if the objective of an education is to ‘know yourself,’ it’s going to be hard to measure that.”

Mr. Edmundson’s observation was on yet another “return on investment (ROI)” study. But, more importantly for this blog post, he was calling attention to how, with college rankings in particular, the principal goal of finding the best possible match between student and institution can be obscured by erroneous evaluative criteria and misplaced emphases.

So it stands to reason that Mr. Edmundson’s initial point fits in well with a discussion of the “craziness” that can surface in the realm of college rankings. And, within this context, it is no stretch to freely associate “craziness” with several synonyms offered by Merriam-Webster; i.e., “impractical,” “absurd” and “nonsensical,” to list but a few. For good measure, the term “irresponsible” will be included as a consequence, unintended or not, of this type of ranking.

Before proceeding however, it should be carefully noted that the purpose of this present blog post is not to highlight or in any way endorse the existence of such rankings. To the contrary, it is hoped that raising awareness that they are out there will help minimize the potential for their misuse or misinterpretation. 

That being said, the discussion will begin with a partial list of some of the craziest college rankings I have seen. They are:

“The Best Party Schools”

“Schools with the Most Beer Drinkers”

“Schools with the Most Hard Liquor Drinkers,” and

“Schools with the Most Potheads”

The rankings were put on the website of an outfit named CollegeAtlas.org, which basically lifted the results of surveys conducted by Princeton Review. Whether this was a veiled attempt at gaining some legitimacy for the feature is unclear but, for good measure, Atlas also included the category rankings for 2014, 2013 and 2012, respectively, adding that,

“The 5 top party schools in the US from 2014 stayed the same, however their orders were rearranged, with the exception of West Virginia University, which stayed in fourth place. Syracuse University went from fifth place right to the top, showing the most change in the top 5. There are also 2 new schools added this year: 9th place, Bucknell University (Lewisburg, PA); and 20th place, University of Delaware (Newark, DE). These two schools replaced: 15th place, University of Texas at Austin (Austin, TX); and 17th place, University of Maryland (College Park, MD). Bucknell University didn’t waste any party time as they jumped into the top 10, especially since this is the first time they have shown up in the previous 3 years!”

At first impression I thought, “How nice of them (Atlas/Princeton Review) to keep us current on which schools are trending in this critical area of campus life … I will bet that Texas and Maryland are ratcheting up their toga party schedules after this.”

If the sarcasm appears a bit extreme, one need only consider the irony of Atlas/Princeton Review’s mention of West Virginia retaining its fourth-place rank among party schools, which was the flashpoint for my view. Moreover, the most recent suspected alcohol-related death of a student occurred at a fraternity house at West Virginia University. Further, only those who have been living in a hermetically sealed chamber for the past ten years or so would be surprised by the findings of the National Institute on Alcohol Abuse and Alcoholism. To quote from the Institute’s website:

“Virtually all college students experience the effects of college drinking – whether they drink or not... The problem with college drinking is not necessarily the drinking itself, but the negative consequences that result from excessive drinking.

College drinking problems

College drinking is extremely widespread:

  • ​About four out of five college students drink alcohol.
  • About half of college students who drink, also consume alcohol through binge drinking.

Each year, drinking affects college students, as well as college communities, and families. The consequences of drinking include:

  • Death: 1,825 college students between the ages of 18 and 24 die each year from alcohol-related unintentional injuries.
  • Assault: More than 690,000 students between the ages of 18 and 24 are assaulted by another student who has been drinking.
  • Sexual Abuse: More than 97,000 students between the ages of 18 and 24 are victims of alcohol-related sexual assault or date rape.
  • Injury: 599,000 students between the ages of 18 and 24 receive unintentional injuries while under the influence of alcohol.
  • Academic Problems: About 25 percent of college students report academic consequences of their drinking including missing class, falling behind, doing poorly on exams or papers, and receiving lower grades overall.
  • Health Problems/Suicide Attempts: More than 150,000 students develop an alcohol-related health problem and between 1.2 and 1.5 percent of students indicate that they tried to commit suicide within the past year due to drinking or drug use.”

Source: NIAAA, November 2014

While I did not originally intend to introduce the foregoing statistics as part of this discussion, labeling the work of a particular publishing enterprise “crazy” or “irresponsible” should, in fairness, have a factual basis. To be sure, the data reported are not only factual but, in my humble opinion, frighteningly so. That is why anything that appears to sanction or condone such destructive behavior, and the places where it appears to flourish (and we never even got to the pothead issue), leaves itself open to judgment.

All right now – are you ready for the “flipside”? To be more specific, because this blog has always sought to provide teachable moments for students and families, it may be possible, the foregoing criticism notwithstanding, that there is a positive use for the subject rankings featured herein.

Imagine, if you will, a best-case scenario in which a student has one or more of the “best party schools” on her list of preferred colleges. Her parents, aware of this and of the school’s notoriety (according to Atlas/Princeton Review), incorporate this knowledge into a list of questions they will take on their campus visits. At a question-and-answer session where various campus officials sit as a panel, the parents present the following sample queries:

What is the institutional policy regarding the possession and consumption of alcoholic beverages by students living in the residence halls?

Who oversees the administration and implementation of this policy?

Does this policy extend to off-campus but university-affiliated student residences, most particularly, Greek houses? 

What is the protocol for dealing with a student who has been involved in a disciplinary, alcohol-related incident?

How are determinations made on whether a student has an alcohol-management problem?

What support services are available to students identified as having an alcohol-management problem? How are such services implemented, by whom and for how long?

This list is neither exhaustive nor fixed; it can be shaped according to parents’ individual concerns. It would also not be surprising if members of the panel felt a bit put on the spot. But when it comes to the health, the welfare and, yes, the safety of the young person parents are handing over to be educated through a multi-dimensional approach to life-coping skills, they do not want their “return on investment” to be measured in empty alcohol bottles.



Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.

November 13
Citations, Citations, and More Citations: A Closer Look at Global Rankings' Methodologies

As mentioned in our previous post, global ranking methodologies tend to focus primarily on research publications, citations, reputational scores, and international influence as indicators of "quality" - a significant difference from domestic ranking methodologies (which tend to focus more on indicators such as graduation rates, cost, and earnings, to name a few). To take a closer look at this subject, we put the methodologies of three prominent global rankings - QS World, Times Higher Education World, and U.S. News & World Report Best Global Universities - side by side so you can see how each publication factors in publications, citations, reputation surveys, and more.

[Image: side-by-side comparison table of the QS World, Times Higher Education World, and U.S. News & World Report Best Global Universities ranking methodologies]

November 10
U.S. News and World Report Releases Rankings of “Best Global Universities”

Last month, U.S. News and World Report (USNWR) released its inaugural “Best Global Universities" rankings. In previously announcing the company’s plans, USNWR chief data strategist Bob Morse cited the fact that “an increasing number of students plan to enroll in universities outside of their own country” as a motivation. A 2013 report by the Institute of International Education, New Frontiers: U.S. Students Pursuing Degrees Abroad, provides pertinent data and analysis about this trend. Specifically, the report found that from 2010/2011 to 2011/2012, the number of U.S. students pursuing full degrees abroad increased from 44,403 to 46,571 (a 5% increase). The top host countries for U.S. students pursuing full degrees abroad were the United Kingdom (36%), Canada (20%), and France (10%).

The new global rankings differ from USNWR’s domestic lists in being focused entirely on research and reputational measures. As the Washington Post reported, “Factors such as undergraduate admissions selectivity and graduation rates… are absent from the global formula.” Instead, research reputations and citations are featured heavily (which tends to be the case with similar publications that attempt to rank universities globally).

For example, two of the ten indicators for the USNWR global rankings are research reputation scores - a global research reputation indicator (12.5%) and a regional research reputation indicator (12.5%) - both of which are compiled using Thomson Reuters’ Academic Reputation Survey. Another 12.5% of a university’s ranking comes from the “number of highly cited papers.” Click here to see the complete list of indicators used and the full methodology behind the new rankings.
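As a rough sketch of how weighted indicators such as these can combine into a single composite score, consider the toy calculation below. Only the three 12.5 percent weights are taken from the paragraph above; the lumped remainder and all of the scores are hypothetical placeholders, not USNWR's actual indicators, data or formula.

```python
# Toy example of a weighted-indicator composite score. Only the three 12.5% weights
# come from the post; the lumped remainder and every score are hypothetical placeholders.

weights = {
    "global_research_reputation": 0.125,
    "regional_research_reputation": 0.125,
    "highly_cited_papers": 0.125,
    "remaining_indicators": 0.625,   # stand-in for the other seven indicators, not listed here
}

scores = {                           # hypothetical normalized scores on a 0-100 scale
    "global_research_reputation": 72.0,
    "regional_research_reputation": 80.0,
    "highly_cited_papers": 65.0,
    "remaining_indicators": 70.0,
}

composite = sum(weights[name] * scores[name] for name in weights)
print(f"Composite score: {composite:.1f}")   # universities would then be ordered by this number
```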

Last week, USNWR also released a list of “Best Arab Region Universities,” which uses a different methodology and different sources of data - these rankings focus even more exclusively on citations and publications.

See here for more NACAC resources on rankings. And, in 2015 look to the NACAC Journal of College Admission for analysis and insight in the forthcoming article, Rankings Go Global.
November 06
Changing Students' and Families' Mindset about Rankings

A recent question submitted to the Counselor’s Corner requests a brief discussion of “what many counselors would like to see change with the present rankings state of affairs” - a “wish list,” as some would put it. For this, we will first look at a synthesis of the comments offered by more than six hundred guidance professionals who responded to the previously cited NACAC survey.

An overview based on the compiled results included:

  • ​That USNWR (and most if not all college rankings enterprises) develop methodologies for measuring the value-added considerations of graduation and retention,
  • That the metrics used provide reasonably accurate indices of student engagement and satisfaction with their undergraduate experience,
  • That a wholesale revision of rankings methodologies would encourage students and their families to evaluate and interpret input, process and output variables according to their own needs, thereby enabling them to generate their own ranking scale.
These three points presume that the rankings are here to stay and that, therefore, we must find a way to live with them. However, there is ample evidence that, if the power to make rankings “go away forever" were among the items on the guidance counselor wish list, it would be no surprise to see it exercised.

To expand on this a bit, eliminating the need to undo or unravel the misunderstandings created by rankings would be a welcome change for guidance professionals. More specifically, it would enhance their efforts to help students and families focus first on the characteristics that define a quality program of study along with those which suggest a high quality of life at a given college or university. Shifting the emphasis toward how well post-secondary study will prepare a student to make a living and a life within a setting that nurtures and stimulates personal growth would transform college counseling sessions into something far more personal, meaningful and efficacious.

In the final analysis, a positive change in the college rankings state of affairs is probably not in the hands of the publishers but rather in the minds of students and their families. Understanding and accepting the fact that data on teaching quality, character-developing student involvement and effective avenues to job and career readiness are not amenable to any current publication-ready format is an essential mindset.

Further, it must be firmly realized that, one, rankings are subjective, limited-use tools that largely reflect what the editors, not the student, feel is important, and, two, that the most valuable resource in the college search and selection process sits in the guidance counselor’s office. There, students have a trained, knowledgeable advocate who keeps their best interests at heart. Most importantly, the counselor’s primary goal is to facilitate their personal empowerment through the acquisition of strong decision-making skills. Gaining mastery of these skills will enable students and their families to weather the storm and emerge into the sunlight.


Joe Prieto, a former college admissions officer, guidance counselor and past member of the Illinois and NACAC Admissions Practices Committees, has been a contributing writer to the NACAC Counselor's Corner since its inception in 2012.​

 #CollegeRankings