@privatebanker I just saw your comment, and agree 100%
@privatebanker Oh that’s so funny! I knew we’d come to a meeting of the minds!
Does anyone actually disagree that it is flawed, except insofar as it tries to measure "conventional wisdom" about college prestige and desirability (not necessarily whether a college is better for you)? If no one here disagrees, there may not be much writing on it.
@waitingmomla : I suspect that the GC/peer component’s purpose is indeed to add a human element that can override the trickery that goes on with the scores. There is at least the small hope that a few respondents use logic and factor in elements beyond the numerical components that feed selectivity, or beyond mass-marketing efforts (the kind of marketing that says: "Look, we are building lots of stuff, and our selectivity just increased. Please give us a boost in reputational ratings."). Basically it serves as a possible correction for places that pump those numbers up to the extreme while their programs do not actually serve that caliber of students as well as some higher-caliber undergraduate programs with similar or lower scores. Just maybe someone (especially an administrator at a peer school or elsewhere) would look and say: "Oh, I notice that X’s already high score ranges are quickly rising to the point that they rival Y’s. However, I know, or at least strongly suspect, that the caliber of programming for students at X is still unlikely to compare that well to Y’s." They may also be able to make a judgment call when Harvard and "non-Harvard" schools have the same scores. The Harvard administrators may think: "Yeah, but we notice that we are yielding lots more students with nationally or internationally award-winning talent, in areas that go far beyond what test scores can measure, than these other schools that appear just as selective numerically."
So yes, those components are problematic, since adding a more human element introduces bias and laziness, but it could also bring common sense and balance when needed. It may serve as a check on the incentive to primarily play games in the admissions office while staying relatively stagnant, compared to higher-caliber schools, in terms of program strengths. It may provide an incentive to make program changes/enhancements more visible to other schools and higher-ed entities, to the point where you give someone a reason to respect a school beyond its ability to attract more high scorers (this may help a school that has been pumping up its scores but is still not really "trusted" as being of similar caliber to schools that have already been in that range; conversely, it protects schools known to have stronger programs than the stats-whoring schools but which choose not to prioritize scores as much). As hard as it is, it is possible to market the strength of new or enhanced undergraduate programs so that peer schools watch and pay attention to what is going on (they’ll look at the program’s structure and potential impact). The thing is, what happens when the stats, especially at the most selective schools, basically converge to the point of irrelevant differences (they are almost there, IMHO)?
It would make no sense to weight scores so heavily as the means of discriminating among them. It has been shown that schools within a certain prestige/caliber band can suddenly change their admissions scheme to emphasize scores. At that point, I guess endowment would have to be weighted more heavily. I really don’t know. In general, I don’t care for this whole ranking business. I kind of just throw certain schools into vague academic tiers in terms of caliber and then start discriminating based on what students may be interested in socially, or in terms of programmatic interests, within or between the tiers. I’m not interested in the rat race that many schools are engaged in, or in creating a set of metrics that lets the admissions schemes at some schools raise the rankings while others’ do not. I see too many people agonizing over quite subtle differences in rankings and then arguing based only upon incoming stats when nothing else suggests a meaningful difference in the caliber of the schools.
It just seems it is this way to prevent a wildly intensified version of some of the admissions tactics we have been seeing over the past decade or so among schools that either want to enter more elite brackets, or are already elite and just want to hurry up and raise their rank even more before doing much of anything differently from other highly ranked schools.
@ucbalumnus True. But if the reality is that it’s not really even the deans/presidents filling out the surveys and such a small % of GCs return the surveys – is it even a realistic measure of conventional wisdom? If it’s not the right people submitting the information and/or such a small response rate, I honestly wonder. But maybe you’re right and most people do already write that piece off as being of little value.
@bernie12 Thanks for that well thought-out answer. I agree with much of what you said. And if the right people are filling out the surveys and are truly educated in what they are commenting on, then yes, in theory that would be a good check on all the numerical components. My question is whether that’s actually the case; based on what I’ve read, I don’t know. And if it’s not the appropriate people, or it’s people who are insufficiently informed, then I’m not sure how much value the check has.
@ucbalumnus : Didn’t notice you addressed me. Thank you for identifying yet another metric that interacts with the selectivity metrics!
And I notice people on here are listing and concerning themselves with the very top schools and still saying things like: "Increased selectivity influences the curriculum and level at which things are taught." I feel as if I have already addressed the randomness within the selective bracket of schools on this point in STEM, which is typically known for attracting the highest scorers at any undergraduate institution. You can see wild differences between selective schools (some consistent across nearly all departments, some on a department-by-department basis) that could not be predicted by looking at incoming stats. How many would predict that many STEM courses (for majors) at Berkeley and Michigan (or even some privates, for that matter) are at a different level than at some of the private schools that rank above them and have higher incoming stats? Some of the selective privates and publics clearly put more effort into their undergraduate curricula than others, and that has to do with departmental and institutional culture, because most of these trends preceded their current levels of selectivity.

I wish folks would just let this concept go. Just because faculty "may" be able to teach at higher levels because of higher selectivity (and this probably has a threshold anyway) based on ACT/SAT (which, as Data10 highlighted, are easy in comparison to rigorous or even medium courses at most selective colleges. Yes, I am being shady by saying "most") does not mean they absolutely will, especially research faculty whose praise and pay hardly come from teaching quality. Nor does it mean the higher-scoring students at one selective school will be as receptive to a certain type of academic rigor as those at another. We need to leave fantasy land behind sometimes, especially when looking at places with strikingly similar levels of selectivity in the first place. Other differences come from resources and culture.
@waitingmomla : I am certainly less inclined to trust GCs. They may be more likely to be influenced by admissions trends they see among schools than a college administrator. So yeah, I get the problem with it.
How about a metric asking employers and hiring personnel how they perceive certain schools? You might get a very different picture.
@gallentjill : That would be good, assuming the schools/alumni have good representation among certain employers. For example, WS will have bias from the get-go: it may auto-saturate with grads from you-know-where and also have hiring personnel who are alumni of said schools. Assuming large enough samples, or limiting it to, say, high-paying employers with high representation of certain graduates, it would be interesting to see how they perceive the performance (one would think this would be reflected in hiring practices, but unfortunately, in some prestige-oriented and prestige-driven fields, it may not be. Again, the WS example, among several others: they have inherent preferences that have lasted and have little reason to truly consider others as much), or whether they have data on it. Looking at things like professional schools would be nice too (say, for medical schools where certain alumni have high representation, maybe look at residency placement versus those coming from another school with similar representation. However, the Step 1 score can heavily influence that, and some are better at MC tests than others). Basically you are proposing something outcomes-oriented. I am for this, as it would encourage more schools to seriously work on strengthening and modernizing their programs and resources, and not merely the inputs, in hopes of basically harboring and entertaining a bunch of smart folks for four years while banking on them all being ready because they came in amazing and did cool things while attending. I don’t believe college is just for a great job, but some sort of accountability beyond graduating students at high rates with decent grades could be nice.
Probably close to conventional wisdom in terms of what colleges high school students and their influencers (counselors, teachers, parents) find to be prestigious and desirable.
To the extent that college prestige actually matters as a treatment effect of attending a college, this may be more relevant to the actual value of a college choice than high school counselors’ opinions, which relate more to conventional wisdom (which is not necessarily accurate). However, there may be confounding effects, such as the large effect that one’s major has on some types of job prospects, and how college-prestige-conscious hiring is.
Obviously, this may be different from a ranking that tries to match conventional wisdom of college prestige.
Lots of “fun with numbers” in this thread. BTW, an above post forgot that there is overlap between ACT and SAT test takers, so the number of high scorers is not the sum of the high ACT scorers plus the high SAT scorers. Another factor: a student’s scores can vary from session to session, within some range of points. There are so many ways to manipulate numbers/statistics. And a perfect score is never a guarantee of admission, just bragging rights about one’s ancient past (i.e., HS, once one is in college).
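To make the overlap point concrete, here is a tiny sketch with made-up counts (none of these figures come from ACT or College Board data); the only point is that the combined total has to subtract the students counted in both groups.

```python
# Toy inclusion-exclusion illustration with hypothetical counts.
high_act = 35_000    # hypothetical: students with a high ACT score
high_sat = 50_000    # hypothetical: students with a high SAT score
took_both = 15_000   # hypothetical: high scorers who took both tests

# Distinct high scorers: |ACT or SAT| = |ACT| + |SAT| - |both|
distinct = high_act + high_sat - took_both
print(distinct)  # 70000, not the naive 85000
```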
Took note of the Illinois resident who chose Michigan. Not Wisconsin: UW doesn’t pay for test scores. Emory’s math department improved when they lured Ken Ono from Wisconsin. Do not forget that state flagships are two-tiered, since they serve their elite students as well as their good ones with honors programs.
There is a difference when comparing schools’ top/mid/low test scores, although an ACT point alone is not significant. A ten-point difference, however, means something. Don’t be nitpicky, folks. It is not surprising that a rich school district with a majority of highly educated parents will produce higher test scores. I wonder how many of those students went beyond a practice test or two for their scores? Then you get perfect (or a point off) scores despite an average population.
Back to the thread title. USN&WR rankings are to be taken as a general guide- school #1 is not necessarily better than #5 or whatever.
I agree that there are serious flaws in the subjective US News GS and peer ratings, but I don’t think college and university presidents and provosts are quite as much in the dark as some here would have it. A few have said they just don’t know and they assign the task to someone else, but they don’t all say that, and for good reason. Of course they don’t have comprehensive information on hundreds of schools nationwide, but they all know who they’re competing against, and they watch them like hawks. They know who they’re chasing, and who’s chasing them in terms of competition for top faculty, top students, research dollars, and much more. They know which schools are frequently successful in poaching their best faculty, and those whose faculty they’re regularly successful in poaching, largely on the basis of perceived prestige among those actually in the field. They know who they regularly beat in the competition for top entry-level faculty, and who regularly beats them. They know how their faculty stacks up against others in a wide variety of fields, as measured by major awards, success in garnering external research dollars, NRC graduate program rankings, research publication impacts, and the like. They know what faculty pay scales are like at their peer institutions. They regularly get an in-depth look at some of their competitors through service on accreditation committees that do in-depth interviews, site visits, and analysis of voluminous data. They regularly meet with the presidents and provosts of their peer institutions at conferences and such, and they compare notes in a conscious effort to keep up with the competition and to tout their own successes. Just as in any highly competitive industry, they need to know the competition and what they’re up to.
Every school maintains a list of its closest peer institutions, and collects fairly detailed information on them so it can see where it’s excelling and where it’s falling behind with respect to its peer group. And the members of that group can change as schools rise and fall. Some schools are outside the peer group because the school itself knows it’s not competing on the same high plane. So, for example, the public flagship where I work, the University of Minnesota-Twin Cities, never lists any Ivy League or other elite private institutions as its peers, because they’re really not. It does list a number of other large public research universities, all AAU schools with major research budgets, including most of the public Big Ten schools along with UVA, Florida, U Texas-Austin, UNC-Chapel Hill, U Washington, Pitt, Arizona, and four UCs (Berkeley, UCLA, Davis, and San Diego). Some of these are “aspirational peers,” schools Minnesota knows it’s chasing but which might provide valuable models for improvement. Others are on a similar plane overall but perhaps stronger in some areas and weaker in others. Many schools don’t make the list because they’re perceived to be weaker in most regards (correctly, in my judgment).
The University of Michigan, on the other hand, regularly lists some Ivies and other top privates among its peers, and that’s probably fair. Again, some of this is aspirational, but they know that. And both Michigan and Minnesota would say—again, correctly in my judgment—that Michigan is a stronger school overall in almost any dimension that matters to the schools.
It’s a separate question how much of this has anything to do with the quality of undergraduate education. Having spent a lot of time at both Minnesota and Michigan, I do think there’s a qualitative difference in Michigan’s favor, but that’s harder to measure. But then, nothing else in the US News metrics really measures that, either.
@wis75 BTW, the overlap was discussed. And it really wasn’t important to the larger point.
There were over 2 million discrete ACT test takers last year. The top 1% included all of the 36s and 35s and a large chunk of the 34s.
If only about 35k scored 34, 35, or 36, it is mathematically and statistically impossible for these to be common scores at more than, literally, a handful of schools as a percentage of all schools, making such scores uncommon as well.
I still don’t understand why this information would be upsetting or controversial in any way.
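For what it’s worth, here is a rough back-of-envelope check of the arithmetic above, using the approximate figures from the posts (about 2 million ACT takers, roughly 35k at 34-36) plus an assumed count of roughly 3,000 four-year colleges purely for scale; the exact numbers don’t matter for the point.

```python
# Back-of-envelope check using the approximate figures from the posts above.
total_act_takers = 2_000_000   # "over 2mm discrete test takers"
top_scorers = 35_000           # approx. count scoring 34, 35, or 36
four_year_colleges = 3_000     # assumed rough count, for scale only

share = top_scorers / total_act_takers
per_school = top_scorers / four_year_colleges

print(f"Share of takers at 34-36: {share:.2%}")                   # ~1.75%
print(f"Average per school if spread evenly: {per_school:.0f}")   # ~12
```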
@bclintonk Very good information. I have a couple of questions, if you don’t mind giving your opinion (or anyone else who wants to offer).
- Assuming it's accurate that, in most cases, the correct admins are the ones filling out the surveys and that they are informed as you mention above: how much "competition bias" do you think is present? I guess I've always felt that this part is kind of like asking the participants in a beauty contest to rate their competitors. You can't rate yourself, but you can downplay those running against you. Some have mentioned how schools actively try to game the rankings with data like test scores, acceptance rates, etc. Why wouldn't they in this case? Especially if they know (as another poster mentioned?) that USNWR throws out some ratings to account for any deliberate attempt at manipulation (i.e., if you know they're throwing out the highest and lowest, you can still work around it). The prevailing view seems to be that these ratings are such a game, so I would think, if it's really so mercenary, that these schools would play every card available to them, especially when it affects something worth 22%.
- You mentioned you work at a university. Do you know if schools provide the list of peers you mentioned to USNWR when submitting ratings on other schools (and submit data for only that immediate peer group)? Or do they provide ratings on a larger group of schools (i.e., 10 schools vs. 50)? Just curious whether the magazine can see that just because one school cites X, Y, and Z as its peers doesn't mean X, Y, and Z return the favor. As you say, the peer group could be somewhat aspirational on behalf of the school.
Note also that one reason for this is the title of this thread.
“22.5% of US News’ weighting is based on perception, which is a “soft” number. Instead SAT/ACT scores are “hard” numbers, which cannot be as easily manipulated through marketing and volume repetition.”
Since OP’s opening sentence mentioned 22.5%, perception and “soft” number, I assumed they were referencing the GC/peer ratings.
So I read the question to be: why is there so much weight put on this soft piece (which can be manipulated), but less weight put on this hard piece (which can’t)? Also, some of the early posts mentioned the relative weight of these two pieces, so I think I just took it from there. Maybe I misinterpreted OP’s question; that’s possible.
You can see the list of peers reported to IPEDS at https://www.chronicle.com/interactives/peers-network . The reported peers do tend to be more upward than downward. For example, Tufts selected 5 Ivies, Duke, Northwestern, WUSTL, Georgetown, and BC as peers. Among that group, only BC selected Tufts as a peer. Instead the colleges that selected Tufts as peers included Clark, Drexel, University of Phoenix, etc.
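If you pull that Chronicle/IPEDS peer data down, checking reciprocity is straightforward. Below is a small hand-typed sketch; the peer sets are illustrative only, loosely modeled on the Tufts example above, not the actual dataset.

```python
# Sketch of a reciprocity check on self-reported peer lists.
# The data below is hand-typed for illustration, not the real IPEDS feed.
peers = {
    "Tufts": {"Brown", "Duke", "Northwestern", "WUSTL", "Georgetown", "BC"},
    "BC": {"Tufts", "Georgetown", "Villanova"},
    "Duke": {"Harvard", "Stanford", "Northwestern"},
}

for school, chosen in peers.items():
    for peer in sorted(chosen):
        reciprocated = school in peers.get(peer, set())
        print(f"{school} -> {peer}: reciprocated={reciprocated}")
```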
re: 74
@ucbalumnus, just FYI, selectivity is worth 12.5%.
@bernie12, I would hope that the ratings of the academic admins would be based on their knowledge of the teaching/resource/curricular strengths of schools, not (or much more than) on selectivity. Here, and with the faculty resources metric, USNews is trying to gauge academic quality… I think. There are other measures not quite as directly related, but these two seem to be.
@bclintonk : I believe you on Minnesota vs. Michigan. But people try to use all this to discriminate between schools in essentially the same selectivity bracket. And again, I don’t think you can measure that unless you look at syllabi, major requirements, co-curricular offerings geared toward undergraduates, and course materials. I have tried, with a bias toward STEM and an increasing interest in the social sciences at some places. Again, within the same selectivity tiers, it becomes hard to predict trends as scores and such change more at some schools than at others. I feel that most instructors are not going to change their teaching, in terms of pedagogy and rigor, based upon increases in those stats. It is possible that newer/younger faculty may do so, but many junior faculty on the tenure track will typically not increase rigor or try new and risky pedagogical techniques, because they don’t want the risk of a “red flag” coming from their teaching evals. They tend to play it safe. They only do otherwise if the department they join puts pressure on them or provides an incentive. Some schools have more departments that apply this pressure than others do; Michigan is one of them in the “highly selective” category. Clearly that is a cultural thing at Michigan. People on CC like to underestimate academics at elite publics versus elite privates, but I refuse, and I just know better from looking with my own eyes.
@wis75 : “Took note of the Illinois resident who chose Michigan. Not Wisconsin- UW doesn’t pay for test scores Emory’s math department improved when they lured Ken Ono from Wisconsin. Do not forget that state flagships are two tiered since they serve their elite students as well as their good ones with Honors programs.”
Yes! That last part is so key. I would argue that many honors and accelerated courses at state flagships, especially those already falling into the “selective” category (I’m going to say 1250-1300 SAT means or so), often offer stronger experiences and teaching quality than standard courses at some elite privates. And honestly, I use this same criterion to discriminate between elite privates. When you look at the top 25-30 or so, it appears that the top 10-12 have edges that are especially notable in STEM, but in other areas as well. And by “edge,” I mean being much more able to cater to the extreme talent recruited at a school. For example, do you have classes and special academic tracks that would intellectually stimulate an International STEM Olympiad medalist or competitor? Is there something to engage already published writers (things like the Hum sequence at Princeton), or students who are just really ambitious and aggressive about some area of interest? Publics are large and can already handle these students without straining resources too much, especially elites like Michigan and Berkeley, but among elite privates it is not nearly as even. Some don’t offer as many accelerated tracks or courses (it seems everyone really only offers them in math and sometimes physics). And then some clearly have a different culture, such that the crew at the very top is more inclined to encourage very prepared students to pursue the more/most advanced tracks, whereas other elites, not so much (both through advising apparatuses and peer pressure).
It seems with the latter, students must push past conventional wisdom to stumble upon or really force themselves onto such tracks, or even craft them if they don’t exist formally. I don’t think the difference necessarily occurs because of a lack of talent; I just think that some of these schools have way more money to cover the costs of these tracks and courses. Say a chemistry department decides to offer both a freshman organic course AND an intensified/modernized version of general chemistry. Allocating one or more instructors to the latter course would take away from a section that could go to the droves of students taking regular gen chem, and an additional lab for that section may have to be created. This is not worth it for some schools, so it is either ochem or basic gchem. Emory tried to get around this by just modernizing the general chemistry courses (okay, to be real, they are basically just stealing the content design and curricular structure from Williams and other liberal arts colleges), but most schools in its tier don’t do that. It seems that, among the “not extremely wealthy CAS unit” selective privates, those that are larger than average (say, Cornell) or smaller than average (Rice, and I guess Chicago) can pull off additional tiering more consistently. Others either don’t try or have to work within whatever limitations they have (it seems Emory and WUSTL go out of their way to pump up or do the “general” courses for the masses differently, and both do have “some” special tracks in STEM and elsewhere; it just isn’t really the type seen at most of the very leading institutions. But credit needs to be given where it is due).
*Math improving at Emory… I guess you mean reputation-wise (grad/research, and through the program that Ono hosts). Very little seems to have changed for undergrads (I think there are just too many math majors and joint majors at Emory, to the point where it puts stress on the teaching load, even in upper-division and intermediate courses; I am almost beginning to think that some courses should just be made harder to weed out less serious folks or encourage them not to take as many upper divisions), except that they finally added a legitimate honors course for super-advanced/ambitious freshmen. It seems a lot of energy is shifting to the Quantitative Methods Institute for undergrads, because it is more interdisciplinary and incorporates computational techniques into every quantitative core course. I suspect the joint major between QTM and Math will grow more popular, though (as opposed to the applied math track).
@prezbucky : Indeed, one would hope. It is certainly more likely that would come into play if the metric remained than not. “Faculty resources”… I never knew what this meant outside of salary and benefits. Instructional quality and co-curricular offerings for undergrads are far too complicated to be dictated by that, especially at research universities. If there is no way of measuring “culture,” you simply can’t know. You can use student surveys, but I have learned not to fully trust those with regard to academic quality. Many students seem to rate ease, edutainment, and cult of personality (being “nice” and “caring,” which is sometimes code for “grades easily and assigns little or easy work,” basically “cares about our grades because that is the only thing we care about,” lol). I now sort of like to just investigate course enrollment patterns for my areas of interest; they tell me what student bodies like and how much they are willing to stretch intellectually. When course materials are available, I compare the materials to the ratings they give the professor (both “ease” and “quality”). I’m always interested in seeing what some student bodies deem “difficult” (if the course appears standard or simple, I assume the teaching is only so good, even if they say it is great, especially at selective colleges and universities). It is fun (and sometimes funny), but time-consuming, to look into this stuff.
The so-called hard piece, test scores, can be manipulated over time by emphasizing test scores in admission, offering more test-score- or NM-based scholarships, and trying to increase the volume of applicants.
^ Test scores are also manipulated by reducing the size of the freshman class and making up for the lost revenue by increasing the number of transfers you admit, including many with lower test scores than those you admit to your freshman class. The scores of transfers aren’t counted in the freshman class medians that US News uses in its ratings. Or you can achieve the same result by deferring lower-scoring admits to January enrollment, since only fall freshman enrollment is counted in the US News ranking. Or both. Or you can go test-optional or test-flexible, so enrolled freshmen with lower SAT/ACT scores will tend to apply without submitting these scores and never get counted toward the freshman medians. Some schools even tell marginal applicants to spend their first year elsewhere and reapply for transfer admission as sophomores. They end up in the class but aren’t counted toward the school’s ACT/SAT medians.
Even superscoring is a kind of manipulation. Most private schools superscore; most public universities don’t.
With the use of these techniques increasingly widespread, ACT/SAT scores are not an apples-to-apples comparison.
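A toy illustration (entirely made-up scores) of the mechanism described above: transfers, January admits, and score-withholding enrollees at a test-optional school fall out of the freshman median that US News sees, so the reported number can sit well above the median of everyone actually in the class.

```python
from statistics import median

# Made-up ACT scores, just to show the mechanism.
fall_freshmen = [35, 34, 34, 33, 32, 31, 30, 30]  # the only cohort US News counts
spring_admits = [28, 27, 26]                       # enroll in January; not counted
transfers     = [27, 26, 25, 24]                   # arrive as sophomores; not counted
no_score      = [26, 25]                           # test-optional enrollees; no score reported

everyone = fall_freshmen + spring_admits + transfers + no_score

print("Median the school reports (fall freshmen only):", median(fall_freshmen))  # 32.5
print("Median of everyone actually enrolled:          ", median(everyone))       # 28
```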