Selectivity Ranking: National Universities & LACs Combined

I will address the above to dispel a false notion about the data that was reported by Dean Vos. Here are a few salient points to understand:

  1. The data reported to USNews had NO impact on CMC's ranking, including the selectivity ranking. In plain English, for the people who do NOT understand the methodology used by USNews: CMC's numbers yielded the same numerical order with both the "false" and the correct data. In so many words, CMC was ahead of its next competitor both before and after the revised ranking done by Bob Morse.
  2. The naïve and clumsy rearranging of the numbers was trivial in terms of reporting, but it was considered a grave lack of integrity by the Board of Trustees. The deception was uncovered by an internal mechanism -- as it should have been. Drastic steps to avoid similar issues were proposed and adopted by CMC. A call for other schools to adopt similar steps fell mostly on ... deaf ears. Had boosting the USNews rankings been Dean Vos's objective, he could have asked a couple of people who ... might have shown him HOW to do it with much greater results. Hint: it's not via the selectivity index. That was simply not the objective!
  3. As far as "correct" numbers go, it is pretty simple. One has to realize that the numbers reported by the schools are voluntary and NOT subject to much scrutiny, despite the claims by Morse. In so many words, it is an honor system resting on a VERY loose set of standards. Schools can apply "filters" to the reported data, and exclude or include numbers with glee and abandon. Want to exclude certain students? Go ahead! Want to leave Spring admits out of the report, a la Middlebury? That is fair game! Want to report "fantastic" class-rank numbers, a la Columbia? Go ahead! Want to meet with Morse, redefine most criteria, and work around the resources and class sizes, counting faculty who have not seen an undergraduate in a classroom in a decade? All of that is fair.
  4. Is there a bottom line? Yes: certain schools are NOW making an effort to report data that has been audited, while others continue to walk the same path as Lee Stetson or Ted O'Neill, reporting numbers on a whimsical basis and maintaining policies of non-disclosure of even the simplest data on a timely basis. We know the schools that cling to such policies, and the absence of a CDS is a clear sign that the numbers ought to be taken with a grain of salt.
  5. And, fwiw, there is no bigger grain of salt than the one needed for "class rank," a free-for-all that allows schools to either exclude many students (as many are not ranked) or simply "guesstimate" what the number might be. Hint: look at the UCs!

@xiggi - scenario #3.4 would be Northeastern? Why not name them? Afraid of the Huntington Ave Mafia?

The 2009 list I was referring to was the list created on CC. This was in response to post #26.

Your general comments regarding Claremont McKenna in relation to USNWR may have been warranted, but they didn’t have a direct bearing on my post.

Oh, I think I have dedicated many posts over the years to naming the offenders. After a while, I can’t remember them on the fly, and it is not really useful to go dig the information out of the arcane corners of this forum or the depths of the web.

The reality is that there has been very little improvement in terms of pushing for more transparent information and more verification. The other reality is that most people could not care less about the issues debated here, at least not at this level of granularity. The conversations will still be about how low the admission percentages are in March for the Ivy League et al club, with a bit of renewed interest when USNews publishes the older data in the summer.

In the meantime, schools will continue to “massage” the numbers, announce whatever they can get away with, and rest assured that people like the OP will even push the envelope by accepting the numbers at face value without much understanding of the underlying lack of integrity or relevance of the inputs. Lost on the members of this latter group is that glorifying a percentage of “ten percenters” when only one fifth of the school’s enrolled students are ranked, and four fifths come from high schools that have no use for rankings, is … a rather myopic view of the world.

Xiggi, do you have any idea how patronizing you’re sounding?

Believe it or not, you aren’t the only person here who cares about these issues.
As far as I’m concerned, the numbers on the morning weather report are suspect.
However, they are suspect even without assuming nefarious intent.

I mean, this morning you were tarring all my data with the “horse manure” brush apparently because a couple of schools high on the list don’t post CDS files. I asked you to list them; you didn’t; I did. So we can all fret over the fact that 8 colleges out of 100+ don’t post CDS files.

I guess your underwear is on so tight today that 8 CDS-less colleges isn’t a big enough problem?
So now all college data everywhere is suspect because people like that bad bad Ted “Dean of Love” O’Neill are poisoning the well for the few people like you with noble intent?

Oh brother. I guess you didn’t bother to trace the genesis of this thread.
Again: Somebody on another thread asked if there wasn’t a simple way to aggregate several measurements.
I commented that poster PapaChicken had done it in 2009.
Somebody else commented it was too old to be useful.
So I took it upon myself to gather more recent data and re-sort it.

Perhaps you know that I attended a college that wound up pretty high on my list and you think that was my ulterior motive for posting this information. I won’t deny I was pleased to see it move up as high as it did.
But you know, if you’re at all familiar with my posts … despite any other faults they may have … I don’t think you’d have much basis for calling me a shill for my alma mater. I more frequently recommend small liberal arts colleges than the university I attended. And as I pointed out, many of them moved down in this list.

“people like the OP will even push the envelope by accepting the numbers at face value”

It’s a lot easier to view the OP’s list objectively if you don’t especially care where a particular school ranks. The list is apparently intended as an approximation of selectivity. Unfortunately it cannot account for all the uncertainties in the data from which it has been composed.

The list’s greatest strength is its interactivity. Any reasonable contributions or criticisms have thus far been incorporated, or at least acknowledged, by the OP.

“I attended a college that wound up pretty high on my list”

I’ll assume this may be the top 10 or 50. If this is the case, what would be the motive for the OP to bother to create a list of 101 schools?

^ Meant: “what would be the ulterior motive”

“I won’t deny I was pleased to see it move up as high as it did.”

If anything, in looking for ulterior motive, the OP’s school’s rise (assuming the OP has already graduated) can be interpreted as a negative. By the most superficial of standards, the OP would have attended his college at a time when it was “ranked lower” because of its less competitive admissions environment.

TK, you are still not getting it! The missing data is a small part of the problem of your compilation.

I have not read the whole thread, but I’m still recovering from laughing at the horse manure comment. This has turned out to be my best Sunday activity.

Here is a summary of the final report of the investigation into CMC’s misreporting.

It is actually pretty interesting.

It was a multi-year event with significant misrepresentation of selectivity data that appears to have resulted from a disagreement in admissions philosophy between the president (who wanted to see improving stats) and the director of admissions (who thought it was better to use a more holistic approach). Note that CMC had the applicants with higher numbers, but the director of admissions chose not to admit them. Which of course raises the question of whether schools with higher stats are really more selective.

The fact that CMC’s USNews rank was not impacted (at least in the year it was discovered) is not particularly relevant, because the appearance of increased selectivity (via higher stats) can be as important as, or more important than, a change in rank. Some people make judgments about schools primarily based on the selectivity stats. In fact, there was a study showing that changes in peer assessment (one of the biggest factors in the rating) correlated with changes in selectivity.

https://www.insidehighered.com/news/2012/04/18/claremont-mckenna-admits-extent-deception-admissions-statistics

I found it interesting that the first thing mentioned in CMC’s mission statement is that it is selective - I’ve never seen that before…

https://www.claremontmckenna.edu/about/mission.php

TK,

If you change the weights a little, for no particular reason, your ranking will change, also for no particular reason. What xiggi and Blah have tried to say, in harsh non-mathematical terms, is that any weighted-average ranking is very subjective and has no real value. In my opinion, any weighted-average ranking that does not put HYP at the top may not have the correct weights, and many weighted-average rankings, USNEWS’ included, are “garbage at core”.

Sorry if I am using strong language here; I don’t want to offend anyone, as I am already labeled “shameless to defend Stanford,” even though I was just quoting some of the “facts” from a school newspaper.

@Mastadon: I agree with your points. I hope they fall on receptive readers.

@ewho:

“any weighted-average ranking that does not put HYP at the top may not have the correct weights”

But for how long? How far would these schools have to fall in actual selectivity (as you perceive it) before you would be forced to reconsider this? One of the purposes of a fully developed selectivity ranking would be to identify actual changes in advance of common – or even erudite – perception.

I don’t want to put you in a position where you feel obligated to answer my questions. Consider them mostly rhetorical.

“I am already labeled as a ‘shameless defender of Stanford’”

Lol. It’s funny what we get accused of on CC. It’s been said or suggested that I attend/attended at least six schools. It would be nice to have that many degrees. I think even my participation on this thread may result in an accusation that I have a particular interest in selectivity. I’d say my interest here has been mainly to see the actions of a work in progress, the result of which could be a fine, tentative, list, suitable for some purposes and not others. As for the “six” colleges I’ve attended, I have little concern as to where they rank – that’s not why I chose “them.”

Nope, I’m not getting it, because you haven’t even clearly described the “missing data” part of the problem. What “missing data” do you mean? You alluded upthread to “data that has no verifiable sources for a number of schools”, without bothering to identify or even quantify “the number of schools”. Let’s be clear. About 8 schools (out of the 100+) don’t publish Common Data Sets. But then … along with some cryptic ad hominem criticisms … you’re shape-shifting this into a “missing data” problem (what’s “missing”?) … while alluding (again, cryptically) to other, allegedly bigger problems in the compilation.

You praised the earlier 2009 compilation as fine work. What exactly do you think is so different about how I’ve approached it? As I said before, PapaChicken would have had at least as big a problem with missing CDS sources in 2009. Or is it that you think that the CDS reporting itself has gone to pot in the past ~5 years?

Yes … quite possibly. Do you think I’m claiming these little lists are the eternal Ground Truth of all college assessments? Even an equal 33/33/33 mix is “weighted”. So if you’re afraid of weights, then avoid any college ranking/assessment compiled from more than a single measurement. Go by average SAT scores alone, if you like that (it’s actually a fairly good predictor of the overall USNWR rankings). Or build your own model with weights tailored to your own scenario. Or forget about data altogether and go ask your Great Aunt Agnes what she thinks about Vassar.
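To make that concrete, here is a minimal sketch (Python, with hypothetical school names and numbers, NOT the actual compilation data) of how anyone can roll their own three-measure weighted composite:

```python
# Minimal sketch of a "build your own weights" composite. All names and
# numbers below are hypothetical, not taken from the actual compilation.

def composite(schools, weights):
    """Rank schools by a weighted sum of min-max scaled measures.

    schools: dict of name -> (median SAT, % in top 10%, admit rate)
    weights: (w_sat, w_top10, w_admit); should sum to 1.
    """
    sats   = [v[0] for v in schools.values()]
    top10s = [v[1] for v in schools.values()]
    admits = [v[2] for v in schools.values()]

    def scale(x, lo, hi, invert=False):
        s = (x - lo) / (hi - lo) if hi > lo else 0.0
        return 1.0 - s if invert else s  # lower admit rate = more selective

    scores = {}
    for name, (sat, top10, admit) in schools.items():
        scores[name] = (weights[0] * scale(sat, min(sats), max(sats))
                        + weights[1] * scale(top10, min(top10s), max(top10s))
                        + weights[2] * scale(admit, min(admits), max(admits), invert=True))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical inputs, for illustration only.
data = {"College A": (1500, 95, 0.06),
        "College B": (1460, 90, 0.09),
        "College C": (1380, 97, 0.40)}
print(composite(data, (0.65, 0.25, 0.10)))  # try (1/3, 1/3, 1/3), (0.5, 0.4, 0.1), ...
```

The point is only that every mix, 33/33/33 included, is a modeling choice.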

I prefer to do the best job one can of gathering data, listening to problems that people report, and adjusting it accordingly. Let’s try to avoid the Nirvana fallacy (http://en.wikipedia.org/wiki/Nirvana_fallacy).

Well, that in a nutshell is roughly how data modeling works for rankings like USNWR. In the beginning, there’s a committee of trusted experts. They say HYPSM etc seem to be the best colleges. Then the modelers try to identify a plausible bunch of measurements that will mimic the opinions of the experts. They experiment with the choice of measurements, experiment with the weights, to come up with a formula that replicates the “expert” opinions. Then they apply that formula to a bigger set of colleges and see if the output still looks plausible. Lather, rinse, repeat.
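As a toy illustration of that lather-rinse-repeat loop (again, every name, value, and the "expert" order below is invented), one could grid-search for the weight mix that best reproduces a given expert ordering:

```python
# Toy illustration of the "fit weights to mimic the experts" loop described
# above. School names, measure values, and the expert order are all invented.

# Hypothetical measures per school, each already scaled to 0..1:
# (test scores, top-10% share, inverted admit rate)
measures = {"H": (0.99, 0.98, 0.95), "Y": (0.97, 0.97, 0.94),
            "P": (0.98, 0.96, 0.93), "X": (0.90, 0.99, 0.80)}
expert_order = ["H", "Y", "P", "X"]  # the "committee" opinion to reproduce

def rank(weights):
    score = {s: sum(w * m for w, m in zip(weights, vals))
             for s, vals in measures.items()}
    return sorted(score, key=score.get, reverse=True)

def agreement(order):
    """Fraction of school pairs ordered the same way the experts ordered them."""
    pos = {s: i for i, s in enumerate(expert_order)}
    pairs = [(a, b) for i, a in enumerate(order) for b in order[i + 1:]]
    return sum(pos[a] < pos[b] for a, b in pairs) / len(pairs)

# Crude grid search over the weight simplex in steps of 5%.
candidates = [(a / 20, b / 20, (20 - a - b) / 20)
              for a in range(21) for b in range(21 - a)]
best = max(candidates, key=lambda w: agreement(rank(w)))
print(best, rank(best))
```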

I simply re-applied the weights PapaChicken said he used ~6 yrs ago (50-40-10), which apparently were the weights USNWR used at the time, and which would have contributed to placing HYP at or near the top. Except that now … unless I’ve made mistakes (which of course is possible) … HYP aren’t quite at the top (in THESE measurements, with their admitted limitations). Is that the kind of thing that is really bothering a few people? I did try re-jiggering the weights to 65-25-10 (USNWR’s current formula). It doesn’t change the results all that much. The same 2 schools are still at the top. Exactly the same schools are still in the top 10. Some state schools (those with very high numbers of students in the top 10%) do see drops of 10 positions or more. As far as I’m concerned, these changes (or even most of the 2009-to-2015 changes I showed) aren’t all that momentous.
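For anyone who wants to run that kind of sensitivity check themselves, this is roughly what I mean (Python again, with placeholder values rather than my real spreadsheet):

```python
# Re-rank the same (hypothetical) scaled measures under the old 50-40-10 mix
# and the 65-25-10 mix, and report how far each school moves. The schools
# and values below are placeholders, not this thread's actual data.

data = {"School A": (0.99, 0.95, 0.97), "School B": (0.96, 0.99, 0.95),
        "School C": (0.90, 0.99, 0.70), "School D": (0.93, 0.80, 0.88)}

def order(weights):
    score = {s: sum(w * m for w, m in zip(weights, v)) for s, v in data.items()}
    return sorted(score, key=score.get, reverse=True)

old, new = order((0.50, 0.40, 0.10)), order((0.65, 0.25, 0.10))
for school in data:
    print(f"{school}: #{old.index(school) + 1} -> #{new.index(school) + 1} "
          f"({old.index(school) - new.index(school):+d} positions)")
```

With real data, large rank shifts under small weight changes would flag the schools whose positions are artifacts of the formula rather than of the underlying measures.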

Sure, the integrity of the source data is a legitimate concern … but I’m not persuaded the situation is all as bad as Xiggi seems to think it is.

As there has been much discussion about CMC here, I’d like to share our experience. My kid is graduating at the top of her class of 114 (likely Val) with a 35 ACT. Our school averages roughly a 25% admit rate to each of UCLA and Berkeley and at least a 10% admit rate to Ivy/Stanford. My kid chose CMC and was told that her stats accounted for nothing; being a recruited athlete got her her admission. I can verify that our school’s Naviance shows 4.0 unweighted/2400 SAT students being rejected from CMC. The only admits we have had in the past six years from our highly selective college-prep high school (which doesn’t rank) have been recruited athletes. I don’t know what this adds to the conversation, but at least I can confirm what StagAlum said a few pages back.

acemom, you’re bringing up an aspect of selective college admissions that I simply don’t know how to capture in an exercise like this. I’m ONLY trying to measure 3 features: scores, class rank, admit rate. Those are the only features that US News considers, too. This methodology may very well understate the selectivity of some schools relative to others (especially from the perspective of students who bring more than high stats to the table). The distortion may be greater in some quartiles than others (for example in the 2nd quartile, where a bunch of top state flagships start showing up with very high T10% numbers, even though their admit rates are relatively high).

@tk21769 I really appreciate the effort it took to do all of this. Quite an undertaking.

I noticed a couple of things for future consideration. 1) I know it is tough, but limiting the rankings to schools that also have top overall USNWR rankings is problematic; those rankings use things like spending as part of their formula. 2) As someone pointed out earlier, with public schools (like Texas or Penn St.) you often have two sets of information: one for in-state, one for out-of-state.

Ultimately, selectivity does not really tell you much in and of itself. That is what makes this whole process so difficult. What if we had a system where the top academic students could submit one application and the schools then bid on them, making the best offers they can, with the students choosing among the schools?

Thanks for the efforts. Don’t worry about all of our quibbles.

Quibbles are fine. Corrections are great. Caveats about the limitations of these numbers are, too.

What I don’t like are ad hominems (including insinuations about my “obvious” motives), or attempts to poison the entire data well with allegations about a few schools’ reporting methods.

Well, let me repeat what I wrote earlier but with different words:

  1. There are schools that do not publish data -- as you know, having confirmed the absence of a CDS for, say, Chicago or Columbia, among others. In the absence of a CDS or other verifiable data, one --as you did-- has to rely on data that might be correct or might be the result of fudged numbers. To be clear, you reported a 1515 SAT average (or was it a median, or something else?) for Chicago based on the 2013-2014 numbers. Are you sure that the 1515 corresponds to the 746M and 746V reported by Chicago in other publications, which would sum to 1492? All in all, the absence of verifiable data (and "verifiable" takes its own grain of salt) makes it hard to make meaningful comparisons. Data that is found might be for admitted classes, enrolled classes in May, or enrolled classes in the Fall ... with all kinds of possible adjustments. In so many words, why should anyone trust the data "reported" by Chicago? Inasmuch as the SAT scores are subject to minute adjustments, why would anyone believe the reported "top 10 percent" is not subject to massive adjustments and exclusions/inclusions? While you might disagree with me, I do not think the number you used in the case of Chicago would survive much scrutiny unless the "auditor" was schooled by Lee Stetson.
  2. The biggest problem, beyond the obviously hidden data, is that the numbers are suspect in many cases. I have made comments on this in the original thread compiled by PapaChicken.
  3. The second biggest problem, and it is one of methodology, is relying on an overweighted factor for the "ranking percentage," as this number hardly represents the enrolled class correctly. Inasmuch as you do not have the capability to apply control mechanisms, you are compounding the weakness of the formula used by USNews. To be clear, the top 10 percent is a poor criterion par excellence, and one that is --obviously-- open to much manipulation in certain circles.
  4. As far as ulterior motives and the other "accusations" go, there would be one simple way to shut down the voices that "expose" a shill --as you said-- and that would be to eliminate the schools that continue to play games with transparency and refuse to make their CDS public. How about relisting only the transparent schools and preparing a separate listing for the nebulous-by-design ones?

Dare I predict that your favorite school would be in the second list?