Are Test Optional Schools Committing Fraud When Posting Scores Obtained By A Fraction of Students?

Count me in the “failing to see the problem here” crowd.

“Almost every institution – up to and including Harvard and Caltech – probably has some layer of kids, 5% or 10% of the class, that is admitted completely without regard to test scores, based on other qualities and strengths. That layer never shows up in the 75-25 spreads – they are just part of the bottom 25%, and may not even move the 25% needle down much, if at all.”

Or they might not show up at all. Example: Harvard’s Z list.

I agree.

It would be like complaining about a college posting its tuition and housing costs when many students don’t pay full price because they receive financial aid. Or, conversely, complaining about figures for “average” amounts of student aid or loan debt, when not all students qualify for financial aid and not all students take loans.

Of course those figures are also misleading for people who don’t understand the financial aid process – families who are either deterred from applying because they don’t realize that their student is eligible for a substantial amount of scholarship or grant aid, or families who are overly optimistic about financial aid because they don’t understand, for example, that if a college meets 95% of need on average, that doesn’t mean that all students have 95% of their need met.

That’s not “fraud” – it is just a case of unsophisticated people not understanding what the numbers mean. Maybe colleges could do a better job of explaining on their websites – but they really don’t have control over how the data is presented by other entities, such as the College Board site or US News.

I’d add that the colleges cannot simply decide that they aren’t going to report the scores, because US News won’t list colleges that don’t report scores on its primary lists. So if, for example, Bowdoin decided it simply wouldn’t report scores at all, since not everyone submits, then US News would no longer list Bowdoin. It’s not just a matter of dropping a few spots in the rankings – they would simply not be listed. (There is some sort of “other” list that US News maintains, but no one looks at that.) Even applicants who don’t care about rank order typically use US News as a resource to develop a college list – so Bowdoin would probably do just fine if it were ranked #8 instead of #5 – but not being listed at all would lead to a devastating drop in application numbers.

In the case of some colleges, the reported numbers have meant something else entirely. I’m happy to be in the minority on this thread when it comes to having at least some interest in how that could arise.

Deceptive might be a more accurate word than fraud. The numbers reported may be legally defensible, but they are, and are designed to be, deceptive. An 85 point score drop, as in this case, isn’t insignificant. If it were, those schools would report their actual enrolled numbers. The colleges know what they are doing, and why. One can say the rest of the numbers are misleading too, and they might be, but it still behooves us to call out deception when it occurs. Integrity of the process, and all that. Or maybe we’ve just given up on that too.

Hampshire manages to survive being unranked.

Actually, the financial aid numbers are far more accurate than the SAT scores reported. Most websites will disclose something like “x% of students qualify for need-based aid; the average amount of such aid given to those students is y dollars, in the form of a dollars in grants, b dollars in work study, and c dollars in loans.” That is likely accurate, and a far cry from “We are test optional; the middle 50% of SAT scores submitted was 1330-1500.”
If this were financial reporting, one would be required to note immediately that 25% of applicants did not submit scores, and that the average scores of those actually admitted and/or enrolled may differ substantially from those reported above.

Well, maybe when a college has a 70% admission rate and 20% yield, and is ranked at #110 by US News, the calculation of the value of being “ranked” is a little different. At a certain point the numerical ranking probably hurts more than it helps, especially for a college that is part of a consortium of more selective and prestigious schools.

For much the same reason that a student expecting to earn a C- in a class might opt for Pass/Fail grading instead.

Thread’s going in circles, always coming back to, in effect, “No, I know it’s deception. Because I know.”
Btw, average financial aid awarded doesn’t tell you what you’ll get.

The difference is that financial aid numbers are critical information for some students to understand, because they can’t attend schools they can’t afford. Whereas if a school is test-optional, the only information the student needs to know about test scores is whether it is worthwhile to submit. By disclosing the test scores of submitters, the student has the pertinent information: is their score at or above the median that the ad com will see for submitters?

In fact, what could be misleading in that context would be a policy that includes the scores of nonsubmitters, because that might cause a student to overestimate the value of a lower end score.

It appears the suggestion here is that only students with standardized scores at or above the median should submit their results to test optional schools. If this suggestion were followed by all applicants, pretty much all test optional schools would, through iteration, eventually report perfect or near-perfect scores for those submitting test results.

This is guaranteed to happen over a period of time. Maybe standardized tests will be done away with entirely before it happens… This shows how illogical some of the arguments here are.
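In case it helps, here’s a minimal sketch of that iteration in Python. Every number in it (pool size, score distribution, starting median) is an illustrative assumption, not data from any college: each cycle, applicants submit only if they are at or above the previous cycle’s reported submitter median.

```python
import random

# Minimal sketch of the iteration described above. Every number here
# (pool size, score distribution, starting median) is an illustrative
# assumption, not data from any college.

random.seed(0)

def simulate(cycles=8, pool_size=10_000, start_median=1330):
    reported_median = start_median
    for cycle in range(1, cycles + 1):
        # Hypothetical applicant pool: SAT-like scores clamped to 400-1600.
        pool = [min(1600, max(400, int(random.gauss(1300, 120))))
                for _ in range(pool_size)]
        # Applicants submit only at/above last cycle's reported median...
        submitters = sorted(s for s in pool if s >= reported_median)
        if not submitters:
            break
        # ...so the median the school reports ratchets upward each cycle.
        reported_median = submitters[len(submitters) // 2]
        print(f"cycle {cycle}: reported submitter median = {reported_median}")

simulate()
```

Run it and the reported median climbs toward the test ceiling within a handful of cycles, which is the “through iteration” point above.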

@rickle1
Very interesting that you mentioned Wake, as there is a prevailing belief (which I neither agree nor disagree with) that schools go TO to increase apps and decrease acceptance rates. If so, Wake is not doing a great job, and it seems as though TO has diminishing returns after the initial few years. Wake received fewer apps last year, which is hard to do when all other highly ranked schools increased apps, even if only slightly.

The referenced Bowdoin 85-point drop was computed as the sum of the 25th percentile scores on M+V. Note that the actual combined score is unlikely to show an 85-point drop, since many students who are below the 25th percentile on one section are not on the other. The ACT composite is more reflective of the actual combined score, and if you look at that instead, the difference is quite a bit smaller:

Full Student Body: Middle 50% ACT Range = 30 to 34
Only Test Submitters: Middle 50% ACT Range = 31 to 34
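For anyone who wants to see why summing the two sections’ 25th percentiles overstates the drop, here’s a rough simulation sketch. The score means, spreads, and correlation are invented for illustration, not estimated from Bowdoin’s data:

```python
import random

# Sketch of the point above, with made-up numbers: the sum of the two
# sections' 25th percentiles generally understates the 25th percentile of
# the combined score, because few students sit in the bottom quartile on
# both sections at once.

random.seed(1)

def pctile(values, p):
    values = sorted(values)
    return values[int(p * (len(values) - 1))]

n = 10_000
# Correlated section scores: a shared "ability" term plus per-section noise.
ability = [random.gauss(0, 1) for _ in range(n)]
verbal = [700 + 40 * a + random.gauss(0, 60) for a in ability]
math_scores = [710 + 40 * a + random.gauss(0, 60) for a in ability]
combined = [v + m for v, m in zip(verbal, math_scores)]

sum_of_p25 = pctile(verbal, 0.25) + pctile(math_scores, 0.25)
p25_of_sum = pctile(combined, 0.25)
print(f"sum of section 25th percentiles:   {sum_of_p25:.0f}")
print(f"25th percentile of combined score: {p25_of_sum:.0f}")  # comes out higher
```

Unless the two sections are perfectly correlated, the 25th percentile of the sum comes out higher than the sum of the 25th percentiles.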

The margin of error on SAT scoring is about 60 points, and that’s without accounting for practice/coaching effects for students who retake. Students who are applying to test-optional schools have less incentive to retake, or even to prep extensively. Based on the data posted at https://www.bowdoin.edu/ir/data/admissions.shtml, you see these differences between submitters-only and full-class data:

Reading:
25th percentile: -40 (690/650)
Median: -20 (730/710)
75th percentile: -15 (765/750)

Math:
25th percentile: -45 (685/640)
Median: -10 (720/710)
75th percentile: -10 (770/760)

So nothing that comes even remotely close to being significant in terms of the class as a whole. Certainly nothing that would provide useful information about the academic ability of incoming students.
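As a quick sanity check, here’s that comparison spelled out in a few lines of Python. The score pairs are copied from the lists above, and the ~60-point figure is the margin of error cited earlier in the thread, not an official number:

```python
# Quick check of the differences quoted above against the roughly 60-point
# margin of error cited earlier; the figures are copied from the post,
# not recomputed from Bowdoin's data.

MARGIN = 60  # approximate score band cited above

diffs = {
    "Reading 25th": (690, 650),
    "Reading median": (730, 710),
    "Reading 75th": (765, 750),
    "Math 25th": (685, 640),
    "Math median": (720, 710),
    "Math 75th": (770, 760),
}

for label, (submitters, full_class) in diffs.items():
    gap = submitters - full_class
    verdict = "within" if gap <= MARGIN else "outside"
    print(f"{label}: gap {gap} points, {verdict} the ~{MARGIN}-point band")
```

Every gap lands inside the band, which is the basis for the “nothing significant” conclusion.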

A 680 or 685 is far from incompetent. I suspect some are focused more on the production of results lists after admits matriculate than on what it takes for kid X to get in. Calmom is right that lower percentiles can encourage kids who otherwise aren’t qualified. As it is, too many kids look only at the easy superficials. Do you understand what does matter?

Colleges have increased and decreased applications for a variety of reasons besides whether they are test optional or not. For example, in the 2 most recent years listed in IPEDS, the top ranked LACs had the following application changes; 5 of the 9 had a decrease in applications. I don’t see any clear pattern between test optional vs. test required and which ones had decreased applications. During the referenced year, Wake Forest applications increased by 5.2%, a higher rate of increase than occurred at any of the colleges listed below, as well as at HYSM and most Ivies.

Williams – Up 1%
Amherst – Down 2%
Swarthmore – Down 1%
Wellesley – Up 5%
Bowdoin – Up <1%
Carleton – Down 4%
Middlebury – Down 1%
Pomona – Up <1%
Claremont McKenna – Down 13%

Interestingly, in the year immediately following Wake’s transition from TR to TO, they had a spike in applications (circa 2008). They gradually grew to their current application count over the next several years with little dramatic change, and have hovered in the 13k–14k range with a freshman enrollment of 1,250–1,350 or so ever since. Perhaps they, and the list above, actually are in the holistic admissions review business and see testing as simply one data point in a much larger body of work.

Part of their process is the admissions interview. They really want to know their applicants. This is handled by Wake staff, not alumni, on campus or via Skype. It’s hard to dramatically grow app count if they are being that detailed.

Re #134

This describes a shift from individual testing to testing of a population. The two have widely different significance requirements: a relatively small change in aggregated testing results can be enough to change one college, statistically, into another.

Exactly, merc81. And that is why the stats are presented as they are. Individual results are quite different from the aggregated results of thousands.
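To put a rough number on that: the uncertainty in an average shrinks with the square root of the group size. A minimal sketch, assuming a roughly 30-point standard error for a single section score (an illustrative per-student figure, not an official one; the class sizes are just examples):

```python
import math

# Sketch of the individual-vs-population point above. The 30-point figure
# is a rough per-section standard error for a single test taker (an
# assumption, not an official number); the class sizes are just examples.

individual_se = 30  # approximate error band for one student's section score

for n in (1, 100, 500, 2000):
    class_se = individual_se / math.sqrt(n)  # standard error of the average
    print(f"n = {n:>5}: uncertainty in the class average ~= {class_se:.1f} points")
```

So a 10-point move in the average of a 2,000-student class is, statistically, a far bigger deal than a 10-point move in one student’s score.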