Are Test Optional Schools Committing Fraud When Posting Scores Obtained By A Fraction of Students?

@Massmom

Well, other people don’t just make up phony statistics. I’ve tried and cannot confirm that claim, which wasn’t supported by any sort of citation when it was posted.

All of the real data (i.e., stuff I can cite to) suggests no difference in graduation rates between submitters & non-submitters:

The studies do show a boost in diversity numbers for schools that go test-optional.

As long as you were able to get the statistics on how many scores were represented, how is this fraud?


[QUOTE=""]
The studies do show a boost in diversity numbers for schools that go test-optional.

[/QUOTE]

/\ /\ /\ /\ /\
BINGO!

The 15% higher average grad rate for colleges that require SATs comes from an analysis of about 1650 liberal arts colleges and research universities in the US Department of Education database (IPEDS). I find it hard to believe that the correlation between average HS GPA and grad rate exceeds +.8. Calmom, what was the correlation between average HS GPA and grad rate?

The colleges are just so appallingly hypocritical. Of course they care about test scores; that is why even those test-optional schools cheerfully report their inflated scores. The colleges like high scores, and they like getting score reports from students who meet or exceed their averages (the rest of you, please hide your score reports). If they truly believed test scores weren’t important, they wouldn’t advertise them so much.

I think I found the source of the statement that HS GPA exceeds SAT in predicting graduation rates:
http://www.aei.org/wp-content/uploads/2018/05/What-Matters-Most-for-College-Completion.pdf
This article is very flawed and biased. It is based on a sample of students from less selective public schools in 4 states. Nowhere does it state how many colleges or students were in the data. It violates the most basic principles of research reporting. The data are suspect: at some points in the data, the graduation rate actually goes down as the SAT category goes up. This lacks validity on the face of it.

Graduation rate is primarily a function of selectivity, so it’s important to compare comparable colleges, particularly ones that are similarly selective, rather than just take a simple average. For example, looking at colleges in Ithaca, Ithaca’s test optional report shows that Ithaca test submitters had a 63.9% graduation rate, while test non-submitters had a 63.3% graduation rate – less than a 1% difference. Since the year of the report, Ithaca’s graduation rate has increased to 75%. Cornell had a more than 15% higher graduation rate of 93%. Is Cornell’s higher graduation rate primarily due to Cornell requiring test scores, or is it primarily due to Cornell being far more selective?

It sounds like you are looking at the correlation between the average SAT of the whole school, which is primarily a measure of selectivity. Nearly all measures of college selectivity are well correlated with graduation rate. For example, you’ll also find a significant (negative) correlation between admit rate and graduation rate.

If a school is concerned with the graduation rate of individual students, then the important metric is not the average SAT score of the school. The important metric is how much test scores add to the prediction of graduation chances beyond the information that is available in the rest of the application, for individual students. All studies I am aware of that considered both a measure of GPA and a good measure of high school rigor concluded that SAT I offers little benefit beyond these metrics. Similarly, all studies I am aware of that compared graduation rates of submitters and non-submitters at test-optional colleges found little difference in graduation rate.

You can find many example papers. The Ithaca one I referenced is at https://www.ithaca.edu/ir/docs/testoptionalpaper.pdf . The author found that he could explain a large 44% of the variance in cumulative GPA at Ithaca using a model that included FIRSTGEN, GENDER, ALANA, APCRHRS, HSGPA, STRENGTHSCHEDULE, SATM, SATV, and SATW. When he dropped the SAT variables from the model and only considered FIRSTGEN, GENDER, ALANA, APCRHRS, HSGPA, and STRENGTHSCHEDULE, he could explain 43% of the variance in cumulative Ithaca GPA instead of the 44% with SAT included. The author writes,

@collegehelp

Whose analysis? Please provide a citation or a link.

Wesleyan publishes the figure (75%) on its Class Profile page:
https://www.wesleyan.edu/admission/apply/classprofile.html

“It was strongly suggested at my kid’s private high school to pick either one or the other to take lest you come off as a test-taking drone who could have better used his time.”

Oh, please. Taking the tests 5 times each, or retaking them when you already have a great score: that could give such an impression. Taking both tests, not so much.

My state requires the ACT in all public high schools, as do several other states, I believe. Any student who has National Merit potential or makes semifinalist is probably going to take the SAT. Many, many students take both just to see which they do better on.

And if a student takes both and does well on both, there’s no reason not to send both scores.

C’mon. The Bates 88/89% grad rate can’t be improved on by 15% at some test-requiring college; it would add to more than 100%. But I suspect some are confusing this with selectivity. Not every TO college (and not every test-requiring college) is of equal selectivity, in the first place.

@PetraMC I also work for a test-requiring school. It absolutely doesn’t mean the vast bulk of applicants are more qualified. The reject rate is huge.

The kids who get in are not just high stats, but the ECs and the self presentation in the app and supp are tops, too. Without “the more-than-stats rest of it,” they don’t get into tippy tops. That’s regardless of superior stats. It’s not rack and stack.

It’s not just about scores. Ignore the rest at your peril.

Btw, anyone can find all sorts of proclamations about what leads to higher college gpa. Some association of hs math teachers claimed taking calculus is the key. Just taking it. This was under the auspices of College Board.

Meanwhile, ime, adcoms are far less concerned with predicting college GPA than most realize.

It’s probably not a coincidence that 75% of students submitted scores at Wesleyan and the USNWR ranking methodology states, “If the combined percentage of the fall 2017 entering class submitting test scores is less than 75 percent of all new entrants, its combined SAT/ACT percentile distribution value used in the rankings was discounted by 15 percent”, allowing Wesleyan to avoid the USNWR ranking penalty.

Calmom,
I have access to the US Dept of Educ database and I did the analysis using an Excel spreadsheet.
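
For anyone who wants to run the same kind of check without Excel, here is a rough Python sketch. The file name and column names below are hypothetical placeholders for whatever your own IPEDS extract contains, not actual IPEDS field names:

[code]
# Hypothetical sketch: institution-level correlations with graduation rate.
# "ipeds_export.csv" and the column names are made-up placeholders;
# substitute the field names from your own IPEDS extract.
import pandas as pd

df = pd.read_csv("ipeds_export.csv")

# Pairwise Pearson correlations; pandas handles missing values pairwise.
print(df[["avg_sat", "avg_hs_gpa", "grad_rate_6yr"]].corr())
[/code]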

Data10, in stepwise multiple regression, the increase in the predictive power of the model depends on the order in which variables are entered. Whatever variable you enter first will subtract from the predictive contribution of subsequent variables, to the extent that the variables are correlated.
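
A toy demonstration of the order effect, with purely synthetic data (nobody’s real dataset):

[code]
# Toy demo: with correlated predictors, the R^2 increment credited to a
# variable depends on whether it is entered first or second.
# All data here is synthetic, generated for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 10_000
gpa = rng.normal(size=n)
sat = 0.8 * gpa + 0.6 * rng.normal(size=n)   # deliberately correlated with gpa
grad = 0.7 * gpa + 0.2 * sat + rng.normal(size=n)

def r2(*cols):
    X = np.column_stack(cols)
    return LinearRegression().fit(X, grad).score(X, grad)

both = r2(gpa, sat)
print("SAT's R^2 if entered first:       ", r2(sat))
print("SAT's increment if entered second:", both - r2(gpa))
[/code]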

But that just goes back to what everyone was just talking about: the combined SAT/ACT submission percentage of every college entering class we’ve been discussing is >75%. It’s not a difficult bar to meet.

In which case, the so-called “statistic” is nothing more than your opinion.

Since, from the tenor of your argument, you appear to have made the mistake of confusing correlation with causation, I see no particular reason why I should trust your math, either.

The predictive power reference was related to an analysis done with certain variables fully excluded, not entered at all, rather than an issue of order. When only SAT scores were excluded, which is more comparable to admissions decisions at a test-optional college than looking at the correlation of a single stat in isolation, there was little difference in predictive ability. When only the HS transcript variables were excluded, there was a large difference in predictive ability.

All Variables Entered – 44% of variance in cumulative GPA explained
All Variables Except SAT Score Variables Entered – 43% of variance in cumulative GPA explained
All Variables Except HS Transcript Variables Entered – 25% of variance in cumulative GPA explained
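
For the curious, here is roughly what that exclusion comparison looks like as code. The CSV and its columns are hypothetical stand-ins for the Ithaca paper’s variables (the categorical ones assumed already coded 0/1):

[code]
# Sketch of the exclusion comparison above: fit the full model, then refit
# with one group of predictors left out entirely, and compare R^2.
# "admissions_outcomes.csv" and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("admissions_outcomes.csv")

other_vars = ["FIRSTGEN", "GENDER", "ALANA"]          # assumed 0/1 coded
hs_vars    = ["HSGPA", "APCRHRS", "STRENGTHSCHEDULE"]
sat_vars   = ["SATM", "SATV", "SATW"]

def r2_for(predictors):
    X, y = df[predictors], df["CUM_GPA"]
    return LinearRegression().fit(X, y).score(X, y)

print("All variables:         ", r2_for(other_vars + hs_vars + sat_vars))
print("Without SAT variables: ", r2_for(other_vars + hs_vars))
print("Without HS transcript: ", r2_for(other_vars + sat_vars))
[/code]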

I think the percentage might be lower than this in many cases. For example, Bowdoin appears to fall perennially below the 75% figure:

(Though the above quotation itself is worded so as to be somewhat unclear, I believe it means that between 2/3 and 3/4 of matriculated Bowdoin students gained acceptance without having submitted traditional standardized scoring results.)

https://www.insidehighered.com/admissions/article/2018/09/28/proponents-test-optional-admissions-point-momentum

Re # 76, s/b “believe it means that between 1/4 and 1/3 of matriculated Bowdoin students gained acceptance without having submitted traditional standardized scoring results.”

There are two separate numbers: the number of students who submit scores during the application process, and the number of students who submit them after admission. Many test-optional schools do request score submission post-admission; also, even without formal submission, that information is often included on high school transcripts, and the colleges pick it up that way.

Bowdoin’s CDS indicates that 53% of students submitted the SAT and 52% submitted the ACT. They require matriculating students to submit scores, including ones who did not submit scores during the application process. This allows Bowdoin to report scores for all students in the CDS, on their website, and to USNWR, including those who did not submit during the application process. This also allows more than 25% of Bowdoin’s class to not submit during the application process, without facing the USNWR ranking penalty.