New pet peeve: test optional at top schools

Furthermore, grad schools are starting to drop the GRE because, among other things, it does not, surprise surprise, predict grad school success:

There’s a little circular reasoning in that: they conclude that physics PhD graduation rates depend on undergrad GPA and the ranking of the grad school. Well, the ranking of the grad school is partly based on scores, so maybe the reason MIT or Princeton is ranked higher is that they have stronger students, based on undergrad GPA, scores, and research done before applying.

I think that the two cycles where universities went TO abruptly, plus the UC decision, will provide enough data to let them (as an aggregate and individually) know whether they can make good enough decisions without test scores.
Most can’t use AP tests or AP coursework as a benchmark (since their applicants aren’t in that cohort), so if they need a national benchmark, I wonder what that’d be?
I doubt they’ll return to “prepped-for” (in the “learn ways to trick the test” sense) psychometrician-designed standardized tests, though, unless they find major problems with their recruited pool versus back when they used the tests.
The UCs will not even be TO but test blind - they’ll be gathering data, I suspect, about what they want and how to achieve their desired objectives without these tests. Since most applicants come from CA, maybe something based on the CA HS curriculum, or something state-wide? Or “achievement bands” like the IELTS (where each school decides whether a 5.5, 6, 6.5, or 7 is required) or the GCSEs (where a national standard must be reached for a 5/C or a 7/A, but for the truly exceptional a predetermined national percentage gets 8s and 9s)?
It’s really short-sighted that the College Board scrapped the Subject Tests, though, as one, two, or three of them could have been used :slight_smile:

2 Likes

I hope test-optional admissions continue at the many schools that are test optional during COVID, in addition to all those schools already test-optional or not emphasizing testing:
FairTest | The National Center for Fair and Open Testing

I think that test-optional admissions really help with the extremely high levels of stress in later high school, and free up time and energy for students to pursue more important activities, explore and develop their interests, and grow as people.

Some schools award merit based on service or other activities, and that could extend to all schools.

Test scores encourage the idea that there is some hierarchy among people, as if scores somehow mean certain applicants rank higher than others. That is just not how admissions works, and it is an unhealthy perception.

Granted, large schools are often less able to be “holistic” in this sense and that needs to be addressed.

PS: A few things on accommodations: 1) It is pretty darn difficult to get accommodations on the SAT/ACT. 2) The concept of leveling the playing field is hard for folks to grasp. 3) Public schools can do testing. 4) Many times kids with accommodations will take the test in a separate room from the general test population. 5) In a competitive situation with scarce openings, there can be resentment based on anxiety; again, test-optional admissions may help with that.

3 Likes

For law school yes, but for PhD programs?

1 Like

Point of fact: (a) The number of college classes in which the entirety, or even in many cases the majority, of the final grade is based on exams is vanishingly small and shrinking, and (b) as the research continues to pile up, educators are recognizing more widely that exams aren’t really an optimal way to gauge student learning anyway.

Which leads me to think that one reason admissions offices are heading toward test-optional (and even test-blind) may be precisely that: a growing skepticism across the higher-ed sector about the utility of exams as a useful measure in most cases.

1 Like

I’m not sure these are universally shared views in higher education, and I don’t consider them a necessarily accurate reflection of academic opinion in that sector either. In any event, your opposition to testing seems to be principled and much broader than many others’.

2 Likes

Wow, I must be really old. I don’t know what your major was, but I had multiple (undergrad) and one grad, and another postgrad. All used exams to derive scores and grades. These were Ivy League schools.
Are you telling me no one sits for exams any longer? I find that VERY hard to believe.

Please re-read what I wrote. That is most emphatically not what I said, whether in the post you’re replying to or anywhere else.

I said that exams are decreasing in importance, and that the model where grades are based entirely on exams (e.g., 30% midterm+70% final) is disappearing rapidly.

Part of this is because there’s more and more recognition that best practices in higher education don’t center around exams, or even the sorts of things that exams generally measure. See, for example, this summary of high-impact educational practices produced by the Association of American Colleges and Universities, summarizing a research report (that is unfortunately not free to access); exams do not appear on the list.

1 Like

Seems very likely that it will highly depend on the courses taken, the college/university and its size, and many other factors. I might be convinced that it’s trending away from 100% testing, but I think it’s a leap to think that tests are going away. It’s funny: on CC, folks often posit something, link a report which they find useful, and then direct anyone who responds to their opinions back to the report. They also expect someone to read their report, which I did in this case because your argument was perplexing at best and I wanted to see if I could learn something new/gain perspective. I didn’t. LOL. That report had zero data, no schools, and IMO zero value.

What I have often wondered is how folks convince themselves that national tests are invalid and then send their kids off to college to take many tests over the years. They never explain how someone can do exceptionally well in college yet can’t take a national test. It’s not just the SAT, but the ACT and, in many states, other tests which actually test based on the curriculum.

“More and more recognition” by whom? Who ‘recognizes’ best practices in higher education? Who is making the final decision to throw out testing?

Sorry, I’m not convinced or enlightened in any sense. Love writing-intensive aspects, love projects, love it all, but if you don’t test, you can’t tell who actually learned the material.

3 Likes

I’ve read the entire thread, and my faith in humanity is definitely being tested by some of these responses.

Yes, my kid, a 34 ACT (one sitting, sophomore year), straight-A student with as many AP courses as our private school offers, isn’t getting into some of the schools she would have a year or two ago. But she will go to an amazing school (TBD), and I’m thrilled she will hopefully be going there with a more interesting and diverse crowd than maybe a few years ago. She still has all the advantages in life she has always had, and she is fortunate.

I just hope that she’s not in school with some of the kids of the responders. The system is not rigged against you, and it never will be.

Sincerely,
Mom of white, no financial aid, no hooks daughter who appreciates the new and probably permanent system

17 Likes

Wake Forest, an exceptional university, has been test optional for over a decade and hasn’t seen a decline in their student population. Go figure.

4 Likes

I’d like to emphasize “some” responses. No need to get all crazy over my generalization. Most of you are pretty normal people. :slight_smile:

1 Like

University of Chicago has been TO for a few years as well and clearly it’s working for them!

3 Likes

Do you have any evidence for this? That’s a pretty significant change, especially in STEM education. There’s been one major change, in that projects have gained some popularity, but it’s like 80% tests, 20% projects. How you do in a class is still determined by how you do on the tests, as most projects will be graded similarly.

“a growing skepticism across the higher-ed sector about the utility of exams as a useful measure in most cases.”

Again, broad generalizations. I think test optional may be the way to go, but GPAs are still based mainly on tests, especially if you’re in honors or AP classes. Teachers of AP classes typically use old AP tests as tests in their classes.

3 Likes

Well, now you started it. Here comes the UofC “chorus.” :joy:

1 Like

“I just hope that she’s not in school with some of the kids of the responders.”

If you want to take potshots at parents or adults go for it, but attacking kids is really poor form.

4 Likes

There were over 1000 test optional/blind/flexible colleges prior to COVID, including quite a few highly selective ones such as Chicago, Bowdoin, Colby, Middlebury, and Bates. A common theme among all of the colleges that have been listed as examples is that they have hardly any students majoring in engineering. At many of these schools, most students major in STEM + econ, but not engineering. There were a few test optional tech schools prior to COVID, such as Worcester Polytechnic Institute, but none that I’d consider extremely selective.

4 Likes

Like I said, the PDF I linked to was an executive summary. The report it summarizes has all the research and the data—but it seems impolite to link to a report that costs cash to access. But it’s on seriously solid footing, based on very intensive multi-site pedagogical research.

And the report was produced by the AAC&U—a highly respected educational group. It doesn’t matter whether you know about them—they’re a pretty big deal.

And they’ve done many more reports than just that one, including a number of studies on assessment trends in higher education (spoiler: fewer exams). Like, seriously, this is what they do.

I mean, I get it, you see value in tests. (Whether all tests in general or standardized exams in particular I’m not certain, but either way.) And I have not said—and let me repeat that, with emphasis, because several people out there seem to have not gotten this message, despite me saying it repeatedly—I have not said, ever, that all exams are bad, or even more bizarrely that there are no tests offered in college courses.

What I have said is that the focus and reliance on testing as an assessment method in colleges is decreasing, and that there’s a recognition that tests are a suboptimal way to assess massive chunks of postsecondary learning. And frankly, I’m kind of surprised that that’s a controversial claim.

5 Likes

Not PhD programs. Rankings there are based on what the PhD students do, because, you know, a PhD is a degree which demonstrates that a person has been trained to perform original research. Standardized tests like the SAT do not do that - they test how good a student is at repeating what they learned.

PhD programs are ranked by the publications of their grads, outside funding, etc.

Grad programs use the GRE for two reasons.

First, because they want to see how much material a prospective graduate student retained from their four years of college. While the programs could test this themselves, it takes a lot of time to grade, even more time to formulate, and even more time for the faculty to argue about what material should be covered and how much of each.

Second, because it is an easy way to cull applicants. Reading applications takes a lot of time, and faculty do not have much spare time. So anything that can provide a quick way to reduce the time that faculty spend is good.

Basically, they don’t want to work to bring students up to speed on the material they need, and don’t want to spend time testing them to make sure that they know it.

Imagine car design companies being ranked based on how well their engineers are able to identify the tools they use, how well they know the commonly used software, or how many parts of a car they know by heart.

To use an SAT-style analogy: that knowledge is to designing new vehicles as GRE scores and GPAs are to doing a PhD.

Some people really seem to not understand what a PhD is.

People, a PhD is not a giant test of the sum of knowledge in your field. It is not a test of memory, it is not a test of understanding complex ideas in the field, and it is not a test of your ability to perform experiments.

A PhD is an original piece of work. In general, it is figuring out an important unsolved problem, figuring out how to find the solution, finding the solution, and then writing it all down. In math it is slightly different: it is about finding a non-trivial mathematical problem and finding its solution (or proving that it has no solution). All people with PhDs have done this.

That is also why success in almost all standard courses does not do a good job of predicting how well a student will do in their PhD. Only courses in which students are required to do a project are useful for predicting success in grad programs that require a thesis.

Fields in which you cannot do this, like fine arts, have an MFA rather than a PhD as their terminal degree.

People earning professional doctorates and JDs do indeed get tested on their knowledge of their field, and that is why those doctorates are not PhDs.

The final defense of a PhD is critically different from other tests. It is the only test in which the person being tested is expected to know more about the subject on which they are being tested than the people who are testing them. That is what the committee is testing: does the candidate know more about this than they do?

3 Likes

Being a standardized national exam doesn’t necessarily mean that the test adds any value in predicting college performance. For example, the President’s Physical Fitness Test was intended to be a standardized national exam, but I wouldn’t expect its score to add a lot of value in predicting college performance. While the SAT is more relevant than the President’s Physical Fitness Test, it still is not a good reflection of the skills required to be successful in college, including the skills required to be successful on college exams.

For example, suppose you wanted to predict who would be most successful in a challenging college engineering exam that involves solving 5 long and complex problems using calculus and post-calculus math, and requires showing work. Would you focus on who can answer the most simple multiple choice algebra/geometry/trig questions quickly without making careless errors?

Or suppose you wanted to predict who would write the highest quality 10+ page final paper that involves hundreds of pages of reading and analysis of a complex topic. Would you focus on who can answer the most simple multiple choice questions about things like reading a paragraph or correctly identifying punctuation/grammar?

There is no doubt some degree of non-zero positive correlation in the examples above, but that does not mean that the test adds much beyond what is available in the rest of the college application at the “top colleges” that are mentioned in the subject header. The SAT math score might weed out some kids who haven’t mastered simple algebra/geometry, but the schools are probably going to figure that out from numerous other areas of the application besides just the SAT math score. As such, you rarely see admitted test optional kids who bomb the SAT at highly selective “top colleges”. Instead, test-optional admits may have a somewhat lower SAT score than the average among other applicants with similar transcripts, LORs, essays, and ECs/awards, such that submitting the score would hurt their chance of admission. But the scores are still usually quite high compared to a national pool.

For example, Bowdoin is a test optional “top school” that required (prior to COVID) admitted students to submit scores over the summer prior to attending. This allowed Bowdoin to report scores for all students, including test optional admits, which made up a good portion of the entering class. The portion scoring in different SAT ranges is below. I also listed score ranges for (in past years) test-required CMC as a comparison; it is a LAC with a similar admit rate to Bowdoin’s that has cared enough about its reported scores in the past to lie about them to USNWR.

2019 CDS
700-800 EBRW – 58% Bowdoin, 54% CMC
600-699 EBRW – 38% Bowdoin, 45% CMC
500-599 EBRW – 4% Bowdoin, 1% CMC
Less than 500 EBRW – 0% Bowdoin, 0% CMC

700-800 Math – 66% Bowdoin, 74% CMC
600-699 Math – 26% Bowdoin, 23% CMC
500-599 Math – 8% Bowdoin, 2% CMC
Less than 500 Math – 0% Bowdoin, 0% CMC

Bowdoin was test optional for a good portion of the class, yet it doesn’t appear to have any entering students who scored less than 500 on either section. Bowdoin also admitted very few who scored less than 600. The specific numbers skew slightly more toward the lower end than at CMC and other test-required schools – 4% at 500-599 for Bowdoin vs 1% for CMC – but it’s still very few students within the class. Instead, the majority of Bowdoin students scored >700, and almost everyone scored >600.

Bowdoin has been test optional for more than 50 years, so it’s not easy to compare how stats changed before and after going test optional. However, Bowdoin students appear to have slightly higher retention and graduation rates than is typical for test-required LACs of similar selectivity, so the test-optional kids certainly do not appear to be failing out. Some specific numbers are below, along with the continued CMC comparison. The previously linked Bates study compared submitters to non-submitters more directly and found no significant difference in either GPA or graduation rate between the two groups.

First Year Retention – 98% Bowdoin, 96% CMC
6-Year Graduation Rate – 95% Bowdoin, 91% CMC

1 Like