I don’t see how Bowdoin is misrepresenting the ability of standardized test takers. Bowdoin has always disclosed that 1/4 to 1/3 of the class do not submit scores, and we all know the non-submitters either didn’t take the tests or had scores that would not put their (holistically evaluated) applications in the best light. The change in Bowdoin’s reporting validates what we already knew, so there should be no surprise, nor was anything ‘misleading’. Knowing the full range of scores further tells us that applicants who choose not to report test scores absolutely have to have an application that stands out for other reasons.
Bowdoin’s enrolled students are from the top 10% of test takers. Why not say that, which is more accurate? Why pretend otherwise, particularly if SAT scores aren’t so important anyway? It is easier to give the range of enrolled students than the accepted students with caveats that 1. Many accepted students don’t enroll; and 2. It is only reporting the accepted students with the highest SAT scores, some fraction of the overall group. Is it really that shameful to say Bowdoin students are top 10% rather than top 5% of test takers? I guess Bowdoin thinks it is.
Well, you don’t “design” a study to meet what you want to find. Rather, to fairly test.
We’re talking applicants, who vary in strengths and presentations. Not machines. Without test scores, one can find the quality of students the college wants. Those kids at a Bates or Bowdoin are highly capable. Many were qualified for a single-digit college. The theoretical muss and fuss doesn’t get one far. Imo, you’ve got to allow that quality TO colleges, the top of the bin, know what produces college success at their schools. This is not a crapshoot, not a case of, “He’s got the gpa, let’s push him to finalist.” That’s holistic 101.
Even at my test-requiring college, kids are vetted on more than stats. Top stats don’t mean top applicant. It may seem counterintuitive to those fixated on scores, some notion they correlate to good, better, best. Top TO schools are looking for the same “rest of the story” that filters applicants at a tippy top. It’s part of match, a big part. And so you get a Stanford or WashU grad rate of 94+%, but UCB or Mudd at 90%. And guess what? Bowdoin and Wes at 92+, Bates at 88+. That’s not just kids who submitted scores. That’s not ASU (TO) at 56%.
Imo, if you’re vested in your kid’s success, you look for more than test superiority in that college’s pool, you try to match holistically. You don’t need to question how many admits got what scores. You do need to reach beyond that, TO or TR.
No doubt, @lookingforward. But that still doesn’t address why TO schools feel it is imperative for them to manipulate the stats to begin with.
A correction for the 2018 new SAT percentiles. Bowdoin reported 25th percentile of accepted students was 91st percentile. The enrolled 25th percentile was 82nd percentile. Same principle, different numbers.
It’s common that a school’s accepted test stats are higher than the test stats of the subset of students who enroll. Are you possibly referring to the change in Bowdoin’s reporting methodology of test stats (now reflecting test scores of all matriculated students?), re: @merc81’s posts?
“Manipulate” is a loaded word. It assumes people sitting around trying to game it. Not necessarily. There are bigger things to do than rub their hands in anticipation of the kids they’ll purposely mislead. Lol.
But I’ll say this: anyone just matching based on stats is missing the point, from the get go, not doing due diligence to the variety of points that matter. Then we get so many complaining the kid has the stats and got robbed, that it has to be a crapshoot or fraud.
Admissions officers are cogs in the giant business of higher education. Of course it is their job ( and often their compensation depends upon) to increase applications, yield, and reported scores and rankings. Test optional helps them to achieve that goal. It may serve other useful goals as well, but the reason they are reporting only their high scores is to game the rankings.
Mwfan1921, I realize enrolled stats will be different than accepted stats. But the magnitude of the difference, 85 points in this case, reflects that only the accepted students with high scores were reported. We can dispute whether there is a different type of student testing at the 91 vs 82 percentile, but the size of the difference resulted from skewed reporting to begin with.
Maybe a bit off track but curious what others find to be the actual value (from an admissions perspective) of the tests. I see opposing views well represented:
- In a world of subjectivity (let's face it, each HS is different. My kids attend / attended different schools in the same town - night and day. I'd say a 4.0 at D's school is more like a 3.0 at S's school. Very different level of student and teacher), the standardized tests are supposed to provide just that, standardized data. A 1500 is a 1500. Or is it?...
- ... with superscoring, multi test dates (don't think they let you take calculus over- just because you didn't do well- without it affecting your GPA), and test prep for those with time and/or money. My own kid improved significantly after getting a baseline and then working with a tutor. Was he smarter or a better student after the increase? No. Same kid. Had he spent more time in tutoring (vs. all the other things that make him a great student and leader) he likely would have cracked 1500. Again, same kid.
Another issue is that not all admission offices / processes are created equally. Different resource allocation. Don’t know this, but my gut tells me the TO schools tend to have far greater resources to truly practice holistic admissions (than large schools). Of course HYP could do it if they wanted. Probably fear it would affect the brand.
I agree with @lookingforward re: manipulation/gaming the system. Keeping with the Bowdoin example, I don’t buy that they are trying to game rankings, nor do they fret over a 1, or 3, or 7 ranking. Specifically with regard to USNWR rankings, the standardized test piece is a smallish piece to start with, and they do adjust downward for schools reporting less than 100% of test scores. We also don’t know what data schools give USNWR, or if it equals what is published elsewhere (CDS, website, student newspaper, etc.)
"Standardized tests: U.S. News factors admissions test scores for all enrollees who took the mathematics and evidence-based reading and writing portions of the SAT and the composite ACT. The SAT scores used in this year’s rankings and published on usnews.com are for the new SAT test administered starting March 2016.
We weighted standardized tests at 7.75 percent, down from 8.125 percent in 2018.
Schools sometimes fail to report SAT and ACT scores for students in these categories: athletes, international students, minority students, legacies, those admitted by special arrangement and those who started in summer 2017. For any school that did not report all scores or that declined to say whether all scores were reported, U.S. News reduced its combined SAT/ACT percentile distribution value used in the ranking model by 15 percent. This practice is not new; since the 1997 rankings, U.S. News has discounted under these circumstances because the effect of leaving students out could be that lower scores are omitted. U.S. News also footnotes schools that declined to tell U.S. News whether all students with SAT and ACT test scores were represented.
If the combined percentage of the fall 2017 entering class submitting test scores is less than 75 percent of all new entrants, its combined SAT/ACT percentile distribution value used in the rankings was discounted by 15 percent. U.S. News has also applied this policy in previous editions of the rankings."
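The quoted methodology can be reduced to simple arithmetic. Here is a minimal sketch of how that 15% discount would play out, assuming a made-up 0–100 scale for the "combined SAT/ACT percentile distribution value" (the function name and scale are illustrative; the actual USNWR formula is proprietary):

```python
def sat_act_ranking_value(percentile_value, all_scores_reported, pct_submitting):
    """Illustrative sketch of the USNWR test-score discount described above.

    percentile_value: the school's combined SAT/ACT percentile distribution
        value (hypothetical 0-100 scale).
    all_scores_reported: whether the school reported scores for all enrollees.
    pct_submitting: fraction of the entering class that submitted scores.
    """
    value = percentile_value
    # Per the quoted policy: a 15% reduction if the school omitted some
    # students' scores, or if fewer than 75% of entrants submitted scores.
    if not all_scores_reported or pct_submitting < 0.75:
        value *= 0.85
    return value

# A school reporting all scores keeps its full value:
full = sat_act_ranking_value(90.0, True, 1.0)       # 90.0
# A school reporting only high scorers, with 60% submitting, is discounted:
partial = sat_act_ranking_value(90.0, False, 0.60)  # 76.5
```

The point of the discount, as the quote explains, is that selectively omitting scores would otherwise tend to inflate the reported distribution, so the penalty partially offsets the incentive to report only high scorers.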
Their compensation? Where’d you get that?
Sure TO opens the pool. So what? Really. It does not mean un- or under-qualified kids get through. The first mission is to preserve the strength of the institution via the quality of incoming kids. Not game the rankings.
And some here are pointing out not all TOs only provide partial scores.
We have lots of college interests on CC, some more selective than others. The higher the tier, the better anyone needs to understand the colleges’ goals. It’s a level of thinking and action it takes and that has to “show.” But CC keeps cycling back to crapshoot and deception, one way or another.
How do you know?
The admission officer at a large private school in Boston so informed me. I suppose I should withhold the name to protect him. Surely you aren’t surprised by this.
I should clarify the private school was a university.
This thread is very confusing.
For people who think that TO colleges are gaming the system there is a super easy solution: don’t have your kids apply to them. Are you really offended that a college where you aren’t paying tuition, and to which you have no connection, does something that bugs you? If so, get a life. My state flagship spends more on a few sports than it does on beefing up its science programs. That bugs me because I pay taxes, so I have a modest stake in how it allocates its funds, but besides that, who cares? My kids didn’t go there, I didn’t go there, and some folks apparently believe that sports rankings matter more than how many Nobel prize winners are working on advanced genetic research (the number is zero btw- zero Nobel prize winners).
For people who think that ALL rankings are bogus, again- a super easy solution. Have your kid apply to University of Phoenix or another low residency/open enrollment type college, and get a BA from the comfort of your living room and his/her laptop.
For people who are bothered that high scores and high graduation rates have some correlation, again- get some perspective. It won’t come as a surprise that ballet dancers have a lower BMI than Sumo wrestlers. Why? Ballet as a discipline selects skinny/muscular people; Sumo selects large/muscular people. Similarly- kids with high scores may or may not be smarter than kids with low scores, but among a large group of high scoring kids you are going to find MORE kids who are better prepared for college and with better work habits than among a large group of low scoring kids. Simple math. No knock on the smart and well prepared kids who happen to have low scores- speaking about the pool in general, not any particular kid.
Can’t we all go back to worrying about bed-risers?
@roycroftmom I know some AOs are comped on certain metrics. I expect other schools beyond the one you heard from do as well. Knowing that however does not allow for applying broad generalizations or conclusions to the 1,000+ TO schools, or the remaining 2,000+ schools. Further, comping AOs in a certain way doesn’t necessarily mean said schools are gaming the system or being dishonest.
Blossom, but just scoring higher isn’t what makes a kid more qualified for a top college. It’s the complete picture. And without scores, TO schools focus intently on the same “rest” that tippy top TR schools do. There’s a lot to be learned from the “rest,” and obviously it works at the best TO schools.
@roycroftmom I do find it curious when AOs seem to give the full inside scoop. There are many sorts of “clarifications” some get from reps. Not usually the full picture. I find it hard to believe an AO would get nitty gritty with a visitor. (Maybe you know this person personally.)
Part of their job is to travel to their areas, keep the pipeline full, sure, find and interest the best kids. Can you imagine if the number of apps from some area dropped? But that doesn’t mean the focus is on ranking, per se. The pipeline is an institutional concern, not about the media.
Yes, they’re very happy to publicly report total apps, total admitted, and the selectivity percent. They publicize when some source names their X program or Y opportunity to be among tops. But to twist the minds of vulnerable 17 year olds? I think not.
And those kids can keep looking, trying to self match to what the college wants, not just what the student likes.
I’m not accusing them of twisting minds, just of doing their job. Many former admissions officials have complained that they were pressured to encourage applications from students with no realistic chance of admission, just to pump up the numbers. That doesn’t seem to surprise anyone. So I am not sure why this is any different- their AO jobs are to get the best possible reportable numbers for their school (whether that is yield, SAT score, or other any other desired numbers). Some part of their compensation may be based on job performance, as it is in most fields. Really not shocking.
Looking- of course, but my point is that if the kind of complete picture that TO colleges use to evaluate kids offends you- simple solution- don’t apply. You want a strict “admit by the numbers” evaluation? I can describe about 30 state flagships and another 300 state directionals that do that. They don’t care about the essay, they don’t care if you are the concertmaster of your region’s youth orchestra, and they certainly don’t care how many hours you spent at the animal shelter last year. Tell us your GPA and scores and we can turn around a decision in 48 hours. So for the folks who get bent out of shape about TO- hey, here’s your solution.
But very often total number of apps from a region drops- and the AO doesn’t get penalized for it. Why? The raw number of applications is one metric. If that region is low yielding, a smart director of admissions may decide to divert some resources to another part of the country. It’s not as simple as “more applications means better admissions strategy”.
I cannot figure out what problem this thread is addressing, either.
Standardized test scores are one particular type of information. Like every other type of information, they have some strong points and some weak points. No institution I care about makes admissions decisions solely on the basis of standardized test scores, or even primarily on the basis of standardized test scores. Almost every selective college is test optional to this extent: they may require applicants to submit test scores, but it’s completely up to the admissions staff whether and how much to pay attention to them. Almost every institution – up to and including Harvard and Caltech – probably has some layer of kids, 5% or 10% of the class, that is admitted completely without regard to test scores, based on other qualities and strengths. That layer never shows up in the 75-25 spreads – they are just part of the bottom 25%, and may not even move the 25% needle down much, if at all.