<p>hawkette, the Cal admissions committee looks into the things that concern you. If you would just take the time to read the articles dstark provided, you would see that the committee checks the kind of high school an applicant comes from, in addition to evaluating the person as a whole. So, in effect, your concern has already been addressed by the admissions committee, or whatever they call themselves.</p>
<p>Again, I reacted to your post because you claimed that Emory students are smarter than Berkeley L&S students. At one point you even claimed “way smarter,” using SAT scores as your ONLY basis. So I pointed out that using SAT scores as the only basis is misleading, because one of the schools you are comparing does not put as much weight on the SAT as the other does. If both schools used the same admissions method, the comparison would be fair. But they don’t, and now you know that they don’t. So don’t insist on it.</p>
<p>As for MIT and Caltech, here’s a better way to say it.</p>
<p>Say 1,000 students attend MIT every year. If 18% of these students fail to graduate on time for academic reasons, that means 180 students with near-perfect to perfect SAT scores fail to validate the effectiveness of the SAT. Then add the numbers from Caltech, Harvard, Stanford, Princeton, Yale, and the rest of the Ivies, and you will likely have over a thousand students with near-perfect to perfect SAT scores who fail to validate the effectiveness of the test.</p>
<p>My point is, if the SAT were as precise a measure as claimed, those thousand-plus near-perfect to perfect scorers would show a very small margin of error, and that would be reflected in their college performance.</p>
<p>The fact that over a thousand students (and maybe even more) fail to bear out the SAT’s predictions should tell you that it is not a very consistent or reliable tool on its own. The better approach is to combine SAT scores with the strength of a student’s classroom performance.</p>
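<p>The back-of-the-envelope arithmetic behind this argument can be sketched out. Every figure below is hypothetical: the enrollment numbers and the 18% rate are assumptions made purely for illustration, not real data for these schools.</p>

```python
# Hypothetical cohort sizes (NOT real enrollment figures) for the
# schools named in the argument above.
cohorts = {
    "MIT": 1000,
    "Caltech": 250,
    "Harvard": 1600,
    "Stanford": 1700,
    "Princeton": 1300,
    "Yale": 1350,
    "Other Ivies": 6000,
}

# Assumed share of each cohort not graduating on time for academic
# reasons (the 18% figure cited for MIT, applied across the board).
rate = 0.18

# Total top scorers who, under these assumptions, fail to bear out
# their near-perfect SAT scores.
total = sum(round(n * rate) for n in cohorts.values())
print(total)
```

<p>Under these made-up assumptions the total comfortably exceeds one thousand, which is all the argument needs; the exact figures are beside the point.</p>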
<p>Now, by whose standard are the SATs standardized? Which set of people, groups, races, nationalities, and socio-economic strata defines that standard?</p>
<p>Studies have shown that certain races or socio-economic levels have a different “standard,” which does not necessarily mean lower intelligence or aptitude.</p>
<p>The Berkeley study found that Black students are less likely to perform well on the SAT. Are you telling me that Black people are intrinsically (by nature) dull? What about English students at Oxbridge: do you expect them to perform excellently on the SAT? And if they don’t, are you going to tell them they are not as smart as the students at Emory?</p>
<p>Well, I guess that’s not true at all. And even if I were to agree with that statement, I don’t think Emory should be one of those schools. HYPSM? Acceptable. Maybe Duke, Columbia, UPenn, and Dartmouth too. But Emory? Show me how much more competitive those students are compared to Berkeley L&S, other than on the SAT.</p>
<p>Then you’re better than Berkeley. Is that what you’re saying? </p>
<p>Anyway, knowing now that Berkeley does not put as much weight on the SAT as the privates do, do you still think it is correct and moral to claim that Emory students are smarter than Berkeley L&S students?</p>
<p>RML,
All of your arguments about the usefulness (or not) of standardized test scores have been turned over time and again here on CC. As far as I’m concerned, this horse is dead. If it makes you feel better, stick with your myopic focus on GPA, class rank, and acceptance rates, and continue to shun the most obvious correlated link in reviewing admissions data.</p>
<p>And btw, the crack about morality and Emory students is so far off the mark that I hardly know how to respond. Claiming one group is stronger than another has zero to do with morality. But I certainly stand by my earlier statement that Emory’s student body is stronger as a group than UCB’s non-engineering student body. Whether that is correct or not, I leave to the reader, the admissions counselor, and the prospective employer to judge for themselves.</p>
<p>My apologies to the good folks at Emory for RML’s insistence on turning this thread into a throwdown between his blessed UCB and Emory. LOL. Is anyone else still bothering to read this train wreck? Heck, even I’m strongly tempted to turn to other threads for entertainment.</p>
<p>SAT scores are the best indicator…hmmmm…not according to this study…</p>
<p>No wonder UC Berkeley uses many indicators…and looks at the backgrounds of the students and the high schools of these students </p>
<p>“High-school grades are often viewed as an unreliable criterion for college admissions,
owing to differences in grading standards across high schools, while standardized tests
are seen as methodologically rigorous, providing a more uniform and valid yardstick for
assessing student ability and achievement. The present study challenges that
conventional view. The study finds that high-school grade point average (HSGPA) is
consistently the best predictor not only of freshman grades in college, the outcome
indicator most often employed in predictive-validity studies, but of four-year college
outcomes as well. A previous study, UC and the SAT (Geiser with Studley, 2003),
demonstrated that HSGPA in college-preparatory courses was the best predictor of
freshman grades for a sample of almost 80,000 students admitted to the University of
California. Because freshman grades provide only a short-term indicator of college
performance, the present study tracked four-year college outcomes, including
cumulative college grades and graduation, for the same sample in order to examine the
relative contribution of high-school record and standardized tests in predicting longer-term
college performance. Key findings are: (1) HSGPA is consistently the strongest
predictor of four-year college outcomes for all academic disciplines, campuses and
freshman cohorts in the UC sample; (2) surprisingly, the predictive weight associated
with HSGPA increases after the freshman year, accounting for a greater proportion of
variance in cumulative fourth-year than first-year college grades; and (3) as an
admissions criterion, HSGPA has less adverse impact than standardized tests on
disadvantaged and underrepresented minority students. The paper concludes with a
discussion of the implications of these findings for admissions policy and argues for
greater emphasis on the high-school record, and a corresponding de-emphasis on
standardized tests, in college admissions.”</p>
<p>“The result has been a de-emphasis of standardized tests as admissions criteria at some
institutions. This trend is evident at the University of California, which is the focus of the
present study. After California voters approved Proposition 209 in 1995, former UC
President Richard Atkinson charged BOARS (Board of Admissions and Relations with
Schools), the UC faculty committee responsible for setting university-wide admissions
policy, to undertake a systematic re-examination of all admissions criteria and to
consider a number of new policies.
Following BOARS’ review and recommendations, UC instituted several major changes in
admissions policy that became effective in 2001. UC introduced “comprehensive
review,” an admissions policy that more systematically took into account the impact of
socioeconomic factors, such as parents’ education and family income, on students’ test
scores and related indicators of academic achievement. UC also revised its Eligibility
Index, a numerical scale which sets minimum HSGPA and test-score requirements for
admission to the UC system; the revised index gave roughly three-quarters of the weight
to HSGPA and the remainder to standardized tests. In addition, BOARS proposed
and the UC Regents adopted a new policy called “Eligibility in the Local Context,” which
extended eligibility for UC admission to the top four percent of graduates from each
California high school. Under this policy, which also took effect in 2001, students’ class
rank within high school was determined solely on the basis of their HSGPA in college-preparatory
coursework, so that the effect of this policy, too, was to diminish the role of
standardized tests in UC admissions.”</p>
<p>“Lower test scores among some admitted students also caused misgivings among those concerned with collegiate rankings in national publications such as US News and World Report, which tend to portray even small annual fluctuations in average test scores as indicators of changing institutional quality and prestige.” :)</p>
<p>“But the diminished emphasis on SAT scores in favor of HSGPA and other factors has
not been without its critics. De-emphasizing tests led inevitably to the admission of
some students with poor test scores, as the then-Chair of the UC Regents, John
Moores, demonstrated in a controversial analysis of UC Berkeley admission data in
2002 (Moores, 2003). Lower test scores among some admitted students also caused
misgivings among those concerned with collegiate rankings in national publications such
as US News and World Report, which tend to portray even small annual fluctuations in
average test scores as indicators of changing institutional quality and prestige.
At the root of critics’ concerns is the widespread perception of standardized tests as
providing a single, common yardstick for assessing academic ability, in contrast to high-school
grades, which are viewed as a less reliable indicator owing to differences in
grading standards across high schools. Testing agencies such as the College Board,
which owns and administers the SAT, do little to discourage this perception:
The high school GPA … is an unreliable variable, although typically used in
studies of predictive validity. There are no common grading standards across
schools or across courses in the same school (Camara and Michaelides, 2005:2;
see also Camara, 1998).
Researchers affiliated with the College Board also frequently raise concerns about
grade inflation, which is similarly viewed as limiting the reliability of HSGPA as a
criterion for college admissions:
As more and more college-bound students report GPAs near or above 4.0, high
school grades lose some of their value in differentiating students, and course
rigor, admissions test scores, and other information gain importance in college
admissions (Camara, Kimmel, Scheuneman and Sawtell, 2003:108).
Standardized tests, in contrast, are usually portrayed as exhibiting greater precision and
methodological rigor than high-school grades and thus providing a more reliable and
consistent measure of student ability and achievement. Given these widespread and
contrasting perceptions of test scores and grades, it is understandable that UC’s de-emphasis
of standardized tests in favor of HSGPA and other admissions factors would
cause misgivings among some critics.
For those who share this commonly-held view of standardized tests, it often comes as a
surprise to learn that high-school grades are in fact better predictors of freshman grades
in college, although this fact is well known to college admissions officers and those who
conduct research on college admissions. The superiority of HSGPA over standardized
tests has been established in literally hundreds of “predictive validity” studies undertaken
by colleges and universities to examine the relationship between their admissions
criteria and college outcomes such as freshman grades. Freshman GPA is the most
frequently used indicator of college success in such predictive-validity studies, since that
measure tends to be more readily available than other outcome indicators.
Predictive-validity studies undertaken at a broad range of colleges and universities show
that HSGPA is consistently the best predictor of freshman grades. Standardized test
scores do add a statistically significant increment to the prediction, so that the
combination of HSGPA and test scores predicts better than HSGPA alone. But HSGPA
accounts for the largest share of the predicted variation in freshman grades.”</p>
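<p>The “predictive validity” comparison in the excerpt, regressing freshman grades on HSGPA alone and then on HSGPA plus test scores, can be sketched on synthetic data. Everything below is a toy: the data are generated so that HSGPA carries most of the signal, mirroring the study’s finding, and none of it is actual UC data.</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: HSGPA drives outcomes; the test score is
# correlated with HSGPA but adds only a little independent signal.
hsgpa = rng.normal(3.4, 0.4, n)
sat = 0.5 * hsgpa + rng.normal(0.0, 1.0, n)
fgpa = 0.8 * hsgpa + 0.1 * sat + rng.normal(0.0, 0.4, n)  # freshman GPA

def r2(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_gpa = r2(hsgpa[:, None], fgpa)                    # HSGPA alone
r2_both = r2(np.column_stack([hsgpa, sat]), fgpa)    # HSGPA + test score
print(r2_gpa, r2_both)
```

<p>On this toy data, HSGPA alone explains most of the predictable variance, and adding the test score yields only a small, statistically real increment, which is the same qualitative pattern the study reports: the combination predicts best, but HSGPA accounts for the largest share.</p>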
<p>dstark, thanks for the links - they’re all very useful. I actually took the time to read them, and I’ve been thoroughly informed. Sadly, despite their usefulness… some people just treat them like trash… </p>
<p>Ladies and gentlemen, </p>
<p>The SAT is a good gauge of a student’s intellectual ability. I have no objection to that; it is a fact. But as the Berkeley study has shown, it is not a perfect tool, so it is neither right nor moral to use it as the single basis for measuring intelligence or aptitude. And true enough, many students with near-perfect or perfect SAT scores do not show near-perfect aptitude: many of them drop out, get expelled, or never graduate from college at all. If the SAT were a perfect measure of intelligence, we would not see so many students with perfect SAT scores dropping out of college. But we see it happen every single year, in large numbers.</p>
<p>We also know that Emory students are smart, as Emory University is a tough school to get into. But concluding that they are smarter than Berkeley L&S students using only SAT scores as the basis is wrong, unjust, and immoral. That is a huge insult to the Berkeley L&S community and student body. Unless more data are shown, such a claim shouldn’t be taken seriously.</p>
<p>Well… RML…As it has been shown…UC Berkeley…has done studies… compiled the data… observed the data… analyzed the data…formed some conclusions from the data…and has implemented admission policies (grades are more important…SAT scores are affected by a student’s background) that take the data into account.</p>
<p>And that is what counts.</p>
<p>UC Berkeley does what UC Berkeley thinks is best after doing research.</p>