<p>There has been a lot of discussion about the value of "elite" colleges, and many of their graduates would claim that regularly being around exceptionally smart students in their dorms (and smart professors in small classes) helped challenge them - but what about objective data?</p>
<p>Have there been other colleges brave enough to share data on their graduates' GRE scores, or, even better, actual GRE scores vs. "expected" GRE scores for their student population (based on the incoming freshman class's selectivity and test scores)? Have there been objective studies comparing exit test scores (e.g., GRE or MCAT) with the "expected" test scores for those students? Knowing that Harvard grads have higher GRE scores than those of almost any public college is not useful without knowing what the "expected" GRE score for an HYPSMC student should be (given average SATs of 2200+ at these schools, their seniors would be expected to do well on the GRE), and comparing with similar students who chose less elite schools (perhaps for academic scholarships).</p>
<p>Many High Schools and grade schools test every year to try to assess whether they are improving - and to look for possible problems with teacher quality, academic rigor, and other areas where they might need to improve.</p>
<p>One would think colleges (other than Oberlin) would also self-assess (Freshman vs. Senior) based on GRE scores or other objective measures.</p>
<p>Is there comparative data among colleges for Freshman vs. Senior testing (or SAT vs. GRE/MCAT/LSAT, etc.) to help assess (in part) whether they are improving or getting worse, and whether they have a problem with graduates coming out less prepared than would be expected given their incoming admission statistics?</p>
<p>This data would be interesting, but I know of no other college that has studied it. The GRE is just one test… one should also look at the MCAT, LSAT, and GMAT. I can think of a lot of factors other than educational quality that could affect results, so results should be interpreted carefully.</p>
<p>I don’t feel that the GRE had anything to do with my daughter’s college education. She didn’t take a subject GRE, which might have had more relevance. She was always a very good test taker. She did a heavy math and science program and didn’t even take the usual English composition class in college -- her college does not have distribution requirements. She did minimal prep for the GRE just before taking it (as with the SAT) and scored very high on Verbal and Analytical Writing, near the top. She had a ton of math classes as a math/CS major, math far more advanced than what is on the GRE. But she scored lower on the math portion, a little lower than ideal for her grad application. I don’t know if she just hadn’t used that math in a while or needed to drill on practice problems more than she did. The test just wasn’t related to her curriculum. Isn’t the GRE just very similar to the SAT, with harder vocab words and still very low-level math? I can’t see how you could get much info from looking at those scores. She did get into very good PhD programs in CS, and I think that would be more useful info.</p>
<p>I’m wondering how these tests would be developed. What would they measure, and how? For example, an anthropology major vs. a business major vs. a chemistry major vs. an education major: the college education of each of these students would be markedly different from the others.</p>
<p>How can colleges objectively measure whether they are doing a good job with their students’ education - challenging them enough, developing them intellectually? High Schools and grade schools in the US and overseas routinely give such tests every few years to measure changes in students’ relative progress, and to try to identify problems at an institution by comparing scores of groups of students with the “expected scores” of that group. These tests are not the sole criteria, but they have been of value in identifying problems in High Schools, and would be expected to have some value in identifying problems in colleges.</p>
<p>Obviously, looking at Payscale data, and also eventual PhD, MD, and Law Degree outcomes for your college’s graduates, helps - but in addition to the usual published data, differences between seniors’ actual test scores and their “expected” scores could indicate what is and is not working in educating college kids.</p>
<p>I believe that a challenging curriculum and student body should be beneficial to a student, all else being equal - but by how much (so we can make more accurate value comparisons among educational alternatives), and how do we prove that?</p>
<p>My suspicion is that many colleges, not just Oberlin, have reams of internal data on the educational progress of their student body (if nothing else, in writing, which so many colleges now require no matter what you did on your AP tests), and very likely colleges have data on other objective measures as well.</p>
<p>That is not a surprise, but if, for certain subpopulations - certain majors at a particular University - the GRE scores (or MCAT, or LSAT, or…) were significantly lower or higher than predicted based on the students’ SAT scores (4 years earlier, plenty of time for the University to have some effect, positive or negative), then it gets interesting, and the data mining could begin to try to determine models and practices that work to improve college educational outcomes.</p>
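<p>To make the "predicted vs. actual" idea concrete: fit a model predicting the exit score from the entering score, then look at which subpopulations land above or below the prediction. Below is a minimal Python sketch; the data is synthetic, and the scales and coefficients are purely hypothetical stand-ins for the matched student records a real study would need.</p>
<pre><code>import numpy as np

# Synthetic stand-in data: entering SAT and exiting GRE quant scores
# for 500 hypothetical students, tagged by major.
rng = np.random.default_rng(0)
sat = rng.normal(1400, 100, 500)
gre = 100 + 0.03 * sat + rng.normal(0, 3, 500)  # made-up relationship
major = rng.choice(["physics", "psychology", "history"], 500)

# Fit a simple linear model predicting GRE from SAT across all students.
slope, intercept = np.polyfit(sat, gre, 1)
residual = gre - (intercept + slope * sat)  # positive = above prediction

# Mean residual per major: a large negative value flags a subpopulation
# scoring below what its entering stats would predict.
for m in sorted(set(major)):
    print(f"{m:11s} mean residual: {residual[major == m].mean():+.2f}")
</code></pre>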
<p>One thing that may be worrisome to Oberlin is the apparently lower performance on GRE subject tests (which test subject knowledge for students who majored in a subject, and which some PhD programs ask for) in physics, geology, chemistry, and biochemistry compared to other majors (percentile rankings of 47.0 to 54.7 in those subjects, versus 61.0 to 81.2 in other subjects).</p>
<p>However, these numbers are from 2003, and it is possible that they reflect greater strength in the overall pools of test-takers in those majors, rather than weakness in Oberlin’s students in those majors. For example, Oberlin physics majors were at the 47.0 percentile, while Oberlin psychology majors were at the 81.2 percentile. Does that mean that Oberlin physics majors are weaker, or that the overall pool of psychology majors is weaker (so that Oberlin students of similar strength score at a higher percentile against a weaker overall pool)?</p>
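<p>A toy calculation makes the pool effect concrete. Assume (purely for illustration) that each pool’s scores are normally distributed with the same spread: the same student, 1 standard deviation above the general population, lands at very different percentiles against pools of different strength.</p>
<pre><code>from scipy.stats import norm

ability = 1.0  # hypothetical student, 1 SD above the general population

# Two hypothetical test-taker pools, means in general-population SD units:
for name, pool_mean in [("strong pool (e.g., physics)", 0.8),
                        ("weaker pool (e.g., psychology)", 0.2)]:
    percentile = norm.cdf(ability - pool_mean) * 100
    print(f"{name}: {percentile:.1f}th percentile")  # ~57.9 vs. ~78.8
</code></pre>
<p>Same student, roughly a 21-point percentile gap, driven entirely by the assumed pool means.</p>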
<p>Comparing the entering SAT or ACT scores of the pools being evaluated would help reduce such effects. Taking a specific example, the NCES site indicates that Duke, Johns Hopkins, and Rice have similar test scores (JHU a little lower on the SAT, Rice a little higher on the ACT, but close). Comparing outcomes among those schools on subject tests (or LSAT, MCAT, or others) against the entering SAT scores makes it easier to understand expected outcomes. Better still would be correlating the expected scores of each subpool, based on the entering SAT scores of the same pool of psychology or physics or history majors at Duke vs. Rice vs. JHU. That would help eliminate the problem that schools with similar overall test scores may attract their strongest-scoring students into different majors, and thus have different “weaker” majors.</p>
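<p>Extending the earlier sketch, the residuals can be grouped by school and major to compare like subpools across schools. Again, the data and the fitted relationship are synthetic; the school names just echo the example above, and a real comparison would need actual matched records from each institution.</p>
<pre><code>import numpy as np
import pandas as pd

# Synthetic records for 900 hypothetical students across three schools.
rng = np.random.default_rng(1)
n = 900
df = pd.DataFrame({
    "school": rng.choice(["Duke", "Rice", "JHU"], n),
    "major": rng.choice(["physics", "psychology", "history"], n),
    "sat": rng.normal(1480, 60, n),
})
df["gre"] = 95 + 0.033 * df["sat"] + rng.normal(0, 3, n)  # made up

# One pooled model, then the mean residual per (school, major) cell;
# a cell well below zero is a candidate for a closer look.
slope, intercept = np.polyfit(df["sat"], df["gre"], 1)
df["residual"] = df["gre"] - (intercept + slope * df["sat"])
print(df.groupby(["school", "major"])["residual"].mean().unstack().round(2))
</code></pre>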
<p>Presumably Universities share this data among themselves, but don’t publish it.</p>
<p>Such data would be helpful in finding problems with different approaches to education among the many varied colleges.</p>
<p>I think we’d need a much better test than the GRE to measure this. And for many subjects, like history, English, and many others, what would you test? There isn’t a single core curriculum in these subjects at many colleges.</p>
<p>@Hunt
Agreed. But the point is not to say that college A is always better than college B, but to recognize signs that college B may have a problem (poor college culture, bad teaching, lack of student challenge, bad curriculum), or alternatively to identify what college A may be doing well. These clues could be used to dig deeper to determine whether there actually is a problem. Some of the GRE is general enough (not the subject tests) - I would expect the reading sections of the GRE to rise relative to the expected score depending on how rigorous the student’s electives were, and even the subject tests could be interesting if scores drop over a period of a few years at the same University without the curriculum changing.</p>
<p>Since we lack almost any useful data for comparing college outcomes (other than the Payscale data, PhD outcomes, and percentages of certain academic awards), having any objective data would be helpful - especially since there is a common perception (probably true, but hard to test) that:
harder schools with good professors, small classes, and stronger pools of students help make their students smarter (but is the effect really that strong? and how much is it worth?)</p>
<p>The GRE subject tests attempt to measure in-major knowledge (though, as you mention, majors without a lot of common core courses would be difficult to measure), but there really is no good generalized way to measure progress by standardized test across all majors and possible course selections in college.</p>
<p>I actually do NOT believe colleges study this sort of thing internally, which is ironic, because they have all kinds of research expertise. Most don’t make a serious attempt to rigorously track the success of their graduates or to see which approaches to education work best. I think universities are almost as prone to following the latest educational fads as the public secondary, middle, and elementary schools. Universities are pretty much in marketing mode all the time, not in research and development mode.</p>
<p>It would be ironic if High Schools, grade schools, and the school districts they are in study what works more than colleges do - trying to determine educational best practices and objective measurements more than the research Universities that have the expertise, but apparently not the objective data, to find problems in their own institutions. Obviously graduation data (% of 4- and 5-year degree completion), Payscale data, and eventual PhD/MD/JD outcomes are useful, but they would be far more useful if Universities tracked some of the test data that High Schools and grade schools do, to at least notice whether scores decline over time or show trends indicating possible problems. Very strange.</p>
<p>Keep in mind that the foremost objective of most elite research universities is research, not undergraduate instruction. I found that my HS (albeit a public magnet) spent far more time and energy trying out innovative ways to teach material than the elite private research university that I attended (and I would say that my HS was more successful). Instruction in many uni classes seemed downright retrograde, unchanged in methodology for the past several centuries or so.</p>
<p>Interesting.
At least the survey results can be used internally by the colleges that participate, and some of the national survey data can be used by parents to identify which general factors, based on survey results, to look out for on campus visits.</p>
<p>Click on a subject of interest, then click on Comparative Data and Institution List. Scroll down to see the colleges that use it.</p>
<p>The “bad” part is that I’ve yet to see any easily accessible public comparison online; however, colleges we’ve been interested in have shared their scores with us when we asked admissions. Therefore, it can be useful for an individual who knows which schools they might be interested in, if those schools participate.</p>
<p>Tippy-top schools tend not to participate - which, to be honest, often leaves me wondering if they are scared to. I definitely respect those that do participate.</p>