<p>It is the lack of detail that makes it useful, in a way. If you can look at ALL the decisions being made, you can see trends across a large number of students. It is an aggregate metric, not a single-student metric. It is a view of the market at work, one that shows us which schools won and lost overall. The Common Data Set can tell you how many students applied, were accepted, and decided to attend. It can’t tell you which schools this school beat out, or whether one school is trending up at the expense of a specific competitor in a given year. The colleges try to collect this info themselves; every college D2 was accepted to last spring asked her to tell them where she was accepted and where she was attending. But they sure don’t give that data out to applicants!</p>
<p>Of course one student can make a decision for what you and I might consider irrational reasons. But if a herd of students are doing it, that tells you something about market movement. The current ranking systems are pretty slow to pick up trends; this kind of market-driven data with school-vs-school matchups adds an element they do not include. Like I said before, if the data source were better (say, from Naviance), this would be a fascinating addition to the rankings picture.</p>
<p>Something does not compute here. I am stuck in the office waiting for a conference call with some people in Tokyo, so I am killing time playing around with their app since I have nothing else to do. If you use their cross-comparison app</p>
<p>Pay attention to ‘how’ Parchment reached students to take their surveys. </p>
<p>The methodology, even though it cites many different research approaches, does not use the same type of samples those studies did - those studies used random samples. The volunteers who took the Parchment surveys are not random; word of mouth, email, or Facebook announcements may attract specific groups to take the surveys. A group cannot keep up that kind of advertising to its members every year, which is why the results can change dramatically from year to year.</p>
<p>and use various inferred relative rankings, you come up with this:</p>
<p>Harvard
Yale
Stanford
MIT
Princeton
Caltech
Columbia
UPenn
Brown
Duke
Chicago
Dartmouth</p>
<p>Note that these 12 overlap the US News and World Report top 12 with the exception of Brown and Johns Hopkins. It could be that Parchment uses the USNWR rankings as part of their algorithm in these head-to-head matchups.</p>
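<p>For what it’s worth, no external ranking is needed to get an ordering out of head-to-head matchups. The sketch below is my own illustration (not Parchment’s published algorithm, and the matchup percentages are made up) of a simple Copeland-style approach: count how many pairwise matchups each school wins.</p>

```python
from itertools import combinations

# pairwise[(a, b)] = fraction of cross-admits choosing a over b.
# These numbers are invented purely for illustration.
pairwise = {
    ("Harvard", "Yale"): 0.65,
    ("Harvard", "Stanford"): 0.60,
    ("Yale", "Stanford"): 0.55,
}

schools = {"Harvard", "Yale", "Stanford"}
wins = {s: 0 for s in schools}
for a, b in combinations(sorted(schools), 2):
    share = pairwise.get((a, b))
    if share is None:
        # Matchup stored in the other direction; flip it.
        share = 1 - pairwise[(b, a)]
    wins[a if share > 0.5 else b] += 1

# Rank by matchup wins, breaking ties alphabetically for determinism.
ranking = sorted(schools, key=lambda s: (-wins[s], s))
print(ranking)  # ['Harvard', 'Yale', 'Stanford']
```

This kind of inference only needs the matchup data itself, which is one reason the overlap with USNWR could simply reflect genuine applicant preferences rather than any adjustment.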
<p>I don’t think this is an accurate representation of how Parchment works. It isn’t really a “survey”. It is a website where you go enter your statistics and the colleges you intend to apply to; then you can get an estimate of your chances based on other students who have entered the same info AND their results. Then you are supposed to go back after being accepted and put in what really happened at all of the colleges you applied to (where were you accepted? did you choose to attend?). That is what they build their rankings from. So it is strictly voluntary – but I think there is an implication in your post that they are tracking people down for surveys – not really how it works.</p>
<p>Again…I think you don’t understand Parchment. I think all they are using is what students who were accepted at more than one school say they are doing when they choose a college from among those schools. They are not taking any external information like USNWR into account. For example: my kid last year was accepted at:</p>
<p>U of Chicago
Swarthmore
Harvey Mudd
Carleton
Kenyon
Mt. Holyoke
Lawrence
Macalester</p>
<p>She picked Mudd. So when you choose Mudd and another college on her list (say, Mudd & U of Chicago), her data is part of what goes into that percentage calculation. If someone picked Mudd and a school she did not apply to, her data isn’t used. If they picked 2 colleges on her list and she turned both of them down, that also would not be used in the calculation.</p>
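<p>The inclusion rule described above - only students accepted to BOTH schools in a matchup count, and students who turned both down are ignored - can be sketched in a few lines. This is my own reconstruction of the rule as described in this thread, not Parchment’s actual code, and the sample records are invented.</p>

```python
def head_to_head(records, school_a, school_b):
    """records: list of dicts with 'accepted' (set of schools the student
    got into) and 'enrolled' (the school the student chose).
    Returns the fraction of cross-admits who chose school_a."""
    a_wins = b_wins = 0
    for r in records:
        # Skip students not admitted to both schools in the matchup.
        if school_a not in r["accepted"] or school_b not in r["accepted"]:
            continue
        if r["enrolled"] == school_a:
            a_wins += 1
        elif r["enrolled"] == school_b:
            b_wins += 1
        # Cross-admits who enrolled somewhere else entirely are ignored.
    total = a_wins + b_wins
    if total == 0:
        return None  # no usable cross-admits in the sample
    return a_wins / total

# Invented sample data for illustration.
records = [
    {"accepted": {"Harvey Mudd", "U of Chicago", "Carleton"}, "enrolled": "Harvey Mudd"},
    {"accepted": {"Harvey Mudd", "U of Chicago"}, "enrolled": "U of Chicago"},
    {"accepted": {"U of Chicago", "Kenyon"}, "enrolled": "Kenyon"},
]
print(head_to_head(records, "Harvey Mudd", "U of Chicago"))  # 0.5
```

Note how the third record contributes nothing to the Mudd-vs-Chicago matchup, exactly as described: the student was never admitted to both schools being compared.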
<p>As discussed above, there are issues with this approach. It is not a random or complete sample of students who enter their final results. And someone could just lie and make up that they got into a top school, or choose a lower level school over a top school just to screw with the system.</p>
<p>inparent, I hear you. I do understand that is how Parchment works. It was just that using their app to do various head-to-head battles produced an inferred ranking which mostly matched USNWR’s top 12, so I came up with the conjecture that they might adjust their results by using USNWR rankings as a factor (just like polling agencies make adjustments to raw polling data). Of course the relative rankings within this top 12 are quite different from USNWR’s.</p>
<p>Of course they are two very different measurements. One is an overall college ranking and one is just a rating of which colleges applicants prefer.</p>
<p>I tend to believe Parchment data reflects the preferences of applicants. Yes, there might be stat inflation or acceptance inflation on a site where people self-report. But there is no reason why a person would report choosing a lower-preference school over a higher-preference one on said list. Most likely, acceptance inflation would narrow the gaps between the various colleges. A person might have been admitted to colleges A and B but rejected from C, and then decided on A. In such a case it is likely that C sits ahead of A and B in the rankings. Said person might falsely report being accepted to C and then report attending either A or C. If it is A, it would have the effect of narrowing the gap between A and C. If it is C, then the report correctly reflects the applicant’s preferences anyway, despite the acceptance-reporting inflation.</p>
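<p>A quick toy calculation (with assumed numbers of my own, not real Parchment data) shows the gap-narrowing effect: fake “accepted at C, enrolled at A” reports pull C’s head-to-head share against A toward 50% but do not flip it unless the fakes outnumber the honest signal.</p>

```python
def share(c_wins, a_wins):
    """Fraction of reported cross-admits choosing C over A."""
    return c_wins / (c_wins + a_wins)

# Honest sample: 80 of 100 real cross-admits choose C over A.
honest = share(80, 20)
print(honest)  # 0.8

# Add 30 inflated reports: students never actually admitted to C claim
# they were, then report enrolling at A.
inflated = share(80, 20 + 30)
print(round(inflated, 3))  # 0.615 - C still leads, but the gap narrows
```

So under these assumptions the ordering survives; only the margin shrinks, which matches the argument above.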
<p>…the major concerns of Parchment users, and therefore the inferences from this data, are not really about the quality of the college education, but about desirability, perhaps taking into account the finances and perceived lifestyle. I think a great many 17/18 year olds would choose Berkeley over Dartmouth for undergrad, especially if they live on the West coast. OTOH, perhaps few such people would actually apply to both (and thus provide data points).</p>
<p>Nothing stops a troll from claiming they were accepted at HYPS and accepted the offer from one of those (or from a state university or something like that). We see people trolling like that on CC – someone annoyed with their results might just put in junk to make themselves feel better, too.</p>
<p>“It is a website where you go enter your statistics and the colleges you intend to apply to; then you can get an estimate of your chances based on other students who have entered the same info AND their results. Then you are supposed to go back after being accepted and put in what really happened at all of the colleges you applied to (where were you accepted? did you choose to attend?).”</p>
<p>Exactly: non-random surveys!</p>
<p>Who told them to go to the website? Friends of the same caliber from the same area? Once the admission results are out, are friends from the school they chose reminding them to fill out the second part of the survey? This would give an advantage to groups that actively advertise to their peers to fill out the survey, and thus a higher preference score. (I am just making this up as an example: say the Penn admitted-students Facebook group encourages members to fill out the second part of the survey, and Brown didn’t have anyone doing this. This would give an advantage to Penn, since these students chose Penn already.)</p>
<p>If the invitation were sent out to ALL SAT/ACT test takers, with reminders to all of them once they enrolled in schools, it would make this survey more credible. (Even so, problems would still exist: for example, the survey cannot tell people why a school is more preferable - such as whether a student chose a school over others solely because of financial aid. If his family’s financial situation changed, his preference might be different.)</p>
<p>So, if I were to recruit a slew of people who graduated from my alma mater, we could all go on and pretend to be students and enter data to skew the numbers? Real reliable ranking method, isn’t it?</p>
<p>Moreover, this doesn’t take into account that students make an even more important choice at the beginning, when they decide not to apply to some schools at all. The vast majority of students apply to only a few of the numerous colleges for which they would be well-qualified or at least competitive admits. My oldest child would have been a viable candidate for most of the top schools on this list, one of which is not that far from us, but she was not interested in them. If she had applied under peer or school pressure, and been accepted to any of those schools, she still would have turned it down, even with a full scholarship. There are huge numbers of students throughout the country who could get into these “top” schools, but they do not apply to them for reasons ranging from proximity and cost to athletic allegiance. They make their choices before they send out a single application.</p>
<p>The only rankings that are reliable are ones that measure a specific, quantifiable, objective thing. We can rank which college has the fastest group of backstrokers on its swim team, the largest campus in acres, or the highest percentage of students with National Merit Scholarships. We can’t, however, rank the college that is the “most preferable” because we have no way to do that.</p>
<p>Applying to both falls in the realm of possibilities. Being accepted to both is a lot more problematic. And that is why Parchment is mostly a figment of the imagination of its “customers.”</p>
<p>It is as valuable as the moronic attempts on CC to compile a list of the preferences.</p>
<p>Students who are really interested in engineering, computer science, and to a lesser extent the physical sciences, would be more attracted to Berkeley than Dartmouth even for undergraduate education. Yes, it is true that Dartmouth’s undergrad is strong. But Dartmouth’s undergrad programs in the areas I enumerated above are not as strong as Berkeley’s. So someone who is dead serious about, say, becoming an engineer someday would rather do undergrad engineering at Berkeley than at Dartmouth, unless he’s OOS/int’l and can’t afford Berkeley.</p>
<p>Thanks to CrookedOne (#27) and RML (#33) for rightly correcting my broad generalization. I accept your error-correcting, case-specific arguments without any hesitation.</p>