<p>As decisions roll in over the next couple of weeks, we will have an opportunity to determine the degree to which the admissions process is unpredictable. If we could get a number of folks to report their results, we might actually get some interesting information. There are a number of ways of accomplishing this. For simplicity, I suggest that we just report whether the results were exactly as expected within the University and LAC lists (let's use USNWR rankings for simplicity) or, if the results are not exactly as expected, the number of schools out of the total number of applications that had unexpected outcomes. For example, if your S or D is accepted at a school that ranks above a school that rejected them, you would report both schools as anomalies. If you know a reason why this may have occurred, e.g. legacy or recruited athlete at one school but not the other, you can mention it if you want. If you know a reason but don't want to mention it, you can just say that. But if you know a reason for the reversal of order but aren't willing to mention that there is one, it would be helpful if you simply did not report that event as an anomaly. Any suggestions on how to structure this analysis would be appreciated, but let's try to keep it simple.</p>
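<p>A minimal sketch of the anomaly count described above (hypothetical data; it assumes a lower USNWR rank number means a more selective school):</p>

```python
def count_anomalies(results):
    """Count ranking anomalies for one applicant's results.

    results: list of (usnwr_rank, accepted) pairs, where a lower rank
    number means a more selective school. A pair is anomalous when the
    applicant was accepted at the more selective school but rejected at
    the less selective one; both schools in such a pair are flagged,
    per the reporting scheme above.
    Returns (anomalous_schools, total_applications).
    """
    flagged = set()
    for rank_a, accepted_a in results:
        for rank_b, accepted_b in results:
            # accepted at the more selective school, rejected at the less selective
            if accepted_a and not accepted_b and rank_a < rank_b:
                flagged.add(rank_a)
                flagged.add(rank_b)
    return len(flagged), len(results)
```

<p>For example, an applicant accepted at the schools ranked 5 and 30 but rejected at the school ranked 18 would report 2 anomalous schools out of 3 applications, since the acceptance at #5 and the rejection at #18 reverse the expected order.</p>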
<p>But that's not how adcoms work. For example, if Yale is looking for a bassoonist (from another thread) but not Harvard, a bassoonist stands a better chance of being admitted at Yale than at Harvard. The adcom knows whether Yale needs a bassoonist, but the applicant does not. From the applicant's point of view, it's a crapshoot; from the adcom's point of view, it is not. Applicants are not always in a position to know which part of their application clinched the decision. And unless we know the full details of each application, we won't be able to make informed judgments.</p>
<p>LOL.
A parental version of CC's What (Were) His Chances?</p>
<p>(Marite's comments illustrate why the student forum of that name misses the mark: incomplete information, because there's no <em>comparative</em> information)</p>
<p>Yes, but we are concerned about the process from the applicant's viewpoint. And since the applicant rarely, if ever, knows if the university needs a bassoonist, it's one of the factors out of the applicant's control. From this view, once you make the initial cut of potential acceptees, it is a crapshoot as to whether you satisfy whatever it is that the college is looking for. And you'll never know exactly what that is, unless you're a recruited bassoonist.</p>
<p>Please, it is time to stop with the oboists and bassoonists. None of the elite schools without strong music departments will evaluate musical talent.</p>
<p>I also think that the opinions of students and parents are not likely to be useful. My D was rejected by a couple of schools that were substantially less selective than other schools which accepted her. At the time, there was a lot of stress and ego involved. It was difficult to see a pattern which should have been obvious. My D really did not fit or want to attend the schools which rejected her. I suspect some of that may have come across in the supplemental essays. Maybe those college adcoms were smart enough to realize my D did not have a social fit and had different academic goals which did not fit well.</p>
<p>Unexpected admissions results cannot be used to distinguish wise, holistic decisions from a system which some consider to be arbitrary and inadequate.</p>
<p>I don't think that this will work. For instance, a kid might be rejected by a safety or match school because the kid blew off the application or interview. Parents wouldn't know that.</p>
<p>It is a total waste of time to try to attribute whether a single student gets in (or not) to a single school to some random element of luck. There are too many variables to quantify (and indeed, many variables CANNOT be easily quantified at all), and one student's experience, or one single school's results, does not make a statistically reliable sample. </p>
<p>Yes, there is sometimes an element of <em>surprise</em> in admissions decisions from the APPLICANT'S (and the applicant's parents') point of view, but a good, reasonable list of options, treated in a serious manner, is the best way to ensure that surprises are kept to a minimum. I also think most admissions officers will tell you that there is not as much <em>surprise</em> from their point of view as outsiders would expect - they know what a good solid applicant to their institution looks like, and they also understand that sometimes difficult decisions have to be made based on other applicants in the pool, space available, financial issues, and other things that do not have a "crapshoot" element at all - just an institutional one.</p>
<p>Thank you Chedva. </p>
<p>To the rest of you,</p>
<p>The point of this exercise is to find out whether what you are saying is true. If it turns out that we have a sample of 100 and all the acceptances and rejections are exactly what you would have expected based on rankings, then the process is fairly predictable. If we have a sample of 100 and none of them are consistent, then other factors clearly play a role and we can argue about what they are. I used to teach statistics at the university level; believe me, I understand the limitations of the exercise. Nevertheless, I think the results will be interesting.</p>
<p>Curious, you probably can get the info that you are looking for by checking out this long, detailed admission results thread on the "Chances" board: <a href="http://talk.collegeconfidential.com/showthread.php?t=215577">http://talk.collegeconfidential.com/showthread.php?t=215577</a></p>
<p>Northstarmom,</p>
<p>The trouble with the thread you mention is that it would require an enormous amount of effort, on my part, to look up the ranking of every school mentioned. But I think most applicants are aware of the rankings of the schools to which they are applying and can check this fairly easily.</p>
<br>
<blockquote> <p>For simplicity, I suggest that we just report whether the results were exactly as expected within the University and LAC lists (let's use USNWR rankings for simplicity) or, if the results are not exactly as expected, the number of schools out of the total number of applications that had unexpected outcomes.</p> </blockquote>
<br>
<p>One difficulty of comparing results versus what was "expected" is that expectations are subjective and highly variable. One parent may have seen a previous year's bloodbath from selective admissions and thus have very low expectations for Junior despite his high stats. By contrast, other parents may have delusions of grandeur for their kids.</p>
<p>Another difficulty is that selective admissions themselves also have a strong subjective component -- the rating of the essay, the message delivered by the recs, the evaluation of the ECs, the impression from the interview. All of these are highly subjective. In fact it is the sum of this considerable subjectivity on the adcom side that many people mistake for or misname as "randomness."</p>
<p>So throw in the factor of "what the school is looking for" or what slots they are trying to fill and what have you got? Basically this analysis would be attempting to compare whether partly subjective results were in line with highly variable and subjective expectations as compared to the controversial and questionable USN&WR rankings, all confounded by the uncontrolled variable of what the individual schools are looking for. </p>
<p>I don't think you are going to get meaningful data from such an analysis.</p>
<p>In the hopes of collecting useful data, I would suggest that you eliminate some kids where special circumstances might influence admission results. This would include kids with hooks, legacies, URMs, kids applying to schools which generally have few applicants from their geographical area, athletes, musicians, kids with special or strong ECs, kids who have decided on a major or who have special academic interests, and kids whose essays might highlight any special college interests. I am not sure there will be many kids left for your statistical analysis. Why don't you just take SATs and/or GPAs and see how well they correlate with the USNWR rankings?</p>
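<p>The closing suggestion - checking how well stats track rankings - could be tested with a simple rank correlation. A sketch using a hand-rolled Spearman coefficient (pure stdlib, no ties assumed) on entirely hypothetical school data:</p>

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for two equal-length lists without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))

# Hypothetical illustration: USNWR rank vs. SAT 25th/75th-percentile midpoint.
# A strongly negative rho means more selective (lower-numbered) schools tend
# to have higher SAT midpoints, i.e. the rankings closely track the stats.
usnwr_rank = [1, 2, 3, 4, 5]
sat_midpoint = [1540, 1520, 1530, 1480, 1470]
# spearman_rho(usnwr_rank, sat_midpoint) → -0.9
```

<p>With real data the interesting question would be how far rho falls short of -1: the residual is a rough ceiling on how much of the "crapshoot" the rankings can possibly explain.</p>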
<p>coureur,</p>
<p>I refer you to post 8.</p>
<p>edad,</p>
<p>I know you are being sarcastic, but in the process you miss the point. Most of the factors you mention are true across the board. For example, a URM at one institution is generally a URM at other institutions. The exceptions, which I address in the OP, are things like a legacy at one school but not another, or a recruited athlete at one school but not another.</p>
<p>Schools have different needs, some of those are fairly constant, some vary from year to year. If you want to measure the crapshoot factor, you need to eliminate the special need variables.</p>
<br>
<blockquote> <p>I refer you to post 8.</p> </blockquote>
<br>
<p>Yeah, I saw post 8, and in it you said:</p>
<p>" If it turns out that we have a sample of 100 and all the acceptances and rejections are exactly what you would have expected based on rankings, then the process it fairly predictable."</p>
<p>And my point is that this is a totally lost cause, because you have not, and indeed you cannot, define what is "expected based on rankings." The expectations are personal and variable and the rankings are flawed.</p>
<p>Yes, we all agree that it is not really a "crapshoot", and admissions are often based on particular schools' needs. But since we don't know what those needs are, what do we expect for our child's (or our own) results? When we create the lists, we deem schools "reach, match or safety" (or some similar classification). We expect that the child will get into certain schools and that they might not get into others. And we do this in terms of our child, without considering the special needs of the schools or the particular competition.</p>
<p>Perhaps the problem is in the term "crapshoot". Perhaps we should just ask, did your results meet your expectations? If not, in what way were you surprised (getting into mega reach, not getting into safety, etc.)?</p>
<p>Different institutions have varying goals/targets, usually spelled out quite explicitly: gender balance, international students, ALANA admits, in-state vs. OOS, geographic diversity, legacies, recruited athletes, full-pay admits, kids admitted who need loan and work-study only, EA/ED, probability of yielding (which often makes expressed interest a factor), et al.</p>
<p>With some digging, you can get a rough sense of some of these institutional priorities; some "unexpected" results then seem much less like a crapshoot.</p>
<p>Chedva,</p>
<p>Yes, but we do need some kind of objective standard of expectations. A parent or student may have a personal belief, based on some random conversation, that one school is harder to get into than another. I'm not wedded to the use of the USNWR rankings as the reference point; we could use the midpoint of the reported SAT 25th/75th percentiles or something else.</p>
<p>I agree, curious - I was responding more to those who keep insisting that looking at it from the student's point of view is worthless because of the variable of institutional needs.</p>