<p>For example, George Mason and the University of Pittsburgh. I am dying to get into UPitt main. I have a 4.0 weighted GPA my junior year, which I think is a 3.5 unweighted. I am on high honor roll, in several clubs (with three presidencies), I volunteer year-round, etc. My SAT score was a 1700. Pitt's acceptance rate is 56% or 58%. When I look at my chances on their graph, it looks like I'll probably get in, but I'll have to wait and see because there's still a chance I might not. George Mason has a 53% acceptance rate, yet when I look at where I stand on their acceptance graph, I'm basically guaranteed admission. Why is this? I am dying to get into Pitt, and it makes me nervous to see that I'm iffy there, while at schools like Pepperdine (30% acceptance rate) it looks like I'd get in. Are the graphs flawed?</p>
<p>The difference occurs because applicants are self-selected rather than an evenly distributed, random sample. The data points on the two graphs do not represent the same pool of applicants: students who choose to apply to Pitt simply have higher objective data (GPA, test scores) than those who choose to apply to GMU.</p>
<p>I have heard that, statistically, the hardest school to gain admission to (i.e., the one with the lowest acceptance rate) is actually a nursing program at a community college in Seattle that accepts less than 3% of applicants.</p>
<p>Could you explain that to me? Self-selection and objective data? I’m a bit lost! Thanks!</p>
<p>Let’s say five 4.0 students and five 3.5 students apply to College A, which admits the five 4.0 students for a 50% acceptance rate.</p>
<p>Let’s say five 3.0 students and forty-five 2.5 students apply to College B, which admits the five 3.0 students for a 10% acceptance rate.</p>
<p>Is College A or College B more selective? If you have a 3.2, do you think it will be easier to get into College A or College B?</p>
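<p>The made-up College A / College B pools above can be restated as a quick sketch; every number here (the GPA lists and cutoffs) comes straight from the example, nothing else is real data:</p>

```python
# Hypothetical applicant pools from the College A / College B example.
college_a_applicants = [4.0] * 5 + [3.5] * 5    # 10 applicants
college_b_applicants = [3.0] * 5 + [2.5] * 45   # 50 applicants

# College A admits its five 4.0 students; College B admits its five 3.0 students.
college_a_admits = [g for g in college_a_applicants if g >= 4.0]
college_b_admits = [g for g in college_b_applicants if g >= 3.0]

rate_a = len(college_a_admits) / len(college_a_applicants)
rate_b = len(college_b_admits) / len(college_b_applicants)
print(f"College A: {rate_a:.0%} acceptance rate")  # 50%
print(f"College B: {rate_b:.0%} acceptance rate")  # 10%

# A 3.2 student clears College B's weakest admit (3.0) but not College A's (4.0),
# even though College B's acceptance rate is five times lower.
print(min(college_a_admits), min(college_b_admits))  # 4.0 3.0
```

<p>So the "more selective" acceptance rate belongs to the school that is actually easier for a 3.2 student to get into, which is exactly the self-selection effect being described.</p>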
<p>Nowasi: what is the Acceptance Rate?</p>
<p># of acceptances divided by # of applicants. If one school gets fewer applications relative to the slots available, its acceptance rate is higher than that of a school that gets more applications per offer.</p>
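<p>In code, with the acceptance rate computed as acceptances divided by applicants (the slot and application counts below are invented purely for illustration):</p>

```python
def acceptance_rate(accepts: int, applicants: int) -> float:
    """Acceptance rate = number of acceptances / number of applicants."""
    return accepts / applicants

# Same 500 offers, different application volume (hypothetical numbers):
print(acceptance_rate(500, 1_000))   # 0.5  -> 50% (few applications per slot)
print(acceptance_rate(500, 10_000))  # 0.05 -> 5%  (many applications per slot)
```

<p>Same number of offers, tenfold the applications, one tenth the acceptance rate; the rate says as much about who applies as about how hard admission is.</p>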
<p>Remember your analytical skills</p>