Evidence Of Racial, Gender Biases Found In Faculty Mentoring


<p>Oh yes, they could. For instance, they could have further described what the responses were and separated the positive replies from the others. In this case, an answer that was … no can do was counted as a positive. Then the pool of non-respondents could have been contacted to obtain an explanation as to why the email was not answered. As we know, the researchers sent an additional email within ten minutes to the respondents. </p>

<p>Lastly, there was no effort to describe what constitutes discrimination. They simply tossed the term around, or assumed that people would understand what the researchers meant. Of course, there was no control group to “measure” against in the first place. </p>

<p>Fwiw, unethical is not a yardstick here as it is as difficult to evaluate as discrimination is. I prefer to describe the study as flawed by design. :slight_smile: </p>

<p>I think it is a problem that they don’t tell us more about the responses. But I think it’s worth noting that 67% of the emails got responses. That cuts against several of the criticisms here.</p>

<p>A better but much more difficult (and even more unethical) study would require the construction of fake application packages, differing only in the names. This has been done for job application studies many times before.</p>

<p>The direction of this thread is interesting, but not surprising. Sometimes it’s easier to criticize a study than to take a look at results which may make us uncomfortable, and come up with ideas on how to improve. Of course, some would argue that there is no need for improvement.</p>

<p>I don’t think questioning the study equates to questioning whether bias exists, or implies that there is no need to look at the possible causes of discrimination. In this paper, however, I did not see any raw data presented as to the number of emails sent from each group and the actual responses received. Or, as others have said, any analysis of the nature of those responses. I just see statistical analyses defining “bias” as the ONLY reason for lack of response. The actual paper goes on to discuss why professors at private institutions are more likely to display bias than those at public institutions, even stating things like “therefore it may be that more discriminatory individuals prefer to work in higher-paid fields and at private institutions” (but acknowledging that more research would be needed to confirm this). Most of the PhDs I know take the best job with the best research possibilities at the best institution. </p>

<p>The size of the “discriminatory gap” appears to be based on regression analysis. There appears to be fairly wide variation. Why would professors be more biased against Hispanic males than Hispanic females (the difference is not even statistically significant for Hispanic females) and yet much more biased against Chinese females than Chinese males? Why is a 9% “gap” significant at the 0.1% level for Caucasian females, but a 14% gap for black males only significant at the 1% level? And the 13% public gap for Indian males is only significant at the 5% level. For many of the public institutions, there is no statistically significant gap (see Table 3 of the paper). Yet all of these gaps are presented in Table 2 of the paper. </p>
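<p>Part of that significance puzzle may simply be arithmetic: in subgroup comparisons, the p-value depends on the number of emails in each cell as much as on the size of the gap, so a smaller gap measured on a large subgroup can come out “more significant” than a larger gap measured on a small one. A rough sketch of that effect (the response rates and cell sizes below are made up for illustration; they are not the paper’s actual numbers):</p>

```python
# Illustration only: hypothetical response rates and subgroup sizes, not the paper's data.
# The same two-proportion z-test shows a 9-point gap in a large subgroup reaching a
# smaller p-value than a 14-point gap in a small subgroup.
from math import sqrt, erf

def gap_p_value(rate_a, n_a, rate_b, n_b):
    """Two-sided p-value for the difference between two response rates."""
    p_pool = (rate_a * n_a + rate_b * n_b) / (n_a + n_b)      # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))    # standard error of the gap
    z = (rate_a - rate_b) / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))        # normal approximation

# 9-point gap with 1,500 emails per group vs. 14-point gap with 200 emails per group
print(gap_p_value(0.72, 1500, 0.63, 1500))  # ~1e-7: clears the 0.1% level easily
print(gap_p_value(0.72, 200, 0.58, 200))    # ~0.003: clears the 1% level but not 0.1%
```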

<p>There may well be discrimination based on race or ethnicity (and ability to speak English) in higher education. I am just not sure this paper really captures it well or that the conclusions/causation are valid. It would be difficult to send the same email (or a similar one) to the same professor with a white male name and an ethnic name, but that might be a better design. </p>

<p>The positive thing about a paper like this is that it brings the issue forward and will hopefully cause professors to think about why there may be bias. However, if such papers overreach on conclusions, the results could be easily dismissed. </p>

<p>I think some people may go through the following analysis:

  1. I am a fair-minded, non-racist, non-sexist person.
  2. As a result, I would not behave differently based on somebody’s name in an e-mail like this.
  3. Most academics are probably also fair-minded people like me.
  4. Thus, there must be something wrong with the study.

In my opinion, statement 1 can be true, but statement 2 can still be untrue.</p>


<p>I am far removed from academia, so no, the results don’t make me “uncomfortable.” But then, I’ve never believed in #3 above.</p>


<p>:)</p>

<p>Hunt, certainly, but one could also believe that #2 and #3 are untrue, but that #4 is still true. The study only looks at whether professors respond to emails, not how they treat applicants or mentor students once they have them as PhD students. </p>

<p>If all the study shows is that academics (tend to) respond to e-mails from white males, but (tend to) ignore e-mails from others, it should still cause people to take notice. At the very least, it would justify a better study.</p>

<p>Let me clarify: I think 1 and 3 are both true–I think most of these academics believe themselves to be fair-minded, and if asked (even anonymously) if they’d be more likely to respond to an e-mail from people with this name or that name, they would say that it would make no difference, and they would believe it. What I think this study shows (along with similar studies) is that we act on biases that we are unaware of, and that are even contrary to our values.</p>

<p>To relate this to another much-discussed topic on CC: this kind of unconscious bias could possibly explain some of the disparities in college admissions of members of different ethnic groups.</p>


<p>It is also easier to comment without having made much of an effort to read the study with sufficient attention to understand the arguments of the critics. And, fwiw, pointing out that deficiencies exist is the first step on the road to improvement, both for the study and for the issues it purports to address. </p>

<p>Addressing the recurring problems of discrimination and inequality deserves rigorous studies and unbiased researchers. Simply stated, we deserve better than a flawed hypothesis and foregone conclusions. </p>

<p>hunt:</p>

<p>You’ve made a not-so-subtle shift from the singular (yesterday) to the plural this a.m. And therein lies the issue with the conclusions.</p>

<p>Your thoughts 1 & 2 from yesterday can easily both be true for an individual. And this “study” does not address that, because either the individual responded or s/he did not. We can draw no conclusions with respect to your 1 & 2.</p>


<p>“Can’t we all just get along?” copyright RK</p>

<p>The study sent the e-mails to a large number of (more or less) similar people. That, as far as I know, is the only way to perform a study of this kind. It could be a coincidence that the results turned out the way they did, but it’s not very likely.</p>


<p>The mechanism used by the investigators is not in question, as this type of “audit” survey is acceptable. But again, it is the lack of in-depth analysis of the replies and non-replies that is highly questionable. This is similar to using a simple pregnancy test (yes/no) to reach conclusions about the sexual preferences of various racial groups. The biggest failure was to think that the positive replies did not need to be quantified and categorized, to identify the “Thanks, but no thanks” responses, perhaps use those as a control group (not an ideal scenario, but better than nothing), and compare the polite refusals to the “ignore the email” answers. And, of course, the study should have tried to obtain an “explanation” from the one-third who did not respond, and distribute those answers among “I never saw the email,” “I did not have time,” and “I did not think the email was appropriate.” </p>

<p>As far as the lack of coincidence, this goes to the cliché of correlation versus causation. Although high correlation can be used to confirm evidence, the correlation observed here does not allow one to draw the type of conclusions presented by the researchers. At best, and this is a stretch, the study should be repeated with a better hypothesis and more attention to the quality and content of the responses than to the narrow overall measurements used in this “study.” </p>

<p>Asking for an explanation from non-respondents wouldn’t have done any good, because that would assume a) that the motivations of those who didn’t respond were conscious on their part and b) that they were inclined to be honest about them. But there is no reason to think that people who received e-mails from minority students were any more likely to be busy or to have lost the e-mail in their spam filter than those who received e-mails from white males. </p>

<p>The study already accounts for the fact that some professors are going to be busy, or won’t respond to such a generic e-mail, by focusing on the disparity between the two groups rather than the raw response rate. If they were claiming “50 % of profs didn’t respond to minority students, indicating bias,” that would indeed be a totally unwarranted conclusion because it would be attributing motive where there may have been none; for all you know, those professors never respond to any e-mails of that kind. But if you send an identical letter to two comparable groups of professors and one group of students gets a 50 % response rate and the other gets a 75 % response rate, that suggests strongly that there is another factor IN ADDITION to the ordinary reasons for not responding.</p>
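<p>To put a rough number on “not very likely”: here is a quick back-of-the-envelope simulation of that hypothetical 50% vs. 75% example. The group sizes are assumed (500 emails per group, which is not a figure from the paper); under a single shared response rate, a 25-point gap essentially never appears by chance.</p>

```python
# Back-of-the-envelope check with assumed numbers (not the paper's): if two groups of
# 500 professors each answered with the same underlying 62.5% probability, how often
# would we see a gap as large as 75% vs. 50% purely by luck?
import random

random.seed(0)
n, p_null, observed_gap = 500, 0.625, 0.25
trials, extreme = 10_000, 0
for _ in range(trials):
    rate_a = sum(random.random() < p_null for _ in range(n)) / n
    rate_b = sum(random.random() < p_null for _ in range(n)) / n
    if abs(rate_a - rate_b) >= observed_gap:
        extreme += 1
print(extreme / trials)  # prints 0.0; a gap that size is not plausibly a coincidence
```

<p>This matches the intuition in the post: ordinary reasons for ignoring e-mail apply to both groups, so on their own they cannot produce a gap of that size.</p>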

<p>As for the issue of quality of responses, I just don’t see any real basis for the hypothesis that professors might have been more likely to respond negatively to white males than to other students to the extent that it would negate the given results. Sure, if someone else wants to come along and run that study, that would be potentially interesting, but it isn’t, IMO, a logical inference. </p>

<p>A way that would have been interesting to do the study is if somehow email addresses had some sort of avatar on them (akin to how we … well, at least some of us … have avatars on forums like this one), and to see the discrepancy between “Steve Smith” (with a white face) and “Steve Smith” (with a black face). Because distinguishing between “Steve Smith” and “Taneequa Jones” is partially about race but also potentially partially about socioeconomic status. </p>

<p>I currently have 8361 unread email messages in my Inbox, although I have scanned all the titles. I don’t think I have any unanswered requests for a meeting that come from students. I do have unanswered emails from people that I respect highly; I’ve read them but haven’t replied yet (some of them have been around since March). I am likely to get to them when the dust has really settled from this semester. </p>

<p>If someone sent me a request for a same-day meeting, I might not respond to it. I have issued invitations to faculty members elsewhere to give seminars at my university, and found that they need two years notice to come to give a seminar.</p>


<p>Actually, that is either blatantly incorrect or an assumption you cannot make based on statistical evidence. The mere fact that an email remained unanswered is a good sign that something could have gone wrong. On the other hand, there is a 100 percent chance that the emails that were answered were … opened and read. You simply cannot assume that 100 percent of the non-answers were from recipients who had read the email and declined to answer. And unfortunately that is exactly what the investigators are doing here. </p>

<p>Accordingly, there are PLENTY of reasons why an email might have remained unanswered, including a slew of technical or physical issues. And, obviously, the investigators did not discuss the inclusion of software that would have tracked the delivery of the emails. Unless I missed it, they did not even address returned emails. </p>

<p>This bothers me. I’m both a woman and a minority, and I plan to go into a STEM field.</p>

<p>@xiggi, I don’t get your point at all. How is it “blatantly incorrect” that the fact that emails from minorities and women were answered less often is concerning? You think the spam filters only kicked in for certain names? Sure they could have tracked delivery, but whatever technical issues there may have been would have equally affected all of the sent emails.</p>