Evidence Of Racial, Gender Biases Found In Faculty Mentoring

<p>By the way my original phrase was: “who needs an accusation of sexual misconduct?”</p>

<p>Apprenticeprof reports that as a “patently absurd sexual assault comment” and oldmom4896 calls it a “sexual-harassment prediction”.</p>


<p>It can be, if the prejudice is based on race.</p>


<p>Relying on apparent racial, ethnic, or national origin characteristics to pre-judge a candidate before actual evaluation is likely to lead to suboptimal selection. You might never know whom you missed, because you rejected the candidate on that basis before any further evaluation.</p>


<p>Partly true. The soulless drone happens when one has a 2400 SAT and nothing else except the usual academic awards. And, fwiw, the schools admit a VERY large number of those students, and in a proportion substantially larger than their share of the pool. The prejudice might be in the form of denying admission to a “favorite” school, but hardly to a school that is an immediate and comparable peer. And that prejudice might be based on the formation of the expected class. </p>

<p>But this study isn’t about “racism,” really. At least, in my opinion, it’s highly unlikely that any of those professors thought consciously, “I don’t want to meet with an obviously black doctoral candidate, because blacks are inferior.” Rather, this study (most likely) gets at unconscious biases–the kind we don’t like to think we have. The study may not show a lot, but I think it shows that names can affect whether a person will consider an e-mail to be just a kind of spam that can be ignored, or at least needs a response.</p>

<p>If you got an e-mail like this, would you just ignore it? It seems to me that I’d respond that I was too busy, or to send me a letter, or something. But I do, in fact, ignore a lot of e-mails, often barely even looking at them. What made these professors ignore some of these, but not others?</p>

<p>Xiggi, do you have a reason to assume that confirmation bias was skewing results here? I’m not being snarky; I’m honestly curious. It seemed like a fairly good study to me, assuming one interprets the results reasonably.</p>

<p>My understanding is that the results showed that female and black professors were not more responsive to members of their own groups, although Chinese and Indian professors were.</p>


<p>I think that Dadx did a fabulous job in interpreting the results reasonably. </p>

<p>To answer your questions, which I did not find snarky at all, I think that the hypothesis was as flawed as the methodology of the study. I also believe that the mechanism used to build the recipient pool served no other purpose than burying the flaws of the original concept. In so many words, I do not think that demonstrating “discrimination” by faculty based on mere non-responses to an email was possible. At best, and despite its obvious flaws, the study might confirm that homophily exists -- oh, what a surprise! </p>

<p>The biggest problem is that the conclusion of this paper presents a case of discrimination, a term that is thrown around by the researchers without much thought or integrity, in my opinion! </p>

<p>I didn’t read the article posted by the OP, but could the emails have gone to the professors’ spam/junk folders? I know for myself that when I receive emails from unfamiliar email addresses, they go to spam.</p>

<p>Fwiw, let me offer a parallel. On this site, many of us have received invitations to read essays or help with standardized tests. Assuming that the racial ID of the students who send the PMs might be known, would a negative answer (not responding to the PM) be considered “discrimination?” </p>

<p>All I could say is that my record would indicate that I answered tons of them, but ignored a LOT of them as well. And especially the “Hey, I have to submit to HYPS tonight. Can you review my essays?” </p>

<p>And considering the fabric of this community, chances are that I turned down many, many Asian kids … for the simplest of reasons. They probably represent the most active group of students here. And, they represent, by far, the overwhelming majority of students I tried to help in this past decade. </p>

<p>I agree that it doesn’t show discrimination. It may show bias, though. What I really think it shows is that different names have different signaling effects. The study is too crude to really figure out what those are, but I think it’s enough to show that they are there. And also enough to suggest that if your name is Keisha Marie Smith, you might want to sign your name K. Marie Smith in circumstances where your name is a major part of the initial information you are conveying, like on a resume or in an e-mail like this one. Sad, but prudent.</p>


<p>Yup. And it is rare that I ever read down to the name of the sender before I hit the delete key. </p>

<p>Spam = auto delete.</p>

<p>btw: I am actually surprised at all of the response by the faculty members – much higher than I would have guessed, particularly since, according to the cc LAC supporters, most Uni Profs don’t even have time for their own paying undergrads.</p>

<p>I think it does show discrimination, if the statistical rules were truly followed. For any one case, or a small number of cases, it could of course be sheer chance that certain requests are turned down and certain ones are granted, but once we get into big numbers, chance no longer explains it. </p>

<p>I get essay requests too, and I don’t even look at the names, and I have hardly ever responded positively to them, though maybe once or twice or three times I did, just out of… something to do? I have no idea what the names of the posters were, and it would be a surprise if it turned out they were all of one particular type of student and the bulk of those I turned down were all minorities. Would that mean, with three yeses at most, and me, one person, that I have a bias, and it is coming through subliminally? I don’t think so. But multiply that by, say, 100 or 1000, and I think the conclusion would have to be different; I would conclude that there is some subconscious bias. But three over however many years is too small a sample. We need a statistical analysis of the statistical analysis. </p>

<p>But as I wrote earlier, similar studies with names have shown that, yes, there are biases. Though I do not feel a shred of bias when I look at a name and a request, I have to admit I was a bit surprised once when a woman named Rachel Cohen (not really, but similar), who had been named to a certain position, was introduced and was clearly Asian, like most likely 100%. I had an expectation based on her name, and it took me aback. But I would just as likely have gone to the event even if her name were Mei Ling.</p>

<p>I agree with Hunt. Enough people were involved in the study - and the results are consistent enough with previous ones - that random chance (it just so happened that most of the x % of e-mails that these professors ignore were minorities) doesn’t seem to me a reasonable explanation for it. These biases still exist.</p>

<p>However, the title of this thread suggests how easy it is to misinterpret what the results actually show. “Faculty mentoring” is a very, very different thing than “meeting with a stranger based on a generic e-mail.” I suspect that plenty of people whose unconscious biases might lead them to overlook an e-mail from a Jamal would be happy to advise him once they actually got to know him as more than a name and race. Again, that doesn’t mean that the initial bias is OK, but it makes a difference. </p>


<p>If turning down or accepting to meet students (or discuss the matter with them) were what this study is all about, I might be more willing to accept the validity of its conclusions. But that is NOT what was analyzed here as the entire study is based on the response to the emails. As far as I know, they did not account for spam filters nor for … automated replies that cover absences of the recipient, or mere undeliverable emails. As far as I know, the same faculty member did not receive multiple emails. </p>

<p>The real test, and one that goes well beyond this study, to demonstrate discrimination would have to go much deeper, and measure the QUALITY of the answers and the desire to actually address the OBJECTIVE of the email sender, which is to discuss the future application. </p>

<p>Again, the evidence is that a number of emails were not answered, but no claims of discrimination (or the absence thereof) could be legitimately supported by this exercise. Fwiw, someone who did RESPOND to the email might very well be engaging in active discrimination, and someone who failed to answer might be a champion of equal rights. </p>

<p>The foundation of this study is too shaky to support the CONCLUSIONS presented by the researchers, at least in the terms they presented. Fwiw, the size of the sample might support the study’s statistical power, but it does not mask the poor foundation of the hypothesis. </p>

<p>In the end, one has to accept that a mere non-response can be attributed to a discriminating recipient, as opposed to a multitude of other scenarios. </p>

<p>I, for one, can see the correlation but find plenty of reasons to reject the conclusion. By the way, the title of the thread is “Evidence Of Racial, Gender Biases Found In Faculty Mentoring,” and it represents the gist of the presentation at NPR, which was IMHO far more misleading than the paper itself. Shoddy reporting and journalistic effort, and that is charitable. </p>

<p>But wouldn’t those objections apply to any study? I mean, if I want to test the effectiveness of a certain drug by using two groups, one an experimental group and the other the control, I can make sure that the groups are equivalent in many ways, but can’t guarantee that the people from group A might not, just by chance, have also been more diligent about getting a good night’s sleep and plenty of exercise than the people in group B. But if my sample size is big enough and the correlation between receiving the drug and survival is strong enough, that’s enough to make me feel pretty confident that the drug, rather than diet and exercise, had an impact. </p>
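<p>The point about chance imbalances can be sketched numerically. In the toy simulation below (my own illustration, not anything from the study; the 50% trait rate and group sizes are made up), people with a hidden 0/1 “exercises” trait are randomly split into two groups, and we track the worst gap in trait rates seen across many re-randomizations. With small groups the gap can be large; with large groups it shrinks, which is exactly why randomization plus sample size tames confounders:</p>

```python
import random

def max_confounder_gap(group_size, trials=2000, seed=1):
    """Randomly assign people with a hidden trait (present with
    probability 0.5) to two groups, and return the worst gap in
    trait rates observed across many re-randomizations."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        a = sum(rng.random() < 0.5 for _ in range(group_size)) / group_size
        b = sum(rng.random() < 0.5 for _ in range(group_size)) / group_size
        worst = max(worst, abs(a - b))
    return worst

small_gap = max_confounder_gap(10)     # tiny groups: large chance imbalances
large_gap = max_confounder_gap(1000)   # large groups: imbalance shrinks
```

<p>With groups of 10, the worst-case imbalance routinely exceeds several tens of percentage points; with groups of 1000 it stays within a few points, so a confounder like exercise habits can no longer plausibly mimic a real treatment effect.</p>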

<p>There’s no reason to believe that the e-mails from one group were any more likely to be weeded out as spam or to generate an auto-reply message than any other. You’re right that if it turns out that, while white males got more replies, minorities and women were more or as likely to get a positive reply, eliminating or reducing the disparity, that would be significant and lead us to a different conclusion, but I’m skeptical that that would have been the outcome, and don’t think this study is invalid for using responsiveness as a shorthand. </p>

<p>Again, the sheer number of emails sent makes the difference, since spam filters will hit all the groups equally as the law of large numbers comes into play. That’s why the size of this matters. With, say, 10 examples, it could be a string of coincidences. With 1000, much less so. </p>
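<p>The 10-versus-1000 point can be made concrete with a standard two-proportion z-test. The response rates below (70% vs. 60%) are invented purely for illustration, not taken from the study:</p>

```python
import math

def two_proportion_pvalue(x1, n1, x2, n2):
    """Two-sided z-test p-value for a difference between two
    response rates x1/n1 and x2/n2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# The same 10-point response gap at two sample sizes:
small = two_proportion_pvalue(7, 10, 6, 10)          # 10 emails per group
large = two_proportion_pvalue(700, 1000, 600, 1000)  # 1000 emails per group
```

<p>The identical gap is statistically indistinguishable from chance with 10 emails per group (p well above 0.05), but overwhelming with 1000 per group (p far below 0.001), which is the sense in which “with 1000, much less so” holds.</p>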


<p>You can’t achieve your goal of “making a difference” as a journalist(advocate) if you let evenhandedness and facts cloud the issues that need to be addressed.</p>

<p>But again, unless we have any evidence that the researchers did something unethical, there’s no reason to assume this is any more compromised than any other study that begins with some hypothesis - i.e., virtually all of them. </p>

<p>Questioning the results on these grounds isn’t intellectually fair, because there is really no way that the researchers could win. There’s not much defense to “I think your hypothesis is wrong, and if you show me evidence that it is right, I’ll assume you’re lying.” </p>


<p>The size does reinforce the statistical power of the results, but in no way overcomes the poor quality of the hypothesis. And, again, the conclusion is that there is discrimination in the mentoring of minorities, and this hypothesis was confirmed based on the mere lack of immediate response by the potential mentors to one email. In the eyes of the investigators, this was enough to conclude that the lack of response is discriminatory. </p>

<p>And for the investigators to intimate that the opposite (a response) would automatically indicate a lack of discrimination is silly, at least without measuring the final outcome. Do we have to believe that the people who are prone to discrimination are really that black and white, and that discrimination isn’t more perverse and … subtle? </p>

<p>Oh well, the beauty of this type of academic research is that this little site probably “reviewed” it more than it will ever be reviewed elsewhere. It is bound to adorn a few academic résumés and perhaps find its way into one of those academic journals that will be read by a couple of peers before gathering dust in some obscure library. </p>

<p><a href="http://blogs.lse.ac.uk/impactofsocialsciences/2014/04/23/academic-papers-citation-rates-remler/">Are 90% of academic papers really never cited? Reviewing the literature on academic citations. | Impact of Social Sciences</a></p>

<p>Regarding the above, I have little doubt that an equally poorly designed study might demonstrate that the lack of citing is due to the racial bias of the reader. :slight_smile: </p>


<p>Is there any other kind? :D</p>