Free link to article. The problem is that the software used to identify ChatGPT use is overly sensitive and not specific. In other words, it flags people who didn’t cheat.
I expect we will see more of this.
As the article states, there is no 100% accurate AI detector.
It also doesn’t seem that the professor gave direction that using ChatGPT (or other AI language models) was not allowed. Profs are going to have to address this in their syllabi…e.g., yes you can use ChatGPT/AI if it’s cited as a source, or no use allowed, or whatever.
In classes where AI is not allowed, I would encourage students to run their final essay through an AI detector themselves (and to keep a history of their writing, like the student in the article who had a Google Doc version history).
Schools go test optional in order to be holistic and then rate applicants based on essays they may or may not have written… It makes more sense to go essay optional at this point.
In case anyone is confused by your post…
This post has nothing to do with test optional, or making application essays optional. This post is about a class of students being accused of cheating using AI.
And in fact, in this case, it seems that most did NOT use AI and DID actually write their assignments themselves, making the above point all the more nonsensical.
Posters are welcome to start a new thread to talk about essays, but please stay on topic in this thread. Thank you.
Was there more information released? I thought 2 students were exonerated, 1 admitted guilt, and the investigation continued on the rest.
I totally agree with AI detectors being far from accurate. I had to check something in a professional capacity, and ran it through a detector. It came back saying it was virtually certain the document was written with AI. Luckily, I checked several other detectors, and they all said it was highly likely that the document was actually authored by a human. Inconclusive results, to say the least.
One thing to consider is whether the AI detector itself retains a copy of the document uploaded to it. My understanding is that at least some plagiarism checkers do that, meaning they have access to what could be a perfectly legitimate document, and who knows what is done with it afterwards. I would carefully check the privacy policies, etc., of these kinds of detectors to make sure you know whether a copy of the document will be retained by the checker platform.
I am no expert in this kind of technology, so this is just a gut reaction. I would be curious to hear more about the document retention capabilities of these detectors/checkers from those more knowledgeable than me.
Sigh…technology is a good thing and an awful thing. AI can do so much for humanity and this world, but it can do so many damaging things too.
They did not state that all were exonerated; they said that several were, but indicated that no student had failed the class. Of course, a student could have cheated and still managed to pass the class, but the statement doesn’t specify beyond the one who admitted his guilt.
"Michael Johnson, a spokesman for Texas A&M University at Commerce, said in a statement that no students failed Mumm’s class or were barred from graduating. He added that “several students have been exonerated and their grades have been issued, while one student has come forward admitting his use of [ChatGPT] in the course.”