My son is a sophomore. He wrote an essay, which the professor thought was written using ChatGPT. The professor raised the issue with the Honor Council, which is giving him two options: a) accept a reduced punishment of a multiple-grade reduction, or b) go to a hearing.
Any ideas about how to help? Can we hire a lawyer who specializes in such issues?
Our son loves the school, and chose it above many other options. If you can provide any help, it will be much appreciated.
1) Did he use ChatGPT and not cite it as a source?
2) There is no 100% accurate AI writing/ChatGPT detector, so if the student didn’t use ChatGPT he should push back.
3) Has your son spoken with the prof? Academic Dean?
Thoughts @hanna?
Why jump to a lawyer first? Your son’s school surely has a process to appeal this. He needs to explore this first. Sounds like your son wishes to contest this…so a hearing would be next.
Do you know why the professor thought this? Was the paper significantly different from other papers he wrote in the past? It seems like a really random thing to charge someone with, and I don’t know how the professor can prove it. I would worry, though, that if the paper was in a very different style than past papers, or significantly improved, the prof would say that if it wasn’t ChatGPT then he cheated in some other way. It’s easy to look at two papers and say there is no way they were written by the same person.
It’s too soon to get a lawyer. They have offered two alternatives and your son needs to avail himself of the hearing.
The college probably explains somewhere what that process is. I’d ask him first to come clean. Did he write it all himself?
Re ChatGPT detection: by now, I’m sure every professor has seen papers written using AI. They aren’t that difficult to spot, tbh. The main giveaway is that the writing is almost flawless, but also quite generic; real people make mistakes all the time. So if the professor made the claim, it’s likely he had a valid reason for doing so, but your son deserves a chance to defend himself too.
He needs to take that option if he’s being wrongly accused. Any lawyer would advise that route first, I’m sure.
If he did indeed use AI, a multiple grade reduction is a relatively minor punishment for the conduct and one I would encourage him to accept.
I disagree that profs/teachers can spot AI-written essays…the essays can be extremely good, especially if the prompts fed into the AI tool are thoughtful.
I don’t understand how a prof or teacher can administer consequences without proof…and there can be no proof…AI detection tool can say it’s X% likely this is AI, but right now they aren’t all that accurate. So the next step would be the prof asking the student if they used AI or not, then the student chooses what to do from there. Seems better for profs/teachers to allow using AI writing tools, as long as they are cited…and many are doing this already.
Many courses explicitly state that use of AI is not permitted in coursework and will result in disciplinary action for plagiarism. The professor is offering a mild sanction the student would be well advised to consider. I assume a formal hearing resulting in a plagiarism finding might lead to suspension?
I agree if the prof stipulated that AI is plagiarism, and the kid used it (which we don’t know right now), they will be lucky to not fail the class. Not sure about suspension, depends on the school’s policies.
OTOH, without proof, there is nothing to stop any student from saying they didn’t use AI, even if they did. (I am not condoning lying, nor saying this is what OP did).
I think I am ending up in the camp that profs/teachers may as well let students use AI as long as they cite it as a source, and hopefully they also teach students how to use it, its advantages/disadvantages, but of course my views may change!!
I think it’s problematic for profs to use AI to write their grant proposals, to take one example, and then not let students use it. Same for teachers writing LoRs with AI, and then not letting students use it. Lots for everyone to figure out, and there are going to be stumbles all over the place.
The difference is that in the other situations, the submitter is not necessarily representing the material as his own work; many companies hire grant writers, and it is not unusual for subjects to draft their own reference letters. As long as the submitter endorses the material, it is fine.
In this case, the student allegedly submitted the paper, intended to be evaluated as evidence of a student’s mastery of a subject, as his own work, contrary to course guidelines. Whether the student paid ChatGPT a nominal fee to draft the work, or another student a more substantial fee to do so, is immaterial.
Does your son have any record (ex. Google doc) of his work and revisions on this paper?
I understand what you are saying, and don’t disagree. I was suggesting that teachers allow use of AI tools, and many already are. If the student uses these tools, it just becomes another source they cite, like Wikipedia, or National Geographic, or whatever.
Did you ask him to be honest with you? Did he use ChatGPT or did he write it himself?
I would hope that professors and schools would be clear with students as to what tools are permitted and not permitted at the start of the semester. Some schools are ok with ChatGPT, some are not.
My DS attends a high school where ChatGPT is not permitted. They have a software program that checks for patterns that are detectable in AI-generated plagiarized work. I don’t know if the software works with ChatGPT (ChatGPT is very convincing), but the school might consider getting the software if they don’t already have it.
I see a lot of posts above re: tech tools and AI detectors. The one our school uses is not listed in this thread. It seems to have worked well at our school from what I can see. I have never heard a complaint about it from another parent, and we’ve had lots of students expelled for cheating. Our school has a three-strikes-and-you’re-out policy.
These tech tools are like a hydra. I’m sorry that your son is going through this, but it might be a good learning opportunity if he did in fact use ChatGPT.
Whether professors should allow AI is not the topic of this thread. If someone wants to start a new thread on the subject, feel free. For this thread, please focus on the OP.
But there are a couple of good ones out there, like GPTZero. It depends on whether the score such a detector gave was very high or not. If the score indicates that the likelihood is high, there is little to push back on. The professor does not need to prove it “beyond reasonable doubt”; they just have to show that the likelihood was high. The essay can also be compared to the student’s other essays. If it is very different in style and “voice”, that indicates either ChatGPT or somebody else writing it.
However a major problem that ChatGPT has is in citing other work. ChatGPT does not actually know how to do this very well. You can get it to provide its sources, and then translate them all to citations, but that’s a lot of work, and requires higher levels of computer savvy than most students have.
The entire point of AI is that it goes through a LOT of previous work and figures out what needs to be written. It does not, however, have the ability to figure out primary, secondary, and tertiary sources, to paraphrase a single source, etc. Nor is it very good at figuring out which of the sources is considered to be reliable or important.
Even if you get an AI to share its sources, you would have to put a lot of work into figuring out which of these needs to go into the paper. A paper which has 1,000 citations for a 1,200 word essay has been written by an AI bot.
So if the student has full and accurate citations, of a reasonable number, for their work, they have a strong defense against the accusation. If not, well, they may have an issue.
Sure there is…how can a school punish someone if they don’t have definitive proof and the student denies it? I can see many people (deans, appeal committees, etc.) being cautious about that. There have already been stories in the press about people wrongly accused of using an AI bot when they didn’t.
I didn’t mean asking the AI bots for their sources (as we know they can make them up, beyond being unable to accurately identify where their writing comes from). Some profs/teachers are allowing students to use the AI bots, and then just cite ChatGPT as a source.
ETA: It seems OP is gone, and what we are talking about is off topic for this thread. Hopefully OP comes back and gives some clarity to the situation.
Back to what this student should do…
- Either take the grade drop, or go to the hearing. If the hearing is the choice, the student needs to somehow be able to show that he did not use AI to complete this work.
Another thing the student might do is to run the paper through the various AI bot detectors (there are many) and see what they say (assuming the student didn’t use ChatGPT, which we still don’t know).
The student shouldn’t accept a grade drop if they didn’t use an AI bot.
A student doesn’t have to rely entirely on ChatGPT for an essay to be in violation of the school’s or the professor’s policy. For example, if s/he had used ChatGPT to generate the initial draft and afterwards supplied the other necessary ingredients, including all citations (by googling each assertion made in the ChatGPT-generated text), s/he would still be in violation of the policy. Also, I don’t think GPTZero or any other AI detector is able to keep up, now or ever, with the advances in generative AI.
GPTZero has too many false positives.