When does using ChatGPT become plagiarism?

Now that is an interesting thought. A little terrifying too. But maybe AI is the way to an effective match system.

I agree we are going to see increasing use of AI in admissions…in both holistic and stats-based admission systems.

Some schools already use AI to rate the video interviews that are part of the admission process, scoring factors like personality traits and communication skills.

There are also AI programs, like Element 451, that schools use to score how students engage with the school (emails, website visits, etc.) and use those scores to inform their yield models (the likelihood of a given student enrolling).
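For anyone curious what "engagement scores feeding a yield model" could look like, here is a minimal sketch. The feature names and weights are invented for illustration; real products like Element 451 are proprietary and surely far more involved.

```python
import math

def yield_probability(email_opens: int, site_visits: int, event_rsvps: int) -> float:
    """Toy logistic model mapping engagement signals to P(student enrolls)."""
    # Hand-picked weights for illustration only; a real yield model would be
    # fit to historical enrollment data, not hand-tuned.
    score = -2.0 + 0.15 * email_opens + 0.10 * site_visits + 0.8 * event_rsvps
    return 1 / (1 + math.exp(-score))

# A student who opened 12 emails, visited the site 5 times, and RSVP'd to
# one event gets an estimated enrollment probability of about 75%.
print(f"{yield_probability(email_opens=12, site_visits=5, event_rsvps=1):.0%}")
```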

U Edinburgh uses AI to prescreen apps for some majors, including CS.

The GRE essay is graded by AI. (ETS says AI grading is more consistent and shows less bias).

2 Likes

Yep - you just feed the AI a lot of essays by that age group, including the info that these are from 17-year-olds, and it would do that.

Overall, people are not original, and teens even less so. Teens spend most of their time just trying to fit in and be like their friends, and that will come out in their writing as well.

Writing something that is both original and enjoyable to read is well beyond the powers of almost any 17-year-old. If that were a requirement, there would possibly be enough students to fill the incoming class of one moderately sized liberal arts college, if that.

Back to solutions: one possibility is to require that any essay be submitted along with all of its drafts. While there are thousands and thousands of finished essays out there, there are maybe hundreds of examples of the first 10 drafts of an essay. Moreover, ChatGPT is probably not able to create 10 consecutive drafts that show changes, comments, and incremental improvement.
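To make that concrete, here is a minimal sketch of what an automated drafts check might look like. This is purely illustrative; `draft_progression` is a hypothetical helper, not any existing tool, and what counts as a "suspicious" similarity is an open question.

```python
import difflib

def draft_progression(drafts: list[str]) -> list[float]:
    """Return word-level similarity between each pair of consecutive drafts.

    Genuine revision should show gradual change (high but imperfect
    similarity); ten unrelated generations or ten near-identical copies
    would both look suspicious.
    """
    similarities = []
    for earlier, later in zip(drafts, drafts[1:]):
        matcher = difflib.SequenceMatcher(None, earlier.split(), later.split())
        similarities.append(matcher.ratio())
    return similarities

drafts = [
    "I like dogs a lot.",
    "I like dogs a lot, especially my neighbor's terrier.",
    "I love dogs, especially my neighbor's scruffy terrier.",
]
print(draft_progression(drafts))  # roughly [0.57, 0.59]: overlap plus change
```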

Add to that an annually changing prompt, which will make it difficult for anybody to compile a database of tens of thousands of essays on that particular theme. Even if the prompt is leaked, there won't be a useful data set until after application season is over.

There is also the possibility of using AI to identify essays written by AI. If there are specific characteristics of AI writing that can be identified by AI, based on a giant sample of AI-generated essays, that would be the best tool.

Maybe humans cannot identify AI-written essays based on their experience of reading a few thousand essays. However, AI can go through millions of examples, and it is not limited by how humans read and process writing and language.

That assumes, of course, that AI-generated text does differ from human-written text in ways that AI can detect.
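As a toy illustration of the idea (and of that caveat), here is what a minimal AI-text detector could look like, assuming scikit-learn as the tooling. The four example essays and their labels are placeholders I made up; a real detector would need an enormous labeled corpus, which is exactly the scale advantage described above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus, invented for illustration: 1 = AI-generated, 0 = human.
essays = [
    "In conclusion, resilience is a multifaceted concept that shapes us.",
    "Furthermore, teamwork fosters a dynamic and inclusive environment.",
    "My grandma's kitchen always smelled like burnt toast and lavender.",
    "I missed the bus again and decided to just walk the four miles.",
]
labels = [1, 1, 0, 0]

# Word n-grams pick up surface patterns (stock phrasing, low variation)
# that might distinguish model text from human text, if such patterns exist.
detector = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
detector.fit(essays, labels)

new_essay = "Moreover, perseverance is a vital and multifaceted skill."
print(detector.predict_proba([new_essay])[0][1])  # estimated P(AI-written)
```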

I was curious so I asked ChatGPT itself. Here’s its response:

When I asked it how it learned the patterns and structures of a certain age group when the underlying texts it learned from weren't labeled as such, it responded further:

2 Likes

Yes. And this isn’t even a recent change.
My wife used to be a GRE essay grader, and they started using computers to grade the essays 20 years ago. For a while they had both humans and the AI working in parallel to compare results. My wife left soon after, but she thinks they completely switched to AI a year later.

1 Like

AI isn’t relying on past input from any particular essay prompt to produce its output. It’s not looking through a compilation of old essays to produce new ones. Instead, it relies on a deep-learning neural network, trained on an ever-increasing body of text, to intelligently pull together something new each time from learned patterns and structures (see @1NJParent’s post). Changing prompts may stymie 17-year-olds, but it is no way to “fool” AI.

AI “thinks” synaptically the way the human brain thinks and has a vast database to draw from.
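A quick way to see this for yourself, sketched below assuming the openai Python client (v1+) with an API key in the environment; the model name is an illustrative choice. Hand the model a prompt that cannot exist in any essay database and it still produces a fluent response, because it generates from learned patterns rather than retrieving old essays.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A prompt unlikely to appear in any compilation of past essays.
novel_prompt = (
    "Describe how a broken umbrella you found on a Tuesday taught you "
    "something about ambition."
)
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[{"role": "user", "content": novel_prompt}],
)
print(response.choices[0].message.content)
```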

3 Likes

I teach at a university and have already used software to catch two plagiarized papers. Sometimes the anti-bot checker will merely say “it’s likely” the essay was written by an AI chatbot, and from there the professor can at least start investigating further. But I’m sure there are students who are still getting away with it. I believe the national statistic is that over 90% of students who cheat never get caught. I’m doing the best I can to at least decrease the incidence of cheating in my courses.

One of my colleagues in Philosophy has moved to oral exams in her smaller seminars, but that is time-consuming. While I still have paper assignments in all my classes, I also include in-class writing assignments and exams with short or long written responses. The exams are handwritten in blue books, but the in-class short papers are submitted electronically.

My courses are 75 min./2x week, and because I teach in a Humanities department at a LAC, my courses are all capped at 22 students and the average class size is 16. This allows me to sit at the back of the room for exams and in-class writing assignments and view everyone’s screen or desk. So while I don’t have to worry about AI chatbots writing the exams where I use blue books, I can also guard against cheating for the in-class essays because I have a view of everyone’s laptop screen.

For those essays, I let them bring any notes that are in hard copy form, but I inform them that their screen cannot leave the document page. Given the quality of the writing I’m seeing, they are clearly NOT cheating! :wink:

8 Likes

Gift link from the Washington Post - they did a lot of checking on the accuracy of ChatGPT. Mixed results. https://wapo.st/3A5lqU2

2 Likes

College Prof here. I think there is an easy and a hard answer to this…

The easy answer is that ChatGPT use is plagiarism if it supplants engagement with the course material. If you don’t do the work of the class and lean on ChatGPT, you are at the very least cheating yourself out of a learning experience.

The hard answer is that ChatGPT’s capability means that we have to rethink the purpose of assessment. If the goal is to assess students’ engagement with the class content, we have to think of different ways of doing it besides essays. Alternatively, if we could somehow ensure that students engage with the course, I am not sure that traditional assessments would be needed as much.

I’ve toyed with the idea of having a class where the assessment is for students to propose what grade they should get and support it with evidence of their engagement with the course material and the learning that they’ve achieved.

7 Likes

I’m not sure measuring engagement, if it can be done, is a substitute for assessment. Why couldn’t students be asked to write down their thoughts and rationales, step by step and in detail, in exchange for fewer problems to solve or papers to write?

1 Like

My son has had college courses assessed that way. He is a music ed major, and I believe these were courses on the education side. He hated it, but he was fine as long as it was clear from day one how he would be assessed and what the course expectations were.

I work as a tutor and had a high school student last week who desperately wanted to use ChatGPT to do his Japanese homework. I said no, but I bet he uses it on the days I’m not there. His parents don’t even know what it is.

2 Likes

This is an interesting approach that has been floated in higher ed circles for the past few years. It has also been met with considerable debate. One of the concerns stems from inequity and the “hidden curriculum.” This could put first gen students and students from underserved high schools and communities at a disadvantage. We already know that these students do better with more structure and guidelines, and they tend to struggle with the unspoken expectations and “best practices” of higher education. Students from private schools or affluent public schools could have a much easier time navigating this type of assessment. Given the rise in students with accessibility needs and numerous accommodations, I also wonder how this sort of grading would work on a practical level.

I worry how some students would do in a nontraditional grading system when they are still just trying to figure out how college works and what it means to be a college student.

I’m a parent and was a high school teacher, so take this for what it’s worth.

My children attended private schools for affluent families in a major metro up until high school. My youngest is still attending such a school; he is in upper elementary. My oldest is in an unremarkable high school. His prior school, an affluent school that did self-assessments for its students, was very ineffective. Even with the best-prepared and most qualified teachers, students who could demonstrate mastery in a presentation setting would fall apart when confronted with a traditional assessment. I saw this happen with my own child in middle school. He gave a presentation on a language arts topic in 7th grade that was easily upper-high-school level and showed mastery, but when I put the same information in a traditional assessment format in front of him, he panicked and couldn’t do it. He developed crippling anxiety from years of avoiding traditional assessment in favor of performance-based assessment.

I have spent his high school years and his last year of middle school trying to cure him of that crippling fear. It’s been an incredible amount of work on my part. If I had just sent him to fend for himself in public school, I know his anxiety would have arrested his learning.

He’s used to both types of assessments now. He still gets nervous with standardized tests but not enough to meaningfully affect his performance. He’s a top 10% student, perhaps higher.

If my youngest child’s school did not do traditional assessments, quizzes, tests, papers, research, etc., along with performance based assessment, I would pull him out in a heartbeat.

This isn’t about social class. This is about exposure. Kids are relying on ChatGPT and other crutches because they are available, the adults use them or accept them, and they become dependent. If you want them to stop using these crutches in college, you have to remove them. Obviously, if you are teaching in a huge lecture hall, that might be more than you can do, but you can at least warn them not to use the crutches and then tell them what to use for research instead. There’s no easy substitute for varied accountability, even, and especially, with so-called affluent students.

3 Likes

Not plagiarism exactly… but it could be an issue in art and media studies.

1 Like

Interesting article in the New Yorker exploring ways to think about and deal with the potential downsides of AI.

Excerpts from the lengthy article:

In a recent poll, half of A.I. scientists agreed that there was at least a ten-per-cent chance that the human race would be destroyed by A.I.

… when I ask my most fearful scientist friends to spell out how an A.I. apocalypse might happen, they often seize up from the paralysis that overtakes someone trying to conceive of infinity. They say things like “Accelerating progress will fly past us and we will not be able to conceive of what is happening.”

The most pragmatic position is to think of A.I. as a tool, not a creature. My attitude doesn’t eliminate the possibility of peril: however we think about it, we can still design and operate our new tech badly, in ways that can hurt us or even lead to our extinction. Mythologizing the technology only makes it more likely that we’ll fail to operate it well—and this kind of thinking limits our imaginations, tying them to yesterday’s dreams. We can work better under the assumption that there is no such thing as A.I. The sooner we understand this, the sooner we’ll start managing our new technology intelligently.

3 Likes

Scientists who actually understand how social and physical processes work are not talking about some sort of AI takeover of the world like Skynet, or other esoteric fears of who knows what.

One of the major issues of AI is its MASSIVE carbon footprint. AI may simply destroy the earth by accelerating climate change at rates far higher than anything caused by present technologies.

There was actually a summit on AI recently. Here is some coverage of it by NBC:

2 Likes

My husband works in tech designing computer components. A few years ago they had a big boom in business when everybody was buying new servers and processors for cryptocurrency stuff. Now, they have another boom because everyone needs to build up their servers and processors for AI.

It is great for his business, but I keep thinking, “where is all the electricity powering all these exponentially multiplying, constantly running computers going to come from?!”
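For a sense of scale, here is a back-of-envelope sketch. Every number in it is a made-up illustrative assumption, not a measurement of any real cluster, but it shows why the question is worth asking.

```python
# Hypothetical training cluster; all figures below are invented assumptions.
num_gpus = 10_000        # assumed accelerators in one training cluster
watts_per_gpu = 700      # assumed draw per accelerator, incl. overhead
hours = 24 * 30          # one month of continuous training
kwh = num_gpus * watts_per_gpu * hours / 1000

kg_co2_per_kwh = 0.4     # assumed grid carbon intensity
print(f"{kwh:,.0f} kWh ≈ {kwh * kg_co2_per_kwh / 1000:,.0f} tonnes CO2")
# -> roughly 5 million kWh and ~2,000 tonnes CO2 for one month, under
#    these assumptions, for a single cluster.
```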

2 Likes

The carbon emissions are staggering.

Here is Stanford’s Artificial Intelligence Index Report for 2023:

Banned poster, spammer. Not worth a response