NY Times: Artificial intelligence and student essays

It makes sense to change teaching and testing methods, rather than try to block ChatGPT (and the like) from classrooms, or ignore the changes that AI is bringing to education.

These changes may also have another desirable consequence: fewer required papers will also reduce other forms of cheating (students have always been able to pay someone to write their papers and/or buy already-completed papers from certain websites).

It seems professors will first have to understand what ChatGPT is, what it isn't, and how it was created. From the article:

Frederick Luis Aldama, the humanities chair at the University of Texas at Austin, said he planned to teach newer or more niche texts that ChatGPT might have less information about, such as William Shakespeare's early sonnets instead of "A Midsummer Night's Dream."
The chatbot may motivate "people who lean into canonical, primary texts to actually reach beyond their comfort zones for things that are not online," he said.

This might be a bad quote, but we don't know exactly what data the OpenAI team trained ChatGPT on, and ChatGPT is not capable of searching the web (which that last quote seems to imply it can).

I also don't understand why the new AI-detector app, created by a Princeton student, has received so much praise/use. OpenAI provided an AI detection tool (based on GPT-2) quite some time ago (there are other existing AI detectors as well): https://openai-openai-detector.hf.space/
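For anyone curious how these detectors work at all: the actual GPT-2 detector is a trained classifier, but a common related idea is scoring how statistically "predictable" a text looks under a language model. Here's a toy sketch of that idea using a simple unigram model (the corpus and function names are mine, purely for illustration):

```python
import math
from collections import Counter

def unigram_logprob_per_word(text, reference):
    """Average log-probability per word of `text` under a unigram
    model estimated from `reference` (add-one smoothing).
    Higher = more 'predictable' given the reference data."""
    ref_words = reference.lower().split()
    counts = Counter(ref_words)
    vocab = len(counts) + 1          # +1 slot for unseen words
    total = len(ref_words)
    words = text.lower().split()
    logp = sum(math.log((counts[w] + 1) / (total + vocab)) for w in words)
    return logp / max(len(words), 1)

reference = "the cat sat on the mat the dog sat on the rug"
predictable = "the cat sat on the mat"
surprising = "quantum zebras juggle neutrinos"

# Text made of common patterns scores higher than unusual text.
assert unigram_logprob_per_word(predictable, reference) > \
       unigram_logprob_per_word(surprising, reference)
```

Real detectors use far larger models and training sets, but the underlying signal is similar: machine-generated text tends to look statistically "smooth" to another language model.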

I listened to this yesterday and thought it was really interesting. I'd never heard of the guy before, but now I'm subscribed to his YouTube channel.

If you don't want to listen to the whole thing, at least listen to the intro. :flushed:

I'll preface this by saying I know very little about this type of AI. I don't understand the purpose of it. I don't understand what people are supposed to use it for, or actually use it for. When they talk about people using it to write love letters, poetry, and papers, what was the impetus for the development of it in the first place? Was it designed to supplement learning? Was it designed to allow people to ask questions and have them answered, like an advanced version of Chegg? I'm continually amazed at the laziness and willingness to cheat. Many students these days seem to be quite a pathetic bunch. It makes me wonder whether the declining test scores in this country are attributable mostly to student indifference and unwillingness to put in the work of learning rather than poor teaching.

That being said, how is using this "tool" to generate papers, problem sets and such not cheating in every sense of the word? Is a student turning in a paper or problem set or poem written by a computer program as his or her own materially different than turning in the same written by another student?

Also found this statement confusing:
"In higher education, colleges and universities have been reluctant to ban the A.I. tool because administrators doubt the move would be effective and they don't want to infringe on academic freedom." How would banning use of this A.I. tool infringe on academic freedom?

Ultimately, these developments will accelerate the dumbing down of the population, the continued erosion of critical thinking skills, and the replacement of people with robots and computers in the workforce, as it becomes more and more apparent that we are going to innovate ourselves into obsolescence.

Here are some things one might use ChatGPT for…there are many articles covering this, but here's one:

One interesting tidbit regarding AI in college admissions I haven't seen discussed on CC (beyond essay writing…and well, the majority of colleges don't care about essays, so IMO things are a bit overblown wrt this): at colleges that have art programs/require art portfolios for admission, the AOs are very concerned, as some of the AI art-creating sites (not ChatGPT) are generating high-quality art…and there's no 'AI detector' for the artwork, at least as of now (AFAIK).


I will be repetitive in this case. What AI does is look at very large amounts of data, and find patterns and correlations that humans cannot. It can learn the elements of a "good essay" if there are thousands and thousands of good essays out there.

However, lacking a very large training data set, it's useless.
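The point above can be made concrete with a toy sketch. Real language models are vastly more sophisticated, but even a tiny bigram model (all names and the corpus below are my own illustration) shows both halves of the claim: it reproduces patterns it has seen, and it stalls the moment it hits something its data never covered:

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Record, for each word, every word that followed it in the corpus."""
    model = defaultdict(list)
    words = corpus.lower().split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n, seed=0):
    """Emit up to n more words by sampling an observed successor each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        successors = model.get(out[-1])
        if not successors:        # no pattern learned for this word: stuck
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "a good essay states a thesis a good essay supports the thesis"
model = train_bigrams(corpus)
print(generate(model, "a", 5))    # recombines word pairs seen in the corpus

# With no training data at all, nothing can be generated:
print(generate({}, "a", 5))       # just echoes the start word
```

Every word pair the toy model emits already occurred somewhere in its training text; it recombines patterns, which is the forum poster's point about needing thousands of examples to learn what a "good essay" looks like.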

An AI cannot provide new and original insights on any topic, nor can it create anything truly original. If academic journals are accepting articles written by AI, that does not mean that AI can replace academics; it means that the journals have deluded themselves and others for 100 years. It means that journals have never really looked for original writing, they have just looked for different versions of the same ideas and concepts, written in different ways.

It is not an indication that computers are taking over, only that some fields have grown stale and have been stagnating for years.

As for AI-generated art? It depends on what is being judged. If it's technical skill, that is a problem; for that, they would probably need to require proof that a work of art was created using a medium that a computer cannot yet use. If it's originality or meaning, then if a person can read those from a piece of art, shouldn't they also be able to tell that the artist is not human?

I've got that one cued up. Huge fan of Jon Favreau and a lot of the Crooked Media pods.

Copying and pasting what I said on the overall AI thread a couple days ago.

I taught college composition for many years, and though I'm out of it now, I am close to all the discussions going on in response to ChatGPT.

First of all, a student letting AI write their paper is NOT the same as using Google for sources, unless the student is not disclosing the use of sources. An AI written paper presented as student-written is plagiarism, just as much as buying a paper, or a student Googling and not documenting the source of their research or of the actual words.

At the same time, writing teachers are preparing for how best to incorporate ChatGPT. That might include things like asking it to do a task and then critiquing what it does, comparing AI and non-AI documents, etc. But they will also be discussing the ethics of when and how it's used. And misuse, when detected (and there are ways to detect it), will not be tolerated, like any other academic dishonesty.

The rhetoric used in most sample papers I've seen is pretty dreadful, though, so it worries me less than I thought it would. A whole lot of "some people say this" and "some people say that," ending with conclusions along the lines of "there are many ways to look at this." Which would not cut it in a college class.

Editing to add: SO glad I have retired from teaching writing.
