Some schools no longer allow students to do graded homework at home because they can't verify the authenticity of the work.
For self-motivated students, AI will actually help. They'll learn faster and more efficiently. They'll think deeper and beyond what AI is capable of. As for the intellectually lazy, why do they go to college in the first place? Jobs that require only lower-caliber mental work will gradually disappear (even in what are currently the highest-earning sectors of the economy). And those that require higher-caliber mental work will require more than just a college degree.
I co-founded a tech-ish startup, and a couple of years ago we were using generative AI to create certain kinds of emails/letters. We were able to create many variations on a theme so that no two emails were the same.
We sold the firm in December and the team has scattered so I don’t have details but this was not an issue a couple of years ago.
Two of the professions expected to be most affected are humanities professors and lawyers, particularly at the associate level. Any job involving academic style research is at particular risk.
I thought this was interesting, and scary!
https://www.cnn.com/2023/03/29/tech/ai-letter-elon-musk-tech-leaders/index.html
Also, I participated in a webinar on GPT-4, which is now out. In terms of essay writing it is MUCH better than GPT-3.5. And 3.5 already produced a better first draft than many of my students do.
Yep. Adding that there are many scientists/profs using ChatGPT to write their grants (many people are posting on Twitter about how they are using ChatGPT, with examples). The grant proposals ChatGPT writes are 90% there; it's stunning. Of course, the user must do thorough fact-checking, as ChatGPT does make up scientific studies and other references.
My son writes articles for an online site. Everything from the best beaches in Florida to rules about British succession. He uses ChatGPT as a starting point, but then he revises the output to make it his own. He said he’s heard that some of the other writers are using it without doing the extra work, and it shows. There can be errors and awkward phrasing.
He works remotely from Warsaw and gets paid more than his sister does as a photographer, so it’s not a bad gig.
(To give other parents hope, this is the kid who for a while I was afraid might not graduate from high school. He ended up doing volunteer work with Syrian refugees in Lebanon and graduated with honors from the American University of Beirut. He always hated school and now makes a living at writing, go figure.)
Our son is thrilled with the quality of software code produced by ChatGPT-4. Senior developers are now freed from mundane coding tasks and can focus on solving problems faster and more efficiently. He is an architect-level developer and can review Chat code that isn't quite on target, feed the alternative back in, and ensure that that particular interpretation does not recur. This type of iterative learning improves both Chat and the user, which is how AI machine learning is supposed to work.
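The generate/review/feed-back loop described above can be sketched in a few lines of Python. To be clear, `generate_code` and `review_code` here are hypothetical mocks standing in for calls to a real LLM API and for a senior developer's review; this is a minimal sketch of the workflow, not an actual integration.

```python
# Sketch of an iterative generate -> review -> feed-back loop.
# generate_code and review_code are hypothetical mocks: in practice
# the first would call an LLM API and the second would be a human
# reviewer or a test suite.

def generate_code(prompt, feedback=None):
    # Mock model: the first draft is off target; once it receives
    # reviewer feedback, it produces the corrected version.
    if feedback:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # off-target first draft

def review_code(code):
    # Mock reviewer: run a quick check against the draft and return
    # feedback describing what is wrong, or None if it is acceptable.
    namespace = {}
    exec(code, namespace)
    if namespace["add"](2, 3) != 5:
        return "add() should return the sum, not the difference"
    return None

def iterate(prompt, max_rounds=3):
    # Feed the reviewer's correction back in on each round, so the
    # off-target interpretation does not recur.
    feedback = None
    for _ in range(max_rounds):
        code = generate_code(prompt, feedback)
        feedback = review_code(code)
        if feedback is None:
            return code  # reviewer accepted the draft
    raise RuntimeError("no acceptable draft within max_rounds")
```

With these mocks, `iterate("write an add function")` returns the corrected draft on the second round, after one round of feedback.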
AI is raising the bar across all disciplines, not just the humanities.
I agree it does free people from more mundane work and allows them to focus on higher-level tasks. But a lot of people were employed doing mundane work, and fewer may be needed now.
But that’s been the case for many years, as new technologies develop. It seems like people always find something new to do.
True. AI will affect not only the currently employed but also future hiring needs. Business will be able to plan for a more streamlined future workforce that requires and rewards higher-level skills.
Our son’s (and the Army’s) concern is that AI will widen the skill/need disparity faster than current economies can absorb the excess. The speed will exacerbate an already threatening divide and is a national security concern.
Here’s a short piece (maybe written by AI) that provides a fuller list:
For those who don’t like to click, the following are listed as vulnerable:
- Tech (coders, programmers, software engineers and data analysts)
- Media (advertising, content creation, tech writing and journalism)
- Legal (paralegals and legal assistants)
- Market research analysts
- Teachers
- Finance (analysts and personal financial advisors)
- Traders
- Graphic designers
- Accountants
- Customer service agents
My zone is heavy on accountants, finance people, lawyers and paralegals. If I were to rank-order the risk, I would put accounting at or near the top. We’ve already written several bots that eliminate X number of FTEs, and we’re not close to doing anything like that in the other functions. There is A LOT of low-hanging fruit in the accounting profession, in my opinion. The Big 4 employ people at six-figure salaries to work on audit teams who, at the more junior levels, get paid to do pretty low-level work. The mid- to more senior-level managers aren’t immune either. Especially given their reliance on data sampling, it is not inconceivable that the entire audit staff can be AI’d at some point, though the government will have to say that’s OK before it can happen. There will certainly be corporate pressure, because nobody likes paying the auditors. Finance and the paralegals are a little less clear to me, but I can see some room for AI there. Lawyers are a little further off, IMO, because judgment is involved starting pretty early in one’s career. But no doubt there are legal functions that can be AI’d.
I’ll repost what I posted in another thread. The impact of AI on various industries/professions according to a recent Goldman report:
Calling for a pause is unworkable. The genie is already out of the bottle. OpenAI has the lead, but many other labs are working on similar technologies. OpenAI isn’t going to agree to a pause so its competitors can catch up.
There’s another area that I haven’t seen much discussion of: it may be disruptive to K-12 and college education. GPT-4 has the capability to teach. Not only can a student ask GPT-4 to solve a problem, but s/he can also ask it to show how it solves the problem, step by step, and potentially interrupt it along the way to ask questions, not unlike what happens in school (but probably with more patience than most human teachers).
My initial gut reaction is to both love and loathe the idea. Everyone knows the damage a bad teacher can do, but on the other hand, a good one who inspires has soft skills and presence that would be hard to replicate with AI (unless, of course, the student somehow didn’t know it was AI teaching him or her and the AI had been written to have a personality).
I don’t believe it’s going to replace a great teacher. Not so great teachers, on the other hand, can probably use some help from AI.
How about an AI teacher that teaches each student according to their needs, optimizing each student’s learning and ability to reach their specific capabilities?
LLMs are only a part of a much much bigger picture.
Yes, if large numbers of people become an unemployable “useless class”, how will society deal with the situation?
Universal Basic Income is one option.