I don’t think the risks are confined to journalism.
Most CNET articles are so badly written that AI would be an improvement.
Of greater import will be what ChatGPT does to the legal profession – writing briefs, contracts, etc.
CBS did a story on it this week. Students are writing papers with it. Ugh.
It is important that students learn how to use ChatGPT to be more productive in their jobs. So this is good training – writing papers with ChatGPT. Not that much different from students being able to use Google.
Many contract documents are already mostly copy-pasta. All is well and good until the proverbial “it” hits the fan and there is a lawsuit. Then every word, including “it,” gets examined under a microscope.
But people will stop paying $600/hr for copy-paste. These tools may be available for a small subscription fee.
Sure, one can already buy a will template from Legal Zoom for $200 or find a freebie. For the vast majority, that might work. After all, the deceased will not care that the heirs might be splitting hairs after his death.
A legal template works fine until it doesn’t. And that’s the problem: you never know when there will be issues. I’ll stick with using an attorney.
We would have thought the same thing about TurboTax 10-15 years ago.
But a live AI legal bot should be better than a static template anyway.
But filing your taxes is lower risk, because there’s not much chance of getting audited for most filers. Legal issues involve other PEOPLE who are much less predictable. We’ve learned that the hard way in our business this year.
I can already use ChatGPT to write code, scripts, etc. It’s enormously helpful, but it honestly makes it easier to not hire junior developers.
That is a poor comparison. I’ve used TurboTax for close to 20 years now, since before it could handle ISOs. I never doubted that simple taxes could be automated (even most stock awards could be handled in this manner). Most tax disputes do not arise from math errors. The squabbling is always about the inputs. Was the home office deduction legit or not?
An interesting read:
AI can only reproduce stuff that exists in large quantities. It can produce a good paper on Shakespeare because there are so many papers on Shakespeare out there that it can “learn” what a good paper on Shakespeare looks like. However, if you ask a kid to write a paper on a book that is not as popular, or a book that has been written in the past five years and on which there are not thousands of papers, AI will fail miserably at creating a coherent paper, much less a good one.
Since the vast majority of schools keep on teaching the same set of books that have been assigned for the past 70 years, and keep on asking for the same type of papers, this works. However, when teachers assign different books, or different types of assignments, AI will fail.
I would guess that AI would do a good job for much of the grunt work, especially creating things like standard contracts, as well as initial searches for rulings that could be used as precedent. However, AI is, again, not good at anything new or innovative, which is, in many cases, how good attorneys work.
It’s the same for replacing physicians. An AI would be good at diagnosing common illnesses. However, it is far more likely to miss a less common illness. Since “less common” can mean “less than 5% of the cases”, these do add up, which means that AI will likely miss a large number of serious illnesses.
Of course, tax preparation companies’ lobbying means that the IRS itself is not allowed to send out a pre-filled tax return based on information it already receives, such as W-2 forms. Taxpayers with nothing to add could then accept it as is, with no additional effort, while taxpayers who do have additional items would still have to do tax preparation. Most taxpayers do not have ISOs or other such things to complicate their returns.
I taught college composition for many years, and though I’m out of it now, I am close to all the discussions going on in response to ChatGPT.
First of all, a student letting AI write their paper is NOT the same as using Google for sources – unless, that is, the student fails to disclose those sources. An AI-written paper presented as student-written is plagiarism, just as much as buying a paper, or Googling and not documenting the source of the research or of the actual words.
At the same time, writing teachers are preparing for how best to incorporate ChatGPT. That might include things like asking it to do a task and then critiquing what it produces, or comparing AI and non-AI documents. But they will also be discussing the ethics of when and how it’s used. And misuse, when detected (and there are ways to detect it), will not be tolerated, like any academic dishonesty.
The rhetoric used in most sample papers I’ve seen is pretty dreadful, though, so it worries me less than I thought it would. A whole lot of “some people say this” and “some people say that” ending with conclusions along the lines of “there are many ways to look at this.” Which would not cut it in a college class.
Who may have their paralegal do the copy and pasting for them…
No pockets
Even if a paralegal does the work, I consult with an attorney and he or she is available if any issues come up. Very different from using a canned document with no backup.
People in AI had previously thought it would be harder to replicate what human brains can do than to replicate what humans can do physically. As it turns out, the opposite is true. As ChatGPT demonstrates, replicating general human mental capability (far from the highest levels, of course) is relatively more achievable than replicating general physical dexterity. The implication is that lower-level mental work will likely be more replaceable in the not-too-distant future. This includes many jobs in tech, law, medicine, etc. Top system designers will still be in demand, but run-of-the-mill programmers may not be. Top lawyers will still be needed, of course, but many others in the profession may not be. Surgeons will still be in demand (even with the advances in robotic surgery), but many internists may not be. A nurse may enjoy much greater job security than a doctor in this new world.