CALIFORNIA JUDGE SANCTIONED LAWYERS WHO SUBMITTED A TOTALLY FAKE AI-GENERATED LEGAL BRIEF
I have read a lot in the last month or so about how AI, in the form of ChatGPT, is ruining education. A LOT. And I get it, as kids these days suddenly have a ready tool to cut every possible corner in their education while never learning to use their own brains. But apparently, "kids these days" covers more than literal kids. And to be honest, I'm a little shocked to learn that lawyers, at least in California, have been caught using AI to produce a fake legal brief that they then submitted in court. And when the judge realized he was effectively dealing with middle school kids, he didn't hesitate to sanction them.
IF THE JUDGE HADN’T CONFIRMED FAKE LEGAL BRIEF DETAILS, HIS RULING WOULD HAVE BEEN INHERENTLY FLAWED
Judge Michael Wilner was reviewing a 10-page supplemental brief and decided to follow up on several cases cited in it that he wasn't familiar with. And it's a good thing he did before making a ruling, because he quickly discovered that some of these supposed cases didn't exist at all. In other words, the cases were invented by an AI to support the lawyers' case. And if the judge hadn't checked, he would have created legal precedent on the record based on a literally fraudulent legal brief.
ASKED TO CLARIFY THE LEGAL BRIEF, LAWYERS THEN SUBMITTED MORE FAKE CITATIONS!
So Judge Wilner's immediate reaction was reasonable: he simply asked the lawyers for clarification. But things got much worse when the clarification turned out to contain "more made-up citations and quotations beyond the two initial errors." Wilner said, "this was a collective debacle." In legal terms, that's strong condemnation. And it seems this fake AI legal brief wasn't an isolated incident; AI-fabricated filings are turning up all over the place. In this "case," the AI tools these irresponsible lawyers used were Google Gemini and Westlaw Precision's CoCounsel AI service.