Do Lawyers Have a Duty to Supervise Legal AI?

21 June 2025 | Vincent AI | vLex Insights

Imagine walking into court with a brief that cites non-existent case law. Sound like a nightmare? For a growing number of lawyers, this nightmare has become a reality.

In both federal and state court systems, attorneys are facing sanctions for filing briefs that cite hallucinated cases. This growing crisis raises a critical question: Do lawyers have a duty to supervise the work of legal AI?

Lawyers arguably have an ethical duty to supervise legal AI under ABA Model Rule 5.3, “Responsibilities Regarding Nonlawyer Assistance.” But even without an explicit ethical obligation, they absolutely should be doing so.

When AI Supervision Fails

In 2023, New York attorneys Steven Schwartz and Peter LoDuca became infamous in the legal industry after submitting a federal court brief containing completely fabricated, ChatGPT-generated cases. Their unprecedented $5,000 sanction for what the court called “bogus judicial decisions with bogus quotes and bogus internal citations” sent shockwaves through the profession.

But that was just the beginning. Colorado attorney Zachariah Crabill went beyond submitting AI-hallucinated cases: he compounded his error by lying to the court, claiming an intern had done the research. His punishment, a 90-day active suspension, was one of the first attorney suspensions specifically for AI-based misconduct.

In 2024, a federal judge delivered a scathing rebuke of Benjamin Kopp’s AI-generated attorney-fee justification, criticizing it as “utterly and unusually unpersuasive.” The sharp words underscore how AI misuse undermines professional credibility.

These aren’t isolated incidents. As of the date of this article’s publication, AI hallucinations have appeared in at least 157 lawsuits worldwide, creating an epidemic of fabricated legal authority that threatens judicial confidence in the legal profession.

Does ABA Model Rule 5.3 Apply to Legal AI?

ABA Model Rule 5.3 is titled “Responsibilities Regarding Nonlawyer Assistance” and states that supervising lawyers must “make reasonable efforts to ensure that the person’s conduct is compatible with the professional obligations of the lawyer.” The rule further specifies that lawyers “shall be responsible for conduct of such a person that would be a violation of the Rules of Professional Conduct if engaged in by a lawyer if the lawyer orders or, with the knowledge of the specific conduct, ratifies the conduct involved.”

The ABA’s 2012 amendment to Rule 5.3 provides a crucial clue about AI’s place in legal practice. The organization changed the rule’s title from “Nonlawyer Assistants” to “Nonlawyer Assistance,” effectively bringing non-human legal AI assistance into the rule’s scope.

The American Bar Association gave weight to this interpretation in Formal Opinion 512, issued on July 29, 2024. The opinion states that “managerial lawyers must establish clear policies regarding the law firm’s permissible use of [GenAI]” and that “supervisory obligations also include ensuring that subordinate lawyers and nonlawyers are trained, including in the ethical and practical use of the [GenAI] tools relevant to their work as well as on risks associated with relevant [GenAI] use.”

This framework for nonlawyer supervision becomes particularly relevant as AI systems come to resemble human assistants more than mere tools. For example, as part of the Vincent AI Spring ’25 Release, the platform is expanding its agentic AI capabilities. Daniel Hoadley, vLex’s Director of Research & Development, describes the Workflow Engine’s agentic AI as “a suite of agents under the hood.”

These agents reason independently about task selection, making Vincent feel “less like operating a tool and more like collaborating with an expert colleague,” Hoadley explains. This human-like collaborative dynamic makes supervision essential.

The Rules Don’t Need to Change

The reality is that attorneys have always been required to verify what they submit to courts and to read the cases they cite. Under ABA Model Rule 3.3, “Candor Toward the Tribunal,” lawyers have an ethical duty not to “make a false statement of fact or law to a tribunal.” Under ABA Model Rule 1.3, “Diligence,” lawyers “shall act with reasonable diligence.”

The cases involving hallucinated citations aren’t solely about AI use. Every sanctioned attorney in the cases cited above violated fundamental ethical duties. Their misconduct wasn’t using AI; it was failing to verify AI outputs before presenting them to the court. The technology has changed, but the professional obligations have remained constant.

This perspective reframes the AI supervision debate. Rather than asking whether new rules are needed, we should ask how existing rules apply to emerging technologies. The answer? Lawyers must exercise the same diligence and candor with AI assistance as with any other form of legal support.

Why It’s Easier to Supervise AI

Contrary to popular belief, supervising AI assistance can be more straightforward than supervising human assistants:

Technological Constraints Enable Better Control: Secure AI platforms can be configured to prevent confidentiality breaches and privilege violations—protections impossible with human assistants who might inadvertently disclose sensitive information.

Consistency Eliminates Human Variables: AI platforms produce consistent outputs without the personality variables, mood changes, or judgment lapses that complicate human supervision.

Complete Audit Trails Ensure Accountability: AI platforms save every interaction and can show and explain their reasoning process, providing transparency that human assistants can’t match.

Built-in Source Verification Streamlines Review: Vincent AI’s hyperlinked citations enable instant source verification, eliminating the time-consuming process of tracking down and checking human-generated research citations.

How to Verify Legal AI Assistant Work Product

Effective AI verification requires a systematic, hands-on approach to checking AI-generated legal work. Most obviously, always confirm that AI-cited cases actually exist by independently searching legal databases. Once a case’s existence has been verified, read the full opinion to ensure it stands for the legal proposition the AI claims.

Next, confirm all quotes and legal language. Check every direct quote from cases, statutes, or regulations against the original source; AI may paraphrase incorrectly or fabricate quotes that sound authentic. Also ensure that the AI hasn’t altered key legal language in ways that could change its meaning or interpretation.

Finally, evaluate your AI assistant’s legal logic and reasoning. Assess whether its legal arguments make logical sense and follow established legal principles. Check that the AI hasn’t made unsupported logical leaps, misapplied legal standards, or ignored important counterarguments.

Turning Obligation into Opportunity

The duty to supervise AI assistance doesn’t have to be a burden. You can make it your professional advantage.

AI technology is advancing rapidly, now outperforming human law students and showcasing next-generation agentic capabilities. It is becoming clear that legal AI cannot be avoided and that the profession’s future lies in mastering AI supervision. Selecting the right AI platform makes that supervision far easier.

While trust is crucial in the legal field, verification is part of its foundation. AI doesn’t change this principle; it simply gives us new tools with which to uphold it. Outputs from GenAI platforms may be advanced enough to earn trust, but you should still verify everything you submit to a court, every single time, AI-generated or otherwise.

Ready to make AI supervision easy? Start your free trial of Vincent AI today and experience AI engineered for lawyers.

Start Your Free Trial

Authored by

Sierra Van Allen