06.05.2023 | Updates

In what may be the beginning of a trend, Judge Brantley Starr of the U.S. District Court for the Northern District of Texas recently issued a new mandatory rule regarding the use of artificial intelligence (AI) in legal briefings.[1] The directive, known as the “Mandatory Certification Regarding Generative Artificial Intelligence” rule, stipulates that “[a]ll attorneys . . . appearing before the Court must file on the docket a certificate attesting either that no portion of any filing will be drafted by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or that any language drafted by generative artificial intelligence was checked for accuracy, using print reporters or traditional legal databases, by a human being.”[2] Similarly, Magistrate Judge Gabriel A. Fuentes of the U.S. District Court for the Northern District of Illinois recently adopted a standing order providing that “any party using any generative AI tool in the preparation of drafting documents for filing with the Court must disclose in the filing that AI was used” with the disclosure including the specific AI tool and the manner in which it was used.[3]

Generative AI and Machine Learning

While these new requirements offer a clear recognition of AI’s utility in legal research, drafting memoranda, summarizing briefs and other filings, drafting and responding to discovery requests, and anticipating questions at oral argument, they are a response to the issues inherent in the current state of generative AI. For example, in what the court termed an “unprecedented circumstance” in Mata v. Avianca, a legal brief submitted by Roberto Mata’s lawyers was found to contain “bogus judicial decisions.”[4] This case illustrates some concerns that can arise from unverified reliance on AI-generated content, including the proclivity of generative AI for “hallucinations” (or fabrications) and its potential for bias. Judge Starr’s and Magistrate Judge Fuentes’ orders aim to preserve the many beneficial uses of these platforms while curtailing the potential for misuse in the drafting of legal briefs.

With concerns over the use of generative AI in the legal profession continuing to emerge, some initial points to consider are provided below.

What Is Generative AI?

Not all AI or machine learning (ML) is generative AI. Broadly speaking, AI is software that can undertake problem-solving. One subset of AI is ML, distinguished by computer systems that can learn and adapt without following explicit instructions. One type of ML is deep learning, which uses layered algorithms loosely modeled on the neural networks of the human brain. As the name suggests, deep learning algorithms perform a task repeatedly, “learning” from each pass to tweak and improve the outcome. Lastly, generative AI can be considered a subcategory of deep learning that uses this learning process to create new and original content, such as images, videos, text, and audio.
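To illustrate that repeat-and-tweak loop in miniature, the Python sketch below fits a single parameter to a handful of made-up data points by gradient descent. The data, learning rate, and iteration count are illustrative assumptions; real deep learning applies the same loop to millions or billions of parameters.

    # Illustrative sketch of the "repeat and tweak" learning loop.
    # We fit one parameter w so that prediction = w * x matches toy data;
    # deep learning repeats the same idea across millions of parameters.

    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # hypothetical (input, target) pairs
    w = 0.0              # start from an uninformed guess
    learning_rate = 0.05

    for step in range(200):                # perform the task repeatedly...
        gradient = 0.0
        for x, target in data:
            error = w * x - target         # how wrong is the current guess?
            gradient += 2 * error * x      # direction that reduces the error
        w -= learning_rate * gradient / len(data)  # ...tweaking w each time

    print(f"learned w = {w:.2f}")  # approaches ~2.0, the slope underlying the toy data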

Generative AI can generate text, such as a legal argument or research, by predicting the text that should follow a given input based on patterns learned from large amounts of data. This ability to generate text based on large data sets makes generative AI a powerful tool in a number of areas, including the legal profession. While some generative AI tools are based on a “closed” universe of information, other generative AI tools are “open” and have broader access to data, such as through web plugins or other connections to the internet.
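The Python sketch below makes the prediction idea concrete with a toy “bigram” model: it counts which word follows which in a small, hypothetical training corpus and then extends a prompt one word at a time with the most probable continuation. The corpus and function names are illustrative assumptions; production systems use large neural networks trained on vastly more data, but the predict-the-next-word principle is the same.

    from collections import Counter, defaultdict

    # Toy "bigram" model: predict the next word from the current one,
    # based purely on patterns counted in the training text.

    def train_bigram_model(corpus):
        """For each word, count which words follow it in the corpus."""
        words = corpus.lower().split()
        model = defaultdict(Counter)
        for current_word, next_word in zip(words, words[1:]):
            model[current_word][next_word] += 1
        return model

    def generate(model, prompt, max_new_words=5):
        """Extend the prompt one word at a time with the likeliest next word."""
        output = prompt.lower().split()
        for _ in range(max_new_words):
            followers = model.get(output[-1])
            if not followers:              # no observed continuation; stop
                break
            output.append(followers.most_common(1)[0][0])
        return " ".join(output)

    # Hypothetical miniature "training data."
    corpus = ("the court granted the motion to dismiss "
              "the court denied the motion for summary judgment")
    model = train_bigram_model(corpus)
    print(generate(model, "the court"))  # prompt plus its statistically likeliest continuation

Even this toy version shows why hallucination is possible: the model emits whatever continuation is statistically likely, with no notion of whether the resulting sentence is true.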

While These Tools Can Be Incredibly Helpful, They Come With Risks.

As mentioned above, Judge Starr’s primary concerns with using generative AI in the legal profession revolve around the tendency for some types of generative AI to have “hallucinations”—i.e., creating fictitious information—and the potential for bias.

It is crucial to remember that generative AI models are trained on vast amounts of data and can generate highly realistic and relevant responses. However, tools incorporating these models can produce plausible but factually incorrect output because they are designed, in some respects, to generate text that resembles, rather than reproduces, the source information on which they were trained. Indeed, in the Mata case, the AI tool repeatedly assured Mata’s lawyer that its cited cases and legal citations were real and could be found in reputable legal databases. Despite those assurances, Mata’s lawyers are now facing potential court sanctions for relying on nonexistent cases provided by the AI tool. Magistrate Judge Fuentes’ standing order, specifically referencing Mata, notes that “one way to jeopardize the mission of federal courts is to use an AI tool to generate legal research that includes ‘bogus judicial decisions’ cited for substantive propositions of law.”

In his new Mandatory Certification Regarding Generative Artificial Intelligence directive, Judge Starr emphasized that AI has no legal or ethical duty to a client or to the rule of law. Those using generative AI for legal purposes should also be wary that, because AI learns from existing data, any bias within that data can be replicated in the AI’s outputs. Beyond the content generated, the use of AI presents risks to client confidentiality and data privacy: if a user submits sensitive client information to certain generative AI applications, that data may be stored indefinitely and used to produce responses for other users.

How Should Lawyers Using Generative AI Proceed?

Despite its considerable promise, generative AI requires care and oversight when used in legal practice. Lawyers using generative AI should do the following:

  • Always verify the output of AI tools, for example, by cross-referencing AI-produced citations and information against traditional legal databases or by consulting human experts.
  • Strive for a balanced integration of AI within the workflow, where AI complements professional skills rather than replaces them.
  • Avoid submitting sensitive data (including attorney-client privileged and client confidential information) to generative AI applications unless appropriate data security measures and contractual terms are in place to prevent the use of such information for AI training or loss of privilege or confidentiality.
  • Stay updated with the latest AI applications in the field and their potential pitfalls.

Key Takeaways

The utility of generative AI in tasks like assisting with legal research or suggesting potential questions for depositions or oral argument cannot be discounted, but using it requires appropriate vigilance. This is due not only to the complexity and critical nature of litigation tasks but also to the risks of generating content that appears to be existing case law (but is not) and of disclosing privileged or client confidential information, especially when using “open” generative AI tools.

Although Judge Starr’s directive and Magistrate Judge Fuentes’ standing order are the first of their kind, more rules governing the use of generative AI in legal proceedings are likely to follow. Staying abreast of these rules as they emerge is an important first step.

The authors wish to acknowledge Summer Associate Emma Donnelly’s contributions to this Update.

Endnotes

[1] In re: Mandatory Certification Regarding Generative Artificial Intelligence, Misc. Order No. 2 (N.D. Tex. 2023).

[2] Id.

[3] Standing Order for Civil Cases Before Magistrate Judge Fuentes at 2 (N.D. Ill. May 31, 2023) (requiring disclosure of the use of generative AI, including the tool and the manner in which it was used, and warning that reliance on an AI tool may not constitute reasonable inquiry under Federal Rule of Civil Procedure 11).

[4] Order to Show Cause at 1, Mata v. Avianca, Inc., No. 1:22-cv-01461-PKC (S.D.N.Y. May 4, 2023), ECF No. 31 (Castel, J.).

© 2023 Perkins Coie LLP


 
