
Saturday, March 23, 2024

A Cautionary Tale on the Use of AI

Judge Desai addressed the misuse of Artificial Intelligence (AI) in Larry Grant v. City of Long Beach, No. 22-56121, D.C. No. 2:21-cv-06666 JVS-JEM, in a Ninth Circuit Court of Appeals opinion filed on March 22, 2024. The analysis highlights the potential pitfalls of relying on unverified AI for legal pleadings and emphasizes the need for caution among attorneys.

Misuse of AI in Grant v. City of Long Beach

The opinion alone may not identify the specific AI tool used or the precise nature of the misuse. Based on the court's ruling, however, we can infer that the AI employed in Grant v. City of Long Beach produced pleadings with deficiencies such as:

  • Inaccurate legal citations or factual assertions: AI models trained on vast datasets may contain errors or biases leading to factually incorrect or legally irrelevant information in pleadings.
  • Missing crucial arguments or evidence: AI, if not carefully guided by human expertise, might overlook vital legal arguments or fail to identify relevant evidence that could strengthen the case.
  • Unprofessional or unethical language: AI language models might generate text that lacks the formality and nuance expected in legal documents, potentially raising ethical concerns.



Judge Desai's Opinion and Cautionary Points for Attorneys

Judge Desai's opinion likely emphasizes the importance of attorney oversight and verification when using AI for legal tasks. Key takeaways for attorneys include:

  • Do not rely solely on AI for legal research and writing: AI should be a supplement, not a replacement, for thorough legal research and critical analysis by qualified attorneys.
  • Verify the accuracy and validity of all AI-generated content: Attorneys must meticulously examine AI outputs to ensure factual and legal accuracy before submitting them to the court.
  • Maintain professional responsibility: The ultimate responsibility for the content of pleadings lies with the attorney. Using unverified AI could lead to ethical violations and sanctions.

Why Caution is Necessary

The legal profession is built on trust and adherence to ethical codes. Using unverified AI can undermine both:

  • Erosion of Trust in the Legal System: Inaccurate or misleading information in pleadings can jeopardize the integrity of the legal system and cast doubt on the professionalism of attorneys who rely on such tools.
  • Unethical Conduct: Attorneys have a duty to present truthful information to the court. Submitting pleadings containing errors or omissions generated by unverified AI could be construed as unethical conduct.

Conclusion

Grant v. City of Long Beach serves as a cautionary tale for attorneys considering the use of AI for legal tasks. While AI holds promise for increased efficiency, it should never be a substitute for sound legal judgment and attorney verification. Attorneys should use AI with caution and prioritize accuracy and ethical considerations when incorporating this technology into their practice.

Recommended Citation: Gelman, Jon L., A Cautionary Tale on the Use of AI, www.gelmans.com (03/23/2024)

*Jon L. Gelman of Wayne, NJ, is the author of NJ Workers’ Compensation Law (West-Thomson-Reuters) and co-author of the national treatise Modern Workers’ Compensation Law (West-Thomson-Reuters). For over five decades, the Law Offices of Jon Gelman (1.973.696.7900; jon@gelmans.com) has represented injured workers and their families who have suffered occupational illnesses and diseases.


Blog: Workers' Compensation

LinkedIn: JonGelman

LinkedIn Group: Injured Workers Law & Advocacy Group

Author: "Workers' Compensation Law" West-Thomson-Reuters

Mastodon: @gelman@mstdn.social

Bluesky: jongelman@bsky.social



© 2024 Jon L Gelman. All rights reserved.


Attorney Advertising

Prior results do not guarantee a similar outcome.


