Guidelines for Ethical Use of Generative AI in Legal Practice
July 31, 2024
In the article “Ethical Rules for Using Generative AI in Your Practice,” Steve Herman of Fishman Haygood LLP discusses the emerging ethical challenges of integrating AI technologies such as ChatGPT into legal services. Although AI is transforming many sectors, formal guidance on its ethical use in legal practice remains limited.
Key issues highlighted in the article include maintaining general competence with AI tools to ensure reliable work products and safeguarding client confidentiality.
Competence, under Model Rule 1.1 of the ABA Model Rules of Professional Conduct, requires that lawyers understand the benefits and risks of the AI tools they use. The ABA advises lawyers to stay current on technological changes relevant to legal practice.
Ethical concerns often arise from AI’s potential to produce unreliable or factually inaccurate outputs, known as “hallucinations.” The case Mata v. Avianca, Inc. highlights the dangers of over-reliance on AI, where lawyers were sanctioned for submitting fictitious court opinions generated by ChatGPT.
Candor to the court is another critical issue. Model Rule 3.3 prohibits making false statements to the tribunal. The Mata v. Avianca, Inc. case again demonstrated the consequences of failing to correct AI-generated inaccuracies, emphasizing the need for lawyers to verify AI outputs rigorously.
Supervisory responsibilities under Model Rules 5.1 and 5.3 require that supervising attorneys ensure compliance with ethical standards among associates and non-lawyer assistants. Firms should establish and enforce policies governing AI use to mitigate risks.
Confidentiality obligations under Model Rule 1.6 are paramount. AI systems like ChatGPT can put privileged information at risk, particularly because of how they store and learn from user inputs. Protecting client information means scrutinizing AI providers’ data-handling policies and taking measures such as anonymizing client data before it is submitted to any tool.
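As a purely illustrative sketch (not drawn from Herman’s article), the kind of anonymization step described above might look like the following Python snippet, which redacts client names and a few obvious identifiers from a draft prompt before it leaves the firm’s systems. The patterns and the redact helper are hypothetical placeholders; real de-identification would require far more comprehensive rules and review by counsel.

```python
import re

# Hypothetical illustration only: strip obvious client identifiers from text
# before it is sent to any third-party AI service. Real de-identification
# would need far broader coverage than these example patterns.
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    """Replace known client names and common identifiers with placeholders."""
    for name in client_names:
        text = re.sub(re.escape(name), "[CLIENT]", text, flags=re.IGNORECASE)
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

if __name__ == "__main__":
    draft = "Client Jane Doe (jane.doe@example.com, 555-123-4567) disputes the lien."
    print(redact(draft, client_names=["Jane Doe"]))
    # -> Client [CLIENT] ([EMAIL], [PHONE]) disputes the lien.
```

A script like this addresses only one narrow slice of the confidentiality problem; it does not substitute for reviewing a provider’s retention and training policies, which the article identifies as the first step.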
Additional concerns noted by Herman include copyright and patent issues related to AI-generated content, potential conflicts of interest under Model Rule 1.7, and the unauthorized practice of law. The article stresses the importance of ongoing vigilance and adaptation of ethical practices to navigate the evolving landscape of generative AI in legal practice.