AI-Enhanced Legal Work: Ethics and Obligations Law Firms Should Know

By Theodore Brown and Jeremy Kahn

July 10, 2025

Theodore (Teddy) Brown serves as the Managing Director of Damages at iDS, leading the charge in forensic accounting and commercial damages. Brown transforms complex financial data into insights that empower clients and counsel alike.

Jeremy Kahn is Principal at Berman Fink Van Horn and advises a broad range of clients in business disputes, including in trial and appellate courts. Kahn has emerged as an expert in the rapidly changing area of generative AI.

Artificial Intelligence (AI) has emerged as a powerful tool for accelerating legal workflows. From document review to legal research and litigation support, AI-enhanced legal work is boosting efficiency and accuracy across the legal field. However, attorneys and expert practitioners must navigate an evolving landscape of ethical and professional responsibilities as they embrace the technology.

While AI significantly reduces the time spent on routine, repetitive tasks, the ultimate responsibility for judgment and decision-making should remain squarely with the practitioner. Legal professionals must carefully balance leveraging AI’s advantages with preserving their professional skepticism, recognizing AI as a supportive—not substitutive—resource.

Ethical Compliance

The American Bar Association’s (“ABA”) Model Rule 1.1 requires attorneys to provide “competent representation” to their clients. The comments to the rule explain that satisfying this requirement means, in part, that “a lawyer should keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology.” So, avoiding AI or burying your head in the sand is not an option. Attorneys must maintain technological proficiency relevant to their practice areas. With AI increasingly used both in legal practice and in clients’ industries, attorneys should pursue continuing education and training in emerging AI technologies.

Attorneys’ use of AI also implicates other ethical rules. For example, ABA Model Rule 1.4, regarding an attorney’s communications with a client, requires that the attorney “reasonably consult with the client about the means by which the client’s objectives are to be accomplished.” Some clients might welcome their attorney’s use of AI. After all, a more efficient attorney may spend fewer hours on the matter and therefore charge lower fees. Other clients may be less tolerant of AI use and its inherent risks. What matters, though, is that the attorney is transparent with the client about the potential use of AI while working on the client’s matter. While not a hard requirement (yet), it is a best practice to obtain a client’s consent to use AI on a particular matter, such as in a provision in the attorney’s engagement letter.

Billing Issues

ABA Model Rule 1.5 prohibits attorneys from charging unreasonable fees or unreasonable amounts for expenses. Hopefully, this is obvious, but an attorney billing hourly cannot use AI to draft a brief and then bill for the time saved or for the amount of time it “would have taken” to write the brief without any assistance from AI. The efficiencies of using AI should benefit the client. 

The attorney may charge the client for overhead expenses related to the use of AI. The comments to ABA Model Rule 1.5 state that a lawyer may seek reimbursement for certain costs “by charging a reasonable amount to which the client has agreed in advance or by charging an amount that reasonably reflects the cost incurred by the lawyer.” For example, if the attorney pays a monthly fee for a subscription to an AI platform, the attorney could ethically charge a client a pro rata portion of that fee attributable to the costs incurred on that client’s behalf. Charging every client the full amount of the monthly fee, on the other hand, would be unreasonably excessive. 
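The allocation described above can be sketched as simple arithmetic. The figures and client names below are invented for illustration; nothing here comes from the ABA rules themselves.

```python
# Illustrative pro rata allocation of an AI subscription fee.
# All figures and client names are hypothetical.
monthly_subscription = 500.00  # flat monthly fee for an AI platform

# AI-assisted hours worked on each client's matters this month
client_usage_hours = {"Client A": 12.0, "Client B": 6.0, "Client C": 2.0}

total_hours = sum(client_usage_hours.values())

# Each client bears only the share of the fee attributable
# to work performed on its own matters.
pro_rata = {
    client: round(monthly_subscription * hours / total_hours, 2)
    for client, hours in client_usage_hours.items()
}
# pro_rata sums to the actual cost incurred by the firm.
# Billing every client the full $500 would instead recover
# the subscription fee three times over, which the rule forbids.
```

The point of the sketch is the constraint, not the formula: however the firm apportions the cost, the amounts recovered across clients should not exceed the cost actually incurred.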

Confidentiality Concerns

One of the most important rules to keep in mind when using AI is ABA Model Rule 1.6 regarding confidentiality. With some exceptions, Rule 1.6(a) prohibits an attorney from “reveal[ing] information relating to the representation of a client” without the client’s informed consent. Notably, prohibiting disclosure of “information relating to the representation” is much broader than merely protecting attorney-client privileged communications. Additionally, Rule 1.6(c) further requires attorneys to “make reasonable efforts to prevent the inadvertent or unauthorized disclosure of, or unauthorized access to, information relating to the representation of a client.”

The use of AI implicates this rule because many platforms store—and, in some cases, train on—the content that users input. When an attorney enters information into a platform such as ChatGPT, it may constitute a disclosure to OpenAI (the company behind ChatGPT), and, in some instances, could even be surfaced in responses to other users worldwide. While some “closed” AI systems attempt to address this issue (even ChatGPT now has settings to prevent training on certain inputs), attorneys must proceed with caution when entering client-related information into any AI tool. A good practice is to use placeholders such as “Acme Corp.” or “John Doe” in queries, then replace the actual names in the final draft outside the AI system.
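The placeholder workflow described above can be sketched in a few lines. The client names and the mapping below are hypothetical; in practice the substitution table would be maintained per matter and kept outside any AI tool.

```python
# Hypothetical mapping of real names to neutral placeholders.
# This table stays on the firm's systems and is never sent
# to the AI platform.
ALIASES = {
    "Smith Manufacturing, Inc.": "Acme Corp.",
    "Jane Rivera": "John Doe",
}


def redact(text: str, aliases: dict) -> str:
    """Replace client-identifying names with placeholders
    before the text is pasted into an AI tool."""
    for real, placeholder in aliases.items():
        text = text.replace(real, placeholder)
    return text


def restore(text: str, aliases: dict) -> str:
    """Swap placeholders back for the real names in the final
    draft, outside the AI system."""
    for real, placeholder in aliases.items():
        text = text.replace(placeholder, real)
    return text


query = ("Summarize the indemnity clause between "
         "Smith Manufacturing, Inc. and Jane Rivera.")
safe_query = redact(query, ALIASES)
# safe_query now names only "Acme Corp." and "John Doe"
```

A simple string substitution like this is only a sketch: it will miss misspellings, nicknames, and identifying facts other than names, so the redacted text still needs a human read before it leaves the firm.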

Candor to the Tribunal

ABA Model Rule 3.3 requires candor to the tribunal, which includes not making “a false statement of fact or law to a tribunal.” Unfortunately, there have already been numerous instances of attorneys facing sanctions for citing “hallucinated” cases (i.e., fictitious legal citations generated by AI tools). These systems are far from foolproof: they make mistakes and can generate information that appears plausible but is entirely fabricated. Attorneys must remain vigilant and avoid placing blind trust in AI-generated content. When using AI to draft a legal document, attorneys have an affirmative duty to independently verify the accuracy of the legal and factual statements the tool produces.

Supervisory Responsibilities

Professional rules such as ABA Model Rules 5.1 and 5.3 outline expectations regarding the supervision of staff and personnel. As AI becomes more integrated into legal practices, this duty of supervision should extend to the AI systems attorneys and their staff employ. Effective oversight includes understanding the capabilities and limitations of these technologies, verifying outputs for accuracy, and ensuring compliance with ethical obligations. 

Some attorneys have unfortunately learned the hard way (by facing sanctions) that the responsibility for supervision extends to ensuring that subordinates’ use of AI complies with ethical standards. For example, if a partner asks an associate to prepare a first draft of a motion, the partner is responsible for ensuring that the associate’s use of AI, if any, is appropriate. If the associate cites hallucinated cases and the partner fails to catch it, the partner may be held accountable. 

Law firms should develop an AI policy tailored to the needs of the firm and its clients. Of course, even the best policy is only as effective as the training the firm provides to its employees. It is important to not just have a well-drafted AI policy, but also to regularly train employees on its provisions and the reasons behind its importance. 

Can You Explain It?

Transparency in the use of AI within legal workflows is crucial. Both clients and the trier of fact expect to understand the role and impact of AI in legal work. Attorneys must be able to clearly explain AI-generated work product and articulate how AI-assisted outputs were used. This transparency not only strengthens client trust but also aligns with broader professional obligations, such as the communication requirements outlined in Model Rule 1.4.

Addressing Bias and Ensuring Fairness

The most common implementation of AI in legal practice is through Large Language Models (“LLMs”). While powerful, LLMs can inadvertently introduce biases reflecting their training data or assumptions embedded during development. Professionals who use AI bear the responsibility to identify, understand, and mitigate these biases to ensure fairness and impartiality. Whenever professionals and their staff rely on LLM-generated output, it should be carefully reviewed before use.

Navigating Evolving Standards and Regulations

The regulatory landscape is rapidly evolving to keep pace with advances in AI technology. Practitioners must remain vigilant, continually updating their knowledge of emerging judicial decisions, professional guidelines, and statutory regulations governing AI use. Creating and maintaining an up-to-date AI policy, and regularly providing training on it, are best practices for legal professionals.

Top 5 Questions to Ask Yourself Before Using AI in Your Legal Practice:

  1. Am I Maintaining Competence in AI Technology?
    Ensure you understand the strengths, limitations, and risks of AI tools to fulfill your duty of technological competence under ABA Model Rule 1.1.
  2. Have I Clearly Communicated AI Use to My Client?
    Verify that your client is informed about, and consents to, your use of AI technology in their matters, consistent with ABA Model Rule 1.4.
  3. Am I Protecting Client Confidentiality?
    Carefully consider confidentiality under ABA Model Rule 1.6 by using placeholders or anonymized information when inputting data into AI platforms to prevent inadvertent disclosures.
  4. Have I Independently Verified AI Outputs?
    Always independently confirm the accuracy of AI-generated content, ensuring you fulfill your obligations of candor and avoid citing “hallucinated” information per ABA Model Rule 3.3.
  5. Do I Have Effective Supervisory Policies for AI Use?
    Implement and regularly update firm policies and training regarding AI use to meet supervisory obligations under ABA Model Rules 5.1 and 5.3, ensuring ethical compliance by all staff.

As AI reshapes legal workflows, the professional obligations of attorneys and experts grow equally complex. Proactively embracing these obligations allows legal professionals to fully leverage AI’s potential without compromising ethical responsibilities, ultimately strengthening both practice efficiency and client outcomes.