BriefCatch’s Ross Guberman Talks Challenges, Rewards of Legal Writing with AI
February 5, 2026
Ross Guberman is the founder and CEO of BriefCatch, a research-based legal writing and editing platform. The author of “Point Made: How to Write Like the Nation’s Top Advocates,” he trains all new federal judges and is an expert on the responsible use of AI in legal practice. He is also a new columnist with Today’s Managing Partner; his first column is here.
In this interview, Ross Guberman of BriefCatch discusses the advantages and drawbacks of legal writing with AI, along with the most common misconceptions about it. His company will be hosting a webinar on this topic with Today’s Managing Partner on February 19. You can register here.
AI is suddenly everywhere in legal practice, especially in writing. What concerns you most about how lawyers are currently using generative AI? And where do you see the biggest misconceptions?
Ross Guberman: A few things concern me. One is that AI can stunt your learning, training, and development very easily if you’re not careful. It becomes like a reflex. At the very moment when you’re pushing through something in your mind or on the screen, it’s really tempting, and I do it too, to run to ChatGPT or Claude. You miss those opportunities to really develop when times get tough.
Another problem is flawed output that has all the makings and markings of first-rate output. That’s not generally true in life, right? When you read something that seems sloppy, your instincts are usually guiding you correctly. And when you read a brief where the lawyer was clearly painstaking, the table of authorities is beautifully formatted, and you sense attention to the sentence structure, your instinct that the substance will be strong as well is usually right, too. Unfortunately, those signals don’t work the same way with AI and can give you a false sense of security.
The biggest misconception, though, involves using AI to generate written work from scratch. There is all this terror about how people are not going to learn to compose and edit. The truth is, one of the absolute best uses of GenAI for a writer is querying your own drafts.
You know, asking things like, “Hey, if you were a skeptical reader and you read this argument, what would be the three things that you really needed convincing of, and how can I preempt those counterarguments in my draft?” or “Is there any part of this that seems nervous or defensive, or am I skipping logical steps?” It can be an absolutely fantastic tool when it comes to helping you think and edit. But people often don’t think of that use case nearly as much as they think of the somewhat problematic use case of generating a draft from scratch.
You’ve been very clear that AI should support lawyers and not replace their judgment. Where do you see the line being crossed most often in legal writing today?
Ross Guberman: It’s tough to draw the line between daily lawyer work product and lawyer judgment or strategy. AI can actually be very helpful for both. So the line probably shouldn’t be “routine task versus real judgment.” The better question is whether I’m doing something repeatable, where past work and the data already in the document management system (DMS) can guide me, or something unusual that needs to be highly customized to a particular client’s needs. That’s the important dividing line. If it’s the latter, you should be very cautious about using GenAI.
A three-paragraph letter can be very high stakes and needs to be tailored in a very particular way. By the same token, you could have a 25-page motion that’s quite generic, with very few risks in using AI. Some of the most successful experiences I’ve had with GenAI have been in what you might call judgment or strategy. It’s helped me think things through at our startup, in my life, and with writing. It’s enabled me to have better judgment than I would have had on my own.
We’re seeing courts sanction lawyers for AI-generated errors and fake citations. From your perspective, what do these cases tell us about professional responsibility in the AI era?
Ross Guberman: There’s a site that gathers these cases, and the last time I looked there were close to 900. Most are from the US, though you hear stories that in some countries, England for example, the sanctions are actually more severe. There is even talk of making it a crime for a lawyer to cite a hallucinated case.
The problem we’re talking about is actually quite old. People have been citing the wrong cases, or citing the right cases but saying the wrong things about them, either purposely or negligently, for a century. Inaccurate citations are nothing new.
Frankly, I don’t always understand why these hallucinated-case stories get so much attention. Why do they resonate so much? I think they tap into a broader anxiety we have in the profession. These hallucinated cases look so official: you get the pin cite, a real reporter number, a year. The anxiety is that a very diligent lawyer who really tries hard could still get trapped.
The other thing I’d say, though, is that professional responsibility goes in many directions. The American Bar Association (ABA)’s guidance says you need technological competence. Meanwhile, the headline-making stories are always about hallucinated case law, but what about all the people who can’t even afford a lawyer, or whose lawyers are overwhelmed and submit really shoddy work product?
Once the case hallucination problem is solved, I think people are going to be squeamish about the opposite questions: Why aren’t we using GenAI? Why aren’t we using it in ways that could help our actual clients or just help with access to justice in general?
Going back to something that you said earlier about using AI, you advocate for editing-based AI rather than drafting-based AI. Could you elaborate a little bit more on why that distinction is so important for accuracy, ethics, and credibility?
Ross Guberman: Say you have a draft, and I mean a real draft, not necessarily polished, but with your own ideas, your own thoughts, your own authorities, your own facts. If you give whatever you’re using, whether ChatGPT, Gemini, or Claude, the same guidance you would give a friend or colleague willing to read it before you send it up the food chain, there’s a 100% chance you’re going to get helpful feedback. It’s not necessarily exhaustive, and you can’t just do whatever it tells you to do and think it’s perfect. But you will always get some really good ideas or suggestions.
You can do this in an open-ended way, asking for ways this passage could be tighter or how the writing could be better. But it’s often better to be more ambitious. One of my favorite queries is asking the tool to imagine a skeptical or hostile reader’s perspective: “What are three questions that person would likely have after reading this? How can I preempt and answer those questions?”
That kind of prompt, where you’re assigning the AI a role—opposing counsel, your own client, a court, a clerk—is incredibly effective and satisfying. Clients are often skeptical of what their lawyers want them to do, like settling when they want to fight. Not only can you give it different personas, but you can say things like, “I only want wording suggestions,” or “I don’t want wording suggestions, I want structural suggestions,” or “Do you think I’ve gone on for too long in any of my case discussions?”
One thing a partner pointed out to me is that the younger generation—Gen Z, early millennials—often pushes back on human feedback and takes it personally. They feel like they’re being criticized. But this partner noted that it doesn’t happen with AI feedback. People don’t take it personally. They don’t get defensive. They don’t feel like their core skills are being questioned. I’ve noticed that too. It doesn’t feel as personal when the feedback is coming from an algorithm.
So for lawyers who are hesitant or overwhelmed, what’s one mindset shift that they need to make to use AI responsibly without putting their clients or reputation at risk?
Ross Guberman: If you’re really nervous or skeptical, your entry point probably shouldn’t be something as personal as writing or editing. Usually the best thing is to pick something that is drudgery: analyzing financial statements, or going through a deposition transcript to find internal contradictions. Pick something that’s time-consuming and tedious, but also high stakes—the kind of thing where you have to really concentrate because it’s prone to human error. When people start with that, they’re usually blown away by both the time AI can save them and the accuracy. Studies show AI can outperform humans on tasks like summarization and transcript analysis, though it still requires supervision.
Another almost foolproof use case is any kind of summary. “Can you summarize this deposition transcript or purchase agreement?” That kind of thing is usually a safer space at the beginning of the AI journey than anything involving editing and writing.
Is there anything else that we didn’t cover that you wanted to touch on?
Ross Guberman: Managing partners are often concerned about security and data privacy, and also how to pick tools versus doing things internally. It’s really important not to buy into rumors about security. You cannot say that ChatGPT is safe or not safe for legal work—it all depends on the account and settings you have. With the right account and the right settings, there’s no more risk than using Outlook or Microsoft Word. Without them, you’re not only risking privilege and waiver issues, but OpenAI and Anthropic can use your clients’ documents and your work product for training. I hear a lot of blanket declarations from law firms about security. It just doesn’t work that way.
There’s a huge range of security parameters with these large language models (LLMs). Reputable third-party legal tech vendors typically offer strong security, but firms still need to verify encryption, System and Organization Controls 2 (SOC 2) framework compliance, and contractual protections. Don’t just take a vendor’s word for it.
For more insights on this topic, register for BriefCatch’s February 19 webinar, “Legal Writing With AI: Practical Benefits, Ethical Use, and Best Practices.”