
AI Won't Replace Lawyers — But It Might Save the Practice

Artificial intelligence can help lawyers solve problems, support people, and do interesting work


“What’s the difference between a lawyer and a catfish? One’s a bottom-dwelling scum-sucker, and the other’s a fish.”

Artificial intelligence won't replace lawyers — but it might save the practice by freeing us from the tasks that earned us jokes like this one. While AI isn't new to law (think expert systems, predictive coding, and contract analysis tools), today's large language models (LLMs) like GPT, Claude, and Gemini offer unprecedented capability. These systems become even more valuable through retrieval augmented generation (RAG), which allows them to search your firm's documents and precedents before responding — essentially combining the pattern recognition of AI with your institutional knowledge.
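The RAG pattern is simpler than it sounds: find the documents most relevant to the question, then hand them to the model alongside the question. The toy sketch below shows that flow using plain keyword overlap as the relevance measure; real systems use vector embeddings and an actual LLM, and the document names here are hypothetical.

```python
# Toy illustration of retrieval augmented generation (RAG):
# before asking the model, retrieve the firm documents most
# relevant to the question and include them in the prompt.
# Real systems use vector embeddings and an LLM API; this
# sketch uses simple keyword overlap, purely to show the flow.

def retrieve(question: str, documents: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the question."""
    q_words = set(question.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in scored[:top_k]]

def build_prompt(question: str, documents: dict[str, str]) -> str:
    """Combine the retrieved context with the question for the model."""
    context = "\n".join(documents[name] for name in retrieve(question, documents))
    return f"Context:\n{context}\n\nQuestion: {question}"

# Hypothetical firm knowledge base
firm_docs = {
    "lease_precedent": "standard commercial lease termination clause precedent",
    "litigation_memo": "memo on limitation periods for contract claims",
    "hr_policy": "internal vacation and leave policy",
}

prompt = build_prompt("What is our precedent for lease termination clauses?", firm_docs)
```

The point is that the model never has to "remember" your precedents: the relevant material is placed in front of it at the moment of drafting, which is what grounds its answer in your institutional knowledge.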

From a self-interested perspective, lawyers are right to feel uneasy: the legal industry has, by and large, been doing things basically the same way for hundreds of years. But every skilled profession has faced a moment like this. Architects weren’t replaced by CADD — they just stopped hand-drawing blueprints. Radiologists didn’t lose their jobs to machine learning — they started using it to catch things the human eye missed. Pharmacists weren’t displaced by automated dispensers — they shifted their focus to counselling and complex prescriptions.

In every case, the professionals who adapted didn’t just survive — they got better.

LLMs are poised to change the workflow, the business model, and the client experience. Embracing the opportunity that tools like these provide can lead to increased efficiency, reduced costs, and enhanced access to legal services. Yet, harnessing this opportunity demands new judgment, new skills, new leadership — and yes, new caution.

The Upside: Less Drudgery, More Lawyering

The payoff, once the risks are managed, is real. LLMs can free lawyers to focus on strategy, judgment, and client relationships. For individuals, that can mean fewer late nights rewriting boilerplate. For firms, it’s a prompt to rethink pricing models: if a task takes one hour instead of five, maybe it’s time to charge for value, not time.

This isn’t just about efficiency — it’s about alignment. Most of us didn’t go into law because we love redlining clauses at midnight — we wanted to solve problems, help people, and work on interesting stuff.

LLMs can help us spend less time doing the things we have to do — and more time doing the things we care about. They also open doors to pro bono work and access to justice projects that were once too resource-intensive to touch. It’s a chance to spend more time on the work we trained for, and less on the work we quietly dread. Used well, they’re not just a time-saver. They’re a quality-of-life upgrade.

Of course, realizing these benefits requires doing so in a manner consistent with our professional obligations as lawyers.

Managing Ethical Considerations

The Law Society of BC outlines ethical considerations for lawyers using LLMs, including:

  • Competency: Lawyers must ensure they are competently applying their own intellectual capacity, judgment, and deliberation to client matters, regardless of the technology they may use to render services (in other words, “the buck stops with the lawyer”).
  • Confidentiality: It should not be taken for granted that uploading client data to an LLM maintains client confidence. Data uploaded to a web-based LLM is uploaded to cloud storage, and cloud computing considerations apply.
  • Honesty and Candour: LSBC recommends that if you are going to apply an LLM in your practice, “it is prudent to make your client aware of how you plan to use generative AI tools in your practice, generally, and on their specific file(s).”
  • Disclosure to Adjudicators: Increasingly, decision-making bodies are requiring disclosure of AI use in preparing submissions.

The existence of risk doesn’t mean we freeze in place. It means we move forward with structure. Both individual lawyers and firm leadership have responsibilities here. A lawyer’s professional obligations do not get outsourced to the machine — or to the vendor providing it. Whether you’re using an LLM to draft a memo or assess case strategy, competent supervision is the key issue.

But first, a quick note on why LLMs behave the way they do — and why your supervision matters more than it might with other legal tech. Most of the AI tools lawyers have used historically — expert systems, predictive coding, contract flaggers — are built on rules, structure, and domain-specific logic. They’re narrow tools, doing a narrow job, and the results are mathematically determined.

LLMs are different. They don't "know" facts — they predict likely word sequences based on patterns. This makes them remarkably fluent but prone to "hallucinations": confidently inventing case names or mischaracterizing precedent. And that’s what makes relying on them without oversight so risky. It also raises an interesting existential question: if a probability engine can draft passable client letters, perhaps our true value lies not in document creation but in judgment, empathy, and ethical reasoning — uniquely human qualities beyond AI's reach.
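The "predict the next word" idea can be seen in miniature. The sketch below builds a trivial predictor that simply picks the word that most often followed the current word in its training text; it produces fluent-sounding legal phrases without any notion of whether they are true. Real LLMs use neural networks trained on billions of examples, but the objective is the same, and the training text here is invented for illustration.

```python
# Toy next-word predictor: count which word follows which in the
# training text, then always emit the most frequent successor.
# The output reads fluently, but the model has no concept of truth,
# only of word-sequence statistics -- the root of "hallucinations".
from collections import Counter, defaultdict

training_text = (
    "the court held that the contract was void "
    "the court held that the claim was barred"
)

# Tally successors: follows[word] counts what came next
follows: defaultdict = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent next word seen in training."""
    return follows[word].most_common(1)[0][0]

# Generate a "confident" continuation from a seed word
seed, generated = "the", ["the"]
for _ in range(5):
    seed = predict_next(seed)
    generated.append(seed)
# generated is now ["the", "court", "held", "that", "the", "court"]
```

Nothing in this loop checks facts or cites sources; it only continues a pattern. Scaled up enormously, that is why an LLM can produce a plausible-sounding citation that does not exist, and why a lawyer's review remains essential.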

Creating Effective LLM Policies

Whether you're managing a firm or just managing your own files, now is the time to get clear on how LLMs fit into your work by making smart, deliberate choices about what tools are permitted, in what contexts, and under what conditions. Consider:

  • Which LLMs can be used (ensuring they meet confidentiality standards)
  • When and how they can be used (e.g., precedent generation vs. file-specific drafting)
  • Which firm members can use them, and in what circumstances (e.g., paralegals, assistants, associates, etc.)
  • What guardrails you will impose
  • Expectations around communication and disclosure to clients, courts, and regulators
  • Training or onboarding to ensure consistent, competent use

At the individual level, the obligations don’t disappear just because there’s a firm-wide policy. Each lawyer is responsible for supervising the tools they use, understanding how those tools work, and applying legal judgment to everything they touch.

And let’s be honest — many lawyers are already using LLMs under the radar. That’s not a reason to panic. It’s a reason to formalize, educate, and supervise. Policies shouldn’t just limit — they should enable better practice.

If anything, keeping things lean — fewer tools, clearer protocols, targeted use cases — means less training, lower risk, and less confusion. In this context, “the perfect is the enemy of the good” isn’t just a proverb — it’s a solid risk-management strategy with a bonus reduction in overhead.

Transforming the Profession

AI won't fix the profession, but it might free us from being overworked, overbilled, and too busy to reflect. Imagine: more time for client counselling, deeper analysis, and creative problem-solving. Junior lawyers learning faster. Clients receiving attention to their needs, not just their paperwork. And AI won't replace you — it might just help you become the lawyer your clients hoped for and help the profession live up to what you hoped it would be when you were in law school.

And if that means fewer catfish jokes? All the better.