Lawyers are trained to be good at what machines can’t do.


Will the world still need lawyers once AI gets really good?

The short answer is yes—and I believe it will still be yes no matter how good AI gets.  My view is not universally accepted, so I will need to lay it out, and that will involve some claims about what humans are and whether a machine can ever be like that.  This will shed considerable light on what lawyers essentially do, and help us see how machines can make us better lawyers.

All of this will need to go well beyond the scope of one post.  Over the course of this summer, when all is said and done, this four-part series will have dabbled in microeconomics, considered the debate between phonics and whole language reading pedagogy, compared the cockpit designs of the Sopwith Camel and the Avro Arrow, assessed AI’s impact on legal supply chain structure, and critically evaluated the central epistemological claim of a children’s board book.

That excitement all lies ahead.  For this post, I will limit myself to three questions:

  1. How are lawyers like humans?
  2. Can AI do what humans do?
  3. Will the world still need lawyers once AI gets really good?

Two really important books

Artificial intelligence (AI) has arrived in the legal profession.  Whether we should call it “intelligence” is, I think, contestable—but that is for a later post.  Whatever we call it, machines have matured to the point where every legal leader must urgently consider what advanced machines mean for his or her business.

At the same time, of course, AI has arrived in the world.  Few corners of society, economy and culture will be unaffected.

Fortunately, there are two books to meet this moment.  They can be easily read in self-contained chapters; you can put them both on the nightstand and alternate.  Your bedtime reading will foretell some of the most consequential legal and societal developments of the next decade.

Noah Waisberg and Alexander Hudek, Co-Founders of Kira Systems

The first book is AI For Lawyers by Noah Waisberg and Alexander Hudek, co-founders of Kira Systems, an AI legal tech firm.  Waisberg and Hudek have brought together eight legal-AI entrepreneurs from four different firms to reflect on the present and future of AI in law.  Their writing is largely practical: what can the machine now do for the lawyer?  How will AI affect productivity, accuracy, the codification of expertise, and fundamental workflows like contracting, legal research, and discovery?

Even these practical topics will provoke profound thoughts about what AI means not just for lawyers, but for humans.  Don’t face these questions alone: Possible Minds: 25 Ways of Looking at AI, edited by prominent science writer John Brockman, is another excellent compilation work, bringing together 25 leading lights of AI to offer varied and often directly competing conceptions of what AI is, what it can and cannot do, and what it means for the future of humanity.

The future of humanity and the future of lawyers are intertwined.  I do not mean to be pessimistic, just descriptive.  A lawyer’s work is an intensification of one core human activity: the placement of facts into categories.  We can see how central this is to being human by how easily we do it; we can notice a core difference between humans and machines by noticing that machines can barely do it at all.

What machines can and cannot do

Massively powerful machines are now able to store, recall and manipulate a seemingly infinite set of facts.  This has created truly super-human capabilities.  But even the best machine is no alternative to the human brain.  What are machines still missing?

The answer has to do with what humans intuitively and always do with facts.  From the youngest age, we use them to build mental models, heuristics, or frameworks.  An infant knows her mom.  A toddler has developed a category for moms generally, and for parents, and for authority figures.  A child has a mental model for hierarchy, sources of love, sources of food, things that are dangerous.  As pointed out by Israeli historian Yuval Harari in Sapiens, perhaps the turning point for our species was when we created categories for things that are unknown, imagined, and potentially not even real—but somehow they are, to us, still “things.”  Even young humans readily leap within, between, and across categories, associating them to create narrative strings—to turn fact into story, and to tell those stories to others.

When humans imagine things that do not exist in any corporeal sense, and orient to those “things” as though they are real, we are doing what we now call “law.”

Machines are, as of now, deeply limited by their inability to generate, sustain and manipulate a model of the universe.  Judea Pearl, director of the Cognitive Systems Laboratory at UCLA, writes the following in Chapter 2 of Possible Minds (pg. 17):

Current machine-learning systems operate almost exclusively in a statistical, or model-blind, mode, which is analogous in many ways to fitting a function to a cloud of data points.  Such systems cannot reason about “What if?” questions and, therefore, cannot serve as the basis for Strong AI—that is, artificial intelligence that emulates human-level reasoning and competence.  To achieve human-level intelligence, learning machines need the guidance of a blueprint of reality, a model—similar to a road map that guides us in driving through an unfamiliar city.

Even very young humans can take a single data point and plug it into a vision of what the world is like, effortlessly making further inferences about the thing observed to reach a series of downstream conclusions about what the world must contain.  Machines can’t do this at all.  As a result, a machine can test a hypothesis, but it cannot create one.  It can win at chess, but it cannot invent chess.  And when chess’s premises are questioned – when the dog’s tail knocks over half the pieces, or your young opponent suggests allowing the rook to move in circles – a machine cannot imagine its way to an equitable resumption of chess, and it certainly cannot suggest that perhaps it’s time for another juice box.  Those actions would require the machine to imagine what its opponent would think is fair, or what she might like to do instead of chess.

Pearl outlines a three-level hierarchy of reasoning to illustrate what machines can and can’t do.

  1. Statistical: What can one fact tell us about another?  Machines can do this.
  2. Causal: What happens to system outputs if an input changes?  Machines can do this, but not as reliably.
  3. Counterfactual: What if we tried something different?  Or, what if we’re wrong?  As of now, machines cannot do this at all because machines lack a model of the universe, a heuristic or framework within which to manipulate facts and ideas.

Pearl concludes that “human-level AI cannot emerge solely from model-blind learning machines; it requires the symbiotic collaboration of data and models.”
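
To make the gap between Pearl’s first and second rungs concrete, here is a minimal sketch in Python.  It is a toy example of my own, not drawn from Possible Minds: a hidden factor influences both an input and an output, so a purely statistical, model-blind fit overstates what actually changing the input would do; only a model of how the variables relate can reveal the difference.

```python
# A toy illustration (my own, not from Pearl's book) of the statistical vs. causal rungs.
# Assumed structure: a hidden factor Z drives both X and Y, and X also drives Y.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

Z = rng.normal(size=n)                        # hidden confounder
X = 2.0 * Z + rng.normal(size=n)              # input, partly driven by Z
Y = 1.0 * X + 3.0 * Z + rng.normal(size=n)    # output: true effect of X on Y is 1.0

# Rung 1 (statistical, "model-blind"): fit a line to the cloud of (X, Y) points.
# Because Z lurks behind both, the fitted slope (~2.2) overstates X's true effect.
slope_statistical = np.polyfit(X, Y, 1)[0]

# Rung 2 (causal, "what if we set X?"): intervene so X no longer depends on Z.
X_set = rng.normal(size=n)
Y_set = 1.0 * X_set + 3.0 * Z + rng.normal(size=n)
slope_interventional = np.polyfit(X_set, Y_set, 1)[0]   # ~1.0, the true effect

print(f"statistical: {slope_statistical:.2f}  interventional: {slope_interventional:.2f}")
```

The data alone cannot tell the machine which of the two numbers answers the “what if?” question; the model of how Z, X and Y relate does that work.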

What humans are

In January 1992, I took an intensive four-week course in ethnomethodology, and for two weeks I was unable to think straight.  Ethnomethodology’s core observation is that the only thing humans reliably do with each other is to classify facts into a mutually acceptable order.  In other words, whatever else we may be, humans are classifiers and story-tellers, and we do it all day long.  There’s a good argument that we do little else.

This insight, once fully processed, can be unsettling – perhaps especially at the tender age of 19.  The point however is not to undercut the “realness” of our stories and classifications, but only to observe that constructing and maintaining those classifications is the essential human activity.  We (“ethnos”) have ways (“methods”) of creating our reality, and those methods involve classification, narrative, rules, and a good deal of fudging around the edges to make everything “make sense.”  (And yes, ethnomethodologists know that they are doing what they describe.)

We are arguably at our most human—and machines at their greatest disadvantage—when we manipulate the most abstract and creative categories and then apply them to our corporeal world.  Consider the following familiar passage:

You are the light of the world—like a city on a hill that cannot be hidden. No one lights a lamp and then puts it under a basket. Instead, a lamp is placed on a stand, where it gives light to everyone in the house. In the same way, let your good deeds shine for all to see, so that everyone will praise your heavenly Father.

Our ability to take this passage and apply it by, say, volunteering at a homeless shelter or doing pro bono work for an Appalachian entrepreneur is light years beyond what a machine can do, precisely because it requires an abstract world-model that includes, among many other things, a conjecture about what the speaker might have thought we would think he meant.  This is no calculation of probabilities; it is something else.

Nick Chater, a professor of behavioral science, makes a related and more radical argument about the nature of humans in The Mind is Flat: The Remarkable Shallowness of the Improvising Brain.  He presents evidence that our perception of mental depth is an illusion; that humans are masters of serial improv, conjuring and renegotiating stories and their constituent categories at all times.  Chater claims that we do not do this on deep psychological footings, but as a continuous string of thoughts and experiences.  We have no id, ego or superego: we just supply connective tissue to facts by conjuring frameworks.  We are a city on a hill; we are deeply hurt; we are seed scattered on different kinds of soil; we are thinking we need a change of pace; we are going to get in shape this year; we are going to turn it around in the fourth quarter.

All of this requires an abstract mental world-model, and machines don’t have it.  Or is it just that they don’t have it yet?

Humans, frameworks and facts

Lawyers take this basic human behavior of putting facts into categories and intensify it to a professional level.  In this lies our special ability to ruin every pleasant conversation: “What’s my favorite chocolate bar?  Well, what constitutes a chocolate bar?  Does Reese’s count?  It’s a circle.”  For the truly committed lawyer, nothing is safe.  Sharpening the edges of each category and renegotiating its contents based on a withering consideration of facts is why lawyers might just be safest to marry each other.  Regular humans just can’t take it.

Some facts fit easily into categories, without much fuss from judges, regulators or counter-parties.  That’s “commodity legal work.”  More serious pedantry is trained on facts that clients ask us to jam into unclear or competing frameworks, like how to treat indigenous fishing rights under the Fisheries Act when there’s also the Treaty of 1752 to consider.  And as we all know, lawyers get paid the real money when facts are worth a lot and categories are ill-defined: for example, what is the “correct” treatment of the payment rules in Apple’s App Store under the 768 words that comprise the Sherman Act?

Nowadays, machines are the undisputed champions of fact storage and recall – as long as the facts are formatted just so.  The promise of AI is to take machines beyond this, and many examples are discussed in AI For Lawyers.

  • Increasing the percentage of target contracts analyzed in due diligence at different price points (pg. 31);
  • Instructing AI software on the characteristics of relevant documents during discovery—which is a lot like instructing an associate, minus the feigned enthusiasm and social awkwardness (pg. 101); and
  • Analyzing a million complex documents (permits, authorizations, contracts, etc.) in the course of an epic multinational reorganization using only ten FTEs and some custom-trained AI software (pg. 36).

Each of these examples reflects a labor-saving application of AI, but the last bullet has a telling postscript.  In the transaction in question, Daimler AG believed they had a million active documents to review.  They planned to sample a small percentage of them because humans couldn’t possibly review them all.  Once the work was AI-enabled, the team set out to analyze the whole million, which they did.  But there actually turned out to be four million active documents, so even the AI-enabled team only hit a quarter of the documents in the end.

There’s vastly more law out there than we appreciate, and I will return to this when we consider the future market for lawyers.

The rocket-assisted lawyer

As mentioned above, certain well-established AI platforms have incredible power in analyzing facts, and an increasing array of AI tools now analyze law.  The core of a lawyer’s job is at the intersection of these two: we get paid to apply facts to law.  Can AI improve efficiency and accuracy at this intersection?

As discussed above, putting facts into an abstract framework is something machines can’t do, even though preschoolers are great at it.  Because we are creative, re-crafting our categories in real-time, I don’t believe machines will supplant this core human behavior.  This is why plugging messy facts into imperfect categories is the durable core of a human lawyer’s work.

But in a well-defined space, and with a human hand on the controls, AI can provide a transformative boost to a lawyer.  The example below inspired the title for this piece: Legal’s AI Rocket Ship Will be Manned.

Darren Meale

Darren Meale, an IP partner at the UK-based law firm Simmons and Simmons (full disclosure: Simmons is an AdvanceLaw firm) has created an AI-based product called Rocketeer.  Rocketeer is meant for trademark lawyers since, as you will see, an experienced lawyer is still very much needed.  But Rocketeer does what the lawyer herself could never do by instantly analyzing a well-organized set of facts against a very large but well-organized corpus of case law.

Here’s the basic functionality.  Where a lawyer is assessing whether one trademark will be held by the European Union Intellectual Property Office (EUIPO) to create a likelihood of confusion with another, Rocketeer provides a rather powerful assist to the lawyer by instantly reading all 10,000 reported decisions of the EUIPO in light of the two marks in question.  The tool predicts how the EUIPO would rule, but it does a good deal more than that.

AI tools are normally trained to do this sort of work based on pure probabilities.  Rocketeer could, using existing AI techniques, read all 10,000 EUIPO cases and make its prediction with no explanation of why the EUIPO would come out this way or that.  If the AI is, say, 93% accurate (as Rocketeer is), we would be left with an “explainability problem,” which is to say, we have no idea why the machine is wrong 7% of the time.  The machine is simply observing all available data, doing its statistical work, and getting it 93% right.

In many areas, this explainability problem doesn’t matter.  It’s fine to have an unexplainable 7% error rate if you’re using an AI tool to invest in the stock market or to make a bet on which chemotherapy regimen to pursue.  93% accuracy is awfully good, and the outcome doesn’t depend on why it’s not 100% accurate.  In those contexts, the value of explaining a tool’s inaccuracy is mainly so engineers can make it more accurate in the future; the user doesn’t care.

Unexplainable results are distinctively unacceptable in law, however.  As a human endeavor of crafting and negotiating categories, law is centrally about the ability to explain—to have a rationale, to tell a story, to fit facts into categories.  Unlike stock market bets or cancer treatments, a legal answer is only “right” when other humans—counter-parties, judges, regulators—accept the explanation of why it is “right.”  Contentious matters exist, by definition, where humans disagree on what constitutes the “correct” explanation, so it is no answer to say to a judge, “Your honor, my client’s trademark does not create a likelihood of confusion here because the computer is 93% sure of it.”

Explainability problems are ubiquitous with AI. So, what to do?

Rocketeer is a lawyer’s assistant, not her replacement.  The software is designed with the knowledge that it is 7% imperfect—which, by the way, is likely a good deal more accurate than any human lawyer in this context.  Rocketeer shows its lawyer-master the work and points to data that contextualizes its potential accuracy or inaccuracy.  The lawyer can then dig in and directly assess why Rocketeer’s prediction is right or wrong.  In the end, the tool improves explainability over the work of a human-only legal team, since humans can’t read all 10,000 EUIPO decisions to begin with.  That’s the rocket-assist potential of AI.
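
To make the “shows its work” idea concrete, here is a minimal sketch of the general pattern in Python.  It is emphatically not Rocketeer’s actual architecture, which is not described here; the corpus, labels and function names are hypothetical placeholders.  The point is only that a tool can return the most similar past decisions alongside its prediction, so the lawyer has something to read, assess and explain.

```python
# A minimal sketch of "prediction plus precedents" -- hypothetical, not Rocketeer's design.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

past_decisions = ["...text of decision A...", "...text of decision B..."]  # placeholder corpus
outcomes = [1, 0]  # 1 = likelihood of confusion found, 0 = not found (placeholder labels)

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_decisions)

predictor = LogisticRegression().fit(X, outcomes)   # the predictive component
index = NearestNeighbors(n_neighbors=2).fit(X)      # the "show your work" component

def assess(new_matter: str):
    x = vectorizer.transform([new_matter])
    probability = predictor.predict_proba(x)[0, 1]   # e.g. 0.93 that confusion is found
    _, idx = index.kneighbors(x)                     # closest past decisions
    precedents = [past_decisions[i] for i in idx[0]]
    return probability, precedents                   # a number *and* the cases to read

p, similar_cases = assess("...description of the two marks in question...")
```

The lawyer, not the tool, still does the explaining; the tool just narrows ten thousand decisions down to the handful worth reading.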


(On the rocket-assist theme, if you’ve made it this far, you deserve a thematic video break: what does a rocket-assist look like in real life?  This is a nice image for the transformative power of AI, supplied by a rocket-assisted C-130 in U.S. Marines Blue Angels livery.  Didn’t know a transport plane could do that, did you?)


Rocketeer is a helpful example of AI at the core of what lawyers do for three reasons:

  1. It no more attempts to replace lawyers than Henry Ford attempted to replace auto workers.
  2. It nevertheless takes a first step into the core of what lawyers do by narrowly tailoring the fact-and-law intersection to allow a machine to analyze a legal problem in full, rather than analyzing only facts, or only law.
  3. It approaches AI’s great explainability weakness directly by requiring a human to do the explaining—which, at least in the field of law, only humans can do to the satisfaction of other humans in truly marginal cases.  And marginal cases are where lawyers get paid.

Efforts like Rocketeer could lead, innovation by innovation, to AI tools that match facts to law in far more complicated fields.  But the path to that end goal starts with clean facts and well-demarcated law, just as the path to a fully autonomous (“Level 5”) self-driving car starts on well-marked streets.  Teaching AI to drive off-road will be a much more complicated task; so will teaching it to assess a getaway driver’s legal exposure after the aborted heist of a bank full of nuns.  Facts are messy: real life is like driving off-road, and that’s why lawyers get paid.

Demand for human lawyers: a Jevons Paradox

If we observe that AI has become a powerful labor-saving device not only for analyzing facts and law separately but also for analyzing them together, we can return to the opening question of this piece: Will the world still need lawyers once AI gets really good?

Without returning to the auto industry for long (as I have been known to do, see Post 195, and Bill recently did in Post 231), let me remind us all of a familiar story.  In 1907, most would-be automobile trips were never taken because, given the eye-watering expense of a hand-made car, there were only 1.65 cars per 1,000 people in America.  In 1908, Ford introduced the Model T.  By 1930, Ford and his competitors had grown the number of cars per person over 100-fold, making automobile trips a daily occurrence in the fast-growing middle class.  (For the record, there are now over 800 vehicles per 1,000 people in America – indeed, a “‘chicken in every pot,’ and a car in every backyard to boot.”)

This caused an explosion of employment in the auto industry, catapulting Detroit from 13th-largest city in America in 1907 to fourth-largest by 1930.  This massive increase in employment happened because of the application of a raft of labor-saving devices.

When a labor-saving device results in an increase in the amount of labor employed, it is known as a Jevons paradox.  This is discussed in Chapter 2 of AI For Lawyers.  Ford made cars affordable and demand for cars exploded. There’s nothing paradoxical about this: demand curves slope down (well, almost always).  So the question is whether the increase in demand that results from a more affordable offering is greater than the decrease in unit labor caused by the labor-saving device.

In plain language, a Jevons paradox can happen when something is prohibitively expensive—and then it gets cheap.  At that point demand explodes, and employment can explode along with it, precisely because labor-saving devices are what made the thing cheap in the first place.
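
For the arithmetic-minded, the condition is easy to state.  Here is a back-of-the-envelope sketch; the numbers are illustrative assumptions, not figures from the book or from legal-market data.

```python
# Total labor employed = units demanded * labor per unit.  A labor-saving device only
# shrinks total employment if demand fails to grow enough to offset the per-unit saving.
# Illustrative numbers only.
hours_per_matter_before, hours_per_matter_after = 100.0, 20.0   # a 5x labor saving
matters_before, matters_after = 1_000, 10_000                   # demand grows 10x as price falls

total_hours_before = matters_before * hours_per_matter_before   # 100,000 hours
total_hours_after = matters_after * hours_per_matter_after      # 200,000 hours

print(total_hours_after > total_hours_before)  # True: more total labor employed, the Jevons outcome
```

Whether law lands on that side of the inequality depends entirely on how much latent demand a cheaper offering unlocks.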

So, is there a lot of unrealized demand for law?

Recall the Daimler AG reorganization story above: the team reviewed the “full universe” of a million documents only to discover that there were actually four million.  If the team had known this upfront, they would have used AI to review all four million.

Closer to the Model T example, is there unrealized demand for legal services in the broader population?  Over half of Americans have no will.  Fifty to ninety percent of litigants are unrepresented.  Entrepreneurs regularly go without incorporating their businesses or papering their contracts, and employees rarely negotiate or contest their terms of employment.  The list goes on.

When is the last time you had something that could be a legal issue?  Did you hire a lawyer?  Very few lawyers could afford to hire themselves, just as the laborers who hand-made cars prior to the advent of the Model T couldn’t remotely afford to buy one of the cars they made.  Part of the magic of Henry Ford was that he could afford to pay his employees enough to buy the very products they made.  That created much of the middle class that by 1930 was driving cars all over America.

So, AI should increase demand not only for legal services, but possibly also for lawyers.  If that happens, legal’s AI rocket ship will not just be manned—it could employ throngs of new rocketeers.