Hal, Val, and the lawyer governance problem that’s hindering AI in law


Oscar Reutersvärd is the “father of the impossible figure.”  Some of his impossible figures are captured on the Swedish stamps shown above.  The figures are, of course, quite possible — they’re just ink on paper.  But our brains turn quickly from seeing some shapes to the “realization” that they are “impossible” because the 3-D world our minds are trying to construct cannot exist.

Our powerful, broken minds

The problem is in our brains, of course.  Not only do humans use analogy and inference to build world models, as I discussed in the first two installments of this book review series on AI (Posts 232 and 237), but we do so involuntarily.  (Part III of this four-part series is Post 250, which focused on the opportunities and challenges of expert systems.)

Nick Chater, a professor of behavioral science at Warwick Business School, articulates this masterfully in The Mind is Flat: The Remarkable Shallowness of the Improvising Brain (2018).  Our brains take disconnected slivers of data through a surprisingly narrow perceptive field and use them to build a world.

Most of what we believe we are experiencing directly is an illusion, an analogy to reality that we hold in our minds.  Reutersvärd’s drawings highlight the involuntary nature of our world-building by isolating rare instances where our brains insist on trying to extend the analogy even after we have concluded that it can’t be done.

Computers aren’t confused by Reutersvärd’s drawings because they don’t have a world model.  That missing world model accounts for the chasm between machines and lawyers, since analogy, inference, and world-model building are core to what lawyers do.

The mental glitch illustrated above reflects both an asset and a liability: we fill in the blanks creatively, preconsciously inserting things that may or may not be there at all.  That’s our great gift, and it’s what humans have to contribute to any process involving people and machines.  But it comes at the expense of reliability: because we creatively build a world that our own minds find totally convincing, we are capable of making and sustaining major errors without dissonance.  We also have terrible memories.

Our minds are broken, and yet also extraordinarily capable.

Our powerful, broken machines

Computers don’t think the way we do in any respect, and they therefore lack the ability to do many things that seem simple to us.  But a machine’s way of “thinking” is highly reliable.  A computer can capture and return information with near-absolute fidelity, which is very different from my brain – and probably yours, too.  Even the iPhone MCMLXXXIV won’t be able to play Pictionary, but it will be able to store and recall with fidelity every picture ever drawn.  (For an illustration of the difficulty our brains have with even one instance of image recall: first, imagine a picture of a tiger.  Now, count the stripes on her tail.)

As AI techniques like deep neural networks are brought to bear, they can achieve things we simply couldn’t, like finding patterns in millions of pages of documentation or predicting rain with precision.  Computers have great data recall, and they may come to have great data precall, if you will, in the right instances. But they do so by applying totally different “thinking” techniques.  Among other things, machines have no ability to check their conclusions against a mental image or analogy of the world.  Their inability to run this sort of “common sense” audit of conclusions partly explains why even the best image recognition tools can be manipulated to think a tiny insect is a manhole cover or conclude with 99% certainty that a university building is a triceratops dinosaur.  There’s no ability to error-correct by imagining a context into which these surprising conclusions can be placed, because computers can’t imagine anything.  

Our machines are broken, and yet also extraordinarily capable.

Hal, Val, and the absence of process design

Computers will replace associates when a partner can talk about an issue in her office for six minutes, scrawl twelve words onto a yellow legal pad, hand it to Hal the machine associate, and know that the project will be carried out just as badly as it would have been by Val, a recent graduate of Yale Law School.

Kidding – not kidding, as the kids say.  Hal would screw up the assignment differently than Val would.  We can expect Hal’s achievements to be more impressive and his mistakes far more bizarre.  Val’s mistakes will be more acceptable under the standards of common sense because they will happen when she fills in the rest of that yellow legal pad with a framework that turns out not to match what the partner will insist she clearly wanted.

The problem here is not with Hal or Val, of course.  It’s with the partner.  And we can go one level up and say that the source of the problem is the system, or the absence of one, for specifying process and outcome in legal work.  Hal and Val are both broken machines, neither entirely suited to the task — Val for her imprecise recall and inability to read thousands of pages per second, and Hal for his total lack of analogical reasoning and the resulting absence of all common sense.

But since Val has at least a prayer of “correctly” construing the assignment by filling in the blanks, legal leaders remain shockingly lazy about design. Lawyers build first and design later, preferring to see how it goes with the first few rows of bricks before pausing to decide whether the client needs a cathedral or an outhouse.

To the extent we’re only working with human lawyers, and perhaps because they are getting paid for hours rather than outcomes, we can get away with scrawling a few words on the yellow legal pad and calling it a plan.  It’s a terrible practice that gives rise to all the Keystone Cops episodes at law firms. But its continuing ubiquity suggests that the practice isn’t fatal.

Remarkably, for all their power, AI tools have no ability to self-correct; even a human associate’s fidelity to a partner’s vision is certainly improved when a design is in place before she is asked to start laying bricks.  The winners in AI implementation won’t be the firms that are “good with technology” or even necessarily “innovative.”  The winners will be those that master the act of pairing Hal’s genius with Val’s genius while steering around each of their stark shortcomings.  They will be the firms with strong process design and process control.  Cf. Post 248 (Rob Saccone making the point that AI has the potential to be a new and powerful form of leverage with enormous cost and quality advantages that would wipe out late adopters).

Weaving together these dissimilar capabilities won’t be easy, but the design principles might be as simple as this:

  1. If analogy and metaphor are essential to the task, it’s for the human
  2. If analogy and metaphor can be eliminated from the task, it’s for the computer

Working out what precisely that means in a response to an FTC Second Request under Hart-Scott-Rodino is likely a very granular exercise.  And it’s one that law firms aren’t generally great at.

In fact, law firms might not even be capable of it.

Law’s broken governance and the future of AI

Lawyers are, on average, bad at process design and control in the way lawyers are, on average, bad at everything: some are great at it and the rest pretend they’ve never heard of it.  

The trouble with this kind of variability, beyond the obvious, is that process is a team sport, and the process is only as strong as its weakest link.  (For a good business novel — no, really — that explores this idea in detail, check out Eliyahu Goldratt’s The Goal.)  A lawyer might be great at process control, but a group of lawyers is certainly terrible at it.  See Post 241 (Alex Hamilton of Radiant Law arguing that only a greenfield law firm will be capable of fully embracing process, and literally betting his career on it).

Couldn’t a group of lawyers all pinky-swear to commit themselves to the disciplines of process design and control?  Well, in theory, yes.  But in practice, law firms cannot govern themselves the way any other sort of business can.  A law firm is like a shopping mall whose stores can leave, taking their customers with them, when the mall’s rules no longer suit them.  The mall is careful to keep its best tenants happy.  This is euphemized as “collegiality.”   

For the advancement of the profession, this kind of “collegiality” might need to be euthanized.

Decentralized power and centralized process control cannot coexist, and this may be lethal to law firms’ effective adoption of advanced technologies. Advanced technologies require well-articulated and consistently enforced processes to control the variability that comes from pairing them with humans — and law firms are not built for the consistent enforcement of anything.  

Could a firm’s partners vest real authority in a Chief Process Officer?  Sure.  They might even abide by that officer’s rules, even after realizing how much autonomy they had given up.  But at some point, a partner with a lot of clients would leave for a firm with no such constraints, and a reckoning would shortly follow.

Perhaps this creates an opening for non-law firms, or perhaps for law firms with very different governance.  Cf. Post 241 (Alex Hamilton urging patience on those looking for the rise of the vertically integrated law firm that combines law with data, process, and technology).  Either way, I expect the winners in an AI-enabled legal market to be the organizations that can figure out and agree to governance that can sustain the kind of process necessary to fully exploit a portfolio of powerful, broken machines.