Q. In the simplest terms possible, what is the core functionality of AI in legal?


It often seems that most of the legal profession thinks that AI is a box of pixie dust that you can sprinkle over any data set with the right incantation and voila, produce your desired result.

Most lawyers are confused about Artificial Intelligence (AI). While typically experts at adopting new language and using it correctly in context to convey sophistication around concepts not squarely in their wheelhouses, AI tends to be the exception. As evident in industry conversations, the confusion boils down to what AI does, and how, as applied to legal practice.

We should clear this up. To do so, let’s take all the sex and rock ’n’ roll out of it and strip AI to its core…

Are you ready? I’ll make this simple…

AI in legal practice is better search and find.

That’s it. AI in legal is “Control + F” on steroids.

The steroid aspect is that this better search and find functionality is trained on various kinds of data, housed in various formats. We are no longer limited to the world of text in Microsoft Word, or search and find within one document at a time.

For example, CARA A.I. uses the facts and legal issues in documents you upload to search a database and find relevant cases and other authorities. In concept, it operates as if you entered your facts and issues in a “Control + F” box and hit “Enter,” but instead of searching your documents this initiates a search across legal opinions.
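To make the “Control + F on steroids” idea concrete, here is a minimal, hypothetical sketch of ranked retrieval in Python. The scoring method (term overlap weighted by term rarity) and the toy corpus are my own illustration, not how CARA A.I. or any other vendor actually works.

```python
# Illustrative sketch only: "Control + F on steroids" as ranked retrieval.
# The corpus and scoring below are hypothetical, not any product's actual method.
from collections import Counter
import math

def tokenize(text):
    return [w.lower().strip(".,;:()\"'") for w in text.split()]

def rank_opinions(query_doc, opinions):
    """Rank a corpus of opinions by weighted term overlap with an uploaded document."""
    query_terms = Counter(tokenize(query_doc))
    tokenized = [set(tokenize(op)) for op in opinions]
    n = len(opinions)
    # Document frequency: rarer terms carry more weight in the score.
    df = Counter()
    for terms in tokenized:
        df.update(terms)
    scores = []
    for i, terms in enumerate(tokenized):
        score = sum(count * math.log((n + 1) / (df[t] + 1))
                    for t, count in query_terms.items() if t in terms)
        scores.append((score, i))
    # Return indices of matching opinions, best match first.
    return [i for score, i in sorted(scores, reverse=True) if score > 0]
```

A plain “Control + F” only finds exact strings in one document; this version scores every document in a corpus against everything in your query at once, which is the conceptual upgrade described above.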

Kira is a high-volume “Control + F” option for agreements. Relativity is a well-known option for discovery documents.

These “AI-powered” products require different training mechanisms on different data sets that produce different applications, but they all rely on the same core functionality – search and find.

Q. But what about document automation? Isn’t that different than search and find?

Nope, AI’s functionality for document automation is still search and find.

Robotic Process Automation (which copies the way humans perform tasks) is the “brawn” of compiling documents based on predefined tasks. In this context, AI (which mimics the way humans think) can be used to identify and recommend templates and clauses, which is just another way of saying that AI’s contribution is to search a document or clause bank and find relevant results.

Q. And legal analytics? Surely AI in that context is different than search and find?

No, AI’s functionality for legal analytics is still search and find.

The most well-known example of legal analytics is with respect to litigation outcomes: accumulating data points from past case law on win/loss rates specific to the judges, parties, and law firms involved. Companies use AI to search and find metadata (e.g., judge name, counsel name) within opinions. That data is extracted and placed into a structured data set (think Excel) to tally results.

Visualization software is then used on that structured dataset to produce the analytics dashboards you’ve likely seen. The “so what?” is then left to a human’s intelligence to decipher. AI could play a more significant role in this “so what” analysis, but again, it would do so by searching data and, based on training and learning inputs, find and offer relevant insights.
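The extract-and-tally step described above can be sketched in a few lines of Python. The regex pattern, field names, and outcome labels here are hypothetical illustrations of the technique, not any analytics product’s actual schema.

```python
# Hypothetical sketch of the extract-and-tally step behind litigation analytics.
# The patterns and fields are illustrative, not a real product's schema.
import re
from collections import Counter

def extract_metadata(opinion_text):
    """Pull structured fields (judge, outcome) out of raw opinion text."""
    judge = re.search(r"Before (?:Judge|Justice) ([A-Z][a-z]+)", opinion_text)
    # Crude outcome label for illustration only.
    outcome = "affirmed" if "affirmed" in opinion_text.lower() else "other"
    return {"judge": judge.group(1) if judge else None, "outcome": outcome}

def tally_by_judge(opinions):
    """Aggregate extracted rows into outcome counts per judge (the 'Excel' step)."""
    counts = {}
    for text in opinions:
        row = extract_metadata(text)
        counts.setdefault(row["judge"], Counter())[row["outcome"]] += 1
    return counts
```

The output of `tally_by_judge` is exactly the kind of structured data set that visualization software then turns into dashboards; the search-and-find work is all in the extraction.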

Q. How does AI confusion play out in practice?

Now that we’ve defined AI in legal in its simplest form, I offer an illustration of how AI confusion plays out in practice.

The example starts with a conversation with a politically powerful and smart senior partner at a top law firm. He was mesmerized by the shiny promises of AI, with daydreams of press releases, awards, and banners touting a sophisticated tool with his name on it. Imagine all the client calls that would follow. He wanted his AI tool now.

He asked: “I have a professor friend at MIT who says that AI technology is sophisticated enough to do this quarterly client alert that associates are doing now. Could we find the AI to do it?”

The client alert required associates to spend hundreds of hours identifying relevant court decisions not always captured in mainstream research engines (including at the local level), making a judgment call on a specific issue, and tallying those results. While a great training mechanism, the process was a grind on associate time, and the team knew it wasn’t catching everything. There was room for quality improvement through more expansive data capture.

The MIT Prof was right: AI can search and find relevant results, identify a specific issue, extract it, make a judgment call based on inputs (which would presumably improve over time, à la Spotify-style training), and tally those results.

Where this gets lost in translation is in the high-level HOW.

The MIT Prof did not have enough context to understand that the relevant court decisions were not all digitized and neatly stored in a central research engine to draw from. You cannot solve this problem with AI. You need people (albeit not trained lawyers) to contact the courts without digitized documents, and you need web scraping or APIs for the courts that digitize their records but do not centralize them, all just to aggregate the relevant documents.
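That aggregation step is plumbing, not AI. A minimal sketch of what it looks like in code, with hypothetical court names and stand-in fetcher functions in place of real APIs, scrapers, or manual collection:

```python
# Illustrative aggregation sketch. Court names, docket fields, and fetchers
# are hypothetical; real sources would be per-court APIs, scrapers, or
# manually collected filings.
def aggregate_opinions(sources):
    """Merge documents from heterogeneous court sources, deduplicating by docket."""
    seen = {}
    for fetch in sources:               # each source is a callable returning dicts
        for doc in fetch():
            key = (doc["court"], doc["docket"])
            seen.setdefault(key, doc)   # first copy wins; later duplicates dropped
    return list(seen.values())

def api_source():                       # stand-in for a court with a digital API
    return [{"court": "D. Example", "docket": "21-100", "text": "..."}]

def scraped_source():                   # stand-in for a scraped court website
    return [{"court": "D. Example", "docket": "21-100", "text": "..."},
            {"court": "D. Example", "docket": "21-101", "text": "..."}]
```

Only once a merged, deduplicated corpus like this exists does the search-and-find layer have anything to search.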

The MIT Prof also did not explain that while AI technologies are capable of this work, that does not mean a trained, out-of-the-box technology addressing this specific issue exists. This has two implications as it relates to the buy vs. build dynamic.

First, if you find an existing product, then you have a decision to make: do we acquire this product and edge out the competition, or do we buy a license to it and match the competition? Only the acquisition approach creates a substantive differentiator, but it is also a major proposition.

Second, if there is no existing product, then you must build the tool, presumably using AI technologies. This requires time, investment, and patience, but holds out the potential for a sustainable competitive advantage.

For this partner’s use case, there was no existing product and he wanted accolades that made his practice stand out from the competition. He would have to build.

Nevertheless, misunderstanding prevailed. After meeting with a third party who walked through a detailed proposal on how they would build the proposed solution, the partner concluded that “the tech wasn’t ready.”

Numerous opportunities for learning and clarification appear in this example. I offer it here for context, so that the next time you encounter this kind of confusion around AI in the wild, you are prepared to identify it and provide helpful clarification.

Through better understanding, we can have more productive conversations and collaborations that accelerate legal evolution.


Editor’s note: For an additional lucid explanation of AI for the newly acquainted, see Dan Currell’s recent book review in Post 263. wdh.


NewLaw Fundamentals Q&A is published on the first Wednesday of each month.