
LLM
/ɛl ɛl ɛm/ (also “large language model”) (n.)
Once upon a time, an LLM was a “Master of Laws”: the postgraduate mark of the sensei in the society of legal services. Well — a mark either of that, or of the indolence of one not prepared to strike out and put what she has learned into practice — but still: it spoke to perseverance, depth, comprehension and mastery, however pigeon-hearted its motivation.

Now all one needs for that kind of expertise, we are told, is a different kind of “LLM”: a “large language model”. Artificial intelligence rendered by a pattern-recognising, parallel-processing chatbot.

The legal profession is to ChatGPT, we hear, as poor old Chrissie Watkins was to Jaws.

But there have been contumelious rumours of its demise before. In the manner of a blindfolded dartsman, Professor Richard Susskind OBE has been tossing them around for decades. Just by random chance you would expect one to hit the board at some point.

Is this it? Will it be ChatGPT that does for our learned friends what the meteor did to the dinosaurs?

We are not convinced. Before going nap on this prediction, first ask: “cui bono?”

Cui bono

Who benefits, primarily, from this emergent technology?

It remains to be seen. But experience should tell us that usually, in situations like this, the first person to benefit — and the last — is the lawyer.

Now.

It is a truism that she who has a tool uses it, firstly, to improve her own lot.

A commercial lawyer’s “lot” is predicated on two things:

(1) time taken.
(2) ineffability: the sense that what she does “passeth all muggle understanding”.

It is a happy accident that, generally, (2) begets (1): the more ineffable something is, the longer it takes to write, and the harder it is to work with. The longer it takes, the more you can charge.

Commercial legal contracts take a long time to write and, once they have calcified into templates, are hard to work with. This is a capital state of affairs. Hence, no commercial law firm on the planet really cares for plain English. Oh, they all say they do, of course — but come on. Have you ever read law firm boilerplate?

This also is, in itself, a neat “simplification defeat device”: if you make a contract template sufficiently convoluted, the one-off cost of simplifying it so vastly outweighs the cost of just “tweaking” it that no-one ever takes that first step to simplify. Even though they dwarf the upfront costs, the long-term cost savings always lie on the far side of that short-term hump.

But here’s the thing: it will be lawyers who start to use ChatGPT as a tool, not their clients. Why? Because of that ineffability. ChatGPT is a pattern-matching device. It understands nothing. It cannot provide unmediated legal advice. It can only ever be a “back-breaker”: the “last mile” needs a human who knows what she is doing and understands the context and the complicated human psychology at play in the cauldron of commercial negotiation. An LLM can draw pretty figures, but it cannot do that. Nor can it write legal opinions — well, meaningful ones — and nor, unmediated, does it have the insurance policy or deep, suable pockets for which a client is paying when it asks for one.

An LLM can only be deployed, that is to say, by someone with skin in the game; who puts herself in jeopardy by accepting the assignment, which jeopardy she defends by the simple expedient of knowing what she is doing.

That someone will be a lawyer.

Now such a “last mile” lawyer could use an LLM to simplify documents, accelerate research and break legal problems down to significant essences, thereby reducing the cost, and increasing the value, of her service to her clients. And sure, in theory, she could give all this value up for nothing.

But she could, just as easily, use an LLM to further complicate the documents: to overengineer, to convolute language, invent options and cover contingencies of only marginal utility: she could set her tireless symbol-processing engine to the task of injecting infinitesimal detail: she could amp up the ineffability to a level beyond a normal human’s patience.

Which of these, realistically, do we expect a self-respecting lawyer to do? Simplify, or complicate? To sacrifice time and ineffability for the betterment of her clients and the greater comprehension of the unspecialised world? Or would she plough her energy into using this magical new tool to generate more convolution, ineffability, and recorded time? She would do the latter with only the best intentions, of course; this is not lily-gilding so much as a noble outreach toward perfection: using the arsenal at her disposal to reach ever closer to the Platonic form.

Cynical, or just realistic? Foretellers of Armageddon must explain away some difficult facts: that the commercial-legal industrial complex has stubbornly resisted all attempts at simplification and disintermediation for a generation, notwithstanding the thought-leadership, the regulatory prompting, the appeals to logic and the 40 years of technology — Microsoft Word, mainly — which the world’s lawyers could have used, powerfully, to simplify and minimise the legal work product.

Not only did the industry not simplify, it made everything more complicated. Documents became longer. Boilerplate blossomed. Templates flowered. Every contract has a counterparts clause. If in doubt, it goes in.

Why should a difference engine designed to generate plausible-sounding but meaningless text be used to do anything different? You can see the effect LLMs are having on legal work product. NDAs are getting longer, worse, and are being systematically riven with the same generic convolutions. These are usually fripperies, and in some cases outright misconceived, but as they recur ever more frequently, as the LLMs hone their models, they become harder and harder for the meatware to resist. The meatware, remember, has limited patience with NDAs, understanding in a way that a chatbot never will how much of a pantomime they are. Algorithms, on the other hand, have unlimited patience and boundless energy. If negotiation comes down to who blinks first, we should bear in mind that LLMs don’t blink.

ChatGPT may disrupt a lot of things, but it won’t be disrupting the legal profession any time soon.

Who’s the client? Oh, right: she’s a lawyer, too.

“But, JC, come on. Be realistic. It is dog-eat-dog out there. Any lawyer who keeps the bounty of the LLM from her clients will soon have her lunch eaten by others who won’t. You cannot fight the invisible hand.” We race to the bottom.

But do we? Ignoring how impervious to the invisible hand all other recent technologies have been, just remember who the clients are, and what their interests are. Consumers of high-end commercial legal services are not, generally, the permanently bamboozled muggles of common myth. Most are themselves lawyers, inhabiting weaponised legal departments staffed mainly by veteran deal lawyers.

These people also take pride in their ability to work with difficult, complicated things. This is how they prove their worth to their employers, too. Lawyers and clients, that is to say, have a common interest in convolution for its own sake. Lawyers — in-house or out — are the jazz aficionados of text; cinéastes of syntax. They expect overwrought contracts: nothing says “prudent management of existential risk” like eighty pages of 10pt Times New Roman.

Plain English is not for serious people.

Meet the new boss —

We don’t doubt that the LLM is coming, nor that the legal industry will find a use for it: we doubt only that it will be a useful, sustained one. It feels more like a parlour trick: surprising at first, diverting after a while, but then the novelty wears off, and the appeal of persevering with what is basically a gabby but unfocussed child pales.

The traditional legal model faces existential challenges, for sure, but they are not presented, and will not be addressed, by random word generators.