
LLM
/ɛl ɛl ɛm/ (also “large language model”) (n.)
Once upon a time, an LLM was a “Master of Laws”: the postgraduate mark of the sensei in the society of legal services. Well — either that, or of the indolence of one not prepared to strike out and put what she has learned into practice — but still: it spoke to perseverance, depth, comprehension and mastery, however pigeon-hearted its motivation.

Now all one needs for that kind of expertise, we are told, is a different kind of “LLM”: a “large language model”. Artificial intelligence rendered by a pattern-recognising, parallel-processing chatbot.

The legal profession is to ChatGPT, we hear, as poor old Chrissie Watkins was to Jaws.

But there have been contumelious rumours of its demise before. In the manner of a blindfolded dartsman, Professor Richard Susskind OBE has been tossing them around for decades. Just by random chance you would expect one to hit the board at some point.

Is this it? Will it be ChatGPT that does for our learned friends what the meteor did to the dinosaurs?

We are not convinced. Those making this prediction do not ask the question: “cui bono?”

Who benefits, primarily, from this emergent technology?

It remains to be seen. But experience should tell us that usually, in situations like this, the first person to benefit — and the last — is the lawyer.

Now.

It is a truism that she who has a tool uses it, firstly, to improve her own lot.

A commercial lawyer’s “lot” is predicated on two things:

(1) time taken.
(2) ineffability: the sense that what she does “passeth all muggle understanding”.

It is a happy accident that, generally, (2) begets (1): the more ineffable something is, the longer it takes to write, and the harder it is to work with. The longer it takes, the more you can charge.

Commercial legal contracts take a long time to write and, once they have calcified into templates, are hard to work with. This is a capital state of affairs. Hence, no commercial law firm on the planet really cares for plain English. Oh, they all say they do, of course — but come on. Have you ever read law firm boilerplate?

This also is, in itself, a neat “simplification defeat device”: if you make a contract template sufficiently convoluted, the one-off cost of simplifying it so vastly outweighs the cost of just “tweaking” it that no-one ever takes that first step to simplify. Even though the long-term savings dwarf the upfront costs, they always lie on the far side of that short-term hump.

But here’s the thing: it will be lawyers who start to use ChatGPT as a tool, not their clients. Why? Because of that ineffability. ChatGPT is a pattern-matching device. It understands nothing. It cannot provide unmediated legal advice. It can only ever be a “back-breaker”: the “last mile” needs a human who knows what she is doing and understands the context, and the complicated human psychology, at play in the cauldron of commercial negotiation. An LLM can draw pretty figures, but it cannot do that. Nor can it write legal opinions — well, meaningful ones — and nor, unmediated, does it have the insurance policy or deep, suable pockets for which a client is paying when it asks for one.

An LLM can only be deployed, that is to say, by someone with skin in the game: someone who puts herself in jeopardy by accepting the assignment, which jeopardy she defends by the simple expedient of knowing what she is doing.

That someone will be a lawyer.

Now such a “last mile” lawyer could use an LLM to simplify documents, accelerate research and break legal problems down to significant essences, thereby reducing the cost, and increasing the value, of her service to her clients. And sure, in theory, she could give all this value up for nothing.

But she could, just as easily, use an LLM to further complicate the documents: to overengineer, to convolute language, invent options and cover contingencies of only marginal utility; she could set her tireless symbol-processing engine to the task of injecting infinitesimal detail; she could amp up the ineffability to a level beyond a normal human’s patience. She will do this with only the best intentions, of course; this is not lily-gilding so much as a noble outreach toward perfection: using the arsenal at her disposal to reach ever closer to the Platonic form.

Which of these, realistically, do we expect a self-respecting lawyer to do? Simplify, or complicate? To sacrifice time and ineffability, for the betterment of her clients and the general comprehension of the unspecialised world? Or would she plough her energy into using this magical new tool to generate more convolution, ineffability, and recorded time?

Remember: for 40 years we’ve had technology — Microsoft Word, mainly — which the world’s lawyers could have used, powerfully, to simplify and minimise the legal work product. Did any of them do that?

You can already see the effect LLMs are having on legal work product. NDAs are getting longer, and worse.

ChatGPT may disrupt a lot of things, but it won’t be disrupting the legal profession any time soon.

Bear in mind who ChatGPT would be disrupting in this case. Two things about consumers of high-end commercial legal services:

(1) Most of them — us — are lawyers.
(2) As lawyers they — we — take pride in the ability to work with difficult, complicated things. Convolution is a measure of our worth. The love of convolution for its own sake, for what it says about us, is a value lawyers and their commercial clients hold in common.

Lawyers — in-house or out — are the jazz aficionados of text; cineastes of syntax. Overwrought contracts are expected: nothing says “prudent management of existential risk” like forty pages of 10pt Times Roman. Plain language is not for serious people.

That is to say, neither fee-earning lawyers nor their immediate clients want plain contracts. If they did, we would already have them.