Large Learning Model
LLM
/ɛl ɛl ɛm/ (also “large language model”) (n.)
Once upon a time, an LLM was a “Master of Laws”: the postgraduate mark of the sensei in the society of legal service providers — either of that, or of the indolence of one not prepared to strike out and put what she has learned into practice — but still: it spoke to perseverance, depth, comprehension and mastery, however pigeon-hearted its motivation.
If the thoughtleaderati are to be believed, now all one needs for that kind of expertise is a different kind of “LLM”: a “large language model”. Artificial intelligence rendered by a pattern-recognising, parallel-processing chatbot.
The legal profession is to ChatGPT, we hear, as poor old Chrissie Watkins was to Jaws.
But there have been contumelious prophecies of its demise before. In the manner of a blindfolded dartsman, Professor Richard Susskind OBE has been tossing them around for decades. Just by random chance, you would expect one to hit the wall at some point.
Is this big law’s Waterloo? Will ChatGPT do for our learned friends what the meteor did to the dinosaurs?
Or will the lawyers, like cockroaches, survive? Might they even turn this to their advantage?
Cui bono?
Who benefits, primarily, from this emergent technology? Experience should tell us that the first — and often the last — to benefit from legal productivity tools are the lawyers. Should we expect this time to be different?
Now, it is a truism that she who has a tool uses it, firstly, to improve her own lot. A commercial lawyer’s “lot” is predicated on two things: (1) time taken, and (2) ineffability: the sense that what she does “passeth all muggle understanding”.
It is a happy accident that, generally, (2) begets (1): the more ineffable something is, the longer it takes, and the harder it is to work with. The longer it takes, the more you can charge.
Commercial legal contracts are like that. Long, and once they have calcified into templates, fiddly. For lawyers, this is a capital state of affairs. It is why no commercial law firm on the planet really cares for plain English. Oh, they all say, they do, of course — but come on.
This is, in itself, a neat “simplification defeat device”: if you make a contract template sufficiently convoluted, the one-off cost of simplifying it so vastly outweighs the cost of just “tweaking” and living with it that few clients will ever take that first step to simplify. Even though the ongoing costs of not rationalising dwarf the one-off costs of doing so, the long-term savings are always over that hump.
And bear in mind it will be the lawyers who deploy LLMs as a tool, not their clients. Why? Because of that ineffability. An LLM is a pattern-matching device. It understands nothing. It cannot provide unmediated legal advice. It can only ever be a “back-breaker”: the “last mile” needs a human who knows what she is doing, and who understands the context and the complicated human psychology at play in the cauldron of a negotiation. An LLM can draw pretty, impressive-at-a-distance doodles, but it cannot do that. Nor can it write legal opinions — well, not meaningful ones — and nor, unmediated by a law firm, does it have the insurance policy or deep, suable pockets for which a client is paying when it seeks legal advice in the first place.
An LLM can only be deployed, that is to say, by someone with skin in the game; who is prepared to put herself in jeopardy by accepting the assignment, which jeopardy she defends by the simple expedient of knowing what she is doing and checking her LLM’s output.
That someone will be a lawyer.
Now such a “last mile” lawyer could use an LLM to simplify documents, accelerate research and break legal problems down to their essences, thereby reducing the cost, and increasing the value, of her service to her clients. And, sure: in theory, she could give all this value up to her clients for nothing.
But she could, just as easily, use an LLM to further complicate the “work product”: to overengineer, to convolute, to invent options and cover contingencies of minimal utility: she could set her tireless symbol-processing engine to the task of injecting infinitesimal detail: she could amp up the ineffability to a level beyond a normal human’s patience.
Which of these, realistically, should we expect a lawyer to do? Simplify, or complicate? Sacrifice time and ineffability, for the better comprehension of the unspecialised world? Or plough the energy this magical new tool bestows into generating more convolution and ineffability, racking up more recorded time, and building up the bulwark against the muggles?
She would do the latter with only the best intentions, of course; this is not lily-gilding so much as a noble outreach toward perfection: using the arsenal at her disposal to reach ever closer to the Platonic form.
Cynical, or just realistic? Foretellers of legal Armageddon must explain away some difficult facts: that the commercial-legal industrial complex has stubbornly resisted all attempts at simplification and disintermediation for a generation, notwithstanding the thought-leadership, regulatory prompting, appeals to logic and 40 years of enabling technology — Microsoft Word, mainly — which the world’s lawyers could have used, powerfully, to simplify and minimise the legal work product.
Not only did they not do that, they used their tools to make everything more complicated. Boilerplate blossomed. Templates flowered. Even trivial contracts acquired wording dealing with counterparts, governing the form of amendments and excluding third party rights that weren’t there in the first place.[1]
This is a perfect job for ChatGPT. Why should a difference engine designed to generate plausible-sounding but meaningless text be used to do anything different?
You can see the effect it is having on legal work product. NDAs grow ever longer, increasingly riven with the same generic ornamentations that usually range between harmless and misconceived but which are now so prevalent — they recur as the LLMs hone their model — as to become hard for the meatware to resist.
The meatware, remember, has limited patience with NDAs, understanding in a way an algorithm cannot how much of a pantomime they are. Algorithms, on the other hand, have unlimited patience and boundless energy. If negotiation comes down to who passes out first, we should bear in mind that LLMs don’t pass out.
Who’s the client? Oh, right: she’s a lawyer, too.
“But, JC, come on. Be realistic. It is dog-eat-dog out there. Any lawyer who keeps the bounty of the LLM from her clients will soon have her lunch eaten by others who won’t. You cannot fight the invisible hand. We are in a race to the bottom.”
But are we?
Ignoring how impervious to the invisible hand all other recent technologies have been, remember who the clients are. Consumers of high-end commercial legal services are not, generally, the permanently bamboozled muggles of common myth. Most are themselves lawyers, inhabiting weaponised legal departments mainly comprised of veteran deal lawyers. These are people who also take pride in their ability to work with difficult, complicated things. This is how they prove their worth to their employers.
Lawyers and their clients, that is to say, have a common interest in convolution for its own sake. They are the jazz aficionados of text; cinéastes of syntax. They expect overwrought contracts: nothing says “prudent management of existential risk” like eighty pages of 10pt Times New Roman.
Plain English is not for serious people.
Conservative motivation
Nor should we underestimate the overwhelming power of the lawyer’s intuition that what has gone before is sacrosanct.
Lawyers are the last great positivists: they understand instinctively that what has been already laid down by someone else — “posited” — is safer than anything new that they might themselves contribute. The common law, with its doctrine of precedent, is after all to all intents a divine commandment: in times of doubt, to do what has been done before.
The more authoritative the source, the more sacred it will be.
Thus, lawyers will assiduously “track the wording of legislation” to ensure their drafting matches it with utmost fidelity, notwithstanding any private reservations they may have about how it was drafted. The more ambiguous, or just plain difficult, the source text, the more assiduously we should expect lawyers to replicate it, because they fear it. They fear the limits of their own mastery.
This “positivism-through-fear” extends with equal force to established market precedents. It doesn’t matter how manifestly unfit for purpose a precedent is: the resistance to change will be strong.
Literary theory, legal construction and LLMs
Fittingly, the first chatbot was designed as a parlour trick. In 1966, Joseph Weizenbaum, a computer scientist at MIT, created “ELIZA” to explore communication between humans and machines. ELIZA used pattern matching and substitution techniques to generate realistic conversations. By today’s standards, ELIZA was rudimentary, simply regurgitating whatever was typed into it, reformatted as an open-ended statement or question, thereby inviting further input. As a session continued, the user’s answers became more specific and elaborate, allowing ELIZA to seem ever more perceptive in its responses.
ELIZA was a basic “keepy uppy” machine.
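How thin a trick this is, a few lines of Python can show. This is a minimal sketch of ELIZA-style pattern matching and substitution; the rules and reflections here are invented for illustration (Weizenbaum’s original DOCTOR script was far richer), but the mechanism is the same: match a template, swap the pronouns, and lob the user’s own words back as an open-ended question.

```python
import re

# Invented, illustrative rules: not Weizenbaum's original DOCTOR script.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r".*", "Please go on."),          # the open-ended fallback
]

def reflect(fragment: str) -> str:
    # Swap first- for second-person words so the echo sounds like a reply.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, utterance.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."

print(respond("I feel nobody understands my drafting"))
# -> Why do you feel nobody understands your drafting?
```

The machine contributes nothing of substance: every meaningful word in the reply came in through the prompt.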
Still, it proved surprisingly addictive — even to those who knew how it worked. Weizenbaum was famously shocked at how easily people, including his own secretary, were prepared to believe ELIZA “understood” them and contributed meaningfully to the interaction.
This is, of course, how any kind of mind-reading works: with the right kind of clever questions the conjurer extracts from the subject herself all the information needed to create the illusion.
LLMs work the same way. Like all good conjuring tricks, generative AI relies on misdirection: its singular genius is that it lets us misdirect ourselves, into wilfully suspending disbelief, never noticing who is doing the creative heavy lifting needed to turn its output into magic: we are.
Yet again we are suborning ourselves to the machines. We need to kick this habit.
The irony is that we are neuro-linguistically programming ourselves to be wowed by LLMs. By writing prompts, we create our own expectation of what we will see, and when the pattern-matching machine produces something roughly like it, we use our own imaginations to frame and filter and sand down rough edges to see what we want to see, to render that output as commensurate as possible to our original instructions.
We say, “fetch me a tennis racquet”, and when the machine comes back with something resembling a lacrosse stick, we are far more impressed than we would be by a human doing the same thing. We would think the human useless. Dim. But with generative AI we don’t, at first, notice we are not getting what we asked for. We might think, “oh, that will do”, or, perhaps, “try again, but make the basket bigger, the handle shorter, and tighten up the net.”
This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture. First impressions can be stunning, but a closer look reveals an absurdist symphony. Given how large language models work, this should not surprise us. They are all trees, no wood.
It is just as true of text prompts. AI-generated text looks fabulous at first blush; inspect it more closely and the photorealistic resonance emerges out of logical cul-de-sacs and two-footed hacks. That they so beguile us is because much of what humans write comprises logical cul-de-sacs and two-footed hacks too, but that is another story.
In either case, that emergent creative act — the thing that renders King Lear an ageless cultural landmark, but Dracula: The Undead forgettable pap[2] — doesn’t subsist in the text. It is between our ears. We are oddly willing to cede intellectual eminence to a machine.
But the novelty soon wears off: as we persevere, we begin to see the magician’s wires. There are familiar tropes; we see how the model goes about what it does. It has its tics. It becomes progressively less surprising, eventually settling into an entropic quotidian. It loses lustre.
As does the appeal of generating targeted, specific work product iteratively using a random word generator. The first couple of passes are great: they go from zero to 0.5. But the marginal improvement in each round diminishes, and the machine tends asymptotically towards an upper bound of what you had in mind, which is about 75% of it.
As generative AI evolves, that threshold may move towards 100% — though there are some indications it may instead get worse: see below — but it will not get there, and as each round becomes increasingly time-consuming and fruitless, human enthusiasm will wane long before the singularity.
This is also why, when we are targeting a specific outcome, the output is frequently, frustratingly, not quite what we had in mind.
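A toy illustration, with invented numbers: suppose each editing pass closes a fixed fraction of the remaining gap between the draft and a ceiling of what you actually wanted. The first pass is dramatic; thereafter the returns shrink geometrically, and the ceiling is never reached.

```python
# Toy model of diminishing returns; all numbers are invented for illustration.
Q = 0.75   # assumed ceiling: ~75% of what you actually had in mind
r = 0.5    # assumed fraction of the remaining gap each pass recovers

quality = 0.0
for n in range(1, 7):
    quality += r * (Q - quality)   # each pass closes half the remaining gap
    print(f"pass {n}: {quality:.3f}")

# pass 1: 0.375   <- the big early win
# ...
# pass 6: 0.738   <- grinding asymptotically towards 0.75, never past it
```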
We refine and elaborate our query; we learn how better to engineer prompts, and this becomes the skill, rather than the process of pattern matching that responds to it. So ours is the skill going in, and ours is the skill narratising the output. That is true of all literary discourse: just as much of the “world-creation” goes on between the page and the reader as it does between writer and text.
Letting the reader do the imaginative work to fill in the holes is fine when it comes to literature — better than fine, in fact: it characterises the best art. But it is not how legal discourse works.
The last thing a legal drafter wants is to cede control of the narrative to the reader. Rather, a draft seeks to squash exactly the ambiguity that the metaphors of good literature require.
Legal drafting is not literature in any sense: it reduces the reader to a machine: it hard-codes unambiguous meaning. It leaves as little room as possible for interpretation. This is why legalese is so laboured. It is designed to remove all doubt, ambiguity and fun from reading. It renders it mechanical, precise and reliable. It bestows certainty at the expense of possibility, whatever the cost, and to hell with literary style and elegance.
If law is literature, then it is unique in two ways: it is the only genre to favour certainty over possibility, and the only one in which we must still grant sole jurisdiction to authorial intent. But the problem is, a large language model has no authorial intent. We should regard legal drafting as closer to computer code: a form of symbol processing where the meaning resides wholly within, and is fully limited by, the text.
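The analogy can be made literal. Here is a sketch, built around a wholly invented notice-period clause, of what a contract term would look like if it really were code: one input, one output, and no room at all for the reader’s imagination.

```python
from datetime import date, timedelta

# A wholly invented clause, for illustration: "either party may terminate
# on thirty days' written notice". Rendered as code, it admits exactly one
# reading: the meaning resides within, and is limited by, the text.
NOTICE_PERIOD_DAYS = 30

def termination_effective(notice_given: date) -> date:
    # No metaphor, no ambiguity, no authorial intent to divine.
    return notice_given + timedelta(days=NOTICE_PERIOD_DAYS)

print(termination_effective(date(2023, 7, 24)))  # -> 2023-08-23
```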
Meet the new boss —
We don’t doubt that the LLM is coming, nor that the legal industry will find a use for it: we doubt only that there is a useful, sustained use for it. It feels more like a parlour trick: surprising at first, diverting after a while, but then the novelty wears off, and the appeal of persevering with what is basically a gabby but unfocussed child pales.
The traditional legal model faces existential challenges, for sure, but they are not presented by, and will not be addressed by, random word generators.
Coda: is ChatGPT getting worse?
In other news, scientists are concerned that ChatGPT might be getting worse. Studies indicate that its accuracy at tasks requiring computational precision, like playing noughts and crosses or calculating prime numbers, is rapidly diminishing.
Perhaps ChatGPT is getting bored, or might it have something to do with the corpus increasingly comprising nonsense text generated on the hoof by some random using ChatGPT?