Large Learning Model
LLM
/ɛl ɛl ɛm/ (also “large language model”) (n.)
Once upon a time, an LLM was a “Master of Laws”: the postgraduate mark of the sensei in the society of legal service providers — either of that, or of the indolence of one not prepared to strike out and put what she has learned into practice — but still: it spoke to perseverance, depth, comprehension and mastery, however pigeon-hearted its motivation.
If the thoughtleaderati are to be believed, now all one needs for that kind of expertise is a different kind of “LLM”: a “large language model”. Artificial intelligence rendered by a pattern-recognising, parallel-processing chatbot.
The legal profession is to ChatGPT, we hear, as poor old Chrissie Watkins was to Jaws.
But there have been contumelious prophecies of its demise before. In the manner of a blindfolded dartsman, Professor Richard Susskind OBE has been tossing them around for decades. Just by random chance, you would expect one to hit the wall at some point.
Is this big law’s Waterloo? Will ChatGPT do for our learned friends what the meteor did to the dinosaurs?
Or will the lawyers, like cockroaches, survive? Might they even turn this to their advantage?
Cui bono?
Who benefits, primarily, from this emergent technology? Experience should tell us that the first — and often the last — to benefit from legal productivity tools are the lawyers. Should we expect this time to be different?
Now, it is a truism that she who has a tool uses it, firstly, to improve her own lot. A commercial lawyer’s “lot” is predicated on two things: (1) time taken, and (2) ineffability: the sense that what she does “passeth all muggle understanding”.
It is a happy accident that, generally, (2) begets (1): the more ineffable something is, the longer it takes, and the harder it is to work with. The longer it takes, the more you can charge.
Commercial legal contracts are like that. Long, and once they have calcified into templates, fiddly. For lawyers, this is a capital state of affairs. It is why no commercial law firm on the planet really cares for plain English. Oh, they all say, they do, of course — but come on.
This is, in itself, a neat “simplification defeat device”: if you make a contract template sufficiently convoluted, the one-off cost of simplifying it so vastly outweighs the cost of just “tweaking” and living with it that few clients will ever take that first step to simplify. Even though the ongoing costs of not rationalising dwarf the one-off costs of doing so, the long-term savings always lie on the far side of that hump.
And bear in mind it will be the lawyers who deploy LLMs as a tool, not their clients. Why? Because of that ineffability. An LLM is a pattern-matching device. It understands nothing. It cannot provide unmediated legal advice. It can only ever be a “back-breaker”: the “last mile” needs a human who knows what she is doing, understands the context and complicated human psychology at play in the cauldron of negotiation. An LLM can draw pretty, impressive-at-a-distance doodles, but it cannot do that. Nor can it write legal opinions — well, not meaningful ones — and nor, unmediated by a law firm, does it have the insurance policy or deep, suable pockets for which a client is paying when it seeks legal advice in the first place.
An LLM can only be deployed, that is to say, by someone with skin in the game, who is prepared to put herself in jeopardy by accepting the assignment, which jeopardy she defends by the simple expedient of knowing what she is doing and checking her LLM’s output.
That someone will be a lawyer.
Now such a “last mile” lawyer could use an LLM to simplify documents, accelerate research and break legal problems down to their essences, thereby reducing the cost, and increasing the value, of her service to her clients. And, sure: in theory, she could give all this value up to her clients for nothing.
But she could, just as easily, use an LLM to further complicate the “work product”: to overengineer, to convolute, to invent options and cover contingencies of minimal utility: she could set her tireless symbol-processing engine to the task of injecting infinitesimal detail: she could amp up the ineffability to a level beyond a normal human’s patience.
Which of these, realistically, should we expect a lawyer to do? Simplify, or complicate? Sacrifice time and ineffability, for the better comprehension of the unspecialised world? Or plough the energy this magical new tool bestows into generating more convolution and ineffability, racking up more recorded time, and building up the bulwark against the muggles?
She would do the latter with only the best intentions, of course; this is not lily-gilding so much as a noble outreach toward perfection: using the arsenal at her disposal to reach ever closer to the Platonic form.
Cynical, or just realistic? Foretellers of legal Armageddon must explain away some difficult facts: that the commercial-legal industrial complex has stubbornly resisted all attempts at simplification and disintermediation for a generation, notwithstanding the thought-leadership, regulatory prompting, appeals to logic and 40 years of enabling technology — Microsoft Word, mainly — which the world’s lawyers could have used, powerfully, to simplify and minimise the legal work product.
Not only did they not do that, they used their tools to make everything more complicated. Boilerplate blossomed. Templates flowered. Even trivial contracts acquired wording dealing with counterparts, governing the form of amendments and excluding third party rights that weren’t there in the first place.[1]
This is a perfect job for ChatGPT. Why should a difference engine designed to generate plausible-sounding but meaningless text be used to do anything different?
You can see the effect it is having on legal work product. NDAs grow ever longer, increasingly riven with the same generic ornamentations that usually range from harmless to misconceived but which are now so prevalent — they recur as the LLMs hone their model — as to become hard for the meatware to resist.
The meatware, remember, has limited patience with NDAs, understanding in a way an algorithm cannot how much of a pantomime they are. Algorithms, on the other hand, have unlimited patience and boundless energy. If negotiation comes down to who passes out first, we should bear in mind that LLMs don’t pass out.
Whose client? Oh, right: she’s a lawyer, too.
“But, JC, come on. Be realistic. It is dog-eat-dog out there. Any lawyer who keeps the bounty of the LLM from her clients will soon have her lunch eaten by others who won’t. You cannot fight the invisible hand. We are in a race to the bottom.”
But are we?
Ignoring how impervious to the invisible hand all other recent technologies have been, remember who the clients are. Consumers of high-end commercial legal services are not, generally, the permanently bamboozled muggles of common myth. Most are themselves lawyers, inhabiting weaponised legal departments mainly comprising veteran deal lawyers. These are people who also take pride in their ability to work with difficult, complicated things. This is how they prove their worth to their employers.
Lawyers and their clients, that is to say, have a common interest in convolution for its own sake. They are the jazz aficionados of text; cinéastes of syntax. They expect overwrought contracts: nothing says “prudent management of existential risk” like eighty pages of 10pt Times New Roman.
Plain English is not for serious people.
Conservative motivation
Nor should we underestimate the overwhelming power of the lawyer’s intuition that what has gone before is sacrosanct.
Lawyers are the last great positivists: they understand instinctively that what has already been laid down by someone else — “posited” — is safer than anything new that they might themselves contribute. The common law with its doctrine of precedent, after all, is to all intents and purposes a divine commandment: in times of doubt, to do what has been done before.
The more authoritative the source, the more sacred it will be.
Thus, lawyers will assiduously “track the wording of legislation” to ensure their drafting matches it with utmost fidelity, notwithstanding any private reservations they may have about how it was drafted. The more ambiguous, or just difficult, the source text, the more assiduously should we expect lawyers to replicate it, because they fear it. They fear the limits of their own mastery.
This “positivism-through-fear” extends with equal force to established market precedents. It doesn’t matter how manifestly unfit for purpose a precedent is: the resistance to change will be strong.
Literary theory, legal construction and LLMs
“What an astonishing thing a book is. It’s a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it, and you’re inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you.
Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs. Books break the shackles of time. A book is proof that humans are capable of working magic.”
— Carl Sagan, Cosmos
“I think you underestimate the power of reading, Professor Sagan.”
— JC
There’s a hole in my model, ELIZA
Fittingly, the first chatbot was designed as a parlour trick. In 1966, Joseph Weizenbaum, a computer scientist at MIT, created the programme “ELIZA” to explore communication between humans and machines. ELIZA used pattern matching and substitution techniques to generate realistic conversations. You can try it out at https://web.njit.edu/~ronkowit/eliza.html.
By today’s standards, ELIZA was rudimentary, simply regurgitating whatever was typed into it, reformatted as an open-ended statement or question, thereby inviting further input.
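To see quite how little machinery the trick needs, here is a minimal sketch of the technique in Python: keyword-matched patterns, canned reassembly templates and pronoun “reflection”. The rule set is an illustrative assumption, a few toy rules rather than Weizenbaum’s actual DOCTOR script.

    import re

    # Pronoun "reflection": swap first- and second-person words so an
    # echo of the user's input reads as a reply ("my boss" -> "your boss").
    REFLECTIONS = {"i": "you", "me": "you", "my": "your",
                   "am": "are", "you": "I", "your": "my"}

    # Keyword-matched decomposition rules, each paired with a canned
    # reassembly template. The catch-all keeps the rally going.
    RULES = [
        (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
        (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
        (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
        (re.compile(r".*", re.I), "Please go on."),
    ]

    def reflect(fragment):
        return " ".join(REFLECTIONS.get(word.lower(), word)
                        for word in fragment.split())

    def respond(statement):
        statement = statement.strip().rstrip(".!?")
        for pattern, template in RULES:
            match = pattern.match(statement)
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I need a holiday"))  # Why do you need a holiday?
    print(respond("My boss hates me"))  # Tell me more about your boss hates you.

The second answer is ungrammatical, as the real ELIZA’s often were; the user forgives it, supplies the sense herself and keeps typing.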
Even though ELIZA was little more than a basic “keepy uppy” machine, it proved surprisingly addictive. Weizenbaum was famously shocked by how easily people were prepared to believe ELIZA “understood” them and contributed meaningfully to the interaction. When trying it out, Weizenbaum’s own secretary, who had watched him build the programme for months and knew how it worked, shooed him out of the room so she could have privacy with the machine.
This is, of course, how all “mind-reading” works: by asking the right kinds of question, the conjurer extracts from the subject all the information needed to create the illusion of telepathy.
Here is the thing: LLMs work the same way.
LLMs as conjuring tricks
POLONIUS: What do you read, my lord?
HAMLET: Words, words, words.
POLONIUS: What is the matter, my lord?
HAMLET: Between who?
POLONIUS: I mean the matter that you read, my lord.
— Hamlet, II, ii
Like all good conjuring tricks, generative AI relies on misdirection: its singular genius is that it lets us misdirect ourselves. We wilfully suspend disbelief, never noticing who is creatively re-rendering machine-made screed as magic. We are. We are neuro-linguistically programming ourselves to be wowed by a mechanical duck, when the clever part is really happening inside our own heads.
By writing “prompts”, we create an expectation of what we will see. When the pattern-matching machine produces something roughly like that, we use our own imaginations to backfill, frame, filter, correct, boost, render, sharpen and polish the output into what we wanted to see. We construe the output to conform to our original instructions.
When we say, “fetch me a tennis racquet”, and the machine comes back with something more like a lacrosse stick, we are far more impressed than we would be by a human who did the same thing: we would think such a human a bit dim. But with generative AI we don’t, at first, even notice we are not getting what we asked for. We might think, “oh, nice try!” or “well, that will do,” or perhaps, “ok, computer: try again, but make the basket bigger, the handle shorter, and tighten up the net.” We can iterate this way until we have what we want — though note all the refining intelligence is coming from us — or we could just Google for a conventional photo of a tennis racquet.
Yet an alarming number of people seem to want to believe that human ingenuity pales against a magic box. This involves quite a lot of motivated forgetfulness. For example, it was claimed of Google Glass that it could read facial expressions to detect underlying emotional states:
The app detects emotion in the faces it sees using an algorithm generated by a machine learning system. This AI system was trained on large datasets of faces to decode the emotions from facial expressions.[2]
Sounds amazing, huh? But humans don’t need AI goggles to read each other’s emotions. They do it naturally. They are really good at it — far better than any algorithm — already. Even dogs can read facial expressions.[3]
Why are we aspiring to have machines do badly what we already do effortlessly? Who benefits from that?
So it is with generative AI. AI image generation struggles with hands, eyes and logical three-dimensional architecture. The same is true of generated text: on close inspection we see the countless logical cul-de-sacs, bad guesses and non sequiturs from which a clever story miraculously emerges. (To be sure, many human authors write in logical cul-de-sacs, bad guesses and non sequiturs, but that is another story.) Again, where is the magic in this process? Inside the meatware.
Either way, as we persevere, we see more and more of the magician’s wires. We sense how the model works: its familiar tropes, tics and consistent ways of doing things, which are never quite what you had in mind. The first couple of passes are great: they get from zero to 50% of what we wanted. But the marginal improvement in each following round diminishes. The piquant surprise at what the machine can produce dampens with each go-round, tending asymptotically towards an upper capability, well short of 100%, in “producing what you had in mind”.
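A toy model makes the shape of that curve plain. The numbers are illustrative assumptions only: suppose the machine has a ceiling of 80% of “what you had in mind”, and each round of prompting closes half of the remaining gap.

    # Illustrative assumption: quality converges on a ceiling below 1.0,
    # with big early jumps and ever-smaller gains thereafter.
    CEILING = 0.8  # assumed upper capability, well short of 100%
    RATE = 0.5     # assumed fraction of the remaining gap closed per round
    quality = 0.0
    for round_no in range(1, 7):
        quality += RATE * (CEILING - quality)
        print(round_no, round(quality, 3))
    # Prints roughly: 0.4, 0.6, 0.7, 0.75, 0.775, 0.787 ...
    # asymptoting at 0.8 and never reaching 1.0.

The first pass or two deliver most of the value; everything after that is polishing towards a limit the machine cannot cross.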
Eventually we settle for an entropic, vaguely dissatisfying quotidian. “Okay, it isn’t quite what I had in mind, but in the interests of time it will do.”
Again, we subordinate our own interests to the machine’s. We accept a mediocre job we would not take from a human. Should we not be more exacting? Should we not hold an LLM at least to the standard of a human?
Ours is the skill going in, and ours is the skill construing the output. The machine does the boring bit: the bit machines have always done: applying uninspired strength, speed, reliability and economy.
Not ingenuity.
LLMs and literary theory
HORATIO: O day and night, but this is wondrous strange.
HAMLET: And therefore as a stranger give it welcome.
There are more things in heaven and earth, Horatio,
Than are dreamt of in your philosophy.
— Hamlet, I, v
Now, in all kinds of literature, bar one, “reading” is where the real magic happens. Construal. It is the emergent creative act and community consensus that renders Hamlet a timeless cultural leviathan and Dracula: The Undead forgettable pap.[4]
A literary work may start with the text, but it stays there barely a moment. The “meaning” of a literary work is necessarily personal to the reader: it lives between the reader’s ears, and within the cultural milieu that interconnects the reading population over the generations — all as construed by the apparatus between the reader’s ears.
Call him post-modern — go on, do — but the JC doesn’t hold with Carl Sagan’s idea, above, that a book teleports its author “inside our heads”. That is to equate construal with symbol-processing, which it absolutely isn’t; and that metaphor — that the brain is a glorified Turing machine — gravely underestimates the human brain when in construction mode.
This, by the way, is not in any way to diminish Shakespeare’s towering genius, but rather to observe that, however impossibly brilliant it is, it is swamped by the flood of exposition, analysis, interpretation, re-rendition and performance that has gone on since he wrote it, all as narratised and framed by the apparatus between the reader’s ears. The quote above — as Hamlet deals with a ghost — is apposite because — well, because thinking makes it so.[5]
“Construal” — interpreting and assigning meaning to — and “construction” — building — have the same etymology. They are interchangeable in this sense: over time that cultural milieu takes the received corpus of literature and, literally, constructs it, into edifices its authors can scarce have imagined.
It is because a reader is not a simple “symbol processor”, that she does more than merely decrypt text to reveal a one-to-one assembly of the author’s intention in her own head, that Hamlet can speak, still, to the human dilemmas of the twenty-first century.
On the lack of metaphors in foxholes
There is one kind of “literature” that is like a computer programme: where the last thing the writer wants is for the reader to use her imagination, or to bring her cultural baggage in from the hall and use it to “construct” a meaning. In this singular domain, clarity of the writer’s intention is paramount: the only priority is divining what those who wrote the text meant by it.
This is, of course, legal literature.
Rather than ceding interpretative control to the reader, a legal drafter seeks to squash any chance for improvisation, to pre-emptively stomp on all ambiguity. Just as there are no atheists in foxholes, there are no metaphors in a Trust Deed.
Legal drafting seeks to be as “finite” as it can be. It strives, as far as it can, to reduce readers to mere symbol-processing machines; to allow them only the job of extracting the author’s incontrovertible meaning. But, in a natural language constructed out of dead metaphors, this is very, very hard to do: that such a comfortable living is to be made conducting commercial litigation shows as much.
It is one reason why legalese is so laboured. To render text as mechanical, precise and reliable as it can be, lawyers are compelled to chase down all blind alleys, previsualise all phantoms and prescribe outcomes for all logical possibilities. Where the best “literary” language prefers possibility to certainty, legal language sanctifies certainty above all else, at the cost of possibility. To hell with literary style and elegance.
Where literary language is, in James Carse’s sense, infinite, legal language is finite.
LLMs okay for literature: bad for legal drafting
Now: the punchline. Given how integral the reader and her cultural baggage are to the creative act in normal literature, we can see how, in that domain, a large language model, freely spitting out text ripe with interpretative possibilities — text that positively begs for someone to “construct” it, just to make sense of it — is a feasible model.
The interpretative task is paramount: to move from a model where most of the creative work is done by the reader to one where all of it is, is no great step. We can imagine doing without a human writer altogether. The Writers Guild of America can certainly imagine that. Who cares what the text is meant to say, as long as it is coherent enough for an enterprising reader to make something out of it?
But that does not work for legal language. Legal language is code: it must say exactly what the parties require: nothing more or less, and it must do it in a way that leaves nothing open to later creative interpretation. Legal drafting is a form of symbol processing, where the meaning resides wholly within and is fully limited by the text.
A random-word generator that creates plausible-sounding legal text, but whose singular meaning is incoherent, or not exactly reflective of the parties’ intent, is not much use. If skilled lawyers must sit with it while it iterates, the question arises: why persist with it at all? Just have the lawyers write the stuff in the first place.
We have seen it suggested that one might invert the process instead, letting humans do the first cut, then pressing the machine into action to critique the human drafts, to find potential errors and omissions.
But this is to get the division of labour exactly backwards, using expensive, context-sensitive meatware to do the legwork and a dumb machine to provide the “magic”. And besides, there is a design flaw in any legal process which supposes that the risk in a legal contract is distributed evenly throughout its content, and that therefore the legal proposition is one of handling volume.
Boilerplate is boilerplate for a reason. It is pinned down; tried, tested and done: it takes those things that should go without saying out of the equation. There is nothing to be gained from having a large language model drafting boilerplate from scratch each time — especially if, randomly, it changes things: boilerplate is a utility, not an asset. It is the part of the deal the lawyers already spend the least time on. The parts that matter — that they do spend time on — are typically discrete, manageable parts of drafting. It is hard to see how an LLM really assists in the process.
We come back to that question of basic human self-respect. We need to stop subordinating ourselves to machines. We should not compromise just to optimise for machine processing, to make it easier for machines to manage what we do.
Meet the new boss —
JC does not doubt that the LLM is coming, nor that the legal industry will find a use for it: only that the use will be a useful, sustained one. It feels more like a parlour trick: surprising at first, diverting for a while, but then the novelty wears off, and the appeal of persevering with what is basically a gabby but unfocussed child pales.
The traditional legal model faces existential challenges, for sure, but they are not presented by, and will not be addressed by, random word generators.
False memory syndrome
Large learning models can already operate in “research assistant” mode. It seems less flashy, but — if we can trust it — this is a compelling use case that will remain once our inflated expectations have collapsed into the trough of despair. We could ask Bing AI to track down a long-lost childhood story.
Bing, do you know a children’s story involving a fox or a wolf or something being chased by a pack of hunting dogs and it hides under a rock. The hiding place is too small, and it has to poke one part of its body out. It has an argument with each of its body parts to see which is the most deserving of being saved and concludes its tail is the least deserving, so it sticks its tail out. The hunting dogs arrive and grab it by the tail and catch it. Moral of the story is something like you are only as good as your weakest link.
At first, Bing shrugged its shoulders and said, “sorry”. But when we persevered with a few more details: success!
I found the story you were looking for! It’s called “The Wolf and His Tail”. In this story, a wolf is being chased by a pack of hunting dogs and hides under a rock. The hiding place is too small, and it has to poke one part of its body out. It has an argument with each of its body parts to see which is the most deserving of being saved and concludes that its tail is the least deserving, so it sticks its tail out. The hunting dogs arrive and grab it by the tail and catch it. The moral of the story is that you are only as good as your weakest link.
Magnificent! We can ask who wrote the story:
“The Wolf and His Tail” is a children’s story that was written by Barbara Juster Esbensen. It was first published in 1972.
There is only one problem: this is not true. Barbara Juster Esbensen was indeed a children’s author, but she wrote no such book. It is as if Bing has been badgered into false memory syndrome. Perhaps we should see what Google’s Bard has to say. We input the same prompt. This is the result:
The story you’re describing sounds like the Aesop fable “The Fox and the Hunter.” In this fable, a fox is being chased by a pack of hunters. The fox finds a narrow crevice in a rock and hides inside. However, the crevice is too small, and the fox’s tail is sticking out. The hunters arrive and see the fox’s tail, and they grab it and pull the fox out of the crevice. The fox is then killed by the hunters.
The moral of the story is that a chain is only as strong as its weakest link. This means that if any part of a system is weak, the entire system can fail. The fox’s tail was its weakest link, and as a result, the fox was caught by the hunters.
We might be a bit embarrassed to have forgotten about Aesop, until we go looking for the fable and discover that it doesn’t exist either. Bard just made this up.
Coda: is ChatGPT getting worse?
In other news, scientists are concerned that ChatGPT might be getting worse. Studies indicate that its accuracy at tasks requiring computational precision, like playing noughts and crosses or calculating prime numbers, is rapidly diminishing.
Perhaps ChatGPT is getting bored, or might it have something to do with the corpus increasingly comprising nonsense text generated on the hoof by some random using ChatGPT?
1. Contracts don’t confer rights on third parties accidentally. Where it is deliberate, it is obtuse to exclude them.
2. As reported on CBC: https://www.cbc.ca/radio/quirks/march-30-erasing-memories-biggest-t-rex-and-the-smell-of-parkinson-s-and-more-1.5075050/google-glasses-could-help-kids-with-autism-read-emotional-cues-in-people-s-faces-1.5075055
3. “Dogs know what we are feeling”, RSPCA: https://www.rspca.org.uk/-/blog_how_dogs_know_what_were_feeling
4. Maybe not that forgettable, come to think of it: it has stayed with me 15 years, after all.
5. Hamlet, II, ii.