When we say, “fetch me a tennis racquet”, and the machine comes back with something more like a lacrosse stick, we are far more impressed than we would be had a human done the same thing. We would think the human a bit dim. But with [[generative AI]] we don’t, at first, even notice we are not getting what we asked for. We might think, “oh, that will do,” or perhaps, “ok, computer: try again, but make the basket bigger, the handle shorter, and tighten up the net.” We can iterate this way until we have what we want — or we could just use a conventional photo of a tennis racquet.


AI image generation famously struggles with hands, eyes and logically possible three-dimensional architecture. First impressions can be stunning, but a second look reveals an absurdist symphony. Given how large learning models work, this should not surprise us: they are all trees, no wood. It is just as true of text prompts: on close inspection we can see the countless minute logical ''cul-de-sacs'' and two-footed hacks. (Many humans write in logical ''cul-de-sacs'' and two-footed hacks, but that is another story.)


Either way, the novelty soon palls: as we persevere we begin to see the magician’s wires. We get a sense of how the model goes about what it does. It has its familiar tropes, tics and persistent ways of doing things which aren’t quite what you have in mind. The piquant surprise at what it produces dampens at each go-round, eventually settling into an [[Entropy|entropic]] and vaguely dissatisfying quotidian.


In this way the appeal of iterating a targeted work product with a random pattern-matcher loses its lustre. The first couple of passes are great: they get from zero to 0.5. But the marginal improvement in each following round diminishes, as the machine reaches asymptotically towards its upper capability in producing what you had in mind, which we estimate, unscientifically, as about 75% of it.


Now, as [[generative AI]] improves towards 100% — assuming it does improve: there are some indications it may not; see below — that threshold may move, but it will never get to 100%. In the meantime, as each successive round takes more time and bears less fruit, mortal enthusiasm and patience with the LLM will long since have waned: well before the [[Singularity]] arrives.
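To make the diminishing-returns arithmetic concrete, here is a minimal, purely illustrative sketch (not anything the model itself computes), assuming a ceiling of 75% per the unscientific estimate above, and assuming each round of prompting closes half of the remaining gap to that ceiling:

<syntaxhighlight lang="python">
# Toy model of iterating with an LLM: each round closes an assumed fixed
# fraction of the remaining gap to an assumed ceiling, so early passes
# leap ahead and later ones barely move. All numbers are assumptions.

CEILING = 0.75     # hypothetical upper bound on "what you had in mind"
CLOSE_RATE = 0.5   # hypothetical share of the remaining gap closed per round

quality = 0.0
for round_no in range(1, 9):
    gain = (CEILING - quality) * CLOSE_RATE
    quality += gain
    print(f"round {round_no}: quality {quality:.3f} (gain {gain:.3f})")

# Two passes get you to roughly 0.56; by round 8 the gain is under 0.003.
# The output creeps towards 0.75 and never reaches it, let alone 1.0.
</syntaxhighlight>

The exact numbers are beside the point; the shape of the curve is what saps the mortal patience described above.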


And many of the improvements we do see will largely be in the [[meatware]]: we refine and elaborate our queries, we learn how better to frame them, and “prompt-engineering” becomes the skill, rather than the dumb, parallel pattern-matching process that responds to it. Ours is the skill going in, and ours is the skill construing the output. What the machine does is the boring bit.


In all kinds of literature ''bar one'', construal is where the real magic happens: it is the [[Emergent|emergent]] creative act that renders ''King Lear'' a timeless cultural leviathan and {{br|Dracula: The Undead}} forgettable pap.<ref>Maybe not ''that'' forgettable, come to think of it: it has stayed with me 15 years, after all.</ref> A literary work may start with the text, but it barely stays there for a moment. The “meaning” of literature is personal: it lives between our ears, and within the cultural milieu that interconnects the reading population.


“Construal” and “construction” are interchangeable in this sense: over time that cultural milieu takes the received corpus of literature and, literally, ''constructs'' it into edifices its authors can scarcely have imagined. ''Hamlet'' speaks, still, to the social and human dilemmas of the twenty-first century in ways Shakespeare cannot possibly have contemplated.<ref>A bit ironic that Google should call its chatbot “Bard”, of all things.</ref>


Now there is one kind of “literature” where the last thing the writer wants is for the reader to use her imagination to fill in holes in the meaning; where clarity of authorial intention is paramount; where communicating and understanding ''purpose'' is the sole priority: ''legal'' literature.


The ''last'' thing a legal drafter wants is to cede interpretative control to the reader. Rather, she seeks to squash all opportunities presented by creative ambiguity. Just as there are no atheists in foxholes, [[there are no metaphors in a trust deed|there are no metaphors in a Trust Deed]].  


Legal drafting seeks to do to readers what code does to computer hardware: it reduces the reader to a machine, a mere symbol processor. It leaves as little room as possible for interpretation.  


This is one reason why [[legalese]] tends to be so laboured. It is designed to chase down and prescribe outcomes for all logical possibilities, remove all ambiguity and render the text mechanical, precise and reliable. Where normal literature favours possibility over certainty, legal language bestows [[certainty]] at the cost of [[possibility]], and to hell with literary style and elegance.
 
Legal language is ''finite''. Literature is ''[[Finite and Infinite Games|infinite]]''.


If law is literature then it is unique in that it is the only one to favour certainty over possibility; unique in that it is the one in which we must still grant sole jurisdiction to authorial intent. The problem is, a large language model has no authorial intent.