Template:M intro work Large Learning Model

When we say, “fetch me a tennis racquet”, and the machine comes back with something resembling a lacrosse stick, we are far more impressed than we would be by a human doing the same thing. We would think the human useless. Dim. But with [[generative AI]] we don’t, at first, notice we are not getting what we asked for. We might think, “oh, that will do”, or, perhaps, “try again, but make the basket bigger, the handle shorter, and tighten up the net.”


This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture. First impressions can be stunning, but a closer look reveals an absurdist symphony. Given how large learning models work, this should not surprise us. They are all trees, no wood.


It is just as true of text prompts. AI-generated text looks fabulous at first blush; inspect it more closely and the photorealistic resonance [[emerges]] out of logical cul-de-sacs and two-footed hacks. They beguile us because much of what humans write comprises logical cul-de-sacs and two-footed hacks, but that is another story.
 
In either case, that emergent creative act — the thing that renders ''King Lear'' an ageless cultural landmark, but {{br|Dracula: The Undead}} forgettable pap<ref>Maybe not ''that'' forgettable, come to think of it: it has stayed with me 15 years, after all.</ref> — doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.
 
But the novelty soon wears off: as we persevere, we begin to see the magician’s wires. There are familiar tropes; we see how the model goes about what it does. It has its tics. It becomes progressively less surprising, eventually settling into an entropic quotidian. It loses lustre.
 
As does the appeal of generating targeted, specific work product iteratively using a random word generator. The first couple of passes are great: they go from zero to 0.5. But the marginal improvement in each round diminishes, and the machine tends asymptotically towards an upper bound of what you had in mind, which is about 75% of it.
 
As generative AI evolves, that threshold may move towards 100% — there are some indications it may get worse, see below — but it will not get there, and as each round becomes increasingly time-consuming and fruitless, human enthusiasm will wane long before the [[singularity]].
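To put that diminishing return crudely (an illustrative sketch only, with invented symbols, not anything measured): suppose each pass closes a fixed fraction <math>r</math> of the remaining gap to some ceiling <math>Q</math>, the roughly 75 per cent of what you actually had in mind. After <math>n</math> passes the draft sits at <math display="block">q_n = Q\left(1 - (1-r)^n\right),</math> which leaps at first, then crawls, and approaches <math>Q</math> but never your 100 per cent.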


This is also why, when we are targeting a specific outcome, the result is frequently, frustratingly, not quite what we had in mind.
We refine and elaborate our query; we learn how to better engineer prompts, and this becomes the skill, rather than the process of pattern-matching to respond to it. So ours is the skill going in, and ours is the skill narratising the output. That is true of all literary discourse: just as much of the “world-creation” goes on between the page and the reader as it does between writer and text.


Letting the reader do the imaginative work to fill in the holes is fine when it comes to literature — better than fine, in fact: it characterises the best art. But it is not how ''legal'' discourse works.
 
The ''last'' thing a legal drafter wants is to cede control of the narrative to the reader. Rather, a draft seeks to squash exactly the ambiguity that the metaphors of good literature require.  


Legal drafting is not literature in any sense: it reduces the reader to a machine. It hard-codes unambiguous meaning and leaves as little room as possible for interpretation. This is why [[legalese]] is so laboured. It is designed to take all doubt, ambiguity and ''fun'' out of the reading process, rendering it mechanical, precise and reliable. It bestows certainty regardless of the cost, and to hell with literary style and elegance.


We should regard legal drafting as closer to computer code: a form of symbol processing where the meaning resides in, and is limited by, the format of the text. That is to say, there is no room for imaginative world-creation.
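By way of illustration only (the clause, the names and the three-day figure below are all invented for the example, not taken from any real contract), this is what meaning hard-coded into the form of the text looks like when the reader really is a machine:

<syntaxhighlight lang="python">
from datetime import date, timedelta
from typing import Optional

# Hypothetical clause: "Failure to pay becomes an Event of Default if not
# remedied within three days of notice." The meaning resides in, and is
# limited by, the form of the text: same inputs, same answer, every time.
GRACE_PERIOD_DAYS = 3  # illustrative figure only


def is_event_of_default(notice_date: date,
                        payment_date: Optional[date],
                        today: date) -> bool:
    """True once the grace period has expired without a cure."""
    deadline = notice_date + timedelta(days=GRACE_PERIOD_DAYS)
    if payment_date is not None and payment_date <= deadline:
        return False               # cured in time: no default
    return today > deadline        # past the deadline and still unpaid


# The reader-as-machine gets no interpretative latitude.
print(is_event_of_default(date(2023, 6, 1), None, date(2023, 6, 7)))  # True
</syntaxhighlight>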