Template:M intro work Large Learning Model
Amwelladmin (talk | contribs)
When we say, “fetch me a tennis racquet”, and the machine comes back with something more like a lacrosse stick, we are far more impressed than we would be had a human done the same thing. We would think the human a bit dim. But with [[generative AI]] we don’t, at first, even notice we are not getting what we asked for. We might think, “oh, that will do,” or perhaps, “ok, computer: try again, but make the basket bigger, the handle shorter, and tighten up the net.” We can iterate this way until we have what we want — or we could just use a conventional photo of a tennis racquet.
AI image generation famously struggles with hands, eyes and logical three-dimensional architecture. First impressions can be stunning, but a second look reveals an absurdist symphony. The same is true of text prompts: on close inspection we can see the countless minute logical ''cul-de-sacs'' and two-footed hacks. (Many humans write in logical ''cul-de-sacs'' and two-footed hacks, but that is another story.)
Either way, the novelty soon palls: as we persevere we begin to see the magician’s wires. We get a sense of how the model goes about what it does. It has its familiar tropes and tics and persistent ways of doing things which aren’t quite what you have in mind. The piquant surprise at what it produces dampens at each go-round, eventually settling into an [[Entropy|entropic]] and vaguely dissatisfying quotidian.
In this way the appeal of iterating a targeted work product with a random pattern-matcher loses its lustre. The first couple of passes are great: they get you from zero to 0.5. But the marginal improvement in each following round diminishes, as the machine reaches asymptotically towards its upper capability in producing what you had in mind, which we estimate, unscientifically, at about 75% of it.
Now, as [[generative AI]] improves towards 100 — assuming it does improve: there are some indications it may not; see below — that threshold may move, but it will never get to 100. In the meantime, as each successive round takes more time and bears less fruit, mortal enthusiasm and patience with the LLM will have long since waned: well before the [[Singularity]] arrives.
And many of the improvements we see will largely be in the [[meatware]]: we refine and elaborate our queries, we learn how better to frame them, and “prompt engineering” becomes the skill, rather than the dumb, parallel pattern-matching process that responds to it. Ours is the skill going in, and ours is the skill construing the output. What the machine does is the boring bit.
In all kinds of literature ''bar one'', construal is where the real magic happens: it is the [[Emergent|emergent]] creative act that renders ''King Lear'' a timeless cultural leviathan and {{br|Dracula: The Undead}} forgettable pap<ref>Maybe not ''that'' forgettable, come to think of it: it has stayed with me 15 years, after all.</ref>. A literary work may start with the text, but it barely stays there for a moment. The “meaning” of literature is personal: it lives between our ears, and within the cultural milieu that interconnects the reading population.
“Construal” and “construction” are interchangeable in this sense: over time that cultural milieu takes the received corpus of literature and, literally, ''constructs'' it into edifices its authors can scarcely have imagined. ''Hamlet'' speaks, still, to the social and human dilemmas of the twenty-first century in ways Shakespeare cannot possibly have contemplated.<ref>A bit ironic that Google should call its chatbot “Bard”, of all things.</ref>
Now there is one kind of “literature” where the last thing the writer wants is for the reader to use her imagination to fill in holes in the meaning. Where clarity of authorial intention is paramount; where communicating and understanding ''purpose'' is the sole priority: ''legal'' literature.
The ''last'' thing a legal drafter wants is to cede interpretative control to the reader. Rather, she seeks to squash all opportunities presented by creative ambiguity. Just as there are no atheists in foxholes, [[there are no metaphors in a trust deed|there are no metaphors in a Trust Deed]].
Legal drafting seeks to do to readers what code does to computer hardware: it reduces the reader to a machine, a mere symbol processor. It leaves as little room as possible for interpretation.
This is one reason why [[legalese]] tends to be so laboured. It is designed to chase down and prescribe outcomes for all logical possibilities, remove all ambiguity and render the text mechanical, precise and reliable. Where normal literature favours possibility over certainty, legal language bestows [[certainty]] at the cost of [[possibility]], and to hell with literary style and elegance.
Legal language is ''finite''. Literature is ''[[Finite and Infinite Games|infinite]]''.
If law is literature, then it is unique in that it is the only kind to favour certainty over possibility. Unique in that it is the one in which we must still grant sole jurisdiction to authorial intent. But the problem is, a large language model has no authorial intent.