Template:M intro work Large Learning Model
Fittingly, the first chatbot was designed as a parlour trick. In 1966 Joseph Weizenbaum, a computer scientist at MIT, created “[[ELIZA]]” to explore communication between humans and machines. [[ELIZA]] used pattern matching and substitution techniques to generate realistic conversations. By today’s standards, [[ELIZA]] was rudimentary, simply regurgitating whatever was typed into it, reformatted as an open-ended statement or question, thereby inviting further input. As a session continued, the user’s answers became more specific and elaborate, allowing [[ELIZA]] to seem ever more perceptive in its responses.
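The pattern-matching-and-substitution trick is simple enough to sketch in a few lines. This is an illustrative toy in ELIZA’s spirit, not Weizenbaum’s actual script — the rules and pronoun table are invented for the example:

```python
import re

# A few illustrative ELIZA-style rules: match a pattern, then reflect
# the user's own words back as an open-ended question or prompt.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person words so the reflection reads naturally.
REFLECT = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECT.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    # No rule matched: simply invite further input, as ELIZA did.
    return "Please go on."
```

So `respond("I feel anxious about my work")` returns “Why do you feel anxious about your work?” — the machine contributes nothing but the user’s own words, rearranged.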


Even though [[ELIZA]] was a basic “keepy uppy” machine, it proved surprisingly addictive — even to those who knew how it worked. Weizenbaum was famously shocked at how easily people, including his own secretary, were prepared to believe [[ELIZA]] “understood” them and contributed meaningfully to the interaction.


This is, of course, how all “mind-reading” works: with the right kind of clever questions, the conjurer extracts from the subject herself all the information she needs to create the illusion.
[[LLM]]s work the same way. Like all good conjuring tricks, [[generative AI]] relies on misdirection: its singular genius is that it lets us misdirect ''ourselves'' into wilfully suspending disbelief, never noticing who is doing the creative heavy lifting needed to turn machine-made screed into magic: ''we are''. We are [[Neuro-linguistic programming|neuro-linguistically programming]] ''ourselves'' to be wowed by LLMs.
Yet again, we are subordinating ourselves to the easy convenience of the machines. We need to have some self-respect and kick this habit.
By writing prompts, we create our own expectation of what we will see. When the pattern-matching machine produces something roughly like it, we use our own imaginations to frame, filter, boost, sharpen and polish the output into what we want to see. We render that output as commensurate as we can with our original instructions.
When we say, “fetch me a tennis racquet”, and the machine comes back with something more like a lacrosse stick, we are far more impressed than we would be had a human done the same thing. We would think the human a bit dim. But with [[generative AI]] we don’t, at first, even notice we are not getting what we asked for. We might think, “oh, that will do,” or perhaps, “ok, computer: try again, but make the basket bigger, the handle shorter, and tighten up the net.” We can iterate this way until we have what we want — or we could just use a conventional photo of a tennis racquet.
AI image generation famously struggles with hands, eyes, and logically possible three-dimensional architecture. First impressions can be stunning, but a closer look reveals an absurdist symphony. Given how large learning models work, this should not surprise us. They are all trees, no wood.
It is just as true of text prompts. AI-generated text looks fabulous at first blush: inspect it more closely and we can see this photorealistic resonance [[emerges]] out of logical cul-de-sacs and two-footed hacks. (We may be so beguiled because many humans write in logical cul-de-sacs and two-footed hacks, but that is another story.)
In either case, that [[Emergent|emergent]] creative act — the thing that renders ''King Lear'' an ageless cultural landmark and {{br|Dracula: The Undead}} forgettable pap<ref>Maybe not ''that'' forgettable, come to think of it: it has stayed with me 15 years, after all.</ref> — doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.
But the novelty soon wears off: as we persevere, we begin to see the magician’s wires. There are familiar tropes; we see how the model goes about what it does. It has its tics. It becomes less surprising at each go-round, eventually settling into an [[Entropy|entropic]] quotidian. LLMs quickly lose lustre.
As does the appeal of generating targeted, specific work product iteratively using a random word generator. The first couple of passes are great: they go from zero to 0.5. But the marginal improvement in each round diminishes, and the machine reaches asymptotically towards its upper boundary in articulating what you had in mind, which is about 75% of it.
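The shape of that curve can be sketched numerically. Assuming, purely for illustration, that each pass closes a fixed fraction of the remaining gap to a ceiling of 0.75 (the numbers are invented; only the shape matters):

```python
def iterate_quality(ceiling: float = 0.75, rate: float = 0.5, passes: int = 8) -> list[float]:
    """Toy model of diminishing returns: each pass closes `rate` of the
    remaining gap to `ceiling`. Quality rises fast, then crawls, and
    never reaches the ceiling."""
    quality, history = 0.0, []
    for _ in range(passes):
        quality += rate * (ceiling - quality)
        history.append(round(quality, 3))
    return history

# First pass leaps toward 0.5; later passes barely move the needle.
print(iterate_quality())
```

Each successive increment is half the previous one, so however long you iterate, the output approaches — but never reaches — the 75% mark.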
As [[generative AI]] evolves, that threshold may move towards 100% — though there are some indications it may get worse; see below — but it will not get there, and as each round becomes increasingly time-consuming and fruitless, human enthusiasm will wane long before the [[Singularity]] arrives.


This is also why, when we are targeting a specific outcome, the result is frequently, frustratingly, not quite what we had in mind.
Letting the reader do the imaginative work to fill in the holes is fine when it comes to literature — better than fine, in fact: it characterises the best art. But it is not how ''legal'' discourse works.


The ''last'' thing a legal drafter wants is to cede interpretative control to the reader. Rather, by her draft she seeks to squash exactly the ambiguity that the metaphors of good literature require.


Legal drafting is not literature in any sense: it reduces the reader to a machine. It hard-codes unambiguous meaning, leaving as little room as possible for interpretation. This is why [[legalese]] is so laboured. It is designed to take all doubt, ambiguity and ''fun'' out of reading. It renders reading mechanical, precise and reliable. It bestows [[certainty]] at the expense of [[possibility]], whatever the cost, and to hell with literary style and elegance.
