Template:M intro work Large Learning Model



====Literary theory, legal construction and LLMs====
Fittingly, the first chatbot was designed as a parlour trick. In 1966 Joseph Weizenbaum, a computer scientist at MIT, created “ELIZA” to explore communication between humans and machines, simulating conversation with pattern-matching and substitution techniques. ELIZA was, by today’s standards, a rudimentary program with a limited range of responses, most of which just regurgitated the user’s input and reformatted it into a question or an open-ended statement that invited a further response. It was a basic “keepy uppy” machine.
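ELIZA’s pattern matching and substitution can be sketched in a few lines. This is an illustrative toy, not Weizenbaum’s original DOCTOR script: the rules and templates below are invented for demonstration, but the mechanism — match a phrase, reflect the user’s own words back as a question — is the same.

```python
import re

# Toy ELIZA-style rules (illustrative, not Weizenbaum's originals):
# each pairs a regex with a template that reflects the user's own
# words back as a question or prompt.
RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE), "Why do you need {0}?"),
    (re.compile(r"\bI am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    """Reflect the input back via the first matching rule,
    else fall through to an open-ended invitation."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please, go on."
```

Note where the “heavy lifting” happens: the program contributes nothing but substitution; the user supplies all the content that makes the reply seem perceptive.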


It proved surprisingly addictive — even to those who knew how it worked. Weizenbaum was famously shocked by how easily individuals, including his own secretary, were prepared to attribute sentience to it. As a session continued, the user’s answers became more specific and elaborate, allowing the machine to seem ever more perceptive in its responses.


This is, of course, how all fortune telling works: with the right kind of clever questions the clairvoyant extracts from the victim herself the information needed to create the illusion.


Like all good conjuring tricks, [[generative AI]] relies on misdirection: in fact, it lets us misdirect ourselves into wilfully suspending disbelief, not noticing who is doing the “heavy lifting” to turn the pattern-matching output into magic: ''we are''.


The irony is that we are [[Neuro-linguistic programming|neuro-linguistically programming]] ''ourselves'' to be wowed by LLMs. By writing the prompts we create our own expectation of what we will see, and when the pattern-matching produces something roughly like it, we use our own imaginations to frame, filter and sand down rough edges, rendering that output as commensurate as possible with our original instructions.


We say, “fetch me a tennis racquet”, and when the machine comes back with a lacrosse stick, we might think, “oh, that will do”, or, perhaps, “make the basket a bit bigger, the handle shorter, and tighten up the net.” We don’t, at first, notice we are not getting what we asked for.


This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture, but it is just as true of text prompts. AI-generated text looks fabulous at first blush; inspect it more closely and that photorealistic resonance [[emerges]] out of impossibilities and two-footed hacks. That emergent genius doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.


This is why the novelty wears off: as we persevere, we begin to see the magician’s wires. There are familiar tropes; we see how the model goes about what it does. It becomes progressively less surprising, and eventually settles into an entropic quotidian. Iteratively generating targeted, specific work product through a random word generator becomes increasingly time-consuming.


This is also why, when we are targeting a specific outcome, the result is frequently, frustratingly, not quite what we had in mind.


We refine and elaborate our query; we learn how better to engineer prompts, and ''this'' becomes the skill, rather than the pattern matching that responds to it. So ours is the skill going in, and ours is the skill narratising the output. That is true of all literary discourse: just as much of the “world-creation” goes on between the page and the reader as between writer and text.
Letting the reader do the imaginative work to fill in the holes is fine when it comes to literature — better than fine, in fact: it characterises the best art.
But that is not how legal discourse works. The ''last'' thing a legal drafter wants is to cede control of the narrative to the reader. Rather, a draft should hard-code unambiguous meaning, leaving as little room for interpretation as possible. This is one reason why legalese is so laboured: it is designed to remove all doubt, regardless of cost, and to hell with literary style and elegance.
We should regard legal drafting as closer to computer code: a form of symbol processing where the meaning resides in, and is limited by, the format of the text. That is to say, there is no room for imaginative world-creation.


====Meet the new boss —====