

====Literary theory, legal construction and LLMs====
Fittingly, the first chatbot was designed as a parlour trick. “Eliza” was a rudimentary program with a limited range of responses, most of which just regurgitated the user’s input and reformatted it into a question or an open-ended statement that invited a further response. It was a basic “keepy uppy” machine.


It proved surprisingly addictive — even to those who knew how it worked. As a session continued, the user’s answers became more specific and elaborate, allowing the machine to seem ever more perceptive in its responses.
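
By way of illustration, here is a minimal sketch of that sort of reflection in Python. The patterns, pronoun swaps and function names are made up for the example, not Weizenbaum’s originals; the point is only how little machinery the trick needs.

<syntaxhighlight lang="python">
import re

# Hypothetical, simplified rules in the Eliza mould: match a fragment of the
# user's input and bounce it back as a question or an open-ended statement.
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi want (.*)", re.I), "What would it mean to you if you got {0}?"),
    (re.compile(r"\bmy (.*)", re.I), "Tell me more about your {0}."),
]

# Swap first- and second-person pronouns so the echo reads as a reply.
SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def respond(user_input: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(reflect(match.group(1)))
    # Nothing matched: keep the ball in the air with a contentless prompt.
    return "Please, go on."

print(respond("I want a tennis racquet"))
# -> What would it mean to you if you got a tennis racquet?
print(respond("My colleague never listens to me"))
# -> Tell me more about your colleague never listens to you.
</syntaxhighlight>

Everything contentful in the exchange comes from the user; the program just hands it back.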
 
 
Like all good conjuring tricks, [[generative AI]] relies on misdirection: in fact, it lets us misdirect ourselves into wilfully suspending disbelief and therefore not noticing who is doing the “heavy lifting” to turn the pattern-matching output into magic: ''we are''.
 
The irony is that we are [[Neuro-linguistic programming|neuro-linguistically programming]] ''ourselves'' to be wowed by LLMs. By writing the prompts, we create our own expectation of what we will see, and when the pattern matching produces something roughly like it, we use our own imaginations to frame, filter and sand down the rough edges, rendering the output as close as possible to our original instructions.
 
We say, “fetch me a tennis racquet”, and when the machine comes back with a lacrosse stick, we think, “oh, that will do.”
 
This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture, but it is just as true of text prompts. It looks fabulous at first blush; inspect it more closely and that photorealistic resonance [[emerges]] out of impossibilities and two-footed hacks. That emergent genius doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.


This is why the novelty wears off: as we persevere, we begin to see the magician’s wires. There are familiar tropes; we see how the model goes about what it does. It becomes progressively less surprising, and eventually settles into an entropic quotidian. Generating targeted, specific work product iteratively through a random word generator becomes increasingly time-consuming.

We refine and elaborate our query; we learn how to better engineer prompts, and this becomes the skill, rather than the process of pattern-matching that responds to it. So ours is the skill going in, and ours is the skill narratising the output. That is true of all literary discourse: just as much of the “world-creation” goes on between the page and the reader as it does between writer and text.


====Meet the new boss —====
We don’t doubt that the LLM is coming, nor that the legal industry will find a use for it: we just doubt that there is a ''useful'', sustained use for it. It feels more like a parlour trick: surprising at first, diverting after a while, but then the novelty wears off, and the appeal of persevering with what is basically a gabby but unfocussed child pales.