


====Literary theory, legal construction and LLMs====
Fittingly, the first chatbot was designed as a parlour trick. In 1966 Joseph Weizenbaum, a computer scientist at MIT, created “[[ELIZA]]” to explore communication between humans and machines. [[ELIZA]] used pattern matching and substitution techniques to simulate conversation. By today’s standards it was rudimentary, simply regurgitating whatever was typed into it, reformatted as an open-ended statement or question that invited further input. As a session continued, the user’s answers became more specific and elaborate, allowing [[ELIZA]] to seem ever more perceptive in its responses.


[[ELIZA]] was a basic “keepy uppy” machine.
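
The gist of the trick fits in a few lines of Python. (A loose sketch for illustration only: the rules and reflections below are invented, and Weizenbaum’s actual script used a richer scheme of ranked keywords and decomposition rules.)

<syntaxhighlight lang="python">
import re

# ELIZA-style "keepy uppy": match a pattern in the user's input,
# swap first-person words for second-person ones, and serve the
# user's own words back as an open-ended question.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I),     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Turn 'my ideas' into 'your ideas', and so on."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the fallback keeps the rally going

print(respond("I feel nobody listens to my ideas"))
# Why do you feel nobody listens to your ideas?
</syntaxhighlight>

The “intelligence” here is one table of pronoun swaps and a fallback line; everything else the user supplies.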


Still, it proved surprisingly addictive — even to those who knew how it worked. Weizenbaum was famously shocked at how easily people, including his own secretary, were prepared to believe [[ELIZA]] “understood” them and contributed meaningfully to the interaction.


This is, of course, how any kind of mind-reading works: with the right kind of clever questions the conjurer extracts from the subject herself all the information needed to create the illusion.  


[[LLM]]s work the same way. Like all good conjuring tricks, [[generative AI]] relies on misdirection: its singular genius is that it lets us misdirect ''ourselves'' into wilfully suspending disbelief, never noticing who is doing the creative heavy lifting needed to turn its output into magic: ''we are''.


 
The irony is that we are [[Neuro-linguistic programming|neuro-linguistically programming]] ''ourselves'' to be wowed by LLMs. By writing prompts, we create our own expectation of what we will see, and when the pattern-matching machine produces something roughly like it, we use our own imaginations to frame and filter and sand down rough edges to see what we want to see, to render that output as commensurate as possible to our original instructions.
 
If we say, “fetch me a tennis racquet”, and the machine comes back with something resembling a lacrosse stick, we are far more impressed than we would be by a human doing the same thing. We would think the human useless. Dim. But with [[generative AI]] we don’t, at first, notice we are not getting what we asked for. We might think, “oh, that will do”, or perhaps, “try again, but make the basket bigger, the handle shorter, and tighten up the net.”


This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture, but it is just as true of text prompts. AI-generated text looks fabulous at first blush; inspect it more closely and that photorealistic resonance [[emerges]] out of impossibilities and two-footed hacks. That emergent genius doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.

Yet again we are suborning ourselves to the machines. We need to kick this habit.