====Literary theory, legal construction and LLMs====
Fittingly, the first chatbot was designed as a parlour trick. In 1966 Joseph Weizenbaum, a computer scientist at MIT, created “[[ELIZA]]” to explore communication between humans and machines. [[ELIZA]] used pattern matching and substitution techniques to generate realistic conversations. By today’s standards, [[ELIZA]] was rudimentary, simply regurgitating whatever was typed into it, reformatted as an open-ended statement or question, thereby inviting further input. As a session continued, the user’s answers became more specific and elaborate, allowing [[ELIZA]] to seem ever more perceptive in its responses.
[[ELIZA]] was a basic “keepy uppy” machine.
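The whole trick can be captured in a few lines. The sketch below is illustrative only, not Weizenbaum’s code, and the rules and pronoun table are invented for the example: it matches a handful of keyword patterns, swaps first-person words for second-person ones, and reflects the user’s own statement back as an open-ended question.

```python
import re

# Illustrative ELIZA-style rules (not from the original program):
# each pair is a regex to match against the user's statement and a
# response template that echoes the captured fragment back.
RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
    (r"(.*)", "Please go on."),  # fallback keeps the rally going
]

# Substitution step: first-person words become second-person
# before the fragment is echoed, so "I need X" reads back naturally.
REFLECTIONS = {"i": "you", "my": "your", "me": "you", "am": "are"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(statement: str) -> str:
    # Try each rule in order; the catch-all at the end always matches.
    for pattern, template in RULES:
        match = re.match(pattern, statement.lower().strip(" .!?"))
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
```

Everything “perceptive” in the exchange is supplied by the user: the machine contributes a regex, a pronoun table, and a question mark.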
Still, it proved surprisingly addictive, even to those who knew how it worked. Weizenbaum was famously shocked at how easily people, including his own secretary, were prepared to believe [[ELIZA]] “understood” them and contributed meaningfully to the interaction.
This is, of course, how any kind of mind-reading works: with the right kind of clever questions the conjurer extracts from the subject herself all the information needed to create the illusion.
[[LLM]]s work the same way. Like all good conjuring tricks, [[generative AI]] relies on misdirection: its singular genius is that it lets us misdirect ''ourselves'' into wilfully suspending disbelief, never noticing who is doing the creative heavy lifting needed to turn its output into magic: ''we are''.
Yet again we are suborning ourselves to the machines. We need to kick this habit.
The irony is that we are [[Neuro-linguistic programming|neuro-linguistically programming]] ''ourselves'' to be wowed by LLMs. By writing prompts, we create our own expectation of what we will see, and when the pattern-matching machine produces something roughly like it, we use our own imaginations to frame and filter and sand down rough edges to see what we want to see, to render that output as commensurate as possible to our original instructions.
When we say, “fetch me a tennis racquet”, and the machine comes back with something resembling a lacrosse stick, we are far more impressed than we would be by a human doing the same thing. We would think the human useless. Dim. But with [[generative AI]] we don’t, at first, notice we are not getting what we asked for. We might think, “oh, that will do,” or, perhaps, “try again, but make the basket bigger, the handle shorter, and tighten up the net.”
This is most obvious in AI-generated art, which famously struggles with hands, eyes, and logically possible three-dimensional architecture, but it is just as true of text prompts. AI-generated text looks fabulous at first blush; inspect it more closely and that photorealistic resonance [[emerges]] out of impossibilities and two-footed hacks. That emergent genius doesn’t subsist in the text. ''It is between our ears.'' We are oddly willing to cede intellectual eminence to a machine.