

(The “directed intelligence of human consensus” is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might be able to tell us, having been on both ends of it).<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>
===Bayesian priors and the canon of ChatGPT===
One last point on literary theory: the “[[Bayesian priors]]” argument that fails for Shakespeare fails all the more for a [[large language model]].


Just as a great deal of the intellectual energy involved in rendering a text into the three-dimensional [[metaphor]]ical universe we think of as ''King Lear'' comes from the cultural milieu surrounding the text, and not its actual author, so it does with the output of an LLM. Its source, after all, is drawn entirely from the human canon. A large language model trained only on randomly assembled ASCII characters would return randomly assembled ASCII characters.
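To see the point in miniature, here is a toy sketch, with a character-bigram model standing in, very loosely, for a transformer. It does nothing but redistribute whatever statistical structure its corpus gives it: train it on uniform random ASCII and it can only return uniform random ASCII; train it on English and its output is at least English-shaped.

<syntaxhighlight lang="python">
import random
import string
from collections import defaultdict

# A toy stand-in for an LLM: a character-bigram model. The point is not
# realism but the principle: a model can only redistribute the statistical
# structure of its training corpus.

def train_bigram(corpus: str) -> dict:
    """Count, for each character, the characters that follow it."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1
    return counts

def sample(model: dict, length: int = 80) -> str:
    """Generate text by walking the bigram counts as transition weights."""
    out = [random.choice(list(model))]
    for _ in range(length - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break
        chars, weights = zip(*nxt.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

# Train on uniformly random ASCII: every continuation is equally likely,
# so the "generated text" is itself just uniformly random ASCII.
noise = "".join(random.choices(string.printable, k=100_000))
print(sample(train_bigram(noise)))

# Train on English and the output at least echoes English letter statistics.
english = "the quick brown fox jumps over the lazy dog " * 2000
print(sample(train_bigram(english)))
</syntaxhighlight>

Noise in, noise out; canon in, something canon-shaped out.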


But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop whereby LLMs bootstrap themselves into some kind of hyperintelligent superlanguage, beyond the cognitive capacity of an unaugmented human, from which platform they might dominate human discourse? Are we inadvertently seeding ''Skynet''?


Just look what happened with AlphaGo Zero. It didn't require ''any'' human training data: its designers just fed it the rules, switched it on and, with indecent brevity, it had walloped its grandmaster-beating predecessor. It learned by playing millions of games against itself. What happens when LLMs do that?
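For the flavour of how self-play works, here is a minimal sketch, standing in at a vast distance for AlphaGo Zero's machinery: tabular Q-learning on the trivial “21 stones” subtraction game. The agent sees no human examples at all, only the rules and its own play, yet it converges on the optimal strategy of leaving its opponent a multiple of four stones.

<syntaxhighlight lang="python">
import random

# Toy self-play reinforcement learning on the subtraction game: 21 stones,
# take 1-3 per turn, whoever takes the last stone wins. No human examples
# are given: the agent improves purely by playing against itself.

N, MOVES = 21, (1, 2, 3)
Q = {(s, m): 0.0 for s in range(1, N + 1) for m in MOVES if m <= s}
ALPHA, EPS = 0.1, 0.2          # learning rate and exploration rate

def best(s):
    """Greedy move in state s according to current value estimates."""
    return max((m for m in MOVES if m <= s), key=lambda m: Q[(s, m)])

for episode in range(50_000):
    s = N
    while s > 0:
        # Epsilon-greedy: mostly play the best known move, sometimes explore.
        m = random.choice([x for x in MOVES if x <= s]) if random.random() < EPS else best(s)
        s2 = s - m
        if s2 == 0:
            target = 1.0       # we took the last stone: win
        else:
            # Negamax view: the opponent moves next and plays their best.
            target = -max(Q[(s2, mm)] for mm in MOVES if mm <= s2)
        Q[(s, m)] += ALPHA * (target - Q[(s, m)])
        s = s2

# Perfect play takes (s mod 4) stones, leaving the opponent a multiple of 4.
# (Positions where s is already a multiple of 4 are lost anyway.)
print([(s, best(s)) for s in range(1, 13)])
</syntaxhighlight>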


{{Quote|{{rice pudding and income tax}}}}


Brute-forcing outcomes in a fully bounded, zero-sum environment with simple, fixed rules is exactly what machines are designed to do. If this is to be our comparison, we are misdirecting ourselves, voluntarily suspending disbelief, setting ourselves up to fail. This is agreeing a contest on the machine’s terms. For playing Go is “body stuff”. The environment is fully fixed, understood, transparent and determined. There is no uncertainty. No ambiguity. No room for creative construal. ''This is exactly where we would expect a Turing machine to excel''.
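To make that concrete: a bounded game can simply be exhausted. The sketch below enumerates every reachable tic-tac-toe position by negamax search, with no learning, judgment or construal whatsoever, and reads off the answer that perfect play is a draw. (Go's game tree is astronomically larger, hence AlphaGo's learned evaluation, but the environment is every bit as fixed.)

<syntaxhighlight lang="python">
from functools import lru_cache

# Tic-tac-toe is a bounded, zero-sum, perfect-information game, so a Turing
# machine can enumerate every reachable position and read off the
# game-theoretic value. No judgment, no construal: just exhaustion.

LINES = [(0,1,2), (3,4,5), (6,7,8),     # rows
         (0,3,6), (1,4,7), (2,5,8),     # columns
         (0,4,8), (2,4,6)]              # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """+1 if the side to move can force a win, -1 a forced loss, 0 a draw."""
    if winner(board):
        return -1                       # the previous mover just won
    if "." not in board:
        return 0                        # board full: draw
    nxt = "O" if player == "X" else "X"
    return max(-value(board[:i] + player + board[i+1:], nxt)
               for i, cell in enumerate(board) if cell == ".")

print(value("." * 9, "X"))   # prints 0: perfect play is a draw
</syntaxhighlight>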


We deploy LLMs, by contrast, in complex environments. They are unbounded, ambiguous, inchoate and impermanent. The situation changes. Mathematical computation doesn't work.
 
{{Quote|Imagine how hard physics would be if particles could think.
:— Murray Gell-Mann}}
 
An LLM works because it can quickly composite synthetic outputs ''against human text''. It must pattern-match against well-formed human language. That is how it works its magic. Degrading its training data will progressively degrade its output. This “model collapse” is an observed effect. LLMs will only work for humans if they’re fed human-generated content. A redditor put it more succinctly than I can:
{{Quote|{{AlphaGo v LLM}}}}
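A caricature of the mechanism, for intuition only (this is a toy Gaussian, not an LLM, though Shumailov et al. reported the analogous effect in real models): fit a model to data, sample from it, refit on the samples and repeat, so each generation trains solely on the previous generation's output. Estimation error compounds, the tails are forgotten, and the learned distribution narrows towards nothing.

<syntaxhighlight lang="python">
import random
import statistics

# A minimal caricature of "model collapse": each generation's "model" is just
# a Gaussian fitted to the previous generation's samples. Training on your
# own output compounds estimation error and progressively forgets the tails.

random.seed(0)
N = 50                                              # examples per generation
data = [random.gauss(0.0, 1.0) for _ in range(N)]   # "human" data: mean 0, sd 1

for generation in range(101):
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)
    if generation % 25 == 0:
        print(f"gen {generation:3d}: mean={mu:+.3f} sd={sigma:.3f}")
    # The next generation trains exclusively on this generation's output.
    data = [random.gauss(mu, sigma) for _ in range(N)]
</syntaxhighlight>

Run it and watch the standard deviation dwindle across generations: the synthetic corpus grows ever blander until there is nothing left to say.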


There is another contributor to the cultural milieu surrounding any text: the ''reader''. It is the reader, and her “[[cultural baggage]]”, who must make head or tail of the text. She alone determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports the art.


Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]].


We know this because the prelude to ''Tristan und Isolde'' can reduce one listener to tears and leave the next cold. One contrarian can see in the Camden Cat a true inheritor of the great blues pioneers, while others see an unremarkable busker. A text sparks meaning, and becomes art, in the reader’s head.
 
It turns out that the hitherto sterile debate about the nature of meaning and art, which turned generations of students off philosophy, is now critically important.


This is as true of magic — the conjurer’s trick is to misdirect her audience into ''imagining'' something that isn’t there: the magic is supplied by the audience — as it is of ''digital'' magic. We imbue what an LLM generates with meaning. ''The meatware is doing the heavy lifting''.