Template:M intro technology rumours of our demise

Just as a great deal of the intellectual energy involved in rendering a text into the three-dimensional metaphorical universe we think of as ''King Lear'' comes from beyond the author of that text, so it does with the output of an LLM. Its model, after all, is entirely drawn from the human canon.  


====Model collapse====
There is a concern that, if we set up a feedback loop whereby LLMs consume their own generated text, some kind of hyperintelligence might emerge. Just look what happened with AlphaGo.
 
It didn’t require ''any'' human training. They just fed it the rules, switched it on, and with indecent brevity it had walloped the grandmaster. It simply played games against itself. What happens when LLMs do that?
 
{{Hhgg rice pudding and income tax}}
 
But brute-force testing the outcomes of a bounded zero-sum game with simple, fixed rules is a completely different proposition. This is “body stuff”. The environment is fully fixed, understood and determined. ''This is exactly where we would expect a Turing machine to excel''.
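The point can be made concrete with a toy example. This sketch has nothing to do with AlphaGo’s actual methods (which involve neural networks and self-play reinforcement learning); it simply shows that a bounded zero-sum game with fixed rules can be ''solved outright'' by exhaustive search — the game of Nim stands in as an assumed, illustrative case:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def best_outcome(stones: int) -> int:
    """Nim: players alternately take 1-3 stones; taking the last stone wins.

    Returns +1 if the player to move wins with perfect play, -1 otherwise.
    A bounded game with fixed rules: every position can be enumerated.
    """
    if stones == 0:
        return -1  # the previous player took the last stone and won
    # Try every legal move; what is bad for the opponent is good for us.
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)
```

Exhaustive search settles the game completely — the player to move loses exactly when the pile is a multiple of four — with no human examples needed, only the rules. That is the sense in which AlphaGo-style self-play lives in a fully determined environment.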
 
An LLM, by contrast, is algorithmically compositing synthetic outputs against ''human'' text. The text it pattern-matches against needs to be well-formed human language. That is how ChatGPT works its magic. Significantly degrade the corpus it pattern-matches against and the output will progressively degrade with it. This is called “model collapse”: it is an observed effect, and it is believed to be an insoluble problem. LLMs will only work for humans if they are fed human-generated content. A redditor put it more succinctly than I can:
 
[Redditor piece on intertextual postmodernism]
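The feedback loop can be caricatured in a few lines. This is a toy simulation, not a model of any actual LLM’s training: “human text” is stood in for by draws from a normal distribution, and the assumed truncation step stands in for a sampler that under-reproduces the rare, surprising events in its training data. Each generation is fitted only to the previous generation’s output:

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data, modelled as draws from a standard normal.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

stdevs = []
for generation in range(20):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    stdevs.append(sigma)
    # The next "model" trains only on the previous model's output, and --
    # like a sampler with a truncation cut-off -- it never reproduces the
    # rare tail events of its training data.
    data = [x for x in (random.gauss(mu, sigma) for _ in range(2000))
            if abs(x - mu) <= 2 * sigma]
```

Run it and the measured spread shrinks generation by generation: the distribution’s tails — the rare, interesting stuff — are progressively lost, which is the qualitative shape of the collapse described above.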
====The ChatGPT canon====
 
And there is a final contributor to every cultural artefact we haven’t yet considered. The main one: the ''reader''. It is the reader, and her “[[cultural baggage]]”, who must make head and tail of art and literature, and who determines whether it stands or falls. This is true however rich the cultural milieu that supports the art.  
 
Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]].  


We know this because the overture from ''Tristan und Isolde'' can reduce one person to tears and leave the next one cold. I can see in the Camden Cat a true inheritor of the blues pioneers; you might see an unremarkable busker. A text becomes art in the reader’s head.