Template:M intro technology rumours of our demise

It didn't require ''any'' human training. They just fed it the rules, switched it on, and with indecent brevity it had walloped the grandmaster. It just played games against itself (there is a toy sketch of such a self-play loop after the quote below). What happens when LLMs do that?


{{Quote|{{rice pudding and income tax}}}}
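To make "just played games against itself" concrete, here is a minimal self-play sketch in Python. It is an illustration, not DeepMind's method: the game (a 21-stone Nim variant), the table of value estimates and the learning constants are all assumptions chosen to keep the example tiny. Given only the rules and its own games, the program works out the winning move.

<syntaxhighlight lang="python">
import random

random.seed(1)

# Nim: 21 stones, take 1-3 per turn, whoever takes the last stone wins.
# Q[(stones_left, take)] estimates the value for the player about to move.
ACTIONS = (1, 2, 3)
Q = {}

def best(stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

for episode in range(50_000):
    stones, history = 21, []
    while stones > 0:
        legal = [a for a in ACTIONS if a <= stones]
        take = random.choice(legal) if random.random() < 0.1 else best(stones)
        history.append((stones, take))
        stones -= take
    # The player who made the last move won; credit +1/-1 alternately back
    # through the game, so both sides learn from the same episodes.
    outcome = 1.0
    for state, action in reversed(history):
        old = Q.get((state, action), 0.0)
        Q[(state, action)] = old + 0.1 * (outcome - old)
        outcome = -outcome

# With 21 stones the winning move is to take 1, leaving a multiple of four;
# given enough self-play the learned policy should settle on that.
print(best(21), {a: round(Q.get((21, a), 0.0), 2) for a in ACTIONS})
</syntaxhighlight>

No human game was ever shown to it: the rules, plus millions of moves against itself, are the whole of its education.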


But brute-force testing the outcomes of a bounded zero-sum game with simple, fixed rules is a completely different proposition. This is “body stuff”. The environment is fully fixed, understood and determined. ''This is exactly where we would expect a Turing machine to excel''.
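A toy illustration of that point, in Python; the game (noughts and crosses) and the code are a sketch of the principle, not anything from AlphaGo. Because the rules are simple, fixed and bounded, a machine can simply enumerate every line of play.

<syntaxhighlight lang="python">
# Brute-force minimax over noughts and crosses: the rules are fixed and the
# game tree is bounded, so every outcome can be tested exhaustively.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable result for X (+1 win, 0 draw, -1 loss) with perfect play."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0  # board full: draw
    results = (minimax(board[:i] + player + board[i + 1:],
                       "O" if player == "X" else "X") for i in moves)
    return max(results) if player == "X" else min(results)

# Exhaustively testing every line of play proves the game is a draw.
print(minimax(" " * 9, "X"))  # prints 0
</syntaxhighlight>

Nothing in that loop needs a teacher, a corpus, or an opinion: the environment supplies every fact the machine requires.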


An LLM, by contrast, is algorithmically compositing synthetic outputs ''against human-generated text''. The text it pattern-matches against needs to be well-formed human language. That is how ChatGPT works its magic. Significantly degrading the corpus it pattern-matches against will progressively degrade the output. This is called “model collapse”: it is an observed effect and is believed to be an insoluble problem (a toy sketch of the mechanism follows the quote below). LLMs will only work for humans if they’re fed human-generated content. A redditor put it more succinctly than I can:
{{Quote|{{AlphaGo v LLM}}}}


[Redditor piece on intertextual postmodernism]
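Here is a toy sketch of that degradation in Python. It is a caricature, not a real LLM: the "model" is nothing more than the word frequencies of its training corpus, and each generation is trained only on the previous generation's synthetic output. Any word the model happens not to emit is lost for good.

<syntaxhighlight lang="python">
import random
from collections import Counter

random.seed(42)

# Generation 0: "human" data -- a long-tailed, Zipf-ish distribution over words.
vocab = list(range(500))
weights = [1.0 / (rank + 1) for rank in vocab]
corpus = random.choices(vocab, weights=weights, k=5_000)

for gen in range(10):
    print(f"generation {gen}: {len(set(corpus))} distinct words survive")
    # "Training" here is nothing more than counting word frequencies ...
    counts = Counter(corpus)
    words = list(counts)
    freqs = [counts[w] for w in words]
    # ... and the next generation learns purely from this generation's output.
    # A word the model happened not to emit can never come back.
    corpus = random.choices(words, weights=freqs, k=5_000)
</syntaxhighlight>

The count of distinct words can only fall; nothing in the loop can restore a word once it has gone. That, in miniature, is why the effect only runs one way without a fresh supply of human-generated text.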
====The ChatGPT canon====