

(The “directed intelligence of human consensus” is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might be able to tell us, having been on both ends of it.)<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>
===We have this facility now===
Large language models can already execute this “research assistant” mode. It seems less flashy, but the JC’s view is that this is the really compelling use case, the one that will still be with us once the peak of inflated expectations has collapsed into the trough of disillusionment. BingAI just now found something I have been looking for, without success, for years.
{{Quote|Bing, do you know a children's story involving a fox or a wolf or something being chased by a pack of hunting dogs and it hides under a rock. The hiding place is too small, and it has to poke one part of its body out. It has an argument with each of its body parts to see which is the most deserving of being saved and concludes that its tail is the least deserving so it sticks its tail out. The hunting dogs arrive and grab it by the tail and catch it. Moral of the story is something like you are only as good as your weakest link.}}
{{Quote|I found the story you were looking for! It’s called “The Wolf and His Tail” ''[Bing confirms the plot]''. “The Wolf and His Tail” is a children’s story that was written by Barbara Juster Esbensen. It was first published in 1972.}}
This is a value that will not go away.
===Bayesian priors and the canon of ChatGPT===
One last point on literary theory: the Bayesian priors argument, which fails for Shakespeare, fails all the more for a large language model.
Just as a great deal of the intellectual energy involved in taking a text and rendering it into the three-dimensional metaphorical universe we think of as ''King Lear'' comes from beyond the author, so the same is clearly true of an LLM. Its model, after all, is drawn entirely from the human canon. And there is one other contributor to the meaning of an artwork we haven’t yet considered, the main one: the reader. It is the reader, and her “[[cultural baggage]]”, who must make head or tail of a work of literature, however rich the cultural milieu that supports it. Comprehending figurative language is not a matter of symbol-processing. We are ''not'' Turing machines. We know this because one person can find the overture to ''Tristan und Isolde'' sublime while it leaves the next person cold. I can see in Daniel Jeanrenaud a true inheritor of the blues pioneers; you might find his music unremarkable. You might even feel that way about Leadbelly or Robert Johnson themselves.
The reader makes the art. This is as true of magic — the conjurer’s trick is to misdirect the audience into imagining something that isn’t there: the magic is supplied by the watcher — as it is of digital magic. We imbue what an LLM provides with meaning.
This, LLMs as currently configured cannot do. Feed an LLM its own output and it rapidly degrades into meaningless mush. LLMs are not intentional. They are not directed: they depend on the ongoing environment to shape their fitness, and that environment is necessarily human.
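A toy way to see that degradation, sometimes called “model collapse”: the sketch below trains a trivial word-bigram model, a drastically scaled-down stand-in for an LLM, on a human-written seed, then retrains it on its own output, generation after generation. The seed text, the bigram stand-in and every name in it are illustrative assumptions, not anyone’s production system; the typically shrinking vocabulary count is the point.

<syntaxhighlight lang="python">
# A minimal sketch of "model collapse": a word-bigram model retrained on
# its own output. The SEED string is an arbitrary stand-in for the human canon.
import random
from collections import Counter, defaultdict

SEED = (
    "the reader makes the art and the reader brings her cultural baggage "
    "to every text she reads for meaning is not in the symbols but in the "
    "mind that receives them and no machine however large can supply that "
    "mind on its own"
)

def train(text):
    # Count, for each word, which words follow it and how often.
    model = defaultdict(Counter)
    words = text.split()
    for a, b in zip(words, words[1:]):
        model[a][b] += 1
    return model

def generate(model, length=120):
    # Walk the table, picking each next word in proportion to its count.
    word = random.choice(list(model))
    out = [word]
    for _ in range(length - 1):
        followers = model.get(word)
        if not followers:                          # dead end: restart anywhere
            word = random.choice(list(model))
        else:
            word = random.choices(list(followers),
                                  weights=followers.values())[0]
        out.append(word)
    return " ".join(out)

text = SEED
for generation in range(8):
    model = train(text)
    text = generate(model)                         # the model now eats its own output
    print(f"generation {generation}: {len(set(text.split()))} distinct words")
</syntaxhighlight>

Run it a few times and the distinct-word count generally drifts downward and never recovers: a word the model fails to sample in one generation is gone from every generation after. Nothing here describes any real model’s internals; it just makes concrete why the ongoing environment shaping an LLM has, so far, to be human.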


===A real challenger bank===