Template:M intro technology rumours of our demise

This, by the way, has been the tale of the collaborative internet: despite [[Chris Anderson]]’s forecast in 2006 that universal interconnectedness would change economics forever<ref>[[The Long Tail: How Endless Choice is Creating Unlimited Demand]] ''(2006)''</ref> — that, suddenly, it would be costless to service the long tail of global demand, prompting some kind of explosion in cultural diversity — in practice, the exact opposite has happened.<ref>It is called “{{Plainlink|https://www.theclassroom.com/cultural-convergence-examples-16778.html|cultural convergence}}”.</ref> The overriding imperatives of [[scale]] have obliterated the subtle appeals of diversity, while sudden, unprecedented global interconnectedness has had the [[system effect]] of ''homogenising demand''.  


This is the counter-intuitive effect of a “[[cheapest-to-deliver]]” strategy: while it has become ever easier to target the “[[fat head]]”, ''the [[long tail]] has grown thinner''.<ref>{{author|Anita Elberse}}’s [[Blockbusters: Why Big Hits and Big Risks are the Future of the Entertainment Business|''Blockbusters'']] is excellent on this point. </ref> As the tail contracts, the [[commercial imperative]] to target our lowest common denominators gets stronger. ''This is a highly undesirable feedback loop''. It will homogenise ''us''. ''We'' will become less diverse. ''We'' will become more [[Antifragile|fragile]].


Now: if [[artificial intelligence]] is so spectacular, shouldn’t we be a bit more ambitious about what ''it'' could do for ''us''? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” a bit ''underwhelming''? Isn’t employing it in a way that will “dumb us down” a bit, well, ''dumb''?
 


===Digression: Darwin’s profligate idea===
By now, most accept the theory of [[evolution by natural selection]]. This really ''is'' magic: it provides a comprehensive account of the origin of life — indeed, the sum of all organic “creation”, as it were — which [[Reductionism|reduces]] to a mindless, repetitive process that we can state in a short sentence:


{{Quote|In a population of organisms with individual traits whose offspring inherit those traits only with random variations, those having traits most suited to the prevailing environment will best survive and reproduce over time.<ref>This is the “ugliest man” by which, [[Nietzsche]] claims, we killed God. </ref>}}


The economy of ''[[design]]'' in this process is staggering. The economy of ''effort in execution'' is not.


Evolution is ''tremendously'' ''wasteful''. Not just in how it ''does'' adapt, but in how much of the time it ''does not''.  


The chain of adaptations from amino acids to Lennon & McCartney may have billions of links in it, but that is a model of parsimony compared with the number of adaptations over that time that ''didn’t'' go anywhere — that arced off into one of [[design space]]’s gazillion dead ends and fizzled out. For every serendipitous mutation ''there are millions and millions of duds''.


[[Evolution]] isn’t directed — that is its very super-power — so it fumbles blindly around, fizzing and sparking, and only a vanishingly small proportion of the mutations it generates ever does anything ''useful'', and even then only by accident. Evolution is a random, [[stochastic]] process. It depends on aeons of time and burns unimaginable resources. Evolution solves problems by ''brute force''.
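For the algorithmically minded, here is a minimal sketch, in Python, of what “solving by brute force” looks like. Everything in it is illustrative and invented for this page: the fixed target string stands in for “the prevailing environment” (real evolution, of course, has no target at all), and the mutation rate is plucked from the air. The point is the one above: blind, random copying errors, almost all of them duds, with the rare lucky one kept.

<syntaxhighlight lang="python">
import random

# An arbitrary stand-in for "the prevailing environment". Illustrative only:
# real evolution has no target, just whatever happens to survive.
TARGET = "methinks it is like a weasel"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "


def fitness(candidate: str) -> int:
    """How well suited the candidate is to the 'environment': count of matching positions."""
    return sum(1 for a, b in zip(candidate, TARGET) if a == b)


def mutate(candidate: str, rate: float = 0.05) -> str:
    """A blind, undirected copying error: each character may flip at random."""
    return "".join(
        random.choice(ALPHABET) if random.random() < rate else c
        for c in candidate
    )


# Start from random noise and let differential survival do the rest.
current = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
generations = duds = 0
while fitness(current) < len(TARGET):
    generations += 1
    child = mutate(current)
    if fitness(child) > fitness(current):
        current = child   # the rare serendipitous mutation survives
    else:
        duds += 1         # the overwhelming majority go nowhere

print(f"{generations} generations, of which {duds} were duds: {current}")
</syntaxhighlight>

Run it a few times: the duds outnumber the keepers by a couple of orders of magnitude, and that is with a kindly rigged fitness function and a tiny search space. Directed intelligence gets to skip most of that.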


Even though it came about through that exact evolutionary process, “natural” mammalian intelligence, whose apogee is ''homo sapiens'', isn’t like that. It ''is'' directed. In a way that our DNA cannot, humans can narratise and hypothesise, remember, learn, and rule out plainly stupid ideas without having to go through the motions of trying them.<ref>I know, I know. Hold your letters.</ref>


All mammals can do this to a degree; even retrievers.<ref>Actually come to think of it Lucille, bless her beautiful soul, didn’t seem to do that very often. But still. </ref> Humans happen to be particularly good at it. It took three and a half billion years to get from amino acid to the wheel, but 6,000 years to get from the wheel to the Nvidia RTX 4090 GPU.


Now. [[Large language model]]<nowiki/>s are, like evolution, ''undirected''. They are a “brute force” method, using colossal amounts of energy and processing power to perform countless really simple operations. They work by a stochastic algorithm not dissimilar to evolution by natural selection. They can get better by chomping yet more data, faster, in more parallel instances, with batteries of server farms in air-cooled warehouses full of lightning-fast multi-core graphics processors. But all this takes colossal amounts of processing power and energy, and it is already starting to get expensive and hard. We are bumping up against computational limits, as Moore’s law conks out, and environmental consequences, as the planet does.


For the longest time, “computing power” has been the cheap, efficient option. That is ceasing to be true. More processing is not a zero-cost option. We will start to see the opportunity cost of devoting all these resources to something that, at the moment, creates diverting sophomore mashups we don’t actually need.<ref>We would do well to remember Arthur C. Clarke’s law here. The parallel processing power an LLM requires is already massive. It may be that the cost of expanding it in the way envisioned would be unfeasibly huge — in which case the original “business case” for [[technological redundancy]] falls away. See also the [[simulation hypothesis]]: it may be that the most efficient way of simulating the universe with sufficient granularity to support the simulation hypothesis is ''to actually build and run a universe'' in which case, the hypothesis fails.</ref>


The defining quality of this apex evolutionary being is that ''we can’t shut up''. The one thing homo sapiens ''doesn’t'' need is ''more unremarkable information''.  


So, question: why burn all this energy creating [[premium mediocre|mediocre]] AI content when there is an unexplored library of human achievement at our feet and eight billion organic CPUs sitting around ready to create more?  


====LibraryThing as a model====
So, how about using AI to better exploit our existing natural intelligence, rather than imitating or superseding it? Could we, instead, create [[system effect]]s to ''extend'' the long tail?  


It isn’t hard to imagine how this might work. A rudimentary version exists in {{Plainlink|https://www.librarything.com/|LibraryThing}}’s recommendation engine. It isn’t new or wildly clever — as far as I know, {{Plainlink|https://www.librarything.com/|LibraryThing}} doesn’t use AI: each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you, with some degree of confidence based on combined metadata, whether it thinks you will like any other book. It used to have a feature where it could ''un''recommend books it thought you would hate, but for some reason it decommissioned that! Most powerfully, it will compare all the virtual “libraries” on the site and return the user libraries most similar to yours. The attraction of this is not the books you have in common, but the ones you ''don’t''.
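By way of illustration only (this is not LibraryThing’s actual code, and the usernames and ISBNs below are invented), the core trick needs nothing cleverer than set arithmetic: treat each library as a set of ISBNs, rank the other libraries by overlap, and then surface what your nearest “doppelganger” owns that you don’t.

<syntaxhighlight lang="python">
# A toy, hypothetical sketch of a LibraryThing-style "similar libraries" engine.
# Invented data; the real site's algorithm is not public and is doubtless cleverer.

libraries = {
    "me":         {"9780141182605", "9780571273188", "9780679745587", "9780099448792"},
    "doppel":     {"9780141182605", "9780571273188", "9780156028356", "9780374533557"},
    "bookworm42": {"9780747532743", "9780261103283", "9780099448792"},
}


def similarity(a: set, b: set) -> float:
    """Jaccard overlap: books in common divided by all books between the two libraries."""
    return len(a & b) / len(a | b) if (a or b) else 0.0


def most_similar(user: str) -> list:
    """Every other library, ranked by similarity to the given user's library."""
    mine = libraries[user]
    others = [(name, similarity(mine, theirs))
              for name, theirs in libraries.items() if name != user]
    return sorted(others, key=lambda pair: pair[1], reverse=True)


def recommendations(user: str, neighbour: str) -> set:
    """The attraction is not the books you have in common, but the ones you don't."""
    return libraries[neighbour] - libraries[user]


best_match, score = most_similar("me")[0]
print(best_match, round(score, 2))
print(recommendations("me", best_match))
</syntaxhighlight>

Swap the crude set overlap for something that also reads the books, the reviews and the tags, and you have the richer version imagined below.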


Browsing doppelganger libraries can be uncanny. It is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.  


LibraryThing generates this kind of delight with a relatively crude traditional algorithm. Imagine how it might perform with AI, trawling all of human creation with more granular information about user habits.  


Note how this role — seeking out delightful new human creativity — satisfies our criteria for the [[division of labour]] in that it is quite beyond the capability of any group of humans to do it, and it would not devalue, much less usurp, genuine human intellectual capacity. Rather, it would ''empower'' it.


Note also the [[system effect]] it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.  


It would have the system effect of distributing wealth and information — that is, [[strength]], not [[Power structure|power]] — ''down'' the curve of human diversity, rather than concentrating it at the top.


We have lying all around us, unused, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. ''We are not lacking ingenuity''. Surely the best way of using these brilliant new machines is to harness what is literally lying around.
 


The JC is not at all bearish on technology in general, or artificial intelligence in particular. He’s just bearish on dopey applications for it.


Information technology has done a fabulous job of alleviating boredom, by filling our empty moments with a 5-inch rectangle of gossip, outrage and titillation, but it has done little to nourish the intellect. This is a function of the choices we have made. Those choices, in turn, are informed by commercial interests. Maybe we are missing something by never being bored. Maybe boredom is a clear space where imagination can run wild. Perhaps, in our fear of boredom, by constantly distracting ourselves from our own existential anguish, we make ourselves vulnerable to this two-dimensional online world.


==== Division of labour, redux ====
About that “[[division of labour]]”. When it comes to mechanical tasks, machines — especially [[Turing machine]]s — scale very well, while humans scale very badly. “Scaling” when we are talking about computational tasks means doing them over and over again, in series or parallel, quickly and accurately. Each operation can be identical; their combined effect astronomical. Of course machines are good at this: this is why we build them. They are digital: they preserve information indefinitely, however many processors we use, with almost no loss of fidelity.


You could try to use networked humans to replicate a [[Turing machine]], but the results would be slow, costly and disappointing and the humans would not enjoy it.<ref>{{author|Cixin Liu}} runs exactly this thought experiment in his ''Three Body Problem'' trilogy.</ref> Humans are slow and analogue. With each touch they ''degrade'' information or ''augment'' it, depending on how you feel about it. The [[signal-to-noise ratio]] would quickly degrade: this is the premise for the parlour game “Chinese Whispers” — with each repetition the signal degrades, with amusing consequences. A game of Chinese Whispers among a group of [[Turing machine]]s would make for a grim evening.
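A toy illustration of the difference (the message, the noise model and the parameters are all made up): the digital relay hands the message on bit-perfect however many times it is copied, while the “human” relay picks up a little noise at every touch and soon loses the signal.

<syntaxhighlight lang="python">
import random

# The traditional parlour-game example, for illustration.
MESSAGE = "send reinforcements, we are going to advance"
ALPHABET = "abcdefghijklmnopqrstuvwxyz ,"


def digital_relay(message: str, hops: int) -> str:
    """A Turing machine's game of Chinese Whispers: every hop is a perfect copy."""
    for _ in range(hops):
        message = "".join(message)   # copying is lossless, however often we do it
    return message


def analogue_relay(message: str, hops: int, noise: float = 0.03) -> str:
    """A human relay: each touch garbles a few characters at random."""
    for _ in range(hops):
        message = "".join(
            random.choice(ALPHABET) if random.random() < noise else c
            for c in message
        )
    return message


print(digital_relay(MESSAGE, hops=1_000))   # identical to the original, every time
print(analogue_relay(MESSAGE, hops=25))     # "send three and fourpence..." territory
</syntaxhighlight>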
But when it comes to “mind stuff”, humans scale well. “Scaling”, for imaginative tasks, is different. Here we don’t want identical, digital, high-fidelity ''duplication'': ten thousand copies of ''Finnegans Wake'' contribute no more to the human canon than does one.<ref>Or possibly, even ''none'': Wikipedia tells us that, “due to its linguistic experiments, stream of consciousness writing style, literary allusions, free dream associations, and abandonment of narrative conventions, ''Finnegans Wake'' has been agreed to be a work largely unread by the general public.”</ref> Multiple humans doing “mind stuff” contribute precisely the idiosyncrasy, variability and difference in perspective that generate the wisdom, or brutality, of crowds: a complex community of readers can independently parse, analyse, explain, narratise, extend, criticise, extrapolate, filter, amend, correct, and improvise the information ''and each others’ reactions to it''.


In any case, you could not assign a human, or any number of humans, the task of “cataloguing the entire canon of human creative output”: this is quite beyond their theoretical, never mind practical, ability. With a machine, at least in concept, you could.<ref>Though the infinite fidelity of machines is overstated, as I discovered when trying to find the etymology of the word “[[satisfice]]”. Its modern usage was coined by Herbert Simon in a paper in 1956, but the Google ngram suggests its usage began to tick up in the late 1940s. On further examination, the records transpire to be optical character recognition errors. So there is a large part of the human oeuvre — the pre-digital bit that has had to be digitised — that does suffer from analogue copy errors.</ref>
The wisdom of the crowd shapes itself: consensus has a directed intelligence all of its own. This community of expertise is what [[Sam Bankman-Fried]] misses in his dismissal of Shakespeare’s “[[Bayesian prior|Bayesian priors]]”. However handsome William Shakespeare’s own contribution to the Shakespeare canon — the actual folio — may be, it is finite, small and historic. It was complete by 1616 and hasn’t been changed since. The rest of the body of work we think of as “Shakespeare” comprises interpretations, editions, performances, literary criticisms, essays, adaptations and its (and their) infusion into the vernacular, and it ''vastly outweighs the master’s actual folio''. What is more, it continues to grow.


This kind of “communal mind scaling” creates its own intellectual energy and momentum from a small, wondrous seed planted and nurtured four hundred years ago.<ref>On this view Shakespeare is rather like a “prime mover” who created a universe but has left it to develop according to its own devices.</ref> No matter how fast pattern-matching machines run in parallel, however much brute, replicating horsepower they throw at the task, it is hard to imagine artificial intelligence in the shape of a large language model, without human steering, doing any of this.


(The “directed intelligence of human consensus” is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might be able to tell us, having been on both ends of it).<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>