This, by the way, has been the tale of the collaborative internet: despite [[Chris Anderson]]’s forecast in 2006 that universal interconnectedness would change economics forever<ref>[[The Long Tail: How Endless Choice is Creating Unlimited Demand]] ''(2006)''</ref> — that, suddenly, it would be costless to service the long tail of global demand, prompting some kind of explosion in cultural diversity — in practice, the exact opposite has happened.<ref>It is called “{{Plainlink|https://www.theclassroom.com/cultural-convergence-examples-16778.html|cultural convergence}}”.</ref> The overriding imperatives of [[scale]] have obliterated the subtle appeals of diversity, while sudden, unprecedented global interconnectedness has had the [[system effect]] of ''homogenising demand''.
This is the counter-intuitive effect of a “[[cheapest-to-deliver]]” strategy: while it has become ever easier to target the “[[fat head]]”, ''the [[long tail]] has grown thinner''.<ref>{{author|Anita Elberse}}’s [[Blockbusters: Why Big Hits and Big Risks are the Future of the Entertainment Business|''Blockbusters'']] is excellent on this point.</ref> As the tail contracts, the [[commercial imperative]] to target our lowest common denominators gets stronger. ''This is a highly undesirable feedback loop''. It will homogenise ''us''. ''We'' will become less diverse. ''We'' will become more [[Antifragile|fragile]].
Now: if [[artificial intelligence]] is so spectacular, shouldn’t we be a bit more ambitious about what ''it'' could do for ''us''? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” a bit ''underwhelming''? Isn’t employing it in a way that will “dumb us down” a bit, well, ''dumb''?
===Digression: Darwin’s profligate idea===
By now, most accept the theory of [[evolution by natural selection]]. This really ''is'' magic: it provides a comprehensive account of the origin of life — indeed, the sum of all organic “creation”, as it were — which [[Reductionism|reduces]] to a mindless, repetitive process that we can state in a short sentence:
{{Quote|In a population of organisms with individual traits whose offspring inherit those traits only with random variations, those having traits most suited to the prevailing environment will best survive and reproduce over time.<ref>This is the “ugliest man” by which, [[Nietzsche]] claims, we killed God.</ref>}}
The economy of ''[[design]]'' in this process is staggering. The economy of ''effort in execution'' is not.
Evolution is ''tremendously wasteful''. Not just in how it ''does'' adapt, but in how much of the time it ''does not''.
The chain of adaptations from amino acids to Lennon & McCartney may have billions of links in it, but that is a model of parsimony compared with the number of adaptations over that time that ''didn’t'' go anywhere — that arced off into one of [[design space]]’s gazillion dead ends and fizzled out. For every serendipitous mutation ''there are millions and millions of duds''.
[[Evolution]] isn’t directed — that is its very super-power — so it fumbles blindly around, fizzing and sparking, and only a vanishingly small proportion of the mutations it generates ever does anything ''useful'', and those that do, do so accidentally. Evolution is a random, [[stochastic]] process. It depends on aeons of time and burns unimaginable resources. Evolution solves problems by ''brute force''.
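This “mostly duds” quality is easy to make vivid with a toy simulation, a crude cousin of Dawkins’s famous “weasel” demonstration. The sketch below is purely illustrative (the target string, alphabet and fitness rule are all invented for the purpose): mutations are generated blindly, one character at a time, and a mutation survives only if it happens to suit the “environment” better. The discarded mutations vastly outnumber the kept ones.

```python
import random

# Toy natural selection: blind one-character mutations, kept only when they
# happen to match the "environment" (the target string) better. We count
# the dead-end mutations to see how wasteful the blind search is.

TARGET = "lennon & mccartney"
ALPHABET = "abcdefghijklmnopqrstuvwxyz &"

def fitness(candidate: str) -> int:
    """How many characters already suit the environment."""
    return sum(c == t for c, t in zip(candidate, TARGET))

random.seed(42)  # deterministic for the sake of the example
organism = "".join(random.choice(ALPHABET) for _ in TARGET)
duds = 0

while fitness(organism) < len(TARGET):
    i = random.randrange(len(TARGET))
    mutant = organism[:i] + random.choice(ALPHABET) + organism[i + 1:]
    if fitness(mutant) > fitness(organism):
        organism = mutant   # serendipitous mutation: survives
    else:
        duds += 1           # dead end in design space: discarded

# Only ~len(TARGET) mutations were ever useful; duds number in the thousands.
```

Note that even this is far kinder than real evolution, which has no memory of “fitness so far”; an honest simulation would be astronomically more wasteful still.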
Even though it came about through that exact evolutionary process, “natural” mammalian intelligence, whose apogee is ''homo sapiens'', isn’t like that. It ''is'' directed. In a way that our DNA cannot, humans can narratise and hypothesise, remember, learn, and rule out plainly stupid ideas without having to go through the motions of trying them.<ref>I know, I know. Hold your letters.</ref>
All mammals can do this to a degree; even retrievers.<ref>Actually, come to think of it, Lucille, bless her beautiful soul, didn’t seem to do that very often. But still.</ref> Humans happen to be particularly good at it. It took three and a half billion years to get from amino acid to the wheel, but 6,000 years to get from the wheel to the Nvidia RTX 4090 GPU.
Now. [[Large language model]]<nowiki/>s are, like evolution, ''undirected''. They are a “brute force” method, using colossal amounts of energy and processing power to perform countless really simple operations. They work by a stochastic algorithm not dissimilar to evolution by natural selection. They can get better by chomping through yet more data, faster, in more parallel instances, with batteries of server farms in air-cooled warehouses full of lightning-fast multi-core graphics processors. But this is already starting to get expensive and hard. We are bumping up against computational limits, as Moore’s law conks out, and environmental consequences, as the planet does.
For the longest time, “computing power” has been the cheap, efficient option. That is ceasing to be true. More processing is not a zero-cost option. We will start to see the opportunity cost of devoting all these resources to something that, at the moment, creates diverting sophomore mashups we don’t actually need.<ref>We would do well to remember Arthur C. Clarke’s law here. The parallel processing power an LLM requires is already massive. It may be that the cost of expanding it in the way envisioned would be unfeasibly huge — in which case the original “business case” for [[technological redundancy]] falls away. See also the [[simulation hypothesis]]: it may be that the most efficient way of simulating the universe with sufficient granularity to support the simulation hypothesis is ''to actually build and run a universe'' — in which case, the hypothesis fails.</ref>
The defining quality of this apex evolutionary being is that ''we can’t shut up''. The one thing homo sapiens ''doesn’t'' need is ''more unremarkable information''.
So, question: why burn all this energy creating [[premium mediocre|mediocre]] AI content when there is an unexplored library of human achievement at our feet and eight billion organic CPUs sitting around ready to create more?
====LibraryThing as a model====
So, how about using AI to better exploit our existing natural intelligence, rather than imitating or superseding it? Could we, instead, create [[system effect]]s to ''extend'' the long tail?
It isn’t hard to imagine how this might work. A rudimentary version exists in {{Plainlink|https://www.librarything.com/|LibraryThing}}’s recommendation engine. It isn’t new or wildly clever — as far as I know, {{Plainlink|https://www.librarything.com/|LibraryThing}} doesn’t use AI — each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you with some degree of confidence, based on combined metadata, whether it thinks you will like any other book. It used to have a feature where it could ''un''recommend books it thought you would hate, but for some reason it decommissioned that! Most powerfully, it will compare all the virtual “libraries” on the site and return the most similar user libraries to yours. The attraction of this is not the books you have in common, but the ones you ''don’t''.
Browsing doppelganger libraries can be uncanny. It is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.
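The “most similar libraries” comparison needs nothing more exotic than set overlap. Here is a minimal sketch in Python, assuming Jaccard similarity over ISBN sets — LibraryThing’s actual algorithm is not public, and the user names and ISBNs below are invented. The payoff is the last term of each result: the books your doppelganger has that you ''don’t''.

```python
# Toy "doppelganger library" finder: rank other users' libraries by
# Jaccard similarity of ISBN sets, then surface the books you lack.

def jaccard(a: set, b: set) -> float:
    """Overlap of two ISBN sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def doppelgangers(mine: set, others: dict) -> list:
    """Return (user, similarity, their-books-I-lack), most similar first."""
    return sorted(
        ((user, jaccard(mine, lib), lib - mine) for user, lib in others.items()),
        key=lambda t: t[1],
        reverse=True,
    )

# Invented example data: my library and two other users' libraries.
mine = {"9780141182803", "9780679723165", "9780141187761"}
others = {
    "alice": {"9780141182803", "9780679723165", "9780140449136"},
    "bob":   {"9780061120084"},
}

best_user, score, new_to_me = doppelgangers(mine, others)[0]
# best_user == "alice"; new_to_me holds her book I have never read.
```

A real system would weight rare books more heavily than bestsellers (two libraries sharing an obscure monograph says far more than two sharing a Harry Potter), which is exactly the thin-tail signal the essay is after.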
LibraryThing generates this kind of delight with a relatively crude traditional algorithm. Imagine how it might perform with AI, trawling all of human creation with more granular information about user habits.
Note how this role — seeking out delightful new human creativity — satisfies our criteria for the [[division of labour]] in that it is quite beyond the capability of any group of humans to do it, and it would not devalue, much less usurp, genuine human intellectual capacity. Rather, it would ''empower'' it.
Note also the [[system effect]] it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.
It would have the system effect of distributing wealth and information — that is, [[strength]], not [[Power structure|power]] — ''down'' the curve of human diversity, rather than concentrating it at the top.
We have lying all around us, unused, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. ''We are not lacking ingenuity''. Surely the best way of using these brilliant new machines is to harness what is literally lying around.
==== Division of labour, redux ====
So, about that “[[division of labour]]”. When it comes to mechanical tasks of the “body”, [[Turing machine]]s scale well, and humans scale very badly. “Scaling”, when we are talking about computational tasks, means doing them over and over again, in series or parallel, quickly and accurately. Each operation can be identical; their combined effect astronomical. Of course machines are good at this: this is why we build them. They are digital: they preserve information indefinitely, however many processors we use, with almost no loss of fidelity.
You could try to use networked humans to replicate a [[Turing machine]], but the results would be slow, costly and disappointing, and the humans would not enjoy it.<ref>{{author|Cixin Liu}} runs exactly this thought experiment in his ''Three Body Problem'' trilogy.</ref> Humans are slow and analogue. With each touch the [[signal-to-noise ratio]] would quickly degrade: this is the premise of the parlour game “Chinese Whispers”, in which each repetition degrades the signal, with amusing consequences.
A game of Chinese Whispers among a group of [[Turing machine]]s would make for a grim evening.
In any case, you could not assign a human, or any number of humans, the task of “cataloguing the entire canon of human creative output”: this is quite beyond their theoretical, never mind practical, ability. With a machine, at least in concept, you could.<ref>Though the infinite fidelity of machines is overstated, as I discovered when trying to find the etymology of the word “[[satisfice]]”. Its modern usage was coined by Herbert Simon in a paper in 1956, but the Google ngram suggests its usage began to tick up in the late 1940s. On further examination, the records transpired to be optical character recognition errors. So there is a large part of the human oeuvre — the pre-digital bit that has had to be digitised — that does suffer from analogue copy errors.</ref>
But when it comes to “mind stuff”, humans scale well. “Scaling”, for imaginative tasks, is different. Here we don’t want identical, digital, high-fidelity ''duplication'': ten thousand copies of ''Finnegans Wake'' contribute no more to the human canon than does one.<ref>Or possibly, even ''none'': Wikipedia tells us that, “due to its linguistic experiments, stream of consciousness writing style, literary allusions, free dream associations, and abandonment of narrative conventions, ''Finnegans Wake'' has been agreed to be a work largely unread by the general public.”</ref> Multiple humans doing “mind stuff” contribute precisely that idiosyncrasy, variability, difference in perspective that generates the wisdom, or brutality, of crowds: a complex community of readers can independently parse, analyse, explain, narratise, extend, criticise, extrapolate, filter, amend, correct, and improvise the information ''and each others’ reactions to it''.
The wisdom of the crowd shapes itself: consensus has a directed intelligence all of its own. This community of expertise is what [[Sam Bankman-Fried]] misses in his dismissal of Shakespeare’s “[[Bayesian prior|Bayesian priors]]”. However handsome William Shakespeare’s own contribution to the Shakespeare canon — the actual folio — it is finite, small and historic. It was complete by 1616 and hasn’t been changed since. The rest of the body of work we think of as “Shakespeare” comprises interpretations, editions, performances, literary criticisms, essays, adaptations and its (and their) infusion into the vernacular, and it ''vastly outweighs the master’s actual folio''. What is more, it continues to grow.
This kind of “communal mind scaling” creates its own intellectual energy and momentum from a small, wondrous seed planted and nurtured four hundred years ago.<ref>On this view Shakespeare is rather like a “prime mover” who created a universe but has left it to develop according to its own devices.</ref> No matter how fast pattern-matching machines run in parallel, however much brute, replicating horsepower they throw at the task, it is hard to imagine [[artificial intelligence]] in the shape of a [[large language model]], without human steering, doing any of this.
(The “directed intelligence of human consensus” is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might be able to tell us, having been on both ends of it.)<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>