Template:M intro technology rumours of our demise: Difference between revisions

Nor should we be misdirected by the “magic” of sufficiently advanced technology, like [[artificial intelligence]], to look too far ahead.


We take one look at the output of an AI art generator and conclude the highest human intellectual achievements are under siege. However good human artists may be, they cannot compete with the massively parallel power of LLMs, which can generate billions of images, some of which, by accident, will be transcendentally great art.


Not only does reducing art to its “[[Bayesian prior]]s” like this stunningly [[Symbol processing|miss the point]] about art, but it suggests those who would deploy artificial intelligence have their priorities dead wrong. There is no shortage of sublime human expression: quite the opposite. The internet is awash with “content”: there is already an order of magnitude more content to consume than our collected ears and eyes have capacity to take in. And, here: have some more.


''We don’t need more content''. What we do need is ''dross management'' and ''needle-from-haystack'' extraction. This is stuff machines ought to be really good at. Why don’t we point the machines at that?


Remember the [[division of labour]]: machines are good at dreary, fiddly, repetitive stuff. There are plenty of easy, dreary, mechanical applications to which machines might profitably be put but have not been, and with which we are still burdened: folding washing, clearing up the kitchen and changing nappies. For these mundane but potentially life-changing tasks there is, apparently, no technological resolution in sight.


Okay, some require motor control and interaction with the irreducibly messy [[off-world|real world]], so there are practical barriers to progression. And robot lawnmowers and vacuum cleaners suggest a route ahead there. (They are cool!)
 
But other facilities would not: remembering where you put the car keys, weeding out fake news, managing browser cookies, or this: ''curating'' the great corpus of human creation, rather than ''ripping it off''. 
 
===== Digression: Nietzsche, Blake and the Camden Cat =====
{{Quote|''The Birth of Tragedy'' sold 625 copies in six years; the three parts of ''Thus Spoke Zarathustra'' sold fewer than a hundred copies each. Not until it was too late did his works finally reach a few decisive ears, including Edvard Munch, August Strindberg, and the Danish-Jewish critic Georg Brandes, whose lectures at the University of Copenhagen first introduced Nietzsche’s philosophy to a wider audience.
:—''The Sufferings of Nietzsche'', Los Angeles Review of Books, 2018}}
Here [[Sam Bankman-Fried]], with his loopy remarks about William Shakespeare’s damning [[Bayesian prior|Bayesian priors]], has a point, though not the one he thought he had. [[Friedrich Nietzsche]] died in obscurity and was only redeemed, after a nasty mix-up with National Socialism, in the 1960s, some seventy years after his death. William Blake, too, died in obscurity, as did Emily Dickinson.
 
These are the artists for whom the improbability engine worked its magic, even if not in their lifetimes. But how many undiscovered Nietzsches, Blakes and Dickinsons are there, who never caught the light, and now lie lost, sedimented into unreachably deep strata of the human canon? How many ''living'' artists are brilliantly ploughing an under-appreciated furrow, cursing their own immaculate Bayesian priors? How many solitary geniuses are out there who, as we speak, are galloping towards an obscurity a [[large language model]] might save them from?
 
(I know of at least one: the international rockabilly singer [[Daniel Jeanrenaud]], known to his fans as the [[Camden Cat]], who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. The only record of it, now, is {{Plainlink|1=https://www.youtube.com/watch?v=LS_fqYROIQM|2=this YouTube video}}. It has 361 views, and 6 likes.)
 
(Digression over.)
 
Imagine a personal large language model, private to the user — free, therefore, of data privacy concerns — that would pattern-match purely by reference to its client’s actual reading and listening history, prompts and instructions, and the recommendations of pattern-matched, like-minded readers, and which searched through all of those billions of books, plays, films, recordings and artworks we already have and, instead of using them to generate random mashups, uncovered genius?
 
Rather than ''converging'' on common ground, this algorithm would be designed to ''diversify'' — to find things its client would never otherwise find.
 
''This is not just the Spotify recommendation algorithm'', as occasionally delightful as that is. That has its own primary goal of revenue maximisation: client illumination is a by-product, arising only where the primary goal is met. As long as clients are illuminated ''enough to keep listening'', it doesn’t care.
 
Commercial algorithms follow a ''[[cheapest to deliver]]'' strategy: they satisfice. So they will tend to serve up populist mush: the reader’s personal “[[cheesecake for the brain]]”. This kind of algorithm, per {{author|Anita Elberse}}’s [[Blockbusters: Why Big Hits and Big Risks are the Future of the Entertainment Business|''Blockbusters'']], targeted primarily at revenue optimisation, has had the counter-intuitive effect of ''truncating'' the “[[long tail]]” of consumer choice. A sensible use for this technology would ''extend'' it.
 
In any case, if artificial intelligence is so spectacular, shouldn’t we be a bit more ambitious in our expectations about what it could do for us? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” just a little ''underwhelming''?
 


A rudimentary version of this exists in the LibraryThing recommendation engine, but the scope, with artificial intelligence, is huge.


The age of the machines — our complacent faith in them — has made matters worse. Machines will conspire to ignore “human magic”, when offered, especially when it says “this is not right”.
That kind of magic was woven by [[Bethany McLean]]. [[Collateralised debt obligation|Michael Burry]]. [[Harry Markopolos]]. [[Wirecard|Dan McCrum]]. The formalist system systematically ignored them, fired them, tried to put them in prison.