Template:M intro technology rumours of our demise
But technology also creates space and capacity to indulge ourselves. [[Parkinson’s law]] states: work expands to fill the time allotted for its completion. Technology ''also'' frees us up to care about things we never used to care about. The microcomputer made generating, duplicating and distributing documents far, far easier. There’s that boon and bane, again.


So, before concluding that ''this time'' the machines will put us out of work, we must explain ''how''. [[this time is different|Why is this time different]]? What has changed?


===FAANGs ahoy?===
Thirdly — a cool gadget maker that pivots to banking ''and does it well'' has as much chance of maintaining millennial brand loyalty as does a toy factory that moves into dentistry.


That Occupy Wall Street gang? Apple fanboys, the lot of them. ''At the moment''. But it isn’t the ''way'' [[trad fi]] banks go about banking that tarnishes your brand. It’s ''banking''. ''No-one likes money-lenders''. It is a dull, painful, risky business. Part of the game is doing shitty things to customers when they lose your money. Repossessing the Tesla. Foreclosing on the condo. That ''isn’t'' part of the game of selling MP3 players.


''The business of banking will trash the brand.''
But state-of-the-art machines, per Arthur C. Clarke, aren’t magic: it just ''seems'' like it, sometimes. They are a two-dimensional, simplified model of human intelligence. A proxy: a modernist [[simulacrum]]. They are a shorthand way of mimicking a limited sort of sentience, potentially useful in known environments and constrained circumstances.


Yet we have begun to model ourselves upon machines. The most dystopian part of John Cryan’s opening quote was the first part — “''today, we have people doing work like robots''” — because it describes a stupid present reality. We have persuaded ourselves that “being machine-like” should be our loftiest aim. But if we are in a footrace where what matters is simply strength, speed, consistency, modularity, [[fungibility]] and ''mundanity'' — humans will surely ''lose''.


But we ''aren’t'' in that foot race. Strength, speed, consistency, fungibility and patience are the loftiest aims ''only where there is no suitable machine''.
We are used to the “[[Turing machine]]” as a [[metaphor]] for “mind” but it is a bad metaphor. It is unambitious. It does not do justice to the human mind.


Perhaps we could invert it. We might instead use “body” — in that dishonourably dualist, [[Descartes|Cartesian]] sense — as a [[metaphor]] for a Turing machine, and “mind” for natural human intelligence. “Mind” and “body” in this sense are a practical guiding principle for the [[division of labour]] between human and machine: what goes to “body”, give to a machine: motor skills; temperature regulation; the pulmonary system; digestion; aspiration — the conscious mind has no business there. There is little it can add. It only gets in the way. There is compelling evidence that when the conscious mind takes over motor skills, things go to hell.<ref>This is the premise of {{author|Daniel Kahneman}}’s {{br|Thinking, Fast and Slow}}, and for that matter, [[Matthew Syed]]’s {{br|Bounce}}.</ref>


But leave interpersonal relationships, communication, perception, [[construction]], decision-making in times of uncertainty, imagination and creation to the mind. ''Leave the machines out of this''. They will only bugger it up. Let them ''report'', by all means. Let them assist: triage the “conscious act” to hive off the mechanical tasks on which it depends.<ref>{{Author|Julian Jaynes}} has a magnificent passage in his book {{br|The Origins of Consciousness in the Breakdown of the Bicameral Mind}} where he steps through all the aspects of consciousness that we assume are conscious, but which are not.


“Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple that is to say; how difficult to appreciate!”</ref> Let the machines loose on those mechanical tasks. Let them provide, on request, the information the conscious mind needs to build its models and make its plans, but ''do not let them intermediate that plan''.
We take one look at the output of an AI art generator and conclude the highest human intellectual achievements are under siege. However good human artists may be, they cannot compete with the massively parallel power of LLMs, which can generate billions of images, some of which, by accident, will be transcendentally great art.


Not only does reducing art to its “[[Bayesian prior]]s” like this stunningly [[Symbol processing|miss the point]] about art, but it suggests those who would deploy artificial intelligence have their priorities dead wrong. There is no shortage of sublime human expression: quite the opposite. The internet is awash with “content”: there is far more than our collected ears and eyes can take in.


And, here: have some more.


''We don’t need more content''. What we need is ''dross management'' and ''needle-from-haystack'' extraction. Machines ought to be really good at this.

Remember the [[division of labour]]: machines are good at dreary, fiddly, repetitive stuff. There are plenty of easy, dreary, mechanical applications to which machines might profitably be put, but to which they have not, and with which we are still burdened: folding washing, clearing up the kitchen and changing nappies. For these mundane but potentially life-changing tasks there is, apparently, no technological resolution in sight.

Okay, some require motor control and interaction with the irreducibly messy [[off-world|real world]], so there are practical barriers to progression. Robot lawnmowers and vacuum cleaners suggest a route ahead there. (They are cool!)
 
But other facilities would not: remembering where you put the car keys, weeding out fake news, managing browser cookies, or this: ''curating'' the great corpus of human creation, rather than ''ripping it off''.


===== Digression: Nietzsche, Blake and the Camden Cat =====
The [[Bayesian priors]] are pretty damning. ...When Shakespeare wrote, almost all of Europeans were busy farming, and very few people attended university; few people were even literate — probably as low as about ten million people. By contrast, there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564?
:—[[Sam Bankman-Fried|Sam Bankman Fried]]’s “sophomore college blog”}}
[[Sam Bankman-Fried]] had a point here, though not the one he thought.

[[Friedrich Nietzsche]] died in obscurity, as did William Blake and Emily Dickinson. They were lucky that the improbability engine worked its magic for them, even if not in their lifetimes.


But how many undiscovered Nietzsches, Blakes and Dickinsons are there, now sedimented into unreachably deep strata of the human canon?


How many ''living'' artists are currently ploughing an under-appreciated furrow, stampeding towards an obscurity a [[large language model]] might save them from, cursing their own immaculate “[[Bayesian priors]]”?


(I know of at least one: the [[Camden Cat]], who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. It remains bafflingly unacknowledged. Here it is, on {{Plainlink|1=https://soundcloud.com/thecamdencats/you-carry-on?si=24ececd75c0540faafd470d822971ab7|2=SoundCloud}}.)


=== If AI is a cheapest-to-deliver strategy you’re doing it wrong ===
{{quote|
{{D|Cheapest-to-deliver|/ˈʧiːpɪst tuː dɪˈlɪvə/|adj}}
Of the range of possible ways of discharging your [[contract|contractual obligation]] to the letter, the one that will cost you the least and irritate your customer the most should you choose it.}}


Imagine having personal [[large language model]]s at our disposal that could pattern-match against our individual reading and listening histories, our engineered prompts, our instructions and the recommendations of like-minded readers.


Our LLM would search through the billions of existing books, plays, films, recordings and artworks, known and unknown, that comprise the human ''oeuvre'' but, instead of making its own mashups, it would retrieve existing human-authored works that its patterns suggested would specifically appeal to us.
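To make the idea concrete, here is a toy sketch. Nothing in it is a real product or API: each work, and each item in the user’s history, is assumed to be represented by an embedding vector (how those vectors are produced is waved away), and the machine ranks the existing canon against a taste profile — it ''retrieves'' rather than ''generates''.

```python
# Hypothetical sketch of a "personal canon" retriever. The vectors stand in
# for embeddings of works; the catalogue stands in for the human oeuvre.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def taste_profile(history_vectors):
    """Average the embeddings of everything the user has read or heard."""
    n = len(history_vectors)
    return [sum(v[i] for v in history_vectors) / n
            for i in range(len(history_vectors[0]))]

def recommend(history_vectors, catalogue, k=3):
    """Return the k existing works in `catalogue` (a {title: vector} map)
    closest to the user's taste profile. Retrieval, not generation."""
    profile = taste_profile(history_vectors)
    ranked = sorted(catalogue,
                    key=lambda title: cosine(profile, catalogue[title]),
                    reverse=True)
    return ranked[:k]
```

A user whose history clusters in one corner of the space gets back the catalogued works nearest that corner — including works they have never heard of, which is the whole point.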


This is not just the Spotify recommendation algorithm, as occasionally delightful as that is. Any commercial algorithm has its own primary goal: to maximise revenue. A certain amount of “customer delight” might be a by-product, but only as far as it intersects with that primary commercial goal. As long as customers are ''just delighted enough'' to keep listening, the algorithm doesn’t care ''how'' delighted they are.<ref>As with the JC’s school exam grades: anything more than 51% is wasted effort. Try as he might, the JC was never able to persuade his dear old ''Mutti'' about this.</ref>


Commercial algorithms need only follow a ''[[cheapest to deliver]]'' strategy: they “[[satisfice]]”. Being targeted at optimising revenue, they converge upon what is likely to be popular, because that is easier to find. Why scan ocean deeps for human content when you can skim the top and keep the punters happy enough?


This, by the way, has been the tragic commons of the collaborative internet: despite [[Chris Anderson]]’s forecast in 2006 that universal interconnectedness would change economics for the better<ref>[[The Long Tail: How Endless Choice is Creating Unlimited Demand]] ''(2006)''</ref> — that, suddenly, it would be costless to service the long tail of global demand, prompting some kind of explosion in cultural diversity — the exact opposite has happened.<ref>It is called “{{Plainlink|https://www.theclassroom.com/cultural-convergence-examples-16778.html|cultural convergence}}”.</ref> The overriding imperatives of [[scale]] have obliterated the subtle appeals of diversity, while sudden, unprecedented global interconnectedness has had the [[system effect]] of ''homogenising demand''.


This is the counter-intuitive effect of a “[[cheapest-to-deliver]]” strategy: while it has become ever easier to target the “[[fat head]]”, ''the [[long tail]] has grown thinner''.<ref>{{author|Anita Elberse}}’s [[Blockbusters: Why Big Hits and Big Risks are the Future of the Entertainment Business|''Blockbusters'']] is excellent on this point.</ref> As the tail contracts, the [[commercial imperative]] to target the lowest common denominators inflates. ''This is a highly undesirable feedback loop''. It will homogenise ''us''. ''We'' will become less diverse. ''We'' will become more [[Antifragile|fragile]]. We will resemble machines. ''We are not good at being machines''.


Shouldn’t we be more ambitious about what [[artificial intelligence]] could do for ''us''? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” a bit ''underwhelming''? Isn’t using it to “dumb us down” a bit, well, ''dumb''?


===Digression: Darwin’s profligate idea===
The theory of [[evolution by natural selection]] really ''is'' magic: it gives a comprehensive account of the origin of life that [[Reductionism|reduces]] to a mindless, repetitive process that we can state in a short sentence:


{{Quote|In a population of organisms with individual traits whose offspring inherit those traits only with random variations, those having traits most suited to the prevailing environment will best survive and reproduce over time.<ref>This is the “ugliest man” by which, [[Nietzsche]] claims, we killed God.</ref>}}


The economy of [[Design|''design'']] in this process is staggering. The economy of ''effort in execution'' is not. Evolution is ''tremendously wasteful''. Not just in how it ''does'' adapt, but in how often it ''does not''. For every serendipitous mutation, ''there are millions and millions of duds''.
 
The chain of adaptations from amino acids to Lennon & McCartney may have billions of links in it, but that is a model of parsimony compared with the number of adaptations that ''didn’t'' go anywhere — that arced off into one of design space’s gazillion dead ends and just fizzled out.
 
[[Evolution]] isn’t directed — that is its very super-power — so it fumbles blindly around, fizzing and sparking, and only a vanishingly small proportion of mutations ever do anything ''useful''. Evolution is a random, [[stochastic]] process. It depends on aeons of time and burns unimaginable resources. Evolution solves problems by ''brute force''. 
 
Even though it came about through undirected evolution, “natural” mammalian intelligence, whose apogee is ''homo sapiens'', ''is'' directed. In a way that DNA cannot, humans can hypothesise, remember, learn, and rule out plainly stupid ideas without having to go through the motions of trying them.  


All mammals can do this to a degree; even retrievers.<ref>Actually, come to think of it, Lucille, bless her beautiful soul, didn’t seem to do that very often. But still.</ref> Humans happen to be particularly good at it. It took three and a half billion years to get from amino acid to the wheel, but only 6,000 to get from the wheel to the Nvidia RTX 4090 GPU.


Now. [[Large language model]]<nowiki/>s are, like evolution, a “brute force”, ''undirected'' method. They can get better by chomping yet more data, faster, in more parallel instances, with batteries of server farms in air-cooled warehouses full of lightning-fast multi-core graphics processors. But this is already starting to get harder. We are bumping up against computational limits, as Moore’s law conks out, and environmental consequences, as the planet does.


For the longest time, “computing power” has been the cheap, efficient option. That is ceasing to be true. More processing is not a zero-cost option. We will start to see the opportunity cost of devoting all these resources to something that, at the moment, creates diverting sophomore mashups we don’t need.<ref>We would do well to remember Arthur C. Clarke’s law here. The parallel processing power an LLM requires is already massive. It may be that the cost of expanding it in the way envisioned would be unfeasibly huge — in which case the original “business case” for [[technological redundancy]] falls away. See also the [[simulation hypothesis]]: it may be that the most efficient way of simulating the universe with sufficient granularity to support the simulation hypothesis is ''to actually build and run a universe'', in which case the hypothesis fails.</ref>


We have, lying unused around us, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. ''We are not lacking content''. Surely the best way of using these brilliant new machines is to harness what we already have. The one thing homo sapiens ''doesn’t'' need is ''more unremarkable information''.


====LibraryThing as a model====
So, how about using AI to better exploit our existing natural intelligence, rather than imitating or superseding it? Could we, instead, create [[System effect|system effects]] to ''extend'' the long tail?


It isn’t hard to imagine how this might work. A rudimentary version exists in LibraryThing’s recommendation engine. It isn’t new or wildly clever and, as far as I know, LibraryThing doesn’t use AI: each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you with some degree of confidence, based on combined metadata, whether it thinks you will like any other book. Most powerfully, it will compare all the virtual “libraries” on the site and return the most similar user libraries to yours. The attraction of this is not the books you have in common, but the ones you ''don’t''.
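A minimal sketch of that matching logic, assuming each library is simply a set of ISBNs. (LibraryThing’s actual algorithm is not public; plain Jaccard overlap is used here as a stand-in.)

```python
# Hypothetical library-matching sketch: find your "doppelganger" libraries
# by ISBN overlap, then surface the books they hold that you don't.

def similarity(lib_a, lib_b):
    """Jaccard overlap between two sets of ISBNs: |A ∩ B| / |A ∪ B|."""
    if not lib_a or not lib_b:
        return 0.0
    return len(lib_a & lib_b) / len(lib_a | lib_b)

def doppelganger_shelf(my_library, all_libraries, k=1):
    """Rank every other library by overlap with mine, take the k nearest,
    and return the books in them that I *don't* own — the interesting part."""
    nearest = sorted(
        (lib for lib in all_libraries.values() if lib is not my_library),
        key=lambda lib: similarity(my_library, lib),
        reverse=True,
    )[:k]
    discoveries = set()
    for lib in nearest:
        discoveries |= lib - my_library
    return discoveries
```

The books two readers share establish that their tastes rhyme; the set difference is where the discovery happens.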


Browsing doppelganger libraries is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.


Note how this role — seeking out delightful new human creativity — satisfies our criteria for the [[division of labour]] in that it is quite beyond the capability of any group of humans to do it, and it would not devalue, much less usurp, genuine human intellectual capacity. Rather, it would ''empower'' it.


Note also the [[system effect]] it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.


It would also have the system effect of distributing wealth and information — that is, [[strength]], not [[Power structure|power]] — ''down'' the curve of human diversity, rather than concentrating it at the top.


It isn’t hard to imagine how this might work. A rudimentary version exists in {{Plainlink|https://www.librarything.com/|LibraryThing}}’s recommendation engine. It isn’t new or wildly clever — as far as I know, {{Plainlink|https://www.librarything.com/|LibraryThing}} doesn’t use AI. Each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you, with some degree of confidence based on the combined metadata, whether it thinks you will like any other book. It used to have a feature where it could ''un''recommend books it thought you would hate, but for some reason it decommissioned that! Most powerfully, it will compare all the virtual “libraries” on the site and return the user libraries most similar to yours. The attraction of this is not the books you have in common, but the ones you ''don’t''.
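The mechanics are simple enough to sketch. Here is a toy version in Python: the users, the ISBNs and the choice of plain Jaccard overlap are all invented for illustration, and LibraryThing’s actual algorithm is doubtless cleverer.

```python
# Toy "libraries": each user's collection, keyed by ISBN.
# Users and ISBNs are invented for illustration.
libraries = {
    "alice": {"978-0141182636", "978-0679722766", "978-0141439518"},
    "bob":   {"978-0141182636", "978-0679722766", "978-0060850524"},
    "carol": {"978-0141439518", "978-1853260629"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two libraries: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def doppelgangers(user: str):
    """Rank every other user's library by similarity to `user`'s."""
    mine = libraries[user]
    others = ((v, jaccard(mine, libraries[v])) for v in libraries if v != user)
    return sorted(others, key=lambda t: t[1], reverse=True)

def recommendations(user: str):
    """Books the nearest doppelganger owns that `user` doesn't:
    the attraction is not the books you have in common, but the ones you don't."""
    nearest, _score = doppelgangers(user)[0]
    return libraries[nearest] - libraries[user]

print(doppelgangers("alice"))
print(recommendations("alice"))
```

The “doppelganger” effect is just this comparison run across a few million real libraries instead of three pretend ones.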


Browsing doppelganger libraries can be uncanny. It is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.  


LibraryThing generates this kind of delight with a relatively crude traditional algorithm. Imagine how it might perform with AI, trawling all of human creation with more granular information about user habits.  


Note how this role — seeking out delightful new human creativity — satisfies our criteria for the [[division of labour]] in that it is quite beyond the capability of any group of humans to do it, and it would not devalue, much less usurp, genuine human intellectual capacity. Rather, it would ''empower'' it.


Note also the [[system effect]] it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.  


It would also distribute wealth and information — that is, [[strength]], not [[Power structure|power]] — ''down'' the curve of human diversity, rather than concentrating it at the top.


We have lying all around us, unused, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. ''We are not lacking ingenuity''. Surely the best way of using these brilliant new machines is to harness what is literally lying around.


==== Division of labour, redux ====
So, about that “[[division of labour]]”. When it comes to mechanical tasks of the “body”, [[Turing machine]]s scale well, and humans scale very badly. “Scaling”, when we are talking about computational tasks, means doing them over and over again, in series or parallel, quickly and accurately. Each operation can be identical; their combined effect astronomical. Nvidia’s graphics chips are so good for AI because they can do 25,000 ''trillion'' basic operations per second. ''Of course'' machines are good at this: this is why we build them. They are digital: they preserve information indefinitely, however many processors we use, with almost no loss of fidelity.


You could try to use networked humans to replicate a [[Turing machine]], but the results would be slow, costly and disappointing, and the humans would not enjoy it.<ref>{{author|Cixin Liu}} runs exactly this thought experiment in his ''Three Body Problem'' trilogy.</ref> Humans are slow and analogue: with each touch, the [[signal-to-noise ratio]] degrades. This is the premise of the parlour game “Chinese Whispers”.
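The degradation is easy to simulate. In this sketch, whose error rate, alphabet and message are all invented for illustration, each human “relay” mishears each character with small probability:

```python
import random

random.seed(1)

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def relay(message: str, error_rate: float = 0.05) -> str:
    """One analogue 'touch': each character has a small chance of being misheard."""
    return "".join(
        random.choice(ALPHABET) if random.random() < error_rate else c
        for c in message
    )

def whisper_chain(message: str, hops: int) -> str:
    """Pass the message through `hops` successive human relays."""
    for _ in range(hops):
        message = relay(message)
    return message

original = "the signal to noise ratio degrades with each touch"
for hops in (0, 5, 20):
    garbled = whisper_chain(original, hops)
    fidelity = sum(a == b for a, b in zip(original, garbled)) / len(original)
    print(f"{hops:2d} hops: fidelity {fidelity:.0%}: {garbled}")
```

Fidelity decays geometrically with the number of hops, which is exactly why nobody builds computers out of gossip.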


A game of Chinese Whispers among a group of [[Turing machine]]s would make for a grim evening.  


In any case, you could not assign a human, or any number of humans, the task of “cataloguing the entire canon of human creative output”: this is quite beyond their theoretical, never mind practical, ability. With a machine, at least in concept, you could.<ref>Though the infinite fidelity of machines is overstated, as I discovered when trying to find the etymology of the word “[[satisfice]]”. Its modern usage was coined by Herbert Simon in a paper in 1956, but the Google ngram suggests its usage began to tick up in the late 1940s. On further examination, the records turn out to be optical character recognition errors. So there is a large part of the human oeuvre — the pre-digital part that has had to be digitised — that does suffer from analogue copy errors.</ref>


But when it comes to “mind stuff”, humans scale well. “Scaling”, for imaginative tasks, is different. Here we don’t want identical, digital, high-fidelity ''duplication'': ten thousand copies of ''Finnegans Wake'' contribute no more to the human canon than does one.<ref>Or possibly, even ''none'': Wikipedia tells us that, “due to its linguistic experiments, stream of consciousness writing style, literary allusions, free dream associations, and abandonment of narrative conventions, ''Finnegans Wake'' has been agreed to be a work largely unread by the general public.”</ref> Multiple humans doing “mind stuff” contribute precisely the idiosyncrasy, variability and difference in perspective that generate the wisdom, or brutality, of crowds: a complex community of readers can independently parse, analyse, explain, narratise, contextualise, extend, criticise, extrapolate, filter, amend, correct and improvise the information ''and each others’ reactions to it''.


The wisdom of the crowd shapes itself: consensus has a directed intelligence all of its own. This community of expertise is what [[Sam Bankman-Fried]] misses in his dismissal of Shakespeare’s “[[Bayesian prior|Bayesian priors]]”. However handsome William Shakespeare’s own contribution to the Shakespeare canon was — the text of the actual folio<ref>What counts as “canon” in Shakespeare’s own written work is a matter of debate. There are different, inconsistent editions. Without Shakespeare to tell us, we must decide for ourselves. Even were Shakespeare able to tell us, we could ''still'' decide for ourselves.</ref> — it is finite, small and historic. It was complete by 1616 and hasn’t been changed since. The rest of the body of work we think of as “Shakespeare” comprises interpretations, editions, performances, literary criticisms, essays, adaptations and its (and their) infusion into the vernacular, and it ''vastly outweighs the master’s actual folio''. What is more, it continues to grow.


This kind of “communal mind scaling” creates its own intellectual energy and momentum from a small, wondrous seed planted and nurtured four hundred years ago.<ref>On this view Shakespeare is rather like a “prime mover” who created a universe but has left it to develop according to its own devices.</ref> No matter how fast pattern-matching machines run in parallel, however much brute, replicating horsepower they throw at the task, it is hard to imagine artificial intelligence in the shape of a [[large language model]], without human steering, doing any of this.


(The “directed intelligence of human consensus” is not [[utopia|magically benign]], of course, as [[Sam Bankman-Fried]] might be able to tell us, having been on both ends of it.)<ref>See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.</ref>
===Bayesian priors and the canon of ChatGPT===
A last point on literary theory: the “[[Bayesian priors]]” argument which fails for Shakespeare fails all the more for a [[large language model]].


Just as a great deal of the intellectual energy involved in rendering a text into the three-dimensional [[Metaphor|metaphorical]] universe we think of as ''King Lear'' comes from beyond the author of that text, so it does with the output of an LLM. Its model, after all, is entirely drawn from the human canon. A model trained only on randomly assembled ASCII characters would return only randomly assembled ASCII characters.
LLMs, moreover, must operate in complex, “[[wicked]]” environments, where conditions are unbounded, ambiguous, inchoate and impermanent. ''This is where humans excel''. The whole environment, and everything in it, continually changes. The components interact with each other in [[Non-linear interaction|non-linear]] ways. The landscape dances. Imagination is an advantage here: brute-force mathematical computation won’t do.{{Quote|Think how hard physics would be if particles could think.
:— Murray Gell-Mann}}


====Model collapse====
But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop, whereby LLMs bootstrap themselves into some kind of hyper-intelligent super-language, beyond mortal cognitive capacity, whence the machines might dominate human discourse? Are we inadvertently seeding ''Skynet''? Just look what happened with [[Alpha Go]].
{{Quote|{{AlphaGo v LLM}}}}


It didn’t require ''any'' human training data: programmers just fed it the rules, switched it on and, with indecent brevity, it worked everything out and walloped the game’s reigning grandmaster. It learned by playing millions of games against itself. What happens when LLMs do that?


{{Hhgg rice pudding and income tax}}


But brute-forcing outcomes in a fully bounded, [[Zero-sum game|zero-sum]] environment with simple, fixed rules — in the jargon of [[Complexity|complexity theory]], a “tame” environment — is a completely different proposition. This is “body stuff”. The environment is fully fixed, understood and determined. We should not be surprised that machines are good at this, nor that humans are bad at it. ''This is exactly where we would expect a Turing machine to excel''.
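“Tame” is the operative word: a bounded, fixed-rule, zero-sum game can simply be solved by exhaustion. A minimal illustration: brute-force minimax over the complete tic-tac-toe game tree, a task trivial for a machine and hopeless for a human to enumerate by hand.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board: str):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board: str, player: str) -> int:
    """Exhaustively evaluate a position: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if "." not in board:
        return 0
    nxt = "O" if player == "X" else "X"
    moves = [
        value(board[:i] + player + board[i + 1:], nxt)
        for i, c in enumerate(board) if c == "."
    ]
    return max(moves) if player == "X" else min(moves)

# Perfect play from the empty board: a draw.
print(value("." * 9, "X"))
```

No insight, no imagination: the machine just visits every reachable position and reads the answer off the tree.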


An LLM, by contrast, algorithmically composites a synthetic output from a massive corpus of pre-existing human text. The text it pattern-matches against needs to be well-formed human language: that is how ChatGPT works its magic. Significantly degrading the corpus it pattern-matches against will progressively degrade its output. This is called “model collapse”: it is an observed effect<ref>https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI</ref> and is believed to be an insoluble problem. LLMs will only work for humans if they are fed human-generated content.
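You can watch a cartoon of model collapse with a “model” no cleverer than a character-frequency table. Resample its own output for a few hundred generations (sample sizes and corpus are purely illustrative) and the distribution drifts and narrows: any character that drops out of one generation’s sample can never return.

```python
import random
from collections import Counter

random.seed(42)

def train(corpus: str) -> Counter:
    """A deliberately crude 'language model': bare character frequencies."""
    return Counter(corpus)

def sample(model: Counter, n: int) -> str:
    """Generate n characters from the model's current distribution."""
    chars = list(model.keys())
    weights = list(model.values())
    return "".join(random.choices(chars, weights=weights, k=n))

human_text = "the quick brown fox jumps over the lazy dog " * 5
model = train(human_text)
print("generation 0 vocabulary:", len(model))

# Feed the model its own output, generation after generation.
for generation in range(1, 301):
    synthetic = sample(model, 200)
    model = train(synthetic)

print("generation 300 vocabulary:", len(model))
```

The support of the distribution can only ever shrink: a toy version of the way synthetic training data bleeds the variety out of a real model.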
====The ChatGPT canon====
And there is a final contributor to the cultural milieu surrounding any text that we have not yet considered. The main one: the ''reader''. It is the reader, and her “[[cultural baggage]]”, who must make head and tail of the text, and who determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports it. We know this because the overture from ''Tristan und Isolde'' can reduce one listener to tears of joy and leave the next cold. One contrarian can see, in the Camden Cat, a true inheritor of the great blues pioneers; others might see an unremarkable busker.

Construing natural language, much less visuals or sound, is no matter of mere [[Symbol processing|symbol-processing]]. Humans are ''not'' [[Turing machine|Turing machines]]. A text only sparks meaning, and becomes art, in the reader’s head. This is just as true of magic: the conjurer’s skill is to misdirect her audience into ''imagining'' something that isn’t there. The audience supplies the magic.

The same goes for an LLM: it is simply ''digital'' magic. We imbue what an LLM generates with meaning. ''The meatware is doing the heavy lifting.''

Feed an LLM its own output and it rapidly degrades into meaningless mush. LLMs are not intentional. They are not directed: they depend on their environment to shape their fitness, and that environment is necessarily human.

===Coda===
{{Quote|{{abgrund}}
:—Nietzsche}}
Man, this got out of control.

So is this just Desperate-Dan, last-stand pattern-matching from an obsolete model, staring forlornly into the abyss? Being told to accept his obsolescence is an occupational hazard for the JC, so no change there.

But if [[This time it’s different|this really is the time that is different]], something about it feels underwhelming. If ''this'' is the hill we die on, we’ve let ourselves down.

''Don’t be suckered by parlour tricks.'' Don’t redraw our success criteria to suit the machines. To reconfigure how we judge each other to make it easier for technology to do it at scale is not to be obsolete. It is to surrender.

Humans can’t help doing their own sort of pattern-matching. There are common literary tropes in which our creations overwhelm us: ''Frankenstein'', ''[[2001: A Space Odyssey]]'', ''Blade Runner'', ''Terminator'', ''Jurassic Park'', ''The Matrix''. They are cautionary tales, deep in the cultural weft, and we are inclined to see them everywhere. The actual, quotidian progress of technology has a habit of confounding science fiction by being a bit more boring.

LLMs will certainly change things, but we’re not fit for battery juice just yet.

Buck up, friends: there’s work to do.


===A real challenger bank===
Robotic people do not generally have a rogue streak. They are not loose cannons. They do not call “bullshit”. They do not question their orders. They do not answer back.


And so we see: financial services organisations do not value people who do. They value the ''fearful''. They elevate the rule-followers. They distrust “human magic”, which they characterise as human ''weakness''. They find it in the wreckage of [[Enron]], or [[Financial disasters roll of honour|Kerviel]], or [[Madoff]], or [[Archegos]]. Bad apples. [[Operator error]]. They emphasise this human stain over the failure of management that inevitably enabled it: people who did not play by the rules, over systems of control that allowed, or even obliged, them to.


They run [[post mortem]]s: with the rear-facing forensic weaponry of [[internal audit]] and [[external counsel]], they reconstruct the [[fog of war]] and build a narrative around it. The solution: ''more systems. More control. More elaborate algorithms. More rigid playbooks.'' The object of the exercise: eliminate the chance of human error. Relocate everything to process.
Yet the accidents keep coming. Our [[financial crashes roll of honour]] refers. They happen with the same frequency, and severity, notwithstanding the additional sedimentary layers of machinery we develop to stop them.


Hypothesis: these disasters are not prevented by high-modernism. They are a symptom of it. They are its ''products''.


===Zero-day vulnerabilities===
They are over-populated, too, with low-quality staff. Citigroup claims to employ 70,000 technology experts worldwide, and, well, [[Citigroup v Brigade Capital Management|Revlon]].


It ''is'' hard to imagine Amazon accidentally wiring half a billion dollars to customers because of crappy software and playbook-following school leavers in a call centre in Bucharest. (Can we imagine Amazon using call centres in Bucharest? Absolutely.) Banks have a first-mover ''dis''advantage here, as most didn’t start thinking of themselves as tech companies until the last twenty years, by which stage their tech infrastructure was intractably shot.


But the banks have had decades to recover, and if they didn’t then, they definitely do now think of themselves as tech companies. Bits of the banking system — high-frequency trading algorithms, blockchain and data analytics used on global strategies — are as sophisticated as anything Apple can design.<ref>That said, the signal processing capability of Apple’s consumer music software is pretty impressive.</ref>


===Yes, bank staff are rubbish===
Now, to lionise the human spirit ''in the abstract'', as we do, is not to say we should sanctify bank employees as a class ''in the particular''. The JC has spent a quarter century among them. They — ''we'' — may be unusually well-paid, for all the difference we make to the median life on planet Earth, but we are not unusually gifted or intelligent. Come on, guys: [[backtesting]]. [[Debt value adjustments]]. [[David Viniar|“Six sigma” events several days in a row]]. [[Madoff]].
 
It is an ongoing marvel how commercial organisations can be so reliably profitable given the median calibre of those they employ to steer themselves. Sure, our levels of formal accreditation may be unprecedented, but levels of “[[metis]]” — which can’t be had from the academy — have stayed where they are. Market and organisational [[system]]s tend to be configured to ''[[mediocrity drift|ensure]]'' reversion to mediocrity over time.<ref>See {{br|The Peter Principle}} and {{br|Parkinson’s Law}} for classic studies.</ref>
 
There is some irony here. As western economies shifted from the production of ''things'' to the delivery of ''services'' over the last half-century, the proportion of their workforce in “white collar work” has exploded. There are more people in the UK today than there were in 1970, and almost none of them now work down the mines, on the production line or even in the menswear department at Grace Brothers, as we are given to believe they did a generation ago.  


All kinds of occupations that scarcely existed when our parents were young have emerged, evolved and self-declared themselves professions. The proportion of jobs requiring, ''de facto'', university degrees (the modern lite professional qualification) has grown. The number of universities has expanded as polytechnics rebadged themselves. This, too, feels like a [[system effect]] of the [[modernist]] orthodoxy: if we can only assess people by reference to formal criteria, then industries devoted to contriving and dishing out those criteria are sure to flourish. The self-interests of different constituencies in the system contrive to entrench each other: this is how [[feedback loop]]s work.


As technologies expand and encroach, taming unbroken landscapes and disrupting already-cultivated ones, let us take a moment to salute the human ingenuity it takes to defy the march of the machines. For every technology that “solves” a formal pursuit, there arises a second order of required white-collar oversight. [[ESG]] specialists have evolved to opine on our dashboards and marshal criteria for our environmental and social governance strategies — strategies we were, until recently, happy enough to leave to the guidance of Adam Smith’s invisible hand. [[Blockchain as a service]] has emerged to provide a solution for those who believe in the power of disintermediated networks but want someone else to do the heavy lifting for them. AI compliance officers and prompt engineers steward our relationship with machines supposedly so clever they should not need hand-holding.


Other feedback loops emerge to counteract them. The high modernist programme can measure by any observable criteria, not just formal qualifications. Its favourite is [[cost]].<ref>The great conundrum posed by [[Taiichi Ohno|Ohno-sensei]]’s [[Toyota Production System]] is why prioritise [[cost]] over [[waste]]? Because cost is numerical and can easily be measured by the dullest book-keeper. Knowing what is [[waste]]ful in a process requires analysis and understanding of the system, and so cannot be easily measured. So we eliminate ''cost'', as a lazy proxy. It makes you wonder why executives are paid so well.</ref> The basic proposition is, “Okay, we need a human resources operation. I can’t gainsay that. But it need not be housed in London, nor staffed by Oxbridge grads, if Romanian school leavers, supervised by an alumnus from the Plovdiv Technical Institute with a diploma in personnel management, will do.”