Reports of our death are an exaggeration
Latest revision as of 13:38, 18 November 2023
JC pontificates about technology
An occasional series. The Jolly Contrarian holds forth™
The report of my death was an exaggeration.
- —Mark Twain
In 2017, then-CEO of Deutsche Bank John Cryan thought his employees’ days were numbered. Machines would do for them. Not just back office grunts: everyone. Even, presumably, Cryan himself.[1]
“Today,” he warned, “we have people doing work like robots. Tomorrow, we will have robots behaving like people”.
You can see where he was coming from: what with high-frequency trading algorithms, AI medical diagnosis, Alpha Go, self-driving cars: the machines were coming for us. And this was before GPT-3. It has only got worse since: The machines have taken over our routine tasks; soon they will take the hard stuff, too.
Now.
As long as there has been the lever, wheel or plough, humans have used technology to do tedious, repetitive tasks and to lend power and speed to our frail earthly shells. Humans have done this because it is smart: machines follow instructions better than we do — that’s what it means to be an “automaton”. At things they are good at, machines are quicker, stronger, nimbler, cheaper and less error-prone than humans.
But that’s an important condition: as George Gilder put it:
“The claim of superhuman performance seems rather overwrought to me. Outperforming unaided human beings is what machines are supposed to do. That’s why we build them.”[2]
The division of labour between human and machine
Nowadays, we must distinguish between traditional, obedient, rule-following machines, and randomly-make-it-up large language models — unthinking, probabilistic, pattern-matching machines. LLMs are the novelty act of the mid-2020s, at the top of their hype cycle right now, just as blockchain was a year ago, and like DLT they will struggle to find an enduring use case.
Traditional machines make flawless decisions, as long as both question and answer are pre-configured. We meatsacks are better in situations of ambiguity, conflict and novelty. We’re not flawless — that’s part of our charm — but wherever we find a conundrum — whether a mess, problem or puzzle — we can have a bash.
Humans don’t crash. Humans don’t hang until someone closes a buried dialogue box: we just carry on, as best we can. That’s the boon and bane of the meatware: you can’t always tell when a human goes off the rails.
Hence, how we’ve always used technology: to put dull stuff on the rails and keep it there: we figure out which field to plough when; the oxen plough it.
New technology creates labour
While new technology may prompt the odd short-term dislocation — the industrial revolution put a bunch of basket-weavers out of work — the long-term prognosis has been generally benign: technology has, for millennia, freed us to do things we previously had no time to try. Industrialisation has led to better work for more workers.
Technology opens up the design space. It reveals adjacent possibilities, expands the intellectual ecosystem, domesticates what we know and opens up frontiers to what we don’t.
Frontiers are places where we need smart people to figure out new tools and new ways of operating. Machines can’t do it.
Parkinson’s law
But technology also creates space and capacity to indulge ourselves. Parkinson’s law: work expands to fill the time available. Technology also frees us up to care about things we never had time to care about. The microcomputer made generating, duplicating and distributing content far easier. There’s that boon and bane, again.
Is this time different?
So, before concluding that this time the machines will put us out of work we must explain how. Why is this time different? What has changed?
FAANGs ahoy?
In 2018, the then-head of the UBS Evidence Lab — a “sell-side team of experts that work across 55+ specialized areas creating insight-ready datasets” — remarked that the incipient competition for banks was not “challenger” banks, but Apple, Amazon, or Google.
The argument was this: banking comes mostly down to three components: technology, reputation, and regulation.
Two of these — technology and reputation — are substantial problems. The other — regulation — is formalistic, especially if you have a decent technology stack.
So, how do the banks stack up against the FAANGS?
| | Technology | Reputation | Regulation |
|---|---|---|---|
| Banks | Generally legacy: dated, patched together, under-powered, under-funded, conflicting, liable to fall over, susceptible to hacking. | Everyone hates the financial services industry. | All over it. Capitalised, have access to reserve banks, connected, exchange memberships, etc. |
| FAANGS | Awesome: state of the art, natively functional, at the cutting edge, well-funded, well-understood, robust, resilient. OK, could be hacked. | Who doesn’t love Amazon? Who wouldn’t love to have an account at the iBank? Imagine if banking worked like Google Maps! | OK, there is a bit of investment required here — and regulatory capital is a thing — but nothing is insurmountable with the Amazon flywheel, no? |
| Winner | C’mon: are you kidding me? FAANGS all the way! | FAANGS. Are banks even on the paddock? | Banks have the edge right now. But look out, white-shoe types: the techbros are coming for you. |
That the FAANGS will wipe the floor with any bank when it comes to technology is taken as res ipsa loquitur — it needs little supporting evidence, given how lousy bank tech is — and, sure, the FAANGs have better standing with the public. Who doesn’t love Apple? Who does love Wells Fargo?[3]
Therefore, the argument goes, the only place where banking presently has an edge is in regulatory licences and approvals, capital, and regulatory compliance. It’s wildly complex, fiendishly detailed, the rules differ between jurisdictions, and the perimeter between one jurisdiction and the next is not always obvious. To paraphrase Douglas Adams: “You might think GDPR is complicated, but that’s just peanuts compared to MiFID.”
But, but, but — any number of artificially intelligent startups can manage that regulatory risk, right?[4]
But really. Even giving the Evidence Lab the benefit of the doubt, a few uncomfortable facts intrude:
So where are they?
Firstly — if banking is such a sitting duck for predator FAANGS, where the hell are they? It is 2023, for crying out loud. Wells Fargo is still with us. None of Apple, Amazon or Google has so much as cast a wanton glance in its direction, let alone the Vampire Squid’s. Something is keeping the techbros away.
Techbros aren’t natural at banking
And it’s not just fear of regulation, capital and compliance: if it were, you would expect tech firms to be awesome at unregulated financial services.
But — secondly — they’re not.
We’ve been treated to a ten-year, live-fire experiment in how good tech firms are at unregulated financial services — crypto — from which the banks — “trad fi”, if you please — and, notably, the FAANGS have mainly stayed away. It hasn’t gone well.
Credulous cryptobros have found, and promptly fallen down, pretty much every open manhole already known to the dullest money manager — and discovered some whole new ones of their own to fall down too. Helpfully, Molly White has been keeping a running score. Crypto, despite its awesome tech and fabulous branding, has been a disaster.
Tech brand-love-ins won’t survive first contact with banking
Thirdly — a cool gadget maker that pivots to banking and does it well has as much chance of maintaining millennial brand loyalty as does a toy factory that moves into dentistry.
That Occupy Wall Street gang? Apple fanboys, the lot of them. At the moment. But it isn’t the way trad fi banks go about banking that tarnishes your brand. It’s banking itself. No-one likes money-lenders. It is a dull, painful, risky business. Part of the game is doing shitty things to customers when they lose your money. Repossessing the Tesla. Foreclosing on the condo. That isn’t part of the game of selling MP3 players.
The business of banking will trash the brand.
Bank regulation is hard
Fourthly — regulatory compliance is not formalistic, and it is not “the easy bit of banking”. If you could solve it with tech, the banks would have long since done it. They have certainly tried. (Modern banks, by the way, absolutely are technology businesses, in a way that WeWork and X/Twitter are not.) Regulations change, contradict, don’t make sense, overlap, are fiddly, illogical, often counterproductive, and they are subject to interpretation by regulators, who are themselves fiddly, illogical and not known for their constructive approach to rule enforcement.
Getting regulations wrong can have bad consequences — even apparently formalistic things like KYC and client asset protection. Banks already throw armies of bodies and legaltech[5] at this, and still they are routinely breaching minimum standards and being fined millions of dollars.
The gorillas in the room
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
- —Robert Heinlein
But, in any case, park all the above, for it is beside the point. It overlooks the same core banking competence that Mr Cryan did: quality people, and quality leadership.
We have fallen into some kind of modernist swoon, in which we hold ourselves up against machines, as if techne were a platonic ideal to which we should aspire.
So we set our children modernist criteria, too, from the moment they set foot in the classroom. The education system selects for individuals by reference to how well they obey rules, and how reliably and quickly they can identify, analyse and resolve known, pre-categorised “problems”. But these are historical problems with known answers. This is a finite game. This is exactly what machines are best at. Why on earth we should be systematically educating our children to compete with machines at things machines are best at is well beyond this old codger.
If we tell ourselves that “machine-like qualities” are the highest human aspiration, we will naturally find ourselves wanting. We make it easy for the robots to take our jobs. We set ourselves up to fail.
But human qualities are different — humans can improvise, imagine, project in space and time, judge, narratise, analogise, interpret, assess — they can conceptualise Platonic ideals — in a way that algorithms cannot and LLMs can’t do except by pattern matching.
And there is the impish inconstancy, unreliability and unpredictability of the human condition — these make humans different, not inferior, to algorithms. They make us difficult to control and manage by algorithm.
And this is the point. We are not meant to be making it easy for machines to manage and control us. By suppressing our human qualities, we make ourselves more legible, machine readable, triageable, categorisable by algorithm. The economies of scale and process efficiencies this yields accrue to the machines and their owners, not us.
Why surrender before kick-off like that?
On being a machine
“Any sufficiently advanced technology is indistinguishable from magic.”
- —Arthur C. Clarke’s third law
We are in a machine age.
We call it that because machines have proven consistently good at doing things humans are too weak, slow, inconstant or easily bored to do well: mechanical things.
But state-of-the-art machines, per Arthur C. Clarke, aren’t magic: it just seems like it, sometimes. They are a two-dimensional, simplified model of human intelligence. A proxy: a modernist simulacrum. They are a shorthand way of mimicking a limited sort of sentience, potentially useful in known environments and constrained circumstances.
Yet we have begun to model ourselves upon machines. The most dystopian part of John Cryan’s opening quote was the first part — “today, we have people doing work like robots” — because it describes a stupid present reality. We have persuaded ourselves that “being machine-like” should be our loftiest aim. But if we are in a footrace where what matters is simply strength, speed, consistency, modularity, fungibility and mundanity — humans will surely lose.
But we aren’t in that foot race. Strength, speed, consistency, fungibility and patience are the loftiest aims only where there is no suitable machine.
If you have got a suitable machine, use it: let your people do something more useful.
If you haven’t, build one.
Body and mind as metaphors
We are used to the “Turing machine” as a metaphor for “mind” but it is a bad metaphor. It is unambitious. It does not do justice to the human mind.
Perhaps we could invert it. We might instead use “body” — in that dishonourably dualist, Cartesian sense — as a metaphor for a Turing machine, and “mind” for natural human intelligence. “Mind” and “body” in this sense, are a practical guiding principle for the division of labour between human and machine: what goes to “body”, give to a machine: motor skills; temperature regulation; the pulmonary system; digestion; aspiration — the conscious mind has no business there. There is little it can add. It only gets in the way. There is compelling evidence that when the conscious mind takes over motor skills, things go to hell.[6]
But leave interpersonal relationships, communication, perception, construction, decision-making in times of uncertainty, imagination and creation to the mind. Leave the machines out of this. They will only bugger it up. Let them report, by all means. Let them assist: triage the “conscious act” to hive off the mechanical tasks on which it depends.[7] Let the machines loose on those mechanical tasks. Let them provide, on request, the information the conscious mind needs to build its models and make its plans, but do not let them intermediate that plan.
The challenge is not to automate indiscriminately, but judiciously. To optimise, so humans are set free of tasks they are not good at, and thereby not diverted from their valuable work by formal process better suited to a machine. This can’t really be done by rote.
Here, “machine” carries a wider meaning than “computer”. It encompasses any formalised, preconfigured process. A playbook is a machine. A policy battery. An approval process.
AI overreach
Nor should we be misdirected by the “magic” of sufficiently advanced technology, like artificial intelligence, to look too far ahead.
We take one look at the output of an AI art generator and conclude the highest human intellectual achievements are under siege. However good human artists may be, they cannot compete with the massively parallel power of LLMs, which can generate billions of images some of which, by accident, will be transcendentally great art.
Not only does reducing art to its “Bayesian priors” like this stunningly miss the point about art, but it suggests those who would deploy artificial intelligence have their priorities dead wrong. There is no shortage of sublime human expression: quite the opposite. The internet is awash with “content”: there is far more than our collected ears and eyes can take in.
And, here: have some more!
We don’t need more content. What we need is dross management and needle-from-haystack extraction. Machines ought to be really good at this.
Digression: Nietzsche, Blake and the Camden Cat
The Birth of Tragedy sold 625 copies in six years; the three parts of Thus Spoke Zarathustra sold fewer than a hundred copies each. Not until it was too late did his works finally reach a few decisive ears, including Edvard Munch, August Strindberg, and the Danish-Jewish critic Georg Brandes, whose lectures at the University of Copenhagen first introduced Nietzsche’s philosophy to a wider audience.
- —The Sufferings of Nietzsche, Los Angeles Review of Books, 2018
The Bayesian priors are pretty damning. ...When Shakespeare wrote, almost all of Europeans were busy farming, and very few people attended university; few people were even literate — probably as low as about ten million people. By contrast, there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564?
- —Sam Bankman Fried’s “sophomore college blog”
Sam Bankman-Fried had a point here, though not the one he thought.
Friedrich Nietzsche died in obscurity, as did William Blake and Emily Dickinson. They were lucky that the improbability engine worked its magic for them, even if not in their lifetimes.
But how many undiscovered Nietzsches, Blakes and Dickinsons are there, now sedimented into unreachably deep strata of the human canon?
How many living artists are currently ploughing an under-appreciated furrow, stampeding towards an obscurity a large language model might save them from, cursing their own immaculate “Bayesian priors”?
(I know of at least one: the Camden Cat, who for thirty years has plied his trade with a beat-up acoustic guitar on the Northern Line, and once wrote and recorded one of the great rockabilly singles of all time. It remains bafflingly unacknowledged. Here it is, on SoundCloud.)
If AI is a cheapest-to-deliver strategy you’re doing it wrong
Cheapest-to-deliver
/ˈʧiːpɪst tuː dɪˈlɪvə/ (adj.)
Of the range of possible ways of discharging your contractual obligation to the letter, the one that will cost you the least and irritate your customer the most should you choose it.
Imagine having personal large language models at our disposal that could pattern-match against our individual reading and listening histories, our engineered prompts, our instructions and the recommendations of like-minded readers.
Our LLM would search through the billions of existing books, plays, films, recordings and artworks, known and unknown, that comprise the human oeuvre but, instead of making its own mashups, it would retrieve existing works that its patterns said would specifically appeal to us.
This is not just the Spotify recommendation algorithm, as occasionally delightful as that is. Any commercial algorithm has its own primary goal to maximise revenue. A certain amount of “customer delight” might be a by-product, but only as far as it intersects with that primary commercial goal. As long as customers are just delighted enough to keep listening, the algorithm doesn’t care how delighted they are.[8]
Commercial algorithms need only follow a cheapest to deliver strategy: they “satisfice”. Being targeted at optimising revenue, they converge upon what is likely to be popular, because that is easier to find. Why scan ocean deeps for human content when you can skim the top and keep the punters happy enough?
This, by the way, has been the tragic commons of the collaborative internet: despite Chris Anderson’s forecast in 2006, that universal interconnectedness would change economics for the better[9] — that, suddenly, it would be costless to service the long tail of global demand, prompting some kind of explosion in cultural diversity — the exact opposite has happened.[10] The overriding imperatives of scale have obliterated the subtle appeals of diversity, while sudden, unprecedented global interconnectedness has had the system effect of homogenising demand.
This is the counter-intuitive effect of a “cheapest-to-deliver” strategy: while it has become ever easier to target the “fat head”, the long tail has grown thinner.[11] As the tail contracts, the commercial imperative to target the lowest common denominators inflates. This is a highly undesirable feedback loop. It will homogenise us. We will become less diverse. We will become more fragile. We will resemble machines. We are not good at being machines.
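The feedback loop is easy to demonstrate. Here is a toy simulation — my own construction, with entirely illustrative parameters, not anyone’s actual recommender — of a “cheapest-to-deliver” algorithm that mostly serves whatever is already popular, with a little random exploration on the side. The “fat head” duly inflates:

```python
import random

def simulate(n_items=500, n_picks=20_000, explore=0.1, seed=42):
    """Toy 'cheapest-to-deliver' recommender: with probability
    (1 - explore) it serves an item in proportion to its existing
    popularity; otherwise it explores uniformly at random."""
    rng = random.Random(seed)
    plays = [1] * n_items  # every item starts with one play
    for _ in range(n_picks):
        if rng.random() < explore:
            i = rng.randrange(n_items)                       # exploration
        else:
            i = rng.choices(range(n_items), weights=plays)[0]  # rich get richer
        plays[i] += 1
    plays.sort(reverse=True)
    head = sum(plays[: n_items // 100])   # the top 1% of items
    return head / sum(plays)              # their share of all attention

print(f"top 1% of items capture {simulate():.1%} of all plays")
```

A uniform recommender would leave the top 1% of items with roughly 1% of plays; the popularity-weighted one concentrates attention well above that, from nothing but amplified early noise.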
Shouldn’t we be more ambitious about what artificial intelligence could do for us? Isn’t “giving you the bare minimum you’ll take to keep stringing you along” a bit underwhelming? Isn’t using it to “dumb us down” a bit, well, dumb?
Digression: Darwin’s profligate idea
The theory of evolution by natural selection really is magic: it gives a comprehensive account of the origin of life that reduces to a mindless, repetitive process that we can state in a short sentence:
In a population of organisms with individual traits whose offspring inherit those traits only with random variations, those having traits most suited to the prevailing environment will best survive and reproduce over time.[12]
The economy of design in this process is staggering. The economy of effort in execution is not. Evolution is tremendously wasteful. Not just in how it does adapt, but in how often it does not. For every serendipitous mutation, there are millions and millions of duds.
The chain of adaptations from amino acids to Lennon & McCartney may have billions of links in it, but that is a model of parsimony compared with the number of adaptations that didn’t go anywhere — that arced off into one of design space’s gazillion dead ends and just fizzled out.
Evolution isn’t directed — that is its very super-power — so it fumbles blindly around, fizzing and sparking, and only a vanishingly small proportion of mutations ever do anything useful. Evolution is a random, stochastic process. It depends on aeons of time and burns unimaginable resources. Evolution solves problems by brute force.
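The wastefulness is easy to see in silico. A minimal sketch of “random variation plus selection” — purely illustrative: a bit-string “genome” and a single fitness axis stand in for the real, targetless thing — counting the mutations that go nowhere:

```python
import random

def evolve(genome_len=64, seed=1):
    """Toy 'variation + selection' loop: each offspring inherits the
    parent genome with one random bit-flip; a variant survives only if
    it is fitter (here: has more 1-bits). Returns the number of dud
    mutations discarded along the way."""
    rng = random.Random(seed)
    genome = [0] * genome_len
    fitness = sum(genome)
    duds = 0
    while fitness < genome_len:
        child = genome[:]
        child[rng.randrange(genome_len)] ^= 1   # random variation
        if sum(child) > fitness:                # selection
            genome, fitness = child, sum(child)
        else:
            duds += 1                           # another dead end
    return duds

print(f"{evolve()} dud mutations on the way to a 64-bit optimum")
```

Even in this tiny, absurdly forgiving design space, the duds vastly outnumber the serendipitous mutations; scale the genome up and the brute-force bill explodes.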
Even though it came about through undirected evolution, “natural” mammalian intelligence, whose apogee is homo sapiens, is directed. In a way that DNA cannot, humans can hypothesise, remember, learn, and rule out plainly stupid ideas without having to go through the motions of trying them.
All mammals can do this to a degree; even retrievers.[13] Humans happen to be particularly good at it. It took three and a half billion years to get from amino acid to the wheel, but only 6,000 to get from the wheel to the Nvidia RTX 4090 GPU.
Now. Large language models are, like evolution, a “brute force”, undirected method. They can get better by chomping yet more data, faster, in more parallel instances, with batteries of server farms in air-cooled warehouses full of lightning-fast multi-core graphics processors. But this is already starting to get harder. We are bumping up against computational limits, as Moore’s law conks out, and environmental consequences, as the planet does.
For the longest time, “computing power” has been the cheap, efficient option. That is ceasing to be true. More processing is not a zero-cost option. We will start to see the opportunity cost of devoting all these resources to something that, at the moment, creates diverting sophomore mashups we don’t need.[14]
We have, lying unused around us, petabytes of human ingenuity, voluntarily donated into the indifferent maw of the internet. We are not lacking content. Surely the best way of using these brilliant new machines is to harness what we already have. The one thing homo sapiens doesn’t need is more unremarkable information.
LibraryThing as a model
So, how about using AI to better exploit our existing natural intelligence, rather than imitating or superseding it? Could we, instead, create system effects to extend the long tail?
It isn’t hard to imagine how this might work. A rudimentary version exists in LibraryThing’s recommendation engine. It isn’t new or wildly clever — as far as I know, LibraryThing doesn’t use AI. Each user lists, by ISBN, the books in her personal library. The LibraryThing algorithm will tell you, with some degree of confidence based on combined metadata, whether it thinks you will like any other book. Most powerfully, it will compare all the virtual “libraries” on the site and return the user libraries most similar to yours. The attraction of this is not the books you have in common, but the ones you don’t.
Browsing doppelganger libraries is like wandering around a library of books you have never read, but which are designed to appeal specifically to you.
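The mechanics need be no cleverer than set overlap. A minimal sketch — LibraryThing’s actual algorithm is not public, so this is an assumption, and the libraries and similarity measure (Jaccard) are illustrative:

```python
def jaccard(a: set, b: set) -> float:
    """Overlap between two libraries, treated as sets of ISBNs."""
    return len(a & b) / len(a | b) if a | b else 0.0

def doppelganger_picks(mine: set, others: dict, top_n: int = 1) -> set:
    """Rank other users' libraries by similarity to mine, then surface
    the books the closest 'doppelganger' libraries hold that I don't."""
    ranked = sorted(others.items(), key=lambda kv: jaccard(mine, kv[1]), reverse=True)
    picks = set()
    for _, library in ranked[:top_n]:
        picks |= library - mine   # the draw is the books we *don't* share
    return picks

# Hypothetical libraries, keyed by made-up ISBN-ish strings:
mine = {"hofstadter-geb", "kahneman-tfs", "taleb-bs"}
others = {
    "alice": {"hofstadter-geb", "kahneman-tfs", "dennett-ddi"},
    "bob": {"austen-pp", "tolstoy-wp"},
}
print(doppelganger_picks(mine, others))  # → {'dennett-ddi'}
```

Alice’s library overlaps mine most, so her one book I lack is the recommendation: not a mashup, just delightful existing human work I hadn’t found.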
Note how this role — seeking out delightful new human creativity — satisfies our criteria for the division of labour in that it is quite beyond the capability of any group of humans to do it, and it would not devalue, much less usurp genuine human intellectual capacity. Rather, it would empower it.
Note also the system effect it would have: if we held out hope that algorithms were pushing us down the long tail of human creativity, and not shepherding people towards its monetisable head, this would incentivise us all to create more unique and idiosyncratic things.
It also would have the system effect of distributing wealth and information — that is, strength, not power — down the curve of human diversity, rather than concentrating it at the top.
Division of labour, redux
So, about that “division of labour”. When it comes to mechanical tasks of the “body”, Turing machines scale well, and humans scale badly.
Body scaling
“Scaling”, when we are talking about computational tasks, means doing them over and over again, in series or parallel, quickly and accurately. Each operation can be identical; their combined effect astronomical. Nvidia graphics chips are so good for AI because they can do 25,000 trillion basic operations per second. Of course machines are good at this: this is why we build them. They are digital: they preserve information indefinitely, however many processors we use, with almost no loss of fidelity.
You could try to use networked humans to replicate a Turing machine, but the results would be slow, costly, disappointing and the humans would not enjoy it.[15] With each touch the signal-to-noise ratio degrades: this is the premise for the parlour game “Chinese Whispers”.
A game of Chinese Whispers among a group of Turing machines would make for a grim evening.
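The asymmetry is easy to simulate. In this toy sketch — the 5% per-character, per-hop error rate is a made-up illustrative parameter — each “human” relay mangles the message a little, and the errors compound; the digital relay doesn’t care how many hops there are:

```python
import random
import string

def human_relay(message: str, hops: int, error_rate: float = 0.05, seed: int = 7) -> str:
    """Each 'human' re-tells the message, mangling each character with
    some small probability; the degradation compounds hop by hop."""
    rng = random.Random(seed)
    for _ in range(hops):
        message = "".join(
            rng.choice(string.ascii_lowercase) if rng.random() < error_rate else ch
            for ch in message
        )
    return message

def digital_relay(message: str, hops: int) -> str:
    """A Turing machine copies bit-for-bit, with no loss of fidelity."""
    for _ in range(hops):
        message = message  # a perfect copy, every hop
    return message

msg = "send reinforcements we are going to advance"
garbled = human_relay(msg, hops=20)
survived = sum(a == b for a, b in zip(msg, garbled)) / len(msg)
print(f"after 20 human hops, {survived:.0%} of characters survive")
print(digital_relay(msg, hops=20) == msg)  # → True
```

With a 5% error rate per hop, barely a third of the characters survive twenty hops; the digital copy is untouched after any number.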
In any case, you could not assign a human, or any number of humans, the task of “cataloguing the entire canon of human creative output”: this is quite beyond their theoretical, never mind practical, ability. With a machine, at least in concept, you could.[16]
Mind scaling
“Scaling”, for imaginative tasks, is different. Here, humans scale well. We don’t want identical, digital, high-fidelity duplications: ten thousand copies of Finnegans Wake will contribute no more to the human canon than does one.[17] Multiple humans doing “mind stuff” contribute precisely that idiosyncrasy, variability and difference in perspective that generates the wisdom, or brutality, of crowds. A community of readers independently parse, analyse, explain, narratise, contextualise, extend, criticise, extrapolate, filter, amend, correct, and improvise the information and each others’ reactions to it.
This community of expertise is what Sam Bankman-Fried misses in dismissing Shakespeare’s damning “Bayesian priors”. However handsome William Shakespeare’s personal contribution was to the Shakespeare canon — the text of the actual folio[18] — it is finite, small and historic. It was complete by 1616 and hasn’t been changed since. The rest of the body of work we think of as “Shakespeare” is a continually growing body of interpretations, editions, performances, literary criticisms, essays, adaptations and its (and their) infusion into the vernacular. This vastly outweighs the bard’s actual folio.
This kind of “communal mind scaling” creates its own intellectual energy and momentum from a small, wondrous seed planted and nurtured four hundred years ago.[19]
The wisdom of the crowd thus shapes itself: community consensus has a directed intelligence all of its own. It is not magically benign, of course, as Sam Bankman-Fried might tell us, having been on both ends of it.[20]
Bayesian priors and the canon of ChatGPT
The “Bayesian priors” argument which fails for Shakespeare also fails for a large language model.
Just as most of the intellectual energy needed to render a text into the three-dimensional metaphorical universe we know as King Lear comes from the surrounding cultural milieu, so it does with the output of an LLM. The source, after all, is entirely drawn from the human canon. A model trained only on randomly assembled ASCII characters would return only randomly assembled ASCII characters.
But what if the material is not random? What if the model augments its training data with its own output? Might that create an apocalyptic feedback loop, whereby LLMs bootstrap themselves into some kind of hyper-intelligent super-language, beyond mortal cognitive capacity, whence the machines might dominate human discourse?
Are we inadvertently seeding Skynet?
Just look at what happened with Alpha Go. It didn’t require any human training data: it learned by playing millions of games against itself. Programmers just fed it the rules, switched it on and, with indecent brevity, it worked everything out and walloped the game’s reigning grandmaster.
Could LLMs do that? This fear is not new:
And to this end they built themselves a stupendous super-computer which was so amazingly intelligent that even before its databanks had been connected up it had started from “I think, therefore I am” and got as far as deducing the existence of rice pudding and income tax before anyone managed to turn it off.
- —Douglas Adams, The Hitchhiker’s Guide to the Galaxy
But brute-forcing outcomes in fully bounded, zero-sum environments with simple, fixed rules — in the jargon of complexity theory, a “tame” environment — is what machines are designed to do. We should not be surprised that they are good at this, nor that humans are bad at it. This is exactly where we would expect a Turing machine to excel.
By contrast, LLMs must operate in complex, “wicked” environments. Here conditions are unbounded, ambiguous, inchoate and impermanent. This is where humans excel. Here, the whole environment, and everything in it, continually changes. The components interact with each other in non-linear ways. The landscape dances. Imagination here is an advantage: brute force mathematical computation won’t do.
Think how hard physics would be if particles could think.
- — Murray Gell-Mann
An LLM works by compositing a synthetic output from a massive database of pre-existing text. It must pattern-match against well-formed human language. Degrading its training data with its own output will progressively degrade that output: such “model collapse” is an observed effect.[21] LLMs will only work for humans if they are fed human-generated content. Alpha Go is different.
I see a difference between large language models and Alpha Go learning to play superhuman Go through self-play.
When Alpha Go adds one of its own self-vs-self games to its training database, it is adding a genuine game. The rules are followed. One side wins. The winning side did something right.
Perhaps the standard of play is low. One side makes some bad moves, the other side makes a fatal blunder, the first side pounces and wins. I was surprised that they got training through self-play to work; in the earlier stages the player who wins is only playing a little better than the player who loses and it is hard to work out what to learn. But the truth of Go is present in the games and not diluted beyond recovery.
But an LLM is playing a post-modern game of intertextuality. It doesn’t know that there is a world beyond language to which language sometimes refers. Is what an LLM writes true or false? It is unaware of either possibility. If its own output is added to the training data, that creates a fascinating dynamic. But where does it go? Without Alpha Go’s crutch of the “truth” of which player won the game according to the hard-coded rules, I think the dynamics have no anchorage in reality and would drift, first into surrealism and then psychosis.
One sees that Alpha Go is copying the moves that it was trained on and an LLM is also copying the moves that it was trained on and that these two things are not the same.[22]
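The “model collapse” dynamic is easy to caricature in a few lines of Python. This is a toy sketch, not a claim about how real LLMs are trained: a “model” that can do nothing but resample its own training data loses diversity with every generation, and nothing it loses ever comes back.

```python
import random

def resample_generation(corpus, rng):
    # One generation of "training on your own output": the model can only
    # regurgitate samples (with replacement) from its own training set,
    # so rare items are progressively lost and can never be recovered.
    return [rng.choice(corpus) for _ in corpus]

rng = random.Random(42)
corpus = list(range(1000))          # 1,000 distinct "texts"
diversity = [len(set(corpus))]      # distinct items surviving per generation
for _ in range(20):
    corpus = resample_generation(corpus, rng)
    diversity.append(len(set(corpus)))

# Each generation's vocabulary is a subset of the last's, so diversity
# can only fall; after twenty generations most of it is gone.
```

Alpha Go escapes this ratchet because each self-play game injects fresh information from outside the training set: the hard-coded rules adjudicate a winner, anchoring each new game to the “truth” of the game of Go.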
There is another contributor to the cultural milieu surrounding any text: the reader. It is the reader, and her “cultural baggage”, who must make head or tail of the text. She alone determines, for her own case, whether it stands or falls. This is true however rich the cultural milieu that supports the text. We know this because the overture from Tristan und Isolde can reduce different listeners to tears of joy or boredom. One contrarian can see, in the Camden Cat, a true inheritor of the great blues pioneers, where others see only an unremarkable busker.
Construing natural language, much less visuals or sound, is no matter of mere symbol-processing. Humans are not Turing machines. A text only sparks meaning, and becomes art, in the reader’s head. This is just as true of magic — the conjurer’s skill is to misdirect the audience into imagining something that isn’t there. The audience supplies the magic.
The same goes for an LLM — it is simply digital magic. We imbue what an LLM generates with meaning. We are doing the heavy lifting.
Coda
“Und wenn du lange in einen Abgrund blickst, blickt der Abgrund auch in dich hinein”
- —Nietzsche
Man, this got out of control.
So is this just Desperate-Dan, last-stand pattern-matching from an obsolete model, staring forlornly into the abyss? Being told to accept his obsolescence is an occupational hazard for the JC, so no change there.
But if this really is the time that is different, something about it feels underwhelming. If this is the hill we die on, we’ve let ourselves down.
Don’t be suckered by parlour tricks. Don’t redraw our success criteria to suit the machines. To reconfigure how we judge each other so as to make it easier for technology to do the judging at scale is not to become obsolete. It is to surrender.
Humans can’t help doing their own sort of pattern-matching. There are common literary tropes where our creations overwhelm us — Frankenstein, 2001: A Space Odyssey, Blade Runner, Terminator, Jurassic Park, The Matrix. They are cautionary tales. They are deep in the cultural weft, and we are inclined to see them everywhere. The actual quotidian progress of technology has a habit of confounding science fiction and being a bit more boring.
LLMs will certainly change things, but we’re not fit for battery juice just yet.
Buck up, friends: there’s work to do.
A real challenger bank
Optimised automation has its place. All other things being equal, an organisation that has optimised its machines will do better, in peacetime and when at war, than one that hasn’t.
An organisation where machines are optimised is one whose people are also optimised: maximally free to work their irreducible, ineffable, magic; hunting out new lands, opening up new frontiers, identifying new threats, forging new alliances — playing the infinite game — while uncomplaining drones in their service till the fields, harvest crops, tend the flock, work the pits, carry the rubble away from the coalface and manage known pitfalls to minimise the chance of human error.
Machines are historical
Machines are dispositionally historical. They look backward, narratising by reference to available data and prior experience, all of which hails from the past. They have no apparatus for navigating any part of the future that is not like the past.[23]
Now, the future is not entirely dissimilar from the past. We should be grateful for that. For the human sense of “continuity” to have an adaptive advantage, great swathes of what it encounters day to day must be the same, or passably similar. But the parts of the future that are most like the past are the uninteresting parts. They are the aspects of our shared experience where people continue to do what they have always done: our routines; our commonplaces. If a million people bought sliced bread yesterday, it is a safe bet that a similar number will tomorrow, and next Tuesday.
These regularities are the safe, boring, on-piste, already-optimised part of the future. The eighty percent. Here the risks, and therefore the returns, are slimmest. Stampeding for this part of the demand curve is a dumb idea.
Rory Sutherland puts it well in his excellent book Alchemy: The Surprising Power of Ideas that Don’t Make Sense: To converge on the same spot as all your most mediocre competitors, leaving the rest of design-space to the unconventional thinkers, is a rum game.
This is what machine-oriented solutions inevitably do. Even ones using artificial intelligence. (Especially ones using artificial intelligence.) Machines are cheap, quick and easy solutions to hard problems. Everyone who takes the same easy solution will end up at the same place — a traffic jam — a local maximum that, as a result, will be systematically driven into the ground by successive, mediocre market entrants seeking to get a piece of the same action.
In principle, humans can make “educated improvisations” in the face of unexpected opportunities in a way that machines can’t.[24]
There is an ineffable, valuable role in optimising those machines: adjusting them, steering them, directing them, feeding in your human insight as the environment evolves.
Now, the dilemma. If, over thirty years, you have systematically recruited for those who best display machine-like qualities — if that is what your education system targets, your qualification system credentialises and your recruitment and promotion system rewards — your people won't be very good at weaving magic.
You will have built carbon-based Turing machines. We already know that humans are bad at being computers. That is why we build computers. But if we raise our children to be automatons they won’t be good at human magic either.
Nor, most likely, will the leaders of banking organisations who employ them. These executives will have made it to the top of their respective greasy poles by steadfast demonstration of the qualities to which their organisations aspire. If a bank elevates algorithms over all else, you should expect its chief executive to say things like, “tomorrow, we will have robots behaving like people”. This can only be true, or a good thing, if you expect your best people to behave like robots.
Robotic people do not generally have a rogue streak. They are not loose cannons. They do not call “bullshit”. They do not question their orders. They do not answer back.
And so we see: financial services organisations do not value people who do. They value the fearful. They elevate the rule-followers. They distrust “human magic”, which they characterise as human weakness. They find it in the wreckage of Enron, or Kerviel, or Madoff, or Archegos. Bad apples. Operator error. They emphasise this human stain over the failure of management that inevitably enabled it: people who did not play by the rules, over systems of control that allowed, or even obliged, them to.
They run post mortems: with the rear-facing forensic weaponry of internal audit and external counsel they reconstruct the fog of war and build a narrative around it. The solution: more systems. More control. More elaborate algorithms. More rigid playbooks. The object of the exercise: eliminate the chance of human error. Relocate everything to process.
Yet the accidents keep coming. Our financial crashes roll of honour refers. They happen with the same frequency, and severity, notwithstanding the additional sedimentary layers of machinery we develop to stop them.
Hypothesis: these disasters are not prevented by high-modernism. They are a symptom of it. They are its products.
Zero-day vulnerabilities
“Bad apples” find and exploit zero-day flaws in the modernist system, which is what we should expect bad apples to do. They will seek out the vulnerabilities and they will exploit them. They will find them exactly where the modernist machines are not looking: apparently harmless, sleepy backwaters.
But who the bad apples are depends on who is asking, and when.
After-the-fact bad apples: Nick Leeson, Jeff Skilling, Ken Lay, Jerome Kerviel, Kweku Adoboli, Elizabeth Holmes, Arif Naqvi, Charlie Javice, Jho Low, Bernie Madoff, Sam Bankman-Fried.
None of these were bad apples before the fact. They were heroes. Chairmen of NASDAQ. Visionary innovators.
For a fully taxonomised system that runs entirely by algorithm, however smart, derived from the scar tissue of the past, is literally blind to zero-day vulnerabilities. Unless mediated by people thinking and viewing the world unconventionally, it will repeatedly fail. And this has been the tale of the financial markets since Hammurabi published his code.
The age of the machines — our complacent faith in them — has made matters worse. Machines will conspire to ignore “human magic”, when offered, especially when it says “this is not right”.
That kind of magic was woven by Bethany McLean. Michael Burry. Harry Markopolos. Dan McCrum. The formalist system systematically ignored them, fired them, tried to put them in prison.
The difference between excellent banks and hopeless ones: the transparent informal networks by which a good institution mysteriously avoids landmines, pitfalls and ambushes, while a poor one walks into every one.
You can’t code for that. The same human expertise the banks need to hold their creaking systems together, to work around their bureaucratic absurdities and still sniff out new business opportunities and take a pragmatic and prudent view of the risk — this is not a bug in the system, but a feature.
This is the view from the executive suite. They measure individuals by floorspace occupied, salary, benefits, pension contributions, revenue generated. Employees who don’t generate legible revenue show up on the map only as a liability. The calculus is obvious: why pay someone to do badly what a machine will do quicker and more reliably for free?
Thus, Cryan says and Evidence Lab implies: prepare for the coming of the machines. Automate every process. Reduce the cost line. Remove people, because when they come for us, Amazon won’t be burdened by people.
Yes, bank tech is rubbish
To be sure, the tech stacks of most banks are dismal. Most are sedimented, interdependent concatenations of old mainframes, Unix servers, IBM 386s, and somewhere in the middle of the thicket will be a Wang box from 1976 with a character-based interface that can’t be switched off without crashing the entire network. These patchwork systems are a legacy of dozens of mergers and acquisitions and millions of lazy, short-term decisions to patch obsolescent systems with sellotape and glue rather than overhauling and upgrading them properly.
They are over-populated, too, with low-quality staff. Citigroup claims to employ 70,000 technology experts worldwide, and, well, Revlon.
It is hard to imagine Amazon accidentally wiring half a billion dollars to customers because of crappy software and playbook-following school leavers in a call centre in Bucharest. (Can we imagine Amazon using call centres in Bucharest? Absolutely.) Banks have a first-mover disadvantage here, as most didn’t start thinking of themselves as tech companies until the last twenty years, by which stage their tech infrastructure was intractably shot.
But the banks have had decades to recover, and if they didn’t then, they definitely do now think of themselves as tech companies. Bits of the banking system — high-frequency trading algorithms, blockchain and data analytics used on global strategies — are as sophisticated as anything Apple can design.[25]
We presume Apple, Google and Amazon, who always have thought of themselves as tech companies, are naturally better at tech and more disciplined about their infrastructure.[26] But you never know.
In any case, a decent technology platform is a necessary, but not sufficient, condition for success in banking. You still need gifted humans to steer it, and human relationships to give it somewhere to steer to. The software won’t steer itself.
Bank technology is not, of itself, a competitive threat. It is just the ticket to play.
Yes, bank staff are rubbish
Now, to lionise the human spirit in the abstract, as we do, is not to sanctify bank employees as a class in the particular. The JC has spent a quarter century among them. They — we — may be unusually well-paid, for all the difference we make to the median life on planet Earth, but we are not unusually gifted or intelligent. Come on, guys: backtesting. Debt value adjustments. “Six sigma” events several days in a row. Madoff.
It is an ongoing marvel how commercial organisations can be so reliably profitable given the median calibre of those they employ to steer them. Sure, our levels of formal accreditation may be unprecedented, but levels of “metis” — which can’t be had from the academy — have stayed where they are. Market and organisational systems tend to be configured to ensure reversion to mediocrity over time.[27]
There is some irony here. As western economies shifted from the production of things to the delivery of services over the last half-century, the proportion of their workforce in “white collar work” has exploded. There are more people in the UK today than there were in 1970, and almost none of them now work down the mines, on the production line or even in the menswear department at Grace Brothers, as we are given to believe they did a generation ago.
All kinds of occupations that scarcely existed when our parents were young have emerged, evolved, and declared themselves professions. The proportion of jobs requiring, de facto, a university degree (the modern “lite” professional qualification) has grown. The number of universities has expanded as polytechnics rebadged themselves. This, too, feels like a system effect of the modernist orthodoxy: if we can only assess people by reference to formal criteria, then industries devoted to contriving and dishing out those criteria are sure to flourish. The self-interests of different constituencies in the system contrive to entrench each other: this is how feedback loops work.
As technologies expand and encroach, taming unbroken landscapes and disrupting already-cultivated ones, let us take a moment to salute the human ingenuity it takes to defy the march of the machines. For every technology that “solves” a formal pursuit there arises a second order of required white-collar oversight. ESG specialists have evolved to opine on and steward our dashboards and marshal criteria for our environmental and social governance strategies — strategies we were, until recently, happy enough to leave to the guidance of Adam Smith’s invisible hand. Blockchain-as-a-service has emerged as a solution for those who believe in the power of disintermediated networks but want someone else to do the heavy lifting for them. AI compliance officers and prompt engineers steward our relationship with machines supposedly so clever they should not need hand-holding.
Other feedback loops emerge to counteract them. The high modernist programme can measure by any observable criteria, not just formal qualifications. Its favourite is cost.[28] The basic proposition is, “Okay, we need a human resources operation. I can't gainsay that. But it need not be housed in London, nor staffed by Oxbridge grads, if Romanian school leavers, supervised by an alumnus from the Plovdiv Technical Institute with a diploma in personnel management, will do.”
It is as if management is not satisfied that we are mediocre enough. For a generation management orthodoxy has been to use machines, networks and our burgeoning digital interconnectedness to downskill the workforce, while the scope of that workforce has only grown. Why pay for expert staff to do drudgery in London when you can have school leavers do it for a quarter the cost in Bucharest?
But are few, expensive, talented, centralised workers really better than many, cheap mediocre ones? Is the management resource dedicated to overseeing this sprawling complex of pencil pushing justified?
This signals a lack of respect for the work — if you are happy having school leavers doing it off a playbook, it can’t be that hard — but, equally, a lack of respect for machines — if it really is just a case of following a playbook (an analogue algorithm, after all), then why not just program a machine to do it and save all that HR cost and organisational overhead?
But, thirdly, why are you suffering drudgery at all? Why aren’t you using the great experience and expertise of your people to eliminate drudgery?
There is a negative feedback loop here: the experts in London are able and incentivised to eliminate drudgery. Able because they understand the product and the market, and know well what matters and what doesn’t. Incentivised because this stuff is boring.
Outsourced school-leavers in Romania are neither: by design they don’t understand the process — they are only on the park because of a playbook — and they’re not incentivised to remove drudgery because doing so would put them out of a job. Recall the agency paradox.
So we construct the incentives inside the organisation to cultivate a will to bureaucracy. Complicatedness is somewhere between a necessary evil and a virtue.
We continue to get away with it because of the scale these businesses run on, and because all the competitors engage the same narrow group of management consultancy firms who are all infected with exactly the same philosophy. And no-one got fired for hiring McKinsey.
Sunlit uplands
If you were setting up a challenger bank today, what would you do?
Imagine setting your people free: automating the truly quotidian stuff, re-emphasising away from bureaucracy as the greatest good, and towards relationship management and expertise.
Don’t complicate the operational organisation for the sake of unit cost. Properly account for your infrastructure. This means taking a long-cycle view. The cost to Citigroup of its “saving” through under-investment in technology and outsourcing operations teams to the Philippines included its losses — of reputation and credibility, in legal costs and management resources — from the Revlon loan debâcle. A proximate outcome of Credit Suisse’s thinning-out and downskilling of its risk management function was — well, a laundry list of avoidable disasters.
Decide whether you are offering a product or a service. Products are unitary, homogeneous, and should — if properly designed — require no after-sale service. Mortgages. Deposit accounts. Investment funds. These you should fully automate — you should know enough about the product and your consumers that all variables should be known and coded for.
If you are offering a service, make sure it is a service, and not just a poorly-designed product. A service is a relationship between valuable humans. You can’t outsource a relationship to a low-cost jurisdiction without degrading your service. The cost of excellent client management is not wasted in a service. That is the service.
Just as you shouldn’t confuse products with services, don’t confuse services with products. There are parts of the client life cycle that feel like tedious, high-cost, low-value things — client onboarding, say — but that are unusually formative of a client’s impression, precisely because they have the potential to be so painful and they crop up at the start of the relationship. Turning these into client selling points — can you imagine legal docs being a marketing tool? — turns a product into a service. Treating an opportunity to hand-hold a new client as a product, rather than a chance to build the relationship, misses a trick.
- ↑ The horror! The horror! The irony! The irony!
- ↑ Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (2018)
- ↑ Though we think this rather confuses the product for its manufacturer. We might feel different about Apple if, rather than making neat space-aged knick-knacks, it made a business of coldly foreclosing mortgages and charging usurious rates on credit card balances. You don’t think it would? Have you seen the cut it takes from the App Store?
- ↑ The JC’s legaltech roll of honour refers.
- ↑ Our legaltech roll of honour refers.
- ↑ This is the premise of Daniel Kahneman’s Thinking, Fast and Slow, and for that matter, Matthew Syed’s Bounce.
- ↑ Julian Jaynes has a magnificent passage in his book The Origins of Consciousness in the Breakdown of the Bicameral Mind where he steps through all the aspects of consciousness that we assume are conscious, but which are not. “Consciousness is a much smaller part of our mental life than we are conscious of, because we cannot be conscious of what we are not conscious of. How simple that is to say; how difficult to appreciate!”
- ↑ As with the JC’s school exam grades: anything more than 51% is wasted effort. Try as he might, the JC was never able to persuade his dear old Mutti about this.
- ↑ The Long Tail: How Endless Choice is Creating Unlimited Demand (2006)
- ↑ It is called “cultural convergence”.
- ↑ Anita Elberse’s Blockbusters is excellent on this point.
- ↑ This is the “ugliest man” by which, Nietzsche claims, we killed God.
- ↑ Actually come to think of it Lucille, bless her beautiful soul, didn’t seem to do that very often. But still.
- ↑ We would do well to remember Arthur C. Clarke’s law here. The parallel processing power an LLM requires is already massive. It may be that the cost of expanding it in the way envisioned would be unfeasibly huge — in which case the original “business case” for technological redundancy falls away. See also the simulation hypothesis: it may be that the most efficient way of simulating the universe with sufficient granularity to support the simulation hypothesis is to actually build and run a universe in which case, the hypothesis fails.
- ↑ Cixin Liu runs exactly this thought experiment in his Three Body Problem trilogy.
- ↑ Though the infinite fidelity of machines is overstated, as I discovered when trying to find the etymology of the word “satisfice”. Its modern usage was coined by Herbert Simon in a paper in 1956, but the Google ngram suggests its usage began to tick up in the late 1940s. On further examination, those records transpire to be optical character recognition errors. So there is a large part of the human oeuvre — the pre-digital bit that has had to be digitised — that does suffer from analogue copy errors.
- ↑ Or possibly, even none: Wikipedia tells us that, “due to its linguistic experiments, stream of consciousness writing style, literary allusions, free dream associations, and abandonment of narrative conventions, Finnegans Wake has been agreed to be a work largely unread by the general public.”
- ↑ What counts as “canon” in Shakespeare’s own written work is a matter of debate. There are different, inconsistent editions. Without Shakespeare to tell us, we must decide for ourselves. Even were Shakespeare able to tell us, we could still decide for ourselves.
- ↑ On this view Shakespeare is rather like a “prime mover” who created a universe but has left it to develop according to its own devices.
- ↑ See also Lindy Chamberlain, Peter Ellis and the sub-postmasters wrongly convicted in the Horizon debâcle.
- ↑ https://www.techtarget.com/whatis/feature/Model-collapse-explained-How-synthetic-training-data-breaks-AI
- ↑ An excellent post by user:felis-parenthesis on Reddit.
- ↑ This may seem controversial but should not be: narratising an unseen future requires a reflexive concept of self and a sense of continuity in spacetime, neither of which a Turing machine can have.
- ↑ Sure: machines can make random improvisations, and after iterating for long enough may arrive at the same local maximum, but undirected evolution is an extraordinarily inefficient way to “frig around and find out”.
- ↑ That said, the signal processing capability of Apple’s consumer music software is pretty impressive.
- ↑ See the Bezos memo.
- ↑ See The Peter Principle and Parkinson’s Law for classic studies.
- ↑ The great conundrum posed by Ohno-sensei’s Toyota Production System is: why prioritise cost over waste? Because cost is numerical and can easily be measured by the dullest book-keeper. Knowing what is wasteful in a process requires analysis and understanding of the system, and so cannot be easily measured. We eliminate cost as a lazy proxy. It makes you wonder why executives are paid so well.