Rumours of our demise
In 2017, then-CEO of Deutsche Bank John Cryan thought his employees’ days were numbered. Machines would do for them. Not just back-office grunts: everyone. Even, presumably, Cryan himself.[1]
“Today,” he warned, “we have people doing work like robots. Tomorrow, we will have robots behaving like people”.
You can see where he was coming from: what with high-frequency trading algorithms, AI medical diagnosis, AlphaGo and self-driving cars, the machines were coming for us. They had taken over our routine tasks; soon they would take the hard stuff, too. And Cryan said all this before “large language model” was even part of the vernacular.
Now.
For as long as there have been the lever, the wheel and the plough, humans have used technology to do tedious, repetitive tasks and to lend power and speed to our frail earthly shells. We have done this because it is smart: machines follow instructions better than we do — that is what it means to be an “automaton”. At things they are good at, machines are quicker, stronger, nimbler, cheaper and less error-prone than humans.
But that’s an important condition: as George Gilder put it:
“The claim of superhuman performance seems rather overwrought to me. Outperforming unaided human beings is what machines are supposed to do. That’s why we build them.”[2]
The division of labour
Nowadays, we must distinguish between traditional, obedient, rule-following machines and randomly-make-it-up large language models — unthinking, probabilistic, pattern-matching machines. LLMs are the novelty act of 2023, at the top of their hype cycle right now, as blockchain was a year ago, and like DLT they will struggle to find an enduring use case.
Traditional machines make flawless decisions, as long as both question and answer are pre-configured. Meatsacks are better at handling ambiguity, conflict and novel situations. We’re not flawless — that’s part of the charm — but wherever we find a conundrum we can at least have a bash. We don’t crash. We don’t hang until dialogue boxes close. That’s the boon and the bane of the meatware: you can’t always tell when a human makes a syntax error.
This is how we’ve always used technology: the human figures out which field to plough and when; the horse ploughs it.
While technology may have prompted the odd short-term dislocation — the industrial revolution put a bunch of basket-weavers out of work — the long-term prognosis has been benign: technology has, for millennia, freed us to do things we previously had no time to try.
Technology opens up the design space. It reveals adjacent possibilities, expands the intellectual ecosystem, domesticates what we know and opens up frontiers to what we don’t.
Frontiers are places where we need smart people to figure out new tools and new ways of operating. Machines can’t do it.
But technology also creates space and capacity to indulge ourselves. Parkinson’s law states: work expands to fill the time allotted for its completion. Technology also frees us up to care about things we never used to care about. The microcomputer made generating, duplicating and distributing documents far, far easier. There’s that boon and bane, again.
So, before concluding that this time the machines will put us out of work we must explain how. Why is this time different? What has changed?
FAANGs ahoy?
In 2018, the then head of the UBS Evidence Lab — a “sell-side team of experts that work across 55+ specialized areas creating insight-ready datasets” — remarked that the incipient competition for banks was not “challenger” banks, but Apple, Amazon, or Google.
The argument was this: banking comes mostly down to three components: technology, reputation, and regulation.
Two of these — technology and reputation — are substantial problems. The other — regulation — is formalistic, especially if you have a decent technology stack.
So, how do the banks stack up against the FAANGS?
| | Technology | Reputation | Regulation |
|---|---|---|---|
| Banks | Generally legacy, dated, patched together, under-powered, under-funded, conflicting, liable to fall over, susceptible to hacking. | Everyone hates the financial services industry. | All over it. Capitalised, have access to reserve banks, connected, exchange memberships, etc. |
| FAANGs | Awesome: state of the art, natively functional, at the cutting edge, well-funded, well-understood, robust, resilient. OK, it could be hacked. | Who doesn’t love Amazon? Who wouldn’t love to have an account at the iBank? Imagine if banking worked like Google Maps! | OK, there is a bit of investment required here — and regulatory capital is a thing — but nothing is insurmountable with the Amazon flywheel, no? |
| Winner | C’mon: are you kidding me? FAANGs all the way! | FAANGs. Are banks even on the paddock? | Banks have the edge right now. But look out, white-shoe types: the techbros are coming for you. |
That the FAANGs will wipe the floor with any bank when it comes to technology is taken as res ipsa loquitur — it needs little supporting evidence, just based on how lousy bank tech is — and, sure, the FAANGs have better standing with the public. Who doesn’t love Apple? Who does love Wells Fargo?[3]
Therefore, the argument goes, the only place where banking presently has an edge is in regulatory licences and approvals, capital, and regulatory compliance. It’s wildly complex, fiendishly detailed, the rules differ between jurisdictions, and the perimeter between one jurisdiction and the next is not always obvious. To paraphrase Douglas Adams: “You might think GDPR is complicated, but that’s just peanuts compared to MiFID.”
But, but, but — any number of artificially intelligent startups can manage that regulatory risk, right?[4]
But, really. Even giving Evidence Lab the benefit of the doubt, there are a few uncomfortable facts:
So where are they?
Firstly — if banking is such a sitting duck for predator FAANGs, where the hell are they? It is 2023, for crying out loud. Wells Fargo is still with us. None of Apple, Amazon, or Google has so much as cast a wanton glance in its direction, let alone the Vampire Squid’s. Something is keeping the techbros away.
Techbros aren’t natural at banking
And it’s not just fear of regulation, capital and compliance: if it were, you would expect tech firms to be awesome at unregulated financial services.
But — secondly — they’re not.
We’ve been treated to a ten-year, live-fire experiment with how good tech firms will be in unregulated financial services — crypto — from which the banks — “trad fi” if you please — and, notably, the FAANGS have mainly stayed away. It hasn’t gone well.
Credulous cryptobros have found, and promptly fallen down, pretty much every open manhole already known to the dullest money manager — and discovered some whole new ones of their own to fall down too. Helpfully, Molly White has been keeping a running score. Crypto, despite its awesome tech and fabulous branding, has been a disaster.
Tech brand-love-ins won’t survive first contact with banking
Thirdly — a cool gadget maker that pivots to banking, and does it well, has as much chance of maintaining millennial brand loyalty as a toy factory that moves into dentistry.
That Occupy Wall Street gang? Apple fanboys, the lot of them. At the moment. But it isn’t the way trad fi banks go about banking that tarnishes your brand. It’s banking itself. No-one likes money-lenders. It is a dull, painful, risky business. Part of the game is doing shitty things to customers when they lose your money. Repossessing the Tesla. Foreclosing on the condo. That isn’t part of the game of selling MP3 players.
The business of banking will trash the brand.
Bank regulation is hard
Fourthly — regulatory compliance is not formalistic, and it is not “the easy bit of banking”. If you could solve it with tech, the banks would have long since done it. They have certainly tried. (Modern banks, by the way, absolutely are technology businesses, in a way that WeWork and X/Twitter are not.) Regulations change, contradict each other, don’t make sense, overlap, are fiddly, illogical and often counterproductive, and they are subject to interpretation by regulators, who are themselves fiddly, illogical and not known for their constructive approach to rule enforcement.
Getting regulations wrong can have bad consequences. Even apparently formalistic things like KYC and client asset protection. Banks already throw armies of bodies and legaltech[5] at this and still they are routinely breaching minimum standards and being fined millions of dollars.
The gorillas in the room
A human being should be able to change a diaper, plan an invasion, butcher a hog, conn a ship, design a building, write a sonnet, balance accounts, build a wall, set a bone, comfort the dying, take orders, give orders, cooperate, act alone, solve equations, analyze a new problem, pitch manure, program a computer, cook a tasty meal, fight efficiently, die gallantly. Specialization is for insects.
—Robert Heinlein
But, in any case, park all of the above, for it is beside the point: it overlooks the same core banking competence that Mr. Cryan did — quality people, and quality leadership.
We have fallen into some kind of modernist swoon, in which we hold ourselves up against machines, as if techne were a Platonic ideal to which we should aspire.
So we set our children modernist criteria, too, from the moment they set foot in the classroom. The education system selects for individuals by reference to how well they obey rules and how reliably, and quickly, they can identify, analyse and resolve known, pre-categorised “problems”. But these are historical problems with known answers. This is a finite game. This is exactly what machines are best at. Why on earth we should be systematically educating our children to compete with machines at things machines are best at is well beyond this old codger.
If we tell ourselves that “machine-like qualities” are the highest human aspiration, we will naturally find ourselves wanting. We make it easy for the robots to take our jobs. We set ourselves up to fail.
But human qualities are different — humans can improvise, imagine, project in space and time, judge, narratise, analogise, interpret, assess — they can conceptualise Platonic ideals — in a way that algorithms cannot, and LLMs can only mimic by pattern matching.
And there is the impish inconstancy, unreliability and unpredictability of the human condition — these make humans different from, not inferior to, algorithms. They make us difficult to control and manage by algorithm.
And this is the point. We are not meant to be making it easy for machines to manage and control us. By suppressing our human qualities, we make ourselves more legible, machine readable, triageable, categorisable by algorithm. The economies of scale and process efficiencies this yields accrue to the machines and their owners, not us.
Why surrender before kick-off like that?
On being a machine
“Any sufficiently advanced technology is indistinguishable from magic.”
—Arthur C. Clarke’s third law
We are in a machine age.
We call it that because machines have proven consistently good at doing things humans are too weak, slow, inconstant or easily bored to be good at: mechanical things.
Machines — quelle surprise — are good at mechanical things. But state-of-the-art machines, per Arthur C. Clarke, aren’t magic: it just seems like it, sometimes.
Yet we have convinced ourselves, and trained our children, that machine-like qualities — strength, speed, consistency, modularity, fungibility and mundanity — should be their loftiest aims.
But executing a task with strength, speed, consistency, fungibility and patience are lofty aims only if you haven’t got a suitable machine. If you have got a machine, use it. Let your people do something more useful.
If you haven’t, build one.
The body as a metaphor
We are used to using the “Turing machine” as a metaphor for “mind”: how about inverting that? What about “body” — yes, in that dishonourable dualist, Cartesian sense — as a metaphor for “Turing machine”? How about using “mind” and “body” as a guiding principle for the division of labour between human and machine?
What goes to “body”, give to a machine. Motor skills. Temperature regulation. The pulmonary system. Digestion. Aspiration. The conscious mind has no business here. There is little it can add. It only gets in the way. There is compelling evidence that when the conscious mind takes over motor skills, things quickly go to hell.[6]
But leave interpersonal relationships, communication, perception, literary construction, decision-making in times of uncertainty, imagination and creation to the conscious mind. Leave the machines out of this. Let them report, by all means. Let them provide, on request, the information the conscious mind needs to build its models and make its plans, but do not let them intermediate that plan.
The challenge is not to automate indiscriminately, but judiciously. To optimise, so the conscious mind is set free of tasks it is not good at, not diverted from its valuable work by formalistic requirements of the machine.
Here “machine” carries a wider meaning than “computer”. It encompasses any formalised, preconfigured process. A playbook is a machine. A policy battery. An approval process.
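By way of illustration only — none of this comes from the text above, and every name in it is hypothetical — here is a minimal sketch of that division of labour: anything known and pre-categorised goes straight to a preconfigured playbook (the “body”), while anything ambiguous or novel is escalated to a person (the “mind”) rather than being forced into the nearest known category.

```python
# Hypothetical sketch of the mind/body division of labour.
# The "body": a preconfigured playbook for known, pre-categorised problems.
PLAYBOOK = {
    "password_reset": lambda req: f"Reset link sent to {req['user']}",
    "address_change": lambda req: f"Address updated for {req['user']}",
}

def escalate_to_human(request: dict) -> str:
    # The "mind": ambiguity, novelty and judgement go to a person,
    # with the context attached, rather than into the nearest category.
    return f"Escalated to a human with full context: {request}"

def triage(request: dict) -> str:
    handler = PLAYBOOK.get(request.get("type"))
    if handler is not None:
        return handler(request)        # machine work: fast, consistent, cheap
    return escalate_to_human(request)  # human work: the bit you cannot pre-configure

if __name__ == "__main__":
    print(triage({"type": "password_reset", "user": "alice"}))
    print(triage({"type": "collateral_dispute", "user": "bob", "detail": "novel fact pattern"}))
```

The interesting design choice is the default: when the playbook has no answer, the request goes to a person, not to the closest-looking rule.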
A real challenger bank
Optimised automation has its place. All other things being equal, an organisation that has optimised its machines will do better, in peacetime and when at war, than one that hasn’t.
An organisation where machines are optimised is one whose people are also optimised: maximally free to work their irreducible, ineffable, magic; hunting out new lands, opening up new frontiers, identifying new threats, forging new alliances — playing the infinite game — while uncomplaining drones in their service till the fields, harvest crops, tend the flock, work the pits, carry the rubble away from the coalface and manage known pitfalls to minimise the chance of human error.
Machines are historical
Machines are dispositionally historical. They look backward, narratising by reference to available data and prior experience, all of which hails from the past. They have no apparatus for navigating any part of the future that is not like the past.[7]
Now, the future is not entirely dissimilar from the past. We should be grateful for that. For the human sense of “continuity” to have an adaptive advantage, great swathes of what it encounters day to day must be the same, or passably similar. But the parts of the future that are most like the past are the uninteresting parts. They are the aspects of our shared experience where people continue to do what they have always done: our routines; our commonplaces. If a million people bought sliced bread yesterday, it is a safe bet that a similar number will tomorrow, and on next Tuesday.
These regularities are the safe, boring, on-piste, already-optimised part of the future. The eighty percent. Here the risks, and therefore the returns, are slimmest. Stampeding for this part of the demand curve is a dumb idea.
Rory Sutherland puts it well in his excellent book Alchemy: The Surprising Power of Ideas that Don’t Make Sense: To converge on the same spot as all your most mediocre competitors, leaving the rest of design-space to the unconventional thinkers, is a rum game.
This is what machine-oriented solutions inevitably do. Even ones using artificial intelligence. (Especially ones using artificial intelligence.) Machines are cheap, quick and easy solutions to hard problems. Everyone who takes the same easy solution will end up at the same place — a traffic jam — a local maximum that, as a result, will be systematically driven into the ground by successive, mediocre market entrants seeking to get a piece of the same action.
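A toy sketch — purely illustrative, not drawn from the text — of that traffic jam: greedy hill-climbers who all follow the same mechanical rule from the same starting point converge on the same modest local peak, while an agent allowed the occasional off-piste leap usually finds the higher ground elsewhere.

```python
# Illustrative only: a one-dimensional "market" with a modest local peak and a
# higher peak further away. All values and parameters are invented.
import random

def landscape(x: float) -> float:
    # modest peak near x = 2 (height 3), higher peak near x = 8 (height 10)
    return max(0.0, 3 - (x - 2) ** 2) + max(0.0, 10 - (x - 8) ** 2)

def greedy_climb(x: float, steps: int = 200, step: float = 0.1) -> float:
    # the machine-like strategy: always take the locally best move
    for _ in range(steps):
        x = max([x - step, x, x + step], key=landscape)
    return x

def improvising_climb(x: float, steps: int = 200, step: float = 0.1) -> float:
    # mostly greedy, but with an occasional leap somewhere new; keep the best spot found
    best = x
    for _ in range(steps):
        if random.random() < 0.05:
            x = random.uniform(0.0, 10.0)
        x = greedy_climb(x, steps=1, step=step)
        if landscape(x) > landscape(best):
            best = x
    return best

if __name__ == "__main__":
    random.seed(1)
    print("greedy climbers, identical results:",
          [round(landscape(greedy_climb(1.5)), 1) for _ in range(5)])
    print("occasional improvisers, usually better:",
          [round(landscape(improvising_climb(1.5)), 1) for _ in range(5)])
```

The point is the shape of the result, not the numbers: identical, rule-bound strategies produce identical, crowded outcomes; the improviser’s value lies precisely in the moves the rule would never make.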
In principle, humans can make “educated improvisations” in the face of unexpected opportunities in a way that machines can’t.[8]
There is an ineffable, valuable role in optimising those machines: adjusting them, steering them, directing them, feeding in your human insight, optimising for the environment as it evolves.
Now, the dilemma. If, over thirty years, you have systematically recruited for those who best display machine-like qualities — if that is what your education system targets, your qualification system credentialises and your recruitment and promotion system rewards — your people won't be very good at weaving magic.
You will have built carbon-based Turing machines. We already know that humans are bad at being computers. That is why we build computers. But if we raise our children to be automatons they won’t be good at human magic either.
Nor, most likely, will the leaders of banking organisations who employ them. These executives will have made it to the top of their respective greasy poles by steadfast demonstration of the qualities to which their organisations aspire. If a bank elevates algorithms over all else, you should expect its chief executive to say things like, “tomorrow, we will have robots behaving like people”. This can only be true, or a good thing, if you expect your best people to behave like robots.
Robotic people do not generally have a rogue streak. They are not loose cannons. They do not call “bullshit”. They do not question their orders. They do not answer back.
And so we see: financial services organisations do not value people who do. They value the fearful. They elevate the rule-followers. They distrust “human magic”, which they characterise as human weakness. They find it in the wreckage of Enron, or Kerviel, or Madoff, or Archegos. Bad apples. Operator error. They emphasise this human stain over the failure of management that inevitably enabled it. People who did not play by the rules over systems of control that allowed, or even obliged, them to.
They run post mortems: with the rear-facing forensic weaponry of internal audit and external counsel, they reconstruct the fog of war and build a narrative around it. The solution: more systems. More control. More elaborate algorithms. More rigid playbooks. The object of the exercise: eliminate the chance of human error. Relocate everything to process.
Yet the accidents keep coming. Our financial crashes roll of honour refers. They happen with the same frequency, and severity, notwithstanding the additional sedimentary layers of machinery we develop to stop them.
Hypothesis: these disasters are not prevented by high-modernism. They are a symptom of it. They are its products.
Zero-day vulnerabilities
“Bad apples” find and exploit zero-day flaws in the modernist system, which is what we should expect bad apples to do. They will seek out the vulnerabilities and they will exploit them. They will find them exactly where the modernist machines are not looking: apparently harmless, sleepy backwaters.
But who the bad apples are depends on who is asking, and when.
After-the-fact bad apples: Nick Leeson, Jeff Skilling, Ken Lay, Jérôme Kerviel, Kweku Adoboli, Elizabeth Holmes, Arif Naqvi, Charlie Javice, Jho Low, Bernie Madoff, Sam Bankman-Fried.
None of these were bad apples before the fact. They were heroes. Chairman of NASDAQ. Visionary innovators.
For a fully taxonomised system that runs entirely by algorithm — however smart, derived as it is from the scar tissue of the past — is literally blind to zero-day vulnerabilities. Unless mediated by people thinking and viewing the world unconventionally, it will repeatedly fail. And this has been the tale of the financial markets since Hammurabi published his code.
The age of the machines — our complacent faith in them — has made matters worse. Machines will conspire to ignore “human magic”, when offered, especially when it says “this is not right”. That kind of magic was woven by Bethany McLean. Michael Burry. Harry Markopolos. Dan McCrum. The formalist system systematically ignored them, fired them, tried to put them in prison.
The difference between excellent banks and hopeless ones: the transparent informal networks by which a good institution mysteriously avoids landmines, pitfalls and ambushes, while a poor one walks into every one.
You can’t code for that. The same human expertise the banks need to hold their creaking systems together, to work around their bureaucratic absurdities and still sniff out new business opportunities and take a pragmatic and prudent view of the risk — this is not a bug in the system, but a feature.
The view from the executive suite is different. There, individuals are measured by floorspace occupied, salary, benefits, pension contributions and revenue generated. Employees who don’t generate legible revenue show up on the map only as a liability. The calculus is obvious: why pay someone to do badly what a machine could do cheaper, quicker, and more reliably for free?
Thus, Cryan says and Evidence Lab implies: prepare for the coming of the machines. Automate every process. Reduce the cost line. Remove people, because when Amazon comes for us, it won’t be burdened by people.
Yes, bank tech is rubbish
To be sure, the tech stacks of most banks are dismal. Most are sedimented, interdependent concatenations of old mainframes, Unix servers and IBM 386s, and somewhere in the middle of the thicket will be a Wang box from 1976 with a CUI interface that can’t be switched off without crashing the entire network. These patchwork systems are a legacy of dozens of mergers and acquisitions and millions of lazy, short-term decisions to patch obsolescent systems with sellotape and glue rather than overhauling and upgrading them properly.
They are over-populated, too, with low-quality staff. Citigroup claims to employ 70,000 technology experts worldwide, and, well, Revlon.
It is hard to imagine Amazon accidentally wiring half a billion dollars to customers because of crappy software and playbook-following school-leavers in a call centre in Bucharest. (Can we imagine Amazon using call centres in Bucharest? Absolutely.) Banks have a first-mover disadvantage here, as most didn’t start thinking of themselves as tech companies until the last twenty years, by which stage their tech infrastructure was intractably shot.
But the banks have had decades to recover, and if they didn’t think of themselves as tech companies then, they certainly do now. Bits of the banking system — high-frequency trading algorithms, blockchain and the data analytics used on global strategies — are as sophisticated as anything Apple can design.[9]
We presume Apple, Google and Amazon, who always have thought of themselves as tech companies, are naturally better at tech and more disciplined about their infrastructure.[10] But you never know.
In any case, a decent technology platform is a necessary, but not sufficient condition to success in banking. You still need gifted humans to steer it, and human relationships to give it somewhere to steer to. The software won’t steer itself.
Bank technology is not, of itself, a competitive threat. It is just the ticket to play.
Yes, bank staff are rubbish
Now, to lionise the human spirit in the abstract, as we do, is not to say we sanctify bank employees as a class in the particular. The JC has spent a quarter of a century among them. They are unusually well paid, but not unusually gifted or intelligent.
It is an ongoing marvel how commercial banking organisations can be so reliably profitable given the calibre of the hordes they employ to steer them. We have argued elsewhere that informal systems tend to be configured to ensure staff mediocrity over time. Others have too.[11]
But the current orthodoxy contributes to this system. Use machines, networks and connectivity to downskill. Why pay for expert staff to do drudgery in London when you can have school-leavers in Bucharest do it for a quarter of the cost?
Why are you suffering drudgery at all? Why aren’t you using the great experience and expertise of your people to eliminate drudgery?
There is a negative feedback loop here: the experts in London are able and incentivised to eliminate drudgery. Able because they understand the product and the market, and know well what matters and what doesn’t. Incentivised because this stuff is boring.
Outsourced school-leavers in Romania are not: by design they don’t understand the process — they are only on the park because of a playbook — and they’re not incentivised to remove drudgery because doing so would put them out of a job. Recall the agency paradox.
So we construct the incentives inside the organisation to cultivate a will to bureaucracy. Complicatedness is somewhere between a necessary evil and a virtue.
We continue to get away with it because of the scale these businesses run on, and because most are infected with exactly the same philosophy.
Sunlit uplands
If you were setting up a challenger bank today, what would you do?
Imagine setting your people free: automating the truly quotidian stuff, shifting the emphasis away from bureaucracy as the greatest good and towards relationship management and expertise.
Don’t complicate the operational organisation for the sake of unit cost. Properly account for your infrastructure. This means taking a long-cycle view. The cost to Citigroup of its “saving” through under-investment in technology and outsourcing operations teams to the Philippines included its losses — of reputation, credibility, legal costs and management resources — from the Revlon loan debâcle. A proximate outcome of Credit Suisse’s thinning out and downskilling of its risk management function was — well, a laundry list of avoidable disasters.
Decide whether you are offering a product or a service. Products are unitary, homogeneous, and should — if properly designed — require no after-sale service. Mortgages. Deposit accounts. Investment funds. These you should fully automate — you should know enough about the product and your consumers that all variables should be known and coded for.
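As a sketch of what “all variables should be known and coded for” might look like — the product names, rates and terms below are invented for illustration, not taken from the text — consider a term deposit whose every parameter is enumerated up front, so that quoting and selling it needs no human intervention at all:

```python
# Hypothetical, fully-specified "product": every variable is known in advance.
from dataclasses import dataclass

@dataclass(frozen=True)
class TermDeposit:
    term_months: int
    annual_rate: float            # e.g. 0.04 means 4% per annum
    minimum_balance: float
    early_withdrawal_fee: float

    def maturity_value(self, principal: float) -> float:
        # anything outside the coded-for variables is, by definition, not this product
        if principal < self.minimum_balance:
            raise ValueError("below minimum balance: not a variant of this product")
        return round(principal * (1 + self.annual_rate) ** (self.term_months / 12), 2)

# the entire product range, enumerated in advance — nothing left to negotiate
PRODUCT_RANGE = {
    "12m_saver": TermDeposit(term_months=12, annual_rate=0.040, minimum_balance=1_000, early_withdrawal_fee=50),
    "24m_saver": TermDeposit(term_months=24, annual_rate=0.045, minimum_balance=1_000, early_withdrawal_fee=75),
}

if __name__ == "__main__":
    print(PRODUCT_RANGE["12m_saver"].maturity_value(10_000))  # a fully automated quote
```

Anything that cannot be expressed this way — a negotiated covenant, a bespoke carve-out, a judgement call — is not a product but a service, and belongs with a person.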
If you are offering a service, make sure it is a service, and not just a poorly-designed product. A service is a relationship between valuable humans. You can’t outsource a relationship to a low-cost jurisdiction without degrading your service. The cost of excellent client management is not wasted in a service. That is the service.
Just as you shouldn’t confuse products with services, don’t confuse services with products. There are parts of the client life cycle — client onboarding, say — that feel like tedious, high-cost, low-value chores but are unusually formative of a client’s impression, precisely because they have the potential to be so painful and they crop up at the start of the relationship. Turning them into selling points — can you imagine legal docs being a marketing tool? — can turn a product into a service. Treating an opportunity to hand-hold a new client as a product, rather than as a chance to build the relationship, misses a trick.
- ↑ The horror! The horror! The irony! The irony!
- ↑ George Gilder, Life After Google: The Fall of Big Data and the Rise of the Blockchain Economy (2018).
- ↑ Though we think this rather confuses the product for its manufacturer. We might feel differently about Apple if, rather than making neat space-age knick-knacks, it made a business of coldly foreclosing mortgages and charging usurious rates on credit card balances. You don’t think it would? Have you seen the cut it takes from its app store?
- ↑ The JC’s legaltech roll of honour refers.
- ↑ Our legaltech roll of honour refers.
- ↑ This is the premise of Thinking, Fast and Slow.
- ↑ This may seem controversial but should not be: narratising an unseen future requires a reflexive concept of self and a sense of continuity in spacetime, neither of which a Turing machine can have.
- ↑ Sure: machines can make random improvisations, and after iterating for long enough may arrive at the same local maxima, but undirected evolution is an extraordinarily inefficient way to “frig around and find out”.
- ↑ That said, the signal processing capability of Apple’s consumer music software is pretty impressive.
- ↑ See the Bezos memo.
- ↑ See The Peter Principle and Parkinson’s Law for classic studies.