Reports of our death are an exaggeration

Deutsche Bank’s CEO John Cryan thinks his employees’ days are numbered. The rise of the machines will do for them, in due course: not just the back office grunts cranking out settlements and reconciliations, but everyone. “Today,” he warns, “we have people doing work like robots. Tomorrow, we will have robots behaving like people”.

Cryan’s high-rolling bankers are vulnerable. Even, we suppose, Cryan himself. No bad thing, some might say — who will miss a few liquidated bankers? But it implies a view, widely held, that technology is about to reach a tipping point: no longer just faster, cheaper, better and less aggravating than the sacks of meat who carry out your routine tasks, but equal to — even better than — the sacks of meat who do the hard stuff.

There is much millenarian hand-wringing about this, on blogs and in the new media.

But technology is not new. Since someone invented the lever, the wheel and the plough, humans have used machines to get boring things done: repetitive things; things that require brute strength; things which don’t require judgment. The constraint has always, only, been available technology.

Start with this observation: machines follow unambiguous logical instruction sets better than humans do. By definition: that’s what it is to be a machine. They’re quicker, stronger, nimbler, cheaper, less error-prone. Always have been and always will be.

But machines can only operate in constrained environments. They can react, flawlessly, to pre-conceptualised decisions with pre-configured responses. But take a machine out of its environment and it is useless. (Good luck getting a Jacquard loom to plough a field).

Buried in that observation is this one too: humans are better than machines at handling ambiguity, conflict and novel situations. We’re not perfect — God knows we’re not perfect — but we can at least give it a go. We are great at configuring machines: we can form theories of how they operate, test them, adjust them, diagnose what is wrong with them. We have insight. Machines have no insight.

Throughout history, technology has created short-term dislocations — some big ones — and we are going through one now. But, to date, its long-term prognosis has been uniformly benign: labour-saving devices have freed the human race to do things it previously had no time to do, or hadn’t realised it was possible to do, before the technology came along. Technology opens up design-space. It stretches the intellectual ecosystem: it takes us places we couldn’t go before.

Technology domesticates the ground you know, and opens up frontiers that you don’t.

Frontiers. The Wild West. Here be dragons. Places to boldly go, to cavalierly split infinitives that no-one has split before. A frontier is, by definition, new: novel, unseen, untested. Good luck pointing a Jacquard loom at the Wild West.

In any case, look at the stats: as technology has developed, the world’s population has grown, and the rates of change have tracked each other. There are more people in the world than ever before, but fewer in poverty or indolence. Whatever technological change is doing, it isn’t making us redundant. We are working harder than ever.

So, if you want to argue that this trend has changed, and that henceforth, suddenly, faster, cheaper, more flexible “recipe followers” will, net, put people out of work, you’ll need to explain how. What has changed? Why is this time different? Remember, we have had sage pronouncements of shifting paradigms before: the dotcom boom, so we were told, changed the valuation of businesses forever. That didn’t work out so well.

Mr Cryan's assertion: This time is different. Robots are going to put us out of work.

Mine: That’s a big shout.

For one thing, remember that Cryan is talking his own book. Banking is a harder business than it used to be. Opportunities to develop new businesses (read: opening new frontiers) are diminished; managing to margin is de rigueur. That being so, Mr. Cryan should fire as many people as he can. If he doesn’t automate, his competitors will, and they’ll take his lunch. “We’re ditching the meat sacks”: that is what DB’s investors want to hear.

On that model, investment banking is far less judgment-based and evaluative than it used to be. Much of it can be boiled down to formulating rules and following them by rote. Only the edge cases — where pioneers stand on the frontier gazing into the horizon — require judgment. But the edge cases are the situations of real risk: the “unknown unknowns”.

Automation as a strategy for coping with “known knowns” is simply good business. Humans are bad at following rules. They are expensive. They occupy real estate. They require human resources departments. They misunderstand. The cheaper they are, the more they misunderstand. They screw up. They leave. They don't write things down. Automation is a no-brainer.

But the race to automate “known knowns” is a race to the bottom. The value in a product is the resources and skill required to produce it. Banking products require no fields and no raw materials: they are only skill. A process that can be automated can be replicated, and the value of the “skill” required to produce it drops to nil. The margins it will generate tend to zero: everyone with a decent PC will be at it.

If Mr. Cryan thinks that is the future of his business, he needs his head read. Your future, sir, is in your people. Your robots may accelerate that future, but they can't conceive of it, and they can't deliver it.

High-frequency algo trading is the obvious place where the machines already run the show, handling all the trading, routing and interrogation of venues for liquidity. It's a complex business, done at immense volume at lightning speed, and humans have no hope of competing. But there are plenty of humans still employed in programme trading. They code the algorithms. They monitor the algorithms’ performance. They intervene when the algorithms go haywire, as sometimes they do. The algo can only follow its instructions: it doesn't know it is going haywire, let alone what to do if it does, because it can't introspect. The humans modify the algorithms to stop them going haywire again. They sell. But in any case, the fundamental division of responsibility is the same: machines follow the rules, humans figure them out.
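To labour the point with a toy sketch (a hedged illustration only, not anyone’s actual trading stack; every name, rule and threshold below is made up): the machine follows a fixed rule, and the only thing that notices when the rule has gone haywire is a guardrail some human decided to put there.

```python
# Toy illustration only: the "algo" follows a fixed rule; a human-configured
# guardrail, not the algo, decides what counts as haywire.
# All names, rules and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class Guardrails:
    max_position: int = 1_000    # limits a human chose and a human reviews
    max_loss: float = 50_000.0

def rule(price: float, moving_average: float) -> int:
    """The machine's entire 'judgment': buy below the average, sell above it."""
    return 1 if price < moving_average else -1

def run(prices, moving_averages, guards):
    position, pnl, last_price = 0, 0.0, None
    for price, avg in zip(prices, moving_averages):
        if last_price is not None:
            pnl += position * (price - last_price)   # mark the book to market
        position += rule(price, avg)   # the algo follows its instructions...
        last_price = price
        # ...and this check, written and tuned by a person, is the only thing
        # that "knows" the algo has wandered out of bounds.
        if abs(position) > guards.max_position or pnl < -guards.max_loss:
            return position, pnl, "halted: over to the humans"
    return position, pnl, "completed"
```

The point of the sketch is the division of labour, not the trading: the rule never changes itself; only the people around it do.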

Algo trading is the poster child for artificial intelligence: elsewhere, the barriers to implementation are human, not technological. Every bank’s onboarding is a disaster: legacy systems piled on legacy systems, creaking to cope with hopelessly convoluted approaches to documentation, credit risk management and regulatory compliance that date from the nineteen-nineties, when derivatives trading was a new and exciting idea. Instead of shaking this mess down, slimming it down, commoditising it, firms have outsourced large parts of it, making the whole thing exponentially worse and less soluble.