The Bayesian

{{Qd|Bayesian reasoning|beɪːzˈiːən ˈriːzᵊnɪŋ|n|A method of statistical inference that updates an event’s probability from a general baseline probability as more information about the specific event becomes available.}}


====Hot in the City ====
{{drop|L|ike so many}} before them, Mike Lynch and Richard Gaunt, a pair of post-doctoral researchers into [[neural network]]s at Cambridge University, dreamed of commuting their intellectual achievements into city success. They had scaled the dreaming spires. They could see how revolutionary their technology would be: they were uniquely placed to lead the vanguard.  
 
So, they founded a start-up in 1996. They called it Autonomy. They imbued it with an aggressive, sales-led culture. They treated their top producers “like rock stars” and, allegedly, fired the bottom 5% of the sales force each quarter.


====Vital Idol====
{{Drop|A|utonomy’s flagship}} product was IDOL: an “intelligent data operating layer” platform using, for the time, advanced pattern recognition and [[algorithm]]ic [[Bayesian reasoning|inferential]] reasoning techniques to extract data from the morass of unstructured information with which modern corporations are inundated: documents, correspondence, contracts, instant messages, email and phone data.  


IDOL promised a revolution: pure crystalline business intelligence out of a previously unnavigable swamp.


Now, socially awkward brainboxes hawking monstrous machines powered by mystical tools to convert oceans of muck into nuggets of brass had quite the cachet in 2005. It has quite the cachet now, come to think of it.  


Autonomy targeted [[legal department]]s. One obvious application was in document management — a subject dear to the heart of any budding [[legal COO|legal operationalist]] — so, unusually, we [[legal eagle]]s were a prime target for the marketing blitz.


I remember it well. In a nutshell: awesome client entertainment; cool pitch; premium pricing; cluttered website; poor demo. Didn’t buy.

Got tickets to White Hart Lane, though, and, a bit later, a catbird seat as a hurricane-force shitstorm blew up out of nowhere.

There are so many ironies in this story. Anyway.


====White Wedding====
{{drop|T|he other city}} players saw what Autonomy was doing and took notice. Big Tech was not about to let the plucky Fenland brainboxes eat the world all by themselves. It wanted in.


In 2011, American megasaur Hewlett Packard pounced. Swooning at honeyed projections of hoverboards, time-travelling DeLoreans and alchemical [[machines of loving grace]] turning our manifold mumblings into pure gold, HP acquired Autonomy lock, stock and barrel for a shade over US$11 billion. This was about seventy per cent over its prevailing market valuation.


''Seventy per cent.''
Autonomy’s board wasted no time in seeking shareholder approval. HP’s board didn’t. Its shareholders went ''nuts''. Its stock plummeted 20%. Within a month, the board had fired the CEO,<ref>This was not entirely due to the Autonomy acquisition, but it was a significant component.</ref> but HP carried on and closed the deal anyway.


It was a disaster.  


Within a year HP had ejected Lynch and written down its investment by, coincidentally, about seventy per cent. Within five years, HP had flogged the whole thing off for glue.  


====Catch my fall====
{{drop|W|hose fault was}} that? On one hand, Autonomy’s presentations and projections were on the “optimistic” side. On the other, HP had a truly dismal record when it came to transformative acquisitions: its previous deals (Compaq in 2002, EDS in 2008, and Palm in 2010) had all been ''catastrophic''. There was a pattern here.


Cue ''all manner'' of litigation. HP shareholders sued the board for its conduct of the acquisition. HP sued Lynch and his management team, accusing them of tricking HP into overpaying for the company. The Serious Fraud Office, [[Securities and Exchange Commission|SEC]] and FBI launched parallel investigations into Autonomy’s potentially fraudulent misrepresentations. It seems extraordinary that three criminal agencies were intervening on behalf of an organisation as sophisticated as Hewlett Packard.


Autonomy’s position was that HP had simply misunderstood the product, integrated it poorly and then mismanaged it after acquisition. Lynch might also have said: “that, my dudes, is what the [[due diligence]] process is for”.


The various civil actions rumbled on for years. In 2022, having spent a further US$100 million, HP won a pyrrhic billion dollars of the five it sought from Lynch. Lynch appealed but, before the appeal could be heard, he and his former Vice President of Finance, Stephen Chamberlain, were extradited to the United States to face wire fraud and conspiracy charges.
 
The criminal standard of proof being higher, things went better for the Autonomy executives this time. In June 2024, Lynch and Chamberlain were acquitted of all criminal charges.
 
Following the verdict, Stephen Chamberlain returned to his home in Cambridgeshire. Lynch headed with his family to the Mediterranean, where he hosted friends on his superyacht, ''The Bayesian''.


====One night, one chance====
{{drop|O|n 17 August}} 2024, while jogging near his home, Stephen Chamberlain was hit by a car. Tragically, he died of his injuries two days later.


Early on the morning of the 19th of August — ''the very day Chamberlain died'' — Mike Lynch’s yacht was hit by a freak storm while at anchor off the coast of Sicily. She capsized and sank within 16 minutes, killing Lynch, his daughter and five others.


You could not, many were inclined to believe, make it up. The improbable circumstances of these accidents, within days of each other and just weeks after their acquittals, seemed an ''extraordinary'' coincidence, although the mainstream media quickly rationalised that an ''actual'' conspiracy here was implausible.  


There were no obvious conspirators, for one thing. HP had little to gain, and orchestrating a freak storm powerful enough to sink a 550-ton yacht was surely beyond the coordinating powers of an organisation which repeatedly struggled to manage basic [[due diligence]].  


Plus, if you ''did'' want to “off” an executive, there are easier ways of doing it than summoning a biblical storm. Apparently.


Yet, still, the unfiltered maw of uninformed public speculation — from which I write — found this all very, well, fishy.


That must have been ''some kind of storm''. How often do massive ships sink, at anchor, in bad weather? Does that ''ever'' happen? What, in other words, are the odds of that?


And so we face one more in this story’s remarkable collection of ironies: the way to determine whether the ''Bayesian'' was sabotaged, or was merely the victim of a ghastly misfortune, is to use ''[[Bayesian inference]]''.
 


====The Right Way ====
{{drop|B|ayesian statistics is}} an approach to inferential reasoning that takes a “prior probability” (an initial assessment of the probability of an event, based on general knowledge) and updates it with specific observations to produce a revised “posterior probability” of the specific event given the available evidence.

The method was developed by Thomas Bayes, a Presbyterian minister from Tunbridge Wells, who died in 1761. The theorem was found in his papers after his death and published posthumously by his friend Richard Price.
 
Bayes’ theorem calculates the probability that ''this'' event is true (the “''posterior''” probability) from the general probability that events ''like'' this one are true (the “''prior''” probability), updated with new information about ''this'' specific event. Bayes devised a mathematical formula describing how the two pieces of information combine to give the updated posterior probability.
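In symbols (the standard modern statement of the theorem, not notation Bayes himself used), for a hypothesis <math>H</math> and new evidence <math>E</math>:

<math>P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)} = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}</math>

where <math>P(H)</math> is the prior, <math>P(E \mid H)</math> the likelihood of seeing the evidence if the hypothesis is true, and <math>P(H \mid E)</math> the posterior.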
 
{{Quote|
From a full deck, there is a ¼ (i.e., a 13/52) chance I will draw a heart. If I draw a second card, the odds change: there are now only 51 cards. If I know the first card drawn was a heart, I can update the probability assessment: there is now a 12/51 chance the second card is a heart, and a 13/51 chance for each of spades, diamonds and clubs. If I did not see the first card, I have no new information, so my assessment of the probabilities stays the same.}}
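A minimal sketch in code, hypothetical and only to make the arithmetic explicit, of that card-counting update:

<syntaxhighlight lang="python">
from fractions import Fraction

SUITS = ["hearts", "spades", "diamonds", "clubs"]

def second_card_odds(first_card_suit=None):
    """Probability of each suit for the second card drawn.

    first_card_suit=None: we never saw the first card, so we have no new
    information and the prior (13/52 per suit) stands.
    first_card_suit="hearts" (say): update on the fact that one heart is gone.
    """
    if first_card_suit is None:
        return {suit: Fraction(13, 52) for suit in SUITS}
    remaining = {suit: 13 for suit in SUITS}
    remaining[first_card_suit] -= 1
    return {suit: Fraction(count, 51) for suit, count in remaining.items()}

print(second_card_odds())          # every suit: 1/4
print(second_card_odds("hearts"))  # hearts: 12/51; each other suit: 13/51
</syntaxhighlight>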
 
This kind of reasoning dominates modern [[information technology]]. Autonomy’s IDOL platform was a pioneer in using Bayesian inferential techniques to analyse information, and it mattered enough to Mike Lynch that he named his superyacht after it.
 
Still, prior probabilities are not magic, and they can mislead. Drawing a second, or even a third, consecutive heart is less likely, but that is not to say it cannot happen.
 
Bayesian inference might have told Mike Lynch, as he dropped anchor at Porticello, that his yacht was most unlikely to sink in the night.


====Daytime drama====
{{Drop|T|here is no}} better example of [[Bayesian inference]] at work than the [[Monty Hall problem]]. What follows will seem either trite, or completely Earth-shattering, depending on whether you have come across it before or not. It indicates, too, how tremendously bad our instinctive grasp of probabilities is:


{{monty hall capsule}}
Bayesian inference invites us to update our assessment of probabilities based on what we have since learned. We do this quite naturally in certain environments — when playing cards, for example: once the Ace has been played, you know your King is high, and to know that is to update a Bayesian prior — but not in others. We tend to be better at Bayesian reasoning in familiar circumstances, like card games, than in novel ones, like Ferrari-and-goat competitions.
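If the intuition still resists, simulation settles it. A throwaway sketch (hypothetical code, not something from the capsule itself) that plays the Ferrari-and-goat game many times over and counts how often sticking and switching win:

<syntaxhighlight lang="python">
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the contestant wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door the contestant did not pick and which hides a goat.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

trials = 100_000
for switch in (False, True):
    wins = sum(play(switch) for _ in range(trials))
    print(f"switch={switch}: win rate ~ {wins / trials:.3f}")
# Sticking wins about a third of the time; switching about two-thirds.
</syntaxhighlight>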


====Come on, come on====
{{drop|W|e are pattern}}-seeking animals. Narratisers. We are drawn to dramatic stories with heroes and villains. When key figures in a meme-cluster of ill-advised litigation tragically die in freak conditions, we demand an explanation. It does not seem satisfactory to conclude it was “just one of those things”.

====Shock to the system====
{{drop|T|he blame and}} recrimination started quickly. Italian police considered charging the ''Bayesian''’s crew with manslaughter, or “culpable shipwreck”.<ref>https://apnews.com/article/italy-sicily-superyacht-sinking-investigation-7b26e40e9efd69c4d08e189093f67ad9</ref>

The yacht’s builder, The Italian Sea Group,<ref>Strictly speaking, the ''successor'' to the boat’s builder: Perini Navi, which constructed the boat in 2008, had been acquired by TISG in 2021.</ref> made unusually strident public statements blaming the crew, and the yacht’s owner, for mishandling the vessel: by not stopping it sinking, they had ''defamed'' the boat-builder’s reputation. Later, TISG, a yacht builder with the best part of half a billion euros in annual revenues, launched, then quickly disowned, a civil suit seeking compensation for reputational damage.<ref>https://maritimeobserver.com/bayesian-superyacht-sinking/.</ref>

Now, here’s a strategic litigation tactic — and I am ''absolutely not'' suggesting anyone did this; just that the financial incentives are such that someone ''might'': imagine a yacht builder whose boat unexpectedly sinks publicly slandering the boat’s skipper, then instantly and privately settling the defamation claim for, say, EUR 20m, on condition that the skipper never publicly challenges the slander. That would be handsomely more than a professional sailor could expect to earn in a lifetime, yet an order of magnitude less than the potential losses to the shipbuilder from the reputational damage of having designed a faulty ship. Again: I categorically do not suggest anyone did this. I mention it only to illustrate the perverse incentives of financial settlement: if the stakes are high enough, a litigant can have all kinds of ulterior motives for taking legal action.
 
===Linear and systems: two competing theories===
{{quote| A conspiracy theorist is someone who’s never tried to organise a surprise party.
— John F. Kennedy}}
{{drop|H|old this as}} a provisional grand theory of the world. Yes: ''another'' one. We can divide our ways of explaining the world into [[Linear interaction|''linear'' explanations]] and [[systems theory|''systems'' theories]].
 
“[[Linear interaction|Linear explanations]]” view the world as a ''[[complicated system|complicated]]'', but essentially ''logical'', system: one that therefore can be, and would benefit from being, designed strategically from the top down. This is a utopian, grandiose, cybernetic perspective. Failures of the world to produce hoped-for outcomes are always ''blamable'' on something: the universe is essentially stable, and its linear causes and effects, though sometimes unwanted, are fundamentally ''calculable''. Especially in hindsight. [[The past is a different country]].
 
“[[Complicated system]]s” we describe in linear terms are bounded, rule-based ''processes'': they may contain interacting, autonomous agents, but they operate within fixed boundaries and according to predetermined and static rules of engagement. All relevant information is ''available to'', even if not necessarily ''known by'', all participants in the system.
 
We can solve complicated systems. We can forward- or backward-engineer them. They may require great skill to navigate — to beat Pelé at football you must be very good — but it is nevertheless, in theory, doable. Everything is predictable, in the sense that all events in the system are caused, have effects and operate according to a single, consistent set of known rules.
 
We can ''explain'' the system’s overall behaviour in terms of ''cause and effect''. The “[[root causes]]” for what happens will dominate the incidental circumstances in which they occur. An inspirational CEO, Galácticos, the [[Bad apple|bad apples]] who ruined it for everyone, the human [[operator]] whose error caused the air crash: these are linear explanations of aberrations in linear systems.
 
Now the nature of complicated systems is that, while they may be integral, whole and consistent, we cannot always see them in their entirety. Sometimes our [[root cause]] analysis reaches a dead, unsatisfactory end, simply due to a lack of information.  
 
The natural response, in a linear system, is simply to ''seek more information''. The system has a complete state — there is a “bottom”:<ref>In information theory, the “entropy” in a system is a function of the information capacity of the system. A given string of symbols has a maximum information capacity.</ref> we just need to find it. Where that information is available, we should get it.
 
Where it is not — where an event happens for which there is no apparent cause — [[inference]] is required. We must hypothesise backwards from what we know about the system to infer what happened.
 
Much of the time, this is easy to do. We are natural inference engines. All of science proceeds this way.
 
Conspiracy theories are typically linear explanations in that they put ultimate blame (or credit) for a given state of affairs down to the intentional actions, be they malign or well-meant, of a limited number of disproportionately influential people.
====Complex systems====
“Systems theories” view the world as a non-linear [[complex system]].<ref>[[Complex systems]] are “unbounded, interactive processes: interaction with autonomous agents without boundaries, without pre-agreed rules, and where information is limited and asymmetric. Rules, boundaries and each participant’s objectives are dynamic and change interactively. Impossible to predict.”</ref> They attribute outcomes to the behaviour of a wider, interlocking system of relationships, in which individuals’ acumen and motivations contribute to, but rarely determine, the outcomes the system produces. In a [[complex system]], ''unexpectable'' things can and do happen. That is generally not a single person’s fault, but an unexpected consequence of the system’s design. As systems experience unexpected consequences, they tend to learn and adjust to them.
 
Old systems, having been around for longer and having been exposed to more variations in condition, are better stress-tested and therefore throw up fewer unexpected consequences than ''new'' systems.<ref>This is sometimes called the “[[Lindy effect]]”, but is also explained in terms of [[Pace layering|pace layering]]: old systems occupy deeper layers.</ref>
 
 
I will grant you at once that this is a wide conception indeed of “conspiracy theory”: it includes not just gunpowder plots and Russian bots in Western elections but the general idea that great art is the product of singular genius, commercial success is the outcome of exceptional leadership, and jazz is not just a succession of happy accidents.
 
By contrast, systems theory says, in a nutshell, that it is a bit more complicated than that. In the case of great artists and visionary business people, their input into the creative process is not discounted altogether but aggregated with a great deal of other system information to generate an outcome. Shakespeare was indeed a genius, but he would have died in anonymity were it not for his sponsors, publishers, patrons, theatres, actors, critics and audience: the magnificent cultural establishment we now know as the Shakespeare canon contains a lot that had nothing to do with William Shakespeare.
 
====Systems and paradigms====
I rabbit on ''a lot'' on this site about power structures and [[paradigm]]s. These are ''systems'' of political, scientific and cultural control.
 
====Systemantics====
The best place to start with [[systems theory]] is {{author|John Gall}}’s short, acerbic, funny and devastatingly incisive book {{br|Systemantics: The Systems Bible}}. Systems theorists have an acronym, “POSIWID”: the “''purpose of a system is what it does''”. This, Gall gently points out, is almost inevitably not what those who designed the system had in mind. The system tends to oppose its own intended function, so to blame conspirators who occupy positions of ostensible influence and power within it is rather to miss the point. They are as much victims of systemantics as anyone else.
 
====Operator error====
We should not be surprised to hear operator error advanced as a cause. It is a clear, linear root cause — suitable for a linear theory of a complicated system, in which a trained expert fell short of a required standard — and it has the additional advantage of ''not challenging the overall integrity of the system''. It was not management’s fault, nor a failure to supervise, nor the ship’s design, but component failure. Where the component that fails is an autonomous agent, there is only so much a system administrator can do. The strategic end-point is to ''design out the need for autonomous components in the system'' altogether — until very recently an impossibly idealistic goal but, with natural language models equipped with powers of, well, Bayesian reasoning (I told you there would be some irony in this post), perhaps that point is not so far away.
 
But as long as fallible human autonomous agents are still needed for edge-case tasks — for all the mind-boggling technology aboard a state-of-the-art superyacht, it still needs a skipper, an engineer and a watch detail — the scope for human error is unavoidable.
 
====Defamation====
The claim was brought under Italian law, and withdrawn before anyone got much of a look at it, but it is fun to imagine it was essentially one of [[defamation]]:
 
{{Qd|Defamation|ˌdɛfəˈmeɪʃᵊn|n|A false statement about a person, communicated to others, that harms the person’s reputation.
 
The statement must be untrue, specific enough to identify the person and published or shared with third parties such that it damages the person’s reputation or standing.}}
 
Can your careless ''conduct'' — not “representational” gestures, but actual, non-communicative acts, like steering (or not steering) a boat — be a ''statement''?
 
The defamatory “statement”, we presume, would be that ''the boat was designed so badly that, even at anchor, a crew of competent professional sailors could not stop it sinking in a severe storm''.
 
That allegation would certainly be defamatory to a shipbuilder. ''If it were untrue''.
 
But the problem the shipbuilder has is that the ship ''did'' sink, at anchor, in a storm. This does not normally happen to well-designed ships. Presuming it was well designed, then either (i) the storm’s ferocity or (ii) the sailors’ negligence was truly unprecedented.


And here, again, Bayesian reasoning can help us. We have 3,500 years of Mediterranean seafaring experience. It plays out across two dimensions. The first is the kind of weather that can occur in the Mediterranean: our “Lindy database” of freak events — which includes eruptions on Etna and in the Aeolian islands — is massive, and each of those experiences is buried deep in the pace layers of Mediterranean seafaring knowledge. We know the conditions shipbuilders must design for. The second dimension is the parameters of ship design. Ships falling comfortably within the tested limits of ship design are unlikely to be unsuitable for Mediterranean conditions. The ''Bayesian'' was not unusually long, nor unusually heavy, nor was its retractable keel an unusual design: while a relatively recent innovation, it is a positive advantage in the comparatively shallow and tranquil waters of the Mediterranean.

So the ''Bayesian'' was, in most respects, not an unusual boat. But in one respect it was: at the time of its construction in 2008, the ''Bayesian''’s mast was the tallest single yacht mast in history. The next tallest, one of the five masts on the ''Royal Clipper'' (constructed in 2000), was just 60 metres, against the ''Bayesian''’s 73-metre mast. And the ''Royal Clipper'' is a 179-metre-long, permanently keeled, square-rigged ship. The ''Bayesian''’s mast, relative to its hull length and given its retractable keel, was extraordinarily disproportionate.

A third angle on the prior probability is the acumen of the crew. We should review the historical record for situations in which crew failure was the sole attributed cause of the sinking of a ship at anchor.

This is really another way of asking the question: is it possible to design out of a ship the risk that it will be capsized and sunk purely by inclement weather? Can you solve for human failure, or is seafaring so dependent on human decision-making that no design can eliminate the risk of human error?

The sceptical principle that “absence of evidence is not evidence of absence” holds only where we have not looked for evidence. If we have looked, and there is none, there ''is'' evidence of absence.

Sometimes there aren’t enough data to update our “priors”. The rational response is to acknowledge the limitations of our inference — even when the coincidences seem to cry out for a more dramatic explanation.

We are tempted to update the probability assessment dramatically because of the timing and the freak nature of the storm in Sicily. But here Bayesian inference — the mathematical foundation of Autonomy’s IDOL machines — tells us a different story: sometimes patterns are just noise. Ships sink — especially ships with unusually tall masts and retractable keels. Sometimes joggers get hit by cars. Not often, but more often than corporations summon freak storms and coordinate random traffic to commit executive murder.

HP’s folly was to draw too many conclusions from too little data during its Autonomy due diligence. We should avoid making the same mistake when analysing tragic events.
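To see why a small prior is so stubborn, here is a toy Bayesian update. The numbers are invented purely for illustration; nothing in the record supports any particular figure. The shape of the arithmetic is the point: even evidence that is vastly more likely under the sinister hypothesis barely shifts a vanishingly small prior.

<syntaxhighlight lang="python">
def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Bayes’ rule for a single binary hypothesis H against not-H."""
    numerator = p_evidence_given_h * prior
    return numerator / (numerator + p_evidence_given_not_h * (1 - prior))

# Purely illustrative, made-up numbers:
prior = 1e-7            # prior probability of an orchestrated "weather assassination"
p_e_given_h = 0.9       # a plot would very likely produce this coincidence
p_e_given_not_h = 1e-4  # freak storm and road accident coinciding by chance

print(posterior(prior, p_e_given_h, p_e_given_not_h))
# ~0.0009: a roughly 9,000-fold update, yet still overwhelmingly
# "just one of those things".
</syntaxhighlight>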
