Talk:The Bayesian
Bayesian reasoning
/beɪˈziːən ˈriːzᵊnɪŋ/ (n.)
A method of statistical inference that updates an event’s probability from a general baseline probability as more information about the specific event becomes available.
Hot in the City
Like so many before them, Mike Lynch and Richard Gaunt, a pair of post-doctoral researchers in neural networks at Cambridge University, dreamed of converting their intellectual achievements into City success. They had scaled the dreaming spires. They could see how revolutionary their technology would be: they were uniquely placed to lead the vanguard.
So, they founded a start-up in 1996. They called it Autonomy. They imbued it with an aggressive, sales-led culture. They treated their top producers “like rock stars” and, allegedly, fired the bottom 5% of the sales force each quarter.
Vital Idol
Autonomy’s flagship product was IDOL: an “intelligent data operating layer” platform that used what were, for the time, advanced pattern recognition and algorithmic inference techniques to extract data from the morass of unstructured information with which modern corporations are inundated: documents, correspondence, contracts, instant messages, email and phone data.
IDOL promised a revolution: pure crystalline business intelligence out of a previously unnavigable swamp.
Now, socially awkward brainboxes hawking monstrous machines powered by mystical tools to convert oceans of muck into nuggets of brass had quite the cachet in 2005. They have quite the cachet now, come to think of it.
One obvious application was in document management — a subject dear to the heart of any budding legal operationalist — so, unusually, we legal eagles were a prime target for the marketing blitz.
I remember it well. In a nutshell: awesome client entertainment; cool pitch; premium pricing; cluttered website; poor demo. Didn’t buy.
Got tickets to White Hart Lane, though, and, a bit later, a catbird seat as a hurricane-force shitstorm blew up out of nowhere.
There are so many ironies in this story. Anyway.
White Wedding
The other City players saw what Autonomy was doing and took notice. Big Tech was not about to let the plucky Fenland brainboxes eat the world all by themselves. It wanted in.
In 2011, American megasaur Hewlett Packard pounced. Wooed by honeyed projections of hoverboards, time-travelling DeLoreans and alchemical machines of loving grace turning our manifold mumblings into pure gold, HP acquired Autonomy lock, stock and barrel for a shade over US$11 billion. This was about seventy per cent over its prevailing market valuation.
Seventy per cent.
Autonomy’s board wasted no time in seeking shareholder approval. HP’s board didn’t. HP’s shareholders went nuts; its stock plummeted 20%. Within a month, the board had fired the CEO[1] but HP carried on and closed the deal anyway.
It was a disaster.
Within a year HP had ejected Lynch and written down its investment by, coincidentally, about seventy per cent. Within five years, HP had flogged the whole thing off for glue.
Catch My Fall
Whose fault was that? On one hand, Autonomy’s presentations and projections were on the “optimistic” side. On the other, HP had a truly dismal record when it came to transformative acquisitions: its previous deals (Compaq in 2002, EDS in 2008 and Palm in 2010) had all been catastrophic. There was a pattern here.
Cue all manner of litigation. HP shareholders sued the board over its conduct of the acquisition. HP sued Lynch and his management team, accusing them of tricking HP into overpaying for the company. The Serious Fraud Office, the SEC and the FBI launched parallel investigations into Autonomy’s potentially fraudulent misrepresentations. It seems extraordinary that three enforcement agencies should need to intervene on behalf of an organisation as sophisticated as Hewlett Packard.
Autonomy’s position was that HP had simply misunderstood the product, integrated it poorly and then mismanaged it after acquisition.
The various civil actions rumbled on for years. In 2022, having spent a further US$100m, HP won a pyrrhic billion dollars of the five it sought from Lynch. Lynch appealed but, before the appeal could be heard, he and his former Vice President of Finance, Stephen Chamberlain, were extradited to the United States to face wire fraud and conspiracy charges.
The criminal standard of proof being higher, things went better for the Autonomy executives this time. In June 2024, Lynch and Chamberlain were acquitted of all criminal charges.
Following the verdict, Stephen Chamberlain returned to his home in Cambridgeshire. Lynch headed with his family to the Mediterranean, where he hosted friends on his superyacht, The Bayesian.
One Night, One Chance
On 17 August 2024, while jogging near his home, Stephen Chamberlain was hit by a car. Tragically, he died of his injuries two days later.
Early on the morning of the 19th of August — the day Chamberlain died — Mike Lynch’s yacht was hit by a freak storm while at anchor off the coast of Sicily. She capsized and sank within 16 minutes, killing Lynch, his daughter and five others.
You could not, many were inclined to believe, make it up. The improbable circumstances of these accidents, within days of each other and just weeks after the men’s acquittals, seemed an extraordinary coincidence, although the mainstream media quickly rationalised that an actual conspiracy was implausible.
There were no obvious conspirators, for one thing. HP had little to gain, and orchestrating a freak storm powerful enough to sink a 550-ton yacht was surely beyond the coordinating powers of an organisation which repeatedly struggled to manage basic due diligence.
Plus, if you did want to “off” an executive, there are easier ways of doing it than summoning a biblical storm. Apparently.
Yet, still, the unfiltered maw of uninformed public speculation — from which I write — found this all very, well, fishy.
That must have been some kind of storm. How often do massive ships sink, at anchor, in bad weather? Does that ever happen? What, in other words, are the odds of that?
And so we face one more in this story's remarkable collection of ironies: the way to determine whether the Bayesian was sabotaged or merely the victim of a ghastly misfortune is to use Bayesian inference.
The Right Way
Bayesian statistics is an approach to inferential reasoning that takes a “prior probability” (an initial assessment of the probability of an event based on general knowledge) and updates it with specific observations to create a revised “posterior probability” of the specific event given the available evidence.
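Formally, this is just Bayes’ theorem: the posterior is the prior, reweighted by how well each hypothesis explains the evidence. A minimal sketch in Python (the function and the diagnostic-test numbers are mine, purely illustrative, and nothing Autonomy shipped):

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    # where P(E) = P(E|H) * P(H) + P(E|not H) * (1 - P(H)).
    def posterior(prior, p_e_given_h, p_e_given_not_h):
        p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_e

    # Illustration: a test for a condition afflicting 1% of people. The test
    # catches 90% of true cases but also flags 5% of clear ones.
    print(posterior(0.01, 0.90, 0.05))  # ~0.154: a positive is still probably wrong

Note how the tiny prior drags the posterior down. That effect will matter later in this story.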
There is no better example of Bayesian inference at work than the Monty Hall problem. What follows will seem either trite or completely Earth-shattering, depending on whether you have come across it before. It indicates, too, how tremendously bad our instinctive grasp of probabilities is:
You are a game-show contestant. The host shows you three doors and tells you: “Behind one of those doors is a Ferrari. Behind the other two are goats.[2] You may choose one door.”
Knowing you have a ⅓ chance, you choose a door at random.
Now the host theatrically opens one of the doors you didn’t choose, revealing a goat.
Two closed doors remain. She offers you the chance to reconsider your choice.
Do you stick with your original choice, switch, or does it not make a difference?
Intuition suggests it makes no difference. At the beginning, each door carries an equal probability: ⅓. After the reveal, the remaining doors still do: ½.
So, while your odds have improved, they remain equal for each unopened door, and it still doesn’t matter which you choose. Right?
Wrong. Your best odds come from switching: there remains a ⅓ chance the car is behind the door you first picked, and there is now a ⅔ chance the Ferrari is behind the other closed door. Staying put is to commit to a choice you made when the odds were worse.
We know this thanks to Bayesian inference. There are two categories of door: ones you chose, and ones you didn’t. There is only one door in the “chosen” category and two in the “unchosen” category. At the start, you knew each door was equally likely to hold the car. This was the “prior probability”: a ⅓ chance per door or, categorising the doors, a ⅓ chance it was behind a chosen door and a ⅔ chance it was behind an unchosen door.
Then you got some updated information, but only about the “unchosen” category: one of those doors definitely doesn’t hold the car. Crucially, the host, who knows where the car is, will always find a goat among the unchosen doors, so the reveal tells you nothing new about the “chosen” category.
You can therefore update your prior estimates for the unchosen doors. One now has a zero chance of holding the car, so the other inherits the category’s entire ⅔ chance. All the odds of the unchosen category now sit behind its single closed door.
Therefore you have a better chance of winning the car (though not a certainty — one time in three you’ll lose) if you switch.
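If you don’t trust the algebra, brute force will do. A quick Monte Carlo sketch (Python; mine, not the game show’s) plays the game many times and tallies how often sticking and switching win:

    import random

    def monty_hall(trials=100_000):
        stick_wins = switch_wins = 0
        for _ in range(trials):
            car = random.randrange(3)      # door hiding the Ferrari
            choice = random.randrange(3)   # contestant's first pick
            # The host, knowing where the car is, opens a door that is
            # neither the contestant's choice nor the car.
            opened = next(d for d in range(3) if d != choice and d != car)
            # Switching means taking the one remaining closed door.
            switched = next(d for d in range(3) if d != choice and d != opened)
            stick_wins += (choice == car)
            switch_wins += (switched == car)
        return stick_wins / trials, switch_wins / trials

    stick, switch = monty_hall()
    print(f"stick: {stick:.3f}, switch: {switch:.3f}")  # ~0.333 vs ~0.667

Run it and the ⅓/⅔ split drops out almost immediately.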
Bayesian inference invites us to update our assessment of probabilities based on what we have since learned. We do this quite naturally in certain environments. When playing cards, for example: once the Ace has been played, you know your King is high. That is updating a Bayesian prior. We tend to be better at Bayesian reasoning in familiar circumstances, like card games, than in novel ones, like Ferrari-and-goat competitions.
Come on, come on
We are pattern-seeking animals. Narratisers. We are drawn to dramatic stories with heroes and villains. When key figures in a meme-cluster of ill-advised litigation tragically die in freak conditions, we demand an explanation. It does not seem satisfactory to conclude it was “just one of those things”.
We are tempted to update the probability assessment dramatically because of the timing and the freak nature of the storm in Sicily. But here Bayesian inference — the mathematical foundation of Autonomy’s IDOL machines — tells us a different story: sometimes patterns are just noise. Ships sink — especially ships with unusually tall masts and retractable keels. Sometimes joggers get hit by cars. Not often, but more often than corporations summon freak storms and coordinate random traffic to commit executive murder.
The sceptical principle “absence of evidence is not evidence of absence” holds only where we have not looked for evidence. If we have looked, and there is none, that is evidence of absence. Sometimes there are not enough data to update our “priors”. The rational response is to acknowledge the limitations of our inference — even when the coincidences seem to cry out for a more dramatic explanation.
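To see how unforgiving the arithmetic is, put rough numbers on it. Every figure below is plucked from the air purely to show the shape of the reasoning; it settles nothing:

    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
    # All numbers are invented assumptions, for illustration only.
    prior = 1e-7   # assumed prior: corporations orchestrating freak-weather murders
    lr = 1_000     # assume the coincidences are 1,000x likelier under foul play
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * lr
    posterior = posterior_odds / (1 + posterior_odds)
    print(f"{posterior:.6f}")  # ~0.000100: still, overwhelmingly, ghastly misfortune

Even granting the coincidence a spectacular likelihood ratio, the vanishingly small prior keeps the posterior pinned to the floor.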
HP’s folly was to draw too many conclusions from too little data during its Autonomy due diligence. We should avoid making the same mistake when analysing tragic events.