Talk:The Bayesian
Latest revision as of 16:22, 25 September 2024

Cambridge brainboxes and their supercomputers

Autonomy, a British tech darling of the noughties, was founded by Mike Lynch and Richard Gaunt[1] a pair of Cambridge neural networks postdocs, in 1996. Autonomy's flagship product was the IDOL platform: an “intelligent data operating layer” using advanced — for the time — pattern recognition techniques and Bayesian inference to extract data from the sort of unstructured information with which most corporations are inundated — documents, correspondence, contracts, instant messages, email and phone data. IDOL promised a revolutionary kind of business intelligence from a previously unnavigable swamp.

Now, awkward-looking brainboxes using monstrous machines powered by mystical tools like “Bayesian inference” to convert oceans of muck into premium brass had quite the cachet in 2005. It has quite the cachet now, come to think of it.

One obvious application for IDOL was in document management — a subject dear to the heart of any budding legal operationalist — so, unusually, we legal eagles, a prime target for Autonomy’s hyper-enthusiastic sales effort, had a catbird seat as a hurricane force corporate scandal unfolded. If you had to sum up the Autonomy marketing experience in a sentence: awesome pitch, pricey product, disappointing demo. Got tickets to White Hart Lane though.

This more or less describes Hewlett Packard’s experience when it acquired Autonomy in 2011. Swooning, no doubt, at Autonomy’s honeyed presentations about the future of machine intelligence and its advanced positioning in the field, HP paid US$11 billion — a 70% premium over the company’s stock market valuation. Plainly the market took a more sceptical view, but even that may have been optimistic, as events would demonstrate.

It may not be a surprise to the reader — though it was to HP’s executive — that the acquisition was a disaster: HP was unable to make the business workable and eventually wrote down its investment by $8.8 billion.

HP sued Lynch and his management team, accusing them of misleading HP into overpaying for the company through fraudulent accounting, and overstated earnings. Lynch countered that HP had misunderstood the product, integrated it poorly into the HP business and then mismanaged it after acquisition.

The litigation rumbled on for years. In 2022, having spent a further US$100m prosecuting it, HP — somewhat pyrrhically — won the case. Lynch was ordered to pay a billion dollars in damages. HP had claimed five.

Now, as Matt Levine often reminds us, in the US, everything is securities fraud. While the civil case continued, Lynch and Autonomy’s former Vice President of Finance Stephen Chamberlain were charged with criminal wire fraud and conspiracy and, in May 2023, extradited to the US to face criminal trial. The standard of proof being much higher in a criminal proceeding, things went better for the Autonomy executives.

In June 2024, Lynch and Chamberlain were acquitted on all counts.

After the acquittal, Chamberlain returned to his home in Cambridgeshire while, by way of celebration, Lynch treated his daughter and some close friends to a fortnight on his superyacht, the Bayesian, in the Mediterranean.

On 17 August 2024, Stephen Chamberlain was hit by a car while jogging in Longstanton, Cambridgeshire. He died of his injuries two days later.

Early on the morning of the same day, while at anchor half a mile off the port of Porticello on the north coast of Sicily, The Bayesian was hit by a freak storm and, in the space of about 16 minutes, capsized and sank, tragically killing Lynch, his daughter and five others.

Conspiracy theory

The improbable circumstances of these accidents, within two days of each other and just weeks after their acquittals, raised eyebrows. It seemed, if nothing else, to be an extraordinary coincidence, although the mainstream media quickly rationalised that an actual conspiracy here was highly unlikely — no one had anything obvious to gain, for one thing, and orchestrating a freak storm at all, let alone one powerful enough to capsize and sink a 55-metre, 550-ton yacht, is beyond the capacity even of the deep state. If you wanted to “off” a business executive, there were far easier ways of doing it.

Yet, still, the unfiltered maw of uninformed public speculation — from which I write, dear correspondent — found this all very fishy.

How could a $40m state-of-the-art superyacht, crewed by experienced mariners, just sink, in sixteen minutes, while at anchor, during a summer storm?

Conspiracy v irony: Bayesian reasoning

We hear more and more about “Bayesian reasoning”. This method of statistical reasoning, derived from the work of Thomas Bayes, an (at the time) obscure 18th-century Presbyterian minister from Tunbridge Wells, now dominates modern information technology and was important enough to Mike Lynch’s business for him to name his superyacht in its honour.

Bayesian reasoning
/beɪːzˈiːən ˈriːzᵊnɪŋ/ (n.)

A method of statistical inference that updates a probability as more information about the hypothesis becomes available. By making comprehensive use of the available information, Bayesian reasoning:

  1. improves risk assessment and forecasting;
  2. is used in artificial intelligence to allow machine learning algorithms to learn and adapt from their data more effectively;
  3. helps evaluate evidence and enables more informed judgments, ensuring that decisions are based on a comprehensive analysis of all available information.

Autonomy’s IDOL platform used Bayesian inference to manage and analyse unstructured data, enabling it to make probabilistic predictions and improve information retrieval.

Bayesian inference is not, of course, flawless: prior probabilities can still turn out to be misleading. Correct application of Bayes’ principles might have told Lynch as they dropped anchor at Porticello that his yacht was most unlikely to sink in the night.
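The mechanics of that update are simple enough to show in a few lines. Here is a minimal sketch of a single Bayes’ theorem update in Python; every probability is invented for illustration and drawn from no real marine statistics whatsoever:

```python
# A toy Bayes' theorem update. Every number here is invented for illustration.
# Hypothesis H: "the yacht sinks tonight". Evidence E: "a violent storm blows up".

def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
    return p_e_given_h * prior_h / p_e

# Prior: sinkings at anchor are vanishingly rare, say one night in a million.
prior = 1e-6
# Storms are far more common on nights a yacht sinks than on nights it doesn't.
p = posterior(prior, p_e_given_h=0.9, p_e_given_not_h=0.01)
print(f"posterior P(sinks | storm) = {p:.6f}")  # still well under 1 in 10,000
```

Even with strong evidence, the minuscule prior keeps the posterior tiny — which is the sense in which Bayes would have reassured Lynch at Porticello, and the sense in which the prior turned out to be misleading.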

Goats and Ferraris

There is no better example of Bayesian inference than the Monty Hall problem. This is either trite, or completely Earth-shattering, depending on whether you have come across it before or not. It indicates, too, how tremendously bad our instinctive grasp of probabilities is:

You are a game-show contestant. The host asks you to choose a prize from behind one of three doors. She tells you: behind one door there is a Ferrari. Behind the other two are goats. [Why goats? — Ed] Choose your door.

You choose a door.

Before opening your door, the host theatrically opens one of the other two doors, and reveals a goat. She offers you the chance to reconsider.

Would you reconsider?

Intuition suggests it should not make a difference. At the beginning, each door carried an equal probability: ⅓. After the reveal, the two remaining doors still carry equal probabilities: ½ each. So, while your odds have improved, it still doesn’t matter: whether you stick or twist, you should be indifferent.

Bayesian inference shows that intuition to be wrong.

Staying put is to commit to a choice you made when the odds were worse. You have no more information about the door you chose: you already knew it might or might not contain the car. You do, however, know something new about one of the doors you didn’t choose. The odds as between the other two doors change, from ⅓ each to nil for the open door — it definitely doesn’t hold the car — and ⅔ for the other closed one, which still might.

The remaining probabilities are therefore ⅓ for your original choice and ⅔ for the other door. You have a better chance of winning the car if you switch.

It is true that a person who arrived now and was given the choice without your knowledge would calculate the probability at 50:50. That calculation would be wrong, because an important assumption in calculating probabilities — that the car and goats were randomly distributed between the doors — no longer holds. A third door has been non-randomly eliminated.

So you should switch doors. This proposal outrages some people, at first. Especially when explained to them at a pub, it outrages them later. But it is true.

It is easier to see if instead there are one thousand doors, not three, and after your first pick the host opens 998 of the other doors.
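The argument is also easy to check empirically. Below is a quick Monte Carlo sketch in Python (the door and trial counts are arbitrary):

```python
import random

def monty_hall_trial(n_doors=3, switch=True):
    """Play one round: the host opens all but two doors, always revealing goats."""
    car = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    # The host leaves closed your pick plus one other door: the car's door if
    # you missed it, otherwise a goat door. So switching wins exactly when
    # your first pick was wrong.
    if not switch:
        return pick == car
    if pick == car:
        return False  # switching away from the car always loses
    return True       # the only other closed door must hide the car

def win_rate(n_doors, switch, trials=100_000):
    return sum(monty_hall_trial(n_doors, switch) for _ in range(trials)) / trials

print(win_rate(3, switch=False))    # ~ 1/3: sticking
print(win_rate(3, switch=True))     # ~ 2/3: switching
print(win_rate(1000, switch=True))  # ~ 0.999: the thousand-door version
```

With a thousand doors the intuition snaps into place: your first pick was almost certainly wrong, so the one door the host leaves closed almost certainly hides the car.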

Bayesian inference invites us to update our assessment of probabilities based on what we have since learned. We do this quite naturally in certain environments — when playing cards, for example: once the Ace, King and Queen have been played, you know your Jack is high, and no-one can beat it — but not as a general rule.

Blame and exculpation

The blame and recrimination process started quickly after the accident. Italian police launched an investigation and raised the possibility of manslaughter or “culpable shipwreck” charges against the Bayesian’s skipper, chief engineer and the sailor on watch duty at the time of the incident. All were professional sailors.[2]

In the meantime, the yacht’s builder, The Italian Sea Group,[3] made a series of unusually strident public statements blaming the crew for mishandling the boat and thereby injuring the boat builder’s reputation. In September 2024 TISG launched, then quickly disowned, a civil suit against the Bayesian’s crew and owner — a company controlled by Mike Lynch’s widow, Angela Bacares — seeking compensation for reputational damage and loss of earnings, alleging among other things that the crew was inappropriately selected, that it did not make necessary preparations for the storm despite advance weather warnings, and that its actions during the storm contributed to the sinking.

TISG is itself under police investigation in connection with the tragedy, which may explain its precipitate launch of formal legal proceedings: one way of getting ahead of suspicion as a wrongdoer is to cast yourself as the victim.

Linear and systems: two competing theories

A conspiracy theorist is someone who’s never tried to organise a surprise party.

— John F. Kennedy

Hold this as a provisional hypothesis: We can bifurcate explanations of the world into linear theories and systems theories.

“Linear theories” view the world as a complicated, but essentially logical, system.[4] They may be forward- or backward-engineered. They may require great skill to navigate, but they are nevertheless, in theory, predictable, in the sense that all events in them are caused, have effects and operate according to a single, consistent set of rules. One can regard the behaviour of the system as a chain of causes and effects. There are, therefore, “root causes” for what happens, which dominate the incidental circumstances in which they operate. The inspirational CEO, the star striker, the bad apples who ruined it for everyone, the operator whose human error caused the air crash: these are linear explanations.

Now, the nature of complicated systems is that, while each may be an integral, whole, consistent system, we cannot always see the whole thing. Sometimes our root-cause regression reaches a dead, unsatisfactory end simply through lack of information. The natural response in a linear system is therefore to seek more information. The system has a complete state — there is a “bottom”:[5] we just need to find it. Where that information is available, we should get it.

Where it is not — where an event happens for which there is no apparent cause — inference is required. We must work backwards from what we know about the system to infer what happened.

Much of the time, this is easy to do. We are natural inference engines. All of science proceeds this way.

Conspiracy theories are typically linear explanations, in that they put ultimate blame (or credit) for a given state of affairs down to the intentional actions, be they malign or well-meant, of a limited number of disproportionately influential people.

“Systems theories” view the world as a non-linear complex system.[6] They attribute outcomes to the behaviour of a wider interlocking system of relationships, in which individuals’ acumen or motivations contribute to, but rarely determine, the outcomes the system produces. They are usually non-linear: in a complex system unexpected things can and do happen. This is generally not a single person’s fault, but an unexpected consequence of the design of the system. As systems experience unexpected consequences they tend to learn and adjust to them.

Old systems, having been around for longer and having been exposed to more variations in condition, are better stress-tested and therefore throw up fewer unexpected consequences than new systems.[7]


I will grant you at once that this is a wide conception indeed of “conspiracy theory”: it includes not just gunpowder plots and Russian bots in Western elections but the general idea that great art is the product of singular genius, commercial success is the outcome of exceptional leadership, and jazz is not just a succession of happy accidents.

By contrast, systems theory says, in a nutshell, it’s a bit more complicated than that. In the case of great artists and visionary business people, their input into the creative process is not discounted altogether but is instead aggregated with a great deal of other system information to generate an outcome. Shakespeare was indeed a genius, but would yet have died in anonymity were it not for his sponsors, publishers, patrons, theatres, actors, critics and audience: the magnificent cultural establishment that we now know as the Shakespeare canon contains a lot of stuff that was nothing to do with William Shakespeare.

Systems and paradigms

I rabbit on a lot on this site about power structures and paradigms. These are systems of political, scientific and cultural control.

Systemantics

The best place to start with systems theory is John Gall’s short, acerbic, funny and devastatingly incisive book Systemantics: The Systems Bible. Systems theory has an acronym, “POSIWID”: the “purpose of a system is what it does”. This, Gall gently points out, is by inevitable outcome not what those who designed the system had in mind. The System tends to oppose its own intended function, so to blame conspirators who occupy positions of ostensible influence and power within it is rather to miss the point. They are as much victims of systemantics as anyone else.

Operator error

We should not be surprised to hear operator error advanced as a cause. It is a clear, linear root cause — suitable for a linear theory of a complicated system, in which a trained expert fell short of a required standard — and it has the additional advantage of not challenging the overall integrity of the system. It was not management’s fault, nor a failure to supervise, nor the ship’s design, but component failure. Where the component that fails is an autonomous agent, there is only so much a system administrator can do. The strategic end-point is to design out the need for autonomous components in the system at all. Until very recently this has been an impossibly idealistic end-point, but with natural language models equipped with powers of, well, Bayesian reasoning (I told you there would be some irony in this post), perhaps that point is not so far away.

But as long as fallible human autonomous agents are still needed for edge-case tasks — for all the mindboggling technology aboard a state-of-the-art superyacht, it still needs a skipper, an engineer and a watch detail — the scope for human error is unavoidable.

Defamation

The claim was brought at Italian law, and withdrawn before anyone got much of a look at it, but it is fun to imagine it was essentially one of defamation:

Defamation
/ˌdɛfəˈmeɪʃᵊn/ (n.)

A false statement about a person, communicated to others, that harms the person’s reputation.

The statement must be untrue, specific enough to identify the person and published or shared with third parties such that it damages the person’s reputation or standing.

Can your careless conduct — not “representational” gestures; actually non-communicative acts, like steering (or not steering) a boat — be a statement?

The defamatory “statement”, we presume, would be that the boat was designed so badly that, even at anchor, a crew of competent professional sailors could not stop it foundering in a severe storm.

That allegation would certainly be defamatory to a shipbuilder. If it were untrue.

But the problem the shipbuilder has is that the ship did sink, at anchor, in a storm. This does not normally happen to well-designed ships. Presuming it was well designed, then either (i) the storm’s ferocity or (ii) the sailors’ negligence was truly unprecedented.

And here, again, Bayesian reasoning can help us. We have 3,500 years of Mediterranean seafaring experience. This plays out across two dimensions. The first is experience of the kind of weather conditions that can occur in the Mediterranean. Our “Lindy database” of freak events — which includes eruptions on Etna and in the Aeolian islands — is massive. Each of those experiences is buried deep in the pace layers of Mediterranean seafaring knowledge. We know the conditions shipbuilders must design for. The other dimension is the parameters of ship design. Ships falling comfortably within the tested limits of ship design are unlikely to be unsuitable for the conditions in the Mediterranean. The Bayesian was not unusually long, nor unusually heavy, nor was its retractable keel an unusual design: while a relatively new innovation, it is a specific advantage in the comparatively shallow and tranquil waters of the Mediterranean. So the Bayesian was in most respects not an unusual boat. But it was in one: at the time of its construction in 2008, the Bayesian’s mast was the tallest single yacht mast in history. The next tallest (one of the five masts on the Royal Clipper, constructed in 2000) stood at just 60 metres, some 12 metres shorter than the Bayesian’s 72-metre mast. And the Royal Clipper is a 179-metre-long, permanently keeled, square-rigged ship. The Bayesian’s mast, relative to its hull length and given its retractable keel, was extraordinarily disproportionate.

A third angle of prior probability is the acumen of the crew. We should review the historical record for situations in which crew failure was the sole attributed cause of the sinking of a ship at anchor.

This is really another way of asking: is it possible to design out of a ship the risk that it will be capsized and sunk purely by inclement weather? Can you solve for human failure, or is the nature of seafaring so dependent on human decision-making that it is impossible to design away the risk of human error?
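For what it’s worth, the odds form of Bayes’ theorem makes this kind of prior-probability reasoning mechanical. The sketch below uses wholly invented figures (no real marine-casualty statistics) just to show the shape of the calculation:

```python
# Odds-form Bayes: posterior odds = prior odds x likelihood ratio.
# All figures below are invented for illustration only.

def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

# Hypothesis A: "the design was at fault"; hypothesis B: "the crew alone was".
# Suppose design faults among proven hull forms are rare (prior odds 1:100),
# but a sinking at anchor is, say, 50 times likelier under a design fault
# than under pure crew error. Then:
odds = posterior_odds(1 / 100, 50)
print(f"posterior odds {odds:.2f}:1 -> P(design fault) = {odds / (1 + odds):.2f}")
```

The point is not the answer — the inputs are made up — but the discipline: an unprecedented rig does not prove a design fault; it merely moves the posterior, and by an amount you can argue about explicitly.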

  1. Interesting: while the Internet is, and always has been, awash with information about Mike Lynch, it contains almost nothing about Gaunt. No Wikipedia page, and no references at all, since he last showed up in the Sunday Times rich list in 2009. LLMs produce a picture of Mike Lynch, and ask you whether that is who you meant. Perhaps this in itself is a testament to his ability to protect unstructured data. Sayeth Claude:

    Richard Gaunt was indeed an early employee and senior executive at Autonomy, but his movements after the HP acquisition and subsequent controversy are not well documented in my database.

    Sayeth Gemini:

    I do not have enough information about that person to help with your request. I am a large language model, and I am able to communicate and generate human-like text in response to a wide range of prompts and questions, but my knowledge about this person is limited. Is there anything else I can do to help you with this request?

    Sayeth ChatGPT:

    Gaunt played a critical role in the development of Autonomy's technology but has since largely stayed out of the spotlight. There is little recent information suggesting his current activities, though it's likely he remains involved in the technology or software sectors, potentially through more private ventures or investments.

  2. https://apnews.com/article/italy-sicily-superyacht-sinking-investigation-7b26e40e9efd69c4d08e189093f67ad9
  3. Strictly speaking, the successor to the boat’s builder: Perini Navi, which constructed the Bayesian in 2008, had been acquired by TISG in 2021.
“Complicated systems” are bounded, interactive processes: they involve interaction with autonomous agents, but within fixed boundaries and according to preconfigured, known and static rules of engagement. All relevant information is available to, even if not necessarily known by, all participants in the system.
  5. In information theory, the “entropy” in a system is a function of the information capacity of the system. A given string of symbols has a maximum information capacity.
“Complex systems” are unbounded, interactive processes: they involve interaction with autonomous agents without boundaries, without pre-agreed rules, and where information is limited and asymmetric. Rules, boundaries and each participant’s objectives are dynamic and change interactively. They are impossible to predict.
  7. This is sometimes called the “Lindy effect”, but is also explained in terms of pace layering: old systems occupy deeper layers.