Reductionism

From The Jolly Contrarian
The JC’s amateur guide to systems theory

The preposterous idea that there is a single transcendent truth, that all things can be reduced, ultimately, to causal relationships between fundamental particles.

Bunk, but it provides succour to digital prophets and thought leaders who foresee a world of robot-assisted leisure and a universe which, of its own initiative, rewrites the laws of physics, and so on.

The irreducibility of things

We are, as the JC frequently complains, in a swoon to the reducibility of all things.

This usually involves converting all the irreducible things that we do and that happen to us into numerical data points. Numbers submit easily to aggregation, symbolic manipulation, calculation, and statistical technique: means, modes, medians, standard deviations and so on.

But “things that we do and that happen to us” — henceforth, “things” — do not. They are not numbers. They are unique, four-dimensional, sociological constructions. They exist partly in the universe, partly in our minds, and partly in what Iain McGilchrist might call the “betweenness” — that immaterial cultural and linguistic layer that lies between us. Things are ineffable.

Reducing them even to words involves some loss of information. Reducing them to numbers even more. This is not a matter of mere data compression. We cannot restore this information by reversing the symbolic operations, the way we could restore “things that we do and that happen to us” from “things” as defined in this essay. We cannot restore the ineffable once we have reduced it to data. We can mimic it, but that is different.

We can run statistical operations on data in a way we cannot on “things”. But statistical analytics is only interesting insofar as it tells us something about things.

Observation 1: statistical manipulation of “things” depends first on reduction.

Observation 2: The reduction of things to numbers converts physical things that are different into abstract things that are the same.

This is the singular benefit of datafication. To simplify a complex artefact down to a number, or set of numbers, is to symbolise it. We can process symbols. Manipulate them. Subject them to Turing operations.

But we have switched domains: we have left the ineffable offline and gone online. We have left the world of the signified and entered that of the signifier. [1]

Assigning a number to a thing is no less a creative linguistic operation than giving it a name. The calculations we perform on that number tell us about the mathematical properties of that number.[2] They do not tell us anything about the artefact it signifies. This is easiest to see with an average: the average height of the passengers in this carriage tells us nothing about any single passenger’s height. Yet so much of the modern world measures against the average!
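
A toy sketch in Python makes this concrete. The heights below are invented: the mean and standard deviation it prints are facts about the set of numbers, not about any passenger.

```python
from statistics import mean, stdev

# Invented heights, in cm, for seven hypothetical passengers.
heights_cm = [152, 160, 168, 175, 183, 191, 199]

print(mean(heights_cm))   # ~175.4, a property of the set of numbers
print(stdev(heights_cm))  # ~16.8, likewise
# No passenger need be 175.4 cm tall. The mean describes the numbers,
# not any person they signify.
```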

We say the average is an emergent property of the group, the way we say that wetness is an emergent property of a group of water molecules. But is it?

We harvest information from artefacts, convert it into data, generalise it, manipulate it mathematically, and then apply it back to similar artefacts. A statistical method is legitimate if it applies to identical artefacts. We suppose it to be largely legitimate if it applies to similar artefacts.

Dice are not machined perfectly. But they are similar. The broad principles of probability apply to them generally, roughly.
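
A minimal sketch of the dice arithmetic, in Python. The fair-die figure is exact; the weights for the imperfect die are invented, to show that a merely similar die obeys the same arithmetic only generally, roughly.

```python
import random

def p_consecutive_sixes(n: int) -> float:
    """Exact probability of n sixes in a row on a perfectly fair die."""
    return (1 / 6) ** n

def estimate_with_imperfect_die(n: int, trials: int = 200_000) -> float:
    """Monte Carlo estimate for a hypothetical, slightly lopsided die."""
    weights = [1, 1, 1, 1, 1, 1.05]  # invented machining imperfection
    hits = 0
    for _ in range(trials):
        rolls = random.choices(range(1, 7), weights=weights, k=n)
        hits += all(r == 6 for r in rolls)
    return hits / trials

print(p_consecutive_sixes(2))          # 0.0277...
print(estimate_with_imperfect_die(2))  # close, but not equal: similar, not the same
```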

But “similar” is a word, and therefore a value judgment. It exists in the domain of signifiers, not the signified. We are similar in that we are all Homo sapiens. But that similarity is not enough to draw conclusions about our breakfast preferences.

In the same way that we can calculate the probability of rolling consecutive sixes, so, it seems, can we calculate the probability of rain tomorrow, a cut in stamp duty in the spring, or a thirty-point intraday drop in the NASDAQ.

This depends on the artefacts being, in the first place, sufficiently and relevantly similar. The sides of a die are, to a large degree. Clouds and weather patterns are, to a lesser degree. The conditions propelling the NASDAQ (humans) are not.

But we notice regularities in the behaviour of the market and we impute to them regularity all the same. And once we do this we can dispense with the messy, ineffable, incalculable domain of the signified, and perform our operations in the clean, tidy, nomological world of signifiers. We move from the physical to the synthetic.

For numbers are alluring. They are under our control. They behave. They bend to the spreadsheet’s will. The spreadsheet’s will is our will.

Except, as David Viniar’s immortal words remind us, the events these numbers represent — the territory for which they are a map — are wont to have other ideas.

“We were seeing things that were 25-standard deviation moves, several days in a row.”[3]

Rolling dice is not like the stock market.

The map and the territory

Mr Viniar’s model, he hoped, would tell him something about the market’s behaviour. The model is the map, the market is the territory. We judge the success of a model by how close its prediction is to our subsequent lived experience. There is a natural dissonance: models are drawn from past experience, and that is singular, static and unalterable. It is dead. Our future experience is, as far as we know, none of these things.

You would not expect a “twenty-five sigma” day even once in several lifetimes of the universe. Goldman’s model was in effect saying: this kind of event will not happen.

This would be the equivalent of all the molecules in a cup of tea spontaneously jumping to the left at the same moment. The molecules are bouncing around randomly (Brownian motion, right?) and so conceptually they could all jump left at once,[4] but the sheer odds of every single molecule doing so at once are so infinitesimally small that it would never happen in several billion lives of the universe. Neither the cup nor the tea in it would last that long, of course.

But that is the scale of likelihood of a twenty-five sigma event.
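
The arithmetic is easy enough to check. A sketch in Python, under the Gaussian assumption such a model implicitly makes, taking the universe’s age as 13.8 billion years:

```python
import math

# One-sided tail probability of a daily move of 25 or more standard
# deviations, assuming normally distributed returns.
p = math.erfc(25 / math.sqrt(2)) / 2
print(p)  # ~3e-138

# Expected wait for one such day, expressed in lifetimes of the universe.
universe_age_in_days = 13.8e9 * 365.25
print(1 / p / universe_age_in_days)  # ~6e124 universe lifetimes
```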

That Mr Viniar thought there were several such days in a row — in a market history measured in decades, not universe lifetimes — must mean the model was wrong.[5]


References

  1. Note: the simulation hypothesis depends on these two domains being identical, because they are, for all intents and purposes, mathematically identical. This sleight of hand is subtle enough to have eluded the notice of people as smart as Neil deGrasse Tyson.
  2. There is a sort of numerology about this. The letters in Adolf Hitler’s name, when divided by his mother’s birth month and multiplied by his father’s age at his birth, add up to 666!! They don’t? Then that cannot be his real father!
  3. Explaining why the vampire squid’s flagship hedge funds lost over a quarter of their value in a week, in August 2007.
  4. It may be that, conceptually, they couldn’t: Brownian motion depends on collisions. For all I know, this implies that half the molecules are jumping the other way.
  5. It was, for reasons we explore elsewhere.