Data modernism

A prelude to the great delamination.

There is a strand of modernist thinking, flowing from Robert Moses and Le Corbusier, that there is an optimisable configuration for human interaction and that it can be derived from a rigorously scientific, or at least mathematical, method: that the only obstacle to implementing it has been the lack of a sufficiently powerful machine to run the calculation.

Data modernism? Or post-modernism?

An initial objection: in Moses’ classical high-modernist view there is a central, top-down, beneficent dictator who has, in good faith, derived a theory from deterministic first principles; a sort of cogito ergo sum begets income tax and rice pudding begets a mechanised modernist way of life. Data modernism dispenses with the need for the beneficent dictator, or at any rate yields that position to a more or less ineffable algorithm. We don’t know how it works, how it gets to its conclusions, but we are fixed with the conviction that, being the massed output of the wise crowd, it has greater intelligence than any one of us.


That time has now arrived, or is close at hand: the means is at our disposal. We now have the processing power to take massive amounts of unstructured data — “noise” in the vernacular — and from it extrapolate a signal. We don’t necessarily understand how the algorithms extrapolate a signal; they just do — this inscrutability is part of the appeal: there is no “all-too-human” bias[1] — but there is a belief, stretching from paid-up Randian anarcho-capitalists through to certified latter-day socialists, that we can solve our problems with data.

Now data, as it comes, is an incoherent, imperfect, meaningless thing. It is the pre-theatre chat; a “hubbub”: made up of millions of individual communications, conversations and interactions, all of which have their own (possibly imperfect) meanings between their participants, but which taken as a whole have no particular meaning at all.

Imagine taking every one of the pre-performance conversations between all the patrons at the Saturday matinee performance of Eureka Day at The Old Vic[2] — that meaningless hubbub — and summarising it into a single sentence, designed to reflect what “the theatre was thinking”. Then you feed that single confabulated sentence back to all the theatre patrons and say “this is the conversation which the theatre was having. Now, which side were you on?” People will tend to take sides, and will invest themselves in that conversation.

But, remember, the hubbub was just noise all along. None of the individual conversations had anything to do with each other. All had their own, independent meanings. They are immune to aggregation.

We say “we have unconscious biases and they inform our reactions”. Well, no shit.

To extract signal from noise is to filter, limit, compress and selectively amplify on the presumption that there is a signal; that the hubbub is something like a de-tuned radio, or that we are looking for pulsars, quasars and intelligent life on the SETI array. But we are not. There isn’t always a signal. The SETI array is a bad metaphor: there we are trying to tease out a bilateral signal that really is there from a spectrum of other kinds of radiation that are qualitatively different but broadcast on the same frequency. With the human hubbub there is a spectrum of unconnected communications and no real “signal”. We are not trying to isolate a single conversation out of all the others — that would be the direct analogy — but to extract an aggregated message that is not actually there, and to treat it as an emergent property of all those conversations. This is a different thing entirely. There is no emergent property of millions of unrelated conversations. The result is brown, warm and even: maximum entropy.
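
To make the point concrete, here is a toy sketch (invented conversation counts, invented "topics", nothing more authoritative than that) of what aggregating thousands of unrelated, locally meaningful conversations into one global reading actually yields:

```python
# Toy illustration only: thousands of independent "conversations", each with its
# own locally coherent stance, averaged into one global "signal".
import random

random.seed(42)

N_CONVERSATIONS = 10_000   # unrelated bilateral exchanges in the hubbub
N_TOPICS = 5               # hypothetical positions a summariser might score

def conversation_stance():
    # Each conversation cares strongly about one topic and not at all about the rest.
    stance = [0.0] * N_TOPICS
    stance[random.randrange(N_TOPICS)] = random.choice([-1.0, 1.0])
    return stance

conversations = [conversation_stance() for _ in range(N_CONVERSATIONS)]

# The extracted "signal": the per-topic mean across the whole room.
aggregate = [
    sum(c[topic] for c in conversations) / N_CONVERSATIONS
    for topic in range(N_TOPICS)
]

print([round(a, 3) for a in aggregate])
# Every component sits near zero: locally meaningful, globally flat.
# Maximum entropy, give or take.
```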

To make something out of nothing is to deliberately bias. It is to carve David out of a marble block. Bias creates meaning. There may be local meanings — maybe — based on local interactions and echo chambers but these are informal, incomplete, and impossible to delimit.
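
And, conversely, a sketch of bias creating meaning (again an invented illustration: white noise with no signal in it by construction, run through a bog-standard moving-average filter):

```python
# Toy illustration only: filter pure noise and watch "trends" appear.
import random

random.seed(7)

noise = [random.gauss(0.0, 1.0) for _ in range(500)]   # no signal, by construction

WINDOW = 50
smoothed = [
    sum(noise[i - WINDOW:i]) / WINDOW
    for i in range(WINDOW, len(noise))
]

def longest_same_sign_run(values):
    # Length of the longest stretch of consecutive values on the same side of zero.
    best = run = 1
    for previous, current in zip(values, values[1:]):
        run = run + 1 if (previous > 0) == (current > 0) else 1
        best = max(best, run)
    return best

print("Longest same-sign run in the raw noise:", longest_same_sign_run(noise))
print("Longest same-sign run after the filter:", longest_same_sign_run(smoothed))
# The filtered series drifts in long, story-shaped waves that simply are not in
# the underlying data. The filter carved them out, David-from-marble style.
```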

So we tend to “extrapolate” central figures from random noise: economic growth. The intention behind expressed electoral preference. Average wages. The wage gap. Why the stock market went up. That the stock market went up: these are spectral figures. They are ghosts, gods, monsters and devils. They are no more real than religions just because they are the product of “science” and “techne”.

We have, on occasion, some convenient proxies, but they are just proxies: for example, in an election, a manifesto. Without a manifesto, a binary vote for a single candidate in a local electorate (I am assuming first-past-the-post, but in honesty it isn’t wildly different for proportional representation) tells us nothing whatever about the individual’s motivation to vote as she did. A manifesto helps, by a process of deemery.

Did every Conservative voter read the party’s manifesto? Almost certainly, no. Did every Conservative voter who did read it subscribe to every line? Again, almost certainly no. Did anyone subscribe to every line in it? Perhaps, but by no means certainly. So, can we legitimately infer uniform support for the Conservatives’ manifesto from all who voted Conservative? No. We only do by dint of the political convention that those who vote for a party are deemed to support a manifesto (if one is published). But even that convention is a spectre. And where your vote is an issue-based referendum, there is not even a manifesto. Who knows why 33 million people voted for Brexit? Who could possibly presume to aggregate all those individual value judgments into a single guiding principle? There were 33 million reasons for voting leave. They tell us nothing except... leave.
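
A back-of-the-envelope sketch of why (taking the 33 million figure at face value, and treating each voter’s motivation as its own distinct thing):

```python
# Back-of-the-envelope only: how much can one binary vote tell us about why
# the voter cast it?
import math

possible_motivations = 33_000_000          # one distinct reason per voter, say
bits_to_identify_one = math.log2(possible_motivations)   # roughly 25 bits
bits_in_a_binary_vote = 1.0                               # leave / remain

print(f"Bits needed to single out a motivation: {bits_to_identify_one:.1f}")
print(f"Bits carried by the vote itself:        {bits_in_a_binary_vote:.1f}")
# The vote records the which, and nothing whatever about the why.
```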

And yet in the delaminated Onworld — especially as it feeds back its simplified “signal” and thereby amplifies it — we draw our battle lines and attack based on these invented signals. We take them, and make them our own. We truck in archetypes of our own devising.[3]

So, to take the issue du jour — fools rush in, etc. — how you feel about gender identity might depend on how you envisage the quintessential gender-fluid individual: if you see an exotic, beautiful, fragile, elfin, teenaged creature of beguiling androgyny, you will see trans people as harmless, vulnerable and in need of all the protections society can offer. If your personal archetype is a six-foot male self-identifying to compete in women’s sport, or to access women’s changing rooms, you will see trans people as predatory and dangerous.

The argument between people holding alternative visions will be fruitless.

Yet such patently ludicrous arguments animate the public squares in the Onworld.

Hence the delamination: the online world is a world of extruded, ghoulish signals aggregated from the unfiltered noise of discourse. The offline world — can we call it the offworld? — is a world of bilateral conversations, one on one. A world of shades, nuance, detail, richness, complexity and — for the most part — civility.

Feedback loops: feeding that signal back into the memeplex, without necessarily surveilling it or taking anything out of it. This would include machine learning, AI and so on.

See also

References

  1. At least, until the algo goes rogue and becomes a Nazi.
  2. Real-life example, needless to say.
  3. Our personal conceptualisations of archetypes never quite map to the world: hence the “Google Disappointment Effect”, when an image search (or AI prompt) never quite returns the image you had in mind. This is a variation of the “no average fighter pilot” effect.