The problem with solving problems
Anyone who has watched an Adam Curtis documentary — those who haven’t, fill your boots, they’re great — will be familiar with the idea that the Western world has moved from theocracy to ideocracy to bureaucracy over the last century.
Into the religious vacuum created by Darwin and Nietzsche came the disastrous utopian ideologies of the left and right, which failed pretty catastrophically (Fascism in 1945, Communism by 1990), leaving only management of the polity as a meaningful model of governance. This is more or less how government has worked since the 1970s, aided immeasurably by the emergence of technology as a neutral, sterile tool of organisation.
Economies that embraced this technologisation — specifically the US, Japan, East Asia and Western Europe — generally did a lot better than those that stuck with utopian models of centralised control, to the point where by 1990 political scientist Francis Fukuyama felt confident enough to declare the end of history itself:
What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of postwar history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.[1]
History, it turned out, had other ideas. Confidence in data turned out to be its own kind of utopianism. It led naturally to cybernetics, a kind of systems thinking that clings to the idea that a complex system can be centrally managed and controlled as long as it is well enough understood.
But here’s the thing. Take a complex system — for the sake of argument, a relatively stable and predictable one, like a transport network in which, say, 1,000,000 commuters must travel from different origins to different destinations. It is one thing to “program” a single agent to navigate that system effectively, quite another to centrally manage the trajectory of most of the agents, much less every agent in it. We can see that the optimal management model is to give each agent a simple set of instructions — heuristics — and let it manage its own progress across the network.
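A toy sketch of the kind of per-agent heuristic this implies might look something like the following, assuming an invented three-stop network and made-up delay figures purely for illustration:

```python
# A toy commuter agent; the stops, network and delay figures are invented.
def next_step(current_stop, destination, network, delays):
    """Choose the next stop using simple local heuristics, with no central plan."""
    options = network[current_stop]            # stops reachable from here
    if destination in options:                 # if a direct hop exists, take it
        return destination
    # otherwise prefer the reachable stop with the smallest reported delay
    return min(options, key=lambda stop: delays.get(stop, 0))

# Example: travelling from "Home" to "Work" across a three-stop toy network.
network = {"Home": ["A", "B"], "A": ["Work"], "B": ["A", "Work"]}
delays = {"A": 5, "B": 2}
print(next_step("Home", "Work", network, delays))  # prefers "B", the less delayed stop
```

Each agent runs a couple of cheap local checks of its own; nothing needs to know the state of the whole network.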
There are a limited number of “if-then” calculations needed to centrally manage one agent through this system: they are the same calculations the agent could make itself. Every commuter manages such a calculation every day, while listening to podcasts and reading books, after all. But the complexity of the required centralised instructions increases geometrically the more agents are being centrally controlled. Unexpected contingencies emerge from the large numbers. This is a matter of simple mathematics. We are trying to calculate the trajectories of particles in a sandstorm.
If the calculation complexity of a single grain of sand is 1.1, say, then the complexity of calculating for a million grains of sand is 1.1 to the power of 1,000,000, a number with more than 40,000 digits. The calculation quickly becomes intractable.
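For a rough sense of the arithmetic (the 1.1 “complexity per grain” figure is the author’s illustrative assumption, not a measurement), a couple of lines of Python show how fast the exponent runs away:

```python
import math

# Illustrative assumption from the text: each grain multiplies complexity by 1.1.
per_grain = 1.1
grains = 1_000_000

# 1.1 ** 1_000_000 overflows a float, so work with logarithms instead.
exponent = grains * math.log10(per_grain)
print(f"1.1^{grains:,} is roughly 10^{exponent:,.0f}")  # about 10^41,393
```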
But it also leads to a culture of homogenisation, whereby peripheral differences between people, groups, opinions and values are trimmed from the computer models to make them at all meaningful and manageable. Diverse perspectives are “bucketed”, that is to say, and this has the unintended consequence that, for reasons of ease and convenience, people are happy enough to eschew these peripheral differences and gravitate towards the nearest convenient bucket.
But — and this is how the laws of unintended consequences work — that leads, for convenience and ease of technical management, to a further consolidation of the available models, with the smaller ones folded into the bigger ones. This convergence works both ways: just as technology better manages fewer categories, so do we. We may struggle to have our peripheral interests and requirements catered for, but since they are peripheral, it is not the end of the world if no-one caters for them. So we tend to drift, through convenience, into the larger buckets technology wishes in any case to put us in. Hence the falsification of the Long Tail: even though the internet’s networked information revolution has made it possible to cater for every possible variation and interest, in practice it has done the opposite. The “long tail” has evolved into a “fat head”.
Yet a conviction that we are on a kind of threshold — that prior problems have been solved and all is finally about to come right — persists, notwithstanding plenty of evidence to the contrary. Progressive Western liberal democracy is undergoing something of a middle-order collapse, which you would not expect if it were the ultimate answer to everyone’s problems.
[1] Francis Fukuyama, “The End of History?”, The National Interest, Summer 1989.