Template:M intro systems financialisation: Difference between revisions

That much is obvious: it is not the lesson we should be drawing, as we should already know it. It is already imprinted in our cultural fabric, however determined the [[modernist]]s may be to forget it.


The lesson is this. If we mistake the map for the territory — if we organise our interests and judge our outcomes exclusively by reference to the map, ''we thereby change the territory''. Gradually, by degrees, the territory converges on the map. This is excellent news for the machines and those who employ them, as it makes their job easier, but amongst the rest of us it is only convenient for cartographers. It is bad for the people in the territory whose interests the cartographers supposedly represent. It leads to two kinds of bad outcomes.


{{L1}}It makes life easier for “algorithmic” business units that can only work in terms of numbers — call these “machines”. It enhances financialisation by reducing ineffability. The benefit of network nodes that ''can'' handle ineffability — which tend to be more expensive and less predictable; we call these [[subject matter expert]]s, or even humans — is diminished. Now, of course, we can assign humans to algorithmic roles — where there is peripheral intractability in a network function, we have no choice — but as the territory redraws itself to the map, we can further marginalise that ineffability, deploy cheaper humans and, at the limit, replace them altogether. Where intractability is hard but not important — interpreting unstructured inputs, as on a consumer helpline, and triaging easy, low-value queries — techniques like AI can already handle it. By agreeing to behave like machines, to be categorised in numerical terms, to be financialised — we ''surrender'' to machines.
Things that can’t be ranked and counted — that aren’t “legible” to this great high-powered information processing system — have no particular [[value]] ''to the system, in the system’s terms'': it cannot digest them or extract value from them by processing, which is all it does. This is so whether or not they have any value to ''us''.
=====Value=====
''[[Value]]'' is a function of cultural and linguistic context — the richer the language, the more figurative it is, the more scope there is for imagination, and the more scope for alternative formulations of ''value''. Conversely, inflexible languages, with little scope for imaginative reapplication, leave much less room for articulating values — it is much harder to capture all that richness of meaning. The closer a language is to one-way symbol processing, the more it resembles ''code''. Machine languages cannot handle ineffability. (This is a highly relativistic sense of value, by the way. Guilty as charged.)


If you render human experience in machine language, much less in the constrained parameters of internal financial reporting standards, you are losing something. You are losing ''a lot''.