Talk:Reg tech

It’s hard to believe but in 1980 there were no copy machines in Hungary or anywhere else in Eastern Europe. The Communist party banned photocopiers because of how quickly they could distribute information potentially harmful to the regime.

Misha Glenny, The Rise of the Iron Men (episode 1: Public Enemy No. 1)

Digitisation of information: a history

In his fabulous 1970s television series Connections, James Burke traced the origins of the modern computer back to the Jacquard loom, the revolutionary silk-weaving machine Joseph Marie Jacquard perfected in 1804. Jacquard used removable punch-cards to “program” the weaving process, in much the same way a self-playing piano reads a punched card to play a tune.

On Burke’s telling, Jacquard’s loom was an important waystation in the development of the programmability and plasticity of machines. For the first time, one could change what a machine made without having to physically re-engineer the machine itself.

Jacquard’s loom was “digitally programmable” in the sense that it reliably carried out specific actions by reference to preconfigured instructions, encoded on card, without human intervention.

We might be tempted to call the data on these cards “symbols”, but they are not: a symbol is a representation of something else. It requires interpretation – an imaginative connection, made in the reader’s brain, between the symbol and the thing it represents. But in “reading” the punched card, Jacquard’s loom did not interpret anything. The cards contained unambiguous, binary instructions to carry out specific functions – namely, to create the intricate oriental patterns so sought after in the salons of haute couture in 19th-century Paris.
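A toy sketch of the distinction, in Python (the card layout and every name here are invented for illustration): each hole in a row maps directly to one mechanical action, with no symbol table, no interpretation, and no memory of rows already woven.

    # A toy "Jacquard" card reader. Each row of the card is a tuple of
    # 0s and 1s: a hole (1) lifts the corresponding warp hook; no hole (0)
    # leaves it down. There is no symbol table and no storage: each row
    # is acted on and immediately forgotten. All names are hypothetical.
    def weave(card_rows):
        for row in card_rows:                    # one row per weft pass
            lifted = [i for i, hole in enumerate(row) if hole]
            yield lifted                         # the "output": which hooks lift
            # nothing is retained; the row is already gone

    pattern_card = [(1, 0, 0, 1), (0, 1, 1, 0), (1, 1, 0, 0)]
    for pass_no, hooks in enumerate(weave(pattern_card), 1):
        print(f"pass {pass_no}: lift hooks {hooks}")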

Jacquard’s machine offered more than just flexibility. It separated the information comprising a given textile weave from the machine that made it. This information was imprinted on the cards. Information – binary data needing no intelligence, interpretation or skill to process – was suddenly portable. Jacquard could send instructions for his latest weave from Paris to Lyon by popping a box of cards on one of those new French mail coaches,[1] without having to transport a bloody great automated loom down there with it.

So, to the stages of computerisation of human tools. It is a slow process of extracting the instructions from the basic engineering of the tool – the “substrate”.

“A device for reliably carrying out a defined function” is not a bad general definition of a “machine”, and there were certainly machines before 1804: the innovation was to abstract the instructions from the basic engineering of the tool. You cannot extract the “instructions” built into the engineering of a scythe (when force is applied, use sharp blade to cut wheat) or a water-wheel (the blades are set at an angle so that, when water flows over them, each blade is pushed sideways, turning a crank and rotating the wheel).

A water-wheel will work without human intervention but, as long as the water keeps flowing, won’t stop. This is its embedded, natural coding: “<when water pressure is applied here, rotate this way>”.

Before Jacquard’s loom, you couldn’t “reprogramme” a machine without reengineering it. You could beat a sword into a ploughshare, but then it would be a ploughshare and not a sword.

Now compared to a MacBook hooked up to a 3D printer, a Jacquard loom wasn’t very plastic: you could change patterns easily enough but whatever instructions you fed into it, all it would spit out was fabric.

But the instructions are embedded in the material form – the substrate – of the card. The card is an input, but the machine cannot commit it to memory. (In a sense it does: the immediate output is a function of the input, but that data flows uncaptured through the machine.) Analog input, analog output.

Two kinds of plasticity

A computer, by contrast, has a lot of flexibility and limited dependence on physical engineering. It is much more “plastic”, though not infinitely so: a great deal of change in outcome is possible without changing the engineering.

Two kinds of plasticity here though:

  • Physical: you still need to hook up a peripheral to produce sound, music or printed paper (though some of that is integrated), and the degree of engineering in those peripherals is the same (and as unplastic), though with 3D printers we are getting close to covering the conceivable spectrum: almost all engineering can be achieved by code.
  • Digital: the power and flexibility of the computer is its ability to store, manipulate and augment code – it has “memory”. (Here’s a point, though: unlike human memory, computer “memory” is not symbolic. It simply stores digits without assigning them any symbolic meaning. It therefore neither requires nor allows a symbolic narrative – it generates no meaning out of its stored code, as the sketch below illustrates.)
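A minimal illustration of that point, using only the Python standard library: the same four stored bytes are “remembered” perfectly well, but whether they mean an integer, a float or some text is imposed entirely from outside the store.

    import struct

    # The same 32 bits, "remembered" three different ways. The memory
    # itself assigns no meaning; each interpretation is imposed from
    # outside by whatever reads it.
    raw = bytes([0x42, 0x48, 0x65, 0x79])

    as_int = struct.unpack(">I", raw)[0]    # one unsigned integer
    as_float = struct.unpack(">f", raw)[0]  # one IEEE-754 float
    as_text = raw.decode("ascii")           # four characters: "BHey"

    print(as_int, as_float, as_text)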

Bad anthropomorphic metaphors

A Jacquard loom has no writable “memory”. It is a just-in-time production. It can directly translate the card’s instructions into its machinery in an unplastic way, but it can’t do anything else with the code. It can’t transform the instructions: they are hard-coded into the substrate of the punch card. They can’t be separated from it. The loom doesn’t copy or store this information. It just ingests it, mechanically processes it, and instantly forgets it, as soon as the card moves on. If the punch card fails or tears, the machine won't work. The card is the code.

But “memory” is, in any case, a bad metaphor, implying as it does a conceptualisation of the past, present, and future. Like the Jacquard loom, a computer is stuck forever in the moment.

Computer code has no tense.

More about substrates. A whole other level

But the transition from encoding in engineering to encoding on punch cards didn’t, immediately, cause the information explosion: the transition happened in 1804; the explosion came much later. It took a further separation of the information, as an intellectual concept in itself, from the substrate in which the information is, for the time being, embedded.

Now that would be the trick: if a machine could take the information, in an abstract sense, off the card, and somehow hold onto it, away from the card, in some kind of conceptual internal storage system, then it would have separated the pure code from its temporal physical articulation. This sounds like a dualism – between a frail mortal body and an enduring immaterial spirit – doesn’t it?

If we could just set the spirit free, then our machine would no longer need the card on which the information came. Having copied the code in its abstract essence, the machine could re-copy it, or delete it, or splice it, or amplify it, or augment it, or adjust it. The machine itself could manipulate the code in unlimited ways, without human intervention.

This writable, readable storage place for the abstract essence of information we call “random access memory”.[2] With it, a machine can run, and build, algorithms without external input.
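A toy sketch of what that separation buys, with an entirely made-up miniature instruction set: once the program is just data in writable memory, the machine can copy it, and the program can even rewrite itself – no card required.

    # A toy stored-program machine. The instruction set is invented for
    # illustration. Because the program lives in ordinary writable
    # memory, it can be duplicated like any other data, and one
    # instruction can rewrite another.
    program = [
        ("PRINT", "hello"),
        ("SET", "PRINT", "goodbye"),   # rewrite later PRINTs' argument
        ("PRINT", "hello"),
    ]

    def run(memory):
        pc = 0
        while pc < len(memory):
            op, *args = memory[pc]
            if op == "PRINT":
                print(args[0])
            elif op == "SET":          # the program modifies itself
                target, new_arg = args
                for i in range(pc + 1, len(memory)):
                    if memory[i][0] == target:
                        memory[i] = (target, new_arg)
            pc += 1

    run(list(program))   # copying is trivial: it's just data
                         # prints "hello", then "goodbye"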

But note: “read”, “interpret”, “memory” – these are poor metaphors. They humanise a process that is nothing like human reading, interpreting or remembering. Running and building algorithms on code is a mechanical process. It is complicated, but not complex. A machine recognises patterns and, while it can associate values with patterns, it assigns no “meanings”.

Computers can’t do metaphor.

The manipulation of abstract code

In the 1940s, memory and processing power were expensive. Though machines could hold code abstractly, in working memory, they didn’t do it much: it was cheaper to read from and write to external physical formats (tape, punched cards, magnetic disks). (This is a sort of memory too, but a slow and clunky one.) Code replication grew from the inside out. Machine outputs were all physical. But machines began to be sequenced into networks, and communication of data between machines became a priority. Once machines could output abstract code (rather than writing it to disk) it was only a matter of time.

It was only in the 1970s that digital code started to leave machine networks altogether. Before this, the only means of extracting information from a substrate existed at the very edge of the network: either a punched-card reader – at least somewhat digital in bearing, in that it assigned a single output to a single input and, barring a defect in the card or the machine, was reliable – or a human – analogue in every way, bringing her own cognitive architecture and narrative to the text to make sense of it. Here, what the substrate contained and what the human took from it were quite different things – the latter richer, augmented, but unpredictable. Humans can read like machines, but aren’t good at it. They’re slow, expensive, get distracted, and make mistakes.

But at the edge of the network, that interpretative ambiguity remains. You cannot eliminate ambiguity. It is who we are.

Disintermediation of the substrate: aerogrammes and amplifiers

For the first time, the information in a process – the content – became completely abstracted from the form of that process. This was a proper dislocation: a punctuation of the equilibrium. Overnight, everything – operating protocols, institutions, economics, functions, parameters – was shot to hell.

Classic example: email. The unit cost of a single communication went from paper, ink, envelope, stamp, postal system, and three days, with total loss of access to the information encoded in the communication, to zero, with total preservation of the encoded information. The entire distribution infrastructure built around written communication, which had evolved lazily over thousands of years, was vaporised, and the information encoded in written communications was preserved in digital form.

Another example: amplifier emulation. The faithful recording of a guitar amplifier required the amplifier, the speaker cabinet, the room, the microphones, the placement, and pre- and post-amplification signal processing. Each amplifier, microphone and room type had different tonal characteristics. This cost a lot of money, and took a lot of room. Along came Line 6,[3] which released the “Pod”, a kidney-shaped device about the size of a kebab. It could emulate 12 amplifiers, 10 speaker cabinets, four microphones, five room types and an array of signal-processing options – reverb units and multi-effects – in a single palm-sized unit.

Later, that became a “plugin”, with no physical form at all: it didn’t even need standalone hardware. And it was far more flexible. You record the dry signal and adjust the processing in post-production. Want to switch out a close-miked Marshall for a Mesa Boogie in a big hall after the recording? A ribbon mic rather than a condenser? No problem.

But, but, but – the idea, once had, was not patentable, and was easily replicable. For the bedroom guitarist the Pod was an order of magnitude better than one guitar amp (headphones!), but while Line 6 remains a significant brand, it has many competitors. Its stuff is cheap. It hasn’t dominated the market it established. No-one has.

The fax machine

Disintermediation of distribution: while information was still embedded in a substrate, communication meant moving the substrate around. Phase one: move the plough. Phase two: move the letter, or the punched card.

The “pure information” could be extracted at the utter edges of the physical network: in our brains, through our eyes and ears. To get information from one brain to another meant transporting it in a substrate.

Classic example: paper. The time, cost and risk of loss in moving paper around just to impart the information on it was huge.

The tools required to encode information onto a substrate were elaborate and dependent on significant manufacturing supply chains – even paper is monstrously complex to produce – while consumer-level machines – typewriters, tape recorders – were rudimentary and would not scale: you could type a letter easily enough, but not typeset a book, much less mass-produce or distribute it.

If publishing writing was hard, good luck putting out your own music or a movie. You relied on a dedicated business with the scale to handle your project: a publisher, a record label or a movie studio.

The rise of the rentiers

Hence, intermediation: these businesses of scale are configured to take a cut from your revenues – this is the beauty of scale: you can skim. The client takes the risk, the client gets the upside, but we take a cut come what may. Now, there’s a balance: the production and distribution costs of a triple album with gatefold sleeve leave the label with a non-trivial risk of capital loss, pushing its margins up and making its model more like a co-equity holder’s: it has skin in the game, for which it wants appropriate reward. Therefore intermediaries created a significant entry barrier. In the analogue universe this was inevitable.

The point of the tedious history lesson? This all changed forever when information went digital. The ramifications weren’t immediately clear. Some of them are still emerging – we are at an early stage in the information revolution (and the late stages are no more likely to go Ray Kurzweil’s way than Jules Verne’s).

What did digitisation achieve?

The separation of information from substrate.

The cost of information transport (and storage) became trivial, the speed (as good as) instantaneous.

Information became manipulable.[4] It could be cheaply and quickly duplicated, and non-destructively edited and disassembled. This enabled packet switching, a key design feature of the world wide web: documents could be disassembled into tiny packets of basic data, addressed and sent over a network to any point on it, where they could be easily reassembled. Thus, no need for centralised hubs with dedicated cabling to each user, as in a PABX network: each person connects to their nearest node, and routing logic handles the rest. Without this capacity, the internet would not work.
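A minimal sketch of the idea (the address, packet size and payload are invented for illustration): the document is cut into small, individually addressed and numbered packets, which may take any route and arrive in any order, and are reassembled at the edge.

    import random

    # Cut a document into small, individually addressed packets...
    def packetise(document, dest, size=8):
        chunks = [document[i:i + size] for i in range(0, len(document), size)]
        return [{"dest": dest, "seq": n, "data": c} for n, c in enumerate(chunks)]

    packets = packetise("The card is the code, but the code left the card.", "10.0.0.7")

    # ...let the network deliver them in any order it likes...
    random.shuffle(packets)            # stand-in for independent routing paths

    # ...and reassemble them by sequence number at the destination.
    reassembled = "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
    print(reassembled)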

The internet has disintermediated the world

Why is this important? Because each person is now connected to the whole world. The logistical problem of accessing the market, and publishing to the world, is solved for ever. (The problem morphs from reaching the world to getting the world to listen.)

This is the end-to-end principle: put the complexity[5] at the edges of the network. Keep it digital at all points until (and if) the user wants a physical artefact in her hands. In many cases that will be never: streaming audio; eBooks; video.

Thus the internet, literally, has disintermediated the whole world. Logistical barriers to entry have gone. If I have a connected device I can reach any person directly, without the necessary agency of anyone. This has proven immensely disruptive already – to the print media, music industry, publishing industry and so on.

Other barriers to entry

Now there are other barriers to entry that disintermediation has not removed.

Some businesses enjoy natural monopolies, where their very strength is a function of their scale and reach: no-one will want to waste time searching for goods on a platform with few sellers, and no-one will list on a platform with few buyers.

But natural monopolies are fickle: eBay had a natural monopoly, which it blew. Amazon, Craigslist, Alibaba ate its lunch, and Google overlaid its own shopping service.

You can earn a monopoly through a superior product, which grants you comprehensive reach: Microsoft has few realistic competitors for its office suite, even though it has lost monopolies elsewhere (browsers, operating systems) where tools are more generic or it was never able fully to capitalise on its advantage (Windows – there were always alternatives from UNIX, Apple, Linux and now Android).

Apple likewise enforced a monopoly through excellent, vertically integrated hardware and a closed o/s.

If you can defend a monopoly, you can charge rent; but if you overcharge rent, you can’t defend your monopoly

But in any case, if you can claim a natural monopoly, you can seek rent: users pay an additional tariff for access to the platform. You monetise your monopoly.

Right. So we were talking about the opportunities the full-scale revolution happening to established business lines and large institutions in financial services offers them to get tech. The main driver is as intuitively obvious as it is ultimately misconceived: buy cheap software to disintermediate – that is, make redundant – the meatware. Throw out expensive humans, buy cheap apps.

Now the provider of said software has a problem and an opportunity. It is the paradox of the digital revolution. There is scope to render massively valuable services – to perform tasks of unimaginable complexity, things of which one could literally not even conceive before the Great Data Bifurcation – and relatively easily. The technology – “machine learning” and “natural language processing” are the twin tools – is largely open source. The potential value: massive. The potential cost: small.

So – huge profit opportunity, right? But here’s the thing. The benefit and the value lie not in the programming of the software – if one coder in Ljubljana can figure it out, they all can – but in the operation of the software: what it does to a corpus of documents; the application to which it is put.

Analogy: however brilliant the design – and it is brilliant – it is not the Stratocaster that sold Pink Floyd a million records, but what David Gilmour did with it. Leo Fender got the same amount of money – a couple of hundred bucks – for the Black Strat as he got for every other model that rolled off the line in 1964. The reality is that 999 out of a thousand Stratocasters never made their owner a penny. (I have five, and it’s true of every one of mine.)

But financial services firms aren't like delusional bedroom guitar players. Each stands to make (save) millions of dollars a year from the creative deployment of fairly rudimentary tech.

Tech entrepreneurs can be forgiven for seeing an angle here. What's it worth to Wickliffe Hampton to save 100 million dollars? More than a couple of hundred bucks, right?

But if you sell rudimentary software you give that opportunity away for a couple of hundred bucks.

So what to do? What is the business model?

Rent-seeking. Find a way to reconfigure your software as a service. Put it in the cloud.

So, in a disintermediated world, when should I rent? And when should I buy?

The answers are the same as for insurance. If you have infrequent need for an expensive service, you should rent it. A rock band goes into the studio once a year. They want maximum capacity, state-of-the-art equipment, perfect acoustics: hundreds of thousands of dollars of equipment, and real estate in a quiet neighbourhood. You rent that. The band tours 40 weeks of the year. They own their instruments – high-end and personal (the Black Strat, right?), relatively portable and affordable – but rent the PA – generic, replaceable, bulky and expensive to transport. If they played a residency in the same venue for a year, maybe they’d buy.
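The same intuition can be put as a one-line break-even test. All the figures below are invented for illustration: rent while your expected usage costs less than owning; buy once it doesn’t.

    # Rent-vs-buy as a break-even test. Every figure here is made up.
    def should_rent(days_needed_per_year, daily_rent, annual_cost_of_owning):
        # annual_cost_of_owning: amortised purchase price plus upkeep and storage
        return days_needed_per_year * daily_rent < annual_cost_of_owning

    # The band records two weeks a year: renting the studio wins.
    print(should_rent(14, daily_rent=1_000, annual_cost_of_owning=100_000))  # True

    # A year-long residency in one venue: owning the PA starts to make sense.
    print(should_rent(280, daily_rent=500, annual_cost_of_owning=60_000))    # False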

  1. Je suis obligé, la Wikipédia.
  2. Strictly, RAM still lives in a substrate, but we can now pick it up and copy it so effortlessly, relatively, that there might as well be none. But if there really were no substrate at all; if information could exist “in the æther”, apropos nothing — that’s kind of a God-like situation: how do we know it doesn’t? Why doesn’t it? Shouldn’t just the possibility of an algorithm therefore bootstrap itself into existence? Anyhow, it would make an awesome science fiction novel.
  3. https://reverb.com/news/past-is-present-amp-modeling-and-the-contemporary-player
  4. Picturesque speech note: “manipulable” means “able to be tinkered with”. “Manipulatable” means “gullible or easily led”.
  5. and the digital/analog conversion