Talk:Reg tech: Difference between revisions
Revision as of 18:38, 24 August 2020
it's hard to believe but in 1980 there were no copy machines in Hungary or anywhere else in Eastern Europe. The Communist party banned photocopiers because of how quickly they could distribute information potentially harmful to the regime.
- Misha Glenny, The Rise of the Iron Men: (episode 1: Public Enemy No. 1.)
Digitisation of information: a history
In his fabulous 1970s television series Connections, James Burke traced the origins of the modern computer back to the Jacquard loom, the revolutionary silk-weaving machine Joseph Marie Jacquard perfected in 1804. Jacquard used removable punch-cards to “program” the weaving process, in much the same way a self-playing piano reads a punched card to play a tune.
In Burke’s telling of it, Jacquard’s loom was an important waystation in the development of the programmability and plasticity of machines. For the first time, one could change what a machine made without having to physically re-engineer the machine itself.
Jacquard’s loom was “digitally programmable” in the sense that it reliably carried out specific actions by reference to preconfigured instructions, encoded on card, without human intervention.
We might be tempted to call the data on these cards “symbols”, but they are not: a symbol is a representation of something else. It requires interpretation — an imaginative connection, made in the reader’s brain, between the symbol and the thing it represents. But in “reading” the punched card, Jacquard’s loom did not interpret anything. The cards contained unambiguous, binary instructions to carry out specific functions — namely, to create the intricate oriental patterns so sought after in the salons of haute couture in 19th century Paris.
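The card-as-bare-instruction idea can be caricatured in a few lines of code. This is a toy model, not a faithful one (a real Jacquard mechanism drove many hooks per row): each row of holes is a binary instruction, executed directly, with no interpretation anywhere.

```python
# Toy model of a Jacquard punch card: each row is a binary instruction,
# one bit per warp thread. A hole (1) lifts the thread; no hole (0)
# leaves it down. The pattern of holes *is* the instruction.

def weave_row(card_row):
    """Translate one card row directly into thread positions."""
    return ["up" if hole else "down" for hole in card_row]

card = [
    [1, 0, 1, 0],  # row 1: lift threads 1 and 3
    [0, 1, 0, 1],  # row 2: lift threads 2 and 4
]

for row in card:
    print(weave_row(row))
```

The point of the sketch: `weave_row` never asks what the pattern “means”. The mapping from hole to action is fixed in the machinery; there is nothing to interpret.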
Jacquard’s machine offered more than just flexibility. It separated the information comprising a given textile weave from the machine that made it. This information was imprinted on the cards. Information — binary data needing no intelligence, interpretation or skill to process — was suddenly portable. Jacquard could send instructions for his latest weave from Paris to Lyon by popping a box of cards on one of those new French mail coaches,[1] without having to transport a bloody great automated loom down there with it.
So, to the stages of the computerisation of human tools. It is a slow process of extracting the instructions from the basic engineering of the tool — the “substrate”.
“A device for reliably carrying out a defined function” is not a bad general definition of a “machine”, and there were certainly machines before 1804: the innovation was to abstract the instructions from the basic engineering of the tool. You cannot extract the “instructions” built into the engineering of a scythe (when force is applied, use sharp blade to cut wheat) or a water-wheel (blades are set at an angle so that, when the water flows, each blade is pushed sideways, turning a crank and rotating the wheel).
A water-wheel will work without human intervention but, as long as the water keeps flowing, it won’t stop. Its natural coding is embedded in the engineering: “<when water pressure is applied here, rotate this way>”.
Before Jacquard’s loom, you couldn’t “reprogramme” a machine without reengineering it. You could beat a sword into a ploughshare, but then it would be a ploughshare and not a sword.
Now, compared to a MacBook hooked up to a 3D printer, a Jacquard loom wasn’t very plastic: you could change patterns easily enough, but whatever instructions you fed into it, all it would spit out was fabric.
But the instructions remain embedded in the material form of the card — the substrate. The card is an input, but the machine cannot commit it to memory. (In a sense it can: the immediate output is a function of the input, but the data flows through the machine uncaptured.) Analogue input, analogue output.
Two kinds of plasticity
A computer has a lot of flexibility and only limited dependence on its physical engineering. It is much more “plastic”, though not infinitely so: a great deal of change in outcome is possible without changing the engineering.
There are two kinds of plasticity here, though:
- Physical: you still need to hook up a peripheral to produce sound, music or printed paper (though some of that is integrated), and the engineering in those peripherals is as involved, and as unplastic, as ever — though with 3D printers we are getting close to the end of the conceivable spectrum: almost all engineering can be achieved by code.
- Digital: the power and flexibility of the computer is its ability to store, manipulate and augment code – it has “memory”. (Here’s a point, though: unlike human memory, computer “memory” is not symbolic. It simply stores digits without assigning them any symbolic meaning. It therefore neither requires nor allows a symbolic narrative — it generates no meaning out of its stored code.)
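The non-symbolic character of computer memory can be demonstrated directly: the same four stored bytes read equally well as an integer, a fraction or text, and the memory itself is entirely indifferent to which. The symbolic reading is supplied from outside.

```python
# The same four bytes, read three ways. Memory stores digits, not
# meanings: nothing in the stored data prefers one reading over another.
import struct

raw = b"ABCD"  # four arbitrary bytes

as_int = struct.unpack(">I", raw)[0]    # read as a 32-bit unsigned integer
as_float = struct.unpack(">f", raw)[0]  # read as a 32-bit float
as_text = raw.decode("ascii")           # read as text

print(as_int, as_text)
```

Run it and the “meaning” of `b"ABCD"` turns out to be whatever the reader decided in advance to look for.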
Bad anthropomorphic metaphors
A Jacquard loom has no writable “memory”. It is just-in-time production. It can directly translate the card’s instructions into its machinery in an unplastic way, but it can’t do anything else with the code. It can’t transform the instructions: they are hard-coded into the substrate of the punch card, and can’t be separated from it. The loom doesn’t copy or store this information. It just ingests it, mechanically processes it, and instantly forgets it as soon as the card moves on. If the punch card fails or tears, the machine won’t work. The card is the code.
But “memory” is, in any case, a bad metaphor, implying as it does a conceptualisation of the past, present, and future. Like the Jacquard loom, a computer is stuck forever in the moment.
Computer code has no tense.
More about substrates: a whole other level
But the transition from encoding in engineering to encoding on punch cards did not itself cause the information explosion: that transition happened in 1804. The explosion took a further separation of information from substrate: abstracting the digital information from the physical medium in which the code is embedded and through which it interacts with the physical world.
Now that would be the trick: if a machine could take the information, in an abstract sense, off the card and copy it onto an internal storage system, you would have separated the pure code from its physical articulation. Thus (a) the machine wouldn’t thereafter need the card – having taken what it needed from it, copied it and stored it separately – and (b) having copied it once, the machine could copy it again: you have the ability to replicate, splice, augment or adjust it. The machine itself can manipulate the code. The information on the card can even contain instructions to overwrite itself. Note again: read, interpret, memory – these are poor metaphors, because they humanise a process that is nothing like the human activity of reading, interpreting or remembering. Code processing is a far more mechanical and less complex undertaking than real reading, interpretation or memory. A machine recognises patterns in code, and while it can associate other values with those patterns, it assigns them no “meaning”.
Computers can’t do metaphor.
The manipulation of that abstract code didn’t happen all at once. In the 1940s, memory and processing power were very expensive, so even though machines could replicate code digitally, they didn’t: it was cheaper to rely on physical memory formats (tape, punched cards, disks). Code replication grew from the inside out. Machine outputs were all physical, but machines began to be sequenced into networks, and communication of data between machines became a priority. Once machines could output abstract code directly (rather than by writing it to disk), it was only a matter of time.
It was only in the late 1970s that digital code started to leave machine networks altogether. Before this, the only means of extracting information from substrate existed at the very edge of the network: the punched-card reader – at least somewhat digital in bearing, in that it assigned a single output to a single input and, barring a defect in the card or machine, was reliable – or the human – analogue in every way, bringing her own cognitive architecture and narrative to the text to make sense of it. Here what the substrate contained and what the human took from it were quite different things: the latter richer and augmented, but unpredictable. Humans can read like machines, but aren’t good at it: they’re slow, expensive, get distracted, and make mistakes. Important NB: at the edge of the network, that interpretative ambiguity remains. You cannot eliminate it. It is who we are.

Disintermediation of the substrate: for the first time, the information in a process was bifurcated from the form of that process. The classic example is email. With a letter, the unit cost of a single communication was paper, ink, envelope, stamp, a postal system and three days – with total loss of the sender’s access to the information encoded in the communication. With email, that cost went to zero, and the information encoded in the letter was preserved in digital form.

Dramatic reduction in the cost, and increase in the fidelity, of manufacture: take the Line 6 Pod. It combines 12 amplifiers, 10 speaker cabinets, reverb units and multi-effects into a single palm-sized unit. Later, that became a plugin with no physical form at all: it didn’t even need standalone software. But – and here is the rub – the idea, once had, was not patentable, and was easily replicable. Line 6 remains a significant amp emulator, but it has many competitors, and it has not been able to dominate the market it established. Not just cheaper, but infinitely more flexible: you record the dry signal, and all signal processing can be adjusted in post-production.
Want to switch out a close-miked Marshall for a Mesa Boogie in a big hall after the recording? No problem. Note the change in cost: each classic amp costs thousands of dollars, and the equipment would take up a large room the size of – well, a recording studio. Few home enthusiasts could afford to rent that even for a day; those who wanted this kind of flexibility at all had no choice but to rent it for the recording period. A Line 6 Pod costs a couple of hundred bucks.

Disintermediation of distribution: the inability to separate information from substrate meant that communicating information once involved moving the substrate around. Information could only be extracted at the utter edges of the network – in our brains – and getting it there meant transporting the substrate in which it was embedded. The classic example is paper. The time, cost and risk of loss in moving paper around just to impart the information on it was huge. The tools required to encode information onto a substrate were elaborate and dependent on significant manufacturing supply chains – even paper is monstrously complex to produce – while consumer-level machines (typewriters, tape recorders) were rudimentary and would not scale for complex projects: you could write a letter but not typeset a book, much less mass-produce or distribute it. And good luck recording music or editing a movie in the comfort of your own home. The result: to achieve any of these outcomes you needed a dedicated business with the scale to handle your project – a recording studio, a distribution company, a movie studio.

Intermediation: all of these businesses are configured, in one way or another, to take a cut of the revenues accruing to your project. You take the risk, you get the upside, but we take a cut come what may.
There’s a balance: the cost of producing movies and records leaves the intermediaries exposed to a non-trivial risk of capital loss, pushing their margins up and converting their model into something more like co-equity holders. They have skin in the game, for which they want appropriate reward. Intermediaries therefore created significant barriers to entry, but in the analogue universe they were inevitable.

The point of the tedious history lesson? This whole dynamic shifted forever with the advent of digitised information. The ramifications weren’t immediately clear. Some of them are still emerging – we are at an early stage of the information revolution (and its late stages are no more likely to go Ray Kurzweil’s way than Jules Verne’s).

What did digitisation achieve? The separation of information from substrate. The cost of transporting (and storing) information became trivial, and the speed of transport instantaneous. No need to shift waxen tablets or paper, and minimal real estate required (server farms can store more information than the printed equivalent you could fit on the planet: imagine printing out the internet, all mail traffic and all backups).

Information became manipulable. Not only could it be cheaply and quickly duplicated, it could be nondestructively edited and disassembled. This enabled packet switching, a key design feature of the efficient distributed network. Documents could be disassembled into tiny packets of basic data, addressed, and sent over a network to any point on it, where they could easily be reassembled. Thus no need for centralised, PABX-style hubs with huge infrastructure and dedicated cabling to each user: each person connects to her nearest node, and routing logic handles the rest. Without this capacity, the internet would not work.

Why is this important? Because each person is now connected to the whole world. The logistical problem of accessing the market, of publishing to the world, is solved for ever.
(The problem morphs from reaching the world to getting the world to listen.) This is the end-to-end principle: put the complexity, and the digital-to-analogue conversion, at the edges of the network. Keep it digital at all points until the user wants a physical artefact in her hands. (In many cases that will be never: streaming audio, eBooks, video.)

Thus the internet, literally, has disintermediated the whole world. Logistical barriers to entry have gone. If I have a connected device, I can reach any person directly, without the necessary agency of anyone (bar, for most of us, our own local ISP or telecom provider). This has already proven immensely disruptive – to the print media, the music industry, the publishing industry and so on.

Now, there are other barriers to entry that disintermediation has not removed. Certain businesses invite natural monopolies, where the very strength of the business is a function of its scale and comprehensive coverage. The classic case is the online auction: no-one wants to waste time searching for goods on a platform with few sellers, and no-one wants to list on a platform with few buyers. eBay was a prime mover with a natural monopoly, which it blew: Amazon, Craigslist and Alibaba ate its lunch, not helped by Google overlaying its shopping service. Other businesses get monopoly status through a superior product, which then morphs into comprehensive coverage. Microsoft has few realistic competitors for its office suite, even though it has lost monopolies elsewhere (browsers, operating systems) where the tools are more generic or it never gained enough of a genuine monopoly to force the point (with Windows, there were always alternatives from UNIX, Apple and Linux, and now Android). Apple likewise enforced a monopoly through an excellent, vertically integrated, closed OS and hardware. In any case, if you can claim a natural monopoly, you can seek rent: users pay an additional tariff for access to the platform. You monetise your monopoly. Right.
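The packet-switching idea described above – disassemble, address, send by whatever route, reassemble – can be sketched in a few lines. This is a minimal illustration, not a network protocol; the function names are invented.

```python
# Minimal sketch of packet switching: a message is split into small,
# individually numbered packets, which may arrive out of order over
# different routes; the receiver reassembles them by sequence number.
import random

def to_packets(message, size=4):
    """Disassemble a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the message regardless of arrival order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("the whole world, disintermediated")
random.shuffle(packets)  # simulate packets taking different routes
print(reassemble(packets))
```

Because each packet carries its own address and sequence number, no central hub needs to hold the whole message, or even see it: any route that delivers all the packets will do.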
So: we were talking about the opportunities this full-scale revolution offers established business lines and large institutions in financial services to get tech for themselves. The main driver is as intuitively obvious as it is ultimately misconceived: buy cheap software to disintermediate – that is, make redundant – the meatware. Throw out expensive humans, buy cheap apps.

Now the provider of said software has a problem and an opportunity. It is the paradox of the digital revolution. There is scope to render massively valuable services – to perform tasks of unimaginable complexity, things of which one could literally not even conceive before the Great Data Bifurcation – and relatively easily. The technology – “machine learning” and “natural language processing” are the twin tools – is largely open source. The potential value: massive. The potential cost: small. So – huge profit opportunity, right?

But here’s the thing. The benefit and the value lie not in the programming of the software – if one coder in Ljubljana can figure it out, they all can – but in the operation of the software: what it does to a corpus of documents; the application to which it is put. An analogy: however brilliant the design – and it is brilliant – it is not the Stratocaster that sold Pink Floyd a million records, but what David Gilmour did with it. Leo Fender got the same amount of money – a couple of hundred bucks – for the Black Strat as he got for every other model that rolled off the line in 1964. The reality is that 999 out of a thousand Stratocasters never made their owner a penny. (I have five, and it’s true of every one of mine.)
But financial services firms aren't like delusional bedroom guitar players. Each stands to make (save) millions of dollars a year from the creative deployment of fairly rudimentary tech.
Tech entrepreneurs can be forgiven for seeing an angle here. What's it worth to Wickliffe Hampton to save 100 million dollars? More than a couple of hundred bucks, right?
But if you sell rudimentary software you give that opportunity away for a couple of hundred bucks.
So what to do? What is the business model?
Rent-seeking. Find a way to reconfigure your software as a service. Put it in the cloud.
So, in a disintermediated world, when should I rent? And when should I buy?
The answers are the same as for insurance: if you have infrequent need for an expensive service, you should rent it. A rock band goes into the studio once a year. They want maximum capacity, state-of-the-art equipment and perfect acoustics: hundreds of thousands of dollars of equipment, and real estate in a quiet neighbourhood. You rent that. The band tours 40 weeks of the year. They own their instruments – high-end, personal (the Black Strat, right?), relatively portable and affordable – but rent the PA – generic, replaceable, bulky and expensive to transport. If they played a residency at the same venue for a year, maybe they’d buy.
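The rent-or-buy intuition reduces to a back-of-envelope calculation. The numbers below are invented for illustration; the test is simply whether annual usage clears the break-even point.

```python
# Back-of-envelope rent-or-buy test with hypothetical numbers: owning
# beats renting once days of use per year exceed the break-even point.

def break_even_days(purchase_cost, life_years, annual_upkeep, daily_rent):
    """Days of use per year above which owning beats renting."""
    annual_cost_of_owning = purchase_cost / life_years + annual_upkeep
    return annual_cost_of_owning / daily_rent

# Hypothetical studio: $300,000 to build, good for 10 years, $15,000 a
# year to maintain, versus $1,500 a day to rent.
days = break_even_days(300_000, 10, 15_000, 1_500)
print(days)  # 30.0: a band recording two weeks a year should rent
```

On these (made-up) figures, a band that records for a fortnight a year is nowhere near the 30-day threshold, so it rents; a studio operator booked out most of the year is well past it, so it owns.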
- ↑ Je suis obligé, la Wikipèdia.