Template:M intro technology robomorphism
{{quote|
“If to call a man a wolf is to put him in a special light, we must not forget that the metaphor makes the wolf seem more human than he otherwise would.”


:—Max Black, ''{{pl|https://web.stanford.edu/~eckert/PDF/Black1954.pdf|Metaphor}}'' (1954)
}}
{{qd|Robomorphism|/rəʊbəʊˈmɔːfɪz(ə)m/|n|
The interpretation of human behaviour, activity, or interactions in terms better suited to a [[Turing machine]].}}


{{drop|R|ecently, Matt Bradley}} made an {{plainlink|https://www.linkedin.com/pulse/why-humanise-machines-matthew-bradley-adgce|interesting point}} about our gallop towards [[AI]]: whatever we do — however tempting it is — we should be careful not to anthropomorphise when we talk about machines.
Machines don’t ''think'', and they don’t “''hallucinate''”. Hallucinating is a special, [[I am a Strange Loop|strangely-loopy]] phenomenon. Generative models don’t “see things that aren’t there”: they generate text based on statistical patterns in their training data. When they produce incorrect information, it is due to limitations in their training data and architecture, not because they’re experiencing false perceptions like a human might. No one has yet explained how human perception works, but we remain confident that machines don’t perceive and are not self-aware.
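The point about statistical patterns can be made concrete with a toy sketch (an invented illustration, not how any production model is actually built): a bigram sampler that picks each next word purely in proportion to observed frequencies. The corpus and function names here are hypothetical; the point is that there is no perception or belief anywhere in the loop, only counting and sampling.

```python
import random
from collections import Counter, defaultdict

# A toy "language model": it has no beliefs or perceptions, only
# counts of which word followed which in its training text.
corpus = "the cat sat on the mat the dog sat on the mat".split()

# Tally every adjacent word pair: bigrams[prev][next] = frequency.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(prev, rng):
    # Sample the next word in proportion to how often it followed
    # `prev` in training -- pure statistics, nothing "seen".
    candidates = bigrams[prev]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights)[0]

rng = random.Random(0)
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word, rng)
    out.append(word)
print(" ".join(out))
```

Whatever sentence comes out, every transition in it was lifted from the training text; when the output is nonsense, that is a limitation of the data and the method, not a machine “seeing things”.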
 
But the converse is just as important: we ''really'' shouldn’t ''evaluate'' humans by standards suited to machines — we shouldn’t ''robomorphise'' ourselves, in other words.  


{{quote|“To a man with a computer, everything looks like a computer.”<ref>[[Neil Postman]]’s similar observation: “To a man with a computer, everything looks like ''data''.”</ref>
:— JC}}


If we benchmark humans against computers, we will lose. Machines do predictable things more easily, cheaply and quickly than we can. They always have done. They work best in constrained, predictable environments configured to eliminate [[uncertainty]] and minimise waste.


====Why humans don’t work in manufacturing any more====
{{Drop|I|n the west}}, humans have been largely absent from production lines for decades — hence the colossal pivot to [[bullshit job|service industries]]. It isn’t that we ''don’t'' make things any more: it’s just that, usually, we don’t need ''humans'' to do it. We have optimised and configured production to work by itself.

The service industry underwent a similar process as it ballooned with refugees from manufacturing. We automate expensive, manual processes wherever we can.


But matters requiring human intervention — judgment — have been harder to fully automate. The typical reactions have been to [[triage]] — delay human intervention as long as possible — or to “self-serve” — a rather cheeky means of getting the consumer to do part of the work for you, or to absorb its expense.


Ryanair is the master of this — it even charges customers for getting their booking wrong!

Penalising humans for their own human frailty is the ultimate in chutzpah. But it happens within and without organisations: the internet enabled the outsourcing of service to the customer or consumer. Think of what became of the typing pool.

''Latest revision as of 10:21, 11 November 2024''