Template:M intro technology robomorphism

Recently, Matt Bradley made an interesting point<ref>[https://www.linkedin.com/pulse/why-humanise-machines-matthew-bradley-adgce ''Why Humanise The Machines?'']</ref> about our gallop towards [[AI]]: whatever we do, we should be careful of anthropomorphising when we talk about robots. Machines don’t think, and they don’t “''hallucinate''”. Hallucinating is actually a pretty special, [[I am a Strange Loop|strangely-loopy]] phenomenon. No-one has yet come up with a compelling account of how any kind of human consciousness works — cue tedious discussions about [[Cartesian theatre|Cartesian theatres]] — but we do know this is categorically not what machines do. We should not let habits of language conflate the two. Down that road lies a false sense of security.


But the converse is just as important: we should not describe what humans do in terms meant for machines — we shouldn’t ''robomorphise'', or evaluate human performance in terms suited to machine behaviour. This is to make a grievous category error.  
 
This does not just invite [[technological redundancy]] — which, in its place, is no bad thing: we do not lament the demise of proofreaders to delta-view, any more than we lament the demise of ''les saboteurs'' at the hands of the [[Jacquard loom]].
Technological redundancy has its place — clearing out the tedium and bureaucratic sludge in well-understood, low-risk, standard processes — but that seems not to be the aspiration of the thought leaders.
“Any sufficiently advanced technology is indistinguishable from magic,” said Arthur C. Clarke. The jury is out on whether AI is different, but it is not unreasonable to proceed on the assumption that it is not, and foolish to do otherwise.
