{{quote|{{d|Robomorphism|/rəʊbəʊˈmɔːfɪz(ə)m/|n|}}
The attribution of Turing machine characteristics to human behaviour, activity, or community interactions.}}
{{drop|R|ecently, Matt Bradley}} made an interesting point<ref>[https://www.linkedin.com/pulse/why-humanise-machines-matthew-bradley-adgce ''Why Humanise The Machines?'']</ref> about our gallop towards [[AI]]: whatever we do, we should be careful not to anthropomorphise when we talk about robots. Machines don’t think, and they don’t “''hallucinate''”. Hallucinating is actually a pretty special, [[I am a Strange Loop|strangely-loopy]] phenomenon. No-one has yet come up with a compelling account of how any kind of human consciousness works — cue tedious discussions about [[Cartesian theatre|Cartesian theatres]] — but we do know this is categorically not what machines do. We should not let habits of language conflate the two. Down that road lies a false sense of security.
But the converse is just as important: we should not describe what humans do in terms meant for machines — we shouldn’t ''robomorphise'', or evaluate human performance in terms suited to machine behaviour. This is to make a grievous category error.