Template:Complicated capsule
'''[[Complicated system]]s''': Unlike [[simple system]]s, [[complicated system]]s require interaction with autonomous agents whose specific behaviour is beyond the user’s control, and might be intended to defeat the user’s objective, but whose range of behaviour is entirely deterministic. Each autonomous agent’s range of possible actions and reactions can be predicted in advance. At least, in theory.
For example, chess — or, for that matter, any board game or sport.
[[Complicated system]]s therefore benefit from skilled management and some [[subject matter expert|expertise]] to operate: a good chess player will do better than a poor one — a school leaver in Bucharest with plenty of coffee and a [[playbook]] on her lap probably isn’t the droid you’re looking for — but in the right hands such systems can usually be managed without catastrophe, though the degree of success will be a function of the user’s skill and expertise.
You know you have a [[complicated system]] when it cleaves to a comprehensive set of axioms and rules, and thus it is a matter of making sure that the proper models are being used for the situation at hand. [[Chess]] and [[Alpha Go]] are [[Complicated system|complicated]], but not [[Complex systems|complex]], systems. You can “force-solve” them, at least in theory.<ref>Do you hear that, {{author|Daniel Susskind}}?</ref> They are entirely predictable, deterministic and calculable, given enough processing power. They’re [[tame problem|tame]], ''not'' [[wicked problem]]s.
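To make the “force-solve” point concrete, here is a minimal sketch (in Python; the function names are ours, not from any source above) that exhaustively solves noughts and crosses by minimax. Because the game is deterministic and every agent’s possible moves are enumerable, the value of any position — indeed of the whole game — is calculable in advance, given enough processing power: exactly the property that makes a system complicated rather than complex.

```python
# Exhaustive minimax over the full game tree of noughts and crosses —
# a toy illustration of "force-solving" a deterministic, complicated system.

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def solve(board, player):
    """Value of the position for X under perfect play: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full, no winner: a draw
    values = []
    for m in moves:
        board[m] = player
        values.append(solve(board, "O" if player == "X" else "X"))
        board[m] = None  # undo the move
    # X maximises the value; O minimises it
    return max(values) if player == "X" else min(values)

print(solve([None] * 9, "X"))  # 0: perfectly played noughts and crosses is a draw
```

Chess and Go are the same problem in principle — the tree is just astronomically larger, which is why “in theory” is doing so much work in the paragraph above.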
Revision as of 05:12, 12 August 2020