Probability

From The Jolly Contrarian
Revision as of 14:16, 10 October 2023 by Amwelladmin (talk | contribs)

The JC is no great statistician, but the idea of probability gets tossed around rather more than it usefully should.

A probability distribution describes the probabilities of events in a sample space, Ω, which is the set of all possible outcomes of a given random phenomenon. For example, Ω for a coin flip would be {heads, tails}.
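By way of illustration (a minimal sketch of ours, not the article's), a probability distribution can be represented as a mapping from each outcome in the sample space Ω to its probability:

```python
# A minimal sketch: a probability distribution as a mapping from each
# outcome in the sample space Ω to its probability.
coin = {"heads": 0.5, "tails": 0.5}          # Ω = {heads, tails}
die = {face: 1 / 6 for face in range(1, 7)}  # Ω = {1, 2, 3, 4, 5, 6}

def is_distribution(dist):
    """True if no probability is negative and they all sum to one."""
    return all(p >= 0 for p in dist.values()) and abs(sum(dist.values()) - 1) < 1e-9

print(is_distribution(coin), is_distribution(die))  # True True
```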

We can see there are two essential preconditions to calculating a probability: the events must be random, and all possible events in the sample space must be fully known in advance.

Why random? Because if they are not random, some external force is acting that can affect the outcomes. If I catch your dice and set them to a number I choose each time you throw them, then the probability of each face is not ⅙, but whatever I goddamn feel like. The outcome is not governed by simple probabilities: it is governed by me.

Why must all possible events be known? Since a sample space is defined to be all possible outcomes of an event, the sum of the probabilities of all possible events in the sample space must add up to one. Some event in the sample space, that is to say, will happen. If an event happens that was not in the sample space, then it was not a well-defined sample space. Any of the probabilities you assigned to the events you thought were in the sample space, probabilities that summed to one, were wrong.
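The point can be put in code (an illustrative sketch of ours; the names are invented for the example): probabilities over a sample space must sum to one, and an outcome from outside that space means the space was never well defined in the first place.

```python
# Probabilities over the assumed sample space must sum to one.
assumed_space = {"heads": 0.5, "tails": 0.5}
assert abs(sum(assumed_space.values()) - 1) < 1e-9

# The coin lands on its edge: an outcome we never contemplated.
observed = "edge"
if observed not in assumed_space:
    # The sample space was ill-defined, so every probability we assigned
    # to it, however neatly it summed to one, was wrong.
    print("ill-defined sample space")
```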

This has profound implications for averagarianistas, and for how people like William MacAskill and, well, Chauncey Gardiner think about the world.

Probability in the narrow sense concerns itself with the outcome of discrete events of a fixed nature that happen in tightly bounded circumstances — rather akin to Nancy Cartwright’s idea of the “nomological machine”: the tossing of a die, for example. Here the tightly bounded circumstances are:

(i) it is a fair die;
(ii) it has six equal sides;
(iii) relevant events have only six possible outcomes (i.e., that it lands on one of the six sides);
(iv) the die is tossed a defined number of times;
(v) the die is tossed so as to approximate randomness in its outcome;
(vi) there are no other factors which will influence the result of each event;
(vii) what happens to the die at any time it is not being tossed is irrelevant.

In these tightly constrained conditions one can calculate the probabilities without trial and error: no scientific experiment is needed. If you run one anyway and it yields a result other than the one predicted by the maths, that is either a random variation that should straighten itself out over time or, if it does not, it does not falsify probability theory but is evidence that the die is weighted when it is not meant to be.
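The straightening-out can be sketched by simulating the machine (a Python sketch of ours, not the article's, with an invented `toss_die` helper) and watching the relative frequencies drift towards the calculated ⅙:

```python
import random

def toss_die(n, seed=0):
    """Toss a fair six-sided die n times; return the relative frequency of each face."""
    rng = random.Random(seed)  # seeded for reproducibility of the sketch
    counts = {face: 0 for face in range(1, 7)}
    for _ in range(n):
        counts[rng.randint(1, 6)] += 1
    return {face: c / n for face, c in counts.items()}

# With enough tosses, every relative frequency hovers near 1/6 ≈ 0.1667.
# A face that stubbornly refuses to converge would suggest a weighted die,
# not a failure of probability theory.
print(toss_die(100_000))
```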

The sorts of things that are not allowed to happen in the nomological machine are: someone stealing the die, or it getting lost; three more dice being added to the game; the tosseur putting the dice down in a certain way, or tossing them too many or too few times; the tosseur resigning, or dying; the dice melting, shape-shifting, or turning into a bowl of flowers — and so on.