Put your knowledge to the test with the accompanying brain teasers.

Uncertainty in decision making is an inevitable aspect of daily life. Whenever we are faced with a situation that has a variety of possible outcomes—be it whether or not to cross a busy street, or deciding if we should bring an umbrella with us when leaving the house on an overcast morning—we often need to make quick decisions with a limited amount of information. The more informed, rational and analytical our approach to making these decisions, the more often we are likely to see favourable results. One of the most significant keys to approaching uncertainty with this outlook is having an understanding of basic probability concepts.

Whenever we participate in a game or take a risk, we are playing the odds. Events have a desired outcome, and the better we can understand the probabilities involved, the more informed our decision making becomes. Having a clear sense of how likely our decisions are to be successful can give us a sense of control over seemingly random occurrences. If we can identify methods to shift the probabilities in our favour, that sense of control will only increase.

## Teaser #1

You wake up early on a winter morning and don't want to disturb your sleeping partner by turning on the light. You reach into your sock drawer, which contains 6 clean socks, each either white or black. If you randomly withdraw 2 socks, the probability of drawing a pair of white socks is 2/3. What is the probability of drawing a pair of black socks?


Answer = 0% - the 6 socks must be a combination of 5 white and 1 black, so a black pair is impossible. (With 5 white socks, the chance of a white pair is 5/6 x 4/5 = 2/3; no other split of 6 socks gives 2/3.)
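This can be checked by brute force. The sketch below (a minimal Python check, not part of the original puzzle) tries every possible white/black split of the six socks and keeps the one matching the stated 2/3 probability:

```python
from fractions import Fraction
from math import comb

# Try every possible number of white socks among the 6 and keep the
# splits where P(two white) = C(white, 2) / C(6, 2) equals 2/3.
solutions = [
    (white, 6 - white)
    for white in range(7)
    if Fraction(comb(white, 2), comb(6, 2)) == Fraction(2, 3)
]

white_count, black_count = solutions[0]
# With only one black sock, a black pair cannot be drawn.
p_black_pair = Fraction(comb(black_count, 2), comb(6, 2))
```

The only split that works is 5 white and 1 black, which forces the black-pair probability to zero.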

## Determining the probable outcomes

The probable outcomes of any event are typically finite. For most events, we can relatively easily describe what the possible outcomes are and identify approximately how likely their occurrence is. If we know how frequently a given outcome occurs, and we know how many distinct outcomes are possible, we can determine the probability of the outcome in question.

For instance, if one were to roll a fair six-sided die, we know the probability of landing on any individual number is 1/6, and the probability of landing anywhere else is 5/6. Similarly, if you are playing BINGO, the chances of any given square on your card being called depends on the number of balls that remain to be drawn.

The probability of any sequence of independent events is found by multiplying the probability of each, i.e. P(A) x P(B) x P(C), etc. We can apply this logic to a series of events, such as tossing a pair of six-sided dice. The probability of obtaining any distinct ordered pair of numbers is given by:

1/6 x 1/6 = 1/36

Yet, we know that some of the pairs are identical in total value, for instance 1 + 3 and 2 + 2 both result in a value of 4. Thus, for the given possible outcomes ranging from 2 to 12, some combinations are more likely than others as shown:

| Sum | Combinations | Probability |
|-----|--------------|-------------|
| 2   | 1            | 1/36        |
| 3   | 2            | 2/36        |
| 4   | 3            | 3/36        |
| 5   | 4            | 4/36        |
| 6   | 5            | 5/36        |
| 7   | 6            | 6/36        |
| 8   | 5            | 5/36        |
| 9   | 4            | 4/36        |
| 10  | 3            | 3/36        |
| 11  | 2            | 2/36        |
| 12  | 1            | 1/36        |

From the table we can see that obtaining a sum of 2 or 12 are the least likely outcomes. As the sum approaches 7 the outcome becomes increasingly likely, with the probability of rolling a 7 being six times greater than that of rolling a 2 or a 12.
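The table's counts can be reproduced by enumerating all 36 equally likely rolls; a quick sketch in Python:

```python
from fractions import Fraction
from itertools import product

# Count how many of the 36 equally likely (die1, die2) outcomes
# produce each possible sum from 2 to 12.
counts = {}
for a, b in product(range(1, 7), repeat=2):
    counts[a + b] = counts.get(a + b, 0) + 1

probabilities = {total: Fraction(n, 36) for total, n in counts.items()}
```

The enumeration confirms that 7 has six ways of occurring against a single way each for 2 and 12.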

## The law of large numbers

It is important to understand that theoretical and experimental probabilities can diverge significantly in practice, particularly when we are dealing with a small number of observations. While we know theoretically that flipping a coin should result in an even split of heads and tails, in a short run it is quite typical to see a very skewed outcome. Runs of repeated tosses of heads or tails are also entirely typical and should not be considered abnormal. The randomness of naturally occurring variation ensures that consistency is extremely unlikely.

As a result of the random nature of individual observed outcomes, one should not anticipate alignment between the proportion of observed outcomes and the theoretical probability for a given event until a large number of observations has been made. This convergence of observed and theoretical probability as the sample increases is referred to as the "law of large numbers," and the corresponding pitfall is reading too much into a small sample. Essentially, we should not consider the outcomes from a small number of observations to be truly representative of the underlying probability for each event.
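A short simulation illustrates the point. Assuming a fair coin, small samples swing widely while large ones settle near 50% (the seed below is arbitrary, chosen only to make the run repeatable):

```python
import random

random.seed(1)  # arbitrary seed so the run is repeatable

def heads_proportion(n_flips):
    # Simulate n_flips fair-coin tosses and return the share of heads.
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    return heads / n_flips

small_sample = heads_proportion(10)       # often far from 0.5
large_sample = heads_proportion(100_000)  # reliably close to 0.5
```

Rerunning with different seeds, the 10-flip proportion bounces around, while the 100,000-flip proportion barely moves from 0.5.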

If the possible number of distinct outcomes for an event remains constant, and the probabilities don't shift as outcomes occur, then we describe the trials as independent events. Flipping a coin, rolling a die, using a slot machine, or spinning a roulette wheel repeatedly would all be examples of independent trials, where the prior events have no bearing on subsequent outcomes.

## The fallacies of probability

The gambler's fallacy is the mistaken belief that when something happens more frequently than one would expect in a given series of observations of independent events, it becomes less likely to occur in subsequent observations.

This is also known as the Monte Carlo fallacy as a result of the most famous instance of its occurrence at a roulette table at the Monte Carlo casino on 18 August, 1913. The ball fell on black an astounding 26 times in a row, and the gamblers present lost millions to the casino as they bet against it. They were working under the mistaken assumption that the ball was increasingly likely to fall on red following all of the prior observations of it landing on black.

Another fallacy that observers will often fall prey to is that of the hot hand. This is the mistaken belief that a series of recent favourable outcomes for independent events increases the probability of future favourable outcomes occurring.

Typically, this mistaken belief arises in events involving a human element perceived as ability or skill, such as professional sport. When trying to assess the probability of independent outcomes, it makes sense to assess the long-term probability of an individual's success rather than what has been observed in the recent small sample. Independent events—even those involving an element of human skill—do not see probabilities shift as a result of the recent past.

## Teaser #2

A street hustler offers to let you bet on a roll of a fair 6-sided die. For every dollar that you bet, if you can roll a 1 you will receive two dollars in return. At first blush, this bet doesn't seem fair, so the hustler offers to give you three chances to roll and win the bet. Is the bet now fair?


Answer = NO. The chances of NOT rolling a 1 in three consecutive rolls of a 6-sided die are 5/6 x 5/6 x 5/6, or 125/216. This is the probability of the hustler winning the bet, and it is significantly higher than 50% - the bet is still not fair.

## Breaking down dependent events

Conversely, in card games such as poker or blackjack, the probability of different cards being distributed does change, depending on the number of decks being used and the cards that have already been dealt. If you are playing a game of cards with a standard 52-card deck, the probability of being dealt an Ace is reduced for every Ace that has already been distributed (4/52 is greater than 3/51, which is greater than 2/50, and so on). BINGO similarly depends on how many balls remain to be called.
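The shrinking Ace odds can be made concrete with a small helper (a sketch; the function name is mine, not a standard API):

```python
from fractions import Fraction

def next_card_is_ace(aces_left, cards_left):
    # In a dependent sequence, the odds of the next card being an Ace
    # depend entirely on what remains in the deck.
    return Fraction(aces_left, cards_left)

fresh_deck = next_card_is_ace(4, 52)     # 1/13
one_ace_gone = next_card_is_ace(3, 51)
two_aces_gone = next_card_is_ace(2, 50)
```

Each Ace dealt strictly lowers the chance of seeing another, which is exactly what makes the events dependent.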

These are known as dependent events, where the probability for each subsequent event DOES change depending upon preceding events.

On each BINGO card, the numbers in each column are randomly assigned but remain confined to a set range. The "B" column is restricted to the range between 1 and 15, "I" is restricted to 16-30, "N" is restricted to 31-45, "G" is restricted to 46-60, and "O" is restricted to 61-75.

The probability of any individual number being called in a 75-ball bingo game will increase as the game progresses. On the first draw, it will be 1/75, the second draw will be 1/74, third will be 1/73, etc. Thus, the probability of any specific ordered series of five numbers being the first five drawn from a starting selection of 75 is given by:

1/75 x 1/74 x 1/73 x 1/72 x 1/71 = 1/2,071,126,800

Because of the Free spot in the centre of the card, the "N" column, the third row and the diagonals are the likeliest means of obtaining BINGO on any given card because only four numbers out of the 75 need to be called, rather than the five required in any other row or column. The probability of any four-number combination being called in a game of BINGO is given by:

1/75 x 1/74 x 1/73 x 1/72 = 1/29,170,800

This result is exactly 71 times more likely than any five-number outcome. Unfortunately, there is nothing an individual player can do to enhance their chances of winning on a given bingo card.
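Both bingo probabilities, and the factor of 71 between them, fall out of the same product; a sketch:

```python
from fractions import Fraction

def series_probability(length, pool=75):
    # Probability that one specific ordered series of `length` numbers
    # comes up as the first draws from a pool of 75 balls.
    p = Fraction(1)
    for drawn in range(length):
        p *= Fraction(1, pool - drawn)
    return p

p_five_numbers = series_probability(5)
p_four_numbers = series_probability(4)
```

Dividing the four-number probability by the five-number probability leaves exactly the 1/71 factor of the fifth draw, hence the ratio of 71.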

The way probability can be worked in one's favour is by competing in games with fewer players. If we consider the probabilities in a game being played by 500 players with one card each versus a game being played with 100 players with one card each, it becomes obvious that your chances of winning are five times higher in the game with 100 players.

## Managing expectations and adjusting accordingly

The other aspect that can be controlled in a given game is how many opportunities each individual has to win—determined by the number of cards they are playing. The chance of any individual card winning within a given bingo game is 1 divided by the total number of cards being played. If a player is in control of multiple cards, their probability of winning becomes the number of cards they are playing divided by the total number of cards in play.
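Since every card in play is equally likely to win, a player's chance is simply their share of the cards; a minimal sketch:

```python
from fractions import Fraction

def win_probability(cards_held, cards_in_play):
    # Each card is equally likely to win, so a player's chance is the
    # fraction of the cards in play that they hold.
    return Fraction(cards_held, cards_in_play)

one_of_100 = win_probability(1, 100)    # one card in a 100-card game
one_of_500 = win_probability(1, 500)    # one card in a 500-card game
four_of_500 = win_probability(4, 500)   # four cards in a 500-card game
```

This also captures the earlier comparison: one card among 100 is five times as likely to win as one card among 500.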

Thus, one maximises their opportunity to win by playing as many cards as they can manage reasonably. As players increase the number of cards they are playing, the challenge of focusing on too many at once can actually lower their probability of winning as a result of missed numbers.

By developing one's awareness of the underlying probabilities that describe the outcomes we observe in all of our decisions and actions, we can manage expectations and adjust as necessary. Our behaviour in various facets of life can be more passive when the odds are stacked against us, and more aggressive when things are more likely to work out in our favour. Essentially, we can manage decisions more effectively by accurately assessing the likelihood of success and risk of failure.

## Teaser #3

A traveller comes to a bridge, and is greeted by 2 knights, one in red armour, the other in blue. The red knight is obviously the larger, stronger and more battle-tested of the two. The blue knight tells the traveller that if he wishes to cross the bridge he must defeat each knight consecutively in hand-to-hand combat. He will have a maximum of three opportunities to challenge one of the knights, and he may decide which knight he fights first (i.e. red-blue-red or blue-red-blue). Which knight should the traveller challenge for the first fight if he hopes to maximise his chances of crossing the bridge?


Answer = counter-intuitively, he should choose the Red Knight. Let R and B denote the probabilities of defeating the Red and Blue Knights respectively, with R' and B' the probabilities of losing to each (so R < B and R' > B'). Fighting Red first (red-blue-red), the traveller crosses with probability (R)*(B) + (R')*(B)*(R): he either wins the first two fights, or loses the opener and wins the last two. Fighting Blue first (blue-red-blue), he crosses with probability (B)*(R) + (B')*(R)*(B). The first expression must be larger, because the probability of losing to the Red Knight (R') is larger than the probability of losing to the Blue Knight (B').
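The comparison is easy to check numerically. The per-fight win probabilities below are illustrative assumptions—any values where Red is harder to beat than Blue give the same ordering:

```python
from fractions import Fraction

def crossing_probability(p_first, p_second):
    # Win fights 1 and 2, or lose fight 1 and win fights 2 and 3,
    # in a first-second-first sequence of independent fights.
    return p_first * p_second + (1 - p_first) * p_second * p_first

# Assumed per-fight win chances: Red is the harder opponent.
p_beat_red, p_beat_blue = Fraction(1, 4), Fraction(3, 4)

red_first = crossing_probability(p_beat_red, p_beat_blue)   # red-blue-red
blue_first = crossing_probability(p_beat_blue, p_beat_red)  # blue-red-blue
```

With these assumed values, fighting Red first yields 21/64 against 15/64 for Blue first—facing the tougher knight twice is the better plan because the must-win middle fight is the easier one.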