A fair coin is flipped twice. Which pair of the following events is a pair of independent events?
A = The first flip is heads
B = At least one of the flips is tails
C = The second flip is tails
D = At least one of the flips is heads
Let's look at the possible outcome sets. Before we are given any information beyond "it's a fair coin flipped twice in a row", we have 4 possible outcomes, each with equal probability.
I could take the time to write out the possible sets as flip1=H, flip2=H; flip1=H, flip2=T; and so on, but to make them easier to write (and to read), I'll use the more common notation style of HH, where the first letter is always the outcome of the first flip and the second letter is always the outcome of the second flip.
So our possible outcomes of two flips are HH, HT, TH, and TT.
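For anyone who likes to double-check this sort of counting programmatically, here is a minimal sketch (in Python, purely my choice of language -- nothing in the original problem uses code) that enumerates the same four outcomes:

```python
from itertools import product

# Enumerate every possible result of two fair coin flips.
# First letter = first flip, second letter = second flip.
outcomes = ["".join(flips) for flips in product("HT", repeat=2)]
print(outcomes)  # ['HH', 'HT', 'TH', 'TT']
```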
Let's imagine that our slightly dim friend got a coin and flipped it, wrote down the result, and then flipped it again and wrote down the second result. Then they ask us what probability we assign to the two flips being HH. We say, 1/4. This is correct, because there are four (and only four) equally possible outcomes, but only one of the four possible outcomes is first flip H then second flip H (HH).
They then hand us the paper and say, "The results were actually HH, as you can see. I would like to offer you a bet -- right now -- where I say that the results were actually HH. How much money would you be willing to bet me right now, that the results of this particular double flip were NOT HH?"
The friend is slightly dim, because we already know the result of this particular double flip. We're not willing to bet even one cent that the results were not HH, because we already know the results -- and the results were HH.
"But!" the dim friend cries, "That implies that you don't really trust your initial odds of 1/4 that you gave me. After all, when you gave me the odds of 1/4, I already had the results in hand, so the actual outcome was already settled. If you really believed that the prior probability was 1/4, you would think that it's a good bet to bet on not-HH, even after you know the results."
But no. No human looks at a fixed outcome but then goes on to bet based on the prior probability of that outcome, rather than on the actual results. If the prior probability of HH was 1/4, the post-event actuality is that the flips were flipped, and only one of the four possibilities resulted. If the friend hides the outcome by not showing us the paper and then offers us the same bet, we still wouldn't take it, because they know the actual results and we don't -- either they have decided to give us money, or they have decided to take money from us; they can freely choose either outcome by simply betting on the right answer or betting on an answer they already know is wrong. And this is why we don't agree to bet with them -- if they merely wanted to give us money, they could just do it without the bet, so there's no point in taking the bet (and with it the possibility that they just want to fleece us).
They assign a probability of 1/1 to the result being HH because they have full information about it, but we still have no information about the result, so we can't do any better than 1/4 odds of being right.
So we have two types of information which are extremely easy to mix up when we start talking about probability. On the one hand, we have odds in the absence of information about results: before the fair coin is flipped, we have no information about the result of the flip except that the actual result must be one and only one of several equally possible sets, and we have enough pre-flip information to write out every one of those equally possible sets. On the other hand, we have different odds in the presence of information about one particular specific outcome. That's how we can have odds of 1/4 before the coin is flipped, but odds of 1/1 after the flips are known -- the odds reflect the additional information we have injected into the problem as a whole (or subtracted -- you could say that the possibility space collapsed from four sets to one set).
In case you want to fight that conclusion, I'll briefly imagine a different scenario: Your friend has a deck of two cards and says they will draw one card randomly. What are the odds the card is an ace? You confidently say 1/2, because your mind is still on coin flips, where the possibility space is limited to 2 sets (possible result set one is H and possible result set two is T). But then your friend says, "Aha! The odds are zero. The two cards in the deck were a king and a joker." After you punch your friend in the face, you can thank them for reminding you to keep track of what information you actually have and what information you don't.
In the same way, even our coin flip scenario gives us real-world information about the coin before it is ever flipped: it gives us the information on the possible sets. Each set can have two possible events that are either H events or T events, but we can't have extra sets beyond the first four that randomly substitute "R" or "Purple" for H or T (incidentally, let's presume that the coin's edge has been sharpened so that the coin must eventually land on one perfectly fair side or the other, and that we are in a gravity field that pulls the coin down with sufficient force to make it land on a flat surface with one face showing). Once the coin is actually flipped twice and we know the results, the other 3 results no longer belong in the space of possible sets for that historical double flip. The space of possible sets is reduced to 1 -- the one that actually happened. Hence the 1/1 odds that the dim friend assigns to the result on the paper being HH, given that the dim friend has already seen that it is HH.
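As a tiny sanity check of that "collapse," here is a sketch in the same spirit as the snippet above (the variable names are mine, chosen only for illustration):

```python
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=2)]

# Before the flips: no information, so all four sets are live possibilities.
prior = sum(1 for o in outcomes if o == "HH") / len(outcomes)
print(prior)  # 0.25

# After seeing the paper: for that historical double flip, the space of
# possible sets has collapsed to the one set that actually happened.
observed_space = ["HH"]
posterior = sum(1 for o in observed_space if o == "HH") / len(observed_space)
print(posterior)  # 1.0
```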
I'm actually going somewhere with all this.
Let's go back to the coin flip where we are trying to find the prior probability of at least one flip turning out heads (we don't care which flip is heads, and we don't care whether both flips are heads, because any of those sets still meet the criteria given). (This is Option D in the independent/dependent problem.)
The first thing I notice is that we are no longer asking about the probability we will get TT ("both flips are tails"). It is still a possible outcome, however -- part of the possible set space -- but it is not a condition-meeting outcome. We are asking the probability of getting HH, HT, or TH as our two flips.
So our possibility space is:
HH (meets conditions)
HT (meets conditions)
TH (meets conditions)
TT (does not meet conditions)
Let's reference something simpler than this by asking our odds of getting a Heads on one single coin flip...
Possibility space:
H (meets stated conditions)
T (does not meet stated conditions)
So our probability is that only 1 set out of the two equally possible sets meets the specified conditions, for odds of 1/2.
Likewise, in the "at least one Heads" scenario, we have 4 possible actual outcomes, but only three of those outcomes meet our conditions, which makes 3/4 odds of the desired outcome (recall that the desired outcome is "at least one flip is H").
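Counting the condition-meeting sets the same way in code (again just a sketch, with variable names of my own choosing):

```python
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=2)]

# Option D: "at least one of the flips is heads".
meets = [o for o in outcomes if "H" in o]
print(meets)                       # ['HH', 'HT', 'TH']
print(len(meets) / len(outcomes))  # 0.75, i.e. 3/4
```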
The situation is not dependent after all. How could it be? The first coin flip did not influence the second coin flip in any way (unlike, say, drawing a marble out of a jar and then having fewer marbles for the next draw, which is a dependent situation).
So let's look at the "first flip is Heads" problem, since Andy Hayes was not confused in his odds on that one. He concluded that if you know the first flip (or the second flip -- could do it that way too), the odds for the other flip are 1/2, because as he says in his explanatory comment, "The result of the second flip is unaffected by the result of the first flip (and vice versa). If the first flip is heads [...] the probability that the second flip is tails is [the same as it would be in a situation where] the first flip was tails." So far we agree!
Here's the possibility space for the odds of "First flip is Heads":
HH (meets condition "first flip is heads")
HT (meets condition "first flip is heads")
TH (does not meet condition; we're only looking at the odds for those sets where H was the first result)
TT (does not meet condition; we're only looking at the odds for those sets where H was the first result)
Since he had no further information about which of the two possible results the second flip would turn out as, the probability is "number of possible sets: 4" against "condition-meeting sets found in the possibility space: 2", which simplifies down to 1/2.
Two coin flips are always made up of 4 sets. Two coin flips can never be made up of 3 sets for the prior probability, so the 1/3 number in Andy's comment doesn't actually make sense. I see in Andy's first equation (in his comment) that he eliminated the HH possibility from the sample space. However, the only way to actually do that is to have extra information about the outcome or sample space (such as "the first coin result is definitely H"), and if you have that information, you can't eliminate just one set -- you have to eliminate BOTH sets that start with "first coin is H."
If you don't have that information, then you can't eliminate HH at all without (at the very least) structurally re-wording the problem. (Example: Out of three possible results HT, TH, and TT, what are the odds of any single result being HT? But that isn't a question about coin flipping anymore -- it is a question about reaching into a hat that has three slips of paper in it.) In the coin flipping scenario, HH is still part of the possibility space and MUST be taken into account, even if it is not the result whose probability you are trying to calculate.
So, Andy started by asking for the probability of "coin flip 2" being a particular result. However, he was actually asking, "What are the odds of the specific sets HH or HT?"
The 4 possible sets include 2 outcomes that meet the desired conditions and 2 sets that don't, for total odds of 2 "meets conditions" / 4 "possible", which simplifies down to 1/2.
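The same counting done in code for the "first flip is Heads" condition, under the same assumptions as the earlier sketches:

```python
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=2)]

# Option A: "the first flip is heads".
meets = [o for o in outcomes if o[0] == "H"]
print(meets)                       # ['HH', 'HT']
print(len(meets) / len(outcomes))  # 0.5, i.e. 2/4 = 1/2
```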
But then in the next problem, he didn't define what outcomes would "meet conditions" with the same clarity as he did for the "First coin flip is H" version, and more importantly it looks like he didn't define the actual possibility space successfully either.
He also uses a calculation for conditional probability to attempt to prove that the probability is conditional (find a description of the formula itself by searching the web for "conditional probability yale statistics"), which I'm pretty sure would only be valid as proof if the situation actually was conditional (and he hasn't proven it is). An independent event uses a different formula: P(A|B) = P(A), or, to write it out longhand, the probability of A given B is just the probability of A itself.
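For what it's worth, the textbook conditional-probability formula can be evaluated by straight counting over the four sets. Here is a sketch (the function name cond_prob is mine, not from either the problem or the solution), applied to the one case everybody in this thread agrees on -- "second flip is tails given the first flip is heads":

```python
from itertools import product

outcomes = ["".join(flips) for flips in product("HT", repeat=2)]

def cond_prob(event_a, event_b):
    """P(A | B) by counting: |A and B| / |B| over the four equally likely sets."""
    b = [o for o in outcomes if event_b(o)]
    a_and_b = [o for o in b if event_a(o)]
    return len(a_and_b) / len(b)

# P(second flip is tails | first flip is heads): the case everyone agrees on.
print(cond_prob(lambda o: o[1] == "T", lambda o: o[0] == "H"))  # 0.5
```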
The resulting confusion about dependence vs. independence is unsurprising. In actuality, the answer to the original question about which events are independent isn't A and C; the answer should be that answers A through D are all independent.
When it comes to fair coin flips, the result of one coin flip is always independent of the result of another coin flip – always and forever. However, the events described in this problem are not describing the results of a single coin flip. They are describing the results of two coin flips. We've defined these results in a very specific way, and the effect is that we see some of these pairs of events being dependent.
This can seem non-intuitive (much in the same way as the Monty Hall Problem is), but I can give you a practical example. Suppose your friend offers to play a game with you. He states that he will flip a coin twice, and you will win if the first flip is heads. He then states that there is an additional rule; you have to choose between two options:
1. The round only counts if at least one of the flips is tails (otherwise he re-flips).
2. The round only counts if neither of the flips is tails (otherwise he re-flips).
Which option should you choose, or does it matter? There is a very obvious choice here -- you should choose the second option, because it would guarantee that you win. This is essentially the meaning of "dependent" in this context -- your chance of winning depends on the incidence of something happening.
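Here is a quick simulation sketch of that game, assuming the two options work as stated above (the rule functions and the trial count are my own, purely for illustration):

```python
import random

def play(round_counts):
    """One game: keep flipping pairs until the rule says the round counts."""
    while True:
        flips = random.choice("HT") + random.choice("HT")
        if round_counts(flips):
            return flips[0] == "H"  # you win if the first flip is heads

trials = 100_000
# Option 1: the round only counts if at least one flip is tails.
wins_1 = sum(play(lambda f: "T" in f) for _ in range(trials)) / trials
# Option 2: the round only counts if neither flip is tails.
wins_2 = sum(play(lambda f: "T" not in f) for _ in range(trials)) / trials
print(round(wins_1, 2))  # roughly 0.33
print(round(wins_2, 2))  # 1.0 -- a guaranteed win
```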
I think you're arguing that we don't normally have this kind of "prior" information about results, but hopefully the above shows that this can be the case in certain narrow circumstances.
Wow, that's a hell of a lot of explanation 👏🏻
Relevant wiki: Probability - Independent events
The result of the second flip is unaffected by the result of the first flip (and vice versa). If the first flip is heads, then the probability that the second flip is tails is 1/2, which is the same probability it would be even if the first flip was tails.
Therefore, the independent events are A and C.
It may be a surprise to some that all the other choices listed are pairs of dependent events. That is to say, the incidence of one event affects the probability of the other event.
The sample space for this probability experiment is: S = {HH, HT, TH, TT}. H represents "heads" and T represents "tails", and the order of these outcomes matters. Because the coin is fair, the sample space is uniform; all outcomes are equally likely.
P(A | B) = |A ∩ B| / |B| = |{HT}| / |{HT, TH, TT}| = 1/3
P(A | B′) = |A ∩ B′| / |B′| = |{HH}| / |{HH}| = 1
P(A | B) ≠ P(A | B′). Therefore, A and B are dependent. It can also be shown that B and C are dependent and that C and D are dependent using a similar approach.
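If you'd like to verify these numbers by brute force, here is a short sketch that enumerates the uniform sample space, reproduces the two conditional probabilities above, and checks each of these pairs for independence via the product rule P(X ∩ Y) = P(X) · P(Y). The event definitions follow the problem statement; the helper names are my own:

```python
from itertools import product
from fractions import Fraction

outcomes = ["".join(flips) for flips in product("HT", repeat=2)]

events = {
    "A": lambda o: o[0] == "H",  # the first flip is heads
    "B": lambda o: "T" in o,     # at least one of the flips is tails
    "C": lambda o: o[1] == "T",  # the second flip is tails
    "D": lambda o: "H" in o,     # at least one of the flips is heads
}

def prob(pred):
    """P(X) over the four equally likely outcomes."""
    return Fraction(sum(pred(o) for o in outcomes), len(outcomes))

def cond(pred_a, pred_b):
    """P(A | B) = |A and B| / |B| by counting outcomes."""
    return Fraction(sum(pred_a(o) and pred_b(o) for o in outcomes),
                    sum(pred_b(o) for o in outcomes))

print(cond(events["A"], events["B"]))                   # 1/3
print(cond(events["A"], lambda o: not events["B"](o)))  # 1
for x, y in [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]:
    joint = prob(lambda o: events[x](o) and events[y](o))
    independent = joint == prob(events[x]) * prob(events[y])
    print(x, y, "independent" if independent else "dependent")
```

Running this prints that A and C are independent, while the pairs A-B, B-C, and C-D come out dependent, matching the conclusion above.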