To start, let's calculate the mean of the binomial distribution. We expect, of course, to find \(\langle \hat{A} \rangle = Np_A\). According to our formula,
\[\langle \hat{A} \rangle = \sum_{\hat{A}=0}^{N} \hat{A}\,\binom{N}{\hat{A}} p_A^{\hat{A}} \left(1-p_A\right)^{N-\hat{A}}.\]
And so the sample mean is given by \(\langle \hat{A} \rangle = N p_A\). So far, so good.
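As a quick sanity check (not part of the derivation itself), here is a minimal numpy sketch that simulates many polls and compares the average count to \(N p_A\); the values \(p_A = 0.3\) and \(N = 100\) are purely illustrative.

```python
import numpy as np

# Illustrative poll parameters: true frequency p_A and sample size N.
p_A, N = 0.3, 100

rng = np.random.default_rng(0)
# Each entry is one simulated poll's count of respondents answering A.
counts = rng.binomial(N, p_A, size=100_000)

print("simulated mean of the count:", counts.mean())  # ~ 30.0
print("N * p_A:                    ", N * p_A)        # 30.0
```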
Now let's find the sample variance. For this, we'll need the additional piece \(\langle \hat{A}^2 \rangle\):
\[\langle \hat{A}^2 \rangle = \sum_{\hat{A}=0}^{N} \hat{A}^2\,\binom{N}{\hat{A}} p_A^{\hat{A}} \left(1-p_A\right)^{N-\hat{A}} = N p_A \left(1 - p_A\right) + N^2 p_A^2.\]
With \(\langle \hat{A}^2 \rangle\) in hand, we find the sample variance
\[\operatorname{Var}\left(\hat{A}\right) = \langle \hat{A}^2 \rangle - \langle \hat{A} \rangle^2 = N p_A \left(1 - p_A\right).\]
So the sample standard deviation is \(\sigma_{\hat{A}} = \sqrt{N p_A \left(1 - p_A\right)}\).
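The same kind of simulation can check the variance and standard deviation; again, the parameter values below are only illustrative.

```python
import numpy as np

# Same illustrative parameters as before.
p_A, N = 0.3, 100

rng = np.random.default_rng(1)
counts = rng.binomial(N, p_A, size=100_000)

print("simulated variance: ", counts.var())                   # ~ 21
print("N p_A (1 - p_A):    ", N * p_A * (1 - p_A))             # 21.0
print("simulated std dev:  ", counts.std())                    # ~ 4.58
print("sqrt(N p_A (1-p_A)):", np.sqrt(N * p_A * (1 - p_A)))    # ~ 4.58
```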
Recall that our purpose in doing all this was to calculate the uncertainty in our estimation of the true frequency. We can illuminate our results by writing them in terms of the sample frequency.
The sample frequency is \(\hat{p}_A = \hat{A}/N\), so \(\langle \hat{p}_A \rangle = \langle \hat{A} \rangle / N = p_A\), and we expect on average for the sample frequency to equal the true frequency. In other words, if we take infinitely many subsets of the population and average all our sample frequencies, we will find the true frequency.
We can rewrite the sample standard deviation so that the spread in our estimate of the true frequency (and the spread used by polling companies) is given by
\[\sigma_{\hat{p}_A} = \frac{\sigma_{\hat{A}}}{N} = \sqrt{\frac{p_A\left(1-p_A\right)}{N}},\]
which is plotted below for several values of \(p_A\).
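If you'd like to reproduce a plot of this kind yourself, here is a minimal matplotlib sketch; the particular values of \(p_A\) and the range of \(N\) are just illustrative choices.

```python
import numpy as np
import matplotlib.pyplot as plt

# Spread of the sample frequency, sigma = sqrt(p_A (1 - p_A) / N),
# plotted against sample size N for a few illustrative values of p_A.
N = np.arange(10, 5001)
for p_A in (0.5, 0.3, 0.1):
    plt.plot(N, np.sqrt(p_A * (1 - p_A) / N), label=f"$p_A = {p_A}$")

plt.xlabel("sample size $N$")
plt.ylabel(r"uncertainty $\sqrt{p_A(1-p_A)/N}$")
plt.legend()
plt.show()
```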
The uncertainty has several interesting features.
As we increase our sample size \(N\), we initially get big gains in our absolute certainty. However, as \(N\) continues to increase, we get less and less additional certainty per unit investment in sample size, i.e. we have diminishing returns. Our results suggest that above \(N \approx 1000\), it doesn't make much sense to continue building the sample size because the error is already only about \(1.6\%\) or less for all possible values of \(p_A\). To cut that number in half, we'd have to increase our sample size by a factor of four, to 4000 respondents.
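These numbers come straight from the spread formula evaluated in the worst case \(p_A = 0.5\); a two-line check:

```python
import numpy as np

# Uncertainty in the sample frequency at the worst case p_A = 0.5.
def sigma(p_A, N):
    return np.sqrt(p_A * (1 - p_A) / N)

print(sigma(0.5, 1000))  # ~ 0.0158 -> about 1.6%
print(sigma(0.5, 4000))  # ~ 0.0079 -> half the error for 4x the respondents
```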
\(\sigma_{\hat{p}_A}\) has a maximum at \(p_A = 0.5\); therefore, populations with true frequencies close to 0.5 produce the highest uncertainty in estimations, and extreme results produce the least. Likewise, if we consider \(N\) as a cost, it is more expensive to establish a given level of certainty when \(p_A\) is close to \(0.5\) than when it is close to zero or one. Here we see the so-called TANSTAAFL principle at work (There Ain't No Such Thing As A Free Lunch).
When \(p_A\) is far away from a coin flip, there is information aplenty and it is inexpensive to mine (we can achieve low uncertainty with smaller \(N\)). Close to \(p_A = 0.5\), however, there is hardly any information in the system, and so it is much more expensive to unearth any of it.
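One way to make this "cost of certainty" concrete is to invert the spread formula for the sample size, \(N = p_A(1-p_A)/\sigma^2\). The 1% target spread below is an illustrative choice, not something fixed by the derivation.

```python
# Sample size needed to reach a target uncertainty, from inverting
# sigma = sqrt(p_A (1 - p_A) / N)  =>  N = p_A (1 - p_A) / sigma^2.
def required_N(p_A, target_sigma):
    return p_A * (1 - p_A) / target_sigma**2

target = 0.01  # aim for a 1% spread (illustrative)
for p_A in (0.5, 0.3, 0.1, 0.05):
    print(f"p_A = {p_A}: need N ~ {required_N(p_A, target):.0f}")
# p_A = 0.5 demands 2500 respondents, while p_A = 0.05 needs only 475:
# certainty is most expensive near the coin-flip point.
```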
Things get a bit more involved when questions have more than two choices, and we could treat confidence intervals in more detail, but that is poll uncertainty in a nutshell.
Before we wrap up, let's take a quick look at applications of the ideas we've explored in the realm of statistical physics.