There is a biased coin whose probability of landing heads on the $i$th flip, in a game of 2017 flips, is given by $\sin^2\left(\frac{i\pi}{2017}\right)$.
If the variance of this probability distribution can be expressed as $\frac{p}{q}$, where $p$ and $q$ are coprime positive integers, find $p + q$.
Bonus:
Generalize this.
Hi Kartik, I see that you have derived the equations for the expected value and the variance, but I think you are missing the evaluation of the sums of $\sin^2 x$ and $\sin^4 x$.
I think the evaluation is not immediate, and it would be great if you could elaborate on it in your solution. What do you think?
Yes, they are quite neat. I have added it. Can you check if it is what you wanted?
Thanks, Kartik. This helps me understand how the sums of $\sin^2 x$ and $\sin^4 x$ were calculated.
This solution is incomplete.
Essentially, we want to evaluate $\mathrm{Var}(X)$, where $X$ is the number of heads. Writing $X_j$ for the indicator of heads on the $j$th flip, the flips are independent, so $\mathrm{Var}(X) = \sum_{j=1}^{2017}\left[E(X_j^2) - (E(X_j))^2\right]$; and since each $X_j$ is Bernoulli, $E(X_j) = E(X_j^2) = \sin^2\frac{j\pi}{2017}$.
So this boils down to a trigonometric expression.
\[
\begin{aligned}
\mathrm{Var}(X) &= \sum_{j=1}^{2017}\left[E(X_j^2) - (E(X_j))^2\right] \\
&= \sum_{j=1}^{2017} \sin^2\frac{j\pi}{2017} - \sum_{j=1}^{2017}\left(\sin^2\frac{j\pi}{2017}\right)^2 \\
&= \sum_{j=1}^{2017} \sin^2\frac{j\pi}{2017}\left(1 - \sin^2\frac{j\pi}{2017}\right) \\
&= \sum_{j=1}^{2017} \sin^2\frac{j\pi}{2017}\,\cos^2\frac{j\pi}{2017} \\
&= \frac{1}{4}\sum_{j=1}^{2017} 4\sin^2\frac{j\pi}{2017}\,\cos^2\frac{j\pi}{2017} \\
&= \frac{1}{4}\sum_{j=1}^{2017} \sin^2\frac{2j\pi}{2017} \quad \text{because } \sin(2A) = 2\sin A \cos A \\
&= \frac{1}{4}\sum_{j=1}^{2017} \frac{1 - \cos(4j\pi/2017)}{2} \quad \text{because } \sin^2 A = \tfrac{1}{2}(1 - \cos(2A)) \\
&= \frac{1}{8}\sum_{j=1}^{2017} 1 \;-\; \frac{1}{8}\sum_{j=1}^{2017} \cos\frac{4\pi j}{2017}.
\end{aligned}
\]
What's left to do is to show that $\sum_{j=1}^{2017} \cos\frac{4\pi j}{2017} = 0$, which can be done using Chebyshev polynomials of the first kind or roots of unity: since $\gcd(2, 2017) = 1$, the numbers $e^{4\pi i j/2017}$ for $j = 1, \dots, 2017$ run over all the 2017th roots of unity exactly once, and those sum to $0$.
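As a quick numerical sanity check (not a proof), the sketch below evaluates both the cosine sum and the variance directly:

```python
import math

# Sanity check of the derivation above: the cosine sum vanishes,
# and Var(X) = sum of sin^2 * cos^2 comes out to 2017/8.
n = 2017
cos_sum = sum(math.cos(4 * math.pi * j / n) for j in range(1, n + 1))
variance = sum(
    math.sin(j * math.pi / n) ** 2 * math.cos(j * math.pi / n) ** 2
    for j in range(1, n + 1)
)

assert abs(cos_sum) < 1e-9           # the cosine sum is 0 up to rounding
assert abs(variance - n / 8) < 1e-9  # Var(X) = 2017/8
```

Both assertions pass to floating-point accuracy, matching the closed form $\frac{2017}{8}$.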
I never knew variance had such a formula, and I was so happy that I derived something new.
The formula for variance is just $\mathrm{Var}(X) = E\big((X - \mu)^2\big)$, where $\mu = E(X)$.
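Expanding the square with $\mu = E(X)$ shows this agrees with the form used in the solution above:
\[
\mathrm{Var}(X) = E\big((X - \mu)^2\big) = E(X^2) - 2\mu\,E(X) + \mu^2 = E(X^2) - (E(X))^2.
\]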
Let's generalize it a little bit.
We consider a binomial-type experiment where the probability of success changes from trial to trial.
Let $p(i)$ denote the probability of success at the $i$th trial, and $q(i) = 1 - p(i)$ the probability of failure at the $i$th trial.
So, what is the probability of exactly 1 success in a sequence of $n$ trials?
\[
P(1 \text{ success}) = p(1)q(2)q(3)\cdots q(n) + q(1)p(2)q(3)\cdots q(n) + \cdots + q(1)q(2)q(3)\cdots p(n)
\]
Obviously, in layman's terms: in each term, when $p(i)$ appears, $q(i)$ does not.
So, how can we build a model for this, given that we will also need the probability of $m$ successes?
We use generating functions.
\[
P(1 \text{ success}) = [x^1]\,(p(1)x + q(1))(p(2)x + q(2))(p(3)x + q(3))\cdots(p(n)x + q(n)),
\]
where $[x^k]f(x)$ denotes the coefficient of $x^k$ in $f(x)$.
What about 2 successes? We want two $p(i)$'s and the rest $q$'s, again such that when $p(i)$ appears, $q(i)$ does not.
So, very clearly, it is the coefficient of $x^2$ in our generating function.
Therefore, $P(m \text{ successes}) = a_m$, where
\[
f(x) = (p(1)x + q(1))(p(2)x + q(2))(p(3)x + q(3))\cdots(p(n)x + q(n)) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n.
\]
So that's our new binomial distribution, and we can get closed forms for quantities like the probability of an odd number of successes, which is $\frac{1}{2}\big(f(1) - f(-1)\big)$, etc.
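As a small illustration (the probabilities $p(i)$ below are made-up values of mine, not from the note), the coefficients $a_m$ can be computed by multiplying the factors out, checked against brute-force enumeration, along with the odd-successes closed form:

```python
import itertools

# Sketch of the generating-function model for a small example, n = 3.
p = [0.2, 0.5, 0.7]          # illustrative p(i); any values in [0, 1] work
q = [1 - pi for pi in p]
n = len(p)

# Multiply out f(x) = prod (p(i) x + q(i)) to get coefficients a_0..a_n.
coeffs = [1.0]               # the constant polynomial "1"
for pi, qi in zip(p, q):
    new = [0.0] * (len(coeffs) + 1)
    for k, c in enumerate(coeffs):
        new[k] += c * qi     # pick the q(i) term: no new success
        new[k + 1] += c * pi # pick the p(i) x term: one more success
    coeffs = new

# Brute-force P(m successes) by enumerating all 2^n outcomes.
brute = [0.0] * (n + 1)
for outcome in itertools.product([0, 1], repeat=n):
    prob = 1.0
    for i, s in enumerate(outcome):
        prob *= p[i] if s else q[i]
    brute[sum(outcome)] += prob

assert all(abs(a - b) < 1e-12 for a, b in zip(coeffs, brute))

# Odd number of successes: P(odd) = (f(1) - f(-1)) / 2.
f1 = sum(coeffs)                                   # f(1) = 1
fm1 = sum(c * (-1) ** k for k, c in enumerate(coeffs))
p_odd = sum(c for k, c in enumerate(coeffs) if k % 2 == 1)
assert abs(p_odd - (f1 - fm1) / 2) < 1e-12
```

The convolution loop is just polynomial multiplication, so `coeffs[m]` is exactly the coefficient $a_m = P(m \text{ successes})$.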
Next, we need to find the expected number of successes.
\[
\begin{aligned}
E(X) &= \frac{\sum_{m=0}^{n} m\, P(m \text{ successes})}{\sum_{m=0}^{n} P(m \text{ successes})} \\
&= \frac{0\,a_0 + 1\,a_1 + 2\,a_2 + \cdots + n\,a_n}{a_0 + a_1 + a_2 + \cdots + a_n} \\
&= \frac{f'(1)}{f(1)} \\
&= \left.\frac{d}{dx}\log f(x)\right|_{x=1} \\
&= \sum_{i=1}^{n} \left.\frac{d}{dx}\log\big(p(i)x + q(i)\big)\right|_{x=1} \\
&= \sum_{i=1}^{n} \frac{p(i)}{p(i) + q(i)}
\end{aligned}
\]
\[
E(X) = \sum_{i=1}^{n} p(i).
\]
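A brief numerical check of $E(X) = \sum_i p(i)$, using arbitrary illustrative probabilities of my own choosing:

```python
import itertools

# Check E(X) = sum of p(i) against direct enumeration (n = 4 here;
# the identity holds for any n and any probabilities).
p = [0.1, 0.3, 0.6, 0.9]
n = len(p)

expectation = 0.0
for outcome in itertools.product([0, 1], repeat=n):
    prob = 1.0
    for i, s in enumerate(outcome):
        prob *= p[i] if s else 1 - p[i]
    expectation += prob * sum(outcome)  # weight count of successes by probability

assert abs(expectation - sum(p)) < 1e-12
```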
\[
\begin{aligned}
\text{Variance} = \sigma^2 &= \frac{0^2 a_0 + 1^2 a_1 + 2^2 a_2 + \cdots + n^2 a_n}{a_0 + a_1 + \cdots + a_n} - E^2 \\
&= \left.\frac{x\left(x f'(x)\right)'}{f(x)}\right|_{x=1} - E^2 \\
&= \left.\frac{x^2 f''(x)}{f(x)}\right|_{x=1} + \left.\frac{x f'(x)}{f(x)}\right|_{x=1} - E^2
\end{aligned}
\]
It can be shown that $f''(1) = (p(1) + p(2) + \cdots + p(n))^2 - p(1)^2 - p(2)^2 - \cdots - p(n)^2$, while $f(1) = 1$ and $f'(1) = p(1) + p(2) + \cdots + p(n) = E$. Substituting,
\[
\sigma^2 = E^2 - \sum_{i=1}^{n} p(i)^2 + \sum_{i=1}^{n} p(i) - E^2,
\]
\[
\sigma^2 = \sum_{i=1}^{n} p(i) - \sum_{i=1}^{n} \big(p(i)\big)^2.
\]
Check that this recovers the ordinary binomial distribution: with $p(i) = p$ for all $i$, it gives $\sigma^2 = np - np^2 = np(1-p)$!
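The variance formula can also be sanity-checked by brute force (the probabilities below are illustrative values of mine), including the classical binomial special case:

```python
import itertools

# Verify sigma^2 = sum p(i) - sum p(i)^2 by enumerating all outcomes.
def brute_variance(p):
    n = len(p)
    mean = sq = 0.0
    for outcome in itertools.product([0, 1], repeat=n):
        prob = 1.0
        for i, s in enumerate(outcome):
            prob *= p[i] if s else 1 - p[i]
        k = sum(outcome)
        mean += prob * k
        sq += prob * k * k
    return sq - mean ** 2   # E(X^2) - E(X)^2

p = [0.2, 0.4, 0.8]
assert abs(brute_variance(p) - (sum(p) - sum(x * x for x in p))) < 1e-12

# Ordinary binomial: p(i) = 0.3 for all i, n = 5, gives np(1 - p).
assert abs(brute_variance([0.3] * 5) - 5 * 0.3 * 0.7) < 1e-12
```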
For our case,
\[
\text{Variance} = \sigma^2 = \sum_{i=1}^{2017} \sin^2\left(\frac{i\pi}{2017}\right) - \sum_{i=1}^{2017} \sin^4\left(\frac{i\pi}{2017}\right).
\]
\[
\sum_{i=1}^{n} \sin^2\left(\frac{i\pi}{n}\right) = \sum_{i=1}^{n} \frac{1}{2}\left(1 - \cos\frac{2i\pi}{n}\right) = \frac{n}{2}
\]
\[
\sum_{i=1}^{n} \sin^4\left(\frac{i\pi}{n}\right) = \sum_{i=1}^{n} \frac{1}{4}\left(1 - \cos\frac{2i\pi}{n}\right)^2
= \sum_{i=1}^{n} \frac{1}{4}\left(1 - 2\cos\frac{2i\pi}{n} + \frac{1}{2}\left(1 + \cos\frac{4i\pi}{n}\right)\right) = \frac{n}{4} + \frac{n}{8} = \frac{3n}{8},
\]
since both cosine sums vanish (each runs over full periods; for the second sum this uses that our $n = 2017$ is odd).
Here I have repeatedly used $\cos(2x) = 2\cos^2 x - 1 = 1 - 2\sin^2 x$ and
\[
\cos\alpha + \cos(\alpha + \beta) + \cdots + \cos\big(\alpha + (n-1)\beta\big) = \frac{\sin\left(\frac{n\beta}{2}\right)}{\sin\left(\frac{\beta}{2}\right)} \cos\left(\alpha + \frac{(n-1)\beta}{2}\right).
\]
This yields our answer as $\frac{n}{2} - \frac{3n}{8} = \frac{n}{8} = \frac{2017}{8}$, so $p + q = 2017 + 8 = 2025$.
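Finally, a numerical check of the two trigonometric sums and the resulting variance for $n = 2017$:

```python
import math

# Confirm the closed forms used above for n = 2017:
# sum sin^2 = n/2, sum sin^4 = 3n/8, hence variance = n/8.
n = 2017
s2 = sum(math.sin(i * math.pi / n) ** 2 for i in range(1, n + 1))
s4 = sum(math.sin(i * math.pi / n) ** 4 for i in range(1, n + 1))

assert abs(s2 - n / 2) < 1e-9        # sum of sin^2 terms
assert abs(s4 - 3 * n / 8) < 1e-9    # sum of sin^4 terms
assert abs((s2 - s4) - n / 8) < 1e-9 # variance = 2017/8
```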