Starting with $S = 0$, you choose a number between 0 and 1 at random and add it to $S$. If $S < 1$, you repeat: choose another number between 0 and 1 at random and add it to $S$. If $S \ge 1$, you stop. If the expected value of $S$ when you stop is denoted by $E[S]$, what is the integer closest to $\frac{200}{e^2} \times E[S]$?
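Before diving into the derivations below, a quick sanity check is to simulate the process directly. This is a minimal Python sketch (the function names are mine, not from the problem):

```python
import math
import random

def simulate_once(rng):
    """One round: add Uniform(0,1) draws to S until S >= 1, return the final S."""
    s = 0.0
    while s < 1.0:
        s += rng.random()
    return s

def estimate_expected_sum(trials=200_000, seed=0):
    """Monte Carlo estimate of E[S] over many independent rounds."""
    rng = random.Random(seed)
    return sum(simulate_once(rng) for _ in range(trials)) / trials

print(estimate_expected_sum(), math.e / 2)  # the estimate should be close to e/2 ≈ 1.359
```

Note that the stopped sum always lands in $[1, 2)$, since the sum was below 1 before the last draw and the last draw is below 1.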
Suppose the process stops at the $(n+1)$th draw. The condition for this is that we pick $x_1, x_2, \ldots, x_n$ and $x_{n+1}$ such that $x_1 + x_2 + \cdots + x_n < 1$ and $x_1 + x_2 + \cdots + x_n + x_{n+1} > 1$.
Let $A$ denote the event that $x_1 + x_2 + \cdots + x_n < 1$, and let $B$ denote the event that $x_1 + x_2 + \cdots + x_n + x_{n+1} > 1$.
We need $P(A \cap B)$ in order to calculate the expectation.
$P(A \cap B) = P(A) \cdot P(B \mid A)$, where $P(B \mid A)$ denotes the probability of event $B$ given that event $A$ has happened.
So let us calculate $P(A)$.
This probability is given by $\dfrac{\text{volume of the standard } n\text{-simplex}}{\text{volume of the unit hypercube in } n \text{ dimensions}}$.
The volume of the standard $n$-simplex can be found using calculus:
$$\int_0^1 \int_0^{1-x_n} \int_0^{1-x_{n-1}-x_n} \cdots \int_0^{1-x_3-\cdots-x_{n-1}-x_n} \int_0^{1-x_2-x_3-\cdots-x_{n-1}-x_n} dx_1\, dx_2\, dx_3 \cdots dx_{n-1}\, dx_n = \frac{1}{n!}.$$
Since the unit hypercube has volume 1, we get $P(A) = \frac{1}{n!}$.
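The simplex-volume value $P(x_1 + \cdots + x_n < 1) = 1/n!$ is easy to check numerically; a small Monte Carlo sketch (the helper name is mine):

```python
import math
import random

def prob_sum_below_one(n, trials=200_000, seed=1):
    """Estimate P(x_1 + ... + x_n < 1) for n iid Uniform(0,1) draws."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(trials)
               if sum(rng.random() for _ in range(n)) < 1.0)
    return hits / trials

# The simplex-volume formula predicts 1/n! for each n.
for n in (1, 2, 3):
    print(n, prob_sum_below_one(n), 1 / math.factorial(n))
```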
Now, to find $P(B \mid A)$, let us write $z = x_1 + x_2 + \cdots + x_n$.
Then $P(B \mid A)$ amounts to finding the probability that $z + x_{n+1} > 1$, where $z$ and $x_{n+1}$ are uniformly and randomly selected from $(0,1)$, which is an easy two-dimensional problem. We can even apply basic geometry to solve it: by symmetry it equals the area of the triangle formed by the line $x + y = 1$ and the coordinate axes. But let us do it by the above simplex concept, as a $2$-simplex is just a line segment:
$$P(B \mid A) = \int_0^1 \int_0^{1-x_{n+1}} dz\, dx_{n+1} = \frac{1}{2}.$$
Hence $P(A \cap B) = \frac{1}{2 \cdot n!}$.
Hence the expectation $E(S)$ is given by
$$E(S) = \sum_{n=1}^{\infty} n \cdot P(A \cap B) = \sum_{n=1}^{\infty} \frac{n}{2 \cdot n!} = \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{(n-1)!} = \frac{e}{2}.$$
Hence our answer is $\frac{100}{e} \approx 36.7879$.
Useful wiki: Simplex.
Initially I assumed the expected value of each term to be $\frac{1}{2}$. In case that isn't valid, here is another way I found (not exactly mine):
We start with a general case.
Given $N$ random numbers between 0 and 1, the probability $P_N(s)$ that their sum does not exceed $s$ is given by $\frac{s^N}{N!}$ for $s \le 1$.
I won't go into the proof for now. But if we assume it is true, and let $D_N(s)\,ds$ be the probability that the sum is between $s$ and $s + ds$, then $D_N(s)$ is just the derivative of $P_N(s)$ with respect to $s$. Thus we get
$$D_N(s)\,ds = \frac{s^{N-1}}{(N-1)!}\,ds, \qquad s \le 1.$$
Now, there is a chance that the $(N+1)$th number pushes the sum above 1. That last number must lie between $1-s$ and $1$, so its average value is $\frac{2-s}{2}$, and hence the average final sum equals $s + \frac{2-s}{2} = 1 + \frac{s}{2}$.
Hence the expected sum equals
$$\sum_{N=1}^{\infty} \int_0^1 \left(1 + \frac{s}{2}\right)(s)\left(\frac{s^{N-1}}{(N-1)!}\right) ds = \sum_{N=1}^{\infty} \frac{3N^2 + 5N}{2(N+2)!} = \sum_{N=1}^{\infty} \left(\frac{3}{2 \cdot N!} - \frac{2}{(N+1)!} + \frac{1}{(N+2)!}\right) = \frac{3}{2}(e-1) - 2(e-2) + \left(e - \frac{5}{2}\right) = \frac{e}{2}.$$
(The factor $s$ in the integrand is the probability that the next draw exceeds $1-s$.)
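The simplified series and the partial-fraction step above can be verified by direct summation; a short sketch (truncating at 40 terms, which is far beyond machine precision here):

```python
import math

def series_term(N):
    """N-th term (3N^2 + 5N) / (2 * (N+2)!) of the expected-sum series."""
    return (3 * N * N + 5 * N) / (2 * math.factorial(N + 2))

def partial_fraction_term(N):
    """The same term decomposed as 3/(2*N!) - 2/(N+1)! + 1/(N+2)!."""
    return (3 / (2 * math.factorial(N))
            - 2 / math.factorial(N + 1)
            + 1 / math.factorial(N + 2))

total = sum(series_term(N) for N in range(1, 40))
print(total, math.e / 2)  # both ≈ 1.3591409...
```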
I feel this method works. By the same method it can also be shown, as Calvin stated, that the expected number of draws $N$ required is $e$.
(This is not a solution, but an area of discussion because the validity of the problem is being questioned.)
The answer should have been $\frac{200}{e^2} E[S] = \frac{100}{e} \approx 36.7879$. Those who entered 36 or 37 were marked correct. The correct answer is now 37.
(The following comments are flawed.)
It can be shown that the probability that we need $n$ values is $\frac{1}{n(n-2)!}$. This gives us that the expected number of values needed is $\sum_n \frac{n}{n(n-2)!} = e$.
A common claim is that the expected value of the sum is thus $\frac{e}{2}$. This would be true if the expected value of each of these values in the sum were $\frac{1}{2}$. However, the expected value of each of these individual values is not $\frac{1}{2}$.
Clearly, the expected sum of $n$ values should be increasing in $n$. Let us find the expected value when exactly $n = 2$ dice are needed, which should give us a lower bound for $E[S]$.
From calculations similar to the above, we can show that the pdf has the form $p(x) = 4 - 2x$ on the range $x \in [1, 2]$, and hence $E[S \mid n = 2] = \int_1^2 x(4 - 2x)\,dx = \frac{4}{3} = 1.333\ldots$.
As such, it is easy to believe that $E[S] > \frac{e}{2} = 1.359\ldots$.
Note: Continuing in this manner, you can eventually find $E[S] = \sum_n \frac{1}{n(n-2)!} \times E[S \mid n \text{ values}]$.
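The claimed distribution of the number of draws, $P(N = n) = \frac{1}{n(n-2)!}$ for $n \ge 2$, and the expected count $E[N] = e$ can both be checked empirically; a simulation sketch (function names are mine):

```python
import math
import random
from collections import Counter

def draws_needed(rng):
    """Number of Uniform(0,1) draws needed for the running sum to reach 1."""
    s, n = 0.0, 0
    while s < 1.0:
        s += rng.random()
        n += 1
    return n

rng = random.Random(2)
trials = 200_000
counts = Counter(draws_needed(rng) for _ in range(trials))

# Compare empirical P(N = n) with 1/(n * (n-2)!) for small n.
for n in (2, 3, 4):
    print(n, counts[n] / trials, 1 / (n * math.factorial(n - 2)))

# The empirical mean of N should be close to e.
mean_n = sum(k * c for k, c in counts.items()) / trials
print(mean_n, math.e)
```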
I think the question should have been 100 E[S] / e. That would be 50.
What is wrong with applying Wald's lemma?
And for a more dependable source for the statement of Wald's lemma, see Theorem 1.1 of this note .
Wald's lemma doesn't apply, because the distribution doesn't satisfy the condition that "the number of terms in the sum is independent of the summands." (end of first paragraph, or assumption 2). Always remember to check the conditions before applying a theorem.
It does not need to. These are called general stopping time problems and the r.v. N might as well be dependent on the sequence ( X i ) . We just need it to be measurable w.r.t. the natural filtration and to satisfy some finite expectation property. See the link I posted in my previous comment.
@Abhishek Sinha – HM, I'm not too certain now. I might have a mistake in my original approach.
@Calvin Lin – If you are not too sure about the relevant advanced probability theory, my suggestion would be to just simulate the situation and see what average S you get.
@Abhishek Sinha – Thanks. I have updated the correct answer to 37.
@Abhishek Sinha – I am aware of the theory. I convinced myself that this stopping time didn't satisfy the assumptions, because I felt strongly that $E[S] > \frac{e}{2}$ as outlined above. I agree that $E[S] = \frac{e}{2}$ is likely the answer.
Let's give @Beakal Tiliksew a day to respond with his solution. I've added the flag that "the answer is currently under dispute".
@Beakal Tiliksew Care to comment?
I can't really say much about @Abhishek Sinha's statement; I couldn't follow much of Wald's equation. I have to study it.
Wait, when I first posted the question, I am pretty sure I set it up so that $\frac{e}{2}$ gives the answer of 50; this wasn't the form in which I initially wrote the question.
@Beakal Tiliksew – You initially had it as "If the average or the expected value of the sum can be written as $p$, what is $50\lfloor p \rfloor$?"
15 minutes after posting the question, you changed the statement to "If the average or the expected value of the sum can be written as $p$, what is $(200)\frac{p^2}{e^2}$?"
Because of that, I felt strongly that you had a reason for introducing e 2 .
(I made slight edits to the phrasing for clarity.)
@Calvin Lin – I guess that was an error on my part. When I finished writing the question the first time, the answer and the question didn't agree, and since I couldn't change the answer I was forced to make the question fit, perhaps in a rather peculiar way. But anyway, thanks; I shall improve on that.
Wald's equation is just a fancy way of justifying Fubini's Theorem in the context of probability theory. Fubini's Theorem is a fancy way of justifying the interchange of summation signs, in the context of analysis.
I'd likely create a set that looks into cases where $X = \sum X_i$ but $E[X] \neq \sum E[X_i]$.
There's something I don't quite follow. You say that the pdf is $p(x) = 4 - 2x$ for $1 \le x \le 2$. I think you mean the pdf conditioned on $n = 2$; then it makes sense, because $\int_1^2 (4 - 2x)\,dx = 1$.
So you compute $E[S \mid n = 2] = \frac{4}{3}$. But then by your own formula $E[S] = \sum_n \frac{1}{n(n-2)!}\, E[S \mid n]$, the contribution from the case $n = 2$ is actually $\frac{1}{2} \cdot \frac{4}{3} = \frac{2}{3}$, which is well below $e/2$.
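Both numbers in this exchange are easy to confirm by simulation: condition on the process stopping at exactly two draws, and check that the conditional mean of $S$ is $\frac{4}{3}$ while $P(N = 2) = \frac{1}{2}$ (a sketch; variable names are mine):

```python
import random

rng = random.Random(3)
trials = 300_000
sums_n2 = []  # final sums from rounds that stopped at exactly 2 draws
for _ in range(trials):
    s, n = 0.0, 0
    while s < 1.0:
        s += rng.random()
        n += 1
    if n == 2:
        sums_n2.append(s)

p_n2 = len(sums_n2) / trials            # empirical P(N = 2), should be ≈ 1/2
mean_n2 = sum(sums_n2) / len(sums_n2)   # empirical E[S | N = 2], should be ≈ 4/3
print(p_n2, mean_n2, p_n2 * mean_n2)    # contribution to E[S] should be ≈ 2/3
```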
I made the assumption that "clearly the expected sum of $n$ values should be increasing in $n$" to justify that $E[S] > E[S \mid n = 2]$, and I believed that the increase is somewhat more significant than $\frac{e}{2} - \frac{4}{3} = 0.026\ldots$.
I haven't figured out what's wrong with my logic as yet, hence the disclaimer "the following comments are flawed."
By the way, why can't we say the average value of each draw is $\frac{1}{2}$?
Define $f_n(x)$ as the probability density function of $S$ after $n$ rounds, defined on $0 \le x \le 2$. As each term is uniformly distributed, we have $f_1(x) = 1$ on $0 \le x \le 1$ and $0$ otherwise.
For $x \le 1$ we have: $f_{n+1}(x) = \int_0^x f_n(t)\,dt$.
For $x \ge 1$ we have: $f_{n+1}(x) = f_n(x) + \int_{x-1}^1 f_n(t)\,dt$. This gives:
$$f_{n+1}(x) = \frac{x^n}{n!} \quad \text{on } 0 \le x < 1,$$
$$f_{n+1}(x) = \sum_{k=0}^{n} \frac{1 - (x-1)^k}{k!} \quad \text{on } 1 \le x \le 2.$$
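As $n \to \infty$, the partial sums on $1 \le x \le 2$ should approach $e - e^{x-1}$; a quick numeric check at a few sample points (the helper name and the truncation at $k = 30$ are my choices):

```python
import math

def density_partial_sum(x, n=30):
    """Partial sum  sum_{k=0}^{n} (1 - (x-1)^k) / k!  of the density on [1, 2]."""
    return sum((1 - (x - 1) ** k) / math.factorial(k) for k in range(n + 1))

# The partial sums should approach e - e^(x-1) on [1, 2].
for x in (1.0, 1.5, 2.0):
    print(x, density_partial_sum(x), math.e - math.exp(x - 1))
```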
Integrating to get the expectation and taking the limit (using Monotone Convergence) gives $E(S) = \int_1^2 x\left(e - e^{x-1}\right) dx$. The result follows via Integration by Parts.
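The final integral can be double-checked with simple numeric quadrature; here is a sketch using composite Simpson's rule (my choice of method):

```python
import math

def integrand(x):
    """x * (e - e^(x-1)): x times the limiting density of the stopped sum."""
    return x * (math.e - math.exp(x - 1))

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule with an even number n of subintervals."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return total * h / 3

print(simpson(integrand, 1.0, 2.0), math.e / 2)  # both ≈ 1.3591409...
```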
Here is a solution that invokes a bit more formality but I hope in doing so, also resolves some of the confusion of the earlier posts. I believe it is Beakal's original solution stated more formally.
Imagine you have an infinite sequence $x_1, x_2, x_3, \ldots$ of independently chosen random numbers, each between zero and one. Of course, in any given instance of the process described in the problem, only a finite number of the $x_i$ will contribute to the sum, but it doesn't hurt to imagine that one goes on choosing random numbers that will not be used. For $n \ge 1$, let $I_n$ be an "indicator variable" defined by $I_n = 1$ if $\sum_{i=1}^n x_i < 1$ and $I_n = 0$ if $\sum_{i=1}^n x_i \ge 1$. In other words, $I_n$ indicates that the game has not yet terminated on or before the $n$th move. For convenience, I'll also define $I_0 = 1$, since certainly the game hasn't terminated before I've even chosen the first random number.
Let $S$ be the sum of random variables when the process terminates (the original problem was to find the expectation $E[S]$). Then $S = \sum_{n=1}^{\infty} I_{n-1} x_n$. Since $I_{n-1}$ depends only on $x_1, x_2, \ldots, x_{n-1}$, it is independent of $x_n$. Therefore $E[S] = \sum_{n=1}^{\infty} E[I_{n-1}]\, E[x_n] = \frac{1}{2} \sum_{n=1}^{\infty} E[I_{n-1}]$. Now, $E[I_{n-1}]$ is the probability that $\sum_{i=1}^{n-1} x_i < 1$, which is equal to $\frac{1}{(n-1)!}$ (I have not provided the proof of this here, but you can find it by Googling "volume of standard simplex"). So $E[S] = \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{(n-1)!} = \frac{e}{2}$. Now, multiplying by $\frac{200}{e^2}$ you get $36.7879\ldots$, and the closest integer is 37, which is the answer to the problem.
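Putting the last two steps of this argument into numbers:

```python
import math

# E[S] = (1/2) * sum_{n>=1} 1/(n-1)!  =  e/2 (truncating the series far past
# machine precision).
expected_S = 0.5 * sum(1 / math.factorial(n - 1) for n in range(1, 30))
answer = (200 / math.e ** 2) * expected_S  # equals 100/e
print(expected_S, answer, round(answer))   # ≈ 1.3591, 36.7879, 37
```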
Interpreting the above solution in light of the concerns in Calvin's posts: the quantity $\sum_{n=1}^{\infty} E[I_{n-1}]$ is the average number of random numbers selected before the process ends. The independence argument in the proof shows that indeed one can simply multiply this sum by $\frac{1}{2}$ to get the answer.