Let \(P(x)\) be a polynomial of degree \(d\), given by \(P(x) = c_0 + c_1 x + c_2 x^2 + \dots + c_d x^d\). Each coefficient is chosen independently and uniformly at random from the \(p\)-element set \(\mathbb{F}_p = \{0, 1, 2, \dots, p-1\}\), where \(p\) is a given prime number.
An element \(k \in \mathbb{F}_p\) is said to be a root of the polynomial modulo \(p\) if and only if \(P(k) \equiv 0 \pmod{p}\) (i.e., the prime \(p\) divides the integer \(P(k)\)).
Since the coefficients are random, so is the polynomial \(P(x)\), and so are its roots (defined strictly in the above sense). For \(p = 2017\) and \(d = 1008\), find the expected number of roots of such a random polynomial.
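To make the definition concrete, here is a minimal Python sketch (not part of the original problem) that counts the roots modulo \(p\) of one randomly drawn polynomial; the function name and the small parameters \(p = 7\), \(d = 3\) are illustrative choices only.

```python
import random

def count_roots_mod_p(coeffs, p):
    """Count the k in {0, ..., p-1} with P(k) ≡ 0 (mod p),
    where coeffs = [c_0, c_1, ..., c_d]."""
    roots = 0
    for k in range(p):
        # Evaluate P(k) mod p by Horner's rule, reducing at every step.
        value = 0
        for c in reversed(coeffs):
            value = (value * k + c) % p
        if value == 0:
            roots += 1
    return roots

# A small prime and degree, chosen only for illustration.
p, d = 7, 3
coeffs = [random.randrange(p) for _ in range(d + 1)]  # c_0, ..., c_d uniform over F_p
print(coeffs, "->", count_roots_mod_p(coeffs, p), "roots mod", p)
```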
Remark: The crucial technique used in the proof is first conditioning on the random variables \(c_1, c_2, \dots, c_d\) and then removing the conditioning at the end. This is a very useful technique in theoretical computer science and discrete mathematics and goes under the name of the Principle of deferred decisions. If interested, you may search for this keyword on the internet. Good luck!
I call it the Law of Total Expectation, namely
\[ E(X) = E_Y\big(E_{X \mid Y}(X \mid Y)\big) \]
Of course, different disciplines can have a different name for the same result.
This also goes by the name "law of iterated expectation", especially in the probability theory literature.
@Abhishek Sinha – It is also called Adam's Law. I have no idea why.
An elementary way to rewrite Abhishek's proof is to write out the sample space of \(p^{d+1}\) points (all possible coefficient vectors \((c_0, c_1, \dots, c_d)\)). We then observe that \(P(x+a) \bmod p\) is another polynomial in the sample space for every \(a\) in the field, and the map \(P(x) \mapsto P(x+a)\) is a bijection. It is simple to see that \(0\) is a root for exactly \(p^d\) points in the sample space (those with \(c_0 = 0\)). Using the bijection, we infer that every element of the field is a root for \(p^d\) points in the sample space. Therefore, we have \(p \cdot p^d = p^{d+1}\) roots in total over \(p^{d+1}\) polynomials, which means on average we have one root per polynomial. A brute-force check of this counting claim is sketched below.
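The following Python sketch (not from the original discussion) exhaustively verifies the counting argument for small, hypothetical parameters \(p = 3\), \(d = 2\): every field element should turn out to be a root for exactly \(p^d\) of the \(p^{d+1}\) coefficient vectors.

```python
from itertools import product

def root_counts(p, d):
    """For each a in F_p, count the coefficient vectors (c_0, ..., c_d)
    for which a is a root of P(x) modulo p."""
    counts = [0] * p
    for coeffs in product(range(p), repeat=d + 1):
        for a in range(p):
            if sum(c * pow(a, i, p) for i, c in enumerate(coeffs)) % p == 0:
                counts[a] += 1
    return counts

p, d = 3, 2
print(root_counts(p, d))       # expected: [9, 9, 9], i.e. [p**d] * p
print(sum(root_counts(p, d)))  # expected: 27 = p**(d+1) roots over p**(d+1) polynomials
```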
Let \(Z_k\) be a random variable which is equal to \(1\) if \(k\) is a root of the polynomial \(P(x)\), and \(0\) otherwise. Hence the number of roots \(N\) of a random polynomial \(P(x)\) may be conveniently expressed as \(N = \sum_{k=0}^{p-1} Z_k\). Taking the expectation of both sides and using the linearity of expectation, we have \[ \mathbb{E}N = \sum_{k=0}^{p-1} \mathbb{E}Z_k = \sum_{k=0}^{p-1} \mathbb{P}(Z_k = 1). \] Now we will evaluate each of the above probabilities.
Note that \(k \in \mathbb{F}_p\) is a root of the polynomial (i.e., \(Z_k = 1\)) if and only if \(c_0 \equiv -(c_1 k + c_2 k^2 + \dots + c_d k^d) \pmod{p}\). Let us now fix a realization of the coefficients \(\{c_1, c_2, \dots, c_d\}\). Then the right-hand side of the above equation is a fixed number in the set \(\mathbb{F}_p\). Since \(c_0\) is chosen independently and uniformly at random from the set \(\mathbb{F}_p\), the probability that \(Z_k = 1\), conditional on that realization of \(\{c_1, c_2, \dots, c_d\}\), is the same as the probability that the random variable \(c_0\) equals the right-hand side. Since \(c_0\) is independent of every other coefficient, this conditional probability is simply \(\frac{1}{p}\). Since this probability does not depend on the particular realization of \(\{c_1, c_2, \dots, c_d\}\), we have \(\mathbb{P}(Z_k = 1) = \frac{1}{p}\) for all \(k \in \mathbb{F}_p\). Thus \[ \mathbb{E}N = p \times \frac{1}{p} = 1. \quad \blacksquare \]
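As a sanity check, here is a small Monte Carlo sketch of this result in Python (not part of the original solution). The parameters \(p = 101\), \(d = 50\) and the trial count are hypothetical, chosen smaller than the problem's \(p = 2017\), \(d = 1008\) only to keep the run fast; the proof gives \(\mathbb{E}N = 1\) for every prime \(p\) and degree \(d\).

```python
import random

def num_roots(coeffs, p):
    """Number of k in F_p with P(k) ≡ 0 (mod p), via Horner evaluation."""
    roots = 0
    for k in range(p):
        value = 0
        for c in reversed(coeffs):
            value = (value * k + c) % p
        if value == 0:
            roots += 1
    return roots

p, d, trials = 101, 50, 2000   # hypothetical values, smaller than the problem's 2017 and 1008
total = 0
for _ in range(trials):
    coeffs = [random.randrange(p) for _ in range(d + 1)]  # c_0, ..., c_d uniform over F_p
    total += num_roots(coeffs, p)
print(total / trials)          # empirical average; should be close to 1
```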