Find \(f(x)\) such that
\(\displaystyle \int_{-\infty}^{+\infty} \log(f(x))\,f(x)\,dx = S \) is maximised,
Given that
\(\displaystyle \int_{-\infty}^{+\infty} f(x)\,dx = 1\)
\(\displaystyle \int_{-\infty}^{+\infty} x^{2} f(x)\,dx = k \quad (k \in \mathbb{R})\)
Additionally, you may use the fact that \(f(x)\) is an even function.
Please help if you can; I am unable to solve it.
#Math
#Help
#Integral
Comments
First of all, I think you want to maximize the differential entropy, i.e. minimize (not maximize) the quantity \(\displaystyle \int_{-\infty}^{\infty} \log(f(x))\,f(x)\,dx\), subject to the given conditions on \(f(x)\). We start with a tiny little exercise:
Exercise: Show that for any two probability densities \(f(x)\) and \(g(x)\), the following holds: \[\int_{-\infty}^{\infty} f(x)\,\log\frac{f(x)}{g(x)}\,dx \ge 0 \quad \ldots (1)\] with equality iff \(f(x) \equiv g(x)\;\forall x \in \mathbb{R}\).
[Hint: Use the inequality \(\exp(y) \ge 1 + y\) for a suitably chosen \(y\), and the fact that both \(f(x)\) and \(g(x)\) are probability densities and hence integrate to 1.]
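In case it helps, here is a sketch of how the hinted argument can go (with \(y = \log t\), the hint's \(\exp(y) \ge 1 + y\) becomes \(\log t \le t - 1\); apply it to \(t = g(x)/f(x)\)):
\[\int_{-\infty}^{\infty} f(x)\,\log\frac{g(x)}{f(x)}\,dx \;\le\; \int_{-\infty}^{\infty} f(x)\left(\frac{g(x)}{f(x)} - 1\right)dx \;=\; \int_{-\infty}^{\infty} g(x)\,dx - \int_{-\infty}^{\infty} f(x)\,dx \;=\; 1 - 1 \;=\; 0,\]
which is exactly (1) after flipping the sign. Equality forces \(\log t = t - 1\), which holds only at \(t = 1\), i.e. \(f(x) \equiv g(x)\).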
Now back to the main problem. Since we are free to choose any probability density \(g(x)\) in the inequality (1), let us take \(g(x)\) to be Gaussian with mean zero and variance \(k\), i.e., \[g(x) = \frac{1}{\sqrt{2\pi k}}\exp\left(-x^{2}/2k\right).\] The inequality (1) then simplifies to \[\int_{-\infty}^{\infty} \log(f(x))\,f(x)\,dx \;\ge\; \int_{-\infty}^{\infty} f(x)\left(\log\frac{1}{\sqrt{2\pi k}} - \frac{x^{2}}{2k}\right)dx \;=\; \log\frac{1}{\sqrt{2\pi k}} - \frac{1}{2}, \quad \ldots (2)\]
where we have used the given constraints on \(f(x)\) in the last step: the normalisation gives the first term, and \(\int x^{2} f(x)\,dx = k\) reduces the second term to \(-k/2k = -1/2\).
Now that's cool, because the right hand side of Eqn. (2) is independent of the particular function \(f(x)\). This means that for every \(f(x)\) satisfying the given two conditions, the quantity \(\int_{-\infty}^{\infty} \log(f(x))\,f(x)\,dx\) is at least \(\log\frac{1}{\sqrt{2\pi k}} - \frac{1}{2} = -\frac{1}{2}\log(2\pi e k)\). So we have a lower bound. But is the lower bound achievable? Certainly yes! It is attained iff we have equality in Eqn. (1), i.e. when \(f(x)\) is indeed the zero mean Gaussian with variance \(k\).
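If anyone wants a quick numerical sanity check of Eqn. (2), here is a small sketch (my own illustration, not part of the proof): it compares \(\int f \log f\) for the variance-\(k\) Gaussian against a Laplace density with the same variance, which should come out strictly larger.

```python
import numpy as np
from scipy.integrate import quad

k = 2.0  # the given second-moment constraint (any positive value works)

# Gaussian with mean 0 and variance k -- the claimed minimiser
def gauss(x):
    return np.exp(-x**2 / (2 * k)) / np.sqrt(2 * np.pi * k)

# Laplace density with the same variance k (variance = 2*b**2, so b = sqrt(k/2))
b = np.sqrt(k / 2)
def laplace(x):
    return np.exp(-abs(x) / b) / (2 * b)

def f_log_f(f):
    # the integrand f*log(f) -> 0 as f -> 0, so guard against log(0)
    def integrand(x):
        fx = f(x)
        return fx * np.log(fx) if fx > 0 else 0.0
    return quad(integrand, -np.inf, np.inf)[0]

print(f_log_f(gauss))                       # ~ -1.7655 for k = 2
print(-0.5 * np.log(2 * np.pi * np.e * k))  # the bound in (2): same value
print(f_log_f(laplace))                     # ~ -1.6931, strictly larger
```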
@Mvs Saketh I have a strange feeling that it is of the form \(Ae^{-Bx^{2k}}\) or something, possibly \(Ae^{-Bx^{2}}\)... Maybe we can modify the values of \(A, B\) so that it satisfies the given conditions?
The fact that your guess was spot on is another indication of your genius!!
Yup! That's a Gaussian function, which can be used as the probability distribution function, but we have to apply a few constraints to it, mostly because Maxwell sir used a lot of assumptions in his theory.
I know the answer bro, it can be found online as well :)
And yes, that is the answer. After that, I can impose the constraints and find A and B; in fact I have already done that. I want a rigorous, clear proof to reach there, please show your method.
It was just a guess. The function should be even. It should tend to zero at both infinities. And it should be more than 1 for some x. When it is more than 1, \(\log(f(x))\) will also be positive, hence our integral will increase. When it is less than 1, \(\log(f(x))\) will be negative and our integral will decrease. We have to somehow make f(x) increase rapidly and then decrease rapidly to 0, hence I suggested an exponential. Is it possible to prove this mathematically? I mean, with 12th standard calculus knowledge?
Hi
Actually, I am just a novice at Statistical Mechanics. I only started a bit of light reading on it this month (specifically after the English exam).
Currently I am only familiar with the ensemble approaches, which use a bit of Combinatorics. But since you are using an integral, and you intend to find a function which maximizes that integral, I believe it will take a bit of time and a bit of higher knowledge at that.
If you want an ensemble approach, I can help you out with it, or you might also want to look it up on the Internet.
But still, this problem is quite fascinating and I will work on it. First of all I will have to do a bit of reading (of a higher standard) on it, though.
Best of luck to you. And do let us know if you make a breakthrough (which I know you will! NSEP Scholar :D)
@Ronak Agarwal @Shashwat Shukla @Deepanshu Gupta @Raghav Vaidyanathan @Azhaghu Roopesh M and anyone else who can help,
Can you please tell me where you saw this? I am just curious to know.
Alright, I am trying to prove Maxwell's speed distribution for a gas.
So here is my approach (do help if you can spot something in it).
Let me first consider any particular velocity component.
Let \(f(v_x)\,dv_x\) represent the probability that a particle has velocity component lying between \(v_x\) and \(v_x + dv_x\).
(Now I shall replace \(v_x\) with \(x\) for convenience.)
Now, the entropy for a system is given as the negative average of the logarithm of the probability density, and hence I got
\[S = -\int_{-\infty}^{+\infty} \log(f(x))\,f(x)\,dx\]
Additionally, I know that the expected energy of a molecule due to its motion along the x-axis is \(\frac{kT}{2}\),
and it is equal to \[E = \int_{-\infty}^{+\infty} \frac{mx^{2}}{2}\,f(x)\,dx = \frac{kT}{2}\] (the expected energy).
Finally, being a probability density, its integral throughout should be 1.
Now, assuming that entropy is always maximised, I have the question above.
So I know for sure that if I am able to maximise the integral by appropriately choosing \(f(x)\), I will have the expected speed distribution.
(I have alternative proofs, but they all start out discrete and then go to the continuum limit, and others just mathematically predict the function; a more rigorous proof would be satisfactory, so please help if you can.)
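An aside that may help connect this to the guessed form: once the entropy argument forces \(f(x) = Ae^{-Bx^{2}}\), the two constraints stated above pin down \(A\) and \(B\) via the standard Gaussian integrals (a sketch, using the \(kT/2\) energy constraint):
\[\int_{-\infty}^{+\infty} Ae^{-Bx^{2}}\,dx = A\sqrt{\frac{\pi}{B}} = 1 \;\Rightarrow\; A = \sqrt{\frac{B}{\pi}},\]
\[\int_{-\infty}^{+\infty} \frac{mx^{2}}{2}\,Ae^{-Bx^{2}}\,dx = \frac{m}{4B} = \frac{kT}{2} \;\Rightarrow\; B = \frac{m}{2kT},\]
so that \(f(v_x) = \sqrt{\frac{m}{2\pi kT}}\,e^{-mv_x^{2}/2kT}\), the Maxwell distribution for a single velocity component.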
I believe I don't have the right tactics to deal with this kind of problem. You may ask such questions on Physics Stack Exchange.
This is a really interesting problem. And yes, I'm pretty sure it's very difficult too (i.e. I don't think I can do it). But please do post the answer here when you end up solving it!
Instead of using entropy, how about using a Gaussian function, just like Raghav suggested?
For example, \(P(v_x) = Ae^{-Bv_x^{2}}\).
And as per your point, a probability function must be normalised.
\[\therefore 1 = \int_{-\infty}^{\infty} P(v_x)\,dv_x\]
Then we'll get \(A = \sqrt{\frac{B}{\pi}}\), and then I don't know how to follow it up. Maybe take the average of the mean squared value of the velocity, but wouldn't that lead to us needing the degrees of freedom of the gas?
This is just a possibility though.
Is it possible for you to just list out the other methods that you've tried? I just want to look at them as a source of inspiration.
And I'll also ask some of my other friends about it and let you know :)
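Following up on that: here is a small symbolic sketch of exactly this follow-up. It assumes the per-component energy constraint \(\langle mv_x^{2}/2 \rangle = kT/2\) from the original post (so no extra degrees of freedom are needed); the variable names are just for illustration.

```python
import sympy as sp

x, A, B, m, k, T = sp.symbols('x A B m k T', positive=True)

# The guessed form P(v_x) = A*exp(-B*v_x^2), writing x for v_x
P = A * sp.exp(-B * x**2)

# Normalisation: P must integrate to 1 over the real line
norm = sp.integrate(P, (x, -sp.oo, sp.oo))        # A*sqrt(pi/B)
A_val = sp.solve(sp.Eq(norm, 1), A)[0]            # A = sqrt(B/pi)

# Energy constraint <m x^2 / 2> = k*T/2 pins down B
energy = sp.integrate(m * x**2 / 2 * P.subs(A, A_val), (x, -sp.oo, sp.oo))
B_val = sp.solve(sp.Eq(energy, k * T / 2), B)[0]  # B = m/(2*k*T)

print(sp.simplify(A_val.subs(B, B_val)))  # sqrt(m/(2*pi*k*T))
print(B_val)                              # m/(2*k*T)
```

This reproduces the \(A = \sqrt{B/\pi}\) above and gives \(B = m/2kT\), i.e. the same Maxwell component distribution as the entropy argument.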
In fact, even Maxwell did not derive it using entropy; he very brilliantly proved that only the exponential function can describe the speed distribution. I know that proof, but still.
And I want to use minimal combinatorics; a combinatorial proof involves assuming discrete levels and, after getting the answer, making it continuous. I want to avoid that.
This has just gone over my head! But bro, I am just curious to know: are you not preparing for JEE Mains? Or have you already prepared very well?
I just google these interesting things in my free time and try to work on them.
@Akshay Bodhare