Without using a calculator, how would you determine if terms of the form \( b_1\sqrt{a_1}+b_2\sqrt{a_2}+\cdots+b_n\sqrt{a_n} \) are positive? I am mainly interested in the case where the \( a_i, b_i \) are integers.
When there are 5 or fewer terms involved, we can try to split the terms and square both sides, to reduce the number of surds involved. For example, to determine if \( -\sqrt{2}+\sqrt{3}+\sqrt{5}+\sqrt{7}-\sqrt{17}>0 \), we can square both sides of \( \sqrt{3}+\sqrt{5}+\sqrt{7}>\sqrt{2}+\sqrt{17} \) to obtain \( 15+2\sqrt{15}+2\sqrt{21}+2\sqrt{35}>19+2\sqrt{34} \).
Repeated squaring eventually resolves this question, as the number of surds is reduced. This is a foolproof method, regardless of the numbers involved.
However, when there are 6 or more terms involved, repeated squaring need not reduce the number of surds involved.
E.g. How would you determine if \( \sqrt{2}+\sqrt{5}+\sqrt{13} > \sqrt{3}+\sqrt{7}+\sqrt{11} \)?
I can think of several approaches:
There are special cases which allow us to apply Jensen's inequality. However, this places a somewhat restrictive condition on the set of values.
Find a middle number \( c \) and show that \( \sqrt{2}+\sqrt{5}+\sqrt{13} < c < \sqrt{3}+\sqrt{7}+\sqrt{11} \). However, it might not be feasible to guess what the middle number is, unless you already had a calculator. Taylor approximations might offer an approximation, though the bound could be very tight (and hence require many terms).
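As an illustration of the middle-number idea for the 6-term example \( \sqrt{2}+\sqrt{5}+\sqrt{13} \) vs \( \sqrt{3}+\sqrt{7}+\sqrt{11} \), here is a minimal sketch using Python's `decimal` module (which of course is a calculator, exactly what finding \( c \) by hand would avoid; the candidate \( c = 7.3 \) is my own, found numerically):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30  # far more digits than needed for this check

# the two sides of the 6-term example from the question
lhs = Decimal(2).sqrt() + Decimal(5).sqrt() + Decimal(13).sqrt()
rhs = Decimal(3).sqrt() + Decimal(7).sqrt() + Decimal(11).sqrt()

c = Decimal("7.3")  # hypothetical middle number, found by inspection
print(lhs < c < rhs)  # True: lhs ≈ 7.2558, rhs ≈ 7.6944
```

Verifying each side against a rational \( c \) by hand would still require squaring, so this only relocates the difficulty rather than removing it.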
Do you have any other suggestions?
Comments
For the given example, conjugate surds do the trick. It is equal to \( A\left[(\sqrt{2}+\sqrt{5}+\sqrt{13})^2-(\sqrt{3}+\sqrt{7}+\sqrt{11})^2\right] \) where \( A \) is a positive real.
Expanding the numerator, I get \( -1+2\left((\sqrt{10}-\sqrt{21})+(\sqrt{26}-\sqrt{33})+(\sqrt{65}-\sqrt{77})\right) \), which is obviously negative.
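The expansion above can be sanity-checked with floating point (using a computer, unlike the original puzzle); a minimal sketch:

```python
import math

# direct form: (√2+√5+√13)² − (√3+√7+√11)²
direct = (math.sqrt(2) + math.sqrt(5) + math.sqrt(13)) ** 2 \
       - (math.sqrt(3) + math.sqrt(7) + math.sqrt(11)) ** 2

# expanded form: -1 + 2((√10−√21) + (√26−√33) + (√65−√77))
expanded = -1 + 2 * ((math.sqrt(10) - math.sqrt(21))
                     + (math.sqrt(26) - math.sqrt(33))
                     + (math.sqrt(65) - math.sqrt(77)))

print(abs(direct - expanded) < 1e-9, expanded < 0)  # True True
```

Each parenthesized pair compares \( \sqrt{m} \) with \( \sqrt{M} \) for \( m < M \), so every pair is negative and no numerics are actually needed for the sign.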
Log in to reply
Wait, so how is this different from squaring both sides in Calvin's example?
Log in to reply
Admittedly both methods yield the same computations :p. The reasoning is different though (algebraic manipulations vs increasing functions)
Whether there's an efficient method for this in general is a well-known open problem in computational complexity; see http://en.wikipedia.org/wiki/Sum_of_radicals (in particular, I believe it's not even known whether this problem is in NP, let alone P).
There are, of course, a bunch of inefficient ways to figure out whether a sum like \( S=\sum_{i=1}^n b_i\sqrt{a_i} \) is positive or negative. One way to do this is to estimate the size of \( S \) by first computing the product of all of its conjugates,
\[ P=\left|\,\prod \sum \pm b_i\sqrt{a_i}\,\right|, \]
where the product runs over all \( 2^n \) choices of signs,
and then applying the estimate
\[ |S|\ \ge\ \frac{P}{\left(\sum |b_i|\sqrt{a_i}\right)^{2^n-1}}. \]
Now, to determine whether S is positive or negative, it just suffices to approximate S to within this lower bound for ∣S∣, which you can do via Taylor series or any other convergent expansion.
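A sketch of this root-separation bound in code, applied to the 6-term example from the question (high-precision `decimal` arithmetic stands in for "any convergent expansion"; all names are my own):

```python
from decimal import Decimal, getcontext
from itertools import product

getcontext().prec = 120  # P can have ~75 digits here, so keep plenty of slack

# S = Σ b_i √a_i for the 6-term example: √2+√5+√13 − √3−√7−√11
terms = [(1, 2), (1, 5), (1, 13), (-1, 3), (-1, 7), (-1, 11)]  # (b_i, a_i)
n = len(terms)

def conjugate(signs):
    """One conjugate Σ ±b_i √a_i for a given tuple of signs."""
    return sum(Decimal(s * b) * Decimal(a).sqrt() for s, (b, a) in zip(signs, terms))

# P = |product over all 2^n sign patterns| -- a symmetric function of the
# ±√a_i choices, hence (for integer a_i, b_i) in fact an integer
P = Decimal(1)
for signs in product((1, -1), repeat=n):
    P *= conjugate(signs)
P = abs(P)

# every conjugate is at most Σ|b_i|√a_i in absolute value, which gives
# |S| ≥ P / (Σ|b_i|√a_i)^(2^n − 1)
crude = sum(Decimal(abs(b)) * Decimal(a).sqrt() for b, a in terms)
lower_bound = P / crude ** (2 ** n - 1)

S = conjugate((1,) * n)
print(S < 0, 0 < lower_bound <= abs(S))  # True True
```

The bound is astronomically weak (here the denominator is roughly \( 15^{63} \)), but it is an explicit positive separation, so approximating \( S \) to that many digits settles the sign rigorously.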
Log in to reply
Great that you know about this, and have related it to computational complexity theory.
So, if we don't know of an efficient method that applies to all situations, can we figure out efficient methods that apply under certain conditions? In math, we sometimes develop tools that work according to the conditions in various scenarios, and then break the problem up into cases.
For example, taking the idea of Jensen's: is \( \sum \operatorname{sgn}(b_i)\, b_i^2 a_i = 0 \) a sufficient condition for us to apply Jensen's to obtain information about the sign?
Log in to reply
That condition isn't enough; one easy way to see this is that if you flip all the signs of the \( b_i \), that sum stays equal to 0 whereas the original expression switches sign.
The inequality you probably want is a generalization of Jensen's inequality called Karamata's majorization inequality (http://en.wikipedia.org/wiki/Karamata's_inequality). In addition to your condition above, you also need the positive terms to 'majorize' the negative terms (see the wikipedia link for more information). Of course, this only applies in a very limited range of situations.
Log in to reply
Yes, the majorization inequality is a sufficient condition, and can be interpreted as Jensen's applied term wise. Essentially, you work from the smallest value, similar to the inequality chain that I presented in my reply to Ivan.
I'm not quite familiar with "Jensen's inequality". Could anyone please elaborate a bit further? I would be most grateful!
Log in to reply
Jensen's is a statement of convexity / concavity.
A function \( f(x) \) is concave on an interval \( [a,b] \) if, for any \( x \) and \( y \) in the interval and \( 0\le t\le 1 \),
\[ f(tx+(1-t)y)\ge tf(x)+(1-t)f(y). \]
If the function is also twice differentiable, we can simply check that \( f''(x)<0 \).
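For instance, the second-derivative check for the square-root function (used in the example below) is a one-line computation:
\[ f(x)=\sqrt{x}\ \Longrightarrow\ f'(x)=\frac{1}{2\sqrt{x}},\qquad f''(x)=-\frac{1}{4x^{3/2}}<0 \quad\text{for } x>0, \]
so \( \sqrt{x} \) is concave on \( (0,\infty) \).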
If \( f(x) \) is a concave function (over the entire real line), then the maximum value of \( \sum_{i=1}^n f(x_i) \) subject to \( \sum_{i=1}^n x_i=K \) (for some constant \( K \)) will be attained when all \( x_i \) values are the same, namely \( \frac{K}{n} \).
As an example, since \( \sqrt{x} \) is a concave function on \( [0,\infty) \) (show this using both ways!), we know that
\[ \sqrt{3}+\sqrt{5}\le 2\sqrt{\tfrac{8}{2}}=4. \]
If you evaluate the LHS, you get \( \sqrt{3}+\sqrt{5}\approx 3.968 \). Of course, the condition that \( \sum x_i=K \) is extremely restrictive.
As an extension, to apply it to the inequality that I stated, you can show that
\[ \sqrt{3}+\sqrt{7}+\sqrt{11}\ \ge\ \sqrt{2}+\sqrt{7}+\sqrt{12}\ \ge\ \sqrt{2}+\sqrt{5}+\sqrt{14}\ \ge\ \sqrt{2}+\sqrt{5}+\sqrt{13}. \]
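The chain above can be spot-checked numerically; a minimal sketch (each step either spreads one pair of radicands apart while keeping the pair's sum, or decreases a single radicand):

```python
import math

# √3+√7+√11 ≥ √2+√7+√12 ≥ √2+√5+√14 ≥ √2+√5+√13
chain = [(3, 7, 11), (2, 7, 12), (2, 5, 14), (2, 5, 13)]
sums = [sum(math.sqrt(x) for x in triple) for triple in chain]
print(all(a >= b for a, b in zip(sums, sums[1:])))  # True
```

The point of the hand argument, of course, is that each step follows from concavity alone, with no decimal approximations required.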
Log in to reply
And Jensen's Inequality says that if a function is concave, then for any \( \lambda_1+\lambda_2+\lambda_3+\cdots+\lambda_n=1 \),
\[ f(\lambda_1 x_1+\lambda_2 x_2+\lambda_3 x_3+\cdots+\lambda_n x_n)\ge \lambda_1 f(x_1)+\lambda_2 f(x_2)+\lambda_3 f(x_3)+\cdots+\lambda_n f(x_n). \]
You can use Jensen's Inequality to prove AM-GM by using \( f(x)=\log x \) and \( \lambda_1=\lambda_2=\lambda_3=\cdots=\lambda_n=\frac{1}{n} \). Already, we know that \( \log x \) is concave (we can check the second derivative), and we also know \( \lambda_1+\lambda_2+\lambda_3+\cdots+\lambda_n=n\cdot\frac{1}{n}=1 \), so
\[ \frac{a_1+a_2+a_3+\cdots+a_n}{n}\ge \sqrt[n]{a_1 a_2 a_3\cdots a_n} \]
\[ \log\left(\tfrac{1}{n}a_1+\tfrac{1}{n}a_2+\tfrac{1}{n}a_3+\cdots+\tfrac{1}{n}a_n\right)\ge \log\sqrt[n]{a_1 a_2 a_3\cdots a_n}=\tfrac{1}{n}\log a_1+\tfrac{1}{n}\log a_2+\tfrac{1}{n}\log a_3+\cdots+\tfrac{1}{n}\log a_n \]
Plugging in our definitions, we get
\[ f(\lambda_1 x_1+\lambda_2 x_2+\lambda_3 x_3+\cdots+\lambda_n x_n)\ge \lambda_1 f(x_1)+\lambda_2 f(x_2)+\lambda_3 f(x_3)+\cdots+\lambda_n f(x_n), \]
which is true by Jensen's Inequality.
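As a quick numerical spot check of the AM-GM conclusion (a sketch, not part of the original argument):

```python
import math
import random

random.seed(1)
for _ in range(100):
    a = [random.uniform(0.1, 10.0) for _ in range(6)]
    am = sum(a) / len(a)                    # arithmetic mean
    gm = math.prod(a) ** (1.0 / len(a))     # geometric mean
    assert am >= gm  # AM ≥ GM; floating point, so only a spot check
print("all checks passed")
```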
Log in to reply
Oscar, in your statement of Jensen's inequality, you also need the \( \lambda_i \) to be nonnegative.
Log in to reply
Yes you do. One way of remembering Jensen's is that (for concave functions) the secant line of the function lies below the graph. This only holds within the region bounded by your endpoints, and is false otherwise. Thus, the values of \( \lambda \) need to lie within \( [0,1] \).
As an explicit example, consider \( f(x)=-x^2 \). Take the points \( x_1=1 \) and \( x_2=2 \). How do we reach \( 0 \) by connecting the lines? We want \( 0=\lambda\cdot 1+(1-\lambda)\cdot 2 \), which gives us \( \lambda=2 \). Then, your claim is
\[ 0=f(0)=f(2\times 1+(-1)\times 2)\ge 2f(1)+(-1)f(2)=2, \]
which is false.
Thank you very much guys. :)
http://prntscr.com/1lxjuk
I have this comparison, but it would be tedious work especially with more terms
Log in to reply
That's a good idea, choosing to bound the difference of terms by one another.
However, this gets very complicated very quickly. There is no guarantee that there is an equal number of positive and negative terms, nor is it clear how we should bound one difference by another. It is possible that the total sum has absolute value less than 0.1, while each of the differences has absolute value more than 1. In this case, such an approach would not work.
Log in to reply
haha thanks :P I can only do this because I haven't learned all sorts of funny things from my country's education system just yet. I could only come up with this :D
Are you sure about five surds?
\[ -\sqrt{2}+\sqrt{3}+\sqrt{5}+\sqrt{7}-\sqrt{17}>0 \]
\[ \sqrt{3}+\sqrt{5}+\sqrt{7}>\sqrt{2}+\sqrt{17} \]
\[ 15+2\sqrt{15}+2\sqrt{21}+2\sqrt{35}>19+2\sqrt{34} \]
\[ 2\sqrt{15}+2\sqrt{21}+2\sqrt{35}>4+2\sqrt{34} \]
I don't think the number of surds is decreased. We still have five surds (although one is simply \( \sqrt{16}=4 \) and the rest have a factor of 2 out). I can't see a way to continue; how would you do it?
Log in to reply
I'm more concerned with looking at the sum of many terms. With few terms, there could be various tricks that help us slightly reduce the number. In that sense, 5/6 was chosen arbitrarily.
It is also not immediately clear that 6 cannot work. There could be some kind of algebraic manipulation that reduces the number.
I'm slightly surprised that no one mentioned using convergents (and semi-convergents) to approximate square roots by rational numbers using continued fractions, despite there being a question, Close to the root, which uses this method.
This gives us a quick way to approximate the surd using rational numbers, which are much easier to add and multiply.
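A minimal sketch of computing such convergents, using the standard recurrence for the periodic continued fraction of a quadratic surd (the function name is my own):

```python
from fractions import Fraction
from math import isqrt

def sqrt_convergents(n, count):
    """First `count` continued-fraction convergents p/q of sqrt(n),
    for n a positive non-square integer."""
    a0 = isqrt(n)
    m, d, a = 0, 1, a0          # state of the CF expansion of sqrt(n)
    p_prev, p = 1, a0           # numerator recurrence
    q_prev, q = 0, 1            # denominator recurrence
    out = [Fraction(p, q)]
    for _ in range(count - 1):
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d       # next partial quotient
        p_prev, p = p, a * p + p_prev
        q_prev, q = q, a * q + q_prev
        out.append(Fraction(p, q))
    return out

print(sqrt_convergents(2, 5))  # 1, 3/2, 7/5, 17/12, 41/29
```

Since consecutive convergents \( p/q \) of \( \sqrt{n} \) satisfy \( |p/q-\sqrt{n}|<1/q^2 \), a handful of them already pins each surd down tightly enough to compare modest sums exactly with rational arithmetic.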
Log in to reply
Maclaurin series or something? I'm not very sure about all these exquisite formulas :P
Log in to reply
No, the approaches presented in the solutions generally require knowledge of continued fractions and convergents/semi-convergents.
I added a solution using Farey sequences to justify the answer. However, this is not ideal because it doesn't explain where to find the approximations (apart from slowly searching).