\[ \int_{-1}^{1} \left| x^3 - a - bx - cx^2 \right|^2 \, dx \]
Let a , b , and c be real constants.
If the minimum value of the integral above can be expressed as \( \frac{A}{B} \), where A and B are coprime positive integers, find A + B.
Notation: |·| denotes the absolute value function.
I liked your proof, thank you very much (+1). I think this proof is easy and beautiful because "everybody" can understand it...
Good observation: it's just a quadratic in disguise, and we can apply more basic methods to solve this problem.
The minimum is attained when a = 0, b = 3/5, c = 0, and \( \frac{A}{B} = \frac{8}{175} \). Basically, the proof finds the best approximation to x³ from the vector space of polynomials of degree at most two, using an inner product...
Preliminaries before the solution
In math we have to fight to avoid ambiguities. (Sorry for my English and some of the names.)
Definition.- A set is a well-defined collection of objects or elements. For more information you can consult Set, mathematical definition. There was a controversy at the beginning of the 20th century (the foundational crisis) because of the definition of a "set"; for more information, you can consult Russell's paradox. I recommend an algebra book to get you started: Algebra (Thomas Hungerford).
Definition.- A nonempty set A with two operations (a + b, a × b) is said to be a commutative ring if it satisfies:
1.- Properties of + :
1.a) Associative property: a + (b + c) = (a + b) + c, ∀a, b, c ∈ A.
1.b) Commutative property: a + b = b + a, ∀a, b ∈ A.
1.c) ∃ a neutral element for + (we'll call it 0) such that a + 0 = a = 0 + a, ∀a ∈ A.
1.d) ∀a ∈ A ∃ an inverse element for +, i.e., ∀a ∈ A ∃ −a such that a + (−a) = 0 = (−a) + a.
2.- Properties of × :
2.a) Associative property: a × (b × c) = (a × b) × c, ∀a, b, c ∈ A.
2.b) Commutative property: a × b = b × a, ∀a, b ∈ A. Because of this property, the ring is commutative.
3.- Properties with + and × :
Distributive properties:
∀a, b, c ∈ A, a × (b + c) = a × b + a × c.
∀a, b, c ∈ A, (a + b) × c = a × c + b × c.
If there exists an identity element 1 for ×, i.e., a·1 = a = 1·a, ∀a ∈ A, we'll say that A is a commutative ring with identity.
An element a⁻¹ ∈ A such that a⁻¹·a = a·a⁻¹ = 1 is an inverse element of a under ×.
A commutative ring with identity in which every nonzero element (each element distinct from 0) has an inverse element is called a field.
Definition.- A nonempty set E is said to be a vector space over a field K... (Pause, to be continued?)
Definition.- A norm on a vector space V over a field K (for us, K will be ℝ or ℂ) is a function ‖·‖ : V → ℝ₊ that satisfies:
a) ‖x‖ ≥ 0, ∀x ∈ V, and ‖x‖ = 0 ⟺ x = 0.
b) ‖x + y‖ ≤ ‖x‖ + ‖y‖, ∀x, y ∈ V.
c) ‖λ·x‖ = |λ|·‖x‖, ∀λ ∈ K, ∀x ∈ V.
Definition.- A normed vector space is a vector space with a norm.
Associated with the norm we have a distance: d(x, y) = ‖x − y‖, ∀x, y ∈ V. You can prove:
a) d(x, y) ≥ 0, and d(x, y) = 0 ⟺ x = y.
b) d(x, y) = d(y, x).
c) d(x, z) ≤ d(x, y) + d(y, z) (triangle inequality): d(x, z) = ‖x − z‖ = ‖x − y + y − z‖ ≤ ‖x − y‖ + ‖y − z‖ = d(x, y) + d(y, z).
Examples.-
a) A = (ℝ, |·|) is a normed vector space over ℝ, where |·| is the absolute value. Furthermore, A is a Banach space, i.e., a normed vector space where every Cauchy sequence converges. The converse is always true: if a sequence converges (in a normed vector space), it is a Cauchy sequence.
b) C([−1, 1]) = {f : [−1, 1] → ℝ ; f is continuous} is a vector space over ℝ, with the norm \( \|f\| = \left( \int_{-1}^{1} f(t)^2 \, dt \right)^{1/2} \). This is the example we need for this exercise. We shall see that this norm comes from an inner product ("dot product") and prove that it really is a norm.
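As a sanity check, this inner product ⟨f, g⟩ = ∫₋₁¹ f(t) g(t) dt can be approximated numerically. Below is a minimal sketch using a composite Simpson rule in stdlib Python; the helper names `inner` and `norm` are ours, not from any library.

```python
import math

def inner(f, g, n=2000):
    """Composite Simpson approximation of the inner product
    <f, g> = integral of f(t) g(t) over [-1, 1] (n must be even)."""
    h = 2.0 / n
    total = f(-1.0) * g(-1.0) + f(1.0) * g(1.0)
    for i in range(1, n):
        t = -1.0 + i * h
        total += (4 if i % 2 else 2) * f(t) * g(t)
    return total * h / 3.0

def norm(f):
    """The norm induced by the inner product: ||f|| = sqrt(<f, f>)."""
    return math.sqrt(inner(f, f))

print(norm(lambda t: t))                    # sqrt(2/3), approximately 0.8165
print(inner(lambda t: t, lambda t: t * t))  # odd integrand, approximately 0
```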
We only need 3 theorems and the notions of orthogonality and orthonormality.
Definition.-
Two vectors v and w are said to be orthogonal if v·w = ⟨v, w⟩ = 0. They are orthonormal if, in addition, each vector has length 1, i.e., ⟨v, v⟩ = 1. Notation: x ⊥ y means x is orthogonal to y, i.e., ⟨x, y⟩ = 0.
Definition.- (E, ⟨·,·⟩) is a pre-Hilbert space iff E is a vector space and ⟨·,·⟩ is an inner product.
Now I'm going to give the definitions and theorems necessary for solving this problem, without proofs, and after that I'll give the solution. Finally, I'll give the proofs of these theorems, and maybe I'll keep working on this site, creating a wiki...
Theorem 1.- Cauchy–Schwarz inequality (generalisation)
Let (E, ⟨·,·⟩) be a pre-Hilbert space. Then:
a) ∀x, y ∈ E, |⟨x, y⟩|² ≤ ⟨x, x⟩·⟨y, y⟩ (Cauchy–Schwarz inequality).
b) We can define a norm on E: if ‖x‖ = +√⟨x, x⟩, ∀x ∈ E, then ‖·‖ is a norm.
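The Cauchy–Schwarz inequality is easy to spot-check numerically. The sketch below tests it for the standard dot product on ℝ⁵ with random vectors; the helper name `dot` is ours.

```python
import random

def dot(x, y):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(x, y))

random.seed(0)
for _ in range(1000):
    x = [random.uniform(-1, 1) for _ in range(5)]
    y = [random.uniform(-1, 1) for _ in range(5)]
    # |<x, y>|^2 <= <x, x> <y, y>, with a tiny slack for floating point
    assert dot(x, y) ** 2 <= dot(x, x) * dot(y, y) + 1e-12
print("Cauchy-Schwarz held on all 1000 random samples")
```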
Definition.- Let (E, ‖·‖) be a normed vector space, S ⊆ E a subset, and x ∈ E. Then:
a) The distance from x to S is d(x, S) = inf{d(x, y) : y ∈ S} = inf{‖x − y‖ : y ∈ S}.
b) If there exists y ∈ S such that d(x, S) = d(x, y) = ‖x − y‖, then y is a best approximation to x from S.
Examples.-
1.- In ℝ², let S = {(x, y) ∈ ℝ² : x² + y² < 1} and x = (0, 2). Then d(x, S) = 1; nevertheless, there does not exist a best approximation to x from S.
2.- In ℝ², let S = {(x, y) ∈ ℝ² : x = 0 ∨ y = 1} and x = (1, 0). Then d(x, S) = 1 and there exist 2 best approximations to x from S: y₁ = (0, 0), y₂ = (1, 1).
Theorem 2.-
Let (E, ⟨·,·⟩) be a pre-Hilbert space, F ⊆ E a vector subspace, and x ∈ E. Then:
a) If there is a best approximation to x from F, it is unique.
b) y ∈ F is a best approximation to x from F ⟺ x − y is orthogonal to F.
Theorem 3.- (Gram–Schmidt, 1907)
Let (E, ⟨·,·⟩) be a pre-Hilbert space and {x₁, ..., xₙ, ...} a set of linearly independent vectors. Define
\[ y_1 := x_1, \quad u_1 := \frac{y_1}{\|y_1\|}, \qquad y_n := x_n - \sum_{j=1}^{n-1} \langle x_n, u_j \rangle u_j, \quad u_n := \frac{y_n}{\|y_n\|}, \quad n \ge 2. \]
Then {u₁, ..., uₙ} is an orthonormal (and linearly independent) set ∀n ∈ ℕ, and Span{x₁, ..., xₙ} = Span{u₁, ..., uₙ}.
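The recursion in theorem 3 translates directly into code. Below is a minimal sketch for the standard dot product on ℝⁿ; the function names are ours, and this is an illustration under that assumption, not code from the text.

```python
import math

def dot(x, y):
    """Standard dot product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors:
    y_n = x_n - sum_j <x_n, u_j> u_j, then u_n = y_n / ||y_n||."""
    us = []
    for x in vectors:
        y = list(x)
        for u in us:
            c = dot(x, u)
            y = [yi - c * ui for yi, ui in zip(y, u)]
        ny = math.sqrt(dot(y, y))
        us.append([yi / ny for yi in y])
    return us

us = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
# The Gram matrix <u_i, u_j> should be (numerically) the identity.
for u in us:
    print([round(dot(u, v), 10) for v in us])
```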
Solution of this problem
Let C([−1, 1]) = {f : [−1, 1] → ℝ ; f is continuous} be the vector space of continuous functions on [−1, 1] over ℝ, with the inner product (it is left to the reader to check that it is an inner product)
\[ \langle f, g \rangle = \int_{-1}^{1} f(t) \, \overline{g(t)} \, dt = \int_{-1}^{1} f(t) \, g(t) \, dt, \]
since g(t) is a real continuous function. The problem is to find the best approximation to x³ from the vector subspace P₂([−1, 1]) of polynomials of degree at most two on [−1, 1] over ℝ. I'm going to use the Gram–Schmidt theorem and theorem 2.
(Of course, there are more solutions; for instance, one quick solution is to take Legendre polynomials (this solution is direct), or to use Chebyshev's theorem or the minimax method... or @Pi Han Goh's proof, etc.)
{1, t, t²} is a basis of P₂([−1, 1]), and {1, t, t², t³, ...} is a basis of P([−1, 1]) (the vector space of polynomials).
\[ y_1 = 1, \quad \langle 1, 1 \rangle = \int_{-1}^{1} 1 \, dt = 2 \ \Rightarrow\ u_1 = \frac{\sqrt{2}}{2}. \]
\[ y_2 = t - \left( \int_{-1}^{1} \frac{\sqrt{2}}{2} t \, dt \right) \frac{\sqrt{2}}{2} = t, \quad \langle t, t \rangle = \frac{2}{3} \ \Rightarrow\ u_2 = \sqrt{\frac{3}{2}} \, t. \]
\[ y_3 = t^2 - \frac{1}{2} \int_{-1}^{1} t^2 \, dt - \frac{3}{2} t \int_{-1}^{1} t^3 \, dt = t^2 - \frac{1}{3} \ \Rightarrow\ u_3 = \frac{y_3}{\|y_3\|}. \]
And finally, we only need
\[ y_4 = t^3 - \frac{1}{2} \int_{-1}^{1} t^3 \, dt - \frac{3}{2} t \int_{-1}^{1} t^4 \, dt - \frac{\langle t^3, y_3 \rangle}{\langle y_3, y_3 \rangle} y_3 = t^3 - \frac{3}{5} t, \]
i.e., the unique best approximation to x³ from P₂([−1, 1]) is (3/5)x, and this means
\[ \min_{a, b, c \in \mathbb{R}} \int_{-1}^{1} |x^3 - a - bx - cx^2|^2 \, dx = \int_{-1}^{1} \left( t^3 - \frac{3}{5} t \right)^2 dt = \frac{8}{175}. \]
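The computation above can be verified exactly with rational arithmetic, using the fact that on [−1, 1] the monomials satisfy ⟨t^m, t^n⟩ = 2/(m+n+1) when m+n is even and 0 otherwise. A minimal Python sketch, with our own helper names:

```python
from fractions import Fraction as F

def inner(p, q):
    """Exact inner product of polynomials on [-1, 1], given as coefficient
    lists [c0, c1, ...]: <t^m, t^n> = 2/(m+n+1) if m+n is even, else 0."""
    s = F(0)
    for m, a in enumerate(p):
        for n, b in enumerate(q):
            if (m + n) % 2 == 0:
                s += F(a) * F(b) * F(2, m + n + 1)
    return s

# Subtract from t^3 its components along the orthogonal polynomials
# y_1 = 1, y_2 = t, y_3 = t^2 - 1/3 found above.
t3 = [0, 0, 0, 1]
ys = [[1], [0, 1], [F(-1, 3), 0, 1]]
resid = [F(c) for c in t3]
for y in ys:
    c = inner(t3, y) / inner(y, y)
    for k, yk in enumerate(y):
        resid[k] -= c * F(yk)

print([str(c) for c in resid])   # ['0', '-3/5', '0', '1'], i.e. y_4 = t^3 - (3/5) t
print(inner(resid, resid))       # 8/175, the minimum value of the integral
```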
Details.-
{1, t, t², t³} is a basis of P₃([−1, 1]) = {polynomials of degree at most three on [−1, 1] over ℝ}. By Gram–Schmidt, {y₁, y₂, y₃, y₄} is an orthogonal basis of P₃([−1, 1]), and y₄ = x³ − (3/5)x is orthogonal to P₂([−1, 1]) with this inner product. Because the problem is to find the best approximation to x³ from P₂([−1, 1]), using theorem 2, it is not necessary to calculate u₄ or to multiply y₄ by a real number...
Theorem 1.- Cauchy–Schwarz inequality (generalisation)
Let (E, ⟨·,·⟩) be a pre-Hilbert space. Then:
a) ∀x, y ∈ E, |⟨x, y⟩|² ≤ ⟨x, x⟩·⟨y, y⟩ (Cauchy–Schwarz inequality).
b) We can define a norm on E: if ‖x‖ = +√⟨x, x⟩, ∀x ∈ E, then ‖·‖ is a norm.
Proof of theorem 1.-
a) ∀x, y ∈ E and λ ∈ K = ℝ or ℂ,
\[ 0 \le \langle x - \lambda y, x - \lambda y \rangle = \langle x, x \rangle - \bar{\lambda} \langle x, y \rangle - \lambda \langle y, x \rangle + |\lambda|^2 \langle y, y \rangle. \]
Write ⟨y, x⟩ = |⟨y, x⟩|·e^{iθ} = b·e^{iθ}, and let λ = t·e^{−iθ} with t ∈ ℝ. Then
\[ 0 \le \langle x, x \rangle - t e^{i\theta} b e^{-i\theta} - t e^{-i\theta} b e^{i\theta} + t^2 \langle y, y \rangle = \langle x, x \rangle - 2tb + t^2 \langle y, y \rangle, \quad \forall t \in \mathbb{R}. \]
For this to hold ∀t ∈ ℝ, the discriminant of this quadratic in t must be less than or equal to 0. This means
\[ 4b^2 - 4 \langle y, y \rangle \langle x, x \rangle \le 0 \ \Rightarrow\ b^2 \le \langle y, y \rangle \langle x, x \rangle \ \Rightarrow\ |\langle x, y \rangle|^2 \le \langle x, x \rangle \langle y, y \rangle. \quad \square \text{ (q.e.d.)} \]
b) If ‖x‖ = +√⟨x, x⟩, ∀x ∈ E, then ‖·‖ is a norm.
i) ‖x‖ = +√⟨x, x⟩ ≥ 0, ∀x ∈ E ⟺ ‖x‖² = ⟨x, x⟩ ≥ 0, ∀x ∈ E. (True.)
‖x‖ = +√⟨x, x⟩ = 0 ⟺ ‖x‖² = ⟨x, x⟩ = 0 ⟺ x = 0.
ii) ‖λx‖² = |λ|²·‖x‖² ⇒ ‖λx‖ = |λ|·‖x‖, ∀λ ∈ K, ∀x ∈ E.
iii)
\[ \|x+y\|^2 = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + 2\,\mathrm{Re}\,\langle x, y \rangle + \|y\|^2 \le \|x\|^2 + 2|\langle x, y \rangle| + \|y\|^2 \]
and, applying the Cauchy–Schwarz inequality,
\[ \le \|x\|^2 + 2\|x\| \cdot \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2. \]
Therefore ‖x + y‖² ≤ (‖x‖ + ‖y‖)² ⇒ ‖x + y‖ ≤ ‖x‖ + ‖y‖. This last inequality is sometimes called the Minkowski inequality. □ (End of the proof of theorem 1)
Theorem 2.-
Let (E, ⟨·,·⟩) be a pre-Hilbert space, F ⊆ E a vector subspace, and x ∈ E. Then:
a) If there is a best approximation to x from F, it is unique.
I'll use Pythagoras's theorem: if x ⊥ y, then ‖x + y‖² = ‖x‖² + ‖y‖².
Proof: ‖x + y‖² = ⟨x + y, x + y⟩ = ⟨x, x⟩ + ⟨y, x⟩ + ⟨x, y⟩ + ⟨y, y⟩ = ‖x‖² + ‖y‖², since the cross terms vanish when x ⊥ y. □ (q.e.d.)
Proof of a).- Suppose that we have already proved b).
Reductio ad absurdum. Suppose that y₁ and y₂ are two best approximations to x from F with y₁ ≠ y₂; then ‖y₁ − y₂‖ > 0, and since x − y₂ ⊥ F (by b)) and y₂ − y₁ ∈ F, Pythagoras gives
\[ \|x - y_1\|^2 = \|x - y_2 + y_2 - y_1\|^2 = \|x - y_2\|^2 + \|y_2 - y_1\|^2 > \|x - y_2\|^2. \]
(Contradiction.) □
b) y ∈ F is a best approximation to x from F ⟺ x − y is orthogonal to F.
⇒) Let z ∈ F be any vector with ‖z‖ = 1, and define w = y + ⟨x − y, z⟩·z ∈ F. Because y is a best approximation to x from F,
\[ \|x-y\|^2 \le \|x-w\|^2 = \langle x - y - \langle x-y, z \rangle z,\ x - y - \langle x-y, z \rangle z \rangle \]
\[ = \|x-y\|^2 - \langle x-y, z \rangle \langle z, x-y \rangle - \overline{\langle x-y, z \rangle} \langle x-y, z \rangle + |\langle x-y, z \rangle|^2 \|z\|^2 \]
and, using ‖z‖ = 1,
\[ = \|x-y\|^2 - |\langle x-y, z \rangle|^2 - |\langle x-y, z \rangle|^2 + |\langle x-y, z \rangle|^2 = \|x-y\|^2 - |\langle x-y, z \rangle|^2 \ \Rightarrow\ |\langle x-y, z \rangle|^2 = 0. \quad \square \]
⇐) Let z ∈ F be any vector; then y − z ∈ F and, applying Pythagoras's theorem (x − y ⊥ y − z),
\[ \|x-z\|^2 = \|x-y+y-z\|^2 = \|x-y\|^2 + \|y-z\|^2 \ge \|x-y\|^2, \]
so y is a best approximation to x from F. □ (End of the proof of theorem 2)
Theorem 3.- (Gram–Schmidt, 1907)
Let (E, ⟨·,·⟩) be a pre-Hilbert space and {x₁, ..., xₙ, ...} a set of linearly independent vectors. Define
\[ y_1 := x_1, \quad u_1 := \frac{y_1}{\|y_1\|}, \qquad y_n := x_n - \sum_{j=1}^{n-1} \langle x_n, u_j \rangle u_j, \quad u_n := \frac{y_n}{\|y_n\|}, \quad n \ge 2. \]
Then {u₁, ..., uₙ} is an orthonormal (and linearly independent) set ∀n ∈ ℕ, and Span{x₁, ..., xₙ} = Span{u₁, ..., uₙ}.
Proof of theorem 3.-
By induction:
1.- n = 1 is trivial. Suppose the result is true for n ≥ 1, n ∈ ℕ; we are going to prove that it is true for n + 1.
2.-
\[ y_{n+1} = x_{n+1} - \sum_{j=1}^{n} \langle x_{n+1}, u_j \rangle u_j \ \Rightarrow\ \langle y_{n+1}, u_j \rangle = \langle x_{n+1}, u_j \rangle - \langle x_{n+1}, u_j \rangle = 0, \quad \forall j \in \{1, 2, \ldots, n\}, \]
because {u₁, u₂, ..., uₙ} is an orthonormal set. This implies that {u₁, ..., u_{n+1}} is an orthonormal set and Span{x₁, ..., x_{n+1}} = Span{u₁, ..., u_{n+1}}, because u_{n+1} ∈ Span{x₁, ..., x_{n+1}} and x_{n+1} ∈ Span{u₁, ..., u_{n+1}}. □
Now I'm going to give a corollary of this theorem, and another proof similar to the first solution.
Corollary of theorem 3 .-
Let (E, ⟨·,·⟩) be a pre-Hilbert space and M ⊆ E a finite-dimensional vector subspace. Then ∀x ∈ E there exists a best approximation Pₙ(x) to x from M. If {u₁, u₂, ..., uₙ} is an orthonormal basis of M, then
\[ P_n(x) = \sum_{i=1}^{n} \langle x, u_i \rangle u_i \qquad \text{and} \qquad d(x, M)^2 = \|x\|^2 - \sum_{i=1}^{n} |\langle x, u_i \rangle|^2. \]
Solution 2 of this problem, based on the previous solution and the corollary
Only the u₂ term of the sum survives, because x³ is odd while u₁ and u₃ are even:
\[ \min_{a, b, c \in \mathbb{R}} \int_{-1}^1 |x^3 - a - bx - cx^2|^2 \, dx = d\big(x^3, P_2([-1,1])\big)^2 = \int_{-1}^1 x^6 \, dx - \left| \int_{-1}^1 \sqrt{\tfrac{3}{2}} \, x^4 \, dx \right|^2 = \frac{2}{7} - \frac{6}{25} = \frac{50 - 42}{175} = \frac{8}{175}. \quad \square \]
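This two-term subtraction is quick to check exactly with rational arithmetic; a sketch with our own variable names:

```python
from fractions import Fraction as F

norm_x_sq = F(2, 7)              # <t^3, t^3> = integral of t^6 on [-1, 1] = 2/7
# |<t^3, sqrt(3/2) t>|^2 = (3/2) * (integral of t^4)^2 = (3/2) * (2/5)^2 = 6/25
coef_sq = F(3, 2) * F(2, 5) ** 2
print(norm_x_sq - coef_sq)       # 8/175
```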
Proof of Corollary of theorem 3 .-
∀i = 1, 2, ..., n, fixed but arbitrary, ⟨x − Pₙ(x), uᵢ⟩ = ⟨x, uᵢ⟩ − ⟨x, uᵢ⟩ = 0 ⇒ x − Pₙ(x) ⊥ Span{u₁, u₂, ..., uₙ} = M ⇒ Pₙ(x) is the unique best approximation to x from M, due to theorem 2.
Then
\[ d(x, M)^2 = \langle x - P_n(x), x - P_n(x) \rangle = \langle x - P_n(x), x \rangle = \|x\|^2 - \sum_{i=1}^{n} |\langle x, u_i \rangle|^2. \quad \square \]
This theory will be applied and continued in this problem
You can easily solve this by realising that the integral simplifies to a quadratic expression. What's left to do is completing the square.
My solution has been edited
Originally it wasn't like this
This looks like a lot of important information in one solution.
I think it's better for you to write your contributions in some note/wiki so that other people can easily access it. What do you think about it?
@Pi Han Goh – It's very easy to make a copy, and I can fix it and make a wiki, but you have to do one thing. Please respect my work, only this, and you have to mention two names: my teacher and me.
@Guillermo Templado – My teacher for this subject was Bernardo Cascales
@Guillermo Templado – He is a great teacher, one of the best that I have had
@Guillermo Templado – He's a teacher at the University of Murcia, Spain
@Guillermo Templado – I'm sorry. I don't know what you're trying to say here. I'm requesting you to publish a wiki that showcases all these interesting linear algebra facts. I'm sure a lot of linear algebra enthusiasts will be interested in reading them.
I don't see how knowing those two names will help anybody.
@Pi Han Goh – ok, please, will the person or people who manipulated my solution come here, please?
@Guillermo Templado – I'm sorry. What is going on here? Who is manipulating your solution? How is that the subject matter in the first place?
@Pi Han Goh – my words from my original solution have been manipulated. Please, if I'm working, respect my work. And I need a preview for my work; is that possible, please?
@Guillermo Templado – What has been manipulated? It doesn't look much different from the last time I saw it.
Plus, I doubt any moderators/staff who have edited your solution have an ill intention to screw up your work. Nobody here is deliberately trying to sabotage you.
Very detailed and well written. Agreed with Pi Han that parts of this could be added to the existing vector space or Gram-Schmidt process wikis.
I'm interested in developing some interesting theorems (Gram-Schmidt and others) that can be useful for the community, and for other problems... Anyway, you can write a proof too, if you want...
Hey, are you on Slack? We're currently developing some wikis, are you interested to join in?
@Pi Han Goh – I was on Slack... I left it. I got bored, haha, but I'm interested in developing wikis... I would like to finish this part. I liked it so much when I was a student. The subject was Numerical Analysis II... In Numerical Analysis I you can find Newton's method, the fixed point theorem, and so many theorems using computers... In this second part, you don't need so many computer algorithms. It's an introduction to Functional Analysis... Yes, I'm interested in creating wikis... but I'm very busy too; little by little. I want to learn electricity and magnetism, and I want to install Python or Pascal on my computer... so many things...
@Guillermo Templado – Come back to Slack if you're interested. Notify @Eli Ross if you're interested in building up a particular wiki.
@Pi Han Goh – Yup, I'm interested in the Riesz, Riesz–Fischer, Korovkin, and Weierstrass theorems, Legendre polynomials, and applications of Chebyshev polynomials. Example: Exercise.- Find the polynomial q of third degree that minimizes \( \max_{-1 \le x \le 1} |x^4 - q(x)| \). Also Fourier series (introduction), trigonometric polynomials, the minimax and least squares methods, optimization, ...
@Guillermo Templado – Those are great wikis to develop! I look forward to seeing your contributions.
Thanks for such a detailed explanation. However, I do not think that you have yet made it evident why we should be finding u 4 . I see some references to it, but I think you can make it more explicit.
@Calvin Lin – Although u₄ is orthogonal to P₂([−1, 1]) (and has norm 1), theorem 2 can mislead you... We are looking for a best approximation to x³ from P₂([−1, 1])... Anyway, I'll keep adding details and working on it...
@Guillermo Templado – I'll be back here later, and I'll start by defining an inner product and a vector space, and a group and everything, if it's necessary for this exercise. But please, don't touch my work and my comments, please leave me alone, and please, let's respect each other.
@Calvin Lin – haha, don't make me laugh or cry, please
@Guillermo Templado – Who is this person talking? You are not Pi Han Goh, are you?
Can you explain more?
Yes, of course; give me some time, please. I want to develop a whole solution, but I need at least a week... I could even say a wiki... Day after day, you can keep checking this problem; I'll leave something written. I'm going to start right now...
Yes, sure. This is a very nice question.
@Kushal Bose – I have finished the proof of this problem. I'm now going to prove theorems 1, 2, and 3, and maybe I'll continue with more theorems. I hope you find this useful...
For all real x , ∣ x ∣ 2 = x 2 , so
\[ \begin{aligned} I_{a,b,c} &= \int_{-1}^1 |x^3 - a - bx - cx^2|^2 \, dx \\ &= \int_{-1}^1 (x^3 - a - bx - cx^2)^2 \, dx \\ &= \int_{-1}^1 \left[ x^6 - 2c\, x^5 + (c^2 - 2b)\, x^4 + (2bc - 2a)\, x^3 + (b^2 + 2ac)\, x^2 + 2ab\, x + a^2 \right] dx \\ &= \left[ \frac{x^7}{7} - \frac{2c}{6}\, x^6 + \frac{c^2 - 2b}{5}\, x^5 + \frac{2bc - 2a}{4}\, x^4 + \frac{b^2 + 2ac}{3}\, x^3 + \frac{2ab}{2}\, x^2 + a^2 x \right]_{x=-1}^{x=1} \\ &= \frac{2}{105} \left( 105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15 \right). \end{aligned} \]
What's left is to minimize the expression 105a² + 70ac + 35b² − 42b + 21c² + 15 via completing the square:
\[ \begin{aligned} 105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15 &= (105a^2 + 70ac + 21c^2) + (35b^2 - 42b + 15) \\ &= 105\left( a^2 + \tfrac{2}{3} ac + \tfrac{1}{5} c^2 \right) + 35\left( b^2 - \tfrac{6}{5} b + \tfrac{3}{7} \right) \\ &= 105\left[ \left( a + \tfrac{c}{3} \right)^2 + \tfrac{c^2}{5} - \tfrac{c^2}{9} \right] + 35\left[ \left( b - \tfrac{3}{5} \right)^2 - \tfrac{9}{25} + \tfrac{3}{7} \right] \\ &= 105\left[ \left( a + \tfrac{c}{3} \right)^2 + \tfrac{4}{45} c^2 \right] + 35\left( b - \tfrac{3}{5} \right)^2 + \tfrac{12}{5} \\ &= 105\left( a + \tfrac{c}{3} \right)^2 + \tfrac{28}{3} c^2 + 35\left( b - \tfrac{3}{5} \right)^2 + \tfrac{12}{5} \\ &\ge 0 + 0 + 0 + \tfrac{12}{5} = \tfrac{12}{5}, \end{aligned} \]
because x 2 ≥ 0 for all real x .
Hence, \( I_{a,b,c} \ge \frac{2}{105} \cdot \frac{12}{5} = \frac{8}{175} \), with equality when \( a + \frac{c}{3} = b - \frac{3}{5} = c = 0 \Rightarrow (a, b, c) = \left( 0, \frac{3}{5}, 0 \right) \). The answer is 8 + 175 = 183.
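The completed square can be double-checked exactly with rational arithmetic. The sketch below verifies the algebraic identity at random rational points and evaluates the minimum; the helper names are ours.

```python
from fractions import Fraction as F
import random

def Q(a, b, c):
    """The quadratic 105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15."""
    return 105*a*a + 70*a*c + 35*b*b - 42*b + 21*c*c + 15

def Q_sq(a, b, c):
    """Its completed-square form from the derivation above."""
    return 105*(a + c/3)**2 + F(28, 3)*c*c + 35*(b - F(3, 5))**2 + F(12, 5)

random.seed(1)
for _ in range(200):
    a, b, c = (F(random.randint(-50, 50), 7) for _ in range(3))
    assert Q(a, b, c) == Q_sq(a, b, c)   # the identity holds exactly
    assert Q(a, b, c) >= F(12, 5)        # never below the minimum value

print(F(2, 105) * Q(0, F(3, 5), 0))      # 8/175 at (a, b, c) = (0, 3/5, 0)
```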