Gram-Schmidt Or?

Calculus Level 5

\[ \large \int_{-1}^1 \Big|x^3 - a - bx - cx^2\Big|^2 \, dx \]

Let $a, b,$ and $c$ be real constants.

If the minimum value of the integral above can be expressed as $\dfrac{A}{B}$, where $A$ and $B$ are coprime positive integers, find $A + B$.

Notation: $|\cdot|$ denotes the absolute value function.


The answer is 183.


2 solutions

Pi Han Goh
Oct 24, 2016

For all real $x$, $|x|^2 = x^2$, so

\[ \begin{aligned} I_{a,b,c} &= \int_{-1}^1 |x^3 - a - bx - cx^2|^2 \, dx \\ &= \int_{-1}^1 (x^3 - a - bx - cx^2)^2 \, dx \\ &= \int_{-1}^1 \left[ x^6 + x^5(-2c) + x^4(-2b + c^2) + x^3(-2a + 2bc) + x^2(2ac + b^2) + x(2ab) + a^2 \right] dx \\ &= \left[ \dfrac17 x^7 + \dfrac{-2c}{6} x^6 + \dfrac{-2b+c^2}{5} x^5 + \dfrac{-2a+2bc}{4} x^4 + \dfrac{2ac+b^2}{3} x^3 + \dfrac{2ab}{2} x^2 + a^2 x \right]_{x=-1}^{x=1} \\ &= \dfrac{2}{105} \left( 105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15 \right) \end{aligned} \]
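As a sanity check on the algebra, the closed form can be compared against a direct numerical quadrature of the integral. A small Python sketch (the function names and the midpoint rule are my choices, not part of the solution):

```python
import random

def integral_numeric(a, b, c, n=20000):
    # Midpoint-rule approximation of the integral of (x^3 - a - b x - c x^2)^2 on [-1, 1]
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        x = -1.0 + (k + 0.5) * h
        total += (x**3 - a - b * x - c * x**2) ** 2
    return total * h

def integral_closed_form(a, b, c):
    # The closed form derived above: (2/105)(105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15)
    return (2.0 / 105.0) * (105 * a * a + 70 * a * c + 35 * b * b - 42 * b + 21 * c * c + 15)

random.seed(0)
for _ in range(5):
    a, b, c = (random.uniform(-2, 2) for _ in range(3))
    assert abs(integral_numeric(a, b, c) - integral_closed_form(a, b, c)) < 1e-5
```

Agreement over random $(a, b, c)$ gives confidence that no term was dropped in the expansion.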

What's left is to minimize the expression $105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15$ by completing the square:

\[ \begin{aligned} 105a^2 + 70ac + 35b^2 - 42b + 21c^2 + 15 &= (105a^2 + 70ac + 21c^2) + (35b^2 - 42b + 15) \\ &= 105 \left( a^2 + \dfrac23 ac + \dfrac15 c^2 \right) + 35 \left( b^2 - \dfrac65 b + \dfrac37 \right) \\ &= 105 \left[ \left( a + \dfrac c3 \right)^2 + \dfrac{c^2}5 - \dfrac{c^2}9 \right] + 35 \left[ \left( b - \dfrac35 \right)^2 - \dfrac{9}{25} + \dfrac37 \right] \\ &= 105 \left[ \left( a + \dfrac c3 \right)^2 + \dfrac{4c^2}{45} \right] + 35 \left( b - \dfrac35 \right)^2 + \dfrac{12}5 \\ &= 105 \left( a + \dfrac c3 \right)^2 + \dfrac{28}3 c^2 + 35 \left( b - \dfrac35 \right)^2 + \dfrac{12}5 \\ &\geq 0 + 0 + 0 + \dfrac{12}5 = \dfrac{12}5 \, , \end{aligned} \]

because $x^2 \geq 0$ for all real $x$.

Hence, $I_{a,b,c} \geq \dfrac{2}{105} \cdot \dfrac{12}{5} = \dfrac{8}{175}$, with equality when $a + \dfrac c3 = b - \dfrac35 = c = 0 \Rightarrow (a,b,c) = \left( 0, \dfrac35, 0 \right)$. The answer is $8 + 175 = \boxed{183}$.
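The completed-square minimum can also be double-checked in exact rational arithmetic. A sketch using Python's `fractions` module (the helper name `Q` is mine):

```python
from fractions import Fraction
import random

def Q(a, b, c):
    # The quadratic to be minimized (the 2/105 prefactor is applied afterwards)
    return 105 * a * a + 70 * a * c + 35 * b * b - 42 * b + 21 * c * c + 15

# Value at the claimed minimizer (a, b, c) = (0, 3/5, 0)
Q_min = Q(0, Fraction(3, 5), 0)
assert Q_min == Fraction(12, 5)

I_min = Fraction(2, 105) * Q_min
assert I_min == Fraction(8, 175)
assert I_min.numerator + I_min.denominator == 183

# Spot-check that no random (a, b, c) beats the bound 12/5
random.seed(1)
for _ in range(1000):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    assert Q(a, b, c) >= 2.4 - 1e-9
```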

I like your proof, thank you very much (+1). I think this proof is easy and beautiful because "everybody" can understand it....

Guillermo Templado - 4 years, 7 months ago

Good observation that it's just a quadratic in disguise, letting us apply more basic methods to solve this problem.

Calvin Lin Staff - 4 years, 7 months ago

The minimum is attained at $a = 0, b = 3/5, c = 0$, and $\frac{A}{B} = \frac{8}{175}$. Basically, the proof is to find the best approximation to $x^3$ from the vector space of polynomials of degree at most two, with respect to an inner product....


Preliminaries before the solution

In math we have to fight to avoid ambiguities. (Sorry for my English and some terminology.)

Definition.- A set is a well-defined collection of objects or elements. For more information you can consult Set (mathematical definition). There was a controversy at the beginning of the 20th century (the foundational crisis) over the definition of a "set"; for more information, see Russell's paradox. I recommend an algebra book to get you started: Algebra (Thomas Hungerford).

Definition.- A nonempty set $A$ with two operations ($a + b$, $a \times b$) is said to be a commutative ring if it satisfies:

1.- Properties of $+$:

  1.a) Associative property: $a + (b + c) = (a + b) + c, \ \forall a, b, c \in A$.

  1.b) Commutative property: $a + b = b + a, \ \forall a, b \in A$.

  1.c) $\exists$ a neutral element for $+$ (we'll call it $0$), such that $a + 0 = a = 0 + a, \ \forall a \in A$.

  1.d) $\forall a \in A \ \exists$ an inverse element for $+$, i.e., $\forall a \in A \ \exists \ {-a}$ such that $a + (-a) = 0 = (-a) + a$.

2.- Properties of $\times$:

  2.a) Associative property: $a \times (b \times c) = (a \times b) \times c, \ \forall a, b, c \in A$.

  2.b) Commutative property: $a \times b = b \times a, \ \forall a, b \in A$. Because of this property, the ring is commutative.

3.- Properties with $+$ and $\times$:

  Distributive properties:

$\forall a, b, c \in A, \ a \times (b + c) = a \times b + a \times c$.

$\forall a, b, c \in A, \ (a + b) \times c = a \times c + b \times c$.

If there exists an identity element $1$ for $\times$, i.e., $a \cdot 1 = a = 1 \cdot a, \ \forall a \in A$, we'll say that $A$ is a commutative ring with identity.

An element $a^{-1} \in A$ such that $a^{-1} \cdot a = a \cdot a^{-1} = 1$ is an inverse element of $a$ under $\times$.

A commutative ring with identity in which every nonzero element (each element distinct from $0$) has an inverse element is called a field.

Definition.- A nonempty set $E$ is said to be a vector space over a field $K$... (Pause, to be continued?)

Definition.- A norm on a vector space $V$ over a field $K$ (for us, $K$ will be $\mathbb{R}$ or $\mathbb{C}$) is a function $\|\cdot\| : V \to \mathbb{R}^{+}$ that satisfies:

a) $\|x\| \ge 0, \ \forall x \in V$, and $\|x\| = 0 \iff x = 0$

b) $\|x + y\| \leq \|x\| + \|y\|, \ \forall x, y \in V$

c) $\|\lambda \cdot x\| = |\lambda| \cdot \|x\|, \ \forall \lambda \in K, \ \forall x \in V$

Definition.- A normed vector space is a vector space with a norm.

Associated with the norm we will have a distance: $d(x,y) = \|x - y\|, \ \forall x, y \in V$. You can prove:

a) $d(x,y) \ge 0$, and $d(x,y) = 0 \iff x = y$

b) $d(x,y) = d(y,x)$

c) $d(x,z) \leq d(x,y) + d(y,z)$ (triangle inequality): $d(x,z) = \|x - z\| = \|x - y + y - z\| \leq \|x - y\| + \|y - z\| = d(x,y) + d(y,z)$.

Examples.-

a) $A = (\mathbb{R}, |\cdot|)$ is a normed vector space over $\mathbb{R}$, where $|\cdot|$ is the absolute value. Furthermore, $A$ is a Banach space, meaning a normed vector space where every Cauchy sequence converges. The converse is always true, i.e., if a sequence converges (in a normed vector space), it is a Cauchy sequence.

b) $\mathcal{C}([-1,1]) = \{ f : [-1,1] \to \mathbb{R} : f \text{ is continuous}\}$ is a vector space over $\mathbb{R}$, with the norm $\sqrt{\int_{-1}^1 f(t)^2 \, dt}$. This is the example we need in this exercise. We shall see that this norm comes from an inner product ("dot product") and prove that it really is a norm.

We only need $\boxed{\text{3 theorems}}$ and the notions of orthogonality and orthonormality.

Definition.-

Two vectors $v$ and $w$ are said to be orthogonal if $v \cdot w = \langle v, w \rangle = 0$. They are orthonormal if, in addition, each vector has length $1$, i.e., $v$ is a unit vector if $\langle v, v \rangle = 1$. Notation.- $x \bot y$ means $x$ is orthogonal to $y$, i.e., $\langle x, y \rangle = 0$.

Definition.- $(E, \langle \cdot, \cdot \rangle)$ is a prehilbert space iff $E$ is a vector space and $\langle \cdot, \cdot \rangle$ is an inner product.

Now, I'm going to give the definitions and theorems necessary for solving this problem, without proofs, and afterwards I'll give the solution. Finally, I'll give the proofs of these theorems, and maybe I'll keep on working on this site, creating a wiki...

Theorem 1.- Cauchy-Schwarz inequality (generalisation)

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space. Then:

a) $\forall x, y \in E$, $|\langle x, y \rangle|^2 \leq \langle x, x \rangle \cdot \langle y, y \rangle$ (Cauchy-Schwarz inequality)

b) We can define a norm on $E$: if $\|x\| = +\sqrt{\langle x, x \rangle}, \ \forall x \in E$, then $\|\cdot\|$ is a norm.
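For the concrete inner product used later, $\langle p, q \rangle = \int_{-1}^1 p(t) q(t) \, dt$ on polynomials, part a) can be spot-checked exactly. A sketch (the coefficient-list representation and the helper name `inner` are my own devices):

```python
from fractions import Fraction
import random

def inner(p, q):
    # <p, q> = integral of p(t) q(t) over [-1, 1] for coefficient lists [c0, c1, c2, ...],
    # computed exactly: the integral of t^k over [-1, 1] is 2/(k+1) for even k, 0 for odd k.
    s = Fraction(0)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if (i + j) % 2 == 0:
                s += Fraction(pi) * Fraction(qj) * Fraction(2, i + j + 1)
    return s

random.seed(2)
for _ in range(200):
    p = [random.randint(-5, 5) for _ in range(4)]
    q = [random.randint(-5, 5) for _ in range(4)]
    # |<p, q>|^2 <= <p, p><q, q>: exact rationals, so no tolerance is needed
    assert inner(p, q) ** 2 <= inner(p, p) * inner(q, q)
```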

Definition.- Let $(E, \|\cdot\|)$ be a normed vector space, $S \subseteq E$ a subset, and $x \in E$. Then:

a) The distance from $x$ to $S$ is $d(x,S) = \inf \{ d(x,y) : y \in S \} = \inf \{ \|x - y\| : y \in S \}$.

b) If there exists $y \in S$ such that $d(x,S) = d(x,y) = \|x - y\|$, then $y$ is a best approximation to $x$ from $S$.

Examples.-

1.- In $\mathbb{R}^2$, let $S = \{ (x,y) \in \mathbb{R}^2 : x^2 + y^2 < 1\}$ and $x = (0,2)$; then $d(x,S) = 1$, but there does not exist a best approximation to $x$ from $S$.

2.- In $\mathbb{R}^2$, let $S = \{ (x,y) \in \mathbb{R}^2 : x = 0 \vee y = 1\}$ and $x = (1,0)$; then $d(x,S) = 1$ and there exist 2 best approximations to $x$ from $S$: $y_1 = (0,0)$ and $y_2 = (1,1)$.

Theorem 2.-

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space, $F \subseteq E$ a vector subspace, and $x \in E$. Then:

a) If there is a best approximation to $x$ from $F$, it is unique.

b) $y \in F$ is a best approximation to $x$ from $F$ $\iff$ $x - y$ is orthogonal to $F$.

Theorem 3.- (Gram-Schmidt, 1907)

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space and $\{x_1, \ldots, x_n, \ldots\}$ a set of linearly independent vectors. Define $y_1 := x_1$, $u_1 := \frac{y_1}{\|y_1\|}$, and \[ y_n := x_n - \sum_{j=1}^{n-1} \langle x_n, u_j \rangle \, u_j, \quad u_n := \frac{y_n}{\|y_n\|}, \quad n \ge 2. \] Then $\{u_1, \ldots, u_n\}$ is an orthonormal (and linearly independent) set $\forall n \in \mathbb{N}$, and $\text{Span}\{x_1, \ldots, x_n\} = \text{Span}\{u_1, \ldots, u_n\}$.


Solution of this problem

Let $\mathcal{C}([-1,1]) = \{ f : [-1,1] \to \mathbb{R} : f \text{ is continuous}\}$ be the vector space of continuous functions on $[-1,1]$ over $\mathbb{R}$, with the inner product (checking that it is an inner product is left to the reader) $\langle f(t), g(t) \rangle = \int_{-1}^1 f(t) \overline{g(t)} \, dt = \int_{-1}^1 f(t) g(t) \, dt$, the conjugate being superfluous because $g(t)$ is a real continuous function. The problem is to find the best approximation to $x^3$ from the subspace $P_2([-1,1])$ of polynomials of degree at most two on $[-1,1]$ over $\mathbb{R}$. I'm going to use the Gram-Schmidt theorem and Theorem 2.

(Of course, there are more solutions; for instance, one quick solution is to take the Legendre polynomials (this solution is direct), or to use Chebyshev's theorem or the minimax method... or @Pi Han Goh's proof, etc.)

$\{1, t, t^2\}$ is a basis of $P_2([-1,1])$, and $\{1, t, t^2, t^3, \ldots\}$ is a basis of $P([-1,1])$ (the vector space of all polynomials). \[ y_1 = 1, \quad \langle 1, 1 \rangle = \int_{-1}^1 1 \, dt = 2 \Rightarrow u_1 = \frac{\sqrt{2}}{2} \] \[ y_2 = t - \int_{-1}^1 \frac{\sqrt{2}}{2} t \, dt \; \frac{\sqrt{2}}{2} = t - \frac12 \left[ \frac{t^2}{2} \right]_{-1}^1 = t, \quad \langle t, t \rangle = \frac23 \Rightarrow u_2 = \sqrt{\frac32}\, t \] \[ y_3 = t^2 - \frac12 \int_{-1}^1 t^2 \, dt - \frac32 t \int_{-1}^1 t^3 \, dt = t^2 - \frac13 \Rightarrow u_3 = \frac{y_3}{\|y_3\|} \] And finally, we only need \[ y_4 = t^3 - \frac12 \int_{-1}^1 t^3 \, dt - \frac32 t \int_{-1}^1 t^4 \, dt - \frac{\langle t^3, y_3 \rangle}{\langle y_3, y_3 \rangle} y_3 = t^3 - \frac35 t, \] i.e., the unique best approximation to $x^3$ from $P_2([-1,1])$ is $\frac35 x$, and this means \[ \min_{a,b,c \in \mathbb{R}} \int_{-1}^1 |x^3 - a - bx - cx^2|^2 \, dx = \int_{-1}^1 \left( t^3 - \frac35 t \right)^2 dt = \frac{8}{175}. \]
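The Gram-Schmidt computation above can be reproduced mechanically. In the sketch below (my own representation: a polynomial is its coefficient list $[c_0, c_1, \ldots]$, listed in increasing degree), I build the orthogonal $y_n$ directly, dividing by $\langle y_j, y_j \rangle$ instead of forming the unit vectors $u_j$; this is algebraically equivalent to the theorem's recursion:

```python
from fractions import Fraction

def inner(p, q):
    # <p, q> = integral of p(t) q(t) over [-1, 1] for coefficient lists [c0, c1, ...]
    s = Fraction(0)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            if (i + j) % 2 == 0:  # odd powers integrate to 0 on [-1, 1]
                s += Fraction(pi) * Fraction(qj) * Fraction(2, i + j + 1)
    return s

def gram_schmidt(xs):
    # y_n = x_n - sum_j (<x_n, y_j> / <y_j, y_j>) y_j  (unnormalized Gram-Schmidt;
    # assumes xs are given in increasing degree, so each y_j fits inside the next x)
    ys = []
    for x in xs:
        y = [Fraction(c) for c in x]
        for yj in ys:
            coef = inner(x, yj) / inner(yj, yj)
            for i, cj in enumerate(yj):
                y[i] -= coef * cj
        ys.append(y)
    return ys

ys = gram_schmidt([[1], [0, 1], [0, 0, 1], [0, 0, 0, 1]])

# y_3 = t^2 - 1/3 and y_4 = t^3 - (3/5) t, matching the hand computation
assert ys[2] == [Fraction(-1, 3), Fraction(0), Fraction(1)]
assert ys[3] == [Fraction(0), Fraction(-3, 5), Fraction(0), Fraction(1)]

# ||y_4||^2 = integral of (t^3 - 3t/5)^2 = 8/175, the minimum value of the problem
assert inner(ys[3], ys[3]) == Fraction(8, 175)
```

Running it confirms both the projection residual $t^3 - \frac35 t$ and the minimum $\frac{8}{175}$ without any floating-point error.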

Details.-

$\{1, t, t^2, t^3\}$ is a basis of $P_3([-1,1]) = \{\text{polynomials of degree at most three on } [-1,1] \text{ over } \mathbb{R}\}$. By Gram-Schmidt, $\{y_1, y_2, y_3, y_4\}$ is an orthogonal basis of $P_3([-1,1])$, and $y_4 = x^3 - \frac35 x$ is orthogonal to $P_2([-1,1])$ with this inner product. Because the problem is to find the best approximation to $x^3$ from $P_2([-1,1])$, by Theorem 2 it is not necessary to calculate $u_4$ or to multiply $y_4$ by a real number...


Theorem 1.- Cauchy-Schwarz inequality (generalisation)

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space. Then:

a) $\forall x, y \in E$, $|\langle x, y \rangle|^2 \leq \langle x, x \rangle \cdot \langle y, y \rangle$ (Cauchy-Schwarz inequality)

b) We can define a norm on $E$: if $\|x\| = +\sqrt{\langle x, x \rangle}, \ \forall x \in E$, then $\|\cdot\|$ is a norm.

Proof of theorem 1.-

a) $\forall x, y \in E$ and $\lambda \in K = \mathbb{R}$ or $\mathbb{C}$: \[ 0 \leq \langle x - \lambda y, x - \lambda y \rangle = \langle x, x \rangle - \overline{\lambda} \langle x, y \rangle - \lambda \langle y, x \rangle + |\lambda|^2 \langle y, y \rangle. \] Write $\langle y, x \rangle = |\langle y, x \rangle| \cdot e^{i\theta} = b \cdot e^{i\theta}$, and let $\lambda = t \cdot e^{-i\theta}$ with $t \in \mathbb{R}$. Then \[ 0 \leq \langle x, x \rangle - t e^{i\theta} b e^{-i\theta} - t e^{-i\theta} b e^{i\theta} + t^2 \langle y, y \rangle = \langle x, x \rangle - 2tb + t^2 \langle y, y \rangle, \quad \forall t \in \mathbb{R}. \] For this to happen $\forall t \in \mathbb{R}$, the discriminant of this quadratic in $t$ must be less than or equal to $0$. This means \[ 4b^2 - 4 \langle y, y \rangle \langle x, x \rangle \leq 0 \Rightarrow b^2 - \langle y, y \rangle \langle x, x \rangle \leq 0 \Rightarrow |\langle x, y \rangle|^2 \leq \langle y, y \rangle \langle x, x \rangle. \quad \square \]

b) If $\|x\| = +\sqrt{\langle x, x \rangle}, \ \forall x \in E$, then $\|\cdot\|$ is a norm.

i) $\|x\| = +\sqrt{\langle x, x \rangle} \ge 0, \ \forall x \in E \iff \|x\|^2 = \langle x, x \rangle \ge 0, \ \forall x \in E$. (True)

$\|x\| = +\sqrt{\langle x, x \rangle} = 0 \iff \|x\|^2 = \langle x, x \rangle = 0 \iff x = 0$

ii) $\|\lambda x\|^2 = |\lambda|^2 \cdot \|x\|^2 \Rightarrow \|\lambda x\| = |\lambda| \, \|x\|, \ \forall x \in E$

iii) \[ \|x + y\|^2 = \langle x, x \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle = \|x\|^2 + 2 \mathfrak{R} \langle x, y \rangle + \|y\|^2 \leq \|x\|^2 + 2 |\langle x, y \rangle| + \|y\|^2 \] and, applying the Cauchy-Schwarz inequality, \[ \leq \|x\|^2 + 2 \|x\| \cdot \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2. \] Therefore, $\|x + y\|^2 \leq (\|x\| + \|y\|)^2 \Rightarrow \|x + y\| \leq \|x\| + \|y\|$. This last inequality is sometimes called the Minkowski inequality. $\square$ (End of the proof of Theorem 1)

Theorem 2.-

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space, $F \subseteq E$ a vector subspace, and $x \in E$. Then:

a) If there is a best approximation to $x$ from $F$, it is unique.

I'll use Pythagoras's theorem: if $x \bot y$, then $\|x + y\|^2 = \|x\|^2 + \|y\|^2$.

$\|x + y\|^2 = \langle x + y, x + y \rangle = \langle x, x \rangle + \langle y, x \rangle + \langle x, y \rangle + \langle y, y \rangle = \|x\|^2 + \|y\|^2$, since the cross terms vanish by orthogonality. $\square$

Proof of a).- Suppose that we have already proved b).

Reductio ad absurdum. Suppose that $y_1$ and $y_2$ are two best approximations to $x$ from $F$ with $y_1 \neq y_2$. Then $\|y_1 - y_2\| > 0$, so by b) and Pythagoras's theorem, \[ \|x - y_1\|^2 = \|x - y_2 + y_2 - y_1\|^2 = \|x - y_2\|^2 + \|y_2 - y_1\|^2 > \|x - y_2\|^2 \] (contradiction). $\square$

b) $y \in F$ is a best approximation to $x$ from $F$ $\iff$ $x - y$ is orthogonal to $F$.

$\Rightarrow$) Let $z \in F$ be any vector with $\|z\| = 1$, and define $w = y + \langle x - y, z \rangle \cdot z \in F$. Because $y$ is a best approximation to $x$ from $F$, \[ \|x - y\|^2 \leq \|x - w\|^2 = \langle x - y - \langle x - y, z \rangle z, \; x - y - \langle x - y, z \rangle z \rangle \] \[ = \|x - y\|^2 - \langle x - y, z \rangle \langle z, x - y \rangle - \overline{\langle x - y, z \rangle} \langle x - y, z \rangle + |\langle x - y, z \rangle|^2 \|z\|^2. \] Using $\|z\| = 1$, \[ = \|x - y\|^2 - |\langle x - y, z \rangle|^2 - |\langle x - y, z \rangle|^2 + |\langle x - y, z \rangle|^2 = \|x - y\|^2 - |\langle x - y, z \rangle|^2, \] so $|\langle x - y, z \rangle|^2 \leq 0$, i.e., $\langle x - y, z \rangle = 0$. $\square$

$\Leftarrow$) Let $z \in F$ be any vector; then $y - z \in F$, and, applying Pythagoras's theorem, \[ \|x - z\|^2 = \|x - y + y - z\|^2 = \|x - y\|^2 + \|y - z\|^2 \ge \|x - y\|^2, \] so $y$ is the best approximation to $x$ from $F$. $\square$ (End of the proof of Theorem 2)

Theorem 3.- (Gram-Schmidt, 1907)

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space and $\{x_1, \ldots, x_n, \ldots\}$ a set of linearly independent vectors. Define $y_1 := x_1$, $u_1 := \frac{y_1}{\|y_1\|}$, and \[ y_n := x_n - \sum_{j=1}^{n-1} \langle x_n, u_j \rangle \, u_j, \quad u_n := \frac{y_n}{\|y_n\|}, \quad n \ge 2. \] Then $\{u_1, \ldots, u_n\}$ is an orthonormal (and linearly independent) set $\forall n \in \mathbb{N}$, and $\text{Span}\{x_1, \ldots, x_n\} = \text{Span}\{u_1, \ldots, u_n\}$.

Proof of theorem 3.-

By induction:

1.- $n = 1$ is trivial. Suppose the result is true for some $n \ge 1$; we are going to prove that it is true for $n + 1$.

2.- \[ y_{n+1} = x_{n+1} - \sum_{j=1}^{n} \langle x_{n+1}, u_j \rangle \, u_j \Rightarrow \langle y_{n+1}, u_j \rangle = \langle x_{n+1}, u_j \rangle - \langle x_{n+1}, u_j \rangle = 0, \ \forall j \in \{1, 2, \ldots, n\}, \] because $\{u_1, u_2, \ldots, u_n\}$ is an orthonormal set. This implies that $\{u_1, \ldots, u_{n+1}\}$ is an orthonormal set and $\text{Span}\{x_1, \ldots, x_{n+1}\} = \text{Span}\{u_1, \ldots, u_{n+1}\}$, because $u_{n+1} \in \text{Span}\{x_1, \ldots, x_{n+1}\}$ and $x_{n+1} \in \text{Span}\{u_1, \ldots, u_{n+1}\}$. $\square$


Now, I'm going to give a corollary of this theorem, and another proof similar to the first solution.

Corollary of Theorem 3.-

Let $(E, \langle \cdot, \cdot \rangle)$ be a prehilbert space and $M \subseteq E$ a finite-dimensional vector subspace. Then $\forall x \in E$ there exists the best approximation $P_n(x)$ to $x$ from $M$. If $\{u_1, u_2, \ldots, u_n\}$ is an orthonormal basis of $M$, then \[ P_n(x) = \sum_{i=1}^n \langle x, u_i \rangle \cdot u_i \quad \text{and} \quad d(x, M)^2 = \|x\|^2 - \sum_{i=1}^n |\langle x, u_i \rangle|^2. \]

Solution 2 of this problem, based on the previous solution and the corollary

\[ \min_{a,b,c \in \mathbb{R}} \int_{-1}^1 |x^3 - a - bx - cx^2|^2 \, dx = d(x^3, P_2([-1,1]))^2 = \int_{-1}^1 x^6 \, dx - \left| \int_{-1}^1 \sqrt{\frac32} \, x^4 \, dx \right|^2 = \frac27 - \frac6{25} = \frac{50 - 42}{175} = \frac{8}{175}. \quad \square \] (Only the $u_2 = \sqrt{3/2}\, t$ term of the corollary's sum contributes: $\langle x^3, u_1 \rangle$ and $\langle x^3, u_3 \rangle$ vanish because $x^3$ is odd while $u_1, u_3$ are even.)
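A quick exact-arithmetic check of this computation, using Python's `fractions` module (by parity, only the $u_2 = \sqrt{3/2}\, t$ term of the corollary's sum is nonzero):

```python
from fractions import Fraction

# ||x^3||^2 = integral of x^6 over [-1, 1] = 2/7
norm_sq = Fraction(2, 7)

# <x^3, u_2> with u_2 = sqrt(3/2) t equals sqrt(3/2) * (2/5),
# so |<x^3, u_2>|^2 = (3/2) * (2/5)^2 = 6/25 (kept rational by squaring first)
proj_sq = Fraction(3, 2) * Fraction(2, 5) ** 2
assert proj_sq == Fraction(6, 25)

# d(x^3, P_2)^2 = 2/7 - 6/25 = (50 - 42)/175 = 8/175
d_sq = norm_sq - proj_sq
assert d_sq == Fraction(8, 175)
assert d_sq.numerator + d_sq.denominator == 183
```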

Proof of the Corollary of Theorem 3.-

$\forall i = 1, 2, \ldots, n$, fixed but arbitrary: $\langle x - P_n(x), u_i \rangle = \langle x, u_i \rangle - \langle x, u_i \rangle = 0 \Rightarrow x - P_n(x) \bot \text{Span}\{u_1, u_2, \ldots, u_n\} = M \Rightarrow P_n(x)$ is the unique best approximation to $x$ from $M$, by Theorem 2.

Then, \[ d(x, M)^2 = \langle x - P_n(x), x - P_n(x) \rangle = \langle x - P_n(x), x \rangle = \|x\|^2 - \sum_{i=1}^n |\langle x, u_i \rangle|^2. \quad \square \]

This theory will be applied and continued in this problem

You can easily solve this by realising that the integral simplifies to a quadratic expression. What's left to do is completing the square.

Pi Han Goh - 4 years, 7 months ago


My solution has been edited.

Guillermo Templado - 4 years, 5 months ago


Originally it wasn't like this.

Guillermo Templado - 4 years, 5 months ago

This looks like a lot of important information in one solution.

I think it's better for you to write your contributions in a note/wiki so that other people can easily access them. What do you think?

Pi Han Goh - 4 years, 5 months ago


@Pi Han Goh It's very easy to make a copy, and I can fix it up and make a wiki, but you have to do one thing. Please respect my work, only this, and you have to mention two names: my teacher's and mine.

Guillermo Templado - 4 years, 5 months ago


@Guillermo Templado My teacher for this subject is Bernardo Cascales.

Guillermo Templado - 4 years, 5 months ago


@Guillermo Templado He is a great teacher, one of the best I have ever had.

Guillermo Templado - 4 years, 5 months ago


@Guillermo Templado He teaches at the University of Murcia, Spain.

Guillermo Templado - 4 years, 5 months ago

@Guillermo Templado I'm sorry. I don't know what you're trying to say here. I'm requesting that you publish a wiki that showcases all these interesting linear algebra facts. I'm sure a lot of linear algebra enthusiasts will be interested in reading them.

I don't see how knowing those two names will help anybody.

Pi Han Goh - 4 years, 5 months ago

@Pi Han Goh OK. Please, may the person or people who manipulated my solution come here, please.

Guillermo Templado - 4 years, 5 months ago


@Guillermo Templado I'm sorry. What is going on here? Who is manipulating your solution? How is that the subject matter in the first place?

Pi Han Goh - 4 years, 5 months ago


@Pi Han Goh My words from my original solution have been manipulated. Please, if I'm working, respect my work. And I need a preview of my work; is that possible, please?

Guillermo Templado - 4 years, 5 months ago


@Guillermo Templado What has been manipulated? It doesn't look much different from the last time I saw it.

Plus, I doubt any moderator/staff who has edited your solution had any ill intention to screw up your work. Nobody here is deliberately trying to sabotage you.

Pi Han Goh - 4 years, 5 months ago

Very detailed and well written. Agreed with Pi Han that parts of this could be added to the existing vector space or Gram-Schmidt process wikis.

Calvin Lin Staff - 4 years, 5 months ago

I'm interested in developing some interesting theorems, Gram-Schmidt... and other theorems that can be useful for the community, and for other problems... Anyway, you can write a proof too, if you want...

Guillermo Templado - 4 years, 7 months ago


Hey, are you on Slack? We're currently developing some wikis; are you interested in joining?

Pi Han Goh - 4 years, 7 months ago


@Pi Han Goh I was on Slack... I left it. I got bored, haha, but I'm interested in developing wikis... I would like to finish this part. I liked it so much when I was a student. The subject was Numerical Analysis II... In Numerical Analysis I you can find Newton's method, the fixed point theorem, and many theorems involving computers... In this second part, you don't need so many computer algorithms. It's an introduction to Functional Analysis... Yes, I'm interested in creating wikis... but I'm very busy too; little by little. I want to learn electricity and magnetism, and I want to set up Python or Pascal on my computer... so many things...

Guillermo Templado - 4 years, 7 months ago


@Guillermo Templado Come back to Slack if you're interested. Notify @Eli Ross if you're interested in building up a particular wiki.

Pi Han Goh - 4 years, 7 months ago


@Pi Han Goh Yup, I'm interested in the Riesz, Riesz-Fischer, Korovkin, and Weierstrass theorems, Legendre polynomials, and applications of Chebyshev polynomials. Example: Exercise.- Find the polynomial $q$ of third degree that minimizes $\max_{-1 \leq x \leq 1} |x^4 - q(x)|$. Also Fourier series (an introduction), trigonometric polynomials, the minimax and least-squares methods, optimization,...

Guillermo Templado - 4 years, 7 months ago


@Guillermo Templado Those are great wikis to develop! I look forward to seeing your contributions.

Thanks for such a detailed explanation. However, I do not think that you have yet made it evident why we should be finding $u_4$. I see some references to it, but I think you can make it more explicit.

Calvin Lin Staff - 4 years, 7 months ago


@Calvin Lin Although $u_4$ is orthogonal to $P_2([-1,1])$ (and can be normalized), Theorem 2 can mislead you here... We are looking for a best approximation to $x^3$ from $P_2([-1,1])$... Anyway, I'll keep adding details and working...

Guillermo Templado - 4 years, 7 months ago


@Guillermo Templado I'll be back here later, and I'll start defining an inner product and a vector space, and a group and everything, if it's necessary for this exercise. But please don't touch my work and my comments; please leave me alone, and please, let's respect each other.

Guillermo Templado - 4 years, 5 months ago

@Calvin Lin Haha, don't make me laugh or cry, please.

Guillermo Templado - 4 years, 5 months ago

@Guillermo Templado Who is this person talking? You are not Pi Han Goh, are you?

Guillermo Templado - 4 years, 5 months ago

Can you explain more?

Kushal Bose - 4 years, 7 months ago


Yes, of course; give me some time, please. I want to develop a whole solution, but I need at least a week... I might even say a wiki... Day after day, you can keep checking this problem; I'll leave something written. I'm going to start right now...

Guillermo Templado - 4 years, 7 months ago


Yes, sure. This is a very nice question.

Kushal Bose - 4 years, 7 months ago


@Kushal Bose I have finished the proof of this problem. I'm now going to prove Theorems 1, 2, and 3, and maybe I'll keep on with more theorems. I hope you find this site useful...

Guillermo Templado - 4 years, 7 months ago
