Suppose $f$ and $g$ are non-constant, real-valued, differentiable functions on $(-\infty, \infty)$ satisfying
$$f(x+y) = f(x)f(y) - g(x)g(y), \qquad g(x+y) = f(x)g(y) + g(x)f(y)$$
for all $x, y \in \mathbb{R}$, with $f'(0) = 0$. Find $(f(x))^2 + (g(x))^2$ for all $x \in \mathbb{R}$.
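As a quick sanity check before the solutions, here is a minimal numerical sketch (assuming Python with NumPy; the constant `k = 2.0` is an arbitrary illustrative choice, not part of the problem) showing that $f(x) = \cos kx$, $g(x) = \sin kx$ satisfies both functional equations, which suggests the answer:

```python
import numpy as np

k = 2.0  # illustrative constant; any real k works here
f = lambda x: np.cos(k * x)
g = lambda x: np.sin(k * x)

rng = np.random.default_rng(0)
x = rng.uniform(-5, 5, 100)
y = rng.uniform(-5, 5, 100)

# The two functional equations are the cosine/sine addition formulas:
assert np.allclose(f(x + y), f(x) * f(y) - g(x) * g(y))
assert np.allclose(g(x + y), f(x) * g(y) + g(x) * f(y))

# f'(x) = -k sin(kx) vanishes at x = 0, as required, and f^2 + g^2 is:
print(np.unique((f(x)**2 + g(x)**2).round(12)))  # [1.]
```

Of course, this only exhibits one family of solutions; the solutions below address why the answer is forced to be $1$.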
@Agni Purani, that is a clever observation. But can you show that this is the only solution to the given problem?
I will look into that, though I believe it is the only solution. I will come back if I find a proof. I have other ways to solve the problem too, but proving that it is the only solution is quite difficult.
If we put $k = g'(0)$, then the fact that $f$ and $g$ are differentiable, together with the given conditions, gives us that $f(0) = 1$ and $g(0) = 0$, as well as
$$f'(x) = -k\,g(x), \qquad g'(x) = k\,f(x), \qquad x \in \mathbb{R} \qquad (\star)$$
From this we deduce that
$$f^{(2n)}(0) = (-1)^n k^{2n}, \quad g^{(2n)}(0) = 0, \qquad f^{(2n+1)}(0) = 0, \quad g^{(2n+1)}(0) = (-1)^n k^{2n+1}$$
for any $n \ge 0$. It is clear from $(\star)$ that $f$ and $g$ are infinitely differentiable, and that $\big|f^{(n)}(x)\big| \le |k|^n$ and $\big|g^{(n)}(x)\big| \le |k|^n$ for all $n \ge 0$ (note that $(\star)$ gives $(f^2 + g^2)' = -2kfg + 2kfg = 0$, so $f^2 + g^2 \equiv f(0)^2 + g(0)^2 = 1$ and $|f|, |g| \le 1$). Taylor's Theorem with remainder tells us that, for any $x \in \mathbb{R}$ and $n \in \mathbb{N}$, we can find $0 < \theta < 1$ such that
$$f(x) = \sum_{j=0}^{n-1} \frac{1}{j!} f^{(j)}(0)\,x^j + \frac{1}{n!} x^n f^{(n)}(\theta x)$$
and hence
$$\left| f(x) - \sum_{j=0}^{n-1} \frac{1}{j!} f^{(j)}(0)\,x^j \right| \le \frac{1}{n!} |kx|^n$$
from which we deduce that
$$f(x) = \sum_{j=0}^{\infty} \frac{1}{j!} f^{(j)}(0)\,x^j = \sum_{j=0}^{\infty} \frac{(-1)^j k^{2j}}{(2j)!} x^{2j} = \cos kx$$
and, similarly, $g(x) = \sin kx$.
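To make the role of $(\star)$ concrete, here is a small numerical sketch (assuming SciPy is available; `k = 1.5` is an arbitrary illustrative value standing in for $g'(0)$) that integrates the initial value problem and confirms it reproduces $\cos kx$ and $\sin kx$:

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.5  # stands in for g'(0); arbitrary illustrative value

# The system (*): f' = -k g, g' = k f, with f(0) = 1, g(0) = 0.
sol = solve_ivp(lambda t, u: [-k * u[1], k * u[0]],
                (0.0, 10.0), [1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)

t = np.linspace(0.0, 10.0, 200)
f_num, g_num = sol.sol(t)

# The unique solution of (*) with these initial values is cos(kt), sin(kt):
print(np.max(np.abs(f_num - np.cos(k * t))))  # tiny (solver tolerance)
print(np.max(np.abs(g_num - np.sin(k * t))))
```

This is only a numerical illustration of the uniqueness argument, not a substitute for the Taylor series proof above.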
Putting $x = y = 0$ in the two equations gives
$$f(0) = f(0)^2 - g(0)^2, \qquad g(0) = 2 f(0) g(0)$$
Since $f(0)\,(1 - f(0)) = -g(0)^2 \le 0$, we deduce that either $f(0) \le 0$ or $f(0) \ge 1$. Thus $f(0) \ne \tfrac{1}{2}$; since the second equation forces $g(0) = 0$ or $f(0) = \tfrac{1}{2}$, we get $g(0) = 0$, and hence $f(0) = f(0)^2$, so $f(0) = 0$ or $1$. If $f(0) = 0$ then $g(x) = f(x)g(0) + g(x)f(0) = 0$ is constant, which is not possible. Thus we deduce that $f(0) = 1$.
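This case analysis can be cross-checked symbolically. A short sketch (assuming SymPy) confirming that $(0, 0)$ and $(1, 0)$ are the only real possibilities for $(f(0), g(0))$:

```python
import sympy as sp

f0, g0 = sp.symbols('f0 g0')

# x = y = 0 in the two functional equations:
sols = sp.solve([sp.Eq(f0, f0**2 - g0**2), sp.Eq(g0, 2*f0*g0)], [f0, g0])

# Discard the complex pairs (1/2, +-i/2); only the real ones remain:
real_sols = [s for s in sols if all(v.is_real for v in s)]
print(real_sols)  # [(0, 0), (1, 0)]
```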
If we define $h(x) = f(x)^2 + g(x)^2$, then $h$ is differentiable, and it is easy to show that $h(x+y) = h(x)h(y)$ for all $x, y$ (expand $f(x+y)^2 + g(x+y)^2$ using the two given equations; the cross terms cancel). Since $h(0) = 1$ we see that $h(x)h(-x) = 1$ for all $x$, and hence $h(x) \ne 0$ for all $x$. Clearly $h(x) \ge 0$ for all $x$, and hence it follows that $h(x) > 0$ for all $x$. Thus $\ln h$ is a differentiable function such that $(\ln h)(x+y) = (\ln h)(x) + (\ln h)(y)$ for all $x, y$, and hence there exists some $\alpha$ such that $(\ln h)(x) = \alpha x$, and hence $h(x) = e^{\alpha x}$ for all $x$. Since $\alpha = h'(0) = 2 f(0) f'(0) + 2 g(0) g'(0) = 0$, we deduce that $h(x) = 1$ for all $x$.
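The multiplicativity step $h(x+y) = h(x)h(y)$ is just a polynomial identity in the four values $f(x), g(x), f(y), g(y)$. A quick sketch (assuming SymPy) verifying it:

```python
import sympy as sp

F1, G1, F2, G2 = sp.symbols('F1 G1 F2 G2')  # stand for f(x), g(x), f(y), g(y)

h_sum  = (F1*F2 - G1*G2)**2 + (F1*G2 + G1*F2)**2  # h(x+y) via the equations
h_prod = (F1**2 + G1**2) * (F2**2 + G2**2)        # h(x) h(y)

print(sp.expand(h_sum - h_prod))  # 0, so h(x+y) = h(x)h(y) identically
```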
Everything in this argument works for general functions $f, g$ up to the point where we use the differentiability of $h$ to show that $\alpha = 0$. It might be fun to find a criterion less restrictive than differentiability that would still make the problem work. Requiring both $f$ and $g$ to be bounded would do, for example: if $h(x) = e^{\alpha x}$ were a bounded function, we would have to have $\alpha = 0$.
From the above equations and the condition $f'(0) = 0$, I figured out that $f(x) = \cos(x)$ and $g(x) = \sin(x)$.
$\therefore (f(x))^2 + (g(x))^2 = \cos^2(x) + \sin^2(x) = 1$