How many real \(2 \times 2\) matrices \(A\) are there such that \(A^{2016} = -I_2\), where \(I_2\) represents the \(2 \times 2\) identity matrix?
Enter 666 if you come to the conclusion that infinitely many such matrices A exist.
This does not quite work out. The 2016th power of your matrix will be \(I_2\), since you are using a rotation through \(2\pi\). Also there is an error in the matrix product.
Oops! Problem fixed.
Great! It's really the same approach as mine (great minds think alike!); you are just using another matrix, a shear, to conjugate. We both move along an ellipse... ;)
Herr @Andreas Wendler is writing a fine solution. For the sake of variety let me suggest another solution based on @Calvin Lin's idea.
Let \(R\) be the rotation matrix through \(\frac{\pi}{2016}\), with \(R^{2016} = -I_2\). If \(S\) is any invertible \(2 \times 2\) matrix, then \((S^{-1}RS)^{2016} = S^{-1}R^{2016}S = S^{-1}(-I_2)S = -I_2\), so that \(A^{2016} = -I_2\) for \(A = S^{-1}RS\). We need to make sure that we can construct infinitely many solutions \(A\) this way.
For example, if we let \(S = \begin{pmatrix} x & 0 \\ 0 & 1 \end{pmatrix}\), where \(x \neq 0\), and write \(R = \begin{pmatrix} a & b \\ c & d \end{pmatrix}\), then \(A = S^{-1}RS = \begin{pmatrix} a & b/x \\ cx & d \end{pmatrix}\), an infinite family of solutions as \(x\) runs over the nonzero reals.
Thus the answer is \(666\).
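As a quick sanity check of this construction, here is a short numerical sketch (assuming NumPy; the sampled values of \(x\) are arbitrary) that builds \(A = S^{-1}RS\) from the rotation through \(\pi/2016\) and confirms \(A^{2016} = -I_2\):

```python
import numpy as np

theta = np.pi / 2016
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])    # R^2016 is the rotation through pi, i.e. -I_2

for x in (0.5, 2.0, 7.3):                          # any nonzero x gives a further solution
    S = np.array([[x, 0.0], [0.0, 1.0]])
    A = np.linalg.inv(S) @ R @ S                   # A = S^{-1} R S
    assert np.allclose(np.linalg.matrix_power(A, 2016), -np.eye(2))
print("A^2016 = -I_2 for every tested x")
```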
[This is not a complete solution]
We want a linear transformation of the plane that, when performed 2016 times, results in a 180-degree rotation. There are 2016 of them, consisting of rotations through angles of the form \((2n\pi + \pi)/2016\).
So, why are there infinitely many of them?
But there are only 2016 such rotation matrices
\[R_\theta = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\]
with \(\theta = (2n+1)\pi/2016\) for \(0 \le n \le 2015\), corresponding to the 2016 complex 2016th roots of \(-1\). We need to find other solutions to show that there are infinitely many.
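For concreteness, a small numerical sketch (assuming NumPy; the sampled values of \(n\) are arbitrary) confirming that each of these rotations satisfies \(R_\theta^{2016} = -I_2\) and matches a 2016th root of \(-1\):

```python
import numpy as np

def rot(theta):
    """Counterclockwise rotation matrix through the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

for n in (0, 1, 1007, 2015):                        # a few of the 2016 admissible values of n
    theta = (2*n + 1) * np.pi / 2016
    assert np.allclose(np.linalg.matrix_power(rot(theta), 2016), -np.eye(2))
    assert np.isclose(np.exp(1j*theta)**2016, -1)   # the corresponding 2016th root of -1
print("every rotation through (2n+1)*pi/2016 satisfies R^2016 = -I_2")
```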
Ah yes. My bad. Let me edit the solution.
Note that for matrices the associative law holds, so:
\[A^{2016} = A^{1008} \cdot A^{1008} =: B \cdot B = -I_2.\]
Now the following must be fulfilled: \(\det(B \cdot B) = \det(B) \cdot \det(B) = [\det(B)]^2 = \det(-I_2) = 1\), with
\[\det(B) = [\det(A)]^{1008}.\]
We see that \(\det(B)\) must be \(1\), which is possible for an infinite number of matrices \(A\) having determinant \(-1\) or \(1\).
q.e.d.
@Andreas Wendler – This does not quite clinch it. It is necessary that \(\det A = \pm 1\), but not sufficient.
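A small illustration of the gap (a sketch assuming NumPy; the shear below is just one convenient example with determinant 1 whose 2016th power is nowhere near \(-I_2\)):

```python
import numpy as np

A = np.array([[1, 1], [0, 1]])                     # a shear with det(A) = 1
print(round(np.linalg.det(A)))                     # 1
print(np.linalg.matrix_power(A, 2016))             # [[1 2016], [0 1]], not -I_2
```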
@Otto Bretscher – We now have to construct \(A\). Note that it suffices to have \(A^{32} = -I_2\), since then \(A^{32(1+2k)} = (-I_2)^{1+2k} = -I_2\) for every natural \(k\), and in particular \(A^{2016} = A^{32 \cdot 63} = -I_2\). We can write:
\[A^{32} = A^{16} \cdot A^{16} =: B \cdot B,\]
\[A^{16} = B = A^{8} \cdot A^{8} =: C \cdot C,\]
\[A^{8} = C = A^{4} \cdot A^{4} =: D \cdot D,\]
\[A^{4} = D = A^{2} \cdot A^{2} =: E \cdot E,\]
\[A^{2} = E.\]
This gives us the possibility to determine \(A\) by recursion, beginning with \(B\) from \(B^2 = -I_2\) and tracing through \(C\), \(D\) and \(E\). Already the first step leaves two parameters of \(B\) free to choose. With real numbers \(s\) and \(t\) (\(t \neq 0\)) we get:
\[b_{11} = s, \quad b_{12} = t, \quad b_{21} = -\frac{1+s^2}{t}, \quad b_{22} = -s.\]
So we find out that an infinite set of matrices A must exist.
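A short symbolic check of this parametrization (a sketch assuming SymPy is available), confirming \(B(s,t)^2 = -I_2\) for arbitrary real \(s\) and nonzero real \(t\):

```python
import sympy as sp

s = sp.Symbol('s', real=True)
t = sp.Symbol('t', real=True, nonzero=True)
B = sp.Matrix([[s, t], [-(1 + s**2)/t, -s]])       # the proposed two-parameter family
print(sp.simplify(B**2))                           # Matrix([[-1, 0], [0, -1]]), i.e. -I_2
```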
@Andreas Wendler – It is my role as the setter of this problem to play "devil's advocate" until everything is crystal clear.
I see how you are giving us infinitely many matrices \(B\) such that \(B^2 = -I_2\), cleverly constructed, but I don't quite see the next step of the recursion. How are we getting the matrices \(C\) such that \(C^2 = B\)?
@Otto Bretscher – It is very simple: we successively have to solve systems of equations for the matrix entries, and since \(B\) is already not unique, \(A\) will not be either!
@Andreas Wendler – But how do you know that there exists a \(C\) such that \(C^2 = B\) for the \(B\)'s you constructed? Not every \(2 \times 2\) matrix has a "square root".
@Otto Bretscher – Matrix \(B\) (like \(C, \dots, E\)) has determinant equal to 1. So the inverse exists and we can write: \(B^{-1} C^2 = B^{-1} B = I_2\).
So of necessity \(C^2\) must be \(B\)!
@Andreas Wendler – Given \(B\), you have to show that there exists a \(C\) such that \(B = C^2\). No such \(C\) is given "a priori."
@Otto Bretscher – A few minutes ago I obtained a real solution for the matrix \(C\) (1st row: \(a\), \(b\); 2nd row: \(c\), \(d\)) by manual calculation, related to the matrix \(B(s,t)\) given in the 6th-to-last posting:
\[a = \frac{1+s}{\sqrt{2}}, \qquad b = \frac{t}{\sqrt{2}},\]
\[c = -\frac{1+s^2}{\sqrt{2}\,t}, \qquad d = \frac{1-s}{\sqrt{2}},\]
in other words \(C = \frac{1}{\sqrt{2}}\,(B + I_2)\).
We see that all parameter values \(s\) and \(t \neq 0\) produce real matrices \(C\)! That's why the square root of \(B\) exists. As a second conclusion we note that (similarly to the real numbers) the further (infinite) roots then exist until we finally reach the wanted matrix \(A\)!
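A symbolic verification of this square root (a sketch assuming SymPy), checking that \(C = \frac{1}{\sqrt{2}}(B + I_2)\) indeed squares to \(B\):

```python
import sympy as sp

s = sp.Symbol('s', real=True)
t = sp.Symbol('t', real=True, nonzero=True)
B = sp.Matrix([[s, t], [-(1 + s**2)/t, -s]])       # B^2 = -I_2
C = (B + sp.eye(2)) / sp.sqrt(2)                   # the claimed real square root of B
print(sp.simplify(C**2 - B))                       # the zero matrix, so C^2 = B
```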
@Andreas Wendler – Thank you for your careful work! This has become quite a project.
I cannot agree with your statement that "the further roots" will necessarily exist; that step requires some more reasoning. As a simple counterexample, consider the identity matrix, which has among its square roots the diagonal matrix with diagonal entries 1 and -1... but that matrix will not have a real square root.
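The counterexample can be confirmed with a short symbolic sketch (assuming SymPy), solving entrywise for a real \(2 \times 2\) matrix \(C\) with \(C^2 = \mathrm{diag}(1,-1)\); the empty solution set reflects that no real square root exists:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)      # entries of a candidate real square root
C = sp.Matrix([[a, b], [c, d]])
eqs = list(C**2 - sp.diag(1, -1))                  # entrywise equations for C^2 = diag(1, -1)
print(sp.solve(eqs, [a, b, c, d], dict=True))      # [] : no real solution
```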
@Otto Bretscher – All matrices \(B, \dots, E\) have determinant equal to 1. These matrices can be diagonalized, which is a precondition for the existence of square roots in each case. No eigenvalue will be 0.
So a square root can be determined by the known procedure: calculate the eigenvalues and eigenvectors, take the square root of the diagonal matrix, and finally transform back. We may obtain complex matrices this way, but (and that is the crux) we have proved the existence of roots!
But we also know that the square root is not unique. So we can calculate real roots by solving the successive systems of equations I mentioned and showed as samples in my previous postings.
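For illustration, this eigenvalue procedure can be carried out numerically (a sketch assuming NumPy; the values \(s = 1\), \(t = 2\) are an arbitrary sample from the family above):

```python
import numpy as np

s, t = 1.0, 2.0                                     # arbitrary sample parameters
B = np.array([[s, t], [-(1 + s**2)/t, -s]])         # B^2 = -I_2, eigenvalues +i and -i

w, V = np.linalg.eig(B)                             # B = V diag(w) V^{-1}
C = V @ np.diag(np.sqrt(w.astype(complex))) @ np.linalg.inv(V)
print(np.allclose(C @ C, B))                        # True: C is a square root of B
print(np.allclose(C.imag, 0))                       # True: here it even comes out real
```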
@Andreas Wendler – Sure, we get complex roots, even in the \(3 \times 3\) case, but the problem asks for real matrices. It is not true, however, that any diagonalizable \(2 \times 2\) matrix with determinant 1 has a real square root.
@Otto Bretscher – We ALWAYS obtain real roots by successively solving the systems of equations I showed for \(B\) and \(C\). Perhaps a restriction of the range of one of the two free parameters could be necessary. But the set of solutions of course remains infinite!
@Andreas Wendler – There is a theorem that applies here: "If a real invertible matrix A has no negative eigenvalues, then it has a real square root" (Corollary 4.5 here). This applies to all your matrices \(A, B, C, D, E\), since they have an even power that is \(-I_2\), implying that they have no real eigenvalues at all.
@Otto Bretscher – Thank you for the finish! I realize that negative eigenvalues of a diagonalizable matrix result in a complex square root.
But finally my step-by-step tour through the systems of equations also leads to the result that an infinite set of matrices \(A\) exists.
@Andreas Wendler – Exactly... all's well that ends well! It was fun thinking about this... I learned quite a bit about real square roots in the process! I may use some of this stuff in the next edition of my text "Linear Algebra with Applications". Thanks!
A diagonalizable real matrix with negative eigenvalues will still have a real square root as long as the multiplicity of each negative eigenvalue is even... that's why \(-I_2\) has a real square root but \(-I_3\) does not.
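For the \(2 \times 2\) case, the rotation through \(90^\circ\) exhibits such a real square root explicitly (a minimal numerical check, assuming NumPy):

```python
import numpy as np

J = np.array([[0.0, -1.0], [1.0, 0.0]])            # rotation through 90 degrees
print(J @ J)                                       # [[-1, 0], [0, -1]], i.e. -I_2
```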
@Otto Bretscher – Relatedly, one can show that (for \(n \ge 2\)) if \(A^n = 0\) but \(A^{n-1} \neq 0\), then \(A\) doesn't have a (complex) square root.
There is a nice 1-line proof using the characteristic equation, which allows us to relax the restriction to \(A^m \neq 0\) where \(2m > n\).
@Calvin Lin – If we had \(A = B^2\), then \(B\) would be nilpotent as well, so \(B^n = 0\), so \(B^{2m} = 0\), so \(A^m = 0\), a contradiction.
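The smallest case can be checked directly (a sketch assuming SymPy, taking \(n = 2\) and the nilpotent block \(N = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\)); the solver finds no square root even over the complex numbers:

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')                 # complex entries are allowed here
C = sp.Matrix([[a, b], [c, d]])
N = sp.Matrix([[0, 1], [0, 0]])                    # N^2 = 0 but N != 0
print(sp.solve(list(C**2 - N), [a, b, c, d], dict=True))   # [] : no square root at all
```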
Any matrix of the form \(A = B\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}B^{-1}\) will do, where \(B\) is a nonsingular real matrix and \(\cos 2016\theta = -1\); putting \(\theta = \frac{\pi}{32}\) and \(B = \begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix}\) gives an infinite family of matrices
\[A_a = \begin{pmatrix} \cos\frac{\pi}{32} + a\sin\frac{\pi}{32} & -(a^2+1)\sin\frac{\pi}{32} \\ \sin\frac{\pi}{32} & \cos\frac{\pi}{32} - a\sin\frac{\pi}{32} \end{pmatrix}\]
with \(A_a^{32} = -I_2\), and so certainly \(A_a^{2016} = -I_2\).
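A quick numerical confirmation of this family (a sketch assuming NumPy; the sampled values of \(a\) are arbitrary):

```python
import numpy as np

theta = np.pi / 32
c, s = np.cos(theta), np.sin(theta)

for a in (-3.0, 0.0, 1.5):                          # any real a gives another member of the family
    A = np.array([[c + a*s, -(a**2 + 1)*s],
                  [s,        c - a*s]])
    assert np.allclose(np.linalg.matrix_power(A, 32), -np.eye(2))
    assert np.allclose(np.linalg.matrix_power(A, 2016), -np.eye(2))
print("A_a^32 = -I_2 and A_a^2016 = -I_2 for every tested a")
```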