Any $A \in M_n(\mathbb{C})$ yields a linear transformation $T_A$ on the vector space $M_n(\mathbb{C})$ defined by $T_A(X) = AX - XA$. Which of the following statements is/are true?
I. If $A$ is nilpotent, that is, $A^k = 0$ for some positive integer $k$, then $T_A$ is also nilpotent.
II. If $A$ is diagonalizable, then $T_A$ is also diagonalizable.
Bonus: What about the converses of I and II?
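For readers who want to experiment before reading the discussion, here is a minimal sketch of the map itself, assuming numpy (the names `T`, `A`, `X` are just illustrative):

```python
import numpy as np

def T(A, X):
    """The commutator map T_A(X) = AX - XA."""
    return A @ X - X @ A

# A nilpotent example (A^2 = 0) applied to an arbitrary X.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(T(A, X))   # [[ 3.  3.]
                 #  [ 0. -3.]]
```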
Wow! Great solution! Would you be kind enough to elaborate on (II)? I'm not familiar with your technique. I see that you have brilliantly formed a basis of $n^2$ $n \times n$ matrices (for $M_{n \times n}(\mathbb{C})$). I also understand why $T_A$ is a linear transformation. For me to understand the remainder of your solution, I need to know how you determined $T_A$ as a matrix. Because $X$ and $T_A(X)$ are $n \times n$, it stands to reason that $T_A$ must be $n \times n$. Since your diagonalization has an $n^2 \times n^2$ matrix in the center, this can only mean that the matrices on the sides must be $n \times n^2$ and $n^2 \times n$. But isn't that an impossibility? I only know of cases where diagonalizations result in square matrices. How can you take the inverse of a non-square matrix? I am definitely missing something major here. Hope you or someone else is willing to help me understand.
If $T : V \to V$ is a linear map with $\dim V = m$, then the matrix of $T$ with respect to a chosen basis of $V$ will be $m \times m$. In the case of $T_A$ we have $V = \mathbb{C}^{n \times n}$ with $\dim V = n^2$, so that the matrix of $T_A$ will be $n^2 \times n^2$.
To understand what is going on here, you may find it helpful to work out a simple case, for example, $A = \begin{bmatrix} 1 & 1 \\ 0 & 0 \end{bmatrix}$.
Ah ok, that helps a lot! Thanks for taking the time to answer my question! I tried your example with A = [1 1; 0 0] and found T_A = [0 1 0 0; 0 -1 0 0; -1 0 1 1; 0 -1 0 0] (by concatenating the columns of both $X$ and $AX - XA$ into vectors and reading off the matrix that sends one to the other). Sometime in the near future I'll spend more time figuring out how to diagonalize $T_A$ using your method.
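For anyone who wants to reproduce this without hand computation, here is a short sketch (assuming numpy): with the column-stacking vec convention, the standard identity $\operatorname{vec}(AXB) = (B^T \otimes A)\operatorname{vec}(X)$ gives the matrix of $T_A$ as $I \otimes A - A^T \otimes I$, which recovers the $4 \times 4$ matrix above.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
n = A.shape[0]

# With column-stacking vec, vec(AX - XA) = (I kron A - A^T kron I) vec(X),
# so this Kronecker expression is the n^2 x n^2 matrix of T_A.
T = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))
print(T)
# [[ 0.  1.  0.  0.]
#  [ 0. -1.  0.  0.]
#  [-1.  0.  1.  1.]
#  [ 0. -1.  0.  0.]]
```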
If you use an eigenbasis for $A$, for example, $(1, 0), (1, -1)$, then the matrix of $T_A$ comes out to be $\begin{bmatrix} 0 & 0 & 0 & 0 \\ -1 & 1 & 0 & 0 \\ 0 & 0 & -1 & 0 \\ 0 & 0 & -1 & 0 \end{bmatrix}$, as I mention in my solution, with the $2 \times 2$ blocks $\lambda_i I_2 - A^T$ on the diagonal, and that one is easy to diagonalise. You can also see that the eigenvalues of $T_A$ are the differences of the eigenvalues of $A$.
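To watch this change of basis happen numerically, here is a sketch (assuming numpy; `P` below collects the vectorised basis matrices $M_{ij}$ from the solution):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 0.0]])
n = A.shape[0]
T = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))   # matrix of T_A, column-stacking vec

# Eigenbasis of A: v1 = (1, 0) with eigenvalue 1, v2 = (1, -1) with eigenvalue 0.
v = [np.array([1.0, 0.0]), np.array([1.0, -1.0])]

# M_ij has v_i as its j-th column; order the basis (M_11, M_12, M_21, M_22).
cols = []
for i in range(n):
    for j in range(n):
        M = np.zeros((n, n))
        M[:, j] = v[i]
        cols.append(M.flatten(order='F'))   # column-stacking vec
P = np.column_stack(cols)

print(np.linalg.inv(P) @ T @ P)   # block diag of [[0, 0], [-1, 1]] and [[-1, 0], [-1, 0]]
print(sorted(np.linalg.eigvals(T).real))   # approx [-1, 0, 0, 1]: differences of eigenvalues of A
```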
(I) We can see, by induction, that $T_A^m(X)$ is a linear combination of expressions of the form $A^p X A^q$ where $p + q = m$. Thus, if $A^k = 0$, then every term with $p + q = 2k$ has $p \ge k$ or $q \ge k$ and so vanishes, giving $T_A^{2k} = 0$.
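A quick numerical sanity check of this bound, as a sketch assuming numpy (the $3 \times 3$ nilpotent $A$ below is just an illustrative choice with $A^3 = 0$):

```python
import numpy as np

# A strictly upper-triangular A is nilpotent; here A^3 = 0 but A^2 != 0, so k = 3.
A = np.array([[0.0, 1.0, 2.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])
n, k = A.shape[0], 3

T = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))   # 9 x 9 matrix of T_A

# Every term of T_A^m(X) is A^p X A^q with p + q = m, so for m >= 2k - 1
# at least one factor is a power >= k of A and the term vanishes.
print(np.allclose(np.linalg.matrix_power(T, 2 * k), 0))       # True
print(np.allclose(np.linalg.matrix_power(T, 2 * k - 1), 0))   # True
```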
(II) If $v_1, \dots, v_n$ is an eigenbasis for $A$, with associated eigenvalues $\lambda_1, \dots, \lambda_n$, consider the matrix $M_{ij}$ whose $j$th column is $v_i$, with $0$'s elsewhere. The matrix of $T_A$ with respect to the basis $(M_{ij})$ will be diagonalizable. More precisely, it is block diagonal, with the $n$ blocks $\lambda_i I_n - A^T$ along the diagonal, and each block is diagonalizable because $A^T$ is.
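If it helps, here is a rough numerical check of that block description, assuming numpy (the diagonalizable $A = S D S^{-1}$ below is a hypothetical example, not taken from the problem):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a diagonalizable A = S diag(lam) S^{-1} with eigenvalues 1, 2, 5.
n = 3
lam = np.array([1.0, 2.0, 5.0])
S = rng.standard_normal((n, n))              # generically invertible; columns are eigenvectors
A = S @ np.diag(lam) @ np.linalg.inv(S)

T = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))   # matrix of T_A, column-stacking vec

# Basis M_ij: v_i (the i-th column of S) sitting in column j, zeros elsewhere.
P = np.column_stack([np.outer(S[:, i], np.eye(n)[j]).flatten(order='F')
                     for i in range(n) for j in range(n)])
B = np.linalg.inv(P) @ T @ P                 # matrix of T_A in the basis (M_ij)

# Expected: block diagonal with the blocks lam[i] * I_n - A^T.
expected = np.zeros((n * n, n * n))
for i in range(n):
    expected[i * n:(i + 1) * n, i * n:(i + 1) * n] = lam[i] * np.eye(n) - A.T
print(np.allclose(B, expected))              # True
```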
The converse of (I) fails to hold, of course: if $A = I_n$, then $T_A = 0$ is nilpotent while $A$ is not. The converse of (II) does hold, though; consider the Jordan normal form.
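Both bonus claims can be probed numerically. The sketch below (assuming numpy; `commutator_matrix` is just an illustrative helper name) checks the $A = I_n$ counterexample and, for the converse of (II), its contrapositive on a single Jordan block: a non-diagonalizable $A$ gives a $T_A$ that is nonzero and nilpotent, hence not diagonalizable.

```python
import numpy as np

def commutator_matrix(A):
    """n^2 x n^2 matrix of X -> AX - XA under the column-stacking vec convention."""
    n = A.shape[0]
    return np.kron(np.eye(n), A) - np.kron(A.T, np.eye(n))

# Converse of (I) fails: A = I_3 is not nilpotent, yet T_A = 0 is (trivially) nilpotent.
print(np.allclose(commutator_matrix(np.eye(3)), 0))           # True

# Contrapositive of the converse of (II): a Jordan block A is not diagonalizable,
# and T_A is nonzero and nilpotent, so T_A is not diagonalizable either.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
T = commutator_matrix(A)
print(np.allclose(T, 0))                                      # False
print(np.allclose(np.linalg.matrix_power(T, 3), 0))           # True
```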