This is a follow-up problem. The previous version can be found here.
Consider the system of equations:
\[ Ax + By = m \]
\[ Cx + Dy = n \]
Here,
\[ A = \begin{bmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{bmatrix}; \quad B = \begin{bmatrix} 2 & 3 \\ 4 & 5 \\ 7 & 9 \end{bmatrix}; \quad C = B^T; \quad D = \begin{bmatrix} 1 & 2 \\ 2 & 4 \end{bmatrix} \]
\[ m = \begin{bmatrix} 10 \\ 11 \\ 12 \end{bmatrix}; \quad n = \begin{bmatrix} 13 \\ 14 \end{bmatrix} \]
\[ x \in \mathbb{R}^{3 \times 1}; \quad y \in \mathbb{R}^{2 \times 1} \]
The goal is to find the column matrices x and y . Let the solution be stacked into a larger column matrix
\[ S = \begin{bmatrix} x^T & y^T \end{bmatrix}^T; \quad S \in \mathbb{R}^{5 \times 1} \]
Let the sum of all the elements of \(S\) be \(P\). Find \(\lfloor 10\,|P| \rfloor\).
Note:
⌊ . ⌋ denotes the floor function.
∣ . ∣ denotes the absolute value function.
The superscript T denotes the transpose operation carried out on a matrix.
It may be tempting to rewrite the system as a set of 5 linear equations in 5 unknowns. However, I encourage solvers to use linear algebra: derive a closed-form algebraic expression for the solution and use a calculator only in the final few steps. The recommended approach becomes useful when dealing with systems involving a very large number of variables, where expanding everything into individual scalar equations and solving them by hand would be a mammoth task.
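As a numerical sanity check (not the recommended algebraic route), the whole problem can be assembled into a single 5×5 linear system and solved directly; a sketch with NumPy, using the matrices as given in the problem:

```python
import numpy as np

# Matrices as given in the problem statement
A = 3 * np.eye(3)
B = np.array([[2, 3], [4, 5], [7, 9]], dtype=float)
C = B.T
D = np.array([[1, 2], [2, 4]], dtype=float)
m = np.array([10.0, 11.0, 12.0])
n = np.array([13.0, 14.0])

# Stack the two matrix equations into one system:
# [[A, B], [C, D]] @ [x; y] = [m; n]
K = np.block([[A, B], [C, D]])
S = np.linalg.solve(K, np.concatenate([m, n]))

P = S.sum()
answer = int(np.floor(10 * abs(P)))
print(S, answer)
```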
Very nice solution, thank you!
Two further questions:
this solution used the most basic type of algebraic manipulation to solve the equations. For scalars, there are other methods (eg Cramer's rule) - can these be extended to matrices?
is there a nice linear algebra shortcut to get straight to the sum of all entries of S? (Essentially this is dotting S with a vector with 1s in every position.)
Thanks for the suggestions. What I have in mind is the following:
The sum of all elements of x and y can be found in the following equation
\[ P = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} A & B \\ B^T & D \end{bmatrix}^{-1} \begin{bmatrix} m \\ n \end{bmatrix} \]
Of course, the use of a computer easily enables such a calculation. To do it by hand, one needs to invert the block matrix comprising \(A\), \(B\) and \(D\), which would be a rather tedious task. There is a way to factorise the block matrix; I will post another comment once I find the means to do so.
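Evaluated numerically, the expression above might look like the following sketch (using a linear solve on the stacked right-hand side rather than forming the inverse explicitly):

```python
import numpy as np

A = 3 * np.eye(3)
B = np.array([[2, 3], [4, 5], [7, 9]], dtype=float)
D = np.array([[1, 2], [2, 4]], dtype=float)
m = np.array([10.0, 11.0, 12.0])
n = np.array([13.0, 14.0])

K = np.block([[A, B], [B.T, D]])   # the symmetric block matrix
ones = np.ones(5)                  # the row of ones picks out the sum of entries
P = ones @ np.linalg.solve(K, np.concatenate([m, n]))
print(int(np.floor(10 * abs(P))))  # -> 102
```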
The following link demonstrates how to factorise and invert a block matrix. I would make use of the technique described in the link.
https://en.wikipedia.org/wiki/Schur_complement
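A sketch of that factorisation in NumPy: with \(S_A = D - CA^{-1}B\) (the Schur complement of \(A\)), the standard block-inverse formula assembles the inverse from small pieces, which can then be checked against the full matrix.

```python
import numpy as np

A = 3 * np.eye(3)
B = np.array([[2, 3], [4, 5], [7, 9]], dtype=float)
C = B.T
D = np.array([[1, 2], [2, 4]], dtype=float)

Ainv = np.linalg.inv(A)      # A = 3I, so this is just I/3
SA = D - C @ Ainv @ B        # Schur complement of A
SAinv = np.linalg.inv(SA)

# Block-inverse formula built from the Schur complement
Kinv = np.block([
    [Ainv + Ainv @ B @ SAinv @ C @ Ainv, -Ainv @ B @ SAinv],
    [-SAinv @ C @ Ainv,                   SAinv],
])

K = np.block([[A, B], [C, D]])
ok = np.allclose(Kinv @ K, np.eye(5))
print(ok)  # True
```

Only the 3×3 matrix \(A\) (trivial here) and the 2×2 Schur complement ever need to be inverted.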
We can manipulate this the same way we would an equation in scalars, as long as we're careful about the interpretation of multiplication and division; we need to remember that matrix multiplication is not (in general) commutative (so \(UV\) is not necessarily equal to \(VU\)), and in place of "division", we multiply by inverse matrices (so we can only do this with non-singular square matrices).
Left-multiply the first equation by \(A^{-1}\) to get
\[ x + A^{-1}By = A^{-1}m \]
Now left-multiply by \(C\) to get
\[ Cx + CA^{-1}By = CA^{-1}m \]
(these two steps could of course be done at the same time, but I've separated them for clarity). Now subtract the second given equation to get
\[ CA^{-1}By - Dy = CA^{-1}m - n \]
or
\[ (CA^{-1}B - D)y = CA^{-1}m - n \]
We can now solve for \(y\):
\[ y = (CA^{-1}B - D)^{-1}(CA^{-1}m - n) \]
and use this value to find \(x\):
\[ x = A^{-1}m - A^{-1}By \]
Plugging the numbers in, we find \(S = [3.86, 1.36, 0.81, 14.35, -10.10]^T\) (all values rounded to 2 decimal places), leading to the answer \(\lfloor 10|P| \rfloor = 102\).
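The two closed-form expressions for \(y\) and \(x\) above translate line-for-line into code; a sketch in NumPy, with the matrix entries taken from the problem statement:

```python
import numpy as np

A = 3 * np.eye(3)
B = np.array([[2, 3], [4, 5], [7, 9]], dtype=float)
C = B.T
D = np.array([[1, 2], [2, 4]], dtype=float)
m = np.array([10.0, 11.0, 12.0])
n = np.array([13.0, 14.0])

Ainv = np.linalg.inv(A)

# y = (C A^-1 B - D)^-1 (C A^-1 m - n), computed via a solve
y = np.linalg.solve(C @ Ainv @ B - D, C @ Ainv @ m - n)
# x = A^-1 m - A^-1 B y
x = Ainv @ m - Ainv @ B @ y

S = np.concatenate([x, y])
P = S.sum()
print(np.round(S, 2), int(np.floor(10 * abs(P))))
```

Only a 2×2 system is ever solved here; the 3×3 inverse is trivial because \(A = 3I\).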