In a dreadful Algebra class, Deeparaj was asked to multiply out 32 matrices in general form.
Deeparaj knew that to multiply two matrices of order $l \times m$ and $m \times n$, he had to perform $lmn$ scalar multiplications. Since the result is independent of how the product is parenthesized (as long as the matrices stay in the same sequence), Deeparaj figured out how to minimize the number of multiplications needed.
Can you find the number of multiplications he performed?
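To see where the $lmn$ count comes from, here is a minimal sketch of the naive row-by-column algorithm, tallying scalar multiplications as it goes (`multiply_count` is a name chosen for illustration, not part of the problem):

```python
# Naive multiplication of an l x m matrix A by an m x n matrix B:
# each of the l*n entries of the result takes m scalar multiplications,
# for l*m*n in total.
def multiply_count(A, B):
    l, m, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(l)]
    mults = 0
    for i in range(l):
        for j in range(n):
            for k in range(m):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1
    return C, mults

# A 2x3 matrix times a 3x4 matrix costs 2*3*4 = 24 multiplications.
A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]]
C, mults = multiply_count(A, B)
print(mults)  # 24
```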
Input Format
A line of $n + 1$ space-separated integers $d_1\ d_2\ \dots\ d_{n+1}$, which corresponds to $n$ matrices of dimensions $d_1 \times d_2,\ d_2 \times d_3,\ \dots,\ d_n \times d_{n+1}$.
Example
Input:
10 30 5 60
Output:
4500
Explanation: The matrices are $m_1, m_2, m_3$ of dimensions $10 \times 30$, $30 \times 5$, $5 \times 60$. Multiplying them in the order $(m_1 \times m_2) \times m_3$ is the best you can do, and it takes $10 \times 30 \times 5 + 10 \times 5 \times 60 = 4500$ multiplications.
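A quick check of both parenthesizations (with the example's dimension list hard-coded) shows how much worse the alternative order is:

```python
d = [10, 30, 5, 60]  # m_i has dimensions d[i-1] x d[i]

# (m1 x m2) x m3: a 10x30 by 30x5 product, then a 10x5 by 5x60 product
cost_left = d[0] * d[1] * d[2] + d[0] * d[2] * d[3]
# m1 x (m2 x m3): a 30x5 by 5x60 product, then a 10x30 by 30x60 product
cost_right = d[1] * d[2] * d[3] + d[0] * d[1] * d[3]

print(cost_left, cost_right)  # 4500 27000
```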
Great writeup and explanation!
This is a very nice solution. What is the complexity of the solution in terms of the number of matrices?
The complexity is $O(n^3)$. For a given diff, we compute f[start, i] + f[i+1, start+diff] + a[start-1]*a[i]*a[start+diff] a total of $(n - \text{diff}) \cdot \text{diff}$ times (the loop for start and the loop for i). That computation is constant time, so for a given diff, it takes $O((n - \text{diff}) \cdot \text{diff})$ time. Summing over all diff, the time taken is
$$O\big(1(n-1) + 2(n-2) + \dots + (n-1)(1)\big) = O\big((1 + 2 + \dots + n) \cdot n - (1^2 + 2^2 + \dots + n^2)\big) = O\left(\tfrac{1}{2} n^3 - \tfrac{1}{6} n^3\right) = O\left(\tfrac{1}{3} n^3\right) = O(n^3)$$
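To make the counting concrete, here is a small sketch (loop bounds assumed from the indexing in the comment above) that tallies the constant-time computations performed for each value of diff:

```python
def ops_per_diff(n):
    # For each diff = q - p, start runs over n - diff values and the split
    # point i runs over diff values, giving (n - diff) * diff computations.
    counts = {}
    for diff in range(1, n):
        ops = 0
        for start in range(1, n - diff + 1):
            for i in range(start, start + diff):
                ops += 1
        counts[diff] = ops
    return counts

counts = ops_per_diff(8)
print(all(c == (8 - d) * d for d, c in counts.items()))  # True
```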
Yes. Another hacky (but maybe not reliable) way to see this is that the dp table is two-dimensional, and we access $n$ entries to fill each cell.
Let the matrices be $M_1, M_2, \dots, M_n$ in order, and let $a_0, a_1, a_2, \dots, a_n$ be positive integers such that matrix $M_i$ has dimensions $a_{i-1} \times a_i$.
Note that the last product formed must be of the form $(M_1 M_2 \dots M_i)$ multiplied by $(M_{i+1} M_{i+2} \dots M_n)$ for some $i \in [1, n-1]$. The first term is an $a_0 \times a_i$ matrix, and the second term is an $a_i \times a_n$ matrix, so the cost of this final multiplication is $a_0 a_i a_n$. Additionally, we need to add the cost of computing each term. The good thing is that the costs of the two terms are independent of each other, so we can minimize each term separately (so that the sum is also as small as possible). Each term is then the same problem at a smaller size, which means we can use dynamic programming to solve it.
The idea is as follows. Suppose we have the optimal cost to compute the product $M_i M_{i+1} \dots M_j$ for all $i, j$ except $(i, j) = (p, q)$. Let $f(i, j)$ be this cost. If we want to compute $M_p M_{p+1} \dots M_q$ as the product of $M_p M_{p+1} \dots M_i$ times $M_{i+1} M_{i+2} \dots M_q$, the total cost is $f(p, i) + f(i+1, q) + a_{p-1} a_i a_q$. We want to minimize this, so we can simply iterate over all valid $i$:
$$f(p, q) = \min_{i \in [p, q-1]} \big( f(p, i) + f(i+1, q) + a_{p-1} a_i a_q \big)$$
And how do we find the values of $f(p, i)$ and $f(i+1, q)$? Easy, just use the same recurrence! Note that $i - p,\ q - (i+1) < q - p$; the difference of the two arguments is smaller. The base case is $f(p, p) = 0$ for all $p$, because the product of one matrix is just the matrix itself, no multiplication required. Thus we can now construct the algorithm: we compute $f(p, q)$ one by one, starting from the smallest $q - p$ (namely $0$, the base case above) and working up to $q - p = n - 1$, where we find the value of $f(1, n)$.
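The algorithm above can be sketched in Python as follows (a straightforward transcription, using 1-indexed $p, q$ as in the text; `matrix_chain_cost` is a name chosen here for illustration):

```python
def matrix_chain_cost(a):
    """Minimum scalar multiplications to compute M_1 M_2 ... M_n,
    where M_i has dimensions a[i-1] x a[i]."""
    n = len(a) - 1
    # f[p][q] = optimal cost of computing M_p ... M_q (1-indexed);
    # f[p][p] = 0 is the base case, already set below.
    f = [[0] * (n + 1) for _ in range(n + 1)]
    for diff in range(1, n):                 # diff = q - p, small to large
        for p in range(1, n - diff + 1):
            q = p + diff
            f[p][q] = min(f[p][i] + f[i + 1][q] + a[p - 1] * a[i] * a[q]
                          for i in range(p, q))
    return f[1][n]

print(matrix_chain_cost([10, 30, 5, 60]))  # 4500, matching the example
```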