Matrix Chain Multiplication (Recursion & Dynamic Programming)

Posted on Mar 31, 2024

Given a sequence of dimensions of matrices in an array arr[], where the dimension of the ith matrix is arr[i-1] * arr[i], find the most efficient way to multiply these matrices together.

The efficient way is the one that involves the least number of multiplications.

Input: arr[] = {40, 20, 30, 10, 30}
Output: 26000

Input: arr[] = {10, 20, 30}
Output: 6000

Note: Two matrices of size m * n and n * p when multiplied generate a matrix of size m * p, and the number of multiplications performed or cost is m * n * p.
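The cost formula can be checked directly on the second example: arr[] = {10, 20, 30} describes a single pair of matrices, so there is only one multiplication order and the answer is just m * n * p.

```python
# Two matrices from arr = [10, 20, 30]: a 10x20 and a 20x30.
# Only one multiplication order exists, so the answer is m * n * p.
m, n, p = 10, 20, 30
print(m * n * p)  # 6000
```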


Solutions

Method 1: Recursion

We can solve this problem recursively.

The idea is to consider the sub-array from "l" to "h" in each recursive call, where "l" is the first element and "h" is the last element.

In each recursive call, try placing a split point "k" at different positions, compute the minimum cost of the left and right sub-arrays recursively, and then add left-cost, right-cost, and the cost of multiplying the resulting left and right matrices to get totalCost.

An inner "for" loop tries each position of "k" in turn.

Each recursive call therefore produces one "totalCost" per inner-loop iteration, and must return the minimum of all the "totalCost" values obtained by placing k at different positions.

In other words, the recursion decides which part of the array to work on, and the inner loop tries every position of "k" within that sub-array.

In each call, the position of k runs from "l + 1" to "h - 1".

If "l >= h - 1", the current sub-array contains two or fewer elements, i.e. at most a single matrix, and the cost of multiplying a single matrix is zero.

For each k, the array is divided into two sub-arrays, "l to k" and "k to h"; get the cost of these two sub-arrays recursively.

Now to get the cost of multiplying the left and right result matrices, we need their dimensions.

The left result matrix has size arr[l] * arr[k] and the right has size arr[k] * arr[h], so this cost is arr[l] * arr[k] * arr[h].
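The steps above can be sketched in Python as follows (a minimal sketch; the function name mcm and its (arr, l, h) signature are illustrative, not prescribed by the problem):

```python
def mcm(arr, l, h):
    # arr[l..h] describes fewer than two matrices: nothing to multiply.
    if l >= h - 1:
        return 0
    best = float("inf")
    # Try every split point k between l and h (exclusive on both ends).
    for k in range(l + 1, h):
        left = mcm(arr, l, k)              # cost of multiplying arr[l..k]
        right = mcm(arr, k, h)             # cost of multiplying arr[k..h]
        merge = arr[l] * arr[k] * arr[h]   # cost of multiplying the two results
        best = min(best, left + right + merge)
    return best

print(mcm([40, 20, 30, 10, 30], 0, 4))  # 26000
print(mcm([10, 20, 30], 0, 2))          # 6000
```

The initial call passes l = 0 and h = len(arr) - 1, so the whole array is considered first.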

Complexity

The time complexity of this solution is exponential O(2^n).

In addition, O(n) auxiliary space is used by the recursion stack, since the recursion depth is at most n.

Method 2: Memoization

The memoization technique is an extension of the recursive approach that avoids recomputing redundant sub-problems and thus reduces the time complexity.

We can see that in each recursive call only the values of "l" and "h" change, so we can store and reuse the result of each (l, h) call using an n * n array.

The array will store a particular state (..l,h..) if we get it the first time.

Now, if we come across the same state (..l,h..) again, instead of calculating it in exponential complexity, we can directly return its result stored in the table in constant time.
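A sketch of the memoized version, assuming the same recursion as Method 1 (the names mcm_memo and solve are illustrative); -1 marks states that have not been computed yet:

```python
def mcm_memo(arr):
    n = len(arr)
    # memo[l][h] caches the best cost for the sub-array arr[l..h].
    memo = [[-1] * n for _ in range(n)]

    def solve(l, h):
        if l >= h - 1:
            return 0
        # Return the cached result if this (l, h) state was seen before.
        if memo[l][h] != -1:
            return memo[l][h]
        memo[l][h] = min(
            solve(l, k) + solve(k, h) + arr[l] * arr[k] * arr[h]
            for k in range(l + 1, h)
        )
        return memo[l][h]

    return solve(0, n - 1)

print(mcm_memo([40, 20, 30, 10, 30]))  # 26000
```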

Complexity

The time complexity of this solution is O(n^3): there are O(n^2) distinct (l, h) states, and each is solved once with an O(n) inner loop.

In addition, O(n^2) auxiliary space is used by the table.

Related


Unbounded knapsack (Recursion & Dynamic Programming)

Print Shortest Common Super-sequence (SCS)

Longest Common Subsequence (LCS)

Shortest Common Super-sequence (SCS)

Equal subset sum partition problem

Count number of subsets with given sum

Find length of the Longest Common Substring

Minimum number of deletions and insertions

What is "Subset Sum Problem" ?

Print Longest Common Subsequence (LCS)

What is the "Rod Cutting Problem" ?