Instead of the greedy approach, we can design algorithms with a divide and conquer strategy. These are algorithms that:
- Divide a problem into 2 or more subproblems
- Solve each subproblem recursively
- Combine the solutions to the subproblems
A classic example of this is merge sort. There are two ways to analyse the runtime of merge sort: one with induction, and one with a recurrence tree.
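As a concrete sketch, here is a minimal Python version of merge sort following the three steps above (an in-place implementation would avoid the list copying done here):

```python
def merge_sort(xs):
    """Sort a list with divide and conquer."""
    if len(xs) <= 1:                 # base case: size-1 lists are sorted
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])      # divide, then solve each half recursively
    right = merge_sort(xs[mid:])
    return merge(left, right)        # combine in linear time

def merge(left, right):
    """Merge two sorted lists in O(len(left) + len(right)) time."""
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    out.extend(left[i:])             # at most one of these is non-empty
    out.extend(right[j:])
    return out
```

For example, `merge_sort([5, 2, 4, 1, 3])` returns `[1, 2, 3, 4, 5]`.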
In merge sort, we recursively divide a list of $n$ elements into 2 subproblems of size $n/2$, then combine them in linear time. Hence, we make $cn$ computation steps at each level of the recurrence, where $c$ is some constant:
- Level 0 (the full list) = $cn$
- Level 1 (2 sublists) = $2 \cdot c\frac{n}{2} = cn$
- Level 2 (4 sublists) = $4 \cdot c\frac{n}{4} = cn$
And so on
The recurrence tree has $\log_2 n + 1$ levels, since the base case occurs when we reach problems of size 1. Therefore, the time complexity is upper bounded by $cn(\log_2 n + 1) = O(n \log n)$.
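Summing $cn$ work over all $\log_2 n + 1$ levels gives exactly $cn(\log_2 n + 1)$. A quick numeric check of this claim (assuming $n$ is a power of two and taking $c = 1$, both my choices for illustration):

```python
import math

def T(n, c=1):
    """Evaluate the merge-sort recurrence T(n) = 2 T(n/2) + c n,
    with T(1) = c, for n a power of two."""
    if n == 1:
        return c
    return 2 * T(n // 2, c) + c * n

# The closed form c*n*(log2(n) + 1) matches the recurrence exactly:
for k in range(11):
    n = 2 ** k
    assert T(n) == n * (math.log2(n) + 1)
```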
Alternatively, we can prove the bound algebraically by expressing merge sort's runtime with the recurrence $T(n) \le 2T(n/2) + cn$.
Suppose the upper bound is of the form $kn\log_b n$ for some constants $k > 0$ and $b > 1$. We will, by induction, attempt to prove that $T(n) \le kn\log_2 n$ for all $n \ge 2$.
From the recurrence, we have $T(2) \le 2T(1) + 2c$ for some constant $c$. If we pick $b = 2$ and some $k \ge T(1) + c$, then $T(2) \le 2k = k \cdot 2\log_2 2$, hence the statement holds for $n = 2$.
Assuming the statement holds for $n/2$, we can show it holds for $n$:
$$T(n) \le 2T\left(\tfrac{n}{2}\right) + cn \le 2 \cdot k\tfrac{n}{2}\log_2\tfrac{n}{2} + cn = kn(\log_2 n - 1) + cn$$
This simplifies to:
$$T(n) \le kn\log_2 n - (k - c)n \le kn\log_2 n \quad \text{provided } k \ge c$$
Hence, by induction, $T(n) \le kn\log_2 n$, i.e. $T(n) = O(n \log n)$.
We can form upper bounds of a general form for divide-and-conquer algorithms with “balanced” recursion trees using the Master Theorem.
Specifically, when we have problems which:
- Divide an input of size $n$ into $a$ parts of size $n/b$, where $a \ge 1$ and $b > 1$
- Solve the $a$ parts recursively
- Combine the results in $O(n^d)$ time for some constant $d \ge 0$
In other words, the Master Theorem deals with runtimes of the form
$$T(n) = aT\left(\tfrac{n}{b}\right) + O(n^d)$$
The overall runtime is dependent on the relationship between $d$ and $\log_b a$:
$$T(n) = \begin{cases} O(n^d) & \text{if } d > \log_b a \\ O(n^d \log n) & \text{if } d = \log_b a \\ O(n^{\log_b a}) & \text{if } d < \log_b a \end{cases}$$
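The three cases can be mechanised in a small helper (the function name and string output are my own; it only classifies the asymptotic bound, it does not solve arbitrary recurrences):

```python
import math

def master_theorem(a, b, d):
    """Return the asymptotic bound for T(n) = a T(n/b) + O(n^d)
    as a string. Assumes a >= 1, b > 1, d >= 0."""
    crit = math.log(a, b)              # critical exponent log_b(a)
    if d > crit:
        return f"O(n^{d})"             # combine step dominates
    if abs(d - crit) < 1e-12:
        return f"O(n^{d} log n)"       # work balanced across all levels
    return f"O(n^{round(crit, 3)})"    # leaves of the recursion dominate

# Merge sort divides into a=2 parts of size n/b = n/2 and combines
# in O(n) time (d=1), landing in the balanced case:
print(master_theorem(2, 2, 1))         # O(n^1 log n)
```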
A subtle algorithmic problem is that of integer multiplication. Suppose we multiply two integers $x$ and $y$. The length of $x$ and $y$ is logarithmic in their magnitude, whichever base they're represented in. For example, the binary number $10000000000$ ($2^{10} = 1024$) is $512$ times larger than the binary number $10$ ($2$), yet it requires only $9$ more digits to represent.
Traditional long multiplication of two $n$-digit numbers takes $O(n^2)$ time. Can we do any better?
It was shown in 1960 by the Russian mathematics student Anatoly Karatsuba that we can. His method makes use of divide and conquer to reduce the number of individual multiplications required to compute a product of two integers.
To multiply two numbers with a representation of length $n$ in base 2, we can split each number into the lower half of its bits (which have the subscript $0$ below), and the upper half of its bits (with subscript $1$). To account for the bit shift we need an extra factor of $2^{n/2}$ for the upper components, but note these are quick to compute, as left/right bitshifts are cheap:
$$x = 2^{n/2}x_1 + x_0, \qquad y = 2^{n/2}y_1 + y_0$$
Expanding the product gives
$$xy = 2^n x_1y_1 + 2^{n/2}(x_1y_0 + x_0y_1) + x_0y_0$$
Having performed two multiplications to obtain $x_1y_1$ and $x_0y_0$, we can see only one further multiplication is required, since
$$(x_1 + x_0)(y_1 + y_0) - x_1y_1 - x_0y_0 = x_1y_0 + x_0y_1$$
This yields the recurrence $T(n) = 3T(n/2) + O(n)$, which is equal to $O(n^{\log_2 3}) \approx O(n^{1.585})$ by the Master Theorem (better than the $O(n^2)$ solution we started with).
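The scheme can be sketched in Python, which conveniently has arbitrary-precision integers (the base-case threshold of 16 is an arbitrary choice of mine; real implementations tune it and fall back to hardware multiplication for small operands):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers using three recursive
    multiplications instead of four."""
    if x < 16 or y < 16:                         # small inputs: multiply directly
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)    # upper / lower halves of x
    y1, y0 = y >> half, y & ((1 << half) - 1)    # upper / lower halves of y
    hi = karatsuba(x1, y1)                       # x1*y1
    lo = karatsuba(x0, y0)                       # x0*y0
    mid = karatsuba(x1 + x0, y1 + y0) - hi - lo  # x1*y0 + x0*y1
    return (hi << (2 * half)) + (mid << half) + lo
```

For example, `karatsuba(1234, 5678)` returns `7006652`, matching `1234 * 5678`.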