The tests in this course may contain any material covered in lecture up until this point, although much stronger emphasis will be placed on material for which you have received graded feedback. This page contains a collection of exercises that you may wish to complete to prepare yourself.
Inductive Proofs
You should be prepared to give inductive proofs for simple mathematical summations.
Prove by induction that \(\sum\limits_{i=1}^{n} i \cdot 2^i = 2^{n+1}(n-1) + 2\).
Prove by induction that \(\sum\limits_{i=1}^{n} \frac{3}{4^i} = 1 - (\frac{1}{4})^n\).
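Before attempting a proof, it can help to sanity-check each identity numerically. The following script (not part of the exercises; it uses exact fractions to avoid floating-point error) verifies both identities for small \(n\):

```python
from fractions import Fraction

def lhs1(n):
    # Left side of the first identity: sum of i * 2^i for i = 1..n
    return sum(i * 2**i for i in range(1, n + 1))

def rhs1(n):
    # Right side of the first identity: 2^(n+1) * (n - 1) + 2
    return 2**(n + 1) * (n - 1) + 2

def lhs2(n):
    # Left side of the second identity: sum of 3 / 4^i for i = 1..n
    return sum(Fraction(3, 4**i) for i in range(1, n + 1))

def rhs2(n):
    # Right side of the second identity: 1 - (1/4)^n
    return 1 - Fraction(1, 4)**n

for n in range(1, 20):
    assert lhs1(n) == rhs1(n)
    assert lhs2(n) == rhs2(n)
print("both identities hold for n = 1..19")
```

A check like this cannot replace the induction proof, but it catches transcription errors in the formula before you spend time proving the wrong statement.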
We have \(P(n) \equiv \sum\limits_{i=1}^{n} i \cdot 2^i = 2^{n+1}(n-1) + 2\).
The base case, \(P(1)\), is to say \(\sum\limits_{i=1}^{1} i \cdot 2^i = 2^{1+1}(1-1) + 2\). We reduce this as follows: the left side is \(1 \cdot 2^1 = 2\), and the right side is \(2^2 \cdot 0 + 2 = 2\), so \(P(1)\) holds.
For the inductive case, we assume \(P(n)\) holds for some \(n \geq 1\) and show \(P(n+1)\). By splitting off the last term and applying the inductive hypothesis, \(\sum\limits_{i=1}^{n+1} i \cdot 2^i = \left(\sum\limits_{i=1}^{n} i \cdot 2^i\right) + (n+1) \cdot 2^{n+1} = 2^{n+1}(n-1) + 2 + (n+1) \cdot 2^{n+1}\). Combining the \(2^{n+1}\) terms gives \(2^{n+1}(2n) + 2 = 2^{n+2} \cdot n + 2 = 2^{(n+1)+1}((n+1)-1) + 2\), which is exactly \(P(n+1)\).
As we have shown both the base case and the inductive case, \(P(n)\) must be true for all integers \(n \geq 1\).
Big-O Proofs
Recall the definition of big-\(O\) notation we used in class: \(f(x)\) is \(O(g(x))\) if \(\exists c>0, k\geq 1.\ \forall n \geq k.\ f(n) \leq cg(n)\). You should be prepared, given this definition, to prove complexity bounds on functions.
Prove that \(f(n) = \frac{n^2+1}{n}\) is \(O(n)\).
Prove that \(f(n) = 3n^3 + 5n^2 + 8\) is \(O(n^3)\).
Prove that \(f(n) = (n+1)(n-1)\) is \(O(n^2)\).
(This is a “reverse” proof.)
We want to show that \(f(n)\) is \(O(n)\). By definition, this is to claim that \(\exists c>0, k\geq 1.\ \forall n \geq k.\ \frac{n^2+1}{n} \leq cn\).
Let \(k=1\) and \(c=2\). Then it suffices to show that \(\forall n \geq 1.\ \frac{n^2+1}{n} \leq 2n\). We can rewrite the left side to read \(\forall n \geq 1.\ n + \frac{1}{n} \leq 2n\). We have from algebra that, if \(a \leq b\) and \(c \leq d\), then \(a + c \leq b + d\); therefore, it suffices to show both that \(\forall n \geq 1. n \leq n\) and also that \(\forall n \geq 1. \frac{1}{n} \leq n\). The former is true by definition of equality. For the latter, we multiply both sides by \(n\) (a positive number) to get \(\forall n \geq 1. 1 \leq n^2\); we take the square root of both sides to get \(\forall n \geq 1. 1 \leq n\), which is immediately true. As these two points are sufficient to show the original statement, we conclude that \(f(n)\) is \(O(n)\).
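The witnesses \(c=2\) and \(k=1\) chosen above can also be checked mechanically for a range of values; a quick sanity check (illustrative only, not a proof):

```python
# Check that (n^2 + 1) / n <= 2n for n >= k = 1, using the witnesses
# c = 2 and k = 1 from the proof above. Denominators are cleared so the
# comparison is exact: n^2 + 1 <= 2n^2.
c, k = 2, 1
for n in range(k, 1000):
    assert n**2 + 1 <= c * n * n
print("f(n) <= 2n verified for n = 1..999")
```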
(This is a “forward” proof.)
\begin{align}
&\text{1.}& \forall n \geq 1. 1 &\leq n & \text{by definition} \\
&\text{2.}& \forall n \geq 1. 1 &\leq n^2 & \text{(1), square both (positive) sides} \\
&\text{3.}& \forall n \geq 1. 1 &\leq n^3 & \text{(1), cube both (positive) sides} \\
&\text{4.}& \forall n \geq 1. 8 &\leq 8n^3 & \text{(3), multiply by 8} \\
&\text{5.}& \forall n \geq 1. 5n^2 &\leq 5n^3 & \text{(1), multiply by \(5n^2\) (positive by (2))} \\
&\text{6.}& \forall n \geq 1. 3n^3 &\leq 3n^3 & \text{by definition of equality} \\
&\text{7.}& \forall n \geq 1. 3n^3 + 5n^2 &\leq 3n^3 + 5n^3 & \text{by adding (5) and (6)} \\
&\text{8.}& \forall n \geq 1. 3n^3 + 5n^2 + 8 &\leq 3n^3 + 5n^3 + 8n^3 & \text{by adding (4) and (7)} \\
&\text{9.}& \forall n \geq 1. 3n^3 + 5n^2 + 8 &\leq 16n^3 & \text{by (8), simplification} \\
&\text{10.}& \exists c>0. \forall n \geq 1. 3n^3 + 5n^2 + 8 &\leq cn^3 & \text{by (9), introduction of existential} \\
&\text{11.}& \exists c>0, k \geq 1. \forall n \geq k. 3n^3 + 5n^2 + 8 &\leq cn^3 & \text{by (10), introduction of existential} \\
&\text{12.}& 3n^3 + 5n^2 + 8 &\text{ is } O(n^3) & \text{by (11), definition of big-O} \\
\end{align}
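As a sanity check on step 9, the inequality \(3n^3 + 5n^2 + 8 \leq 16n^3\) can be verified numerically for a range of \(n\):

```python
# Verify step 9 for n = 1..999: 3n^3 + 5n^2 + 8 <= 16n^3.
# This spot-checks the derived constant c = 16 with k = 1.
for n in range(1, 1000):
    assert 3 * n**3 + 5 * n**2 + 8 <= 16 * n**3
print("bound holds for n = 1..999")
```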
(This is a limit-based “reverse” proof.)
We aim to prove that \(f(n) = (n+1)(n-1)\) is \(O(n^2)\). By the limit characterization of big-\(O\), it suffices to show that \(\lim\limits_{n\to\infty} \dfrac{(n+1)(n-1)}{n^2} = c\) for some constant \(c\). Expanding the product, this is to say that \(\lim\limits_{n\to\infty} \dfrac{n^2-1}{n^2} = c\); dividing through by \(n^2\), this is to say that \(\lim\limits_{n\to\infty} \left(1 - \dfrac{1}{n^2}\right) = c\).
By the difference law for limits, this is equivalent to the statement \(\left(\lim\limits_{n\to\infty} 1\right) - \left(\lim\limits_{n\to\infty} \dfrac{1}{n^2}\right) = c\). The limit of a constant is the constant itself. The second term has \(n\) only in the denominator, so this limit is zero. It therefore suffices to show \(1 - 0 = c\) for some constant \(c\), which is true with \(c = 1\).
Data Structures and Invariants
You should be prepared to discuss the data structures we have created in class, the fields in their implementations, and the invariants we rely upon in order to write algorithms for those data structures.
What is the difference between a List and a Stack? When might we use a Stack instead of a List?
For each of LinkedList (assume singly-linked with no tail pointer), ArrayList, and CircularArrayList, what are the big-\(O\) complexities of the following operations? Briefly explain your answer.
Insert to the front of the list.
Insert to the back of the list.
Retrieve the first element of the list.
Retrieve the middle element of the list.
Remove an element from the front of the list.
What is amortized complexity?
List and Stack are two different ADTs. The List ADT supports a variety of operations (such as get by index) that a Stack does not. A Stack may be implemented using a List but is not a kind of List.
We might use a Stack instead of a List if the operations provided by Stack are enough for our algorithm. By using a “weaker” data structure, the algorithm that uses it is easier to verify as correct since it’s easier to understand how the data structure behaves.
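To make the distinction concrete, here is a minimal sketch of a Stack built on top of a list; it exposes only the LIFO operations, so callers cannot rely on (or break) anything else. The method names are illustrative, not necessarily those used in class:

```python
class Stack:
    """A Stack ADT backed by a Python list; only LIFO operations are exposed."""

    def __init__(self):
        self._items = []  # backing list is private: no get-by-index is offered

    def push(self, value):
        self._items.append(value)

    def pop(self):
        return self._items.pop()

    def peek(self):
        return self._items[-1]

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push(1)
s.push(2)
print(s.pop())  # 2: last in, first out
```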
| Operation | LinkedList | ArrayList | CircularArrayList |
| --- | --- | --- | --- |
| insertAtHead | \(O(1)\): a new head is created and linked to the old head. | \(O(n)\): every element must shift down one space. | amortized \(O(1)\): the start position can shift, but the new element may grow the array. |
| insertAtTail | \(O(n)\): we must find the last node in the list. | amortized \(O(1)\): we add to the end of the array; this may require us to grow the array. | amortized \(O(1)\): identical to ArrayList. |
| peekHead | \(O(1)\): the first node of the list is immediately available. | \(O(1)\): accessing an array is constant time. | \(O(1)\): identical to ArrayList. |
| get | \(O(n)\): we have to find the appropriate node. | \(O(1)\): accessing an array is constant time. | \(O(1)\): identical to ArrayList. |
| removeHead | \(O(1)\): we make the second node our new head and delete the first node. | \(O(n)\): everything has to shift left one position. | \(O(1)\): we can just change the start position and size. |
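The \(O(1)\) removeHead answer for CircularArrayList can be illustrated with a minimal sketch of the start-index trick. Field and method names here are illustrative, and the array-growth logic is omitted:

```python
class CircularArrayList:
    """Minimal sketch: only the fields needed to show O(1) removeHead."""

    def __init__(self, capacity=8):
        self._data = [None] * capacity
        self._start = 0  # index of the logical first element
        self._size = 0

    def insert_at_tail(self, value):
        # Growth logic omitted for brevity; a real implementation would
        # resize self._data when full.
        assert self._size < len(self._data)
        self._data[(self._start + self._size) % len(self._data)] = value
        self._size += 1

    def remove_head(self):
        # O(1): advance the start index instead of shifting every element.
        value = self._data[self._start]
        self._data[self._start] = None
        self._start = (self._start + 1) % len(self._data)
        self._size -= 1
        return value

lst = CircularArrayList()
lst.insert_at_tail("a")
lst.insert_at_tail("b")
print(lst.remove_head())  # a
```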
Amortized complexity is the average complexity of an operation over a sequence of calls to it. For instance, the amortized worst-case complexity of ArrayList’s insertAtTail is \(O(1)\) even though its worst-case complexity is \(O(n)\): any single call to insertAtTail may take \(O(n)\) time (when the array must grow). However, a sequence of \(n\) calls to insertAtTail on the same list has total time complexity \(O(n)\), so each call in the sequence takes \(O(1)\) time on average.
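This averaging argument can be observed directly by counting the element copies performed across a sequence of appends, assuming the usual capacity-doubling growth policy:

```python
def count_copies(n):
    """Total element copies performed by n append calls on a doubling array."""
    capacity, size, copies = 1, 0, 0
    for _ in range(n):
        if size == capacity:
            copies += size   # grow: copy every existing element over
            capacity *= 2
        size += 1            # the append itself is a single write
    return copies

n = 1000
print(count_copies(n) / n)  # average copy work per call stays bounded
```

The total copy count for \(n\) appends is \(1 + 2 + 4 + \ldots < 2n\), so the average per call is below a constant even though individual calls occasionally copy the whole array.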
Algorithmic Complexity and Pseudocode
You should be prepared to discuss the algorithmic complexity of pseudocode examples.
What is the algorithmic complexity of the following pseudocode? Justify your answer.
\(O(n)\). This function runs a loop a total of \(n\) times; the only step in the loop is constant time.
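The original pseudocode is not reproduced on this page; a loop with the shape this answer describes might look like:

```python
def count_up(n):
    # One loop of n iterations, constant work per iteration: O(n) total.
    total = 0
    for i in range(n):
        total += 1  # single constant-time step
    return total
```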
\(O(n^4)\). A single iteration of the inner loop is a single constant-time step. The inner loop runs \(O(n^2)\) iterations each time it is executed, so a single iteration of the outer loop takes \(O(n^2)\) time. The outer loop runs \(O(n^2)\) iterations, so the overall function takes \(O(n^2) \times O(n^2) = O(n^4)\) time.
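A possible reconstruction of the loop structure this answer describes (the pseudocode itself is not shown on this page):

```python
def quartic(n):
    # Outer loop: n^2 iterations; inner loop: n^2 iterations each;
    # constant-time body, so n^2 * n^2 = n^4 total steps.
    steps = 0
    for i in range(n * n):
        for j in range(n * n):
            steps += 1
    return steps
```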
\(O(n^2)\). A single iteration of the first loop is a single constant-time step; that loop runs for \(O(n)\) iterations and so takes \(O(n)\) time. An iteration of the second loop is also a constant-time step; the second loop runs \(O(n^2)\) iterations and so takes \(O(n^2)\) time. Since one loop comes before the other, this takes \(O(n) + O(n^2) = O(n^2)\) time.
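The two sequential loops described here might be sketched as follows (an illustrative reconstruction, not the original pseudocode):

```python
def sequential(n):
    steps = 0
    for i in range(n):       # first loop: O(n)
        steps += 1
    for j in range(n * n):   # second loop: O(n^2)
        steps += 1
    return steps             # n + n^2 steps: O(n^2)
```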
\(O(2^n)\). An iteration of the inner loop takes constant time. The inner loop runs a number of times equal to \(k\), which doubles for each iteration of the outer loop, starting from 1; that is, the first execution of the inner loop runs 1 iteration, the second execution runs 2 iterations, the third runs 4, the fourth runs 8, and so on. There are \(n\) iterations of the outer loop, each of which is dominated by the time spent on the inner loop. In total (over all of the iterations of the outer loop) the inner loop runs \(1 + 2 + \ldots + 2^{n-1}\) iterations (each of which is constant time). This function therefore takes \(O(2^n)\) time.
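A loop with the doubling structure described in this answer might look like the following sketch:

```python
def doubling(n):
    steps = 0
    k = 1
    for _ in range(n):       # outer loop: n iterations
        for _ in range(k):   # inner loop: k iterations, constant-time body
            steps += 1
        k *= 2               # k doubles each outer iteration: 1, 2, 4, 8, ...
    return steps             # 1 + 2 + ... + 2^(n-1) = 2^n - 1 steps: O(2^n)
```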
\(O(\log n)\). \(k\) is initially set to \(n\) and divided by 2 each time the body of the While loop runs. After the first iteration, for instance, \(k=\frac{n}{2}\); after the second iteration, \(k=\frac{n}{4}\); and so on. When the denominator exceeds \(n\), \(k\) is less than \(1\) and so the loop terminates. It will take \(O(\log n)\) iterations for this to occur. Each iteration of the loop performs two constant-time actions, so this function runs in \(O(\log n)\) time.
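The halving loop described here might be reconstructed as (illustrative; the original pseudocode is not shown):

```python
def halving(n):
    steps = 0
    k = n
    while k >= 1:    # k is halved each iteration, so about log2(n) iterations
        k = k / 2    # one constant-time action
        steps += 1   # another constant-time action
    return steps
```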
\(O(n^2)\). The inner loop takes constant time for each iteration; its execution runs a number of iterations equal to \(k\). The value of \(k\) increases as the outer loop runs; on the first iteration of the outer loop, \(k=1\); on the second iteration, \(k=2\); and so on. Each iteration of the outer loop therefore takes \(O(k)\) steps for each value of \(k\) from \(1\) to \(n\). This is a total of \(\sum_{k=1}^{n} O(k)\) time, or \(O(n^2)\) time.
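An illustrative reconstruction of the triangular loop structure this answer analyzes:

```python
def triangular(n):
    steps = 0
    for k in range(1, n + 1):  # outer loop: k = 1, 2, ..., n
        for _ in range(k):     # inner loop: k iterations, constant-time body
            steps += 1
    return steps               # 1 + 2 + ... + n = n(n+1)/2 steps: O(n^2)
```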