Solution manual for Introduction to Algorithms (Cormen et al., 3rd edition), prepared by Michelle Bodnar and Andrew Lohr of Rutgers University.
Exercise 1.1-1
An example of a real-world situation that requires sorting: keeping a large collection of people's file folders in order so that a given name can be looked up quickly. A convex hull might be needed if you had to secure a wildlife sanctuary with fencing while containing a set of specific nesting locations.
Exercise 1.1-2
One might measure memory usage of an algorithm, or number of people required to carry out a single task.
Exercise 1.1-3
An array. It has the limitation of requiring a lot of copying when resizing, inserting, or removing elements.
Exercise 1.1-4
They are similar since both problems can be modeled by a graph with weighted edges and involve minimizing distance, or weight, of a walk on the graph. They are different because the shortest path problem considers only two vertices, whereas the traveling salesman problem considers minimizing the weight of a path that must include many vertices and end where it began.
Exercise 1.1-5
If you were, for example, keeping track of terror watch-list suspects, it would be unacceptable for the system to occasionally give a wrong answer as to whether a person is on the list. On the other hand, an approximate solution to the shortest route on which to drive would be fine; a little extra driving is not that bad.
Exercise 1.2-1
A program that picks out which music a user would like to listen to next. It would need to use a large amount of historical and popular-preference data in order to maximize the quality of its suggestions.
Exercise 1.2-2
We wish to determine for which values of n the inequality 8n^2 < 64 n lg(n) holds. This happens when n < 8 lg(n), i.e., when n ≤ 43. In other words, insertion sort runs faster when we're sorting at most 43 items; otherwise merge sort is faster.
Exercise 1.2-3
We want the smallest n with 100n^2 < 2^n. Note that for n = 14 this fails, since 100(14)^2 = 19600 > 2^14 = 16384, while for n = 15 we have 100(15)^2 = 22500 < 2^15 = 32768. So the answer is n = 15.
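Both crossover points are easy to confirm numerically; a quick Python sketch (the search bound of 100 is arbitrary):

import math

# Exercise 1.2-2: largest n with 8n^2 < 64 n lg n, i.e. n < 8 lg n.
print(max(n for n in range(2, 100) if 8 * n ** 2 < 64 * n * math.log2(n)))   # 43

# Exercise 1.2-3: smallest n with 100 n^2 < 2^n.
print(min(n for n in range(1, 100) if 100 * n ** 2 < 2 ** n))                # 15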
Problem 1-1
We assume a 30-day month and a 365-day year.
f(n)     1 Second           1 Minute           1 Hour             1 Day              1 Month             1 Year              1 Century
lg n     2^(1×10^6)         2^(6×10^7)         2^(3.6×10^9)       2^(8.64×10^10)     2^(2.592×10^12)     2^(3.1536×10^13)    2^(3.15576×10^15)
√n       1×10^12            3.6×10^15          1.29×10^19         7.46×10^21         6.72×10^24          9.95×10^26          9.96×10^30
n        1×10^6             6×10^7             3.6×10^9           8.64×10^10         2.59×10^12          3.15×10^13          3.16×10^15
n lg n   62746              2801417            133378058          2755147513         71870856404         797633893349        6.86×10^13
n^2      1000               7745               60000              293938             1609968             5615692             56176151
n^3      100                391                1532               4420               13736               31593               146679
2^n      19                 25                 31                 36                 41                  44                  51
n!       9                  11                 12                 13                 15                  16                  17
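The bottom rows of the table can be regenerated mechanically. The sketch below assumes, as the problem states, that the algorithm takes f(n) microseconds; the lg n and √n rows follow directly as n = 2^t and n = t^2 for a budget of t microseconds:

import math

def largest_n(f, budget):
    # Largest n with f(n) <= budget, via exponential search then bisection.
    hi = 1
    while f(hi) <= budget:
        hi *= 2
    lo = hi // 2                     # invariant: f(lo) <= budget < f(hi)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if f(mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo

# Microseconds in each span (365-day year, 365.25-day century, as in the table).
spans = {"second": 10**6, "minute": 6 * 10**7, "hour": 36 * 10**8,
         "day": 864 * 10**8, "month": 2592 * 10**9,
         "year": 31536 * 10**9, "century": 315576 * 10**10}

rows = {"n lg n": lambda n: n * math.log2(n),
        "n^2": lambda n: n**2,
        "n^3": lambda n: n**3,
        "2^n": lambda n: 2**n,
        "n!": math.factorial}

for name, f in rows.items():
    print(name, [largest_n(f, t) for t in spans.values()])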
Algorithm 2 Linear-Search(A,v)
1: i = NIL
2: for j = 1 to A.length do
3:   if A[j] = v then
4:     i = j
5:     return i
6:   end if
7: end for
8: return i
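A direct Python rendering of this pseudocode (0-indexed, with None standing in for NIL):

def linear_search(A, v):
    # Scan A left to right; return the first index whose value equals v,
    # or None (playing the role of NIL) if v does not appear.
    for j in range(len(A)):
        if A[j] == v:
            return j
    return None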
Input: two n-element arrays A and B containing the binary digits of two numbers a and b.
Output: an (n + 1)-element array C containing the binary digits of a + b.
Algorithm 3 Adding n-bit Binary Integers
1: carry = 0
2: for i = n to 1 do
3:   C[i + 1] = (A[i] + B[i] + carry) (mod 2)
4:   if A[i] + B[i] + carry ≥ 2 then
5:     carry = 1
6:   else
7:     carry = 0
8:   end if
9: end for
10: C[1] = carry
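A Python sketch of the same procedure, storing the most significant bit first as the pseudocode does:

def add_binary(A, B):
    # Add two n-bit binary numbers, bits stored high-order first.
    # Returns an (n+1)-element list C of the bits of the sum.
    n = len(A)
    C = [0] * (n + 1)
    carry = 0
    for i in range(n - 1, -1, -1):   # the pseudocode's "for i = n to 1"
        s = A[i] + B[i] + carry
        C[i + 1] = s % 2
        carry = s // 2               # equivalently: 1 if s >= 2 else 0
    C[0] = carry
    return C

assert add_binary([1, 0, 1], [0, 1, 1]) == [1, 0, 0, 0]   # 5 + 3 = 8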
Exercise 2.2-1
n^3/1000 − 100n^2 − 100n + 3 ∈ Θ(n^3)
Exercise 2.2-2
Input: An n-element array A.
Output: The array A with its elements rearranged into increasing order.
The loop invariant of selection sort is as follows: at the start of each iteration of the for loop of lines 1 through 10, the subarray A[1..i − 1] contains the i − 1 smallest elements of A in increasing order. After n − 1 iterations of the loop, the n − 1 smallest elements of A are in the first n − 1 positions of A in increasing order, so the nth element is necessarily the largest element. Therefore we do not need to run the loop a final time. The best-case and worst-case running times of selection sort are Θ(n^2). This is because regardless of how the elements are initially arranged, on the ith iteration of the main for loop the algorithm always inspects each of the remaining n − i elements to find the smallest one remaining.
Algorithm 4 Selection Sort
1: for i = 1 to n − 1 do
2:   min = i
3:   for j = i + 1 to n do
4:     // Find the index of the ith smallest element
5:     if A[j] < A[min] then
6:       min = j
7:     end if
8:   end for
9:   Swap A[min] and A[i]
10: end for
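The same algorithm in Python (0-indexed; a sketch):

def selection_sort(A):
    # In-place selection sort following Algorithm 4.
    n = len(A)
    for i in range(n - 1):
        m = i                        # index of smallest element seen in A[i..n-1]
        for j in range(i + 1, n):
            if A[j] < A[m]:
                m = j
        A[i], A[m] = A[m], A[i]      # swap A[min] and A[i]

Note that the outer loop stops one position early, matching the observation above that the final iteration is unnecessary.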
This yields a running time of

Σ_{i=1}^{n−1} (n − i) = n(n − 1) − Σ_{i=1}^{n−1} i = n^2 − n − (n^2 − n)/2 = (n^2 − n)/2 = Θ(n^2).
Exercise 2.2-3
Suppose that every entry has a fixed probability p of being the element looked for. (A different interpretation of the question is given at the end of this solution.) Then we will only check k elements if the previous k − 1 positions were not the element being looked for and the kth position is the desired value. This means that the probability that the number of steps taken is k is (1 − p)^(k−1) p. The last possibility is that none of the elements in the array match what we are looking for, in which case we look at all A.length positions; this happens with probability (1 − p)^A.length. Multiplying the number of steps in each case by the probability that that case happens, we get the expected value

E(steps) = A.length (1 − p)^A.length + Σ_{k=1}^{A.length} k (1 − p)^(k−1) p
The worst case is obviously if you have to check all of the possible positions, in which case it will take exactly A.length steps, so it is Θ(A.length). Now we analyze the asymptotic behavior of the average case. Consider the following manipulations, where we first rewrite the single summation as a double summation and then use the geometric sum formula twice.
Exercise 2.2-4
For a good best-case running time, modify an algorithm to first randomly produce output and then check whether or not it satisfies the goal of the algorithm. If so, produce this output and halt. Otherwise, run the algorithm as usual. It is unlikely that this will be successful, but in the best case the running time will only be as long as it takes to check a solution. For example, we could modify selection sort to first randomly permute the elements of A, then check if they are in sorted order. If they are, output A. Otherwise run selection sort as usual. In the best case, this modified algorithm will have running time Θ(n).
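A sketch of this trick applied to selection sort, reusing the selection_sort function from the Exercise 2.2-2 sketch above:

import random

def lucky_selection_sort(A):
    # Randomly permute A; if the permutation happens to be sorted, stop.
    # Checking sortedness costs Theta(n), which becomes the best case.
    random.shuffle(A)
    if all(A[i] <= A[i + 1] for i in range(len(A) - 1)):
        return
    selection_sort(A)   # otherwise run the unmodified algorithm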
Exercise 2.3-1
Start by reading across the bottom of the merge-sort recursion tree (the single-element subarrays), then go up level by level: each level pairs adjacent sorted runs and merges them, until the top level produces the fully sorted array.
Exercise 2.3-2
The following is a rewrite of MERGE which avoids the use of sentinels. Much like MERGE, it begins by copying the subarrays of A to be merged into arrays L and R. At each iteration of the while loop starting on line 13 it selects the next smallest element from either L or R to place into A. It stops if either L or R runs out of elements, at which point it copies the remainder of the other subarray into the remaining spots of A.
Exercise 2.3-3
Since n is a power of two, we may write n = 2^k. If k = 1, T(2) = 2 = 2 lg(2). Suppose the claim is true for k; we will show it is true for k + 1:

T(2^(k+1)) = 2T(2^(k+1)/2) + 2^(k+1) = 2T(2^k) + 2^(k+1) = 2(k 2^k) + 2^(k+1) = k 2^(k+1) + 2^(k+1) = (k + 1) 2^(k+1) = 2^(k+1) lg(2^(k+1)) = n lg(n)
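The induction can be spot-checked by unrolling the recurrence directly for small powers of two (a quick sketch):

import math

def T(n):
    # T(2) = 2; T(n) = 2*T(n/2) + n for n > 2 a power of 2.
    return 2 if n == 2 else 2 * T(n // 2) + n

for k in range(1, 11):
    n = 2 ** k
    assert T(n) == n * math.log2(n)   # T(n) = n lg n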
Exercise 2.3-4
Let T (n) denote the running time for insertion sort called on an array of size n. We can express T (n) recursively as
T(n) = Θ(1)             if n ≤ c
T(n) = T(n − 1) + I(n)  otherwise

where I(n) denotes the amount of time it takes to insert A[n] into the sorted array A[1..n − 1]. Since we may have to shift as many as n − 1 elements once we find the correct place to insert A[n], we have I(n) = Θ(n).
Algorithm 5 Merge(A, p, q, r)
1: n1 = q − p + 1
2: n2 = r − q
3: let L[1..n1] and R[1..n2] be new arrays
4: for i = 1 to n1 do
5:   L[i] = A[p + i − 1]
6: end for
7: for j = 1 to n2 do
8:   R[j] = A[q + j]
9: end for
10: i = 1
11: j = 1
12: k = p
13: while i ≠ n1 + 1 and j ≠ n2 + 1 do
14:   if L[i] ≤ R[j] then
15:     A[k] = L[i]
16:     i = i + 1
17:   else A[k] = R[j]
18:     j = j + 1
19:   end if
20:   k = k + 1
21: end while
22: if i == n1 + 1 then
23:   for m = j to n2 do
24:     A[k] = R[m]
25:     k = k + 1
26:   end for
27: end if
28: if j == n2 + 1 then
29:   for m = i to n1 do
30:     A[k] = L[m]
31:     k = k + 1
32:   end for
33: end if
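The same procedure rendered in Python (0-indexed, inclusive bounds p ≤ q < r as in the pseudocode; a sketch):

def merge(A, p, q, r):
    # Merge sorted A[p..q] and A[q+1..r] (inclusive indices) without sentinels.
    L = A[p:q + 1]
    R = A[q + 1:r + 1]
    i = j = 0
    k = p
    while i < len(L) and j < len(R):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1
        k += 1
    # One subarray is exhausted; copy the rest of the other into A[k..r].
    A[k:r + 1] = L[i:] if j == len(R) else R[j:]

For example, with A = [2, 4, 5, 1, 3], calling merge(A, 0, 2, 4) leaves A = [1, 2, 3, 4, 5].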
1: Use Merge Sort to sort the array A in time Θ(n lg(n))
2: i = 1
3: j = n
4: while i < j do
5:   if A[i] + A[j] = S then
6:     return true
7:   end if
8:   if A[i] + A[j] < S then
9:     i = i + 1
10:   end if
11:   if A[i] + A[j] > S then
12:     j = j − 1
13:   end if
14: end while
15: return false
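The same two-pointer scan in Python (a sketch; has_pair_with_sum is our name for it):

def has_pair_with_sum(A, S):
    # True iff two elements of A (at distinct positions) sum to S.
    A = sorted(A)            # Theta(n lg n)
    i, j = 0, len(A) - 1
    while i < j:             # Theta(n) scan, so Theta(n lg n) overall
        total = A[i] + A[j]
        if total == S:
            return True
        if total < S:
            i += 1
        else:
            j -= 1
    return False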
such k. The fact that we then moved on from (i, k) means that immediately afterward we considered (i + 1, k), which can only happen if A[i] + A[k] < S; since j < k implies A[j] ≤ A[k], this gives A[i] + A[j] < S, contradicting A[i] + A[j] = S.
Case 2: ∃k such that (k, j) was considered and k < i. In this case, we take the largest such k. The fact that we then moved on from (k, j) means that immediately afterward we considered (k, j − 1), which can only happen if A[k] + A[j] > S; since k < i implies A[k] ≤ A[i], this gives A[i] + A[j] > S, again a contradiction.
Note that one of these two cases must hold, since the set of considered pairs separates {(m, m′) : m ≤ m′ < n} into at most two regions. If (i, j) is in the region that contains (1, 1) (if nonempty), then we are in Case 1; if it is in the region that contains (n, n) (if nonempty), then we are in Case 2.
Problem 2-1
a. The time for insertion sort to sort a single list of length k is Θ(k^2), so sorting n/k of them takes time Θ((n/k) k^2) = Θ(nk).
b. Suppose we have coarseness k. This means we can use the usual merging procedure, except that we start at the level at which each array has size at most k. This means that the depth of the merge tree is lg(n) − lg(k) = lg(n/k). Each level of merging still takes time cn, so putting it together, the merging takes time Θ(n lg(n/k)).
c. Viewing k as a function of n, the combined running time Θ(nk + n lg(n/k)) has the same asymptotics as standard merge sort, Θ(n lg(n)), as long as k(n) ∈ O(lg(n)). In particular, for any constant choice of k, the asymptotics are the same.
d. If we optimize the previous expression using basic calculus, we set the derivative with respect to k to zero: c1 n − c2 n/k = 0, where c1 and c2 are the coefficients of nk and n lg(n/k) hidden by the asymptotic notation. This gives k = c2/c1; in particular, a constant choice of k is optimal. In practice we could find the best choice of this k by just trying and timing various values for sufficiently large n, as in the sketch below.
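A sketch of the hybrid algorithm from parts (a) and (b), with the coarseness k left as a parameter so it can be tuned by timing as part (d) suggests (the default k = 16 is an arbitrary placeholder):

def hybrid_merge_sort(A, k=16):
    # Merge sort that hands subarrays of length <= k to insertion sort.
    if len(A) <= k:
        for i in range(1, len(A)):       # insertion sort: Theta(k^2) worst case per run
            key, j = A[i], i - 1
            while j >= 0 and A[j] > key:
                A[j + 1] = A[j]
                j -= 1
            A[j + 1] = key
        return A
    mid = len(A) // 2
    left = hybrid_merge_sort(A[:mid], k)
    right = hybrid_merge_sort(A[mid:], k)
    out, i, j = [], 0, 0                 # standard merge of the sorted halves
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]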
Problem 2-
Problem 2-4
d. We'll call our algorithm M.Merge-Sort, for Modified Merge Sort. In addition to sorting A, it will also keep track of the number of inversions. The algorithm works as follows. When we call M.Merge-Sort(A, p, q) it sorts A[p..q] and returns the number of inversions among the elements of A[p..q], so left and right track the number of inversions of the form (i, j) where i and j are both in the same half of A. When M.Merge(A, p, q, r) is called, it returns the number of inversions of the form (i, j) where i is in the first half of the array and j is in the second half. Summing these up gives the total number of inversions in A. The running time is the same as that of Merge-Sort because we only add an additional constant-time operation to some of the iterations of some of the loops. Since Merge-Sort is Θ(n lg n), so is this algorithm.
Algorithm 6 M.Merge-Sort(A, p, r)
if p < r then
  q = ⌊(p + r)/2⌋
  left = M.Merge-Sort(A, p, q)
  right = M.Merge-Sort(A, q + 1, r)
  inv = M.Merge(A, p, q, r) + left + right
  return inv
end if
return 0
Algorithm 7 M.Merge(A, p, q, r)
inv = 0
n1 = q − p + 1
n2 = r − q
let L[1..n1] and R[1..n2] be new arrays
for i = 1 to n1 do
  L[i] = A[p + i − 1]
end for
for j = 1 to n2 do
  R[j] = A[q + j]
end for
i = 1
j = 1
k = p
while i ≠ n1 + 1 and j ≠ n2 + 1 do
  if L[i] ≤ R[j] then
    A[k] = L[i]
    inv = inv + (j − 1) // Keeps track of inversions between the left and right arrays: the j − 1 right-array elements already placed each form an inversion with L[i].
    i = i + 1
  else
    A[k] = R[j]
    j = j + 1
  end if
  k = k + 1
end while
if i == n1 + 1 then
  for m = j to n2 do
    A[k] = R[m]
    k = k + 1
  end for
end if
if j == n2 + 1 then
  for m = i to n1 do
    A[k] = L[m]
    inv = inv + n2 // Tracks inversions once we have exhausted the right array. At this point, every element of the right array contributes an inversion with each remaining left element.
    k = k + 1
  end for
end if
return inv
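The same idea in Python, using slices instead of explicit index arithmetic (a sketch; sort_and_count is our name for it):

def sort_and_count(A):
    # Return (sorted copy of A, number of inversions in A) in Theta(n lg n).
    if len(A) <= 1:
        return A, 0
    mid = len(A) // 2
    left, linv = sort_and_count(A[:mid])
    right, rinv = sort_and_count(A[mid:])
    merged, i, j, inv = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inv += len(left) - i      # every remaining left element beats right[j]
    merged += left[i:] + right[j:]
    return merged, linv + rinv + inv

assert sort_and_count([2, 3, 8, 6, 1])[1] == 5   # the five inversions from the text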
Exercise 3.1-5
Suppose f(n) ∈ Θ(g(n)). Then ∃ c1, c2, n0 such that ∀ n ≥ n0, 0 ≤ c1 g(n) ≤ f(n) ≤ c2 g(n). If we look at these inequalities separately, we have c1 g(n) ≤ f(n) (so f(n) ∈ Ω(g(n))) and f(n) ≤ c2 g(n) (so f(n) ∈ O(g(n))). Conversely, suppose we had ∃ n1, c1 such that ∀ n ≥ n1, c1 g(n) ≤ f(n), and ∃ n2, c2 such that ∀ n ≥ n2, f(n) ≤ c2 g(n). Putting these together, and letting n0 = max(n1, n2), we have ∀ n ≥ n0, c1 g(n) ≤ f(n) ≤ c2 g(n), so f(n) ∈ Θ(g(n)).
Exercise 3.1-6
Suppose the running time is Θ(g(n)). By Theorem 3.1, the running time is O(g(n)), which implies that for any input of size n ≥ n0 the running time is bounded above by c1 g(n) for some c1. This includes the running time on the worst-case input. Theorem 3.1 also implies the running time is Ω(g(n)), which implies that for any input of size n ≥ n0 the running time is bounded below by c2 g(n) for some c2. This includes the running time on the best-case input. On the other hand, the running time of any input is bounded above by the worst-case running time and bounded below by the best-case running time. If the worst-case and best-case running times are O(g(n)) and Ω(g(n)) respectively, then the running time of any input of size n must be O(g(n)) and Ω(g(n)). Theorem 3.1 implies that the running time is Θ(g(n)).
Exercise 3.1-7
Suppose we had some f(n) ∈ o(g(n)) ∩ ω(g(n)). Then, by the definitions of o and ω,

0 = lim_{n→∞} f(n)/g(n) = ∞,

a contradiction.
Exercise 3.1-8
Ω(g(n, m)) = {f(n, m) : there exist positive constants c, n0, and m0 such that f(n, m) ≥ c g(n, m) for all n ≥ n0 or m ≥ m0}

Θ(g(n, m)) = {f(n, m) : there exist positive constants c1, c2, n0, and m0 such that c1 g(n, m) ≤ f(n, m) ≤ c2 g(n, m) for all n ≥ n0 or m ≥ m0}
Exercise 3.2-1
Let n1 < n2 be arbitrary. Since f and g are monotonically increasing, we know f(n1) < f(n2) and g(n1) < g(n2). So

f(n1) + g(n1) < f(n2) + g(n1) < f(n2) + g(n2).

Since g(n1) < g(n2), we have f(g(n1)) < f(g(n2)). Lastly, if both are nonnegative, then

f(n2) g(n2) = f(n1) g(n1) + f(n2)(g(n2) − g(n1)) + (f(n2) − f(n1)) g(n1).

Since f(n1) ≥ 0 and f is increasing, f(n2) > 0, so the second term in this expression is greater than zero. The third term is nonnegative, so the whole thing is greater than f(n1) g(n1), i.e., f(n1) g(n1) < f(n2) g(n2).
Exercise 3.2-2
a^(log_b(c)) = a^(log_a(c)/log_a(b)) = (a^(log_a(c)))^(1/log_a(b)) = c^(1/log_a(b)) = c^(log_b(a))
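A quick numeric spot check of the identity (the constants 5, 3, 7 are arbitrary positive values):

import math

a, b, c = 5.0, 3.0, 7.0
assert math.isclose(a ** math.log(c, b), c ** math.log(a, b))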
Exercise 3.2-3
As the hint suggests, we will apply Stirling's approximation:

lg(n!) = lg(√(2πn) (n/e)^n (1 + Θ(1/n))) = (1/2) lg(2πn) + n lg(n) − n lg(e) + lg(1 + Θ(1/n))

The n lg(e) term is Θ(n) and the other correction terms are O(lg(n)), so the whole expression is dominated by the n lg(n) term. So we have that lg(n!) = Θ(n lg(n)).

Next,

lim_{n→∞} 2^n/n! = lim_{n→∞} 2^n / (√(2πn) (n/e)^n (1 + Θ(1/n))) ≤ lim_{n→∞} (2e/n)^n.

If we restrict to n > 4e, then this is ≤ lim_{n→∞} 2^(−n) = 0, so 2^n = o(n!), i.e., n! = ω(2^n).

Similarly,

lim_{n→∞} n^n/n! = lim_{n→∞} n^n / (√(2πn) (n/e)^n (1 + Θ(1/n))) = lim_{n→∞} e^n / (√(2πn) (1 + Θ(1/n))) ≥ lim_{n→∞} e^n/(c1 √n) ≥ lim_{n→∞} e^n/(c1 n) = ∞

for an appropriate constant c1, so n^n = ω(n!), i.e., n! = o(n^n).
Exercise 3.2-4
The function ⌈lg n⌉! is not polynomially bounded. If it were, there would exist constants c, a, and n0 such that for all n ≥ n0 the inequality ⌈lg n⌉! ≤ c n^a would hold. In particular, it would hold when n = 2^k for k ∈ N. Then the inequality becomes k! ≤ c 2^(ak) = c (2^a)^k, which fails for all sufficiently large k, since k! grows faster than any exponential (2^a)^k. This contradiction shows that ⌈lg n⌉! is not polynomially bounded.
Exercise 3.2-8
so that n/ln n = Ω(k). Similarly, we have ln k + ln(ln k) = ln(k ln k) ≤ ln(c2 n) = ln(c2) + ln(n), so ln(n) = Ω(ln k). Let c4 be such that ln n ≥ c4 ln k. Then

n/ln n ≤ n/(c4 ln k) ≤ (k ln k)/(c1 c4 ln k) = k/(c1 c4),

so that n/ln n = O(k). By Theorem 3.1 this implies n/ln n = Θ(k). By symmetry, k = Θ(n/ln n).
Problem 3-1
a. Since k ≥ d, if we pick any c > a_d, the end behavior of c n^k − p(n) goes to infinity; in particular, there is an n0 so that for every n ≥ n0 it is positive, so we can add p(n) to both sides to get p(n) < c n^k, i.e., p(n) = O(n^k).

b. Since k ≤ d, if we pick any c with 0 < c < a_d, the end behavior of p(n) − c n^k goes to infinity; in particular, there is an n0 so that for every n ≥ n0 it is positive, so we can add c n^k to both sides to get p(n) > c n^k, i.e., p(n) = Ω(n^k).

c. We have by the previous parts that p(n) = O(n^k) and p(n) = Ω(n^k). So, by Theorem 3.1, we have that p(n) = Θ(n^k).
d. For k > d,

lim_{n→∞} p(n)/n^k = lim_{n→∞} n^d (a_d + o(1))/n^k ≤ lim_{n→∞} 2 a_d n^d/n^k = 2 a_d lim_{n→∞} n^(d−k) = 0,

so p(n) = o(n^k).
e. For k < d,

lim_{n→∞} n^k/p(n) = lim_{n→∞} n^k/(n^d (a_d + o(1))) ≤ lim_{n→∞} n^k/(c n^d) = (1/c) lim_{n→∞} n^(k−d) = 0

for some constant c > 0, so p(n) = ω(n^k).
Problem 3-2

A           B           O    o    Ω    ω    Θ
lg^k(n)     n^ε         yes  yes  no   no   no
n^k         c^n         yes  yes  no   no   no
√n          n^(sin n)   no   no   no   no   no
2^n         2^(n/2)     no   no   yes  yes  no
n^(lg c)    c^(lg n)    yes  no   yes  no   yes
lg(n!)      lg(n^n)     yes  no   yes  no   yes

Problem 3-3
a. The following are in decreasing order of growth rate, one row per line; functions on the same line are Θ of each other:

2^(2^(n+1))
2^(2^n)
(n + 1)!
n!
e^n
n · 2^n
2^n
(3/2)^n
n^(lg(lg(n))), lg(n)^(lg(n))
(lg(n))!
n^3
n^2, 4^(lg(n))
n lg(n), lg(n!)
n, 2^(lg(n))
(√2)^(lg(n))
2^(√(2 lg(n)))
lg^2(n)
ln(n)
√(lg(n))
ln(ln(n))
2^(lg*(n))
lg*(n), lg*(lg(n))
lg(lg*(n))
n^(1/lg(n)), 1
b. If we define the function

f(n) = g1(n)!   if n mod 2 = 0
f(n) = 1/n      if n mod 2 = 1

note that f(n) meets the asymptotically positive requirement that this chapter puts on the functions analyzed. Then, for even n, we have

lim_{n→∞} f(2n)/g_i(2n) ≥ lim_{n→∞} f(2n)/g_1(2n) = lim_{n→∞} (g_1(2n) − 1)! = ∞

And for odd n, we have