Algorithm Analysis: Big-O and Time-Space Complexity of SCAN and STOR – Prof. Gary Locklair (Computer Science study notes)

An overview of Big-O notation and its application in analyzing algorithms for efficiency. The notes compare the time and space complexity of two algorithms, SCAN and STOR, used for finding duplicates in a list. The analysis covers the purpose of Big-O notation, the concept of order, and a comparison of the two algorithms based on their time and space complexity.

26 September, Day 9
Collect Wassn3
Return / review Sassn1 (presentation) grades
Return / review Wassn2 (if not done last time)
Chapter 15 – time and space complexity
quick overview
Big-O notation is used to analyze algorithms for efficiency – example …
Big-O notation; O is “order”
Example use: we say an algorithm is O(n) “Order n”
Purpose of Big-O notation: convenience in comparing algorithms on
time and space complexity (usage)
What does this mean? Ex: which sorting algorithm is “better?” {bubble,
shell, quick, insertion, etc} Can answer based upon Order.
Quicksort, which is O(n log n) on average, running on a small PC can
beat bubble sort, which is O(n²), running on a supercomputer if there
are many values to sort. To sort 1,000,000 numbers, quicksort takes
20,000,000 steps on average, while bubble sort takes
1,000,000,000,000 (1 trillion) steps!
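A quick numeric check of those figures (a minimal sketch, not from the notes; it just evaluates n·log₂ n and n² at n = 1,000,000):

#include <cmath>
#include <cstdio>

int main() {
    const double n = 1000000.0;              // values to sort
    const double quick  = n * std::log2(n);  // ~20,000,000 steps (log2 of 1,000,000 is about 20)
    const double bubble = n * n;             // 1,000,000,000,000 steps
    std::printf("quicksort (n log n): about %.0f steps\n", quick);
    std::printf("bubble sort (n^2)  : about %.0f steps\n", bubble);
    return 0;
}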
Chapter problem: find duplicates in a list
Two possible algorithms: 1. Scan, 2. Stor
Which is “better?”
SCAN algorithm
a simple-minded doubly nested loop; SCAN will exit (stop) when it finds
the first match.
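A minimal C++ sketch of SCAN as described above (the array name A and the early exit come from the notes; the function name and the use of std::vector are illustrative assumptions):

#include <vector>

// SCAN sketch: simple-minded doubly nested loop over the list A.
// Returns true as soon as the first match (duplicate) is found.
// Indices are 0-based here, while the notes count positions from 1.
bool scanHasDuplicate(const std::vector<int>& A) {
    const int n = static_cast<int>(A.size());
    for (int i = 0; i < n - 1; ++i) {        // outer loop over positions 1 .. n-1
        for (int j = i + 1; j < n; ++j) {    // inner loop over positions i+1 .. n
            if (A[i] == A[j]) {
                return true;                 // exit (stop) at the first match
            }
        }
    }
    return false;                            // list is unique: no duplicates
}

Besides the array itself, the only extra storage is the two loop counters i and j, which is where the “+2” in the space complexity below comes from.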
Space complexity means memory or storage for data. In this case, it
depends upon the size n of the array.
Space complexity is O(n+2): n for the number of array elements in A.
Where did the +2 come from? We need memory for the variables i and j.
Time complexity means ‘time of execution.’ Count the number of executed
statements; what’s the worst case? {Answer: O(3n²) … a quadratic}
Time complexity … start out with an example, say n is 4.
outer loop i <- 1 to n-1 (here, 1 to 3)
inner loop j <- i+1 to n
1st time outer loop i = 1 [tallest lines for outer loop]
inner loop j = 2 (i + 1)
inner loop j = 3
2nd time outer loop i = 2
inner loop j = 3
3rd time outer loop i = 3
inner loop j = 4
note: example uses A = {7, 8, 4, 4}
for time complexity, assume each statement uses the same amount of time for execution (just a benchmark)
Two questions on top of page 98 – specific answers depend upon the actual values in the array A.
Problem is: we want to ensure that our list of integers is unique; that is, there are no duplicates in the list. As soon as we find one duplicate, then we can end …
SCAN algorithm (one approach to solving that problem)
similar idea to the bubble sort – it will keep scanning the list looking for duplicates. First time, it compares position 1 to all the others, then compares position 2 to all the remaining … etc.
Advantage of SCAN – low space complexity, O(n)
Disadvantage of SCAN – higher time complexity, O(n²)
why is it O(n²)? Don’t consider the coefficient (3); only consider the greatest quantity (e.g., n², not n or 2)
STOR algorithm (a different approach to the problem)
employs a hashing function (associative addressing) {test}
Associative addressing … a value is the address – see the C++ sketch below
Question: SCAN vs. STOR, which should I use? Which is “better?” Does it matter which one I use?
Time complexity may help us decide which is “better.”
SCAN: O(n²)   STOR: O(n)
When n is large, SCAN is much, much slower than STOR!
But the tradeoff is space complexity.
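The notes don’t spell out STOR’s code, so the following is only a sketch of the associative-addressing idea, under the assumption (not stated in the notes) that the list values are integers in a known range 0 .. maxValue; each value is used directly as an address into a table of “seen” flags:

#include <vector>

// STOR sketch: "a value is the address" -- each element of A indexes a flag table.
// One pass over the list, so time complexity is O(n); the tradeoff is the extra
// space for the table, whose size depends on the value range rather than on n.
// Assumption (not in the notes): values lie in 0 .. maxValue.
bool storHasDuplicate(const std::vector<int>& A, int maxValue) {
    std::vector<bool> seen(maxValue + 1, false);
    for (int value : A) {
        if (seen[value]) {
            return true;       // this value was stored before -- duplicate found
        }
        seen[value] = true;    // store: mark the address given by the value itself
    }
    return false;
}

A general hash table (e.g. std::unordered_set) would drop the value-range assumption while keeping the same one-pass, O(n) behavior.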

code both algorithms in C++
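A minimal driver for the two sketches above (assuming scanHasDuplicate and storHasDuplicate are in scope), using the A = {7, 8, 4, 4} example from the notes:

#include <iostream>
#include <vector>

int main() {
    const std::vector<int> A = {7, 8, 4, 4};   // example list from the notes
    std::cout << std::boolalpha;
    std::cout << "SCAN finds a duplicate: " << scanHasDuplicate(A) << '\n';
    std::cout << "STOR finds a duplicate: " << storHasDuplicate(A, 10) << '\n';  // values here are all < 10
    return 0;
}

Both calls should report true, since A contains the duplicate value 4.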