COMS21103: Summary of Lecture 4
Little “oh”
We started by looking at strict upper bounds: we defined o(g) to capture the notion of a strict asymptotic upper bound.
This notion is defined properly in the notes for lecture 3.
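For reference, one standard way to make this precise (this is the CLRS-style definition, stated here as a reminder rather than as the official one from the lecture 3 notes) is:

o(g(n)) = \{\, f(n) : \forall c > 0 \;\exists n_0 > 0 \text{ such that } 0 \le f(n) < c \cdot g(n) \text{ for all } n \ge n_0 \,\}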
Asymptotic analysis
We started by looking at the Insertion sort algorithm. The algorithm is presented on page
17 of CLRS. A more careful analysis than the one we did in class, but which essentially uses the
same abstractions that we have discussed, is given on pages 23-25.
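For concreteness, here is a minimal Python sketch of insertion sort in the spirit of the CLRS pseudocode; the function name and the use of a plain Python list are choices of this sketch, not part of the lecture:

def insertion_sort(A):
    """Sort the list A in place, in ascending order."""
    for j in range(1, len(A)):        # invariant: A[0..j-1] is already sorted
        key = A[j]                    # the element to insert into the sorted prefix
        i = j - 1
        while i >= 0 and A[i] > key:  # shift larger elements one slot to the right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                # drop key into its correct position
    return A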
Pseudocode. The algorithm is written in pseudocode, a very convenient way of writing algorithms
so that we can focus on the substance of the algorithms and not get bogged down in the details of
the implementation.
Random Access Machine (RAM). We analyze the execution of such algorithms on an idealized
machine called the Random Access Machine (RAM). This machine has an infinite amount of memory,
the basic instructions (e.g. assignment) take constant time, and accessing any memory location
also takes constant time.
I made the following points in the first lecture, but let me restate them here, since this is the
proper context.
Worst-case analysis. We will mostly be concerned with the worst-case asymptotic behaviour of
algorithms. This means that for each algorithm we have to determine the time that the algorithm
takes on the worst possible input (i.e. the input that makes the algorithm run longest). This analysis
guarantees upper bounds on the running time of the algorithm (and upper bounds are really what
we care about). Notice that it's important to give as tight an upper bound as possible: it's better
to know that the running time of an algorithm is O(n^2) than O(n^3).
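To see concretely why the worst case matters for insertion sort (this small experiment is an illustration added here, not part of the lecture), we can count key comparisons on a sorted, a random, and a reverse-sorted input of the same length; the reverse-sorted input is the worst case, with n(n-1)/2 comparisons:

import random

def comparisons_in_insertion_sort(A):
    """Return how many key comparisons insertion sort performs on A."""
    A = list(A)
    count = 0
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0:
            count += 1               # one comparison of A[i] against key
            if A[i] <= key:
                break
            A[i + 1] = A[i]          # shift and keep scanning left
            i -= 1
        A[i + 1] = key
    return count

n = 100
print(comparisons_in_insertion_sort(range(n)))                    # sorted: n - 1
print(comparisons_in_insertion_sort(random.sample(range(n), n)))  # random: about n^2/4
print(comparisons_in_insertion_sort(range(n, 0, -1)))             # reverse-sorted: n(n-1)/2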
Average case analysis. This is an alternative to worst-case analysis where, if we know that the
inputs to the algorithm come from some distribution, we look at the time that the algorithm is
expected to take when its inputs are selected from that distribution.
Here's an example that should clarify what this means: say that we have some algorithm A
for which the inputs come from some domain D = {x1, x2, ..., xl}, and assume that on input xi
the algorithm takes time ti. Imagine that the input that is actually passed to the algorithm is
determined by a distribution on D, meaning that the input xi is passed to A with some probability
pi. Then the expected running time of A is how much time it takes on average, i.e.:
\sum_{i=1}^{l} p_i \cdot t_i
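As a toy numerical instance of this formula (the domain, running times, and probabilities below are invented purely for illustration), a short Python sketch:

# Hypothetical domain D = {x1, x2, x3} with running times t_i and probabilities p_i.
times = {"x1": 1, "x2": 10, "x3": 100}     # t_i: time taken on input x_i
probs = {"x1": 0.5, "x2": 0.3, "x3": 0.2}  # p_i: probability of input x_i

# Expected running time: the sum over i of p_i * t_i.
expected = sum(probs[x] * times[x] for x in times)
print(expected)  # 0.5*1 + 0.3*10 + 0.2*100 = 23.5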

Sum of degrees of vertexes of a graph

To illustrate that (sometimes) it is important to count carefully the instructions that an algorithm executes, we looked at two algorithms for a simple task, namely computing the sum of the degrees of the vertexes of a graph.

Input: A directed graph G = (V, E)
Output: The sum of the degrees of the vertexes of G (the degree of a vertex is the number of its neighbours)

Here we have assumed that the graph has no self-loops, meaning that for any vertex v, (v, v) is not an edge in the graph. (This assumption is not really necessary, but it makes everyone's life easier.) We then looked at two different possibilities for the input data.

Graph given as adjacency matrix. We are given a matrix AG such that AG[vi, vj] = 1 if and only if (vi, vj) is an edge in G. (Assume that the vertexes are given by the integers 1, 2, ..., |V|.) We then write the following algorithm:

  1. d ← 0
  2. for v ∈ V do
  3. dv ← 0
  4. for u ∈ V do
  5. dv ← dv + AG[u, v]
  6. d ← d + dv
  7. return d

One way to analyze this algorithm is to observe that line 2) is executed |V| times, line 3) is executed |V| times, line 5) is executed |V| · |V| = |V|^2 times, and line 6) is executed |V| times. We therefore get that the worst-case running time of this algorithm is |V|^2 + 3|V|, which is O(|V|^2).
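A direct Python transcription of the pseudocode (representing AG as a nested list indexed from 0 is an assumption of this sketch) makes the quadratic behaviour visible: the inner loop touches every matrix entry, whether or not it corresponds to an edge.

def sum_of_degrees_matrix(AG):
    """Sum of degrees of a graph given as a |V| x |V| adjacency matrix AG."""
    n = len(AG)
    d = 0                      # line 1
    for v in range(n):         # line 2: executed |V| times
        dv = 0                 # line 3
        for u in range(n):     # line 4
            dv += AG[u][v]     # line 5: executed |V|^2 times in total
        d += dv                # line 6
    return d                   # line 7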

Graph given by adjacency list [1]. We are given for each vertex v ∈ V a list adj(v) of the neighbours of v. We have the following algorithm:

  1. d ← 0
  2. for v ∈ V do
  3. dv ← 0
  4. for u ∈ adj(v) do
  5. dv ← dv + 1
  6. d ← d + dv
  7. return d

[1] The algorithm written in class did not use dv; I introduced this variable here so that there are more instructions to look at.
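For contrast, here is a Python sketch of the adjacency-list algorithm (representing adj as a dict mapping each vertex to the list of its neighbours is an assumption of this sketch). The inner loop now runs once per edge rather than once per matrix entry, so the total work is proportional to |V| + |E|, which is exactly the kind of difference that careful instruction counting reveals.

def sum_of_degrees_list(adj):
    """Sum of degrees of a graph given as adjacency lists."""
    d = 0                      # line 1
    for v in adj:              # line 2: executed |V| times
        dv = 0                 # line 3
        for u in adj[v]:       # line 4: one iteration per neighbour of v
            dv += 1            # line 5: executed |E| times in total
        d += dv                # line 6
    return d                   # line 7

# Example: a directed triangle 1 -> 2 -> 3 -> 1 has degree sum 3.
print(sum_of_degrees_list({1: [2], 2: [3], 3: [1]}))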