GHARDA FOUNDATION’S
GHARDA INSTITUTE OF TECHNOLOGY
A/P:-LAVEL, TALUKA: KHED, DIST. RATNAGIRI, STATE:MAHARASTRA,
LABORATORY
MANUAL
Department
Of
Computer Engineering
Analysis of Algorithms
Semester :- IV
Prepared By
Prof. K.M.Gajmal


University of Mumbai

Class: S.E.    Branch: Computer Engineering    Semester: IV
Subject: Analysis of Algorithms (Abbreviated as AOA)

Periods per Week (each 60 min)
    Lecture      05
    Practical    02
    Tutorial     --

Evaluation System            Hours    Marks
    Theory                    03       80
    Practical and Oral        02       25
    Oral                      --       --
    Term Work                 --       25
    Total                     05      150


Experiment No.

Title: To analyze and implement the insertion sort algorithm.

Theory:

Insertion sort belongs to the family of O(n^2) sorting algorithms. Unlike many other sorting algorithms with quadratic complexity, it is actually used in practice for sorting small arrays of data; for instance, it is used to improve the quicksort routine. Some sources also note that people use the same algorithm when ordering items by hand, for example a hand of cards.

Insertion sort somewhat resembles selection sort. The array is imagined as divided into two parts: a sorted part and an unsorted part. At the beginning, the sorted part contains the first element of the array and the unsorted part contains the rest. At every step, the algorithm takes the first element of the unsorted part and inserts it into the right place in the sorted part. When the unsorted part becomes empty, the algorithm stops.

Algorithm:

INSERTION-SORT(A[0..n-1])
    for i = 1 to n-1
        key ← A[i]
        j ← i - 1
        while j >= 0 and A[j] > key
            A[j+1] ← A[j]
            j ← j - 1
        end while
        A[j+1] ← key
    end for
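A minimal C sketch of this procedure (the function name and the sample values in main are only illustrative):

#include <stdio.h>

/* Sort a[0..n-1] in ascending order using insertion sort. */
void insertion_sort(int a[], int n)
{
    for (int i = 1; i < n; i++) {
        int key = a[i];          /* element to insert into the sorted prefix a[0..i-1] */
        int j = i - 1;
        while (j >= 0 && a[j] > key) {
            a[j + 1] = a[j];     /* shift larger elements one position to the right */
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void)
{
    int a[] = {5, 2, 4, 6, 1, 3};            /* sample data (illustrative) */
    int n = sizeof a / sizeof a[0];
    insertion_sort(a, n);
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);                 /* prints 1 2 3 4 5 6 */
    printf("\n");
    return 0;
}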

Complexity analysis

Worst Case/Average Case: Insertion sort's overall complexity is O(n^2), regardless of the method of insertion.

Best Case: On almost-sorted arrays insertion sort performs better, down to O(n) when it is applied to an already sorted array: n-1 comparisons are made but no elements are moved, so the complexity is O(n).

Space Complexity: O(1)

Insertion sort properties

- adaptive (performance adapts to the initial order of elements);
- stable (insertion sort retains the relative order of equal elements);
- in-place (requires a constant amount of additional space);
- online (new elements can be added during the sort).

Experiment No.

Title: Implementation and analysis of Binary Search Algorithm.

Theory:

A binary search algorithm is a technique for finding a particular value in a sorted linear array by ruling out half of the remaining data at each step; it is widely, but not exclusively, used in computer science.

A binary search examines the middle element, makes a comparison to determine whether the desired value comes before or after it, and then searches the remaining half in the same manner. Another way to put it: search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half; otherwise, narrow it to the upper half.

Algorithm:

Binsearch(a, l, h, x)
{
    if (l == h)                       // only one element left in the interval
    {
        if (x == a[l]) return l
        else return 0
    }
    else
    {
        mid = (l + h) / 2
        if (x == a[mid]) then return mid
        else if (x < a[mid]) then return Binsearch(a, l, mid-1, x)
        else return Binsearch(a, mid+1, h, x)
    }
}
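A minimal C sketch of this recursive search; it returns -1 rather than 0 when the key is absent (since C arrays are 0-based), and the function name and sample array in main are only illustrative:

#include <stdio.h>

/* Recursive binary search: returns the index of x in a[l..h], or -1 if absent.
   The array must already be sorted in ascending order. */
int binsearch(int a[], int l, int h, int x)
{
    if (l > h)
        return -1;                            /* empty interval: x is not present */
    int mid = (l + h) / 2;
    if (x == a[mid])
        return mid;
    else if (x < a[mid])
        return binsearch(a, l, mid - 1, x);   /* search the lower half */
    else
        return binsearch(a, mid + 1, h, x);   /* search the upper half */
}

int main(void)
{
    int a[] = {3, 7, 11, 15, 22, 29, 40};     /* sample sorted data (illustrative) */
    int n = sizeof a / sizeof a[0];
    printf("22 found at index %d\n", binsearch(a, 0, n - 1, 22));
    return 0;
}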

Complexity Analysis:

Each comparison halves the search interval, so an array of n elements needs at most about log2(n) + 1 comparisons. Best case: O(1), when the key is found at the middle on the first probe. Worst/average case: O(log n). Space complexity: O(1) for an iterative version, O(log n) stack space for the recursive version.

Experiment No.

Title: Implementation and analysis of Single Source Shortest Path using Dynamic Programming

Theory:

This is the Bellman-Ford algorithm: given a graph and a source vertex src in the graph, we have to find the shortest paths from src to all vertices in the graph. The graph may contain negative-weight edges.

Algorithm

Algorithm BellmanFord(v, cost, dist, n)
{
    for i := 1 to n do
        dist[i] := cost[v, i];              // initialise with the direct edges from the source
    for k := 2 to n-1 do                    // at most n-1 relaxation passes
        for each u such that u <> v and u has at least one incoming edge do
            for each edge <i, u> in the graph do
                if dist[u] > dist[i] + cost[i, u] then
                    dist[u] := dist[i] + cost[i, u]
}
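A minimal C sketch using an edge-list representation; the function name is mine, and the sample graph is the one from the output shown further below, renumbered with vertices 0..3:

#include <stdio.h>

#define INF 99999

struct Edge { int u, v, w; };

/* Bellman-Ford: relax every edge |V|-1 times; dist[src] starts at 0. */
void bellman_ford(struct Edge e[], int nedges, int nvert, int src, int dist[])
{
    for (int i = 0; i < nvert; i++)
        dist[i] = INF;
    dist[src] = 0;
    for (int k = 1; k <= nvert - 1; k++)          /* n-1 relaxation passes */
        for (int j = 0; j < nedges; j++)
            if (dist[e[j].u] != INF && dist[e[j].u] + e[j].w < dist[e[j].v])
                dist[e[j].v] = dist[e[j].u] + e[j].w;
}

int main(void)
{
    /* edges of the sample graph (0-indexed): 0->1 (4), 0->3 (5), 2->1 (-10), 3->2 (3) */
    struct Edge e[] = {{0, 1, 4}, {0, 3, 5}, {2, 1, -10}, {3, 2, 3}};
    int dist[4];
    bellman_ford(e, 4, 4, 0, dist);
    for (int i = 0; i < 4; i++)
        printf("Vertex %d -> cost = %d\n", i, dist[i]);   /* costs 0, -2, 8, 5 */
    return 0;
}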

Complexity Analysis:

The algorithm performs |E| relaxations in each of the |V|-1 passes, i.e. about |E|*(|V|-1) edge relaxations in total.

Best case (sparse graph, |E| ≈ n): O((|V|-1)*|E|) = O((n-1)*n) = O(n^2).

Worst case (complete graph, |E| = n(n-1)/2): O(|V|*|E|) = (n-1)*n(n-1)/2 = O(n^3).

Output

BELLMAN FORD

Enter no. of vertices: 4
Enter graph in matrix form:
0 4 0 5
0 0 0 0
0 -10 0 0
0 0 3 0
Enter source: 1
Vertex 1 -> cost = 0    parent = 0
Vertex 2 -> cost = -2   parent = 3
Vertex 3 -> cost = 8    parent = 4
Vertex 4 -> cost = 5    parent = 1

No negative weight cycle

Experiment No.

Title: Write a program to find the Longest Common Subsequence of two given sequences.

Theory:

A Longest Common Subsequence (LCS) is a common subsequence of maximum length. In the longest common subsequence problem we are given two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> and wish to find a maximum-length common subsequence of X and Y.

Computing the length of an LCS:

Procedure LCS-LENGTH takes two sequences X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn> as inputs. It stores the c[i, j] values in a table c[0..m, 0..n] whose entries are computed in row-major order, i.e. the first row of c is filled in from left to right, then the second row, and so on. It also maintains a table b[1..m, 1..n] to simplify the construction of an optimal solution: b[i, j] points to the table entry corresponding to the optimal subproblem solution chosen while computing c[i, j]. The procedure returns the b and c tables; c[m, n] contains the length of an LCS of X and Y.

Algorithm LCS(A[0..m-1], B[0..n-1])
{
    // entries with index -1 are taken as 0
    for i := 0 to m-1
        for j := 0 to n-1
            if (A[i] = B[j])
                LCS[i, j] := 1 + LCS[i-1, j-1]
            else
                LCS[i, j] := max(LCS[i-1, j], LCS[i, j-1])
}
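A minimal C sketch of the bottom-up length computation; the b table of arrows is omitted here, and the function name and input strings are only illustrative:

#include <stdio.h>
#include <string.h>

#define MAXN 100

static int max(int a, int b) { return a > b ? a : b; }

/* Length of the Longest Common Subsequence of X and Y, filled in row-major order. */
int lcs_length(const char *X, const char *Y)
{
    int m = strlen(X), n = strlen(Y);
    int c[MAXN + 1][MAXN + 1];
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                c[i][j] = 0;                         /* empty prefix contributes nothing */
            else if (X[i - 1] == Y[j - 1])
                c[i][j] = 1 + c[i - 1][j - 1];       /* matching characters extend the LCS */
            else
                c[i][j] = max(c[i - 1][j], c[i][j - 1]);
        }
    return c[m][n];
}

int main(void)
{
    /* sample strings (illustrative); their LCS length is 4 */
    printf("LCS length = %d\n", lcs_length("ABCBDAB", "BDCABA"));
    return 0;
}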

Constructing an LCS: Bottom-up approach

The b table returned by LCS-LENGTH can be used to quickly construct an LCS of X = <x1, x2, ..., xm> and Y = <y1, y2, ..., yn>. We begin at b[m, n] and trace through the table following the arrows. Whenever we encounter a diagonal arrow "↖" in entry b[i, j], it implies that xi = yj is an element of the LCS. The elements of the LCS are encountered in reverse order by this method.

Complexity: O(mn), where m = length of the first string and n = length of the second string.

Experiment No.

Title: Implementation and analysis of the Knapsack Problem using the greedy approach.

Theory:

Problem Definition:

We are given an empty knapsack of capacity W and n different objects i = 1, 2, ..., n. Each object i has a positive weight wi and an associated profit value pi. We want to fill the knapsack up to total capacity W such that the profit earned is maximum. When we solve this problem the main goals are:

1. Choose only those objects that give maximum profit.
2. The total weight of the selected objects should be <= W.

Many problems have n inputs and require us to obtain a subset that satisfies some constraints; such a subset is called a feasible solution. We are required to find a feasible solution that optimizes (minimizes or maximizes) a given objective function. The feasible solution that does this is called an optimal solution.

Algorithm :

1. Let W be the maximum weight capacity of the knapsack.
2. Let wi and pi be the weight and profit of the individual items, for i = 1, ..., n.
3. Calculate the pi/wi ratio of every item and arrange the items in decreasing order of this ratio.
4. Initially weight = 0 and profit = 0.
5. for i = 1 to n { add item i to the knapsack while weight <= W; profit = profit + pi }
6. Stop.

Time Complexity

As the main time-consuming step is sorting, the whole problem can be solved in O(n log n) time.

Algorithm

Algorithm: Greedy-Fractional-Knapsack(w[1..n], p[1..n], W)
    // items are assumed to be sorted by p[i]/w[i] in decreasing order
    for i = 1 to n do
        x[i] = 0
    weight = 0
    for i = 1 to n do
        if weight + w[i] <= W then
            x[i] = 1
            weight = weight + w[i]
        else
            x[i] = (W - weight) / w[i]
            weight = W
            break
    return x
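A minimal C sketch of the greedy loop; it assumes the items are already sorted by profit/weight ratio in decreasing order, and the function name and the data in main (which mirror the sample run below, after sorting) are only illustrative:

#include <stdio.h>

/* Greedy fractional knapsack: items must already be sorted by p[i]/w[i]
   in decreasing order; returns the maximum achievable profit. */
double fractional_knapsack(double p[], double w[], int n, double W)
{
    double profit = 0.0, weight = 0.0;
    for (int i = 0; i < n; i++) {
        if (weight + w[i] <= W) {                 /* take the whole item */
            weight += w[i];
            profit += p[i];
        } else {                                  /* take only the fraction that fits */
            profit += p[i] * (W - weight) / w[i];
            break;
        }
    }
    return profit;
}

int main(void)
{
    /* items already sorted by profit/weight ratio (illustrative data) */
    double p[] = {6, 10, 18, 15, 3, 5, 7};
    double w[] = {1, 2, 4, 5, 1, 3, 7};
    printf("Maximum profit = %.2f\n", fractional_knapsack(p, w, 7, 15.0));  /* 55.33 */
    return 0;
}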

Program:

Output:

Enter no of objects: 7
Enter profits of each object: 10 5 15 7 6 18 3
Enter weights of each object: 2 3 5 7 1 4 1
Enter capacity of knapsack: 15

Ratios of profit to weight are:
Ratio[1] = 5.00   Ratio[2] = 1.67   Ratio[3] = 3.00   Ratio[4] = 1.00   Ratio[5] = 6.00   Ratio[6] = 4.50   Ratio[7] = 3.00
Maximum profit is: 55.33

So, careful analysis suggests that in order to minimize the MRT (mean retrieval time), programs of greater length should be placed towards the end of the tape so that the summation is reduced; in other words, the programs should be stored in increasing order of length. That is the greedy algorithm in use.

The time complexity is the complexity of sorting the lengths, which is O(n log n). If we sort with bubble sort the complexity can be O(n^2).

Algorithm

MRT(L[], n)
{
    Sort the array of lengths L in increasing order
    // print the storage order
    for i = 0 to n-1
        print L[i]

    // calculate MRT
    mrt = 0
    sum = 0              // running sum of the lengths read so far
    for i = 0 to n-1
    {
        sum = sum + L[i]
        mrt = mrt + sum
    }
    mrt = mrt / n
    print mrt
}
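A minimal C sketch of this procedure; the function names are mine and the program lengths in main mirror the sample run below:

#include <stdio.h>
#include <stdlib.h>

/* qsort comparator for ascending integer order */
static int cmp_int(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;
}

/* Mean retrieval time of programs stored on the tape in the given order. */
double mrt(int l[], int n)
{
    double sum = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        sum += l[i];            /* time to read up to and including program i */
        total += sum;
    }
    return total / n;
}

int main(void)
{
    int l[] = {3, 2, 8, 5};                        /* program lengths (illustrative) */
    int n = sizeof l / sizeof l[0];
    qsort(l, n, sizeof l[0], cmp_int);             /* greedy: shortest programs first */
    for (int i = 0; i < n; i++)
        printf("program %d length %d\n", i + 1, l[i]);
    printf("MRT of tape is %.2f\n", mrt(l, n));    /* 8.75 for this data */
    return 0;
}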

Program:

Output

Enter no of programs 4
Enter length of program 1 3
Enter length of program 2 2
Enter length of program 3 8
Enter length of program 4 5
program 1 length 2
program 2 length 3
program 3 length 5
program 4 length 8
MRT of tape is 8.75

Experiment No.

Title: Implementation and analysis of the Sum of Subsets problem

Theory:

The subset-sum problem is to find a subset of a set of integers that sums to a given value. The decision problem of finding out if such a subset exists is NP-complete. One way of solving the problem is to use backtracking.

Backtracking

In a backtracking algorithm, as we go down along the depth of the tree we add elements to the partial subset, and as long as the running sum satisfies the explicit constraints we continue to generate child nodes. Whenever the constraints are not met, we stop further generation of the sub-trees of that node and backtrack to the previous node to explore the nodes not yet explored. We need to explore the nodes along both the breadth and the depth of the tree: generating nodes along the breadth is controlled by a loop, and nodes along the depth are generated using recursion (a post-order traversal).

Steps:

1. Start with an empty set.
2. Add the next element from the list to the set.
3. If the subset has sum M, then stop with that subset as the solution.
4. If the subset is not feasible, or if we have reached the end of the set, then backtrack through the subset until we find the most suitable value.
5. If the subset is feasible (sum of subset < M), then go to step 2.
6. If we have visited all the elements without finding a suitable subset and no backtracking is possible, then stop without a solution.

Algorithm

sumOfSub(s, k, r)
// Find all subsets of w[1..n] that sum to m. The values of x[j], 1 <= j < k,
// have already been determined. On entry,
//     s = sum of w[j]*x[j] for j = 1 to k-1   and   r = sum of w[j] for j = k to n.
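The recursive body is omitted above; a possible C sketch of the classic sumOfSub backtracking, assuming the weights w[] are given in nondecreasing order (the weights and target sum m below are illustrative), is:

#include <stdio.h>

#define N 6

int w[N + 1] = {0, 5, 10, 12, 13, 15, 18};   /* weights w[1..N], nondecreasing (illustrative) */
int x[N + 1];                                /* x[j] = 1 if w[j] is in the current subset */
int m = 30;                                  /* target sum (illustrative) */

/* s = sum of w[j]*x[j] for j < k,  r = sum of w[j] for j >= k (remaining total). */
void sum_of_sub(int s, int k, int r)
{
    x[k] = 1;                                /* include w[k] */
    if (s + w[k] == m) {                     /* found a subset that sums to m */
        for (int j = 1; j <= k; j++)
            if (x[j]) printf("%d ", w[j]);
        printf("\n");
    } else if (k < N && s + w[k] + w[k + 1] <= m) {
        sum_of_sub(s + w[k], k + 1, r - w[k]);
    }
    /* try excluding w[k] only if the remaining weights can still reach m */
    if (k < N && s + r - w[k] >= m && s + w[k + 1] <= m) {
        x[k] = 0;
        sum_of_sub(s, k + 1, r - w[k]);
    }
}

int main(void)
{
    int r = 0;
    for (int j = 1; j <= N; j++) r += w[j];
    sum_of_sub(0, 1, r);                     /* prints: 5 10 15, 5 12 13, 12 18 */
    return 0;
}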

Experiment No.

Title: Implementation and analysis of the Graph Coloring problem with backtracking in C

Theory:

A graph coloring is an assignment of labels, called colors, to the vertices of a graph such that no two adjacent vertices share the same color. The chromatic number of a graph G is the minimal number of colors for which such an assignment is possible. Other types of colorings on graphs also exist, most notably edge colorings, which may be subject to various constraints.

Applications of Graph Coloring

Some applications of graph coloring include −

- Register allocation
- Map coloring
- Bipartite graph checking
- Mobile radio frequency assignment
- Making timetables, etc.

Algorithm

The steps required to color a graph G with n vertices are as follows −

Step 1 − Arrange the vertices of the graph in some order.

Step 2 − Choose the first vertex and color it with the first color.

Step 3 − Choose the next vertex and color it with the lowest-numbered color that has not been used on any vertex adjacent to it. If every previously used color appears on an adjacent vertex, assign a new color to it. Repeat this step until all the vertices are colored.
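A minimal C sketch of the sequential (greedy) coloring described by these steps (the experiment title mentions backtracking, but the steps above are the greedy procedure); the function name is mine, the adjacency matrix encodes the edges of the sample run shown further below, and colors are numbered from 0 internally but printed 1-based:

#include <stdio.h>

#define V 4

/* Visit vertices in index order and give each the lowest-numbered color
   not already used by a colored neighbour. */
void greedy_coloring(int adj[V][V], int color[V])
{
    int used[V];                        /* used[c] = 1 if color c appears on a neighbour */
    for (int v = 0; v < V; v++)
        color[v] = -1;                  /* -1 means "not yet colored" */
    color[0] = 0;                       /* first vertex gets the first color */
    for (int v = 1; v < V; v++) {
        for (int c = 0; c < V; c++) used[c] = 0;
        for (int u = 0; u < V; u++)
            if (adj[v][u] && color[u] != -1)
                used[color[u]] = 1;
        int c = 0;
        while (used[c]) c++;            /* smallest color not used by a neighbour */
        color[v] = c;
    }
}

int main(void)
{
    /* adjacency matrix for the edges 0-1, 1-2, 1-3, 2-3, 3-0 (illustrative) */
    int adj[V][V] = {
        {0, 1, 0, 1},
        {1, 0, 1, 1},
        {0, 1, 0, 1},
        {1, 1, 1, 0}
    };
    int color[V];
    greedy_coloring(adj, color);
    for (int v = 0; v < V; v++)
        printf("Vertex[%d] : %d\n", v + 1, color[v] + 1);   /* prints 1, 2, 1, 3 */
    return 0;
}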

Complexity

O(V^2 + E) in the worst case, where V is the number of vertices and E is the number of edges.


OUTPUT

Enter no. of vertices : 4
Enter no. of edges : 5
Enter indexes where value is 1 -->
0 1
1 2
1 3
2 3
3 0
Colored vertices of Graph G
Colors of vertices -->
Vertex[1] : 1
Vertex[2] : 2
Vertex[3] : 1
Vertex[4] : 3

txt[] = "AABCCAADDEE"; pat[] = "FAA"; The number of comparisons in best case is O(n).

Worst case: the worst case of naive pattern searching occurs in the following scenarios.

1. When all characters of the text and the pattern are the same:
   txt[] = "AAAAAAAAAAAAAAAAAA"; pat[] = "AAAAA";
2. When only the last character is different:
   txt[] = "AAAAAAAAAAAAAAAAAB"; pat[] = "AAAAB";

The number of comparisons in the worst case is O(m*(n-m+1)). Although strings with such long runs of a repeated character are unlikely to appear in English text, they may well occur in other applications (for example, in binary data).

Program:

Output:
Pattern found at index 0
Pattern found at index 9
Pattern found at index 13

Experiment No.

Title: Implementation and analysis of Rabin-Karp for pattern searching

Theory:

Rabin-Karp is another pattern searching algorithm. It is a string matching algorithm proposed by Rabin and Karp to find the pattern in a more efficient way. Like the naive algorithm, it also moves the window over the text one position at a time, but instead of checking all characters at every position it first computes a hash value; only when the hash values match does it proceed to check each character. In this way, there is only one character-by-character comparison per candidate text window, making it a more efficient algorithm for pattern searching.

Given a text txt[0..n-1] and a pattern pat[0..m-1], write a function search(char pat[], char txt[]) that prints all occurrences of pat[] in txt[]. You may assume that n > m.

Algorithm

rabinkarp_algo(text, pattern, prime)

Input − the main text and the pattern, plus a prime number used to compute the hash values.

Output − the locations where the pattern is found.

Start
    pat_len := pattern length
    str_len := text length
    patHash := 0, strHash := 0, h := 1
    maxChar := total number of characters in the character set

    for i := 1 to pat_len - 1 do                 // h = maxChar^(pat_len-1) mod prime
        h := (h * maxChar) mod prime

    for each character index i of the pattern do // hash of the pattern and the first window
        patHash := (maxChar * patHash + pattern[i]) mod prime
        strHash := (maxChar * strHash + text[i]) mod prime

    for i := 0 to (str_len - pat_len) do
        if patHash = strHash then                // possible match: verify character by character
            for charIndex := 0 to pat_len - 1 do
                if text[i + charIndex] ≠ pattern[charIndex] then break
            if charIndex = pat_len then
                print the location i as pattern found at position i
        if i < (str_len - pat_len) then          // roll the hash to the next window
            strHash := (maxChar * (strHash - text[i] * h) + text[i + pat_len]) mod prime
            if strHash < 0 then
                strHash := strHash + prime
End
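A minimal C sketch of this rolling-hash search; the function name is mine, and the text, pattern and prime in main are illustrative assumptions:

#include <stdio.h>
#include <string.h>

#define D 256        /* size of the input character set */

/* Rabin-Karp search: compare rolling hashes first, verify characters only on a hash hit. */
void rabin_karp(const char *txt, const char *pat, int prime)
{
    int n = strlen(txt), m = strlen(pat);
    int h = 1, p_hash = 0, t_hash = 0;

    for (int i = 0; i < m - 1; i++)              /* h = D^(m-1) mod prime */
        h = (h * D) % prime;
    for (int i = 0; i < m; i++) {                /* hash of the pattern and the first window */
        p_hash = (D * p_hash + pat[i]) % prime;
        t_hash = (D * t_hash + txt[i]) % prime;
    }
    for (int i = 0; i <= n - m; i++) {
        if (p_hash == t_hash) {                  /* possible match: verify character by character */
            int j = 0;
            while (j < m && txt[i + j] == pat[j]) j++;
            if (j == m)
                printf("Pattern found at index %d\n", i);
        }
        if (i < n - m) {                         /* roll the hash to the next window */
            t_hash = (D * (t_hash - txt[i] * h) + txt[i + m]) % prime;
            if (t_hash < 0)
                t_hash += prime;
        }
    }
}

int main(void)
{
    rabin_karp("GEEKS FOR GEEKS", "GEEK", 101);  /* sample strings and prime (illustrative) */
    return 0;
}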