Course Name: Computer Vision Lab
Course Code: CSP-422
Name: Sahil
UID: 20BCS2574
Experiment: 1.1
Aim:
Write a program to implement various feature extraction techniques for image classification.
Software Required:
Any Python IDE, e.g., PyCharm
Description:
Here is a concise description of the various feature extraction techniques for image classification and an outline of the experiment.

Feature Extraction Techniques for Image Classification:
• SIFT (Scale-Invariant Feature Transform): identifies keypoints and extracts local invariant descriptors; robust to scale, rotation, and illumination changes.
• SURF (Speeded-Up Robust Features): detects and describes local features; computationally efficient and suitable for real-time applications.
• HOG (Histogram of Oriented Gradients): computes the distribution of gradient orientations; effective for capturing shape and edge information, and useful in object detection.
• CNN (Convolutional Neural Networks): deep learning models that learn hierarchical features; they have revolutionized image classification and excel across many tasks.
• Color Histograms: capture the color distribution by quantizing pixel colors into bins; effective for certain image classification problems.
• LBP (Local Binary Patterns): encodes texture by comparing each pixel's intensity with its neighbors; useful for texture analysis and classification.
• Gabor Filters: capture localized frequency and orientation information using linear filters; applied in texture and recognition tasks.
• Deep Convolutional Features: extracted from the intermediate layers of pre-trained CNN models; retain high-level semantics and generalize well.
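
As a quick illustration of a few of these techniques, the sketch below computes HOG, LBP, color-histogram, and SIFT features for a single image using OpenCV and scikit-image. It is a minimal example rather than the graded implementation: the image path is a placeholder, and it assumes opencv-python (4.4 or newer, for SIFT) and scikit-image are installed.

import cv2
import numpy as np
from skimage.feature import hog, local_binary_pattern

# Load an image in grayscale and resize it to a fixed size (path is a placeholder).
image = cv2.imread("sample.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (128, 128))

# HOG: distribution of gradient orientations over local cells.
hog_features = hog(image, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), block_norm="L2-Hys")

# LBP: texture descriptor comparing each pixel with its 8 neighbors,
# summarized as a normalized histogram of the uniform patterns.
lbp = local_binary_pattern(image, P=8, R=1, method="uniform")
lbp_hist, _ = np.histogram(lbp.ravel(), bins=np.arange(0, 11), density=True)

# Color histogram: quantize the BGR color image into 8 bins per channel.
color_image = cv2.imread("sample.jpg")
color_hist = cv2.calcHist([color_image], [0, 1, 2], None, [8, 8, 8],
                          [0, 256, 0, 256, 0, 256]).flatten()

# SIFT: keypoints and local invariant descriptors.
sift = cv2.SIFT_create()
keypoints, sift_descriptors = sift.detectAndCompute(image, None)

print(hog_features.shape, lbp_hist.shape, color_hist.shape,
      None if sift_descriptors is None else sift_descriptors.shape)

HOG, LBP, and color histograms produce fixed-length vectors that can be fed directly to a classifier, while SIFT yields a variable number of keypoint descriptors that would typically be aggregated (e.g., with a bag-of-visual-words step) before classification.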

Pseudo code/Algorithms/Flowchart/Steps:

1. Import libraries such as OpenCV and scikit-image.
2. Load a labeled image dataset for training and testing.
3. Preprocess the images: resize, normalize, and apply transformations.
4. Extract features using techniques such as HOG, SIFT, SURF, LBP, CNN, etc.
5. Split the dataset into training and testing sets.
6. Train a classifier (e.g., SVM, Random Forest) using the extracted features and labels.
7. Evaluate classifier performance on the testing set using metrics such as accuracy, precision, recall, and F1-score.
8. Compare the performance of the feature extraction techniques by analyzing the evaluation results.
9. Experiment with various technique and classifier combinations, exploring their impact on classification performance.
10. Document observations and conclusions drawn from the experiment.
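
As an illustration of steps 1-8, here is a minimal end-to-end sketch using the small 8x8 digit images bundled with scikit-learn, HOG features, and an SVM classifier. The dataset, feature technique, and classifier are illustrative assumptions only, not the submitted implementation, and the sketch assumes scikit-learn and scikit-image are installed.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, classification_report
from skimage.feature import hog

# Steps 1-3: load a small labeled image dataset (8x8 digits) and normalize pixel values to [0, 1].
digits = load_digits()
images = digits.images / 16.0          # original pixel values range from 0 to 16
labels = digits.target

# Step 4: extract HOG features from every image (one technique from the list above).
features = np.array([
    hog(img, orientations=9, pixels_per_cell=(4, 4),
        cells_per_block=(1, 1), block_norm="L2-Hys")
    for img in images
])

# Step 5: split into training and testing sets.
X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.25, random_state=42, stratify=labels)

# Step 6: train an SVM classifier on the extracted features.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Steps 7-8: evaluate on the test set with accuracy, precision, recall, and F1-score.
y_pred = clf.predict(X_test)
print("Accuracy:", accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))

Swapping the HOG extraction for another technique (e.g., LBP histograms or color histograms) while keeping the rest of the pipeline fixed is one way to carry out the comparison described in steps 8 and 9.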

Implementation: