Graph Neural Networks for Visual Defect
Classification
Sazzad Hossen, Avimanyu Sahoo, and Huaxia Wang
Abstract
Traditional deep learning approaches for computer vision, such as Convolutional Neural Networks (CNNs), which are known to be effective for many classification applications, often underperform when tasked with comprehending irregular and intricate objects. The Graph Neural Network (GNN) has been found to be more efficient in such image classification tasks, as it employs a graph architecture for learning, where the graph nodes are segments of the image and the edges represent the relations among these segments. In our current research, we leveraged the capabilities of both GNNs and CNNs, along with Feed-Forward Network (FFN) layers, allowing them to collaboratively perceive an image as a graph for learning and classification. This integration offers a flexible and adaptable representation of the image. Initially, the image is segmented into multiple patches, which are used as the graph nodes. The edges, or connections among these nodes, are then established based on proximity. Our proposed model uses a CNN to extract image features, followed by graph convolution. This design enables efficient information aggregation and updates. An FFN module with dual linear layers is used for node feature transformation; the FFN layer is employed both before and after the graph convolution to mitigate the over-smoothing problem, a common issue in deep GNNs. Comprehensive testing for image recognition and object detection tasks, using open-source datasets, has been carried out. Our model employs a 6-layer graph convolution, GELU activation, batch normalization, a 0.3 dropout rate, and a 0.001 learning rate, achieving noteworthy reductions in training loss and improvements in accuracy. With a training accuracy of 98% and a test accuracy of 82% on this dataset, our model exhibits promising potential for classification. In this presentation, we will discuss the details of the deep GNN architecture and compare it with the state of the art. Additionally, we will cover the loss function, GNN training, and results. We envision that our exploration of graph-based approaches,
combined with traditional Convolutional Networks, will contribute significantly to the advancement of future research in computer vision.
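As a rough illustration of the patch-to-graph construction described above (image patches as graph nodes, edges established by spatial proximity), here is a minimal NumPy sketch. The patch size, the choice of k nearest neighbours, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def image_to_patch_graph(image, patch=8, k=4):
    """Split a square image into non-overlapping patches (graph nodes)
    and connect each patch to its k nearest neighbours by centre distance.
    Illustrative sketch only; patch size and k are assumed values."""
    h, w, c = image.shape
    rows, cols = h // patch, w // patch
    # Node features: flattened pixel values of each patch.
    nodes = np.stack([
        image[r*patch:(r+1)*patch, q*patch:(q+1)*patch].ravel()
        for r in range(rows) for q in range(cols)
    ])
    # Patch centres define proximity-based edges.
    centres = np.array([[(r + 0.5) * patch, (q + 0.5) * patch]
                        for r in range(rows) for q in range(cols)])
    n = len(centres)
    dists = np.linalg.norm(centres[:, None] - centres[None, :], axis=-1)
    adj = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbours, excluding the patch itself.
        for j in np.argsort(dists[i])[1:k+1]:
            adj[i, j] = adj[j, i] = 1.0   # undirected edge
    return nodes, adj

img = np.random.rand(32, 32, 3)
nodes, adj = image_to_patch_graph(img)
print(nodes.shape, adj.shape)  # (16, 192) (16, 16)
```

In a full model the raw patch pixels would be replaced by CNN feature vectors, but the node/edge layout is the same.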

1 Results

We trained our model on the Mini-ImageNet dataset using a 6-layer graph convolution, GELU activation, batch normalization, a 0.3 dropout rate, and a 0.001 learning rate, achieving a noteworthy reduction in training loss and improvement in accuracy.
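The per-block structure described in the abstract (an FFN before and after each graph convolution, with GELU activation) can be sketched in NumPy roughly as follows. The residual connections, weight shapes, and helper names are assumptions for illustration, not the authors' exact architecture.

```python
import numpy as np

def gelu(x):
    # tanh approximation of the GELU activation
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def normalize_adj(adj):
    """Symmetric normalization D^-1/2 (A + I) D^-1/2, as in standard GCNs."""
    a = adj + np.eye(len(adj))
    inv = np.diag(1.0 / np.sqrt(a.sum(1)))
    return inv @ a @ inv

def ffn(x, w1, w2):
    """Dual-linear FFN module: two linear layers with a GELU in between."""
    return gelu(x @ w1) @ w2

def grapher_block(x, adj, w1a, w2a, wg, w1b, w2b):
    """FFN -> graph convolution -> FFN, with residual connections,
    a common recipe for counteracting over-smoothing in deep GNNs."""
    x = x + ffn(x, w1a, w2a)                    # FFN before graph conv
    x = x + gelu(normalize_adj(adj) @ x @ wg)   # graph convolution step
    x = x + ffn(x, w1b, w2b)                    # FFN after graph conv
    return x

rng = np.random.default_rng(0)
n, d = 16, 32                                   # 16 patch nodes, 32-dim features
x = rng.normal(size=(n, d))
adj = (rng.random((n, n)) < 0.2).astype(float)
adj = np.maximum(adj, adj.T)                    # make the graph undirected
ws = [rng.normal(size=(d, d)) * 0.1 for _ in range(5)]
out = grapher_block(x, adj, *ws)
print(out.shape)  # (16, 32)
```

Stacking six such blocks, with batch normalization and dropout between them, would correspond to the 6-layer configuration stated above.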

The training loss and accuracy curves are shown in Figure 1.

Figure 1: Training on the Mini-ImageNet dataset

Comparison of accuracy and F1 score of our model (Vision GNN), PNasNet-5, and ResNet-152:

Metric      PNasNet-5   ResNet-152   Vision GNN
Accuracy    82.8%       79.5%        85%
F1 score    81.5%       78.8%        84.6%

Table 1: Testing results comparison between PNasNet-5, ResNet-152, and Our Model
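For reference, the accuracy and F1 metrics reported in Table 1 can be computed as below. Macro averaging over classes is an assumption here, since the text does not state which F1 variant was reported.

```python
import numpy as np

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the ground-truth labels."""
    return float(np.mean(y_true == y_pred))

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores (macro averaging assumed)."""
    scores = []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(scores))

# Toy example with three classes.
y_true = np.array([0, 0, 1, 1, 2, 2])
y_pred = np.array([0, 1, 1, 1, 2, 0])
print(accuracy(y_true, y_pred))  # 4 of 6 predictions correct
print(macro_f1(y_true, y_pred))
```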