
Instructor: Yue Dong
Office hour: Thursday 8 am - 9:30 am, MRB 4135 (or right after each class)

CS 222: Natural Language Processing (NLP)
8-2: VLMs
Spring 2025

Slides modified from CMU 10-423/10-623 Generative AI & MIT EfficientML.ai

Vision Language Model

VLM intuition:

  • A standard text-only transformer takes input text (like "How to feed a pig efficiently? …") and converts it into a sequence of tokens.
  • [182142, 5123, 99817, 52321, 477, 325, …]
  • The transformer accepts its input as a sequence of tokens (see the tokenizer sketch below).
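To make the tokenization step concrete, here is a minimal sketch using a Hugging Face tokenizer; the GPT-2 checkpoint is an assumption for illustration (the lecture's model is not specified), so the printed IDs will differ from the slide's example.

```python
from transformers import AutoTokenizer

# Assumed tokenizer for illustration; the lecture's model is not specified.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

text = "How to feed a pig efficiently?"
token_ids = tokenizer.encode(text)  # text -> sequence of integer token IDs

print(token_ids)                                   # integer IDs (tokenizer-specific)
print(tokenizer.convert_ids_to_tokens(token_ids))  # the underlying subword pieces
```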

VLM intuition:

  • VLM input:
    • Here is an image <|image_1|>, tell me what is in this image.
  • The VLM encoder converts the special token <|image_1|> into a sequence of tokens that the transformer accepts as input (see the splicing sketch below).
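A minimal PyTorch sketch of that splicing step, with hypothetical shapes (the hidden size and token counts below are illustrative, not a specific model's): the placeholder's position is replaced by the image encoder's patch embeddings, yielding one embedding sequence for the transformer.

```python
import torch

d_model = 768  # assumed hidden size (ViT-Base style); varies by model

# Hypothetical embeddings for illustration:
text_before = torch.randn(4, d_model)   # "Here is an image"
image_tokens = torch.randn(9, d_model)  # encoder output for <|image_1|>
text_after = torch.randn(8, d_model)    # ", tell me what is in this image."

# Splice the image embeddings where the placeholder token sat, producing
# one sequence the transformer can consume like ordinary token embeddings.
inputs = torch.cat([text_before, image_tokens, text_after], dim=0)
print(inputs.shape)  # torch.Size([21, 768])
```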

VLM Encoder

  • Roughly speaking, there are two types of VLM encoders:

    a. CLIP-based VLM encoder (used in GPT-4V)
    b. VQ-VAE-based VLM encoder
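As a sketch of option (a), Hugging Face's CLIPVisionModel exposes CLIP's vision tower, whose per-patch hidden states can serve as image tokens; the checkpoint name below is an assumption for illustration.

```python
import torch
from transformers import CLIPVisionModel

# Assumed checkpoint for illustration; any CLIP vision tower works similarly.
vision = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch32")

pixels = torch.randn(1, 3, 224, 224)  # a preprocessed 224x224 RGB image
out = vision(pixel_values=pixels)
patch_tokens = out.last_hidden_state  # (1, 50, 768): 1 CLS token + 49 patches
print(patch_tokens.shape)
```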

VLM - ViT - CLIP

Convert 2D Images to a Sequence of Patches

  • Convert the 2D image to a sequence of patches

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [Dosovitskiy et al., 2021]

Convert 2D Images to a Sequence of Patches

  • Convert the 2D image to a sequence of patches
  • Each patch is a token

Image size: 96x96; Patch size: 32x32
Number of tokens: 3x3 = 9; Dimension of each token: 3x32x32 = 3072

Practical Implementation

  • Convert the 2D image to a sequence of patches: a 32x32 Conv, stride 32, padding 0, with in_channels=3, out_channels=768
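A minimal PyTorch sketch of this patchify step exactly as specified on the slide: the strided convolution maps each non-overlapping 32x32 patch to one 768-dimensional token.

```python
import torch
import torch.nn as nn

# One 32x32 conv with stride 32 and no padding: each non-overlapping
# 32x32x3 patch (3072 raw values) is projected to a 768-dim token.
patch_embed = nn.Conv2d(in_channels=3, out_channels=768,
                        kernel_size=32, stride=32, padding=0)

img = torch.randn(1, 3, 96, 96)           # one 96x96 RGB image
feat = patch_embed(img)                   # (1, 768, 3, 3)
tokens = feat.flatten(2).transpose(1, 2)  # (1, 9, 768): 3x3 = 9 patch tokens
print(tokens.shape)
```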

Apply the Standard Transformer Encoder

  • Convert the 2D image to a sequence of patches
  • Feed the patch embeddings to the standard transformer encoder (a sketch follows below)

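A minimal sketch of this step, assuming an illustrative ViT-Base-like configuration (12 layers, 12 heads; not necessarily the lecture's exact setup), feeding the 9 patch tokens through a standard transformer encoder:

```python
import torch
import torch.nn as nn

# Illustrative encoder config (ViT-Base-like): 12 layers, 12 heads, d=768.
layer = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=12)

tokens = torch.randn(1, 9, 768)  # the 9 patch embeddings from the conv above
out = encoder(tokens)            # contextualized tokens, same shape (1, 9, 768)
print(out.shape)
```

Note that a full ViT also prepends a learnable [CLS] token and adds positional embeddings to the patch tokens before the encoder; those details are omitted here for brevity.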

Image Classification Results

  • Inferior to CNNs when the dataset size is limited
  • Surpasses CNNs when pre-training on a large dataset

[Figures: ViT vs. CNN classification results at small and large pre-training dataset sizes]

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale [Dosovitskiy et al., 2021]

Motivation

  • ViT needs large datasets to work well.
  • Labeling large datasets is costly.

Image credit: https://web.cs.ucdavis.edu/~hpirsiav/papers/transfer_cvpr18.pdf

Solution: training with unlabeled datasets