An overview of data compression, focusing on lossy and lossless techniques. It discusses the importance of data compression in various applications, such as communications and multimedia. The document also introduces the concepts of modeling and coding, including markov models and composite source models. It is a valuable resource for students and professionals in computer science, engineering, and related fields.
Typology: Lecture notes
1. A
10. B
11. Lossy and Lossless
12. Compact
13. Integrity
14. Information
15. Shorter
16. WinZip
17. JPEG
18. $H = -\sum_{A} P(A) \log P(A)$
19. Compression ratio $= B_0 / B_1$, where $B_0$ and $B_1$ are the number of bits before and after compression
20. Repeated, 1
21. 4 bits
23. Suppose we have an event $A$, which is a set of outcomes of some random experiment. If $P(A)$ is the probability that the event $A$ will occur, then the self-information associated with $A$ is given by $i(A) = -\log_b P(A)$.
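As a quick sanity check on the entropy and self-information formulas above, here is a minimal Python sketch (the function names are my own, not from the notes):

```python
import math

def self_information(p):
    """Self-information of an event A with probability p: i(A) = -log2 P(A), in bits."""
    return -math.log2(p)

def entropy(probs):
    """First-order entropy H = -sum P(a) * log2 P(a) over an alphabet's probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Rare events carry more information than likely ones.
print(self_information(0.5))    # -> 1.0 bit
print(self_information(0.125))  # -> 3.0 bits

# Uniform 4-letter alphabet: maximum entropy, log2(4) = 2 bits/symbol.
print(entropy([0.25, 0.25, 0.25, 0.25]))     # -> 2.0
# A skewed distribution over the same alphabet has lower entropy.
print(entropy([0.7, 0.1, 0.1, 0.1]) < 2.0)   # -> True
```

Note that the uniform distribution maximizes entropy; any skew toward some symbols reduces it, which is what compression schemes exploit.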
Introduction to Information Theory
For example, in speech-related applications, knowledge about the physics of speech production can be used to construct a mathematical model for the sampled speech process, and sampled speech can then be encoded using this model.
Real-life application: residential electrical meter readings.
For a source that generates letters from an alphabet $A = \{a_1, a_2, \ldots, a_M\}$ we can have a probability model $P = \{P(a_1), P(a_2), \ldots, P(a_M)\}$.
Most compression schemes take advantage of the fact that data contains a lot of repetition.
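To illustrate how repetition can be exploited, here is a minimal run-length encoding sketch (a simple scheme that replaces each run of a repeated symbol with one symbol-count pair; the details are my own illustration, not from the notes):

```python
def run_length_encode(s):
    """Collapse each run of a repeated symbol into a (symbol, run-length) pair."""
    runs = []
    for ch in s:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1   # extend the current run
        else:
            runs.append([ch, 1])  # start a new run
    return [(ch, n) for ch, n in runs]

print(run_length_encode("aaabbbbc"))  # -> [('a', 3), ('b', 4), ('c', 1)]
```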
Markov Models: Markov models are particularly useful in text compression, where the probability of the next letter is heavily influenced by the preceding letters. In the current text compression literature, $K^{th}$-order Markov models are more widely known as finite context models, with the word context being used for what we have earlier defined as state. Consider the word 'preceding'. Suppose we have already processed 'precedin' and are going to encode the next letter. If we take no account of the context and treat each letter as a surprise, the probability of the letter 'g' occurring is relatively low. If we use a first-order Markov model, or single-letter context, we can see that the probability of 'g' would increase substantially. As we increase the context size (going from n to in to din and so on), the probability of the alphabet becomes more and more skewed, which results in lower entropy.
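The effect of conditioning on context can be demonstrated numerically. The sketch below (sample text and function names are my own) compares the zero-order entropy of a string with the entropy of the distribution conditioned on a single-letter context:

```python
import math
from collections import Counter, defaultdict

def entropy(probs):
    """First-order entropy in bits: H = -sum p * log2 p."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def first_order_contexts(text):
    """Count, for each single-letter context, which letters follow it."""
    contexts = defaultdict(Counter)
    for prev, nxt in zip(text, text[1:]):
        contexts[prev][nxt] += 1
    return contexts

text = "the theme of the thesis is the theory"

# Zero-order model: treat each letter as a surprise.
counts = Counter(text)
h0 = entropy([c / len(text) for c in counts.values()])

# First-order model: condition on the previous letter, here the context 't'.
after_t = first_order_contexts(text)["t"]
h1 = entropy([c / sum(after_t.values()) for c in after_t.values()])

print(h0 > h1)   # -> True: conditioning on context skews the distribution
print(h1 == 0.0) # -> True: in this sample 't' is always followed by 'h'
```

This matches the point in the text: a larger, more informative context makes the next-letter distribution more skewed, lowering the entropy.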
4. Composite Source Model: In many applications it is not easy to use a single model to describe the source. In such cases, we can define a composite source, which can be viewed as a combination or composition of several sources, with only one source being active at any given time. A composite source can be represented as a number of individual sources $S_i$, each with its own model $M_i$, and a switch that selects a source $S_i$ with probability $P_i$. This is an exceptionally rich model and can be used to describe some very complicated processes.
Figure 1.1 Composite Source Model
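A composite source of this kind can be simulated directly: a switch selects one of the sub-sources at each step with probability $P_i$, and that source emits a symbol. A minimal sketch (the two sub-sources and all names are hypothetical illustrations):

```python
import random

def composite_source(sources, probs, n, seed=0):
    """Emit n symbols; at each step a switch selects source S_i with
    probability P_i, and the selected source generates one symbol."""
    rng = random.Random(seed)  # seeded for reproducibility
    output = []
    for _ in range(n):
        source = rng.choices(sources, weights=probs)[0]
        output.append(source(rng))
    return "".join(output)

# Two hypothetical sub-sources with different alphabets.
letters = lambda rng: rng.choice("ab")
digits = lambda rng: rng.choice("01")

sample = composite_source([letters, digits], probs=[0.8, 0.2], n=20)
print(sample)                              # mostly letters, some digits
print(all(ch in "ab01" for ch in sample))  # -> True
```

Only one sub-source is active per emitted symbol, which is exactly the switching behavior sketched in Figure 1.1.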