Introduction to Coding Theory
Lecture Notes
Yehuda Lindell
Department of Computer Science
Bar-Ilan University, Israel
January 25, 2010
Abstract
These are lecture notes for an advanced undergraduate (and beginning graduate) course in Coding Theory in the Computer Science Department at Bar-Ilan University. These notes contain the technical material covered but do not include much of the motivation and discussion that is given in the lectures. They are therefore not intended for self-study and are not a replacement for what we cover in class. This is a first draft of the notes and they may therefore contain errors.
These lecture notes are based on notes taken by Alon Levy in 2008. We thank Alon for his work.
1 Introduction

The basic problem of coding theory is that of communication over an unreliable channel that introduces errors into the transmitted message. It is worth noting that all communication channels have errors, and thus codes are widely used. In fact, they are used not only for network communication, USB channels, satellite communication and so on, but also for disks and other physical media, which are likewise prone to errors. Beyond this practical application, coding theory has many applications in the theory of computer science. As such it is a topic that is of interest to both practitioners and theoreticians.

Examples:

  1. Parity check: Consider the following code: for any x = x_1, …, x_n define C(x) = x_1, …, x_n, ⊕_{i=1}^{n} x_i; that is, the message is sent together with the XOR of its bits as a parity-check bit. This code can detect a single error, because any single change results in an incorrect parity-check bit. The code cannot detect two errors, because in such a case one codeword is mapped to another.
  2. Repetition code: Let x = x_1, …, x_n be a message and let r be the number of errors that we wish to correct. Then, define C(x) = x‖x‖···‖x, where the number of times that x is written in the output is 2r + 1. Decoding works by taking the n-bit string x that appears a majority of the time. Note that this code corrects r errors: any r errors can change at most r of the copies of x, so at least r + 1 copies remain untouched, and thus the original x is the majority value.

The repetition code demonstrates that the coding problem can be solved in principle; a minimal sketch of its encoding and decoding appears below. However, the problem with this code is that it is extremely wasteful.
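The following is a minimal sketch of the repetition code in Python (binary messages are represented as strings; the function names are illustrative, not from the notes):

```python
from collections import Counter

# Minimal sketch of the repetition code: the message x is written 2r + 1
# times, and decoding takes the n-bit block that appears a majority of
# the time, exactly as described above.

def encode(x: str, r: int) -> str:
    """Repeat the n-bit message x exactly 2r + 1 times."""
    return x * (2 * r + 1)

def decode(w: str, n: int, r: int) -> str:
    """Output the n-bit block appearing most often among the 2r + 1 copies."""
    copies = [w[i * n:(i + 1) * n] for i in range(2 * r + 1)]
    return Counter(copies).most_common(1)[0][0]

# Up to r errors corrupt at most r of the copies, so at least r + 1
# copies remain equal to x and the majority vote recovers it:
c = encode("1011", r=2)                    # length 20, corrects 2 errors
corrupted = "0" + c[1:9] + "1" + c[10:]    # flip two bits
assert decode(corrupted, n=4, r=2) == "1011"
```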

The main questions of coding theory:

  1. Construct codes that can correct a maximal number of errors while using a minimal amount of redundancy.
  2. Construct codes (as above) with efficient encoding and decoding procedures.

1.1 Basic Definitions

We now proceed to the basic definitions of codes.

Definition 1.1 Let A = {a_1, …, a_q} be an alphabet; we call the a_i values symbols. A block code C of length n over A is a subset of A^n. A vector c ∈ C is called a codeword. The number of elements in C, denoted |C|, is called the size of the code. A code of length n and size M is called an (n, M)-code. A code over A = {0, 1} is called a binary code and a code over A = {0, 1, 2} is called a ternary code.

Remark 1.2 We will almost exclusively talk about “sending a codeword c” and then finding the codeword c that was originally sent, given a vector x obtained by introducing errors into c. This may seem strange at first, since it ignores the problem of mapping a message m into a codeword c and then recovering m from c. As we will see later, this is typically not a problem (especially for linear codes), and thus the mapping of original messages to codewords and back is not a concern.

The rate of a code is a measure of its efficiency. Formally:

Definition 1.3 Let C be an (n, M)-code over an alphabet of size q. Then, the rate of C is defined by

rate(C) = (log_q M) / n
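For example, for the repetition code of Example 2 applied to n-bit binary messages, there are M = 2^n codewords of length (2r + 1)n, so

rate(C) = log_2(2^n) / ((2r + 1)n) = 1/(2r + 1),

which quantifies how wasteful that code is: correcting more errors drives the rate toward 0.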

Restating what we have discussed above, the aim of coding theory is to construct a code with a short n, and large M and d; equivalently, the aim is to construct a code with a rate that is as close to 1 as possible and with d as large as possible. We now show a connection between the distance of a code and the possibility of detecting and correcting errors.

Definition 1.8 Let C be a code of length n over alphabet A.

  • C detects u errors if for every codeword c ∈ C and every x ∈ A^n with x ≠ c, it holds that if d(x, c) ≤ u then x ∉ C.
  • C corrects v errors if for every codeword c ∈ C and every x ∈ A^n, it holds that if d(x, c) ≤ v then nearest neighbor decoding of x outputs c.

The following theorem is easily proven and we leave it as an easy exercise.

Theorem 1.9

  • A code C detects u errors if and only if d(C) > u.
  • A code C corrects v errors if and only if d(C) ≥ 2v + 1.
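For example, a code with d(C) = 3 detects two errors and corrects one: changing a codeword in at most two coordinates cannot reach another codeword, and any word x at distance at most 1 from a codeword c is at distance at least 2 from every other codeword, so nearest neighbor decoding of x outputs c.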

1.2 A Probabilistic Model

The model that we have presented until now is a “worst-case model”. Specifically, what interests us is the number of errors that we can correct, and we are not interested in how or where these errors occur. This is the model that we will refer to for most of this course, and it is the model introduced by Hamming. However, there is another model (introduced by Shannon) that considers probabilistic errors. We will use this model later on (in particular, in order to prove Shannon's bounds). In any case, as we will see here, there is a close connection between the two models.

Definition 1.10 A communication channel is comprised of an alphabet A = {a_1, …, a_q} and a set of forward channel probabilities of the form Pr[a_j received | a_i was sent] such that for every i:

∑_{j=1}^{q} Pr[a_j received | a_i was sent] = 1

A communication channel is memoryless if for all vectors x = x_1 … x_n and c = c_1 … c_n it holds that

Pr[x received | c was sent] = ∏_{i=1}^{n} Pr[x_i received | c_i was sent]

Note that in a memoryless channel, all errors are independent of each other. This is not a realistic model but is a useful abstraction. We now consider additional simplifications:

Definition 1.11 A symmetric channel is a memoryless communication channel for which there exists a p < 1/2 such that for every i it holds that

∑_{j=1, j≠i}^{q} Pr[a_j received | a_i was sent] = p

Note that in a symmetric channel, every symbol has the same probability of error. In addition, if a symbol is received with an error, then the probability that it is changed to any given symbol is the same as the probability that it is changed to any other symbol. A binary symmetric channel has two probabilities:

Pr[1 received | 0 was sent] = Pr[0 received | 1 was sent] = p
Pr[1 received | 1 was sent] = Pr[0 received | 0 was sent] = 1 − p

The probability p is called the crossover probability.

Maximum likelihood decoding. In this probabilistic model, the decoding rule is also a probabilistic one:

Definition 1.12 Let C be a code of length n over an alphabet A. The maximum likelihood decoding rule states that every x ∈ A^n is decoded to c_x ∈ C where

Pr[x received | c_x was sent] = max_{c∈C} Pr[x received | c was sent]

If there exists more than one c with this maximum probability, then ⊥ is returned.

We now show a close connection between maximum likelihood decoding in this probabilistic model, and nearest neighbor decoding.

Theorem 1.13 In a binary symmetric channel with p < 1/2, maximum likelihood decoding is equivalent to nearest neighbor decoding.

Proof: Let C be a code and x the received word. Then for every c and for every i we have that d(x, c) = i if and only if

Pr[x received | c was sent] = p^i (1 − p)^{n−i}

Since p < 1/2 we have that (1 − p)/p > 1. Thus

p^i (1 − p)^{n−i} = p^{i+1} (1 − p)^{n−i−1} · (1 − p)/p > p^{i+1} (1 − p)^{n−i−1}.

This implies that

p^0 (1 − p)^n > p (1 − p)^{n−1} > … > p^n (1 − p)^0

and so the nearest neighbor yields the codeword that maximizes the required probability.
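The equivalence can be checked directly on a toy example. A minimal sketch (assuming a binary code given as a list of equal-length bit-strings and a crossover probability p < 1/2; ties are ignored here for simplicity, whereas Definition 1.12 returns ⊥):

```python
# Sketch: over a binary symmetric channel with p < 1/2, maximum likelihood
# decoding and nearest neighbor decoding pick the same codeword.

def dist(x: str, y: str) -> int:
    """Hamming distance between equal-length bit-strings."""
    return sum(a != b for a, b in zip(x, y))

def likelihood(x: str, c: str, p: float) -> float:
    """Pr[x received | c was sent] = p^i (1 - p)^(n - i), where i = d(x, c)."""
    i = dist(x, c)
    return p ** i * (1 - p) ** (len(x) - i)

def ml_decode(x, code, p):
    return max(code, key=lambda c: likelihood(x, c, p))

def nn_decode(x, code):
    return min(code, key=lambda c: dist(x, c))

code = ["00000", "11111"]
for x in ["00100", "11011", "10111"]:
    assert ml_decode(x, code, p=0.1) == nn_decode(x, code)
```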

2.2 Code Weight and Code Distance

Definition 2.6 Let x ∈ F_q^n. The Hamming weight of x, denoted wt(x), is defined to be the number of coordinates that are not zero. That is, wt(x) is defined as d(x, 0).

Notation. For y = (y_1, …, y_n) and x = (x_1, …, x_n), define x ∗ y = (x_1 y_1, …, x_n y_n).

Lemma 2.7 If x, y ∈ F_2^n, then wt(x + y) = wt(x) + wt(y) − 2 wt(x ∗ y).

Proof: Looking at each coordinate separately (x_i and y_i denote the i-th coordinates of x and y) we have:

  wt(x_i + y_i)   wt(x_i)   wt(y_i)   wt(x_i ∗ y_i)
       0             0         0            0
       1             0         1            0
       1             1         0            0
       0             1         1            1

The lemma is obtained by summing over all coordinates.

Corollary 2.8 If x, y ∈ F_2^n then wt(x) + wt(y) ≥ wt(x + y).

Lemma 2.9 For every prime power q it holds that for every x, y ∈ F_q^n

wt(x) + wt(y) ≥ wt(x + y) ≥ wt(x) − wt(y)

We leave the proof of this lemma as an exercise.

Definition 2.10 Let C be a code (not necessarily linear). The weight of C, denoted wt(C), is defined by

wt(C) = min_{c∈C, c≠0} wt(c)

The following theorem only holds for linear codes:

Theorem 2.11 Let C be a linear code over F_q^n. Then d(C) = wt(C).

Proof: Let d = d(C). By the definition of the distance of a code, there exist x′, y′ ∈ C such that d(x′, y′) = d. By linearity we have that x′ − y′ ∈ C. Now, the weight of the codeword x′ − y′ is d, and so we have found a codeword with weight d, implying that wt(C) ≤ d = d(C). Next, let w = wt(C). By the definition of weight, there exists a codeword c ∈ C such that d(c, 0) = wt(C) = w. Since 0 ∈ C, it follows that there exist two codewords in C with distance w from each other. Thus, d(C) ≤ w = wt(C). We have shown that wt(C) ≤ d(C) and d(C) ≤ wt(C). Thus, d(C) = wt(C), as required.

The above theorem is interesting. In particular, it gives us a first step toward determining the distance of a code. Previously, in order to calculate the distance of a code, we would have to look at all pairs of codewords and measure their distance (quadratic in the size of the code). Using Theorem 2.11 it suffices to look at each codeword in isolation and measure its weight (linear in the size of the code).
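A small sketch of this speedup (using the binary linear code {000, 101, 010, 111} that also appears as an example in Section 2.6; codewords are bit-tuples):

```python
from itertools import combinations

code = [(0, 0, 0), (1, 0, 1), (0, 1, 0), (1, 1, 1)]

def wt(x):
    """Hamming weight: number of non-zero coordinates."""
    return sum(v != 0 for v in x)

def d(x, y):
    """Hamming distance between two vectors."""
    return sum(a != b for a, b in zip(x, y))

# Quadratic in |C|: minimum distance over all pairs of distinct codewords.
dist_by_pairs = min(d(x, y) for x, y in combinations(code, 2))

# Linear in |C| (Theorem 2.11): minimum weight of a nonzero codeword.
dist_by_weight = min(wt(c) for c in code if any(c))

assert dist_by_pairs == dist_by_weight == 1
```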

Advantages of Linear Codes

  1. A code can be described using its basis. Furthermore, such a basis can be found via Gaussian elimination of a matrix comprised of the codewords as rows.
  2. The code’s distance equals its weight.
  3. As we shall see, mapping a message into the code and back is simple.

2.3 Generator and Parity-Check Matrices

Definition 2.12

  1. A generator matrix G for a linear code C is a matrix whose rows form a basis for C.
  2. A parity check matrix H for C is a generator matrix for the dual code C⊥.

Remarks:

  1. If C is a linear [n, k]-code then G ∈ F_q^{k×n} (recall that k denotes the number of rows and n the number of columns), and H ∈ F_q^{(n−k)×n}.
  2. The rows of a generator matrix are linearly independent.
  3. In order to show that a k-by-n matrix G is a generator matrix of a code C, it suffices to show that the rows of G are codewords in C and that they are linearly independent.

Definition 2.13

  1. A generator matrix is said to be in standard form if it is of the form (I_k | X), where I_k denotes the k-by-k identity matrix.
  2. A parity check matrix is said to be in standard form if it is of the form (Y | I_{n−k}).

Note that the dimensions of X above are k-by-(n − k), and the dimensions of Y are (n − k)-by-k.
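For illustration, recall the standard fact (used again in Section 2.5 below, where H′ is computed from G′) that if G = (I_k | X) is in standard form, then H = (−Xᵀ | I_{n−k}) is a standard-form parity check matrix: its rows are linearly independent and H · Gᵀ = −X + X = 0. A minimal sketch over F_2, where −Xᵀ = Xᵀ (the specific [4, 2] code is a made-up example, not one from the notes):

```python
# Sketch: deriving a standard-form parity check matrix from a standard-form
# generator matrix over F_2: G = (I_k | X) gives H = (X^T | I_{n-k}).

k, n = 2, 4
X = [[1, 1],
     [0, 1]]                          # the k-by-(n-k) block of G
G = [[1, 0] + X[0],
     [0, 1] + X[1]]                   # G = (I_k | X)

XT = [list(col) for col in zip(*X)]   # transpose of X
H = [XT[0] + [1, 0],
     XT[1] + [0, 1]]                  # H = (X^T | I_{n-k})

# Check H * G^T = 0 over F_2: every row of H is orthogonal to every row of G.
for h in H:
    for g in G:
        assert sum(a * b for a, b in zip(h, g)) % 2 == 0
```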

Lemma 2.14 Let C be a linear [n, k]-code with generator matrix G. Then for every v ∈ F_q^n it holds that v ∈ C⊥ if and only if v · Gᵀ = 0. In particular, a matrix H ∈ F_q^{(n−k)×n} is a parity check matrix if and only if its rows are linearly independent and H · Gᵀ = 0.

Proof: Denote the rows of G by r_1, …, r_k (each r_i ∈ F_q^n). Then for every c ∈ C we have that

c = ∑_{i=1}^{k} λ_i r_i

for some λ_1, …, λ_k ∈ F_q. Now, if v ∈ C⊥ then for every r_i it holds that v · r_i = 0 (this holds because each r_i ∈ C). This implies that v · Gᵀ = 0, as required. For the other direction, if v · Gᵀ = 0 then for every i it holds that v · r_i = 0. Let c be any codeword and let λ_1, …, λ_k ∈ F_q be such that c = ∑_{i=1}^{k} λ_i r_i. It follows that

v · c = ∑_{i=1}^{k} v · (λ_i r_i) = ∑_{i=1}^{k} λ_i (v · r_i) = ∑_{i=1}^{k} λ_i · 0 = 0.

This holds for every c ∈ C and thus v ∈ C⊥.

For the “in particular” part of the lemma, let H ∈ F_q^{(n−k)×n}. If H is a parity check matrix then its rows are linearly independent and in C⊥. Thus, by what we have proven, it holds that H · Gᵀ = 0. For the other direction, if H · Gᵀ = 0 then every row v of H satisfies v · Gᵀ = 0, and so every row is in C⊥ (by the first part of the proof). Since the rows of the matrix are linearly independent and the matrix is of the correct dimension, we conclude that H is a parity check matrix for C, as required.

An equivalent formulation: Lemma 2.14 can be equivalently worded as follows.

Let C be a linear [n, k]-code with a parity-check matrix H. Then v ∈ C if and only if v · Hᵀ = 0.

This equivalent formulation immediately yields an efficient algorithm for error detection.
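A minimal sketch of that detection algorithm over F_2, reusing the H just derived for the made-up [4, 2] code (a word w passes the check iff w · Hᵀ = 0):

```python
# Sketch: error detection with a parity check matrix. A nonzero result of
# w * H^T over F_2 proves that w is not a codeword.

H = [[1, 0, 1, 0],
     [1, 1, 0, 1]]

def has_error(w):
    """True iff w * H^T != 0, i.e. w is certainly not a codeword."""
    return any(sum(a * b for a, b in zip(h, w)) % 2 for h in H)

assert not has_error([1, 0, 1, 1])   # row 1 of G above: a codeword
assert has_error([1, 0, 1, 0])       # same word with its last bit flipped
```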

2.4 Equivalence of Codes

Definition 2.18 Two (n, M)-codes are equivalent if one can be derived from the other by a permutation of the coordinates and multiplication of any specific coordinate by a non-zero scalar.

Note that permuting the coordinates or multiplying by a non-zero scalar makes no difference to the parameters of the code. In this sense, the codes are therefore equivalent.

Theorem 2.19 Every linear code C is equivalent to a linear code C′ with a generator matrix in standard form.

Proof: Let G be a generator matrix for C. Then, using Gaussian elimination, find the reduced row echelon form of G. (In this form, the first non-zero entry of every row is a one, and each such leading one has only zeroes above and below it in its column.) Given this reduced matrix, we apply a permutation to the columns so that the identity matrix appears in the first k columns. The code generated by the resulting matrix is equivalent to the original one.

2.5 Encoding Messages in Linear Codes

First, we remark that it is always possible to work with standard-form matrices. In particular, given a generator matrix G for a linear code, we can efficiently compute a standard-form G′ using Gaussian elimination. We can then compute H′ as we saw previously. Thus, given any generator matrix it is possible to efficiently find its standard-form parity-check matrix (or, more precisely, the standard-form parity-check matrix of an equivalent code). Note that given this parity-check matrix, it is then possible to compute d by looking for the smallest d for which there exist d linearly dependent columns. Unfortunately, we do not have any efficient algorithm for this last task.

Now, let C be a linear [n, k]-code over F_q, and let v_1, …, v_k be a basis for it. This implies that for every c ∈ C there are unique λ_1, …, λ_k ∈ F_q such that ∑_{i=1}^{k} λ_i v_i = c. In addition, for every λ_1, …, λ_k ∈ F_q it holds that ∑_{i=1}^{k} λ_i v_i ∈ C. Therefore, the mapping

(λ_1, …, λ_k) → ∑_{i=1}^{k} λ_i v_i

is a one-to-one and onto mapping from F_q^k to the code C. This is true for every basis, and in particular for the rows of the generator matrix. We therefore define an encoding procedure E_C as follows. For every λ ∈ F_q^k, define

E_C(λ) = λ · G

Observe that if G is in standard form, then E_C(λ) = λ · (I_k | X) = (λ, λ · X). Thus, it is trivial to map a codeword E_C(λ) back to its original message λ (just take its first k coordinates). Specifically, if we are only interested in error detection, then it is possible to first compute x · Hᵀ; if the result equals 0, then just output the first k coordinates of x. The above justifies why we are only interested in the following decoding problem:

Given a vector x ∈ F_q^n, find the closest codeword c ∈ C to x.

The problems of encoding an original message into a codeword and retrieving it back from the codeword are trivial (at least for linear codes).
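A sketch of this encode/recover round trip over F_2, with the same made-up standard-form generator matrix as above:

```python
# Sketch: E_C(lambda) = lambda * G with G = (I_k | X) in standard form;
# the original message is recovered as the first k coordinates.

G = [[1, 0, 1, 1],
     [0, 1, 0, 1]]    # standard form (I_2 | X)
k = 2

def encode(msg):
    """Multiply the message vector by G over F_2."""
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

def recover(codeword):
    """For standard-form G, E_C(lambda) = (lambda, lambda * X)."""
    return codeword[:k]

c = encode([1, 1])
assert c == [1, 1, 1, 0]
assert recover(c) == [1, 1]
```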

2.6 Decoding Linear Codes

Cosets – background. We recall the concept of cosets from algebra.

Definition 2.20 Let C be a linear code of length n over F_q and let u ∈ F_q^n. Then the coset of C determined by u is defined to be the set

C + u = {c + u | c ∈ C}

Example. Let C = {000, 101, 010, 111} be a binary linear code. Then,

C + 000 = C,  C + 010 = {010, 111, 000, 101} = C,  and  C + 001 = {001, 100, 011, 110}

Note that C ∪ (C + 001) = F_2^3.

Theorem 2.21 Let C be a linear [n, k]-code over F_q. Then

  1. For every u ∈ F_q^n there exists a coset of C that contains u.
  2. For every u ∈ F_q^n we have that |C + u| = |C| = q^k.
  3. For every u, v ∈ F_q^n, u ∈ C + v implies that C + u = C + v.
  4. For every u, v ∈ F_q^n: either C + u = C + v or (C + u) ∩ (C + v) = ∅.
  5. There are q^{n−k} different cosets for C.
  6. For every u, v ∈ F_q^n it holds that u − v ∈ C if and only if u and v are in the same coset.

Proof:

  1. The coset C + u contains u.
  2. By definition |C| = q^k. In addition, c + u = c′ + u if and only if c = c′. Thus, |C + u| = |C| = q^k.
  3. Let u ∈ C + v. Then, there exists a c ∈ C such that u = c + v. Let x ∈ C + u; likewise, there exists a c′ ∈ C such that x = c′ + u. This implies that x = c′ + c + v. Since c′ + c ∈ C we have that x ∈ C + v and thus C + u ⊆ C + v. Since |C + u| = |C + v| from (2), we conclude that C + u = C + v.
  4. Let C + u and C + v be cosets. If there exists an x such that x ∈ (C + u) ∩ (C + v) then from (3) it holds that C + u = C + x and C + v = C + x, and thus C + u = C + v. Thus, either C + u = C + v or they are disjoint.
  5. From what we have seen so far, each coset is of size q^k, every vector in F_q^n is in some coset, and all cosets are disjoint. Thus there are q^n / q^k = q^{n−k} different cosets.
  6. Assume that u − v ∈ C and denote c = u − v. Then, u = c + v ∈ C + v. Furthermore, as we have seen, v ∈ C + v. Thus, u and v are in the same coset. For the other direction, if u and v are in the same coset C + x then u = c + x and v = c′ + x for some c, c′ ∈ C. Thus, u − v = c − c′ ∈ C as required.

Remark. The above theorem shows that the cosets of C constitute a partitioning of the vector space F_q^n. Furthermore, item (6) hints at a decoding procedure: given u, find v from the same coset and decode to the codeword u − v. The remaining question is which v should be taken.

Definition 2.22 The leader of a coset is defined to be the word with the smallest Hamming weight in the coset.

Nearest Neighbor Decoding

The above yields a simple algorithm. Let C be a linear code. Assume that the codeword v was sent and the word w received. The error word is e = w − v ∈ C + w. Therefore, given the vector w, we search for the word e of smallest weight in C + w. Stated differently, given w we find the leader e of the coset C + w and output v = w − e ∈ C. The problem with this method is that it requires building and storing an array of all the cosets, which is very expensive.

Constructing an SDA. Recall that the syndrome of a word w is S(w) = w · Hᵀ; by the equivalent formulation of Lemma 2.14, two words lie in the same coset if and only if they have the same syndrome, so a syndrome decoding array (SDA) stores the pair (e, S(e)) for every coset leader e. Naively, an SDA can be built in time q^n by traversing all the cosets and computing each leader and its syndrome. A faster procedure for small d is as follows:

  1. For every e for which wt(e) ≤ ⌊(d − 1)/2⌋, define e to be the leader of a coset (it does not matter which coset it is).
  2. Store (e, S(e)) for every such e.

The complexity of this algorithm is linear in the number of words of weight at most (d − 1)/2. Thus, it takes time and memory

∑_{i=0}^{⌈(d−1)/2⌉} (n choose i) · (q − 1)^i.

This can also be upper bounded by

(n choose ⌈(d−1)/2⌉) · q^{⌈(d−1)/2⌉},

because one writes all possible combinations of symbols in all possible ⌈(d − 1)/2⌉ places. Importantly, if d is a constant, then this procedure runs in polynomial time.¹

In order to justify that this suffices, we remark that every coset has at most one leader with weight less than or equal to ⌊(d − 1)/2⌋. (Otherwise we could take the difference of the two vectors, which must be in C since they are in the same coset; however, by their assumed weights, the weight of the difference would be smaller than d, in contradiction to the assumption regarding the distance of C.) Thus, no coset leader could have been missed. On the other hand, if there is a coset whose leader has weight greater than ⌊(d − 1)/2⌋, then that number of errors cannot be corrected anyway, and so there is no reason to store it in the table.
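A minimal sketch of this construction and the resulting decoder, for the binary repetition code C = {000, 111} (so d = 3 and the table holds the leaders of weight at most 1); the parity check matrix H used here is an assumption of the sketch:

```python
from itertools import combinations

# Sketch: build a syndrome decoding array (SDA) for small d, then decode.
# C = {000, 111}, d = 3; H is a parity check matrix for C.

n, d = 3, 3
H = [[1, 1, 0],
     [1, 0, 1]]

def syndrome(w):
    """S(w) = w * H^T over F_2."""
    return tuple(sum(a * b for a, b in zip(h, w)) % 2 for h in H)

# Steps 1 and 2: every word of weight <= floor((d-1)/2) is a coset leader;
# store the pair (S(e), e).
sda = {}
for weight in range((d - 1) // 2 + 1):
    for positions in combinations(range(n), weight):
        e = tuple(int(i in positions) for i in range(n))
        sda[syndrome(e)] = e

def decode(w):
    """The error e shares w's coset (and syndrome); output w - e = w + e."""
    e = sda[syndrome(w)]
    return tuple((a + b) % 2 for a, b in zip(w, e))

assert decode((1, 0, 1)) == (1, 1, 1)   # one error corrected
assert decode((0, 1, 0)) == (0, 0, 0)
```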

2.7 Summary

A linear code is a vector subspace. Each such code has a generator and parity-check matrix which can be used for encoding messages, computing error detection, and computing the distance of the code. There also exists an algorithm for decoding that is, unfortunately, not polynomial-time in general. However, for d that is constant, it does run in polynomial time. As we proceed in the course, we will see specific linear codes that have efficient decoding procedures.

¹We remark that this SDA is smaller than the previous one. This is possible because not every coset is necessarily relevant when we consider only (d − 1)/2 errors.

3 Bounds

For an (n, M, d)-code, the larger the value of M, the more efficient the code. We now turn to study bounds on the size of M. In this context, a lower bound on M states that it is possible to construct codes that are at least as good as the bound, whereas an upper bound states that no code with M this large exists (for given n and d). Observe that lower and upper bounds here have the reverse meaning of their use in algorithms (here a lower bound is “good news” whereas an upper bound is “bad news”). Our aim is to find an optimal balance between the parameters. We note that sometimes there are different goals that yield different optimality criteria.

3.1 The Main Question of Coding Theory

Recall that the rate of a code is defined to be R(C) = (log_q M)/n; for a linear [n, k]-code we can equivalently write R(C) = k/n. We now define a similar notion that combines the distance and length:

Definition 3.1 For a code C over F_q with parameters (n, M, d), the relative distance of C is defined to be

δ(C) = (d − 1) / n

We remark that relative distance is often defined as d/n; however, taking (d − 1)/n makes some of the calculations simpler.

Examples:

  1. For the trivial code C = F_q^n we have d(C) = 1 and δ(C) = 0.
  2. Define the repetition code to be the [n, 1, n]-code C = {0^n, 1^n}. We have that δ(C) = (n − 1)/n → 1, whereas R(C) = 1/n → 0.

Definition 3.2 Let A be an alphabet of size q > 1 and fix n, d. We define

A_q(n, d) = max{M | there exists an (n, M, d)-code over A}

An (n, M, d)-code for which M = A_q(n, d) is called an optimal code.

Definition 3.3 Let q > 1 be a prime power and fix n, d. We define

B_q(n, d) = max{q^k | there exists a linear [n, k, d]-code over F_q}

A linear [n, k, d]-code for which q^k = B_q(n, d) is called an optimal linear code.

We remark that A_q(n, d) and B_q(n, d) depend only on the size q of the alphabet, and not on the alphabet itself.

Theorem 3.4 Let q ≥ 2 be a prime power. Then, for every n,

  1. For every 1 ≤ d ≤ n it holds that B_q(n, d) ≤ A_q(n, d) ≤ q^n.
  2. B_q(n, 1) = A_q(n, 1) = q^n.
  3. B_q(n, n) = A_q(n, n) = q.

Proof:

  1. Directly from the definition and from the fact that every code is a subset of A^n, and so M ≤ q^n.

Motivation for the bound. Let C be a code and draw a sphere of radius d − 1 around every codeword. In an optimal code, it must be the case that the spheres cover all of A^n; otherwise, there exists a word at distance at least d from all existing codewords that can be added to the code. This yields a larger code, in contradiction to the assumed optimality. Thus, the number of codewords in an optimal code is at least the number of spheres of radius d − 1 that it takes to cover the entire space A^n. Formally:

Theorem 3.8 (sphere-covering bound): For every natural number q > 1 and every n, d ∈ N such that 1 ≤ d ≤ n it holds that

A_q(n, d) ≥ q^n / V_q^n(d − 1)

where V_q^n(r) denotes the volume of a sphere of radius r, i.e., the number of words within distance r of a given word.

Proof: Let C = {c_1, …, c_M} be an optimal (n, M, d)-code over an alphabet of size q. That is, M = A_q(n, d). Since C is optimal, there does not exist any word in A^n of distance at least d from every c_i ∈ C (otherwise, we could add this word to the code without reducing the distance, in contradiction to the optimality of the code). Thus, for every x ∈ A^n there exists at least one c_i ∈ C such that x ∈ S_A(c_i, d − 1). This implies that

A^n ⊆ ⋃_{i=1}^{M} S_A(c_i, d − 1)

and so

q^n ≤ ∑_{i=1}^{M} |S_A(c_i, d − 1)| = M · V_q^n(d − 1)

Since C is optimal we have M = A_q(n, d) and hence q^n ≤ A_q(n, d) · V_q^n(d − 1), implying that

A_q(n, d) ≥ q^n / V_q^n(d − 1)

3.3 The Hamming (Sphere Packing) Upper Bound

We now prove an upper bound, limiting the maximum possible size of any code. The idea behind the upper bound is that if we place spheres of radius ⌊(d − 1)/2⌋ around every codeword, then the spheres must be disjoint (otherwise there exists a word at distance at most ⌊(d − 1)/2⌋ from two codewords, and by the triangle inequality there are two codewords at distance at most d − 1 from each other). The bound is thus derived by computing how many disjoint spheres of this size can be “packed” into the space.

Theorem 3.9 (sphere-packing bound): For every natural number q > 1 and n, d ∈ N such that 1 ≤ d ≤ n it holds that

A_q(n, d) ≤ q^n / V_q^n(⌊(d − 1)/2⌋)

Proof: Let C = {c_1, …, c_M} be an optimal code with |A| = q, and let e = ⌊(d − 1)/2⌋. Since d(C) = d, the spheres S_A(c_i, e) are all disjoint. Therefore

⋃_{i=1}^{M} S_A(c_i, e) ⊆ A^n

where the union is a disjoint one. Therefore:

M · V_q^n(⌊(d − 1)/2⌋) ≤ q^n.

Using now the fact that M = A_q(n, d) we conclude that

A_q(n, d) ≤ q^n / V_q^n(⌊(d − 1)/2⌋)

We stress that it is impossible to prove the existence of a code in this way. This is due to the fact that a word that is not in any of the spheres (and so is at distance greater than (d − 1)/2 from all codewords) cannot necessarily be added to the code.

Corollary 3.10 For every natural number q > 1 and n, d ∈ N such that 1 ≤ d ≤ n it holds that

q^n / V_q^n(d − 1) ≤ A_q(n, d) ≤ q^n / V_q^n(⌊(d − 1)/2⌋)

Note that there is a huge gap between these two bounds.
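Both bounds are easy to evaluate numerically. A small sketch, assuming the standard volume formula V_q^n(r) = ∑_{i=0}^{r} (n choose i)(q − 1)^i (from a part of the notes not shown here):

```python
from math import comb

# Sketch: the sphere-covering lower bound and sphere-packing upper bound.

def V(q, n, r):
    """Volume of a sphere of radius r: sum of C(n, i) * (q - 1)^i."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(r + 1))

def covering_lower(q, n, d):
    return q ** n / V(q, n, d - 1)            # A_q(n, d) is at least this

def packing_upper(q, n, d):
    return q ** n / V(q, n, (d - 1) // 2)     # A_q(n, d) is at most this

# q = 2, n = 7, d = 3: roughly 4.41 <= A_2(7, 3) <= 16. The Hamming code
# Ham(3, 2) of Section 3.4.1, with 2^4 = 16 codewords, meets the upper bound.
print(covering_lower(2, 7, 3), packing_upper(2, 7, 3))
```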

3.4 Perfect Codes

We now show that there exist codes that achieve the Hamming (sphere-packing) upper bound. Unfortunately, the codes that we show do not exist for all parameters.

Definition 3.11 A code C over an alphabet of size q with parameters (n, M, d) is called a perfect code if

M = q^n / V_q^n(⌊(d − 1)/2⌋)

We remark that every perfect code is an optimal code, but not necessarily the other way around.

3.4.1 The Binary Hamming Code

Definition 3.12 Let r ≥ 2 and let C be a binary linear code with n = 2^r − 1 whose parity-check matrix H is such that the columns are all of the non-zero vectors in F_2^r. This code C is called a binary Hamming code of length 2^r − 1, denoted Ham(r, 2).

The above definition does not specify the order of the columns, and thus there are many Hamming codes. Before proceeding, we remark that the matrix H specified in the definition is “legal” because it contains all of the r vectors of weight 1. Thus, H contains I_r and so its rows are linearly independent (since the columns are vectors in F_2^r, the matrix H has r rows).

Example. We write a parity-check matrix for Ham(3, 2), ordering the columns as the non-zero vectors of F_2^3 in increasing binary order:

H = ( 0 0 0 1 1 1 1
      0 1 1 0 0 1 1
      1 0 1 0 1 0 1 )

As can be seen, for Ham(3, 2) we have n = 7, k = 4 and H ∈ F_2^{3×7}.

Proposition 3.13

  1. All binary Hamming codes of a given length are equivalent.
  2. For every r ∈ N, the dimension of Ham(r, 2) is k = 2^r − 1 − r.
  3. For every r ∈ N, the distance of Ham(r, 2) is d = 3, and so the code can correct exactly one error.
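With the column ordering chosen in the example above (column j of H is the binary representation of j), single-error correction has a classical shortcut: the syndrome of a received word with one flipped bit is exactly the binary encoding of the flipped position. This decoding rule is standard, though the notes' own treatment of it is not shown here; a minimal sketch:

```python
# Sketch: correcting one error in Ham(3, 2) when column j of H is the
# binary representation of j. The syndrome of w = c + e_j equals column j
# of H, i.e. the position of the error written in binary.

H = [[0, 0, 0, 1, 1, 1, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [1, 0, 1, 0, 1, 0, 1]]

def correct_single_error(w):
    """Return w with the (at most one) flipped bit corrected."""
    s = [sum(a * b for a, b in zip(h, w)) % 2 for h in H]
    pos = s[0] * 4 + s[1] * 2 + s[2]    # syndrome read as a number in [0, 7]
    if pos:                              # nonzero syndrome: error at position pos
        w = w[:pos - 1] + [1 - w[pos - 1]] + w[pos:]
    return w

received = [0, 0, 0, 0, 1, 0, 0]         # zero codeword with bit 5 flipped
assert correct_single_error(received) == [0] * 7
```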