Inner Product Spaces - Foundations of Engineering Systems Analysis - Lecture Notes

Review of the fundamentals of real analysis and point set topology. Concepts of finite-dimensional vector spaces from both algebraic and topological points of view. Introduction to infinite-dimensional vector spaces and function spaces along with the notion of completeness. Key points in this lecture handout are: Inner Product Spaces, Homogeneity, Hermiticity, Orthogonality in Inner Product Spaces, Orthogonal Projection, Optimization in a Hilbert Space, Orthonormal Sets and Bases, Fourier Series

Typology: Study notes

2012/2013

Uploaded on 10/02/2013

aanila


ME(EE) 550

Foundations of Engineering Systems Analysis

Chapter Four: Inner Product Spaces

The concept of normed vector spaces, presented in Chapter 3, synergistically combines the topological structure of metric spaces and the algebraic structure of vector spaces. An inner product space is a special case of a normed space: every inner product induces a unique norm, and the resulting geometry is largely similar to that of the familiar Euclidean spaces. Many familiar notions (e.g., root mean square value, standard deviation, least squares estimation, and orthogonal functions) in engineering analysis can be explained from the perspective of inner product spaces. For example, the inner product of two vectors is a generalization of the familiar dot product in vector calculus. This chapter should be read along with Chapter 5 Parts A and B of Naylor & Sell. Specifically, both solved examples and exercises in Naylor & Sell are very useful.

1 Basic Concepts

Definition 1.1. (Inner Product) Let (V, ⊕) be a vector space over a (complete) field F, where we choose F to be R or C. Then, a function 〈•, •〉 : V × V → F is defined to be an inner product if the following conditions hold ∀x, y, z ∈ V ∀α ∈ F:

  • Additivity: 〈(x ⊕ y), z〉 = 〈x, z〉 + 〈y, z〉.
  • Homogeneity: 〈αx, y〉 = α〈x, y〉.
  • Hermiticity: 〈y, x〉 = ¯〈x, y〉, where the overbar denotes complex conjugation.
  • Positive Definiteness: 〈x, x〉 > 0 if x ≠ 0_V.

A vector space with an inner product is called an inner product space.

Example 1.1. Examples of Inner Product Spaces:

  1. Let V = F^n and x, y ∈ V. Then, 〈x, y〉 = ∑_{k=1}^{n} ¯y_k x_k is an inner product, where the vector x ≡ [x_1, x_2, · · · , x_n].
  2. Let V = F^{n×m} and A, B ∈ V. Then, 〈A, B〉 = trace(B^H A) is an inner product.
  3. Let V = ℓ_2(F) and x, y ∈ V. Then, 〈x, y〉 = ∑_{k=1}^{∞} ¯y_k x_k is an inner product.
  4. Let V = L_2(μ) and x, y ∈ V. Then, 〈x, y〉 = ∫ dμ(t) ¯y(t) x(t) is an inner product.
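The four axioms of Definition 1.1 can be checked numerically for the inner product 〈x, y〉 = ∑_k ¯y_k x_k on C^n from Example 1.1. The following is a sketch using NumPy; it is an illustration added to these notes, not part of the original text.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    # <x, y> = sum_k conj(y_k) * x_k, the convention of Example 1.1
    return np.sum(np.conj(y) * x)

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
alpha = 2.0 - 3.0j

# Additivity: <x + y, z> = <x, z> + <y, z>
assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))
# Homogeneity: <alpha x, y> = alpha <x, y>
assert np.isclose(inner(alpha * x, y), alpha * inner(x, y))
# Hermiticity: <y, x> = conj(<x, y>)
assert np.isclose(inner(y, x), np.conj(inner(x, y)))
# Positive definiteness: <x, x> is real and > 0 for x != 0
assert abs(inner(x, x).imag) < 1e-12 and inner(x, x).real > 0
```

Note that the homogeneity check multiplies the first argument; scaling the second argument instead picks up a complex conjugate, as recorded in Remark 1.2.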

Remark 1.1. Let V be a vector space over a (complete) field F and x ∈ V. Then 〈x, x〉 ∈ R, because it follows from the Hermiticity property in Definition 1.1 that 〈x, x〉 = ¯〈x, x〉. Furthermore, it follows from the additivity and positive definiteness properties in Definition 1.1 that 〈x, x〉 ∈ [0, ∞) ∀x ∈ V.

Remark 1.2. The following results are derived from Definition 1.1.

  • 〈y, αx〉 = ¯α〈y, x〉.
  • 〈x, (y ⊕ z)〉 = 〈x, y〉 + 〈x, z〉.
  • If 〈x, x〉 = 0, then x = 0_V.
  • If 〈x, y〉 = 0 ∀y ∈ V, then x = 0_V.

Lemma 1.1. (Cauchy-Schwarz Inequality) Let V be an inner product space over a (complete) field F. Then, the following inequality holds: |〈x, y〉| ≤ ‖x‖ ‖y‖ ∀x, y ∈ V, where ‖x‖ ≜ √〈x, x〉. The equality holds if and only if either x = αy or y = αx for some α ∈ F.

Proof. The inequality is trivial if y = 0_V. Otherwise, setting α ≜ 〈x, y〉/‖y‖^2, the proof follows from the fact: 0 ≤ 〈(x − αy), (x − αy)〉 = ‖x‖^2 − |〈x, y〉|^2 / ‖y‖^2.

Corollary 1.1. The equality part in Lemma 1.1 holds if and only if x and y are collinear, i.e., if either x = αy or y = αx for some α ∈ F.

Proof. The proof follows by substituting x = αy or y = αx in the inequality in Lemma 1.1.

Remark 1.3. Note that ‖x ⊕ y‖ = ‖x‖ + ‖y‖ if and only if x = αy or y = αx for some real α ≥ 0; similarly, ‖x ⊕ y‖ = |‖x‖ − ‖y‖| if and only if x = αy or y = αx for some real α ≤ 0. In either case, x and y are collinear.

Lemma 1.2. Let V be an inner product space and x ∈ V. Then ‖x‖ ≜ √〈x, x〉 is a valid norm on the vector space V.

It is also a fact that the converse of Theorem 1.1 is true. That is, if V is a normed vector space whose norm satisfies the parallelogram equality, then there is a unique inner product defined on V that generates the norm. In other words, V is an inner product space if and only if the norm satisfies the parallelogram equality. The issue is how to find the inner product from the knowledge of the norm. This issue is addressed in the following theorem.

Theorem 1.2. (Polarization Identity) Let (V, ‖ • ‖) be a normed vector space that satisfies the parallelogram equality. Then, the (unique) inner product 〈•, •〉 on V that satisfies the relation 〈x, x〉 = ‖x‖^2 ∀x ∈ V is given as follows:

  1. If the scalar field F over which the vector space is constructed is the real field R, then the inner product is constructed from the given norm as follows:

〈x, y〉 = (1/4) ( ‖x ⊕ y‖^2 − ‖x ⊕ (−y)‖^2 )

  2. If the scalar field F over which the vector space is constructed is the complex field C, then the inner product is constructed from the given norm as follows:

〈x, y〉 = (1/4) ( ‖x ⊕ y‖^2 − ‖x ⊕ (−y)‖^2 + i‖x ⊕ iy‖^2 − i‖x ⊕ (−iy)‖^2 )

where i ≜ √−1.

Proof. The proof follows by substituting 〈x, x〉 for ‖x‖^2 and performing algebraic operations.
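The complex polarization identity can be verified numerically on C^n, where ⊕ is ordinary vector addition and the inner product is that of Example 1.1. The sketch below (NumPy, added here for illustration) recovers the inner product from norms alone.

```python
import numpy as np

rng = np.random.default_rng(1)

def inner(x, y):
    # <x, y> = sum_k conj(y_k) * x_k (Example 1.1 convention)
    return np.sum(np.conj(y) * x)

def norm(x):
    return np.sqrt(inner(x, x).real)

def polarization(x, y):
    # Complex polarization identity of Theorem 1.2, case 2
    return 0.25 * (norm(x + y)**2 - norm(x - y)**2
                   + 1j * norm(x + 1j * y)**2 - 1j * norm(x - 1j * y)**2)

x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# The norm-only expression reproduces the inner product
assert np.isclose(polarization(x, y), inner(x, y))
```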

Definition 1.2. A complete inner product space is called a Hilbert space.

Remark 1.5. Every inner product space (V, 〈•, •〉) induces a unique norm on the vector space V defined over the field F. Since a norm is a valid metric in a vector space, every inner product space is a metric space. Therefore, the topology of an inner product space is the one generated by the metric d(x, y) ≜ ‖x ⊕ (−y)‖ = √〈(x ⊕ (−y)), (x ⊕ (−y))〉; for example, this is the usual topology in the Euclidean space R^n or the unitary space C^n. It is also seen in Chapter 1 that every metric space has an (essentially) unique completion; therefore, every inner product space has an (essentially) unique completion, which leads to the construction of a unique Hilbert space from the given inner product space. In the next lemma, the inner product 〈•, •〉 is viewed as a (continuous) mapping of the product space (V, 〈•, •〉) × (V, 〈•, •〉) into the scalar field F.

1.1 Orthogonality in Inner Product Spaces

The Hilbert space is a close cousin of the Euclidean space in the sense that the geometrical notions in these spaces are similar. The rationale for this similarity is largely due to the concept of orthogonality that is admissible in inner product spaces.

Definition 1.3. (Orthogonality) Let (V, 〈•, •〉) be an inner product space. Then, two vectors x, y ∈ V are said to be orthogonal (to each other) if 〈x, y〉 = 0. Orthogonality of x and y is denoted as x ⊥ y. Two subsets (not necessarily subspaces) A, B ⊆ V are said to be orthogonal (to each other) if 〈x, y〉 = 0 for all x ∈ A and all y ∈ B; this is denoted as A ⊥ B.

Lemma 1.4. (Pythagorean Theorem) Let (V, 〈•, •〉) be an inner product space and let x, y ∈ V. If x ⊥ y, then ‖x ⊕ y‖^2 = ‖x‖^2 + ‖y‖^2.

Proof. ‖x ⊕ y‖^2 = 〈(x ⊕ y), (x ⊕ y)〉 = ‖x‖^2 + 〈x, y〉 + 〈y, x〉 + ‖y‖^2 = ‖x‖^2 + ‖y‖^2 because 〈x, y〉 = 〈y, x〉 = 0.

Definition 1.4. (Direct Sum) Let V be a vector space and let U and W be two subspaces of V. Then, a subspace Y is the direct sum of U and W, denoted as Y = U ⊕ W, if every y ∈ Y has a unique representation y = u ⊕ w with u ∈ U and w ∈ W; and U is called the algebraic complement of W (alternatively, W is called the algebraic complement of U) in Y.

Definition 1.5. (Orthogonal Complement) Let (V, 〈•, •〉) be an inner product space and let ∅ ⊂ M ⊆ V. (Note that M may or may not be a vector space.) Then, the orthogonal complement of M in (V, 〈•, •〉) is defined as

M⊥ = { x ∈ V : x ⊥ y ∀y ∈ M }

i.e., M⊥ consists of all vectors in V that are orthogonal to every vector in M. An x ∈ M⊥ is denoted as x ⊥ M.

Remark 1.6. If M = {0_V}, then M⊥ = V; and if M = V, then M⊥ = {0_V}. Also note that

M ∩ M⊥ = {0_V} if 0_V ∈ M, and M ∩ M⊥ = ∅ if 0_V ∉ M.

  1. If x ∈ S⊥, then ∀y ∈ S it follows from the Pythagorean theorem that

‖x ⊕ (−y)‖^2 = ‖x‖^2 + ‖y‖^2 ≥ ‖x‖^2

Since S is dense in V, y can be chosen such that ‖x ⊕ (−y)‖ becomes arbitrarily small. Hence, ‖x‖ = 0. (Note that if S⊥ = {0_V}, then S is dense in V only if (V, 〈•, •〉) is a Hilbert space.)

Theorem 1.5. Let (H, 〈•, •〉) be a Hilbert space and let G be a subspace of H. Then, the following statements are valid.

  1. closure(G) = G⊥⊥.
  2. If G is closed in (H, 〈•, •〉), then G⊥⊥ = G.
  3. G⊥ = {0_H} if and only if G is dense in H.
  4. If G is closed in (H, 〈•, •〉) and if G⊥ = {0_H}, then G = H.

Note: These results do need completeness of the inner product space (H, 〈•, •〉).

Proof. The proof is as follows.

  1. It is known that G ⊆ closure(G). It follows from Theorem 1.4 that G ⊆ G⊥⊥, and it also follows from Theorem 1.3 that closure(G) ⊆ G⊥⊥. Now, if closure(G) ≠ G⊥⊥, then there is a nonzero vector z ∈ G⊥⊥ such that z ⊥ closure(G). Since G ⊆ closure(G), it follows that z ⊥ G, i.e., z ∈ G⊥. Therefore, z ∈ G⊥⊥ ∩ G⊥ ⇒ z = 0_H, which is a contradiction. Hence, closure(G) = G⊥⊥.
  2. The proof follows directly from part (1) above.
  3. The “if” part follows from Theorem 1.4 part (5). Now let G⊥ = {0_H}. Then, G⊥⊥ = H and it follows from part (1) that closure(G) = H. Hence, G is dense in H.
  4. This part follows from part (3) above.

Theorem 1.6. Let (H, 〈•, •〉) be a Hilbert space and let F and G be two closed subspaces of H. If F ⊥ G, then the direct sum F ⊕ G is a closed subspace of H. Note: This result does need completeness of the inner product space (H, 〈•, •〉).

Proof. Let {z_k} be a convergent sequence in F ⊕ G with z_k → z. We want to show that z ∈ F ⊕ G. Write z_k = x_k ⊕ y_k, where x_k ∈ F and y_k ∈ G. Since F ⊥ G, it follows from the Pythagorean theorem that

‖z_k − z_ℓ‖^2 = ‖x_k − x_ℓ‖^2 + ‖y_k − y_ℓ‖^2

Hence, {x_k} and {y_k} are Cauchy sequences in F and G, respectively. Since F and G are closed subspaces of the complete space H, there exist x ∈ F and y ∈ G such that x_k → x and y_k → y. It follows from continuity of addition that z = x ⊕ y ∈ F ⊕ G. Therefore, F ⊕ G is closed in H.

Theorem 1.7. (Projection Theorem: Version 1) Let (H, 〈•, •〉) be a Hilbert space and let G be a closed subspace of H. Then,

  1. H = G ⊕ G⊥.
  2. Each x ∈ H can be uniquely expressed as x = y ⊕ z, where y ∈ G and z ∈ G⊥, and ‖x‖^2 = ‖y‖^2 + ‖z‖^2.

Note: This result does need completeness of the inner product space (H, 〈•, •〉) and closedness of G in H.

Proof. It follows from Theorem 1.6 that G ⊕ G⊥ is a closed subspace of H. Since G ⊆ G ⊕ G⊥ and G⊥ ⊆ G ⊕ G⊥, it follows from Theorem 1.4 and Theorem 1.5 that H = G ⊕ G⊥.

See Naylor & Sell: Proof of Theorem 5.15.6 in Page 298 for more details.

1.2 Orthogonal Projection

Definition 1.6. (Orthogonal Projection) A projection P on an inner product space is called an orthogonal projection if the range space and null space of P are orthogonal, i.e., R(P) ⊥ N(P).

Remark 1.7. It follows from Definition 1.6 that if P is an orthogonal projection, then so is I − P.

Theorem 1.8. (Continuity of Orthogonal Projection) An orthogonal projection in an inner product space is continuous.

Proof. See Naylor & Sell: Proof of Theorem 5.16.2 in Page 300.

Theorem 1.9. Let P be an orthogonal projection on an inner product space V. Then,

Theorem 1.11. (Uniqueness of the minimizing vector) Let V be an inner product space and let W be a subspace of V. Given an arbitrary vector y ∈ V, if there exists a vector x̂ ∈ W such that ‖y − x̂‖ ≤ ‖y − x‖ ∀x ∈ W, then x̂ ∈ W is unique. A necessary and sufficient condition for x̂ to be the unique minimizing vector in W is that the error vector (y − x̂) is orthogonal to W.

Proof. First we argue by contradiction to show that if x̂ ∈ W is a minimizing vector, then the error vector (y − x̂) is orthogonal to W. Let us assume that there exists an x ∈ W \ {0_V} that is not orthogonal to (y − x̂). It is noted that ‖x‖ > 0; let ε ≜ 〈(y − x̂), x〉/‖x‖^2; obviously, ε ≠ 0. Let x̃ ≜ x̂ + εx. Then,

‖y − x̃‖^2 = ‖y − x̂ − εx‖^2 = ‖y − x̂‖^2 − 〈(y − x̂), εx〉 − 〈εx, (y − x̂)〉 + |ε|^2 ‖x‖^2

⇒ ‖y − x̃‖^2 = ‖y − x̂‖^2 − |ε|^2 ‖x‖^2 < ‖y − x̂‖^2

Thus, if (y − x̂) is not orthogonal to W, then x̂ is not a minimizing vector. Next we show that if (y − x̂) is orthogonal to W, then x̂ is the unique minimizing vector. For any x ∈ W, the Pythagorean theorem yields

‖y − x‖^2 = ‖(y − x̂) + (x̂ − x)‖^2 = ‖y − x̂‖^2 + ‖x̂ − x‖^2

Thus, ‖y − x‖ > ‖y − x̂‖ ∀x ≠ x̂.

Next we address the conditions for existence of the minimizing vector by strengthening the above theorem on uniqueness. Thus, we will satisfy the criteria for both existence and uniqueness of the minimizing vector.

Theorem 1.12. (Existence and uniqueness of the minimizing vector) Let H be a Hilbert space and G be a closed subspace of H. Then, corresponding to any vector y ∈ H, there exists a unique vector x̂ ∈ G such that ‖y − x̂‖ ≤ ‖y − x‖ ∀x ∈ G. Furthermore, a necessary and sufficient condition that x̂ ∈ G be the unique minimizing vector is that (y − x̂) be orthogonal to G.

Proof. The uniqueness and orthogonality have been established in Theorem 1.11. What remains to be established is existence of the minimizing vector x̂ ∈ G. If y ∈ G, then trivially x̂ = y; so let y ∉ G and δ ≜ inf_{x∈G} ‖y − x‖. In order to identify an x̂ ∈ G such that ‖y − x̂‖ = δ, we proceed as follows. Let {x_k} be a sequence of vectors in G such that lim_{k→∞} ‖y − x_k‖ = δ. Then, it follows by using the parallelogram law that

‖(x_k − y) + (y − x_ℓ)‖^2 + ‖(x_k − y) − (y − x_ℓ)‖^2 = 2 ( ‖x_k − y‖^2 + ‖y − x_ℓ‖^2 )

A rearrangement of the above expression yields

‖x_k − x_ℓ‖^2 = 2‖x_k − y‖^2 + 2‖y − x_ℓ‖^2 − 4 ‖y − (x_k + x_ℓ)/2‖^2

Since the vector (x_k + x_ℓ)/2 ∈ G ∀k, ℓ ∈ N, it follows that ‖y − (x_k + x_ℓ)/2‖ ≥ δ. Therefore,

‖x_k − x_ℓ‖^2 ≤ 2‖x_k − y‖^2 + 2‖y − x_ℓ‖^2 − 4δ^2

As ‖y − x_k‖^2 → δ^2 and ‖y − x_ℓ‖^2 → δ^2, it follows that lim_{k,ℓ→∞} ‖x_k − x_ℓ‖^2 = 0. Therefore, {x_k} is a Cauchy sequence in G, which is a closed (and hence complete) subspace of the Hilbert space H. Therefore, {x_k} converges to a limit point in G and, by continuity of the norm, that limit is the desired x̂ ∈ G.

Remark 1.8. Although the proof of the above theorem does not make any explicit reference to the inner product, it does make use of the parallelogram law that, in turn, makes use of the inner product for its proof.
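In finite dimensions, the minimizing vector of Theorem 1.12 is exactly the least squares solution, and the orthogonality condition of Theorem 1.11 is the familiar normal equations. The following NumPy sketch (an illustration added to these notes; the subspace G spanned by the columns of a random matrix is an arbitrary choice) checks both facts.

```python
import numpy as np

rng = np.random.default_rng(2)

# A 2-dimensional subspace G of R^5, spanned by the columns of A
A = rng.standard_normal((5, 2))
y = rng.standard_normal(5)

# Orthogonal projection of y onto G via least squares: xhat = argmin ||y - A c||
c, *_ = np.linalg.lstsq(A, y, rcond=None)
xhat = A @ c

# Orthogonality condition (Theorem 1.11): the error (y - xhat) is
# orthogonal to every spanning vector of G, i.e., A^T (y - xhat) = 0
assert np.allclose(A.T @ (y - xhat), 0)

# Minimality (Theorem 1.12): xhat beats any other candidate in G
for _ in range(100):
    other = A @ rng.standard_normal(2)
    assert np.linalg.norm(y - xhat) <= np.linalg.norm(y - other) + 1e-12
```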

2 Orthonormal Sets and Bases

We have presented the notion of Hamel basis in Chapter 02 and Schauder basis in Chapter 03, which involve only algebraic notions. Now we incorporate both algebraic and topological notions in the formulation of a basis.

Definition 2.1. (Orthogonal Set) Let I be an arbitrary nonempty index set (i.e., finite or countable or uncountable). A set {x^α : α ∈ I} in an inner product space is called orthogonal if ∀α ≠ β, x^α ⊥ x^β, i.e., 〈x^α, x^β〉 = 0. In addition, if 〈x^α, x^α〉 = 1 ∀α ∈ I, i.e., 〈x^α, x^β〉 = δ_αβ ∀α, β ∈ I, then the set {x^α} is called orthonormal. Note that

δ_αβ ≜ 1 if α = β, and δ_αβ ≜ 0 if α ≠ β

is called the Kronecker delta. Also note that any orthogonal set of non-zero vectors can be converted into an orthonormal set by replacing each x^α with x^α/‖x^α‖.

Lemma 2.1. Every orthonormal set of vectors in an inner product space is linearly independent.

Proof. Let {x^1, · · · , x^n} be a finite subset of the orthonormal set and suppose ∑_{j=1}^{n} α_j x^j = 0_V. Taking the inner product of both sides with x^k and making use of the orthonormality 〈x^j, x^k〉 = δ_jk yields α_k = 0 for each k ∈ {1, · · · , n}. Hence, the set is linearly independent.

Lemma 2.2. (Bessel Inequality) Let {x^k} be an orthonormal set in an inner product space V. Then, ∑_k |〈x, x^k〉|^2 ≤ ‖x‖^2 ∀x ∈ V.

Proof. Let {x^1, · · · , x^n} be a finite subset from {x^k}. Then, by making use of the orthonormality 〈x^j, x^k〉 = δ_jk, it follows that

0 ≤ ‖x − ∑_{j=1}^{n} 〈x, x^j〉x^j‖^2
  = 〈x, x〉 − ∑_{j=1}^{n} 〈x, x^j〉〈x^j, x〉 − ∑_{k=1}^{n} ¯〈x, x^k〉〈x, x^k〉 + ∑_{j=1}^{n} ∑_{k=1}^{n} 〈x, x^j〉 ¯〈x, x^k〉〈x^j, x^k〉
  = ‖x‖^2 − ∑_{k=1}^{n} |〈x, x^k〉|^2

Since the above inequality is true for all n ∈ N, it must also hold for a (countable) infinite sum. Hence, the proof is established.
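The Bessel inequality and the linear independence of orthonormal sets can be checked numerically. In the NumPy sketch below (an illustration added to these notes), the orthonormal set is obtained from a QR decomposition, which is merely a convenient implementation choice.

```python
import numpy as np

rng = np.random.default_rng(3)

# An orthonormal set of n < d vectors in R^d via QR decomposition
d, n = 8, 3
Q, _ = np.linalg.qr(rng.standard_normal((d, n)))  # columns of Q are orthonormal

x = rng.standard_normal(d)
coeffs = Q.T @ x                  # the Fourier coefficients <x, x^k>

# Bessel inequality: sum_k |<x, x^k>|^2 <= ||x||^2
assert np.sum(coeffs**2) <= np.dot(x, x) + 1e-12

# Lemma 2.1: an orthonormal set is linearly independent (full column rank)
assert np.linalg.matrix_rank(Q) == n
```

The inequality is strict here because the three basis vectors do not span R^8; equality would correspond to the norm decomposition of Theorem 2.1.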

Theorem 2.1. (Fourier Series Theorem) Let {x^k} be an orthonormal set in a Hilbert space H. Then, the following statements are equivalent.

(a) (Orthonormal basis) {x^k} is an orthonormal basis of H.

(b) (Fourier series expansion) Every x ∈ H can be expanded as x = ∑_k 〈x, x^k〉x^k.

(c) (Parseval equality) ∀x, y ∈ H, the following relation holds: 〈x, y〉 = ∑_k 〈x, x^k〉 ¯〈y, x^k〉.

(d) (Norm decomposition) The norm of every x ∈ H can be decomposed as ‖x‖^2 = ∑_k |〈x, x^k〉|^2 < ∞.

(e) (Dense subspace of H) Let V be a subspace of H such that {x^k} ⊂ V. Then, V is dense in H, i.e., closure(V) = H.

Proof. Following Naylor & Sell (pp. 307-312), the proof is completed by showing the equivalence of individual parts in the following order: (a) ⇒ (b) ⇒ (c) ⇒ (d) ⇒ (a) and (b) ⇒ (e) ⇒ (b).

Corollary 2.1. Let V be a closed subspace of a Hilbert space H, where V ≠ {0_H}; and let {x^k} be an orthonormal basis of V. Then, the orthogonal projection P of H onto V is given by P x = ∑_k 〈x, x^k〉x^k ∀x ∈ H.

Proof. The proof follows from Theorem 2.1.

2.1.1 Gram-Schmidt Procedure

Let V be an inner product space and let {y^k} be a linearly independent set with countable cardinality in V. The objective here is to construct, from {y^k}, an orthonormal sequence that spans the same subspace; the Bessel inequality in Lemma 2.2 is useful in this context. Now we present the following theorem.

Theorem 2.2. (Gram-Schmidt Orthonormalization) Let {x^k} be an at most countable (i.e., finite or countably infinite) set of linearly independent vectors in an inner product space V. Then, there exists an orthonormal sequence {e^k} ⊂ V such that, for every n ∈ N, Span[e^1, · · · , e^n] = Span[x^1, · · · , x^n].

Proof. First let us normalize the first vector x^1 as e^1 ≜ x^1/‖x^1‖, which obviously spans the same space as x^1 does. Next we construct z^2 ≜ x^2 − 〈x^2, e^1〉e^1. Note that z^2 ≠ 0_V because x^1 and x^2 are linearly independent. Now, let e^2 ≜ z^2/‖z^2‖, which assures 〈e^2, e^1〉 = 0 and 〈e^2, e^2〉 = 1, and Span[e^1, e^2] = Span[x^1, x^2]. Proceeding in this way, the remaining e^k’s are generated by induction as follows: given z^k ≜ x^k − ∑_{j=1}^{k−1} 〈x^k, e^j〉e^j, set e^k ≜ z^k/‖z^k‖ for all k ∈ {3, · · · , n}. Since 〈e^j, e^k〉 = δ_jk ∀j, k ∈ {1, · · · , n}, we have 〈z^{n+1}, e^k〉 = 0, i.e., z^{n+1} is orthogonal to Span[e^1, · · · , e^n] = Span[x^1, · · · , x^n]. Next, define e^{n+1} ≜ z^{n+1}/‖z^{n+1}‖. The proof by induction is thus completed.
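The inductive construction in the proof translates directly into code. Below is a minimal NumPy sketch (added for illustration; the function name and the test vectors are arbitrary choices) for the Euclidean inner product on R^d.

```python
import numpy as np

def gram_schmidt(X):
    """Orthonormalize the linearly independent rows of X (Theorem 2.2)."""
    E = []
    for x in X:
        # z^k = x^k - sum_j <x^k, e^j> e^j : remove components along earlier e's
        z = x - sum(np.dot(x, e) * e for e in E)
        E.append(z / np.linalg.norm(z))
    return np.array(E)

X = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(X)

# Rows of E are orthonormal: E E^T = I
assert np.allclose(E @ E.T, np.eye(3))

# Span is preserved at every step: e^1..e^n and x^1..x^n span the same space
for n in range(1, 4):
    assert np.linalg.matrix_rank(np.vstack([E[:n], X[:n]])) == n
```

In floating point arithmetic this classical form can lose orthogonality for ill-conditioned inputs; the modified Gram-Schmidt variant, which subtracts each projection immediately, is the numerically preferred formulation.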

Example 2.1. (Orthogonal polynomials) Let T = (−1, 1) and let us consider the (linearly independent) sequence of vectors {t^k : k ∈ N ∪ {0} and t ∈ T} in the inner product space L_2(T) over the real field R, where the inner product is defined as 〈x, y〉 ≜ ∫_T dt x(t) y(t) ∀x, y ∈ L_2(T). Then, Gram-Schmidt orthonormalization of the sequence {t^k} generates a set of orthonormal polynomials, namely the (normalized) Legendre polynomials, as seen below. Noting that 〈1, 1〉 = ∫_{−1}^{1} dt = 2 ⇒ ‖1‖ = √2, we have e^1 = t^0/‖t^0‖ = 1/√2. Next, z^2 ≜ t − 〈t, e^1〉e^1 = t − (1/2) ∫_{−1}^{1} t dt = t ⇒ e^2 = z^2/‖z^2‖ = √(3/2) t. Then, z^3 ≜ t^2 − 〈t^2, 1/√2〉(1/√2) − 〈t^2, √(3/2) t〉 √(3/2) t = t^2 − 1/3 ⇒ e^3 = z^3/‖z^3‖ = √(45/8) (t^2 − 1/3); and so on.

There are many other families of orthonormal polynomials that can be generated by Gram-Schmidt orthonormalization of the (linearly independent) sequence of vectors {t^k : k ∈ N ∪ {0} and t ∈ T} on different inner product spaces.
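The three normalized Legendre polynomials derived above can be verified by numerical quadrature. The NumPy sketch below (an illustration added to these notes) approximates the integral inner product on (−1, 1) by the trapezoidal rule on a fine grid and checks orthonormality.

```python
import numpy as np

# Fine grid on T = (-1, 1); trapezoidal approximation of <x, y> = ∫ x(t) y(t) dt
t = np.linspace(-1.0, 1.0, 20001)

def inner(x, y):
    v = x(t) * y(t)
    return float(np.sum((v[:-1] + v[1:]) * 0.5) * (t[1] - t[0]))

# The orthonormal polynomials computed in Example 2.1
e1 = lambda s: np.full_like(s, 1.0 / np.sqrt(2.0))
e2 = lambda s: np.sqrt(3.0 / 2.0) * s
e3 = lambda s: np.sqrt(45.0 / 8.0) * (s**2 - 1.0 / 3.0)

# Unit norms ...
for f in (e1, e2, e3):
    assert np.isclose(inner(f, f), 1.0, atol=1e-6)
# ... and pairwise orthogonality
for f, g in ((e1, e2), (e1, e3), (e2, e3)):
    assert np.isclose(inner(f, g), 0.0, atol=1e-6)
```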

  1. Let n ∈ N. If f is a C∞ function, then the Fourier transform of ∂^n f(t)/∂t^n is (i2πξ)^n f̂(ξ); similarly, the Fourier transform of (−i2πt)^n f(t) is ∂^n f̂(ξ)/∂ξ^n, provided that the Fourier transforms exist.

  2. The Fourier transform of ∫_{−∞}^{t} dμ(τ) f(τ) is f̂(ξ)/(i2πξ), provided that f̂(0) = 0.

  3. Let n ∈ N ∪ {0}. Then, (−i2π)^n ∫_{−∞}^{∞} dμ(τ) τ^n f(τ) = ∂^n f̂/∂ξ^n |_{ξ=0}, provided that the integral and the derivatives exist.

Also note that if f ∈ L_1(μ) and f̂(ξ) = 0 ∀ξ ∈ R, then f(t) = 0 almost everywhere (μ-a.e.) on R.

Next we introduce the Plancherel Theorem. Since the Lebesgue measure μ of R is not finite (it is σ-finite), we cannot claim L_2(μ) to be a subspace of L_1(μ); hence, the Fourier transform in Definition 2.4 is not directly applicable to every f ∈ L_2(μ). However, the definition does apply if f ∈ L_1(μ) ∩ L_2(μ), and it turns out that f̂ ∈ L_2(μ); in fact, ‖f̂‖_{L_2} = ‖f‖_{L_2}. This isometry of L_1(μ) ∩ L_2(μ) into L_2(μ) extends to an isometry of L_2(μ) onto L_2(μ), and this extension of the Fourier transform is often called the Plancherel transform; it is applicable to every f ∈ L_2(μ). The resulting L_2-theory has more symmetry than the L_1-theory; in other words, f and f̂ play the same role in L_2(μ).

Theorem 2.3. (Plancherel Theorem) Each function f ∈ L_2(μ) can be associated with a function f̂ ∈ L_2(μ) such that the following properties hold:

  1. If f ∈ L_1(μ) ∩ L_2(μ), then f̂ is the Fourier transform of f, i.e., f̂(ξ) = ∫_R dt exp(−i2πξt) f(t) ∀ξ ∈ R.
  2. ‖f̂‖_{L_2} = ‖f‖_{L_2} for every f ∈ L_2(μ).
  3. The mapping f ↦ f̂ is a Hilbert space isomorphism of L_2(μ) onto L_2(μ).
  4. The following symmetric relation exists between f and f̂: if φ_A(ξ) ≜ ∫_{−A}^{A} dμ(t) exp(−i2πξt) f(t) and ψ_A(t) ≜ ∫_{−A}^{A} dμ(ξ) exp(i2πξt) f̂(ξ), then ‖φ_A − f̂‖_{L_2} → 0 and ‖ψ_A − f‖_{L_2} → 0 as A → ∞.

Proof. See Real and Complex Analysis by Walter Rudin (pp. 187-189).

2.1.3 Hilbert Dimension and Separable Hilbert Spaces

Let us classify Hilbert spaces (over the same field) by the cardinality of the orthonormal basis set. Let B_1 and B_2 be two orthonormal bases of a Hilbert space H. Since B_1 and B_2 must have the same cardinal number, this cardinal number is called the Hilbert dimension of H. For finite-dimensional spaces H, the Hilbert dimension of H is the same as the dimension related to a Hamel basis. However, for infinite-dimensional Hilbert spaces, the Hilbert dimension could be countable or uncountable. An example of countable Hilbert dimension is the Hilbert space of periodic functions L_2[−π, π].

Example 2.2. The Hilbert space ℓ_2 is isometrically isomorphic to the Hilbert space L_2[−π, π]. That is, there exists a linear bijective mapping f : ℓ_2 → L_2[−π, π] such that the inner products satisfy the following equality:

〈x, x̃〉_{ℓ_2} = 〈f(x), f(x̃)〉_{L_2[−π,π]} ∀x, x̃ ∈ ℓ_2

To see this, we proceed as follows. Let {x^k} be an orthonormal basis for L_2[−π, π] and let f : ℓ_2 → L_2[−π, π] be defined as f(a) = ∑_{k∈N} a_k x^k. The details are worked out in Example 2.4 below.

We cite below an example of a Hilbert space having uncountable Hilbert di- mension.

Example 2.3. (Uncountable Hilbert dimension) Let us consider a trigonometric polynomial p : R → C such that ∀t ∈ R, p(t) ≜ ∑_{k=1}^{m} c_k exp(i r_k t) for some m ∈ N, where r_1, · · · , r_m ∈ R and c_1, · · · , c_m ∈ C. Let p̃ : R → C be another trigonometric polynomial defined as ∀t ∈ R, p̃(t) ≜ ∑_{k=1}^{m̃} c̃_k exp(i r̃_k t) for some m̃ ∈ N, where r̃_1, · · · , r̃_m̃ ∈ R and c̃_1, · · · , c̃_m̃ ∈ C. Then, it follows that

p(t) ¯p̃(t) = ∑_{j=1}^{m} ∑_{k=1}^{m̃} c_j ¯c̃_k exp( i(r_j − r̃_k)t )

We define an inner product on the space of trigonometric polynomials as

〈p, p̃〉 ≜ lim_{T→∞} (1/2T) ∫_{−T}^{T} dt p(t) ¯p̃(t) = ∑_{j,k : r_j = r̃_k} c_j ¯c̃_k

because

lim_{T→∞} (1/2T) ∫_{−T}^{T} dt exp(irt) = 1 if r = 0, and 0 if r ∈ R \ {0}

The completion of the inner product space of trigonometric polynomials is a Hilbert space called the space of almost periodic functions. Let us call this Hilbert space H_trigP. For each r ∈ R, let us consider the function u_r(t) ≜ exp(irt), t ∈ R.

Let x̃ = ∑_{k=1}^{n} γ_k^n e^k be the orthogonal decomposition of x̃ ∈ A. Then,

‖x − x̃‖ = ‖x − ∑_{k=1}^{n} γ_k^n e^k‖ ≤ ‖x − ∑_{k=1}^{n} 〈x, e^k〉e^k‖ + ‖∑_{k=1}^{n} 〈x, e^k〉e^k − ∑_{k=1}^{n} γ_k^n e^k‖ < ε/2 + ε/2 = ε

which proves that A is dense in H; and since A is countable, H is separable.

Example 2.4. The spaces ℓ_2 and L_2([−π, π]), defined over the same field F, have the same Hilbert dimension. It is also true that ℓ_2 and L_2([−π, π]) are isometrically isomorphic, as explained below. Let {x^k} be an orthonormal basis for the Hilbert space L_2([−π, π]) and let f : ℓ_2 → L_2([−π, π]) be defined as follows:

f(a) ≜ ∑_{k=1}^{∞} a_k x^k, where a ≜ {a_1, a_2, · · · } ∈ ℓ_2.

Now we show that f is linear and bijective. Linearity is established as follows:

f(αa + b) = ∑_{k=1}^{∞} (αa + b)_k x^k = α ∑_{k=1}^{∞} a_k x^k + ∑_{k=1}^{∞} b_k x^k = αf(a) + f(b)

To establish injectivity of f, let f(a) = f(b). Since {x^k} is orthonormal, it follows that

0 = ‖f(a) − f(b)‖^2 = ‖∑_{k=1}^{∞} (a − b)_k x^k‖^2 = ∑_{k=1}^{∞} |(a − b)_k|^2 ‖x^k‖^2 = ∑_{k=1}^{∞} |(a − b)_k|^2

which implies that a = b. To establish surjectivity of f and isometry of the spaces ℓ_2 and L_2([−π, π]), let x ∈ L_2([−π, π]). Then, it follows from the Fourier series theorem that x = ∑_{k=1}^{∞} 〈x, x^k〉 x^k. Setting a_k = 〈x, x^k〉, it follows from orthonormality of {x^k} that

‖x‖^2 = ‖∑_{k=1}^{∞} a_k x^k‖^2 = ∑_{k=1}^{∞} |a_k|^2 ‖x^k‖^2 = ∑_{k=1}^{∞} |a_k|^2 = ‖a‖^2

which implies that, for each x ∈ L_2([−π, π]), there exists an a ∈ ℓ_2 such that f(a) = x and ‖a‖_{ℓ_2} = ‖x‖_{L_2([−π,π])}.

3 Applications of Hilbert Spaces

Hilbert spaces have numerous applications in Science and Engineering. In modern Physics (which started in the early 1920s), the mathematics of Quantum Mechanics is built upon the concept of Hilbert spaces over the complex field C. While there is an abundance of literature on the mathematics of Quantum Mechanics, Chapter 11 of Introductory Functional Analysis with Applications by E. Kreyszig provides a brief explanation of the underlying concepts. In this section, we provide a few examples of applications of Hilbert spaces in Signal Processing and Quantum Mechanics.

3.1 Shannon Sampling: Bandlimited Signals

Sampling of a continuous-time signal is a key step in discrete-time signal processing. In engineering practice, sampling is accomplished by analog-to-digital (A/D) conversion of continuously varying signals that are generated by various sensors. It is apparent that the sampling frequency should be selected depending on the bandwidth of the signal and the sensor that is used to measure the signal. For example, if the sampling is performed infrequently, the vital information content of the signal might be lost. On the other hand, a very high sampling frequency may generate excessively redundant data that would require additional memory and computation time. We present the analytical relationship between the signal bandwidth and the smallest sampling frequency for no loss of information.

Theorem 3.1. (The Shannon Sampling Theorem) Let a (real-valued) signal f ∈ L_2(R) be band-limited by Ω, i.e., f̂(ξ) = 0 ∀ξ ∉ [−Ω, Ω]. Then, f(t) can be perfectly reconstructed from its samples f(t_k), collected at time instants t_k ≜ k/(2Ω), k ∈ Z, by the following interpolation formula:

f(t) = ∑_{k∈Z} [ sin(2πΩ(t − t_k)) / (2πΩ(t − t_k)) ] f(t_k)

Proof. By the Plancherel theorem, the L_2-norm of the Fourier transform is expressed as:

‖f̂‖^2_{L_2(−Ω,Ω)} = ∫_{−Ω}^{Ω} dξ |f̂(ξ)|^2 = ∫_{−∞}^{∞} dξ |f̂(ξ)|^2 = ‖f̂‖^2_{L_2(R)} = ‖f‖^2_{L_2(R)}
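The interpolation formula of Theorem 3.1 can be demonstrated numerically. The NumPy sketch below (added for illustration; the band limit Ω = 4 Hz and the test signal are arbitrary choices, and the infinite sum over k ∈ Z is truncated to a finite window) reconstructs a band-limited signal from its samples to within a small truncation error.

```python
import numpy as np

Omega = 4.0                        # band limit (Hz): fhat(xi) = 0 for |xi| > Omega

def f(t):
    # A band-limited test signal: sinusoids at 1.5 Hz and 3.0 Hz, both below Omega
    return np.sin(2 * np.pi * 1.5 * t) + 0.5 * np.cos(2 * np.pi * 3.0 * t)

# Samples at t_k = k / (2 Omega), truncated to a finite window of k
k = np.arange(-2000, 2001)
tk = k / (2 * Omega)

def reconstruct(t):
    # f(t) ≈ sum_k sinc(2 Omega (t - t_k)) f(t_k),
    # where np.sinc(x) = sin(pi x)/(pi x), so the argument matches Theorem 3.1
    return np.sum(np.sinc(2 * Omega * (t - tk)) * f(tk))

# The truncated sum matches the signal at points well inside the sample window
for t in (0.123, -0.77, 2.5):
    assert abs(reconstruct(t) - f(t)) < 1e-2
```

The slow 1/t decay of the sinc kernel is why a generous truncation window is needed; practical systems sample slightly above the Nyquist rate 2Ω so that faster-decaying interpolation filters can be used.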