







Material Type: Assignment; Class: Applied Multivariate Analysis I; Subject: Statistics; University: Temple University; Term: Unknown 1989;
Exercises 2.1, 2.6, 2.7 and 5.1 (from the sixth edition of the book)
Exercise 2.1
Let x’ = [5, 1, 3] and y’ = [-1, 3, 1].
a) Graph the two vectors.
See Attached Manual Graph
b) Find i) the length of x , ii) the angle between x and y , and iii) the projection of y on x.
i) To find the length of x, we must first determine x'x, because Lx = sqrt(x'x):

x'x = 5^2 + 1^2 + 3^2 = 25 + 1 + 9 = 35, so Lx = sqrt(35) = 5.916.

Similarly, to find the length of y, we must first determine y'y, because Ly = sqrt(y'y):

y'y = (-1)^2 + 3^2 + 1^2 = 1 + 9 + 1 = 11, so Ly = sqrt(11) = 3.317.
ii) Since we know that cos θ = x'y / (Lx Ly), if we know the values of x'y, Lx, and Ly, we can solve for θ. We have already calculated Lx and Ly, so all there is left to do is to find x'y.

x'y = the inner product of x and y, which is defined as:

x'y = x1y1 + x2y2 + … + xnyn

x'y = 5(-1) + 1(3) + 3(1) = -5 + 3 + 3 = 1

cos θ = 1 / (5.916 × 3.317) = 0.05097

We can calculate the angle in Excel using the ACOS function, which gives us the Arccosine, or Inverse Cosine, of 0.05097:

=ACOS(0.05097) = 1.5198

This is the angle measured in radians. To convert it to degrees, we multiply it by 180 and divide by π, giving θ ≈ 87.08°. (Similarly, if we ever had to calculate radians in Excel in order to find cos θ, we would multiply the angle in degrees by π/180.)
Stat 8108 – Fall ’07 Dr. Sarkar
iii) The projection of vector y on vector x is the "shadow" that vector y would leave on vector x if the shadow fell perpendicularly onto vector x from the tip of vector y. Think of vector y as the hypotenuse of a right triangle formed with such a projection. Since θ is the angle between vector x and vector y, cos θ = projection / Ly, so the length of the projection = (cos θ) · Ly = 0.05097 × 3.317 ≈ 0.169. Equivalently, since Lx = sqrt(x'x) = sqrt(35), the projection vector itself is (x'y / x'x) x = (1/35)[5, 1, 3]', and its length is x'y / Lx = 1/sqrt(35) ≈ 0.169.
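As a quick numeric check of parts i)-iii) above, here is a small Python sketch (not part of the original assignment; the variable names are mine):

```python
import math

# Exercise 2.1(b): lengths, angle, and projection of y on x
x = [5, 1, 3]
y = [-1, 3, 1]

xx = sum(a * a for a in x)             # x'x = 35
yy = sum(b * b for b in y)             # y'y = 11
xy = sum(a * b for a, b in zip(x, y))  # x'y = 1

Lx = math.sqrt(xx)                     # length of x, ~5.916
Ly = math.sqrt(yy)                     # length of y, ~3.317
cos_theta = xy / (Lx * Ly)             # ~0.05097
theta_deg = math.degrees(math.acos(cos_theta))  # angle between x and y, in degrees

proj = [(xy / xx) * a for a in x]      # projection vector of y on x = (1/35) x
proj_len = xy / Lx                     # its length, x'y / Lx
```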
c) Since the means of the entries are x̄ = 3 and ȳ = 1, graph the deviation vectors [5-3, 1-3, 3-3] = [2, -2, 0] and [-1-1, 3-1, 1-1] = [-2, 2, 0].
See Attached Manual Graph
Exercise 2.6

Let A = [ 9  -2 ]
        [ -2  6 ]
a) Is A symmetric?
In order for a square matrix to be symmetric, it must be symmetric along its main diagonal
(which, in linear algebra, is the diagonal that runs from the top left to the bottom right i.e.
NW-SE). Being symmetric along the main diagonal means the matrix is equal to its
transpose. I.e. A must be equal to A', or a_ij = a_ji for all values of i and j (Johnson, p.57). The transpose operation of a matrix changes the matrix's columns into rows such that the 1st column becomes the 1st row, the 2nd column becomes the 2nd row, etc. (Johnson, p.55). In other words, the transpose of a matrix is what happens to the matrix when you "spin" it along its main diagonal. (Note: such "spinning" would happen over 180° in a 3-dimensional sense, although both the original matrix and the transpose matrix are in the same 2 dimensions.)
Let's take a closer look at the transpose operation in the case of matrix A. When we transpose, the 1st column of A becomes the 1st row of A', and the 2nd column of A becomes the 2nd row of A':

A = [ 9  -2 ]    A' = [ 9  -2 ]
    [ -2  6 ]         [ -2  6 ]

Since A' = A, the answer is yes: A is symmetric.
Exercise 2.7

a) Determine the Eigenvalues and Eigenvectors of A.

When a matrix is multiplied by a scalar c, each element of the product is simply the scalar multiplied by the corresponding element of the original matrix (e.g. c·a_ij).

For example, let's look at our matrix A. Since A is a 2×2 matrix, its corresponding identity matrix I is also a 2×2 matrix:

I = [ 1  0 ]
    [ 0  1 ]

Using scalar multiplication, λI is thus simply:

λI = [ λ  0 ]
     [ 0  λ ]

Now we want to find Eigenvalues λ such that |A - λI| = 0, i.e.:

| 9-λ   -2  |
| -2   6-λ | = 0
Now how do we solve for λ? λ must satisfy the definition of the determinant of a square k×k matrix A (Johnson, p.93):

|A| = Σ_j a_1j |A_1j| (-1)^(1+j), summed over j = 1, …, k, if k > 1.

This equation simplifies to the following if we're dealing with a 2×2 matrix:

| a11  a12 |
| a21  a22 | = a11 a22 - a12 a21

So:

(9 - λ)(6 - λ) - (-2)(-2) = 0
λ² - 15λ + 54 - 4 = 0
λ² - 15λ + 50 = 0
This is taking the form of the quadratic equation, where a = 1, b = -15, and c = 50:

λ = [-b ± sqrt(b² - 4ac)] / (2a)
  = [-(-15) ± sqrt((-15)² - 4·1·50)] / (2·1)
  = [15 ± sqrt(225 - 200)] / 2
  = [15 ± sqrt(25)] / 2
  = (15 ± 5) / 2
  = 10 or 5

I.e. the Eigenvalues are λ1 = 10 and λ2 = 5.
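The quadratic-formula arithmetic above can be verified with a few lines of Python (a sketch added for checking, not part of the original solution):

```python
import math

# Roots of the characteristic equation lambda^2 - 15*lambda + 50 = 0
a, b, c = 1.0, -15.0, 50.0
disc = b * b - 4 * a * c                   # discriminant: 225 - 200 = 25
root1 = (-b + math.sqrt(disc)) / (2 * a)   # larger Eigenvalue
root2 = (-b - math.sqrt(disc)) / (2 * a)   # smaller Eigenvalue
```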
Eigenvectors are defined as follows (Blyth, p.148; Johnson, p.98): remember that an Eigenvalue of matrix A is a scalar λ for which there exists a non-zero n×1 matrix x such that Ax = λx. Such a (column) matrix x is called an Eigenvector (or "latent vector" or "characteristic vector") associated with λ. Eigenvectors are by definition non-zero. I.e. if A is a square k×k matrix, λ is an Eigenvalue of A, and x is a nonzero vector (x ≠ 0) such that Ax = λx, then x is an Eigenvector of matrix A.

For a 2×2 matrix, the Eigenvectors can be found by plugging our matrix A and the Eigenvalues we've found (λ1 and λ2) into the equation Ax = λx, and then solving for the corresponding (Eigen)vectors x. Note that for a 2×2 matrix, the Eigenvector dimensions will be 2×1.
I.e. we want to find a solution (for x1 and x2), not all zero, such that (Johnson, pp.98-99):

Ax = [ 9  -2 ] [ x1 ]  =  λ1 x  or  λ2 x
     [ -2  6 ] [ x2 ]

In the case of the 1st Eigenvalue, Ax = λ1 x. Since we know that A = [9 -2; -2 6] and that λ1 = 10, Ax = λ1 x is simply:

[ 9  -2 ] [ x1 ]  =  10 · [ x1 ]  =  [ 10x1 ]
[ -2  6 ] [ x2 ]          [ x2 ]     [ 10x2 ]
Remember that when multiplying matrices like those on the left side of the equation, we multiply the 1st row of matrix A by the 1st column of vector x to get the 1st element of the new matrix. Then we multiply the 2nd row of matrix A by the 1st column of vector x to get the 2nd element of the new matrix. (Johnson, pp.90-91.) To multiply a row by a column, find the sum of the individual products of corresponding elements. I.e. the 1st element of the new matrix = (element 1 of row 1 of matrix A)·(element 1 of column 1 of vector x) + (element 2 of row 1 of matrix A)·(element 2 of column 1 of vector x). And likewise for the 2nd element of the new matrix.
[ 9x1 - 2x2  ]  =  [ 10x1 ]
[ -2x1 + 6x2 ]     [ 10x2 ]

From the equation describing the 1st element, we can conclude:

9x1 - 2x2 = 10x1
-2x2 = x1
x1 = -2x2

We make the same conclusion from the equation describing the 2nd element:

-2x1 + 6x2 = 10x2
-2x1 = 4x2
x1 = -2x2

Let x2 = -1. Then x1 = -2x2 = 2. In such a case, our Eigenvector solution would be [2, -1]'. Let's adjust it for length unity:

x1² + x2² = 2² + (-1)² = 4 + 1 = 5

Our Eigenvector solution, e1, would then be [2/√5, -1/√5]'.
In the case of the 2nd Eigenvalue, λ2 = 5, so Ax = λ2 x is:

[ 9x1 - 2x2  ]  =  [ 5x1 ]
[ -2x1 + 6x2 ]     [ 5x2 ]

From the equation describing the 1st element, we can conclude:
9x1 + (-2)x2 = 5x1
-2x2 = -4x1
x2 = 2x1

We make the same conclusion from the equation describing the 2nd element:

-2x1 + 6x2 = 5x2
x2 = 2x1
Let x1 = 1. Then x2 = 2x1 = 2·1 = 2. In such a case, our Eigenvector solution would be [1, 2]'. Let's adjust it for length unity:

x1² + x2² = 1² + 2² = 1 + 4 = 5

Our Eigenvector solution, e2, would then be [1/√5, 2/√5]'.
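To confirm that these vectors really satisfy Ax = λx, here is a short pure-Python check (a sketch added for verification; e1 uses the sign choice x2 = -1, which is a free choice, since any nonzero multiple of an Eigenvector is also an Eigenvector):

```python
import math

# Verify A e = lambda e for A = [[9, -2], [-2, 6]]
A = [[9.0, -2.0], [-2.0, 6.0]]

def matvec(M, v):
    # 2x2 matrix times 2x1 vector: row-by-column products summed
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

s5 = math.sqrt(5.0)
e1 = [2 / s5, -1 / s5]   # unit Eigenvector for lambda1 = 10
e2 = [1 / s5, 2 / s5]    # unit Eigenvector for lambda2 = 5

Ae1 = matvec(A, e1)      # should equal 10 * e1
Ae2 = matvec(A, e2)      # should equal 5 * e2
```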
b) Write the spectral decomposition of A.
Once we know the Eigenvalues and Eigenvectors of a square symmetric matrix, we can rewrite the matrix as the sum of smaller, easier-to-understand component matrices. The decomposition of a matrix is often called a "factorization." (Ientilucci, p.1; Johnson, p.62.) This is accomplished simply by adding, for each dimension of the original matrix, the products:

(Eigenvalue) × (Eigenvector) × (Transpose of Eigenvector)

I.e.:

A = λ1 e1 e1' + λ2 e2 e2' + … + λn en en'

For an n×n matrix, we expect to have n Eigenvalues and n Eigenvectors (because there is 1 unit-length Eigenvector for each Eigenvalue), and so of course we'd also have n transposes of the Eigenvectors. I.e. for a 2×2 matrix, we expect to have 2 Eigenvalues, 2 Eigenvectors, and 2 transposes of the Eigenvectors.
In our case, A = [9 -2; -2 6], λ1 = 10, λ2 = 5, e1 = [2/√5, -1/√5]', and e2 = [1/√5, 2/√5]'. Obviously then, e1' = [2/√5, -1/√5] and e2' = [1/√5, 2/√5].

λ1 e1 e1' + λ2 e2 e2'
= 10 [ 4/5  -2/5 ]  +  5 [ 1/5  2/5 ]
     [ -2/5  1/5 ]       [ 2/5  4/5 ]
= [ 8  -4 ]  +  [ 1  2 ]
  [ -4   2 ]    [ 2  4 ]
= [ 9  -2 ]
  [ -2  6 ]  =  A
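The spectral decomposition can be checked elementwise in Python (a verification sketch added here; note that e e' is the same for either sign choice of e, so the sum is unaffected by Eigenvector sign):

```python
import math

# Rebuild A from 10 * e1 e1' + 5 * e2 e2'
s5 = math.sqrt(5.0)
e1 = [2 / s5, -1 / s5]
e2 = [1 / s5, 2 / s5]

def outer(u, v):
    # outer product: (2x1) times (1x2) gives a 2x2 matrix
    return [[ui * vj for vj in v] for ui in u]

t1 = outer(e1, e1)
t2 = outer(e2, e2)
recon = [[10 * t1[i][j] + 5 * t2[i][j] for j in range(2)] for i in range(2)]
# recon should equal [[9, -2], [-2, 6]]
```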
c) Find A⁻¹.

For a square n×n matrix A (with nonzero determinant), there exists a matrix A⁻¹ such that A A⁻¹ = I. (Johnson, pp.95-96.) If the matrix A is a 2×2 matrix

A = [ a11  a12 ]
    [ a21  a22 ]

then, by this formula that we memorize:

A⁻¹ = 1/(a11 a22 - a12 a21) [  a22  -a12 ]
                            [ -a21   a11 ]

In our case, a11 a22 - a12 a21 = (9)(6) - (-2)(-2) = 54 - 4 = 50, so:

A⁻¹ = (1/50) [ 6  2 ]  =  [ 0.12  0.04 ]
             [ 2  9 ]     [ 0.04  0.18 ]
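The 2×2 inverse formula can be sanity-checked by confirming that A A⁻¹ = I (a Python sketch added for verification):

```python
# Verify A * A_inv = I for the 2x2 inverse formula
A = [[9.0, -2.0], [-2.0, 6.0]]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]       # 54 - 4 = 50
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]        # [[0.12, 0.04], [0.04, 0.18]]

# Matrix product A * A_inv; should be the 2x2 identity
prod = [[sum(A[i][k] * A_inv[k][j] for k in range(2)) for j in range(2)]
        for i in range(2)]
```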
d) Find the Eigenvalues and Eigenvectors of A⁻¹.

Setting |A⁻¹ - λI| = 0 gives (0.12 - λ)(0.18 - λ) - (0.04)² = λ² - 0.30λ + 0.02 = 0, whose roots are λ1 = 0.2 and λ2 = 0.1 (the reciprocals of 5 and 10, the Eigenvalues of A). In the case of λ1 = 0.2, A⁻¹x = λ1 x is:

[ 0.12x1 + 0.04x2 ]  =  [ 0.2x1 ]
[ 0.04x1 + 0.18x2 ]     [ 0.2x2 ]

From the equation describing the 1st element, we can conclude:

0.12x1 + 0.04x2 = 0.2x1
0.04x2 = 0.08x1
x2 = 2x1

We make the same conclusion from the equation describing the 2nd element:

0.04x1 + 0.18x2 = 0.2x2
-0.02x2 = -0.04x1
x2 = 2x1

Let x1 = 1. Then x2 = 2x1 = 2·1 = 2. In such a case, our Eigenvector solution would be [1, 2]'. Let's adjust it for length unity:

x1² + x2² = 1² + 2² = 1 + 4 = 5

Eigenvector e1 = [1/√5, 2/√5]'.
In the case of λ2 = 0.1, A⁻¹x = λ2 x. Since A⁻¹ = [0.12 0.04; 0.04 0.18] and λ2 = 0.1, A⁻¹x = λ2 x is simply:

[ 0.12  0.04 ] [ x1 ]  =  0.1 · [ x1 ]  =  [ 0.1x1 ]
[ 0.04  0.18 ] [ x2 ]           [ x2 ]     [ 0.1x2 ]

i.e.:

[ 0.12x1 + 0.04x2 ]  =  [ 0.1x1 ]
[ 0.04x1 + 0.18x2 ]     [ 0.1x2 ]
From the equation describing the 1st element, we can conclude:

0.12x1 + 0.04x2 = 0.1x1
0.02x1 = -0.04x2
x1 = -2x2

We make the same conclusion from the equation describing the 2nd element:

0.04x1 + 0.18x2 = 0.1x2
0.04x1 = -0.08x2
x1 = -2x2
Let x2 = 1. Then x1 = -2x2 = -2·1 = -2. In such a case, our Eigenvector solution would be [-2, 1]'. Let's adjust it for length unity:

x1² + x2² = (-2)² + 1² = 4 + 1 = 5

Eigenvector e2 = [-2/√5, 1/√5]'.
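A quick Python sketch (added for verification) confirms that A⁻¹ has Eigenvalues 0.2 and 0.1 with these Eigenvectors, i.e. the reciprocal Eigenvalues of A with the same Eigen-directions:

```python
import math

# Verify A_inv e = lambda e for A_inv = [[0.12, 0.04], [0.04, 0.18]]
A_inv = [[0.12, 0.04], [0.04, 0.18]]
s5 = math.sqrt(5.0)
e1 = [1 / s5, 2 / s5]    # Eigenvector for lambda1 = 0.2 = 1/5
e2 = [-2 / s5, 1 / s5]   # Eigenvector for lambda2 = 0.1 = 1/10

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

v1 = matvec(A_inv, e1)   # should equal 0.2 * e1
v2 = matvec(A_inv, e2)   # should equal 0.1 * e2
```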
Exercise 5.1

a) Evaluate T², for testing H0: μ' = [7, 11], using the data

X = [ 2  12 ]
    [ 8   9 ]
    [ 6   9 ]
    [ 8  10 ]

In statistics, Hotelling's T² statistic is a generalization of Student's t statistic that is used in multivariate hypothesis testing (Wikipedia; Johnson, pp.213-216).

x̄ = [ (2+8+6+8)/4  ]  =  [ 6  ]
    [ (12+9+9+10)/4 ]     [ 10 ]
s11 = [(2-6)² + (8-6)² + (6-6)² + (8-6)²] / 3 = (16 + 4 + 0 + 4)/3 = 24/3 = 8

s12 = [(2-6)(12-10) + (8-6)(9-10) + (6-6)(9-10) + (8-6)(10-10)] / 3
    = [(-4)(2) + (2)(-1) + 0 + 0] / 3 = -10/3

s21 = s12 = -10/3

s22 = [(12-10)² + (9-10)² + (9-10)² + (10-10)²] / 3 = (4 + 1 + 1 + 0)/3 = 6/3 = 2

S = [   8    -10/3 ]
    [ -10/3    2   ]
T² = n (x̄ - μ0)' S⁻¹ (x̄ - μ0), where x̄ - μ0 = [6-7, 10-11]' = [-1, -1]'

S⁻¹ = 1/(s11 s22 - s12 s21) [  s22  -s12 ]  =  1/(16 - 100/9) [  2   10/3 ]  =  (9/44) [  2   10/3 ]
                            [ -s21   s11 ]                    [ 10/3   8  ]           [ 10/3   8  ]

T² = 4 [-1, -1] (9/44) [  2   10/3 ] [ -1 ]
                       [ 10/3   8  ] [ -1 ]
   = 4 (9/44) [-1, -1] [ -2 - 10/3  ]
                       [ -10/3 - 8  ]
   = (9/11) [-1, -1] [ -16/3 ]
                     [ -34/3 ]
   = (9/11)(16/3 + 34/3)
   = (9/11)(50/3)
   = 150/11

T² ≈ 13.64

Under H0, T² is distributed as [(n-1)p / (n-p)] F(p, n-p) = [3·2/2] F(2, 2) = 3 F(2, 2). From the F tables for α = 0.05 (Johnson, p.762), we see that when ν1 = 2 and ν2 = 2, the critical value for F = 19.00, so the critical value for T² is 3 × 19.00 = 57.00.

We see that T² = 13.64 is NOT greater than the critical value of 57.00. We fail to reject (i.e. "accept") H0 at the 5% level of significance.
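The whole T² calculation can be reproduced from the raw data in a few lines of Python (a verification sketch added here, not part of the original solution):

```python
# Hotelling's T^2 for Exercise 5.1, recomputed from the data
X = [[2.0, 12.0], [8.0, 9.0], [6.0, 9.0], [8.0, 10.0]]
mu0 = [7.0, 11.0]
n = len(X)

# Sample mean vector: [6, 10]
xbar = [sum(row[j] for row in X) / n for j in range(2)]

# Sample covariance matrix S (divisor n-1): [[8, -10/3], [-10/3, 2]]
S = [[sum((row[i] - xbar[i]) * (row[j] - xbar[j]) for row in X) / (n - 1)
      for j in range(2)] for i in range(2)]

# Invert S with the 2x2 formula
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
S_inv = [[ S[1][1] / det, -S[0][1] / det],
         [-S[1][0] / det,  S[0][0] / det]]

# T^2 = n * d' S_inv d with d = xbar - mu0 = [-1, -1]
d = [xbar[0] - mu0[0], xbar[1] - mu0[1]]
T2 = n * sum(d[i] * S_inv[i][j] * d[j] for i in range(2) for j in range(2))
# T2 = 150/11, well below the critical value 3 * F_{2,2}(.05) = 57
```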
References

Johnson, R.A. and Wichern, D.W., Applied Multivariate Statistical Analysis, 6th ed., 2007.
Blyth, T.S. and Robertson, E.F., Basic Linear Algebra, 1998.
Ientilucci, E.J., "Using the Singular Value Decomposition," http://www.cis.rit.edu/~ejipci/Reports/svd.pdf, 2003.
Wikipedia, http://en.wikipedia.org/wiki/Hotelling's_T-square_distribution, 2007.