---
title: "Chapter 3: Linear Regression"
author: "Solutions to Exercises"
date: "January 7, 2016"
output:
  html_document:
    keep_md: no
---
***
## CONCEPTUAL
***
<a id="ex01"></a>
>EXERCISE 1:
The p-values in Table 3.4 correspond to the null hypotheses that each of `TV`, `radio`, and `newspaper` has no relationship with `sales` given the other two predictors. `TV` and `radio` are related to `sales`, but there is no evidence that `newspaper` is associated with `sales` in the presence of the other predictors.
***
<a id="ex02"></a>
>EXERCISE 2:
KNN regression averages the responses of the K nearest observations to form a quantitative prediction, while the KNN classifier assigns the class that is most common (a majority vote) among the K nearest observations.
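
A minimal R sketch (not part of the original solutions) contrasting the two methods on toy data; it assumes the `FNN` and `class` packages are available:

```{r}
# Toy illustration: KNN regression vs. KNN classification
library(FNN)    # knn.reg() for KNN regression
library(class)  # knn() for KNN classification

set.seed(1)
train.X <- matrix(rnorm(100), ncol = 2)   # 50 training observations, 2 predictors
test.X  <- matrix(rnorm(10),  ncol = 2)   # 5 test observations
y.num   <- rowSums(train.X) + rnorm(50)   # quantitative response
y.cls   <- factor(rowSums(train.X) > 0)   # qualitative response

# KNN regression: average the responses of the 5 nearest training observations
knn.reg(train.X, test.X, y = y.num, k = 5)$pred

# KNN classification: majority vote among the 5 nearest training observations
knn(train.X, test.X, cl = y.cls, k = 5)
```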
***

<a id="ex03"></a>
>EXERCISE 3:

__Part a)__

Resulting fit formula is:

Y = 50 + 20*GPA + 0.07*IQ + 35*Gender + 0.01*GPA:IQ - 10*GPA:Gender

Point iii is correct: for GPA above 35/10 = 3.5, males will earn more on average.

__Part b)__

Salary = 50 + 20*4.0 + 0.07*110 + 35*1 + 0.01*4.0*110 - 10*4.0*1 = 137.1 thousand dollars

__Part c)__

FALSE: the IQ scale is much larger than that of the other predictors (~100 versus 1-4 for GPA and 0-1 for Gender), so even if all predictors have the same impact on salary, the coefficients involving IQ will be smaller. Significance should be judged from the p-value of the interaction coefficient, not the size of the coefficient.
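
As a quick arithmetic check (not part of the original solutions), the Part b) prediction can be computed directly in R:

```{r}
# Predicted salary for a female (Gender = 1) with IQ = 110 and GPA = 4.0
gpa <- 4.0; iq <- 110; gender <- 1
50 + 20*gpa + 0.07*iq + 35*gender + 0.01*gpa*iq - 10*gpa*gender  # 137.1 (thousand dollars)
```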


***
<a id="ex04"></a>
>EXERCISE 4:

$$ a_{i'} = \frac{ x_{i} x_{i'} } { \sum_{j=1}^{n} x_{j}^{2} } $$


***
<a id="ex06"></a>
>EXERCISE 6:

Using equation (3.4) on page 62, $\hat{\beta_{0}}=\bar{y}-\hat{\beta_{1}}\bar{x}$, so when $x_{i}=\bar{x}$ the fitted value $\hat{y_{i}}$ evaluates to $\bar{y}$; that is, the least squares line always passes through the point $(\bar{x}, \bar{y})$.
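
Written out, substituting $\hat{\beta_{0}} = \bar{y} - \hat{\beta_{1}}\bar{x}$ from equation (3.4):

$$ \hat{y} = \hat{\beta_{0}} + \hat{\beta_{1}}\bar{x} = \left( \bar{y} - \hat{\beta_{1}}\bar{x} \right) + \hat{\beta_{1}}\bar{x} = \bar{y} $$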


***
<a id="ex07"></a>
>EXERCISE 7:

[... will come back to this. maybe.]

Given: For $\bar{x}=\bar{y}=0$,

$$ R^{2} = \frac{TSS - RSS}{TSS} = 1 - \frac{RSS}{TSS} $$

$$ TSS = \sum_{i=1}^{n} \left( y_{i}-\bar{y} \right)^{2} = \sum_{i=1}^{n} y_{i}^{2} $$

$$ RSS = \sum_{i=1}^{n} \left( y_{i}-\hat{y_{i}} \right)^{2} = \sum_{i=1}^{n} \left( y_{i} - \left( \hat{\beta_{0}} + \hat{\beta_{1}}x_{i} \right) \right)^{2} = \sum_{i=1}^{n} \left( y_{i} - \left( \frac{\sum_{j=1}^{n} x_{j}y_{j}}{\sum_{k=1}^{n} x_{k}^{2}} \right) x_{i} \right)^{2} $$

$$ Cor\left( X, Y \right) = \frac{\sum_{i=1}^{n} x_{i} y_{i}}{\sqrt{\sum_{j=1}^{n}x_{j}^{2} \times \sum_{k=1}^{n}y_{k}^{2}}} $$

Prove:

$$ R^{2} = \left[ Cor\left( X, Y \right) \right]^{2} $$
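
A sketch of the omitted algebra (not part of the original solutions): substituting $\hat{\beta_{1}} = \sum_{j} x_{j}y_{j} / \sum_{k} x_{k}^{2}$ (with $\hat{\beta_{0}}=0$ since $\bar{x}=\bar{y}=0$) into RSS and simplifying,

$$ RSS = \sum_{i=1}^{n} y_{i}^{2} - 2\hat{\beta_{1}}\sum_{i=1}^{n} x_{i}y_{i} + \hat{\beta_{1}}^{2}\sum_{i=1}^{n} x_{i}^{2} = \sum_{i=1}^{n} y_{i}^{2} - \frac{\left( \sum_{i=1}^{n} x_{i}y_{i} \right)^{2}}{\sum_{k=1}^{n} x_{k}^{2}} $$

so that

$$ R^{2} = 1 - \frac{RSS}{TSS} = \frac{\left( \sum_{i=1}^{n} x_{i}y_{i} \right)^{2}}{\sum_{k=1}^{n} x_{k}^{2} \sum_{i=1}^{n} y_{i}^{2}} = \left[ Cor\left( X, Y \right) \right]^{2} $$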


***
## APPLIED
***


<a id="ex08"></a>
>EXERCISE 8:

__Part a)__

```{r}
require(ISLR)
data(Auto)
fit.lm <- lm(mpg ~ horsepower, data=Auto)
summary(fit.lm)
# i. Yes, there is a relationship between predictor and response
# ii. p-value is close to 0: relationship is strong
# iii. Coefficient is negative: relationship is negative
# iv.
new <- data.frame(horsepower = 98)
predict(fit.lm, new)  # predicted mpg
predict(fit.lm, new, interval = "confidence")  # conf interval
predict(fit.lm, new, interval = "prediction")  # pred interval
```

__Part b)__

```{r}
cor(subset(Auto, select=-name))
```

__Part c)__

```{r}
fit.lm <- lm(mpg~.-name, data=Auto)
summary(fit.lm)
```

* There is a relationship between predictors and response
* `weight`, `year`, `origin` and `displacement` have statistically significant relationships
* 0.75 coefficient for `year` suggests that later model year cars have better (higher) `mpg`

__Part d)__

```{r}
par(mfrow=c(2,2))
plot(fit.lm)
```
* evidence of non-linearity
* observation 14 has high leverage (see the check below)
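
As a quick numerical check (not part of the original solutions) of the leverage claim:

```{r}
# Identify the observation with the largest leverage (hat value) in the Part c) fit
which.max(hatvalues(fit.lm))
max(hatvalues(fit.lm))
```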

__Part e)__

```{r}
# try 3 interactions
fit.lm0 <- lm(mpg~displacement+weight+year+origin, data=Auto)
fit.lm1 <- lm(mpg~displacement+weight+year*origin, data=Auto)
fit.lm2 <- lm(mpg~displacement+origin+year*weight, data=Auto)
fit.lm3 <- lm(mpg~year+origin+displacement*weight, data=Auto)
summary(fit.lm0)
summary(fit.lm1)
summary(fit.lm2)
summary(fit.lm3)
```

All 3 interactions tested seem to have statistically significant effects.

__Part f)__

```{r}
# try 3 predictor transformations
fit.lm4 <- lm(mpg~poly(displacement,3)+weight+year+origin, data=Auto)
fit.lm5 <- lm(mpg~displacement+I(log(weight))+year+origin, data=Auto)
fit.lm6 <- lm(mpg~displacement+I(weight^2)+year+origin, data=Auto)
summary(fit.lm4)
summary(fit.lm5)
summary(fit.lm6)
```

* `displacement`^2 has a larger effect than other `displacement` polynomials (see the comparison sketch below)
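
One hypothetical way (not in the original solutions) to compare these candidate fits is a quick adjusted R^2 screen:

```{r}
# Adjusted R^2 for the baseline Part e) fit and the three transformed fits
sapply(list(base = fit.lm0, poly.disp = fit.lm4, log.weight = fit.lm5, weight.sq = fit.lm6),
       function(m) summary(m)$adj.r.squared)
```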

***
<a id="ex10"></a>
>EXERCISE 10:

Sales = 13.043 - 0.054 x Price - 0.022 x UrbanYes + 1.201 x USYes

__Part d)__

Can reject the null hypothesis for `Price` and `USYes` (coefficients have low p-values)

__Part e)__

```{r}
fit.lm1 <- lm(Sales ~ Price + US, data=Carseats)
summary(fit.lm1)
```

__Part f)__

* fit.lm (`Price`, `Urban`, `US`):
    * RSE = 2.
    * R^2 = 0.
* fit.lm1 (`Price`, `US`):
    * RSE = 2.
    * R^2 = 0.

fit.lm1 has a slightly better (lower) RSE value and one less predictor variable.

__Part g)__

```{r}
confint(fit.lm1)
```

__Part h)__

```{r}
par(mfrow=c(2,2))
# residuals v fitted plot doesn't show strong outliers
plot(fit.lm1)
par(mfrow=c(1,1))
# studentized residuals within -3 to 3 range
plot(predict(fit.lm1), rstudent(fit.lm1))
# load car package
require(car)
# no evidence of outliers
qqPlot(fit.lm1, main="QQ Plot")  # studentized resid
leveragePlots(fit.lm1)  # leverage plots
plot(hatvalues(fit.lm1))
# average obs leverage (p+1)/n = (2+1)/400 = 0.0075
# data may have some leverage issues
```
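
A hypothetical follow-up (not in the original solutions) to the leverage comment: flag observations whose hat value exceeds, say, twice the average leverage:

```{r}
# Average leverage is (p+1)/n; flag observations above twice that threshold
avg.lev <- (2 + 1) / nrow(Carseats)
which(hatvalues(fit.lm1) > 2 * avg.lev)
```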

***
<a id="ex11"></a>
>EXERCISE 11:

__Part a)__

The two regression lines should be the same, just with the axes switched, so it would make sense that the t-statistic is the same (both are 18.73).

__Part f)__

```{r}
fit.lmY2 <- lm(y ~ x)
fit.lmX2 <- lm(x ~ y)
summary(fit.lmY2)
summary(fit.lmX2)
```

The t-statistics for both regressions are identical.
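
A worked note (not part of the original solutions) on why the two t-statistics must match: with an intercept, the slope t-statistic in simple regression can be written in terms of the sample correlation $r = Cor(X, Y)$, which is unchanged when the roles of $x$ and $y$ are swapped:

$$ t = \frac{\hat{\beta_{1}}}{SE(\hat{\beta_{1}})} = \frac{r\sqrt{n-2}}{\sqrt{1-r^{2}}} $$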


***
<a id="ex12"></a>
>EXERCISE 12:

__Part a)__

Without an intercept (equation 3.38), the two slope estimates are

$$ \hat{\beta}_{y \sim x} = \frac{\sum_{i} x_{i} y_{i}}{\sum_{j} x_{j}^{2}} \qquad \text{and} \qquad \hat{\beta}_{x \sim y} = \frac{\sum_{i} x_{i} y_{i}}{\sum_{j} y_{j}^{2}} $$

so they are equal when $x_{i}=y_{i}$, or more generally whenever the denominators are equal: $\sum_{j} x_{j}^{2}=\sum_{j} y_{j}^{2}$

__Part b)__

```{r}
# exercise 11 example works
set.seed(1)
x <- rnorm(100)
y <- 2*x + rnorm(100)
fit.lmY <- lm(y ~ x)
fit.lmX <- lm(x ~ y)
summary(fit.lmY)
summary(fit.lmX)
```

The two slope estimates differ: 1.99894 for `y ~ x` is not equal to the estimate for `x ~ y`.

__Part c)__

```{r}
set.seed(1)
x <- rnorm(100, mean=1000, sd=0.1)
y <- rnorm(100, mean=1000, sd=0.1)
fit.lmY <- lm(y ~ x)
fit.lmX <- lm(x ~ y)
summary(fit.lmY)
summary(fit.lmX)
```

Both estimated betas are approximately equal.
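
A quick check (not part of the original solutions) that the no-intercept denominators from Part a) are nearly equal, which is why the two estimates (approximately) agree:

```{r}
# Compare the beta denominators: sum of squares of x and of y
c(sum(x^2), sum(y^2))
```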


***
<a id="ex13"></a>
>EXERCISE 13:

x and y seem to be positively correlated

__Part e)__

```{r}
fit.lm <- lm(y ~ x)
summary(fit.lm)
```

Estimated $\hat{\beta_{0}}=-1.019$ and $\hat{\beta_{1}}=0.499$, which are close to the actual betas used to generate `y`

__Part f)__

```{r}
plot(x, y)
abline(-1, 0.5, col="blue")  # true regression
abline(fit.lm, col="red")    # fitted regression
legend(x = c(0, 2.7), y = c(-2.5, -2),
       legend = c("population", "model fit"),
       col = c("blue", "red"), lwd = 2)
```

__Part g)__

```{r}
fit.lm1 <- lm(y~x+I(x^2))
summary(fit.lm1)
anova(fit.lm, fit.lm1)
```

No evidence of better fit based on high p-value of coefficient for X^2. Estimated coefficient for $\hat{\beta_{1}}$ is farther from the true value. ANOVA test also suggests the polynomial fit is not any better.

__Part h)__

```{r}
eps2 <- rnorm(100, sd=0.1)  # prior sd was 0.
y2 <- -1 + 0.5*x + eps2
fit.lm2 <- lm(y2 ~ x)
summary(fit.lm2)
plot(x, y2)
abline(-1, 0.5, col="blue")  # true regression
abline(fit.lm2, col="red")   # fitted regression
legend(x = c(-2, -0.5), y = c(-0.5, 0),
       legend = c("population", "model fit"),
       col = c("blue", "red"), lwd = 2)
```

Decreased variance along regression line. Fit for original y was already very good, so coef estimates are about the same for reduced epsilon. However, RSE and R^2 values are much improved.

__Part i)__

***
<a id="ex14"></a>
>EXERCISE 14:

__Part a)__

```{r}
set.seed(1)
x1 <- runif(100)
x2 <- 0.5*x1 + rnorm(100)/10
y <- 2 + 2*x1 + 0.3*x2 + rnorm(100)
```

Population regression is $y = \beta_{0} + \beta_{1} x_1 + \beta_{2} x_2 + \varepsilon$, where $\beta_{0}=2$, $\beta_{1}=2$ and $\beta_{2}=0.3$

__Part b)__

```{r}
cor(x1, x2)
plot(x1, x2)
```

__Part c)__

```{r}
fit.lm <- lm(y~x1+x2)
summary(fit.lm)
```

Estimated beta coefficients are $\hat{\beta_{0}}=2.13$, $\hat{\beta_{1}}=1.44$ and $\hat{\beta_{2}}=1.01$. The coefficient for `x1` is statistically significant but the coefficient for `x2` is not, given the presence of `x1`. These betas try to estimate the population betas: $\hat{\beta_{0}}$ is close (rounds to 2), $\hat{\beta_{1}}$ is 1.44 instead of 2 with a high standard error, and $\hat{\beta_{2}}$ is farthest off. Reject $H_0 : \beta_1=0$; cannot reject $H_0 : \beta_2=0$
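
A collinearity check (not part of the original solutions), using `vif()` from the `car` package loaded earlier:

```{r}
# Variance inflation factors for the two-predictor fit; inflated values
# indicate that x1 and x2 carry overlapping information
require(car)
vif(fit.lm)
```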

__Part d)__

```{r}
fit.lm1 <- lm(y~x1)
summary(fit.lm1)
```

p-value is close to 0, can reject $H_0 : \beta_1=0$

__Part e)__

```{r}
fit.lm2 <- lm(y~x2)
summary(fit.lm2)
```

p-value is close to 0, can reject $H_0 : \beta_2=0$

__Part f)__

No. Without the presence of the other predictor, both $\beta_1$ and $\beta_2$ are statistically significant. In the presence of the other predictor, $\beta_2$ is no longer statistically significant because `x1` and `x2` are highly collinear.