CS229 Lecture Notes

Andrew Ng

Updated by Tengyu Ma

Contents

  • I Supervised learning
  • 1 Linear regression
    • 1.1 LMS algorithm
    • 1.2 The normal equations
      • 1.2.1 Matrix derivatives
      • 1.2.2 Least squares revisited
    • 1.3 Probabilistic interpretation
    • 1.4 Locally weighted linear regression (optional reading)
  • 2 Classification and logistic regression
    • 2.1 Logistic regression
    • 2.2 Digression: the perceptron learning algorithm
    • 2.3 Another algorithm for maximizing ℓ(θ)
  • 3 Generalized linear models
    • 3.1 The exponential family
    • 3.2 Constructing GLMs
      • 3.2.1 Ordinary least squares
      • 3.2.2 Logistic regression
      • 3.2.3 Softmax regression
  • 4 Generative learning algorithms
    • 4.1 Gaussian discriminant analysis
      • 4.1.1 The multivariate normal distribution
      • 4.1.2 The Gaussian discriminant analysis model
      • 4.1.3 Discussion: GDA and logistic regression
    • 4.2 Naive Bayes
      • 4.2.1 Laplace smoothing
      • 4.2.2 Event models for text classification
  • 5 Kernel methods
    • 5.1 Feature maps
    • 5.2 LMS (least mean squares) with features
    • 5.3 LMS with the kernel trick
    • 5.4 Properties of kernels
  • 6 Support vector machines
    • 6.1 Margins: intuition
    • 6.2 Notation (optional reading)
    • 6.3 Functional and geometric margins (optional reading)
    • 6.4 The optimal margin classifier (optional reading)
    • 6.5 Lagrange duality (optional reading)
    • 6.6 Optimal margin classifiers: the dual form (optional reading)
    • 6.7 Regularization and the non-separable case (optional reading)
    • 6.8 The SMO algorithm (optional reading)
      • 6.8.1 Coordinate ascent
      • 6.8.2 SMO
  • II Deep learning
  • 7 Deep learning
    • 7.1 Supervised learning with non-linear models
    • 7.2 Neural networks
    • 7.3 Backpropagation
      • 7.3.1 Preliminary: chain rule
      • 7.3.2 One-neuron neural networks
      • 7.3.3 Two-layer neural networks: a low-level unpacked computation
      • 7.3.4 Two-layer neural network with vector notation
      • 7.3.5 Multi-layer neural networks
    • 7.4 Vectorization over training examples
  • III Generalization and regularization
  • 8 Generalization
    • 8.1 Bias-variance tradeoff
      • 8.1.1 A mathematical decomposition (for regression)
    • 8.2 The double descent phenomenon
    • 8.3 Sample complexity bounds (optional readings)
      • 8.3.1 Preliminaries
      • 8.3.2 The case of finite H
      • 8.3.3 The case of infinite H
  • 9 Regularization and model selection
    • 9.1 Regularization
    • 9.2 Implicit regularization effect
    • 9.3 Model selection via cross validation
    • 9.4 Bayesian statistics and regularization
  • IV Unsupervised learning
  • 10 Clustering and the k-means algorithm
  • 11 EM algorithms
    • 11.1 EM for mixture of Gaussians
    • 11.2 Jensen’s inequality
    • 11.3 General EM algorithms
      • 11.3.1 Other interpretation of ELBO
    • 11.4 Mixture of Gaussians revisited
    • 11.5 Variational inference and variational auto-encoder (optional reading)
  • 12 Principal components analysis
  • 13 Independent components analysis
    • 13.1 ICA ambiguities
    • 13.2 Densities and linear transformations
    • 13.3 ICA algorithm
  • 14 Self-supervised learning and foundation models
    • 14.1 Pretraining and adaptation
    • 14.2 Pretraining methods in computer vision
    • 14.3 Pretrained large language models
      • 14.3.1 Zero-shot learning and in-context learning
  • V Reinforcement Learning and Control
  • 15 Reinforcement learning
    • 15.1 Markov decision processes
    • 15.2 Value iteration and policy iteration
    • 15.3 Learning a model for an MDP
    • 15.4 Continuous state MDPs
      • 15.4.1 Discretization
      • 15.4.2 Value function approximation
    • 15.5 Connections between Policy and Value Iteration (Optional)
  • 16 LQR, DDP and LQG
    • 16.1 Finite-horizon MDPs
    • 16.2 Linear Quadratic Regulation (LQR)
    • 16.3 From non-linear dynamics to LQR
      • 16.3.1 Linearization of dynamics
      • 16.3.2 Differential Dynamic Programming (DDP)
    • 16.4 Linear Quadratic Gaussian (LQG)
  • 17 Policy Gradient (REINFORCE)

Let’s start by talking about a few examples of supervised learning problems. Suppose we have a dataset giving the living areas and prices of 47 houses from Portland, Oregon:

Living area (feet^2)    Price (1000$s)
2104                    400
1600                    330
2400                    369
1416                    232
3000                    540
...                     ...

We can plot this data:

[Figure: housing prices. Scatter plot of the training data; x-axis: square feet, y-axis: price (in $1000).]
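To reproduce a scatter plot like this, here is a minimal matplotlib sketch (not part of the original notes); it uses only the five rows listed in the table above, since the full 47-house dataset is not reproduced here:

```python
import numpy as np
import matplotlib.pyplot as plt

# The five example rows from the table above (living area in ft^2, price in $1000s).
living_area = np.array([2104, 1600, 2400, 1416, 3000])
price = np.array([400, 330, 369, 232, 540])

plt.scatter(living_area, price, marker="x")
plt.title("housing prices")
plt.xlabel("square feet")
plt.ylabel("price (in $1000)")
plt.show()
```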

Given data like this, how can we learn to predict the prices of other houses in Portland, as a function of the size of their living areas? To establish notation for future use, we’ll use x^(i) to denote the “input” variables (living area in this example), also called input features, and y^(i) to denote the “output” or target variable that we are trying to predict (price). A pair (x^(i), y^(i)) is called a training example, and the dataset that we’ll be using to learn—a list of n training examples {(x^(i), y^(i)); i = 1, ..., n}—is called a training set. Note that the superscript “(i)” in the notation is simply an index into the training set, and has nothing to do with exponentiation. We will also use X to denote the space of input values, and Y to denote the space of output values. In this example, X = Y = R.

To describe the supervised learning problem slightly more formally, our goal is, given a training set, to learn a function h : X → Y so that h(x) is a “good” predictor for the corresponding value of y. For historical reasons, this

function h is called a hypothesis. Seen pictorially, the process is therefore like this:

[Diagram: Training set → Learning algorithm → hypothesis h; a new input x (living area of house) is fed into h, which outputs a predicted y (predicted price of house).]

When the target variable that we’re trying to predict is continuous, such as in our housing example, we call the learning problem a regression problem. When y can take on only a small number of discrete values (such as if, given the living area, we wanted to predict if a dwelling is a house or an apartment, say), we call it a classification problem.

When there is no risk of confusion, we will drop the θ subscript in hθ(x), and write it more simply as h(x). To simplify our notation, we also introduce the convention of letting x_0 = 1 (this is the intercept term), so that

h(x) = \sum_{i=0}^{d} \theta_i x_i = \theta^T x,

where on the right-hand side above we are viewing θ and x both as vectors, and here d is the number of input variables (not counting x_0).

Now, given a training set, how do we pick, or learn, the parameters θ? One reasonable method seems to be to make h(x) close to y, at least for the training examples we have. To formalize this, we will define a function that measures, for each value of the θ’s, how close the h(x^(i))’s are to the corresponding y^(i)’s. We define the cost function:

J(\theta) = \frac{1}{2} \sum_{i=1}^{n} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2.

If you’ve seen linear regression before, you may recognize this as the familiar least-squares cost function that gives rise to the ordinary least squares regression model. Whether or not you have seen it previously, let’s keep going, and we’ll eventually show this to be a special case of a much broader family of algorithms.
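As a concrete check, here is a short numpy sketch (my own illustration, not from the notes) that evaluates the hypothesis h(x) = θ^T x and the cost J(θ) on the toy training set above, using the intercept convention x_0 = 1:

```python
import numpy as np

# Toy training set: living area (ft^2) and price ($1000s) from the table above.
x_raw = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])

# Prepend x_0 = 1 to each example (intercept term), giving an n-by-(d+1) design matrix.
X = np.column_stack([np.ones_like(x_raw), x_raw])

def h(theta, X):
    """Hypothesis h_theta(x) = theta^T x, evaluated for every row of X."""
    return X @ theta

def J(theta, X, y):
    """Least-squares cost J(theta) = (1/2) * sum_i (h_theta(x^(i)) - y^(i))^2."""
    residuals = h(theta, X) - y
    return 0.5 * np.sum(residuals ** 2)

theta = np.zeros(X.shape[1])
print(J(theta, X, y))  # cost of the all-zeros parameter vector
```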

1.1 LMS algorithm

We want to choose θ so as to minimize J(θ). To do so, let’s use a search algorithm that starts with some “initial guess” for θ, and that repeatedly changes θ to make J(θ) smaller, until hopefully we converge to a value of θ that minimizes J(θ). Specifically, let’s consider the gradient descent algorithm, which starts with some initial θ, and repeatedly performs the update:

\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta).

(This update is simultaneously performed for all values of j = 0, ..., d.) Here, α is called the learning rate. This is a very natural algorithm that repeatedly takes a step in the direction of steepest decrease of J. In order to implement this algorithm, we have to work out the partial derivative term on the right-hand side. Let’s first work it out for the case where we have only one training example (x, y), so that we can neglect the sum in the definition of J. We have:

\begin{aligned}
\frac{\partial}{\partial \theta_j} J(\theta)
&= \frac{\partial}{\partial \theta_j} \frac{1}{2} \left( h_\theta(x) - y \right)^2 \\
&= 2 \cdot \frac{1}{2} \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( h_\theta(x) - y \right) \\
&= \left( h_\theta(x) - y \right) \cdot \frac{\partial}{\partial \theta_j} \left( \sum_{i=0}^{d} \theta_i x_i - y \right) \\
&= \left( h_\theta(x) - y \right) x_j
\end{aligned}

For a single training example, this gives the update rule:^1

\theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)}.

The rule is called the LMS update rule (LMS stands for “least mean squares”), and is also known as the Widrow-Hoff learning rule. This rule has several properties that seem natural and intuitive. For instance, the magnitude of the update is proportional to the error term (y^(i) − hθ(x^(i))); thus, for instance, if we are encountering a training example on which our prediction nearly matches the actual value of y^(i), then we find that there is little need to change the parameters; in contrast, a larger change to the parameters will be made if our prediction hθ(x^(i)) has a large error (i.e., if it is very far from y^(i)). We’d derived the LMS rule for when there was only a single training example. There are two ways to modify this method for a training set of more than one example. The first is to replace it with the following algorithm:

Repeat until convergence {

    \theta_j := \theta_j + \alpha \sum_{i=1}^{n} \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)},   (for every j)   (1.1)

}

(^1) We use the notation “a := b” to denote an operation (in a computer program) in which we set the value of a variable a to be equal to the value of b. In other words, this operation overwrites a with the value of b. In contrast, we will write “a = b” when we are asserting a statement of fact, that the value of a is equal to the value of b.
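As an illustration of update (1.1), here is a minimal, hypothetical numpy sketch of batch gradient descent for linear regression (the variable names and hyperparameters are mine, not from the notes); it assumes a design matrix X whose first column is all ones:

```python
import numpy as np

def batch_gradient_descent(X, y, alpha=1e-8, num_iters=1000):
    """Batch gradient descent for least-squares linear regression.

    X: (n, d+1) design matrix with an all-ones intercept column.
    y: (n,) vector of targets.
    Each iteration applies update (1.1) to every coordinate of theta at once.
    """
    theta = np.zeros(X.shape[1])
    for _ in range(num_iters):
        errors = y - X @ theta                 # y^(i) - h_theta(x^(i)) for all i
        theta = theta + alpha * X.T @ errors   # sum over the training set, one entry per j
    return theta
```

In practice one would rescale the features (living areas here are in the thousands) or tune α carefully; too large a learning rate makes this update diverge.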

[Figure: housing prices. Scatter plot of the training data with the fitted regression line; x-axis: square feet, y-axis: price (in $1000).]

If the number of bedrooms were included as one of the input features as well, we get θ_0 = 89.60, θ_1 = 0.1392, θ_2 = −8.738. The above results were obtained with batch gradient descent. There is an alternative to batch gradient descent that also works very well. Consider the following algorithm:

Loop {

    for i = 1 to n, {

        \theta_j := \theta_j + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x_j^{(i)},   (for every j)   (1.2)

    }

}

By grouping the updates of the coordinates into an update of the vector θ, we can rewrite update (1.2) in a slightly more succinct way:

\theta := \theta + \alpha \left( y^{(i)} - h_\theta(x^{(i)}) \right) x^{(i)}

In this algorithm, we repeatedly run through the training set, and each time we encounter a training example, we update the parameters according to the gradient of the error with respect to that single training example only. This algorithm is called stochastic gradient descent (also incremental gradient descent). Whereas batch gradient descent has to scan through the entire training set before taking a single step—a costly operation if n is large—stochastic gradient descent can start making progress right away, and continues to make progress with each example it looks at. Often, stochastic gradient descent gets θ “close” to the minimum much faster than batch gradient descent. (Note however that it may never “converge” to the minimum, and the parameters θ will keep oscillating around the minimum of J(θ); but in practice most of the values near the minimum will be reasonably good approximations to the true minimum.^2 ) For these reasons, particularly when the training set is large, stochastic gradient descent is often preferred over batch gradient descent.
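A correspondingly minimal sketch of stochastic gradient descent (again with my own, illustrative names), applying the vectorized update above once per training example:

```python
import numpy as np

def stochastic_gradient_descent(X, y, alpha=1e-8, num_epochs=50, seed=0):
    """Stochastic (incremental) gradient descent for least-squares regression.

    Updates theta using one training example at a time, so progress starts
    immediately instead of waiting for a full pass over the data.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(num_epochs):
        for i in rng.permutation(n):       # visit the examples in a shuffled order
            error = y[i] - X[i] @ theta    # y^(i) - h_theta(x^(i))
            theta = theta + alpha * error * X[i]
    return theta
```

Shuffling the visiting order each epoch is a common practical choice; the notes simply loop i = 1 to n.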

1.2 The normal equations

Gradient descent gives one way of minimizing J. Let’s discuss a second way of doing so, this time performing the minimization explicitly and without resorting to an iterative algorithm. In this method, we will minimize J by explicitly taking its derivatives with respect to the θj ’s, and setting them to zero. To enable us to do this without having to write reams of algebra and pages full of matrices of derivatives, let’s introduce some notation for doing calculus with matrices.

1.2.1 Matrix derivatives

For a function f : R^{n×d} → R mapping from n-by-d matrices to the real numbers, we define the derivative of f with respect to A to be:

\nabla_A f(A) = \begin{bmatrix}
\frac{\partial f}{\partial A_{11}} & \cdots & \frac{\partial f}{\partial A_{1d}} \\
\vdots & \ddots & \vdots \\
\frac{\partial f}{\partial A_{n1}} & \cdots & \frac{\partial f}{\partial A_{nd}}
\end{bmatrix}

Thus, the gradient ∇_A f(A) is itself an n-by-d matrix, whose (i, j)-element is ∂f/∂A_{ij}. For example, suppose A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix} is a 2-by-2 matrix, and the function f : R^{2×2} → R is given by

f(A) = A_{11} + 5A_{12}^2 + A_{21}A_{22}.

(^2) By slowly letting the learning rate α decrease to zero as the algorithm runs, it is also possible to ensure that the parameters will converge to the global minimum rather than merely oscillate around the minimum.
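As a quick sanity check of this definition, the following sketch (my own, with illustrative helper names) compares the hand-computed gradient of the example f above against a finite-difference approximation:

```python
import numpy as np

def f(A):
    # The example function f(A) = A11 + 5*A12^2 + A21*A22 (1-based indices in the text).
    return A[0, 0] + 5 * A[0, 1] ** 2 + A[1, 0] * A[1, 1]

def analytic_grad(A):
    # Entry (i, j) is df/dA_ij: [[1, 10*A12], [A22, A21]].
    return np.array([[1.0, 10 * A[0, 1]],
                     [A[1, 1], A[1, 0]]])

def numeric_grad(f, A, eps=1e-6):
    # Finite-difference approximation of the gradient, entry by entry.
    G = np.zeros_like(A)
    for i in range(A.shape[0]):
        for j in range(A.shape[1]):
            E = np.zeros_like(A)
            E[i, j] = eps
            G[i, j] = (f(A + E) - f(A - E)) / (2 * eps)
    return G

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(analytic_grad(A))
print(numeric_grad(f, A))  # should agree to several decimal places
```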

Finally, to minimize J, let’s find its derivatives with respect to θ. Hence,

\begin{aligned}
\nabla_\theta J(\theta) &= \nabla_\theta \, \frac{1}{2} (X\theta - \vec{y})^T (X\theta - \vec{y}) \\
&= \frac{1}{2} \nabla_\theta \left( (X\theta)^T X\theta - (X\theta)^T \vec{y} - \vec{y}^T (X\theta) + \vec{y}^T \vec{y} \right) \\
&= \frac{1}{2} \nabla_\theta \left( \theta^T (X^T X)\theta - \vec{y}^T (X\theta) - \vec{y}^T (X\theta) \right) \\
&= \frac{1}{2} \nabla_\theta \left( \theta^T (X^T X)\theta - 2 (X^T \vec{y})^T \theta \right) \\
&= \frac{1}{2} \left( 2 X^T X\theta - 2 X^T \vec{y} \right) \\
&= X^T X\theta - X^T \vec{y}
\end{aligned}

In the third step, we used the fact that a^T b = b^T a, and in the fifth step used the facts ∇_x b^T x = b and ∇_x x^T Ax = 2Ax for symmetric matrix A (for more details, see Section 4.3 of “Linear Algebra Review and Reference”). To minimize J, we set its derivatives to zero, and obtain the normal equations:

X^T X\theta = X^T \vec{y}

Thus, the value of θ that minimizes J(θ) is given in closed form by the equation θ = (X^T X)^{-1} X^T \vec{y}.^3

(^3) Note that in the above step, we are implicitly assuming that X^T X is an invertible matrix. This can be checked before calculating the inverse. If either the number of linearly independent examples is fewer than the number of features, or if the features are not linearly independent, then X^T X will not be invertible. Even in such cases, it is possible to “fix” the situation with additional techniques, which we skip here for the sake of simplicity.
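A minimal numpy sketch of solving the normal equations (my own illustration; it calls np.linalg.solve rather than forming the inverse explicitly, which is the standard numerical practice):

```python
import numpy as np

def normal_equation_fit(X, y):
    """Solve X^T X theta = X^T y for theta in closed form.

    Solving the linear system is preferred over computing (X^T X)^{-1}
    explicitly; it is cheaper and numerically better behaved.
    """
    return np.linalg.solve(X.T @ X, X.T @ y)

# Example usage with the toy housing data and an intercept column.
x_raw = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])
X = np.column_stack([np.ones_like(x_raw), x_raw])
theta = normal_equation_fit(X, y)
print(theta)  # [intercept, slope]
```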

1.3 Probabilistic interpretation

When faced with a regression problem, why might linear regression, and specifically why might the least-squares cost function J, be a reasonable choice? In this section, we will give a set of probabilistic assumptions under which least-squares regression is derived as a very natural algorithm.

Let us assume that the target variables and the inputs are related via the equation

y^{(i)} = \theta^T x^{(i)} + \epsilon^{(i)},

where ε^(i) is an error term that captures either unmodeled effects (such as if there are some features very pertinent to predicting housing price, but that we’d left out of the regression), or random noise. Let us further assume that the ε^(i) are distributed IID (independently and identically distributed) according to a Gaussian distribution (also called a Normal distribution) with mean zero and some variance σ^2. We can write this assumption as “ε^(i) ∼ N(0, σ^2).” I.e., the density of ε^(i) is given by

p(\epsilon^{(i)}) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(\epsilon^{(i)})^2}{2\sigma^2} \right)

This implies that

p(y^{(i)} \mid x^{(i)}; \theta) = \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right)

The notation “p(y^(i)|x^(i); θ)” indicates that this is the distribution of y^(i) given x^(i) and parameterized by θ. Note that we should not condition on θ (“p(y^(i)|x^(i), θ)”), since θ is not a random variable. We can also write the distribution of y^(i) as y^(i) | x^(i); θ ∼ N(θ^T x^(i), σ^2). Given X (the design matrix, which contains all the x^(i)’s) and θ, what is the distribution of the y^(i)’s? The probability of the data is given by p(~y|X; θ). This quantity is typically viewed as a function of ~y (and perhaps X), for a fixed value of θ. When we wish to explicitly view this as a function of θ, we will instead call it the likelihood function:

L(θ) = L(θ; X, ~y) = p(~y|X; θ).

Note that by the independence assumption on the ε^(i)’s (and hence also the y^(i)’s given the x^(i)’s), this can also be written

\begin{aligned}
L(\theta) &= \prod_{i=1}^{n} p(y^{(i)} \mid x^{(i)}; \theta) \\
&= \prod_{i=1}^{n} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\left( -\frac{(y^{(i)} - \theta^T x^{(i)})^2}{2\sigma^2} \right)
\end{aligned}

Now, given this probabilistic model relating the y^(i)’s and the x^(i)’s, what is a reasonable way of choosing our best guess of the parameters θ? The principle of maximum likelihood says that we should choose θ so as to make the data as high probability as possible. I.e., we should choose θ to maximize L(θ).
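To make the connection to least squares concrete, here is an illustrative sketch (my own code, written under the Gaussian-noise assumption above) that evaluates log L(θ) and shows it differs from −J(θ)/σ² only by a constant that does not depend on θ, so maximizing the likelihood is the same as minimizing the least-squares cost:

```python
import numpy as np

def log_likelihood(theta, X, y, sigma):
    """Log of L(theta) under the model y = theta^T x + eps, eps ~ N(0, sigma^2)."""
    resid = y - X @ theta
    return np.sum(-np.log(np.sqrt(2 * np.pi) * sigma) - resid ** 2 / (2 * sigma ** 2))

def J(theta, X, y):
    """Least-squares cost from earlier: (1/2) * sum of squared residuals."""
    resid = y - X @ theta
    return 0.5 * np.sum(resid ** 2)

# For any parameter vector, log L + J/sigma^2 equals the same constant, -n*log(sqrt(2*pi)*sigma).
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(5), rng.normal(size=5)])
y = rng.normal(size=5)
sigma = 1.0
for theta in (np.zeros(2), np.array([1.0, -2.0])):
    print(log_likelihood(theta, X, y, sigma) + J(theta, X, y) / sigma ** 2)
```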

[Figure: three panels plotting y against x for the same dataset, fit with polynomials of increasing degree (left: straight-line fit; middle: quadratic fit; right: fifth-order polynomial).]
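Figures along these lines can be regenerated with a short script; the data points below are made up for illustration (the notes do not list the values actually plotted):

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative data only; the notes do not give the plotted values.
x = np.array([0.5, 1.0, 2.0, 3.0, 4.5, 6.0])
y = np.array([1.2, 1.8, 2.6, 2.9, 3.3, 3.4])

fig, axes = plt.subplots(1, 3, figsize=(12, 3))
for ax, degree in zip(axes, [1, 2, 5]):
    coeffs = np.polyfit(x, y, degree)       # fit y = sum_j theta_j x^j of the given degree
    xs = np.linspace(0, 7, 200)
    ax.plot(x, y, "o")
    ax.plot(xs, np.polyval(coeffs, xs))
    ax.set_title(f"degree {degree}")
    ax.set_xlabel("x")
    ax.set_ylabel("y")
plt.tight_layout()
plt.show()
```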

Instead, if we had added an extra feature x^2, and fit y = θ_0 + θ_1 x + θ_2 x^2, then we obtain a slightly better fit to the data. (See middle figure.) Naively, it might seem that the more features we add, the better. However, there is also a danger in adding too many features: the rightmost figure is the result of fitting a 5-th order polynomial y = \sum_{j=0}^{5} \theta_j x^j. We see that even though the fitted curve passes through the data perfectly, we would not expect this to be a very good predictor of, say, housing prices (y) for different living areas (x). Without formally defining what these terms mean, we’ll say the figure on the left shows an instance of underfitting—in which the data clearly shows structure not captured by the model—and the figure on the right is an example of overfitting. (Later in this class, when we talk about learning theory we’ll formalize some of these notions, and also define more carefully just what it means for a hypothesis to be good or bad.)

As discussed previously, and as shown in the example above, the choice of features is important to ensuring good performance of a learning algorithm. (When we talk about model selection, we’ll also see algorithms for automatically choosing a good set of features.) In this section, let us briefly talk about the locally weighted linear regression (LWR) algorithm which, assuming there is sufficient training data, makes the choice of features less critical. This treatment will be brief, since you’ll get a chance to explore some of the properties of the LWR algorithm yourself in the homework.

In the original linear regression algorithm, to make a prediction at a query point x (i.e., to evaluate h(x)), we would:

  1. Fit θ to minimize \sum_i \left( y^{(i)} - \theta^T x^{(i)} \right)^2.

  2. Output θ^T x.

In contrast, the locally weighted linear regression algorithm does the following:

  1. Fit θ to minimize \sum_i w^{(i)} \left( y^{(i)} - \theta^T x^{(i)} \right)^2.

  2. Output θ^T x.

Here, the w^(i)’s are non-negative valued weights. Intuitively, if w^(i) is large for a particular value of i, then in picking θ, we’ll try hard to make (y^(i) − θ^T x^(i))^2 small. If w^(i) is small, then the (y^(i) − θ^T x^(i))^2 error term will be pretty much ignored in the fit. A fairly standard choice for the weights is^4

w^{(i)} = \exp\left( -\frac{(x^{(i)} - x)^2}{2\tau^2} \right)

Note that the weights depend on the particular point x at which we’re trying to evaluate h(x). Moreover, if |x^(i) − x| is small, then w^(i) is close to 1; and if |x^(i) − x| is large, then w^(i) is small. Hence, θ is chosen giving a much higher “weight” to the (errors on) training examples close to the query point x. (Note also that while the formula for the weights takes a form that is cosmetically similar to the density of a Gaussian distribution, the w^(i)’s do not directly have anything to do with Gaussians, and in particular the w^(i) are not random variables, normally distributed or otherwise.) The parameter τ controls how quickly the weight of a training example falls off with distance of its x^(i) from the query point x; τ is called the bandwidth parameter, and is also something that you’ll get to experiment with in your homework.

Locally weighted linear regression is the first example we’re seeing of a non-parametric algorithm. The (unweighted) linear regression algorithm that we saw earlier is known as a parametric learning algorithm, because it has a fixed, finite number of parameters (the θ_i’s), which are fit to the data. Once we’ve fit the θ_i’s and stored them away, we no longer need to keep the training data around to make future predictions. In contrast, to make predictions using locally weighted linear regression, we need to keep the entire training set around. The term “non-parametric” (roughly) refers to the fact that the amount of stuff we need to keep in order to represent the hypothesis h grows linearly with the size of the training set.

(^4) If x is vector-valued, this is generalized to be w^(i) = exp(−(x^(i) − x)^T (x^(i) − x)/(2τ^2)), or w^(i) = exp(−(x^(i) − x)^T Σ^{-1} (x^(i) − x)/(2τ^2)), for an appropriate choice of τ or Σ.
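Putting the two steps together, here is a minimal, hypothetical sketch of making one locally weighted prediction at a query point (the weighted least-squares step is solved in closed form via its normal equations; the names are illustrative, not from the notes):

```python
import numpy as np

def lwr_predict(X, y, x_query, tau):
    """Locally weighted linear regression prediction at a single query point.

    X: (n, d) design matrix (including the intercept column), y: (n,) targets,
    x_query: (d,) query point, tau: bandwidth controlling how fast weights fall off.
    """
    diffs = X - x_query                                    # x^(i) - x for every i
    w = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * tau ** 2))
    W = np.diag(w)
    # Weighted least squares: minimize sum_i w^(i) (y^(i) - theta^T x^(i))^2,
    # whose minimizer solves (X^T W X) theta = X^T W y.
    theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    return theta @ x_query

# Example usage with the toy housing data.
x_raw = np.array([2104.0, 1600.0, 2400.0, 1416.0, 3000.0])
y = np.array([400.0, 330.0, 369.0, 232.0, 540.0])
X = np.column_stack([np.ones_like(x_raw), x_raw])
print(lwr_predict(X, y, np.array([1.0, 2000.0]), tau=500.0))
```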