Dynamic Programming Solution to the 0-1 Knapsack Problem, Study notes of Computer Science

This lecture introduces the 0-1 knapsack problem and its formal description, followed by a dynamic programming solution to this optimization problem. It covers the structure of an optimal solution, the recursive definition of the value of an optimal solution, bottom-up computation, and construction of an optimal solution.

Typology: Study notes (pre-2010). Uploaded on 08/16/2009 by koofers-user-zy0.
Lecture 13: The Knapsack Problem

Outline of this Lecture

Introduction to the 0-1 Knapsack Problem.

A dynamic programming solution to this problem.

0-1 Knapsack Problem

Informal Description: We have computed n data files that we want to store, and we have W bytes of storage available.

File i has size w_i bytes and takes v_i minutes to recompute.

We want to avoid as much recomputing as possible, so we want to find a subset S of the files to store such that:

1. The files in S have combined size at most W.
2. The total computing time of the stored files is as large as possible.

We cannot store parts of files; it is the whole file or nothing.

Formally: find S ⊆ {1, 2, ..., n} maximizing Σ_{i∈S} v_i subject to Σ_{i∈S} w_i ≤ W.

How should we select the files?

Recall: Divide-and-Conquer

  1. Partition the problem into subproblems.
  2. Solve the subproblems.
  3. Combine the solutions to solve the original one.

Remark: If the subproblems are not independent, i.e. subproblems share subsubproblems, then a divide-and-conquer algorithm repeatedly solves the common subsubproblems.

Thus, it does more work than necessary!

Question: Any better solution?

Yes–Dynamic programming (DP)!

The Idea of Dynamic Programming

Dynamic programming is a method for solving optimization problems.

The idea: Compute the solutions to the subsubproblems once and store the solutions in a table, so that they can be reused (repeatedly) later.

Remark: We trade space for time.

The Idea of Developing a DP Algorithm

Step 1: Structure: Characterize the structure of an optimal solution.

Step 2: Recursive definition: Recursively define the value of an optimal solution.

Step 3: Bottom-up computation: Compute the value of an optimal solution in a bottom-up fashion by using a table structure.

Step 4: Construction of optimal solution: Construct an optimal solution from computed information.

Steps 3 and 4 may often be combined.

Remarks on the Dynamic Programming Approach

Steps 1-3 form the basis of a dynamic-programming solution to a problem.

Step 4 can be omitted if only the value of an optimal solution is required.

Developing a DP Algorithm for Knapsack

Step 1: Decompose the problem into smaller problems. Define V[i,w] to be the maximum total computing time of a subset of the files {1, 2, ..., i} whose combined size is at most w.

Step 2: Recursively define the value of an optimal solution in terms of solutions to smaller problems.

Initial Settings: Set

  V[0,w] = 0 for 0 ≤ w ≤ W (no item),
  V[i,w] = -∞ for w < 0 (illegal).

Recursive Step: Use

  V[i,w] = max{ V[i-1,w], v_i + V[i-1, w - w_i] }

for 1 ≤ i ≤ n and 0 ≤ w ≤ W.
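The recursive definition (as reconstructed: V[i,w] = max{ V[i-1,w], v_i + V[i-1, w - w_i] }, with V[0,w] = 0 and V[i,w] = -∞ for w < 0) can be sketched as a memoized Python function. The data values and names below are illustrative, not part of the lecture:

```python
from functools import lru_cache

# Illustrative 1-indexed data: index 0 is unused padding.
v = [None, 10, 40, 30, 50]   # computing times ("values")
w = [None, 5, 4, 6, 3]       # file sizes ("weights")

@lru_cache(maxsize=None)
def V(i, cap):
    """Max total computing time using files 1..i with storage limit cap."""
    if cap < 0:
        return float("-inf")    # illegal: storage exceeded
    if i == 0:
        return 0                # no items
    # Either leave file i, or take it (spend w[i] bytes, gain v[i] minutes).
    return max(V(i - 1, cap), v[i] + V(i - 1, cap - w[i]))

print(V(4, 10))  # 90 for this data
```

The -∞ sentinel lets the "take" branch be written unconditionally: an infeasible choice can never win the max.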

Correctness of the Method for Computing V[i,w]

Lemma: For 1 ≤ i ≤ n and 0 ≤ w ≤ W,

  V[i,w] = max{ V[i-1,w], v_i + V[i-1, w - w_i] }.

Proof: To compute V[i,w] we note that we have only two choices for file i:

Leave file i: The best we can do with files {1, 2, ..., i-1} and storage limit w is V[i-1,w].

Take file i (only possible if w_i ≤ w): Then we gain v_i of computing time, but have spent w_i bytes of our storage. The best we can do with the remaining files {1, 2, ..., i-1} and storage w - w_i is V[i-1, w - w_i]. In total, we get v_i + V[i-1, w - w_i].

Note that if w_i > w, then v_i + V[i-1, w - w_i] = -∞, so the lemma is correct in any case.

Example of the Bottom-up Computation

Let W = 10, n = 4, and

  i   : 1   2   3   4
  v_i : 10  40  30  50
  w_i : 5   4   6   3

  V[i,w]  w = 0   1   2   3   4   5   6   7   8   9   10
  i = 0       0   0   0   0   0   0   0   0   0   0   0
  i = 1       0   0   0   0   0  10  10  10  10  10  10
  i = 2       0   0   0   0  40  40  40  40  40  50  50
  i = 3       0   0   0   0  40  40  40  40  40  50  70
  i = 4       0   0   0  50  50  50  50  90  90  90  90

Remarks:

The final output is V[4,10] = 90.

The method described does not tell which subset gives the optimal solution. (It is {2,4} in this example.)
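With only four files, the table's final answer can be cross-checked by brute force over all 2^4 subsets. This sketch uses the example data as reconstructed here (the variable names are mine):

```python
from itertools import combinations

v = {1: 10, 2: 40, 3: 30, 4: 50}   # computing times
w = {1: 5, 2: 4, 3: 6, 4: 3}       # file sizes
W = 10

# Enumerate every subset of {1, 2, 3, 4}; keep the best feasible one.
best_value, best_subset = 0, ()
for r in range(len(v) + 1):
    for subset in combinations(sorted(v), r):
        if sum(w[i] for i in subset) <= W:
            value = sum(v[i] for i in subset)
            if value > best_value:
                best_value, best_subset = value, subset

print(best_value, best_subset)  # 90 (2, 4)
```

Brute force is O(2^n) and only feasible for tiny n; it serves here purely as a sanity check on the DP table.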

The Dynamic Programming Algorithm

KnapSack(v, w, n, W) {
  for (w = 0 to W)
    V[0,w] = 0;
  for (i = 1 to n)
    for (w = 0 to W)
      if (w[i] <= w)
        V[i,w] = max{ V[i-1,w], v[i] + V[i-1, w - w[i]] };
      else
        V[i,w] = V[i-1,w];
  return V[n,W];
}

Time complexity: Clearly, O(nW).
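The bottom-up algorithm (as reconstructed above) translates directly into Python; the function name and the 1-indexed list convention are mine:

```python
def knapsack(v, w, n, W):
    """Bottom-up 0-1 knapsack in O(nW) time and space.
    v, w are 1-indexed lists of values and weights (index 0 unused)."""
    V = [[0] * (W + 1) for _ in range(n + 1)]   # row 0: no items, value 0
    for i in range(1, n + 1):
        for cap in range(W + 1):
            if w[i] <= cap:
                # Take file i or leave it, whichever is better.
                V[i][cap] = max(V[i - 1][cap], v[i] + V[i - 1][cap - w[i]])
            else:
                V[i][cap] = V[i - 1][cap]   # file i does not fit
    return V[n][W]

# Example data as reconstructed in this lecture:
print(knapsack([0, 10, 40, 30, 50], [0, 5, 4, 6, 3], 4, 10))  # 90
```

Guarding with `w[i] <= cap` replaces the -∞ sentinel of the recursive definition: out-of-range cells are simply never read.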

Constructing the Optimal Solution

Question: How do we use the values keep[i,w] to determine the subset S of items having the maximum computing time? (Here keep[i,w] = 1 if file i is included in an optimal solution for V[i,w], and 0 otherwise.)

If keep[i,w] is 1, then i ∈ S. We can now repeat this argument for keep[i-1, w - w_i].

If keep[i,w] is 0, then i ∉ S, and we repeat the argument for keep[i-1, w].

Therefore, the following partial program will output the elements of S:

  K = W;
  for (i = n downto 1)
    if (keep[i,K] == 1) {
      output i;
      K = K - w[i];
    }

The Complete Algorithm for the Knapsack Problem

KnapSack(v, w, n, W) {
  for (w = 0 to W)
    V[0,w] = 0;
  for (i = 1 to n)
    for (w = 0 to W)
      if ((w[i] <= w) and (v[i] + V[i-1, w - w[i]] > V[i-1,w])) {
        V[i,w] = v[i] + V[i-1, w - w[i]];
        keep[i,w] = 1;
      }
      else {
        V[i,w] = V[i-1,w];
        keep[i,w] = 0;
      }
  K = W;
  for (i = n downto 1)
    if (keep[i,K] == 1) {
      output i;
      K = K - w[i];
    }
  return V[n,W];
}
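A runnable Python version of the complete algorithm (a sketch under the reconstructed notation; function and variable names are mine), returning both the optimal value and the subset recovered from keep:

```python
def knapsack_with_solution(v, w, n, W):
    """0-1 knapsack returning (max value, chosen subset).
    v, w are 1-indexed lists of values and weights (index 0 unused)."""
    V = [[0] * (W + 1) for _ in range(n + 1)]
    keep = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for cap in range(W + 1):
            take = v[i] + V[i - 1][cap - w[i]] if w[i] <= cap else None
            if take is not None and take > V[i - 1][cap]:
                V[i][cap] = take
                keep[i][cap] = 1    # file i is in an optimal solution here
            else:
                V[i][cap] = V[i - 1][cap]
    # Trace back through keep[][] to recover the subset.
    K, subset = W, []
    for i in range(n, 0, -1):
        if keep[i][K] == 1:
            subset.append(i)
            K -= w[i]
    return V[n][W], sorted(subset)

value, subset = knapsack_with_solution([0, 10, 40, 30, 50], [0, 5, 4, 6, 3], 4, 10)
print(value, subset)  # 90 [2, 4]
```

Recording keep costs another O(nW) bits of space but makes the traceback a single O(n) pass, which is why Steps 3 and 4 are so easily combined.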