Artificial Intelligence Programming: Decision Trees and Learning Agents - Prof. Chris Brooks, Study notes of Computer Science

This document from the University of San Francisco's Computer Science Department covers the concept of learning agents, their performance standards, and the process of learning, including the acquisition of new knowledge and changes in behavior. It also delves into decision trees, their structure, and the process of constructing and using them for decision-making and classification. The document further explores different types of learning tasks and the goal of induction.

Typology: Study notes

Uploaded on 07/30/2009
Artificial Intelligence Programming
Decision Trees
Chris Brooks
Department of Computer Science
University of San Francisco


Rule Learning
• Previously, we've assumed that background knowledge was given to us by experts.
◦ Focused on how to use that knowledge.
• Today, we'll talk about how to acquire that knowledge from observation.
• Focus on learning propositional rules:
◦ sunny ∧ warm → PlayTennis
◦ cool ∧ (rain ∨ strongWind) → ¬PlayTennis
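The two propositional rules above can be sketched as predicates over a day's percepts. This is an illustrative encoding, not code from the course; the dictionary keys follow the four percepts introduced later in the notes.

```python
# Hand-written propositional rules, encoded as a Python predicate.
# Returns True/False when a rule fires, None when neither rule applies.

def play_tennis(day):
    # sunny ∧ warm → PlayTennis
    if day["outlook"] == "sunny" and day["temperature"] == "warm":
        return True
    # cool ∧ (rain ∨ strongWind) → ¬PlayTennis
    if day["temperature"] == "cool" and (
        day["outlook"] == "rainy" or day["wind"] == "strong"
    ):
        return False
    return None  # no rule covers this day

print(play_tennis({"outlook": "sunny", "temperature": "warm",
                   "humidity": "low", "wind": "weak"}))  # True
```

Rule learning, as introduced below, is the problem of inducing such predicates from data rather than writing them by hand.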

Learning
• What does it mean for an agent to learn?
• Agent acquires new knowledge
• Agent changes its behavior
• Agent improves its performance measure on a given task


Learning Agents
• Recall that at the beginning of the semester we talked about learning agents.

[Figure: the learning-agent architecture. A performance element interacts with the environment through sensors and actuators; a critic compares sensor feedback against a performance standard; a learning element uses that feedback to make changes to the performance element's knowledge; a problem generator proposes learning goals.]

Deduction vs. Induction
• Up to now, we've looked at cases where our agent is given general knowledge and uses it to solve a particular problem.
◦ It always takes Edge 2 minutes to cross the river, Suck always cleans a room, etc.
◦ This general-to-specific reasoning is known as deduction.
◦ Advantage: deduction is sound, assuming your knowledge is correct.
• Sometimes, you may not have general information about a problem.
• Instead, you might have data about particular instances of a problem.
• The problem then is to figure out a general rule from specific data.
• This is called induction.
◦ Most learning is an inductive process.
◦ Problem: induction is not sound.

Example
• We'll begin with the example of an agent deciding whether we should play tennis on a given day.
• There are four observable percepts:
◦ Outlook (sunny, rainy, overcast)
◦ Temperature (hot, mild, cool)
◦ Humidity (high, low)
◦ Wind (strong, weak)
• We don't have a model, but we do have some data about past decisions.
• Can we induce a general rule for when to play tennis?
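One way to picture this setup: each past day is a vector of the four percepts plus a yes/no label. The sketch below is illustrative; the example rows are made up, not the actual table from the course.

```python
# The four percepts and their possible values, as given on the slide.
ATTRIBUTES = {
    "outlook":     ["sunny", "rainy", "overcast"],
    "temperature": ["hot", "mild", "cool"],
    "humidity":    ["high", "low"],
    "wind":        ["strong", "weak"],
}

# Each past decision pairs a percept vector with the label (play or not).
# These rows are invented for illustration.
examples = [
    ({"outlook": "sunny",    "temperature": "hot",  "humidity": "high", "wind": "weak"},   False),
    ({"outlook": "overcast", "temperature": "hot",  "humidity": "high", "wind": "weak"},   True),
    ({"outlook": "rainy",    "temperature": "mild", "humidity": "high", "wind": "strong"}, False),
]

# Sanity check: every value comes from the declared attribute domains.
for day, label in examples:
    assert all(day[a] in values for a, values in ATTRIBUTES.items())
```

The induction task is then: given only rows like these, produce a rule that predicts the label for days we haven't seen.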

Types of Learning Tasks
• Unsupervised Learning
◦ In this case, there is no teacher to provide examples.
◦ The agent typically tries to find a "concept" or pattern in data.
◦ Statistical methods such as clustering fall into this category.
◦ Our agent might be told that day 1, day 4, and day 7 are similar and need to determine what characteristics make these days alike.

Types of Learning Tasks
• Reinforcement Learning
◦ This is a particular version of learning in which the agent only receives a reward for taking an action.
◦ May not know how optimal a reward is.
◦ Will not know the "best" action to take.
◦ Our agent might be presented with a sunny, hot, low-humidity, strong-wind day and asked to choose whether to play tennis.
◦ It chooses 'yes' and gets a reward of 0.

Classification
• The particular learning problem we are focusing on is sometimes known as classification.
◦ For a given input, determine which class it belongs to.
• Programs that can perform this task are referred to as classifiers.

Defining the Learning Problem
• We can phrase the learning problem as that of estimating a function f that tells us how to classify a set of inputs.
• An example is an input x together with the corresponding f(x), the class that x belongs to.
◦ <<Overcast, Cool, Low, Weak>, playTennis>
• We can define the learning task as follows:
◦ Given a collection of examples of f, find a function H that approximates f for our examples.
◦ H is called a hypothesis.
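Concretely, a hypothesis H is just some function we can compare against f on the labelled examples. A minimal sketch, with an invented second example and a deliberately crude H:

```python
# Examples are pairs (x, f(x)); the first is from the slide, the second invented.
examples = [
    (("overcast", "cool", "low", "weak"), True),   # <<Overcast, Cool, Low, Weak>, playTennis>
    (("sunny", "hot", "high", "strong"), False),
]

def H(x):
    """A crude hypothesis: play tennis unless the wind is strong."""
    outlook, temperature, humidity, wind = x
    return wind != "strong"

# A hypothesis is judged by how well it approximates f on the examples.
accuracy = sum(H(x) == fx for x, fx in examples) / len(examples)
print(accuracy)  # 1.0 on these two examples
```

Note that agreeing with f on the observed examples says nothing conclusive about unseen inputs; that gap is exactly why induction is not sound.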

Inductive Bias
• Notice that induction is not sound.
• In picking a hypothesis, we make an educated guess.
• The way in which we make this guess is called a bias.
• All learning algorithms have a bias; identifying it can help you understand the sorts of errors it will make.
• Examples:
◦ Occam's razor
◦ Most specific hypothesis
◦ Most general hypothesis
◦ Linear function

Observing Data
• Agents may have different means of observing examples of a hypothesis.
• A batch learning algorithm is presented with a large set of data all at once and selects a single hypothesis.
• An incremental learning algorithm receives examples one at a time and continually modifies its hypothesis.
◦ Batch is typically more accurate, but incremental may fit better with the agent's environment.
• An active learning agent is able to choose examples.
• A passive learning agent has examples presented to it by an outside source.
◦ Active learning is more powerful, but may not fit with the constraints of the domain.

Learning Decision Trees
• Decision trees are data structures that provide an agent with a means of classifying examples.
• At each node in the tree, an attribute is tested.

[Figure: an animal-classification decision tree.
Feathers?
  Yes → Does it fly? Yes → Albatross; No → Does it have long legs? Yes → Ostrich; No → Penguin.
  No → Does it eat meat?
    Yes → Does it have black stripes? Yes → Tiger; No → Cheetah.
    No → Does it have a long neck? Yes → Giraffe; No → Zebra.]
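The structure in the figure can be sketched as a small recursive data type: internal nodes test an attribute and branch on the answer, leaves name a class. This is an illustrative implementation (only the feathered half of the animal tree is filled in), not code from the course.

```python
class Node:
    """A decision-tree node: either a test over an attribute, or a leaf."""
    def __init__(self, test=None, branches=None, label=None):
        self.test = test          # attribute tested at this node
        self.branches = branches  # answer -> child Node
        self.label = label        # class name, if this is a leaf

    def classify(self, example):
        # Walk from this node to a leaf, following the example's answers.
        if self.label is not None:
            return self.label
        return self.branches[example[self.test]].classify(example)

# The "Feathers? = yes" subtree of the animal-classification tree above.
tree = Node("feathers", {
    "yes": Node("flies", {
        "yes": Node(label="albatross"),
        "no": Node("long legs", {
            "yes": Node(label="ostrich"),
            "no": Node(label="penguin"),
        }),
    }),
    "no": Node(label="(non-bird subtree omitted)"),
})

print(tree.classify({"feathers": "yes", "flies": "no", "long legs": "no"}))  # penguin
```

Classifying an example costs one attribute test per level, so at most the depth of the tree, regardless of how many examples were used to build it.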

Another Example
• R & N show a decision tree for determining whether to wait at a busy restaurant.
• The problem has the following inputs/attributes:
◦ Alternative nearby
◦ Has a bar
◦ Day of week
◦ Hungriness
◦ Crowd
◦ Price
◦ Raining?
◦ Reservation
◦ Type of restaurant
◦ Wait estimate

[Figure: the R & N restaurant decision tree. The root tests Patrons? (None/Some/Full); the Full branch tests WaitEstimate? (>60, 30-60, 10-30, 0-10), with further tests on Alternate?, Hungry?, Reservation?, Bar?, Fri/Sat?, and Raining? leading to Yes/No leaves.]

• Note that not all attributes are used.