Data Mining - Bayesian Classification (Study Notes)

These notes cover: Classification and Prediction; Bayes' Theorem: Basics; Bayes' Theorem; Towards the Naïve Bayesian Classifier; Naïve Bayesian Classifier: Training Dataset; and Avoiding the 0-Probability Problem.


Chapter 6. Classification and Prediction

  • What is classification? What is prediction?
  • Issues regarding classification and prediction
  • Classification by decision tree induction
  • Bayesian classification
  • Rule-based classification
  • Classification by back propagation
  • Support Vector Machines (SVM)
  • Associative classification
  • Lazy learners (or learning from your neighbors)
  • Other classification methods
  • Prediction
  • Accuracy and error measures
  • Ensemble methods
  • Model selection
  • Summary

Bayesian Classification: Why?

  • A statistical classifier: performs probabilistic prediction, i.e., predicts class-membership probabilities (the probability that a given tuple belongs to a particular class).
  • Foundation: based on Bayes' theorem, formulated by Thomas Bayes.
  • Performance: a simple Bayesian classifier, the naïve Bayesian classifier, has performance comparable to decision tree and selected neural network classifiers.
  • Class conditional independence: naïve Bayesian classifiers assume that the effect of an attribute value on a given class is independent of the values of the other attributes. This assumption is called class conditional independence.
  • Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct, so prior knowledge can be combined with observed data.
  • Standard: even when Bayesian methods are computationally intractable, they provide a standard of optimal decision making against which other methods can be measured.
  • Bayesian belief networks: graphical models that allow the representation of dependencies among subsets of attributes.

Bayes' Theorem

  • Given training data X, the posterior probability of a hypothesis H, P(H|X), follows Bayes' theorem:

      P(H|X) = P(X|H) P(H) / P(X)

  • Informally: posterior = likelihood × prior / evidence (a numeric sketch follows below).
  • The classifier predicts that X belongs to class Ci iff the probability P(Ci|X) is the highest among all P(Ck|X) for the k classes.
  • Practical difficulty: this requires initial knowledge of many probabilities, at significant computational cost.
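To make the formula concrete, here is a minimal Python sketch of Bayes' theorem; the numeric values are hypothetical, chosen only for illustration:

```python
# Minimal sketch of Bayes' theorem: P(H|X) = P(X|H) * P(H) / P(X).
# All numeric values are hypothetical, for illustration only.

def posterior(likelihood: float, prior: float, evidence: float) -> float:
    """Return P(H|X) given P(X|H), P(H), and P(X)."""
    return likelihood * prior / evidence

p_x_given_h = 0.30  # likelihood P(X|H)
p_h = 0.40          # prior      P(H)
p_x = 0.25          # evidence   P(X)

print(posterior(p_x_given_h, p_h, p_x))  # 0.48
```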

Towards Naïve Bayesian Classifier

  • Let D be a training set of tuples and their associated class labels, where each tuple is represented by an n-dimensional attribute vector X = (x1, x2, …, xn).
  • Suppose there are m classes C1, C2, …, Cm.
  • Classification derives the maximum posterior, i.e., the maximal P(Ci|X).
  • This can be derived from Bayes' theorem:

      P(Ci|X) = P(X|Ci) P(Ci) / P(X)

  • Since P(X) is constant for all classes, only P(X|Ci) P(Ci) needs to be maximized (see the sketch below).
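Under the class conditional independence assumption, P(X|Ci) is approximated as the product of the per-attribute probabilities P(xk|Ci), so the decision rule reduces to an argmax. A minimal Python sketch, where the tables of priors and conditional probabilities are placeholders to be estimated from D:

```python
import math

def classify(x, priors, cond_probs):
    """Pick the class Ci maximizing P(X|Ci) * P(Ci).

    x: dict mapping attribute -> observed value
    priors: dict mapping class -> P(Ci)
    cond_probs: dict mapping (class, attribute, value) -> P(xk|Ci)
    """
    best_class, best_score = None, -math.inf
    for c, prior in priors.items():
        # Sum log-probabilities instead of multiplying, to avoid underflow.
        score = math.log(prior)
        for attr, value in x.items():
            score += math.log(cond_probs[(c, attr, value)])
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```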


Naïve Bayesian Classifier: Training Dataset

Classes:
  C1: buys_computer = 'yes'
  C2: buys_computer = 'no'

Data sample:
  X = (age <= 30, income = medium, student = yes, credit_rating = fair)

Training data (the AllElectronics customer database used throughout the textbook):

  age      income   student   credit_rating   buys_computer
  <=30     high     no        fair            no
  <=30     high     no        excellent       no
  31…40    high     no        fair            yes
  >40      medium   no        fair            yes
  >40      low      yes       fair            yes
  >40      low      yes       excellent       no
  31…40    low      yes       excellent       yes
  <=30     medium   no        fair            no
  <=30     low      yes       fair            yes
  >40      medium   yes       fair            yes
  <=30     medium   yes       excellent       yes
  31…40    medium   no        excellent       yes
  31…40    high     yes       fair            yes
  >40      medium   no        excellent       no

Naïve Bayesian Classifier: An Example

  • P(Ci):
    P(buys_computer = "yes") = 9/14 = 0.643
    P(buys_computer = "no") = 5/14 = 0.357

  • Compute P(X|Ci) for each class:
    P(age = "<=30" | buys_computer = "yes") = 2/9 = 0.222
    P(age = "<=30" | buys_computer = "no") = 3/5 = 0.600
    P(income = "medium" | buys_computer = "yes") = 4/9 = 0.444
    P(income = "medium" | buys_computer = "no") = 2/5 = 0.400
    P(student = "yes" | buys_computer = "yes") = 6/9 = 0.667
    P(student = "yes" | buys_computer = "no") = 1/5 = 0.200
    P(credit_rating = "fair" | buys_computer = "yes") = 6/9 = 0.667
    P(credit_rating = "fair" | buys_computer = "no") = 2/5 = 0.400

  • For X = (age <= 30, income = medium, student = yes, credit_rating = fair):

    P(X|Ci):
    P(X | buys_computer = "yes") = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
    P(X | buys_computer = "no") = 0.600 × 0.400 × 0.200 × 0.400 = 0.019

    P(X|Ci) P(Ci):
    P(X | buys_computer = "yes") P(buys_computer = "yes") = 0.044 × 0.643 = 0.028
    P(X | buys_computer = "no") P(buys_computer = "no") = 0.019 × 0.357 = 0.007

    Therefore, X belongs to class "buys_computer = yes" (this computation is reproduced in code below).
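The whole example can be reproduced from the training table with a few lines of Python; this sketch hard-codes the 14 training tuples exactly as listed above:

```python
from collections import Counter

# Training tuples: (age, income, student, credit_rating, buys_computer),
# hard-coded from the training table above.
data = [
    ("<=30", "high", "no", "fair", "no"),
    ("<=30", "high", "no", "excellent", "no"),
    ("31…40", "high", "no", "fair", "yes"),
    (">40", "medium", "no", "fair", "yes"),
    (">40", "low", "yes", "fair", "yes"),
    (">40", "low", "yes", "excellent", "no"),
    ("31…40", "low", "yes", "excellent", "yes"),
    ("<=30", "medium", "no", "fair", "no"),
    ("<=30", "low", "yes", "fair", "yes"),
    (">40", "medium", "yes", "fair", "yes"),
    ("<=30", "medium", "yes", "excellent", "yes"),
    ("31…40", "medium", "no", "excellent", "yes"),
    ("31…40", "high", "yes", "fair", "yes"),
    (">40", "medium", "no", "excellent", "no"),
]

x = ("<=30", "medium", "yes", "fair")            # tuple to classify
class_counts = Counter(row[-1] for row in data)  # {'yes': 9, 'no': 5}

scores = {}
for c, n_c in class_counts.items():
    likelihood = 1.0
    for k, value in enumerate(x):
        # P(x_k | Ci) = fraction of class-c tuples with attribute k == value
        n_match = sum(1 for row in data if row[-1] == c and row[k] == value)
        likelihood *= n_match / n_c
    scores[c] = likelihood * n_c / len(data)     # P(X|Ci) * P(Ci)

print(scores)                       # {'yes': ~0.028, 'no': ~0.007}
print(max(scores, key=scores.get))  # 'yes'
```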

Naïve Bayesian Classifier: Comments

  • Advantages:
    • Easy to implement.
    • Good results obtained in most cases.
  • Disadvantages:
    • The class conditional independence assumption causes a loss of accuracy, because in practice dependencies exist among variables.
    • E.g., in hospital patient data, profile attributes (age, family history, etc.), symptoms (fever, cough, etc.), and diseases (lung cancer, diabetes, etc.) are interdependent.
    • Dependencies among these cannot be modeled by a naïve Bayesian classifier.
  • How to deal with these dependencies? Bayesian belief networks.


Bayesian Belief Networks

  • A Bayesian network (or belief network) is a probabilistic graphical model that represents a set of variables and their probabilistic independencies. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms: given symptoms, the network can be used to compute the probabilities of the presence of various diseases. A Bayesian belief network allows subsets of the variables to be conditionally independent.
  • A Bayesian belief network is defined by two components: (a) a directed acyclic graph and (b) a set of conditional probability tables (CPTs).
  • It is a graphical model of causal relationships:
    • Represents dependency among the variables.
    • Gives a specification of the joint probability distribution (see the sketch below).
  • Example network over four variables X, Y, Z, P:
    • Nodes: random variables.
    • Links: dependency.
    • X and Y are the parents of Z, and Y is the parent of P.
    • There is no direct dependency between Z and P.
    • The graph has no loops or cycles.
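For this example network, the joint distribution factors along the graph as P(X, Y, Z, P) = P(X) · P(Y) · P(Z | X, Y) · P(P | Y). A minimal Python sketch with binary variables; all CPT entries are hypothetical, for illustration only:

```python
# Joint probability of one assignment in the example network, from
# the DAG factorization: P(X, Y, Z, P) = P(X) * P(Y) * P(Z|X,Y) * P(P|Y).
# All probability values are hypothetical, for illustration only.

p_x = {True: 0.7, False: 0.3}                  # P(X)
p_y = {True: 0.4, False: 0.6}                  # P(Y)
p_z = {(True, True): 0.9, (True, False): 0.5,  # P(Z=True | X, Y)
       (False, True): 0.6, (False, False): 0.1}
p_p = {True: 0.8, False: 0.2}                  # P(P=True | Y)

def joint(x: bool, y: bool, z: bool, p: bool) -> float:
    pz = p_z[(x, y)] if z else 1 - p_z[(x, y)]
    pp = p_p[y] if p else 1 - p_p[y]
    return p_x[x] * p_y[y] * pz * pp

print(joint(True, True, True, False))  # 0.7 * 0.4 * 0.9 * 0.2 = 0.0504
```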

Training Bayesian Networks

  • Several scenarios:
    • Given both the network structure and all variables observable: learn only the CPTs (sketched below).
    • Network structure known, some variables hidden: use a gradient descent (greedy hill-climbing) method, analogous to neural network learning.
    • Network structure unknown, all variables observable: search through the model space to reconstruct the network topology.
    • Structure unknown, all variables hidden: no good algorithms are known for this purpose.
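In the first (fully observable) scenario, each CPT entry is simply a conditional relative frequency counted from the data. A minimal sketch for the Z node of the example network above; the observation rows are hypothetical:

```python
from collections import Counter

# Learn the CPT P(Z | X, Y) from fully observed data by counting:
#   P(Z=True | X=x, Y=y) = count(x, y, Z=True) / count(x, y)
# The observations below are hypothetical, for illustration only.
observations = [  # rows of (x, y, z)
    (True, True, True), (True, True, True), (True, True, False),
    (True, False, False), (False, True, True), (False, False, False),
]

parent_counts = Counter((x, y) for x, y, _ in observations)
triple_counts = Counter(observations)

cpt = {}  # maps parent assignment (x, y) -> estimated P(Z=True | x, y)
for (x, y), n in parent_counts.items():
    cpt[(x, y)] = triple_counts[(x, y, True)] / n

print(cpt[(True, True)])  # 2/3 ≈ 0.667
```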