Decision Tree Machine Learning, Study notes of Machine Learning

A decision tree is a widely used supervised machine learning algorithm for both classification and regression tasks. It makes a prediction by recursively partitioning the dataset into subsets based on the values of the input features, until each partition leads to a decision or prediction. Here's how a decision tree works:

Tree structure: A decision tree is shaped like an upside-down tree, with a root node at the top and branches extending downward. The root node represents the entire dataset.

Nodes: The tree has internal nodes and leaf nodes. An internal node represents a decision (a test on an input feature) and branches to further nodes. Leaf nodes are the terminal nodes and represent the output or class label.

Splitting: At each internal node the dataset is split into subsets based on a feature's value. The feature and value that give the best separation of the data (maximizing information gain, or equivalently minimizing impurity) are chosen for the split.
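To make the splitting criterion concrete, here is a small worked sketch (not part of the original notes) that computes the entropy of the buys_computer dataset used below and the information gain of splitting it on age; the helper function names are illustrative only.

# Worked example (assumed helper code): entropy and information gain for one split.
# The class counts below come from the 14-row buys_computer table used in these notes.
from math import log2

def entropy(pos, neg):
    # Shannon entropy of a node containing pos/neg examples
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            result -= p * log2(p)
    return result

# Whole dataset: 9 'yes' and 5 'no' values of buys_computer.
parent = entropy(9, 5)                                   # about 0.940 bits

# Splitting on age gives youth (2 yes, 3 no), middle_aged (4 yes, 0 no), senior (3 yes, 2 no).
children = [(2, 3), (4, 0), (3, 2)]
weighted = sum((p + n) / 14 * entropy(p, n) for p, n in children)

print(f"information gain for 'age': {parent - weighted:.3f}")   # about 0.246 bits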

import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_excel('C:/Users/kriti/OneDrive/Desktop/machine Learning/experiments/dt.
print(data)

    RID age income student credit_rating buys_computer
0 1 youth high no fair no
1 2 youth high no excellent no
2 3 middle_aged high no fair yes
3 4 senior medium no fair yes
4 5 senior low yes fair yes
5 6 senior low yes excellent no
6 7 middle_aged low yes excellent yes
7 8 youth medium no fair no
8 9 youth low yes fair yes
9 10 senior medium yes fair yes
10 11 youth medium yes excellent yes
11 12 middle_aged medium no excellent yes
12 13 middle_aged high yes fair yes
13 14 senior medium no excellent no

# Encode the text (non-numerical) data into numerical values
from sklearn.preprocessing import LabelEncoder

# create an instance of LabelEncoder for each column
le_age = LabelEncoder()
le_income = LabelEncoder()
le_student = LabelEncoder()
le_credit_rating = LabelEncoder()
le_buys_computer = LabelEncoder()
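The read_excel path above is cut off in the preview. As a stand-in (an assumption, not part of the original notebook), the same 14-row table can be built directly in pandas so the remaining cells run without the Excel file:

# Assumed alternative to read_excel: rebuild the table shown above in code.
import pandas as pd

rows = [
    (1, 'youth', 'high', 'no', 'fair', 'no'),
    (2, 'youth', 'high', 'no', 'excellent', 'no'),
    (3, 'middle_aged', 'high', 'no', 'fair', 'yes'),
    (4, 'senior', 'medium', 'no', 'fair', 'yes'),
    (5, 'senior', 'low', 'yes', 'fair', 'yes'),
    (6, 'senior', 'low', 'yes', 'excellent', 'no'),
    (7, 'middle_aged', 'low', 'yes', 'excellent', 'yes'),
    (8, 'youth', 'medium', 'no', 'fair', 'no'),
    (9, 'youth', 'low', 'yes', 'fair', 'yes'),
    (10, 'senior', 'medium', 'yes', 'fair', 'yes'),
    (11, 'youth', 'medium', 'yes', 'excellent', 'yes'),
    (12, 'middle_aged', 'medium', 'no', 'excellent', 'yes'),
    (13, 'middle_aged', 'high', 'yes', 'fair', 'yes'),
    (14, 'senior', 'medium', 'no', 'excellent', 'no'),
]
data = pd.DataFrame(rows, columns=['RID', 'age', 'income', 'student',
                                   'credit_rating', 'buys_computer'])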


# fit_transform: add a numeric column for each categorical column
data['age_n'] = le_age.fit_transform(data['age'])
data['income_n'] = le_income.fit_transform(data['income'])
data['student_n'] = le_student.fit_transform(data['student'])
data['credit_rating_n'] = le_credit_rating.fit_transform(data['credit_rating'])
data['buys_computer_n'] = le_buys_computer.fit_transform(data['buys_computer'])
data.head()

data.head() shows the first five rows with the original text columns plus the new encoded columns (age_n, income_n, student_n, credit_rating_n, buys_computer_n) appended.

# drop the original text columns, keeping only the encoded ones
data_new = data.drop(['age', 'income', 'student', 'credit_rating', 'buys_computer'], axis = 1)
data_new

    RID age_n income_n student_n credit_rating_n buys_computer_n
0 1 2 0 0 1 0
1 2 2 0 0 0 0
2 3 0 0 0 1 1
3 4 1 2 0 1 1
4 5 1 1 1 1 1
5 6 1 1 1 0 0
6 7 0 1 1 0 1
7 8 2 2 0 1 0
8 9 2 1 1 1 1
9 10 1 2 1 1 1
10 11 2 2 1 0 1
11 12 0 2 0 0 1
12 13 0 0 1 1 1
13 14 1 2 0 0 0
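The cells that build the feature matrix and the target are not included in the preview. A plausible reconstruction, using the names x, y and feature_cols that the later cells expect, is shown here; note that LabelEncoder assigns integer codes in alphabetical order (middle_aged=0, senior=1, youth=2, and so on), which is exactly what the encoded table above shows.

# Assumed reconstruction of the missing cells that define the features and the target.
feature_cols = ['age_n', 'income_n', 'student_n', 'credit_rating_n']
x = data_new[feature_cols]          # input features
y = data_new['buys_computer_n']     # target label (1 = buys a computer)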

# for splitting
from sklearn.model_selection import train_test_split
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size = 0.25, random_state =

x_train

    age_n income_n student_n credit_rating_n
5 1 1 1 0
8 2 1 1 1
2 0 0 0 1
1 2 0 0 0
13 1 2 0 0
4 1 1 1 1
7 2 2 0 1
10 2 2 1 0
3 1 2 0 1
6 0 1 1 0

y_train

5 0
8 1
2 1
1 0
13 0
4 1
7 0
10 1
3 1
6 1
Name: buys_computer_n, dtype: int

x_test

    age_n income_n student_n credit_rating_n
9 1 2 1 1
11 0 2 0 0
0 2 0 0 1
12 0 0 1 1

y_test

9 1
11 1
0 0
12 1
Name: buys_computer_n, dtype: int

# concatenating the training dataset
pd.concat([x_train, y_train], axis = 1)

    age_n income_n student_n credit_rating_n buys_computer_n
5 1 1 1 0 0
8 2 1 1 1 1
2 0 0 0 1 1
1 2 0 0 0 0
13 1 2 0 0 0
4 1 1 1 1 1
7 2 2 0 1 0
10 2 2 1 0 1
3 1 2 0 1 1
6 0 1 1 0 1
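The cells that train the tree and generate predictions are also missing from the preview. A minimal sketch consistent with the later cells, which refer to a fitted classifier dt and to predictions y_pred, could look like the following; the entropy criterion is an assumption.

# Assumed reconstruction of the missing training/prediction cells.
from sklearn.tree import DecisionTreeClassifier

dt = DecisionTreeClassifier(criterion='entropy')   # criterion choice is an assumption
dt.fit(x_train, y_train)                           # learn the tree from the training split
y_pred = dt.predict(x_test)                        # predict labels for the test split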

# confusion matrix of the test-set predictions
from sklearn.metrics import confusion_matrix
import seaborn as sns

cm = confusion_matrix(y_test, y_pred)
plt.figure(figsize = (5, 5))
sns.heatmap(data = cm, annot = True, cmap = 'Blues')
plt.ylabel('Actual values')
plt.xlabel('Predicted values')
plt.show()
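Alongside the heatmap, a single-number summary of the same predictions can be printed with scikit-learn's accuracy_score (this cell is not part of the original notes):

# Optional check: overall accuracy on the test split.
from sklearn.metrics import accuracy_score
print('test accuracy:', accuracy_score(y_test, y_pred))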

# graphical visualization of the tree
from sklearn.tree import plot_tree   # produces a figure of the fitted tree

plt.figure(figsize = (12, 12))
dec_tree = plot_tree(decision_tree = dt, feature_names = feature_cols,
                     class_names = ["0", "1"], filled = True, rounded = True)
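As a lighter-weight companion to the figure (again, not part of the original notes), sklearn.tree.export_text prints the learned rules as indented text, which is handy when a plot window is not available:

# Optional: print the fitted tree's rules as plain text.
from sklearn.tree import export_text
print(export_text(dt, feature_names=feature_cols))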