
Data Preprocessing in Data Mining, Lecture notes of Data Mining

Data preprocessing slides with all stages covered.

Typology: Lecture notes (2017/2018), uploaded 05/04/2018 by anmol-sharma-2


Data Preprocessing Techniques

Chapter 3: Data Preprocessing

- Data Preprocessing: An Overview
- Data Quality
- Major Tasks in Data Preprocessing
- Data Cleaning
- Data Integration
- Data Reduction
- Data Transformation and Data Discretization
- Summary

Major Tasks in Data Preprocessing

- Data cleaning: fill in missing values, smooth noisy data, identify or remove outliers, and resolve inconsistencies
- Data integration: integration of multiple databases, data cubes, or files
- Data reduction: dimensionality reduction, numerosity reduction, data compression
- Data transformation and data discretization: normalization, concept hierarchy generation
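As a small illustration of the normalization task listed above, here is a min-max rescaling sketch (the function name and sample values are my own, not from the slides):

```python
def min_max_normalize(values, new_min=0.0, new_max=1.0):
    """Min-max normalization: linearly rescale values to [new_min, new_max]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) * (new_max - new_min) + new_min for v in values]

print(min_max_normalize([20, 30, 40]))  # -> [0.0, 0.5, 1.0]
```

Z-score normalization and decimal scaling follow the same shape: a per-attribute linear map applied value by value.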


Incomplete (Missing) Data

- Data is not always available; e.g., many tuples have no recorded value for several attributes, such as customer income in sales data
- Missing data may be due to:
  - equipment malfunction
  - inconsistency with other recorded data (and thus deleted)
  - data not entered due to misunderstanding
  - certain data not considered important at the time of entry
  - failure to register history or changes of the data
- Missing data may need to be inferred

How to Handle Missing Data?

- Ignore the tuple: usually done when the class label is missing (in classification); not effective when the percentage of missing values per attribute varies considerably
- Fill in the missing value manually: tedious and often infeasible
- Fill it in automatically with:
  - a global constant, e.g., "unknown" (which risks creating a new class!)
  - the attribute mean
  - the attribute mean for all samples belonging to the same class (smarter)
  - the most probable value: inference-based, e.g., a Bayesian formula or a decision tree
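The mean-imputation strategy above can be sketched in a few lines of Python (the function name and sample incomes are my own illustration, not from the slides):

```python
from statistics import mean

def impute_mean(values):
    """Replace missing entries (None) with the mean of the observed values."""
    observed = [v for v in values if v is not None]
    fill = mean(observed)
    return [fill if v is None else v for v in values]

incomes = [30.0, None, 50.0, 40.0, None]
print(impute_mean(incomes))  # -> [30.0, 40.0, 50.0, 40.0, 40.0]
```

The class-conditional variant would group the values by class label first and apply the same fill within each group.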

How to Handle Noisy Data?

- Binning: first sort the data and partition it into (equal-frequency) bins; then smooth by bin means, bin medians, bin boundaries, etc.
- Regression: smooth by fitting the data to regression functions
- Clustering: detect and remove outliers
- Combined computer and human inspection: detect suspicious values and have a human check them (e.g., deal with possible outliers)

Data Cleaning as a Process

- Data discrepancy detection
  - Use metadata (e.g., domain, range, dependency, distribution)
  - Check field overloading
  - Check the uniqueness rule, consecutive rule, and null rule
  - Use commercial tools:
    - Data scrubbing: use simple domain knowledge (e.g., postal codes, spell-check) to detect errors and make corrections
    - Data auditing: analyze the data to discover rules and relationships and detect violators (e.g., correlation and clustering to find outliers)
- Data migration and integration
  - Data migration tools: allow transformations to be specified
  - ETL (Extraction/Transformation/Loading) tools: allow users to specify transformations through a graphical user interface
- Integration of the two processes
  - Iterative and interactive (e.g., Potter's Wheel)

Data Integration

- Data integration: combines data from multiple sources into a coherent store
- Schema integration: e.g., A.cust-id ≡ B.cust-#; integrate metadata from different sources
- Entity identification problem: identify real-world entities across multiple data sources, e.g., Bill Clinton = William Clinton
- Detecting and resolving data value conflicts: for the same real-world entity, attribute values from different sources differ; possible reasons include different representations and different scales, e.g., metric vs. British units

Handling Redundancy in Data Integration

- Redundant data occur often when integrating multiple databases
  - Object identification: the same attribute or object may have different names in different databases
  - Derivable data: one attribute may be a "derived" attribute in another table, e.g., annual revenue
- Redundant attributes may be detected by correlation analysis and covariance analysis
- Careful integration of data from multiple sources may help reduce or avoid redundancies and inconsistencies and improve mining speed and quality

Chi-Square Calculation: An Example

                          Play chess   Not play chess   Sum (row)
Like science fiction      250 (90)     200 (360)        450
Not like science fiction  50 (210)     1000 (840)       1050
Sum (col.)                300          1200             1500

χ² (chi-square) calculation (the numbers in parentheses are the expected counts, computed from the marginal totals, e.g., 450 × 300 / 1500 = 90):

χ² = (250 − 90)²/90 + (50 − 210)²/210 + (200 − 360)²/360 + (1000 − 840)²/840 = 507.93

It shows that like_science_fiction and play_chess are correlated in the group.
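The chi-square computation above generalizes to any contingency table; a minimal sketch (the function name is mine, not from the slides):

```python
def chi_square(table):
    """Pearson chi-square statistic for a contingency table (list of rows)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_sums[i] * col_sums[j] / total
            chi2 += (observed - expected) ** 2 / expected
    return chi2

table = [[250, 200],   # like science fiction
         [50, 1000]]   # not like science fiction
print(round(chi_square(table), 2))  # -> 507.94 (the slide rounds to 507.93)
```

With 1 degree of freedom for a 2×2 table, a value this large far exceeds any common significance threshold, so the correlation is clear.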

Correlation Analysis (Numeric Data)

Correlation coefficient (also called Pearson's product-moment coefficient):

r_{A,B} = Σ_{i=1}^{n} (a_i − Ā)(b_i − B̄) / ((n − 1) σ_A σ_B) = (Σ_{i=1}^{n} a_i b_i − n·Ā·B̄) / ((n − 1) σ_A σ_B)

where n is the number of tuples, Ā and B̄ are the respective means of A and B, σ_A and σ_B are the respective standard deviations of A and B, and Σ a_i b_i is the sum of the AB cross-products.

- If r_{A,B} > 0, A and B are positively correlated (A's values increase as B's do); the higher the value, the stronger the correlation.
- r_{A,B} = 0: independent; r_{A,B} < 0: negatively correlated.
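The coefficient can be computed directly from its definition; a sketch (the function name is mine, and the data reuses the stock-price series from the covariance example below it on the slides):

```python
from math import sqrt

def pearson_r(a, b):
    """Pearson product-moment correlation coefficient of two equal-length series."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cross = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    # The (n - 1) factors in numerator and denominator cancel, leaving
    # the cross-product sum over the product of deviation norms.
    norm_a = sqrt(sum((x - mean_a) ** 2 for x in a))
    norm_b = sqrt(sum((y - mean_b) ** 2 for y in b))
    return cross / (norm_a * norm_b)

print(round(pearson_r([2, 3, 5, 4, 6], [5, 8, 10, 11, 14]), 4))  # -> 0.9407
```

A value near +1 confirms the strong positive relationship between the two stocks.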

Co-Variance: An Example

- Covariance can be simplified in computation as Cov(A, B) = E(A·B) − Ā·B̄
- Suppose two stocks A and B have the following values in one week: (2, 5), (3, 8), (5, 10), (4, 11), (6, 14).
- Question: if the stocks are affected by the same industry trends, will their prices rise or fall together?
- E(A) = (2 + 3 + 5 + 4 + 6)/5 = 20/5 = 4
- E(B) = (5 + 8 + 10 + 11 + 14)/5 = 48/5 = 9.6
- Cov(A, B) = (2×5 + 3×8 + 5×10 + 4×11 + 6×14)/5 − 4 × 9.6 = 42.4 − 38.4 = 4
- Thus, A and B rise together, since Cov(A, B) > 0.
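The simplified formula E(A·B) − E(A)·E(B) used in the example translates directly to code (the function name is my own sketch):

```python
def covariance(a, b):
    """Population covariance via the simplified form Cov(A,B) = E(A*B) - E(A)*E(B)."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    mean_ab = sum(x * y for x, y in zip(a, b)) / n
    return mean_ab - mean_a * mean_b

stock_a = [2, 3, 5, 4, 6]
stock_b = [5, 8, 10, 11, 14]
print(round(covariance(stock_a, stock_b), 6))  # -> 4.0
```

A positive result reproduces the slide's conclusion: the two stocks tend to rise together.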
