How to create and optimize a baseline Decision Tree model for Binary Classification?

This recipe helps you create and optimize a baseline Decision Tree model for Binary Classification


Recipe Objective

Often, while working on a dataset with a machine learning model, we don't know which set of hyperparameters will give the best result. Passing every set of hyperparameters through the model manually and checking the results would be tedious, and may not be feasible at all.

To find the best set of hyperparameters we can use Grid Search. Grid Search passes all combinations of the hyperparameters one by one into the model and checks each result. Finally, it gives us the set of hyperparameters that produced the best result.
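To see concretely what "all combinations" means, scikit-learn's ParameterGrid (the same expansion GridSearchCV performs internally) can enumerate a small grid; the toy values below are just for illustration and are not part of the recipe itself:

from sklearn.model_selection import ParameterGrid
toy_grid = {"criterion": ["gini", "entropy"], "max_depth": [2, 4]}
for params in ParameterGrid(toy_grid):
    print(params)  # 2 x 2 = 4 candidate settings, each of which GridSearchCV would fit and score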

So this recipe is a short example of how to create and optimize a baseline Decision Tree model for binary classification.

Step 1 - Import the library - GridSearchCV

from sklearn import decomposition, datasets
from sklearn import tree
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler

Here we have imported various modules like decomposition, datasets, tree, Pipeline, StandardScaler and GridSearchCV from different libraries. We will understand the use of each of these later, while using them in the code snippets.
For now, just have a look at these imports.

Step 2 - Setup the Data

Here we have used datasets to load the inbuilt breast cancer dataset, and we have created objects X and y to store the data and the target values respectively.

cancer = datasets.load_breast_cancer()
X = cancer.data
y = cancer.target
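As a quick sanity check (an extra step, not part of the original recipe), you can inspect the shape of the data and the class balance; the breast cancer dataset has 569 samples, 30 features and two classes:

import numpy as np
print(X.shape)         # (569, 30)
print(np.bincount(y))  # [212 357] -- 212 malignant vs 357 benign samples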

Step 3 - Using StandardScaler and PCA

StandardScaler scales the data by making the mean of each feature 0 and its standard deviation 1 (note that it standardizes the features; it does not remove outliers). So we are creating an object sc to use StandardScaler.

sc = StandardScaler()

We are also using Principal Component Analysis (PCA), which reduces the dimensionality of the features by creating new features that retain most of the variance of the original data.

pca = decomposition.PCA()
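To get a feel for what these two transformers do before wiring them into the pipeline, you can fit copies of them directly; this standalone demonstration (with the hypothetical names X_scaled and pca_demo) is an addition to the recipe:

X_scaled = sc.fit_transform(X)                 # each column now has mean ~0 and std ~1
pca_demo = decomposition.PCA().fit(X_scaled)
print(pca_demo.explained_variance_ratio_[:5])  # share of variance captured by the first five components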

Step 4 - Using DecisionTreeClassifier

Here, we are using DecisionTreeClassifier as the machine learning model to tune with GridSearchCV. So we have created an object dtreeCLF.

dtreeCLF = tree.DecisionTreeClassifier()
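Since the goal is a baseline model, it can also be useful to know how an untuned tree performs on the raw features; this comparison step is an extra, not part of the original recipe (random_state is fixed only to make the sketch reproducible):

baseline_scores = cross_val_score(tree.DecisionTreeClassifier(random_state=0), X, y, cv=3, scoring="accuracy")
print(baseline_scores.mean())  # untuned accuracy, to compare against the tuned pipeline later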

Step 5 - Using Pipeline for GridSearchCV

Pipeline helps us by passing the modules one by one through GridSearchCV, so that we can get the best parameters for each of them. So we are making an object pipe to create a pipeline of the three objects sc, pca and dtreeCLF.

pipe = Pipeline(steps=[("sc", sc), ("pca", pca), ("dtreeCLF", dtreeCLF)])
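The step names "sc", "pca" and "dtreeCLF" matter, because GridSearchCV addresses every tunable hyperparameter as <step name>__<parameter name>. If you want to list the valid parameter keys before building the grid (an optional inspection step), the pipeline can report them itself:

for name in sorted(pipe.get_params().keys()):
    print(name)  # includes e.g. pca__n_components and dtreeCLF__max_depth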

Now we have to define the parameters that we want to optimize for these three objects.
StandardScaler does not require any parameters to be optimized by GridSearchCV.
Principal Component Analysis requires a parameter "n_components" to be optimized. "n_components" signifies the number of components to keep after reducing the dimension.

n_components = list(range(1, X.shape[1] + 1, 1))

DecisionTreeClassifier requires two parameters, "criterion" and "max_depth", to be optimized by GridSearchCV. So we have set these two parameters as lists of values from which GridSearchCV will select the best value.

criterion = ["gini", "entropy"]
max_depth = [2, 4, 6, 8, 10]

Now we are creating a dictionary to set all the parameter options for the different objects. Each key must be prefixed with the name of the corresponding pipeline step, so the Decision Tree parameters use the "dtreeCLF" prefix defined above.

parameters = dict(pca__n_components=n_components,
                  dtreeCLF__criterion=criterion,
                  dtreeCLF__max_depth=max_depth)
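With 30 options for n_components, 2 criteria and 5 depths, this grid contains 30 x 2 x 5 = 300 candidate combinations, each of which GridSearchCV will fit and score on every cross-validation fold. You can confirm the count with ParameterGrid (an extra check, not part of the original recipe):

from sklearn.model_selection import ParameterGrid
print(len(ParameterGrid(parameters)))  # 300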

Step 6 - Using GridSearchCV and Printing Results

Before using GridSearchCV, let's have a look at its important parameters.

  • estimator: the model or pipeline on which we want to run GridSearchCV.
  • param_grid: a dictionary or list of parameters from which GridSearchCV has to select the best combination.
  • scoring: the evaluation metric used to judge model performance when deciding the best hyperparameters; if not specified, the estimator's default score is used.
Making an object clf_GS for GridSearchCV and fitting the dataset, i.e. X and y:

clf_GS = GridSearchCV(pipe, parameters)
clf_GS.fit(X, y)

Now we are using print statements to print the results. They give the values of the best hyperparameters as a result.

print("Best Number Of Components:", clf_GS.best_estimator_.get_params()["pca__n_components"])
print(clf_GS.best_estimator_.get_params()["dtreeCLF"])
CV_Result = cross_val_score(clf_GS, X, y, cv=3, n_jobs=-1, scoring="accuracy")
print(CV_Result)
print(CV_Result.mean())
print(CV_Result.std())

As an output we get:

Best Number Of Components: 11

DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=10,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')

[0.90526316 0.95263158 0.91534392]

0.924412884062007

0.020373618128389104
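If you prefer a single summary of the winning configuration, the fitted grid search also exposes best_params_ and best_score_; this extra printout is an addition to the recipe, and its exact values will vary with the scikit-learn version and the tree's inherent randomness:

print(clf_GS.best_params_)  # e.g. {'dtreeCLF__criterion': ..., 'dtreeCLF__max_depth': ..., 'pca__n_components': ...}
print(clf_GS.best_score_)   # mean cross-validated accuracy of the best combination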
