How to create and optimize a baseline Decision Tree model for Regression in Python

This recipe helps you create and optimize a baseline Decision Tree model for Regression in Python

Recipe Objective

Often, when working on a dataset with a machine learning model, we do not know in advance which set of hyperparameters will give the best result. Passing every combination of hyperparameters through the model manually and checking the results is tedious and quickly becomes impractical.

To find the best set of hyperparameters we can use Grid Search. Grid Search passes every combination of hyperparameters into the model one by one and checks each result; finally, it returns the combination of hyperparameters that produced the best score.
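As a quick illustration of what "every combination" means, here is a small sketch using scikit-learn's ParameterGrid (the parameter names and values below are illustrative, not part of this recipe):

from sklearn.model_selection import ParameterGrid

# A hypothetical grid: 2 criteria x 3 depths = 6 candidate settings
grid = {"criterion": ["friedman_mse", "squared_error"], "max_depth": [4, 6, 8]}
for params in ParameterGrid(grid):
    print(params)  # each dict is one combination Grid Search would evaluate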

So this recipe is a short example of how we can create and optimize a baseline Decision Tree model for Regression.


Step 1 - Import the library - GridSearchCV

from sklearn import decomposition, datasets
from sklearn import tree
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler

Here we have imported various modules like decomposition, datasets, tree, Pipeline, StandardScaler and GridSearchCV from different parts of scikit-learn. We will see how each one is used later in the code snippets; for now, just take note of these imports.

Step 2 - Setup the Data

Here we have used datasets to load the inbuilt Boston housing dataset, and we have created the objects X and y to store the data and the target values respectively.

boston = datasets.load_boston()
X = boston.data
y = boston.target
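Note that load_boston was deprecated in scikit-learn 1.0 and removed in 1.2, so the lines above only run on older versions. On a recent scikit-learn, a minimal substitution (assuming the California housing dataset is an acceptable stand-in for this demo) would be:

from sklearn.datasets import fetch_california_housing

# Drop-in replacement for the removed Boston dataset loader
housing = fetch_california_housing()
X = housing.data
y = housing.target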

Step 3 - Using StandardScaler and PCA

StandardScaler standardizes the data by making the mean of each feature 0 and its standard deviation 1 (note that it does not remove outliers). So we are creating an object std_slc to use StandardScaler.

std_slc = StandardScaler()
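As a small sanity check (a sketch assuming X has already been loaded as above), you can verify what StandardScaler does to the column statistics:

import numpy as np

X_scaled = StandardScaler().fit_transform(X)
print(np.round(X_scaled.mean(axis=0), 6))  # ~0 for every feature
print(np.round(X_scaled.std(axis=0), 6))   # ~1 for every feature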

We are also using Principal Component Analysis (PCA), which reduces the dimensionality of the features by creating new features that capture most of the variance of the original data.

pca = decomposition.PCA()
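To see how much variance each component captures, a quick sketch (assuming the scaled data X_scaled from the previous sketch) is to inspect explained_variance_ratio_ after fitting:

# Fit PCA on the standardized data and inspect the variance per component
pca_demo = decomposition.PCA().fit(X_scaled)
print(pca_demo.explained_variance_ratio_)           # fraction of variance per component
print(pca_demo.explained_variance_ratio_.cumsum())  # cumulative variance retained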

Step 4 - Using Decision Tree Regressor

Here, we are using Decision Tree Regressor as the machine learning model that we want to tune with GridSearchCV. So we have created an object dtreeReg.

dtreeReg = tree.DecisionTreeRegressor()

Step 5 - Using Pipeline for GridSearchCV

Pipeline helps us chain the modules together so that GridSearchCV can tune them as a single estimator. So we are making an object pipe to create a pipeline of the three objects std_slc, pca and dtreeReg.

pipe = Pipeline(steps=[("std_slc", std_slc),
                       ("pca", pca),
                       ("dtreeReg", dtreeReg)])
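The pipeline itself behaves like a single model: calling fit runs scaling, then PCA, then the tree, in order. A minimal usage sketch (with default hyperparameters, before any tuning):

# Fit the whole chain and predict on the first five rows
pipe.fit(X, y)
print(pipe.predict(X[:5]))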

Now we have to define the parameters that we want to optimise for these three objects.
StandardScaler does not require any parameters to be optimised by GridSearchCV.
Principal Component Analysis requires the parameter "n_components" to be optimised. "n_components" signifies the number of components to keep after reducing the dimension.

n_components = list(range(1, X.shape[1] + 1, 1))

For the DecisionTreeRegressor we tune two parameters, "criterion" and "max_depth". We set each one to a list of candidate values from which GridSearchCV will select the best.

criterion = ["friedman_mse", "mse"]  # "mse" was renamed "squared_error" in scikit-learn 1.0+
max_depth = [4, 6, 8, 10]

Now we are creating a dictionary that collects the parameter options for the different pipeline steps. Each key is the step name, a double underscore, and the parameter name.

parameters = dict(pca__n_components=n_components,
                  dtreeReg__criterion=criterion,
                  dtreeReg__max_depth=max_depth)
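The "step__parameter" names have to match the step names given to the Pipeline. If you are unsure which names are valid, one quick check (a sketch, assuming pipe from the previous step) is to list the pipeline's tunable parameter keys:

# Prints every tunable name the pipeline exposes, e.g. "pca__n_components"
print(sorted(pipe.get_params().keys()))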

Step 6 - Using GridSearchCV and Printing Results

Before using GridSearchCV, let's have a look at its important parameters.

  • estimator: the model or pipeline on which we want to run GridSearchCV.
  • param_grid: a dictionary (or a list of dictionaries) mapping parameter names to the candidate values from which GridSearchCV has to select the best.
  • scoring: the evaluation metric used to decide the best hyperparameters; if not specified, the estimator's default score method is used (the sketch after this list passes these arguments explicitly).
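As an illustration of how these arguments fit together, here is a minimal sketch (the cv=5 and scoring values are illustrative assumptions, not part of this recipe, which uses the defaults):

# Spelling out the arguments described above; cv and scoring are illustrative
clf_demo = GridSearchCV(estimator=pipe,
                        param_grid=parameters,
                        scoring="r2",
                        cv=5)
clf_demo.fit(X, y)  # evaluates every parameter combination with 5-fold CV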

Making an object clf for GridSearchCV and fitting it on the dataset, i.e. X and y:

clf = GridSearchCV(pipe, parameters)
clf.fit(X, y)

Now we are using print statements to print the results: first the selected hyperparameter values, then the cross-validated R2 scores of the tuned model.

print("Best Number Of Components:", clf.best_estimator_.get_params()["pca__n_components"])
print(); print(clf.best_estimator_.get_params()["dtreeReg"])
CV_Result = cross_val_score(clf, X, y, cv=3, n_jobs=-1, scoring="r2")
print(); print(CV_Result)
print(); print(CV_Result.mean())
print(); print(CV_Result.std())

As an output we get:

Best Number Of Components: 13

DecisionTreeRegressor(criterion="friedman_mse", max_depth=10,
           max_features=None, max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, presort=False, random_state=None,
           splitter="best")

[-0.37455162  0.0133472   0.08187602]

-0.0931094667607707

0.20096652325569414
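Note that the mean cross-validated R2 here is negative, which means this baseline tree still performs worse than simply predicting the mean target on the held-out folds; a result like this marks a starting point for further feature engineering and tuning, not a finished model.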


