How to compare an Extra Trees classifier and a decision tree in ML in Python

This recipe helps you compare an Extra Trees classifier and a decision tree in ML in Python.

Recipe Objective

A decision tree learns a single tree, while Extra Trees learns from multiple trees. The other major difference between the two lies in how splits are chosen: a decision tree computes the locally optimal feature/split combination, while in the Extra Trees classifier, for each feature under consideration, a random value is selected for the split.

So this recipe is a short example of how to compare a decision tree and an Extra Trees classifier. Let's get started.

Step 1 - Import the library

import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.datasets import load_iris

Let's pause and look at these imports. NumPy and Pandas are the usual ones. sklearn.ensemble contains the ExtraTreesClassifier classification model, and sklearn.tree contains the DecisionTreeClassifier classification model. Here sklearn.datasets is used to import a classification dataset.

Step 2 - Setup the Data

X, y = load_iris(return_X_y=True)
print(X)
print(y)

Here, we have used the load_iris function to import our dataset as two arrays (X and y), which is why return_X_y is set to True.
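As a quick sanity check, the shapes below follow from the standard Iris dataset bundled with scikit-learn (150 samples, 4 features, 3 classes):

```python
import numpy as np
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)

# X holds the feature matrix, y the class labels
print(X.shape)        # (150, 4)
print(np.unique(y))   # [0 1 2]
```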

Now our dataset is ready.

Step 3 - Building the model

Before we do that, let's look at the important parameters that we need to pass.

1) n_estimators
It decides the number of trees in the forest.

2) criterion
The function to measure the quality of a split. Supported criteria are “gini” for the Gini impurity and “entropy” for the information gain.

3) max_features
It decides the number of features to consider when looking for the best split.

Now that we understand the parameters, let's create the model objects.

decision_tree_forest = DecisionTreeClassifier(criterion='entropy', max_features=2)
extra_tree_forest = ExtraTreesClassifier(n_estimators=5, criterion='entropy', max_features=2)

  • Here, we have built two models, one for the Decision Tree and the other for Extra Trees
  • As you can see, we have set n_estimators to 5 in ExtraTreesClassifier
  • criterion is set to entropy for both
  • max_features is set to 2 for both
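One thing to note: because Extra Trees picks split thresholds at random, its feature importances can change from run to run. A minimal sketch, assuming you want repeatable results, is to pass the standard scikit-learn random_state parameter:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier

X, y = load_iris(return_X_y=True)

# Fixing random_state makes the random splits (and thus the importances) repeatable
model_a = ExtraTreesClassifier(n_estimators=5, criterion='entropy',
                               max_features=2, random_state=0).fit(X, y)
model_b = ExtraTreesClassifier(n_estimators=5, criterion='entropy',
                               max_features=2, random_state=0).fit(X, y)

print((model_a.feature_importances_ == model_b.feature_importances_).all())  # True
```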

Step 4 - Fit the model and find results

decision_tree_forest.fit(X, y)
extra_tree_forest.fit(X, y)
decision_feature_importance = decision_tree_forest.feature_importances_
extra_feature_importance = extra_tree_forest.feature_importances_
print(decision_feature_importance)
print(extra_feature_importance)

Here, we have simply used the fit function to fit both models on X and y. After that, we inspect the importance of each feature according to the two models we built.
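Feature importances are one lens; another common way to compare the two models is cross-validated accuracy. A short sketch using scikit-learn's cross_val_score utility (the random_state values are our own additions for reproducibility, and the exact scores will vary with the splits):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier

X, y = load_iris(return_X_y=True)

dt = DecisionTreeClassifier(criterion='entropy', max_features=2, random_state=0)
et = ExtraTreesClassifier(n_estimators=5, criterion='entropy',
                          max_features=2, random_state=0)

# Mean 5-fold cross-validated accuracy for each model
print("Decision tree:", cross_val_score(dt, X, y, cv=5).mean())
print("Extra trees: ", cross_val_score(et, X, y, cv=5).mean())
```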

Step 5 - Let's look at the results

Once we run the above code snippet, we will see the feature importances printed for both models.

We can clearly see the difference arising from the two models we are using: because its splits are randomized and averaged over several trees, noise in the data tends to be handled much better by the ExtraTreesClassifier.
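To probe the noise claim concretely, one hedged sketch is to append purely random "noise" columns to the Iris features and check how much total importance each model assigns to them (the noise columns and seeds are our own additions for illustration; lower is better, and results depend on the seed):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import ExtraTreesClassifier

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)

# Append 4 pure-noise columns; an ideal model would give them zero importance
X_noisy = np.hstack([X, rng.rand(X.shape[0], 4)])

dt = DecisionTreeClassifier(criterion='entropy', random_state=0).fit(X_noisy, y)
et = ExtraTreesClassifier(n_estimators=100, criterion='entropy',
                          random_state=0).fit(X_noisy, y)

# Total importance assigned to the 4 noise features by each model
print("Decision tree noise importance:", dt.feature_importances_[4:].sum())
print("Extra trees noise importance: ", et.feature_importances_[4:].sum())
```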
