# How to find optimal parameters for CatBoost using GridSearchCV for Classification?

This recipe helps you find optimal parameters for CatBoost using GridSearchCV for classification.

Often, while working on a dataset with a machine learning model, we don't know which set of hyperparameters will give the best result. Passing every set of hyperparameters through the model manually and checking the results is tedious and may not even be feasible.

To get the best set of hyperparameters we can use Grid Search. Grid Search passes every combination of hyperparameters into the model one by one and checks the result. Finally, it gives us the set of hyperparameters that produces the best result.
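To make "every combination" concrete, here is a minimal sketch (with a toy grid, not the one used later in this recipe) of how a grid of candidate values expands into the settings Grid Search will try:

```
from itertools import product

# Toy grid: 2 values for each of 2 hyperparameters -> 2 x 2 = 4 combinations
toy_grid = {'depth': [4, 6], 'learning_rate': [0.01, 0.1]}
for depth, lr in product(toy_grid['depth'], toy_grid['learning_rate']):
    print({'depth': depth, 'learning_rate': lr})
```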

This Python source code does the following:

1. Installs CatBoost via pip

2. Imports the scikit-learn dataset module

3. Splits the existing dataset into training and test sets

4. Applies the CatBoost classifier

5. Tunes hyperparameters using GridSearchCV

So this recipe is a short example of how we can find optimal parameters using GridSearchCV.
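If CatBoost is not already installed (step 1 above), it can be installed with `pip install catboost`.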

```
from sklearn import datasets                          # built-in example datasets
from sklearn.model_selection import train_test_split  # train/test splitting
from sklearn.model_selection import GridSearchCV      # exhaustive hyperparameter search
from catboost import CatBoostClassifier               # gradient boosting classifier
```

Here we have imported the modules we need, like datasets, CatBoostClassifier and GridSearchCV, from their respective libraries. We will see how each is used in the code snippets below; for now, just take note of these imports.

Here we use datasets to load the built-in iris dataset, and we create the objects X and y to store the data and the target values respectively.
```
dataset = datasets.load_iris()  # load the built-in iris dataset
X = dataset.data                # feature matrix
y = dataset.target              # target labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30)  # 70/30 split
```

Here, we are using CatBoostClassifier as the machine learning model to tune with GridSearchCV, so we create an object CBC.
```
CBC = CatBoostClassifier()
```
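Note that CatBoost prints a per-iteration training log by default (visible in the output at the end of this recipe). If you want quieter Grid Search runs, the classifier accepts a verbose flag; a minimal variant:

```
CBC = CatBoostClassifier(verbose=0)  # suppress the per-iteration training log
```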

Now we define the model parameters that we want to pass through GridSearchCV to find the best combination. So we make a dictionary called parameters with three hyperparameters: depth, learning_rate and iterations.
```
parameters = {'depth'         : [4, 5, 6, 7, 8, 9, 10],
              'learning_rate' : [0.01, 0.02, 0.03, 0.04],
              'iterations'    : [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
             }
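This grid defines 7 × 4 × 10 = 280 parameter combinations, and each one is fit once per cross-validation fold. A quick sanity check of the grid size, using scikit-learn's ParameterGrid:

```
from sklearn.model_selection import ParameterGrid

print(len(ParameterGrid(parameters)))  # 7 * 4 * 10 = 280 candidate settings
```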
```

Before using GridSearchCV, let's have a look at its important parameters.

- estimator: the model (or pipeline) on which we want to run GridSearchCV.
- param_grid: a dictionary, or list of dictionaries, mapping parameter names to candidate values from which GridSearchCV selects the best combination.
- scoring: the evaluation metric used to decide the best hyperparameters; if not specified, the estimator's default score method is used.
- cv: an integer giving the number of cross-validation splits; it defaults to 5-fold cross-validation.
- n_jobs: the number of jobs to run in parallel; -1 means use all processors.

```
Grid_CBC = GridSearchCV(estimator=CBC, param_grid=parameters, cv=2, n_jobs=-1)
Grid_CBC.fit(X_train, y_train)  # run the exhaustive search over all 280 combinations
```
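GridSearchCV also records the mean test score of every combination in its cv_results_ attribute. A sketch of inspecting it with pandas (an extra dependency, not imported in the original recipe):

```
import pandas as pd

results = pd.DataFrame(Grid_CBC.cv_results_)
cols = ['param_depth', 'param_learning_rate', 'param_iterations', 'mean_test_score']
print(results[cols].sort_values('mean_test_score', ascending=False).head())
```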

Now we use print statements to show the headline results: the best estimator, the best cross-validation score and the best hyperparameters that GridSearchCV found.
```
print(" Results from Grid Search " )
print("\n The best estimator across ALL searched params:\n",Grid_CBC.best_estimator_)
print("\n The best score across ALL searched params:\n",Grid_CBC.best_score_)
print("\n The best parameters across ALL searched params:\n",Grid_CBC.best_params_)
```

As an output we get:
```
0:  learn: 1.0891400  total: 2.94ms  remaining: 85.2ms
1:  learn: 1.0783511  total: 4.32ms  remaining: 60.5ms
2:  learn: 1.0694444  total: 6.3ms   remaining: 56.7ms
...
28: learn: 0.8583818  total: 42.9ms  remaining: 1.48ms
29: learn: 0.8525250  total: 44.9ms  remaining: 0us

Results from Grid Search

The best estimator across ALL searched params:

The best score across ALL searched params:
0.9620827285921625

The best parameters across ALL searched params:
{'depth': 9, 'iterations': 30, 'learning_rate': 0.01}
```
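Since a test set was held out earlier, a natural follow-up (not part of the original recipe's output) is to score the tuned model on it:

```
# With the default refit=True, GridSearchCV refits the best estimator on the
# full training data, so Grid_CBC can be used directly for scoring.
print("Test set accuracy:", Grid_CBC.score(X_test, y_test))
```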
