How to optimize hyperparameters of a Logistic Regression model using Grid Search in Python?

This recipe shows you how to optimize the hyperparameters of a Logistic Regression model using Grid Search in Python.


Recipe Objective

Often, while working on a dataset with a Machine Learning model, we don't know which set of hyperparameters will give us the best result. Passing every combination of hyperparameters through the model manually and checking the results is tedious and may not even be feasible.

This data science Python source code does the following:
1. Tunes the hyperparameters of a logistic regression model.
2. Applies the StandardScaler function to the dataset.
3. Shows how to hold out a test set with train_test_split (see the sketch in Step 2).
4. Uses cross-validation inside GridSearchCV to prevent overfitting.

To get the best set of hyperparameters we can use Grid Search. Grid Search passes each combination of hyperparameters into the model one by one and checks the result. Finally, it gives us the set of hyperparameters that produced the best result.
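To make "all combinations" concrete, here is a small illustration (the grid values are made up for demonstration) using scikit-learn's ParameterGrid, which performs the same expansion GridSearchCV does internally:

from sklearn.model_selection import ParameterGrid

# Two values of C times two penalties = four candidate combinations.
for params in ParameterGrid({'C': [0.1, 1.0], 'penalty': ['l1', 'l2']}):
    print(params)

With 50 values of C, 2 penalties and 13 values of n_components (as in the grid we build below), GridSearchCV evaluates 50 x 2 x 13 = 1300 combinations, each under cross-validation.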

So this recipe is a short example of how to use Grid Search to find the best set of hyperparameters.

Step 1 - Import the library - GridSearchCV

import numpy as np
from sklearn import linear_model, decomposition, datasets
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler

Here we have imported various modules like decomposition, datasets, linear_model, Pipeline, StandardScaler and GridSearchCV from different libraries. We will understand the use of each of these as they appear in the code snippets below. For now, just have a look at these imports.

Step 2 - Setup the Data

Here we have used datasets to load the built-in wine dataset, and we have created objects X and y to store the data and the target values respectively.

dataset = datasets.load_wine()
X = dataset.data
y = dataset.target
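The list above mentions train_test_split; for brevity this recipe fits on the full X and y, but in practice you would usually hold out a test set first. A minimal sketch (the split size and random_state are assumptions):

from sklearn.model_selection import train_test_split

# Hypothetical hold-out split; the rest of the recipe uses the full X, y.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
print(X_train.shape, X_test.shape)  # (142, 13) (36, 13) for the 178-sample wine data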

Step 3 - Using StandardScaler and PCA

StandardScaler is used to scale the data so that each feature has a mean of 0 and a standard deviation of 1. So we are creating an object std_slc to use StandardScaler.

std_slc = StandardScaler()
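A quick illustration of that behaviour on toy data (this snippet is just for intuition and is not part of the recipe):

import numpy as np
from sklearn.preprocessing import StandardScaler

# Two features on very different scales.
toy = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
scaled = StandardScaler().fit_transform(toy)
print(scaled.mean(axis=0))  # approximately [0. 0.]
print(scaled.std(axis=0))   # [1. 1.]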

We are also using Principal Component Analysis (PCA), which reduces the dimensionality of the features by creating new features that capture most of the variance of the original data.

pca = decomposition.PCA()
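If you want to see how much variance the components capture before letting GridSearchCV pick n_components, a side-check (not part of the recipe) could look like this:

from sklearn import datasets, decomposition
from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(datasets.load_wine().data)
pca_check = decomposition.PCA().fit(X_scaled)
# Cumulative share of variance captured by the first k components.
print(pca_check.explained_variance_ratio_.cumsum())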

Step 4 - Using Logistic Regression

Here, we are using Logistic Regression as the Machine Learning model whose hyperparameters GridSearchCV will tune. So we have created an object logistic_Reg.

logistic_Reg = linear_model.LogisticRegression()

Step 5 - Using Pipeline for GridSearchCV

Pipeline helps us chain these modules together so that GridSearchCV can tune all of them at once. So we are making an object pipe to create a pipeline from the three objects std_slc, pca and logistic_Reg.

pipe = Pipeline(steps=[('std_slc', std_slc),
                       ('pca', pca),
                       ('logistic_Reg', logistic_Reg)])

Now we have to define the parameters that we want to optimise for these three objects.
StandardScaler does not require any parameters to be optimised by GridSearchCV.
Principal Component Analysis requires the parameter 'n_components' to be optimised. 'n_components' signifies the number of components to keep after reducing the dimensionality.

n_components = list(range(1, X.shape[1] + 1, 1))

Logistic Regression requires two parameters, 'C' and 'penalty', to be optimised by GridSearchCV. So we have set these two parameters as lists of values from which GridSearchCV will select the best value.

C = np.logspace(-4, 4, 50)
penalty = ['l1', 'l2']
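One caveat if you run this on a recent scikit-learn (an assumption about your environment; the output below was produced on an older version): the current default solver 'lbfgs' supports only the 'l2' penalty, so searching over 'l1' will raise an error unless you pin a solver that supports both, for example:

# 'liblinear' handles both 'l1' and 'l2' penalties.
logistic_Reg = linear_model.LogisticRegression(solver='liblinear')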

Now we are creating a dictionary to set all the parameter options for the different modules. Note that each key is the pipeline step name, a double underscore, and then the parameter name.

parameters = dict(pca__n_components=n_components,
                  logistic_Reg__C=C,
                  logistic_Reg__penalty=penalty)
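If you are ever unsure which keys the grid accepts, the pipeline itself can list them:

# Prints every tunable parameter name, e.g. 'pca__n_components',
# 'logistic_Reg__C', 'logistic_Reg__penalty', ...
print(sorted(pipe.get_params().keys()))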

Step 6 - Using GridSearchCV and Printing Results

Before using GridSearchCV, let's have a look at its important parameters.

  • estimator: the model or pipeline on which we want to run GridSearchCV.
  • param_grid: a dictionary (or list of dictionaries) of parameters from which GridSearchCV has to select the best combination.
  • scoring: the metric used to evaluate model performance and decide the best hyperparameters; if not specified, it uses the estimator's default score method (a sketch of passing it explicitly follows this list).
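For example, a minimal sketch making those choices explicit (cv=5 and scoring='accuracy' are assumptions; the recipe below simply relies on the defaults):

clf_GS = GridSearchCV(pipe, parameters, cv=5, scoring='accuracy')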
Making an object clf_GS for GridSearchCV and fitting the dataset, i.e. X and y:

clf_GS = GridSearchCV(pipe, parameters)
clf_GS.fit(X, y)

Now we are using print statements to print the results. It will give the values of the best hyperparameters as a result.

print('Best Penalty:', clf_GS.best_estimator_.get_params()['logistic_Reg__penalty'])
print('Best C:', clf_GS.best_estimator_.get_params()['logistic_Reg__C'])
print('Best Number Of Components:', clf_GS.best_estimator_.get_params()['pca__n_components'])
print()
print(clf_GS.best_estimator_.get_params()['logistic_Reg'])

As an output we get:

Best Penalty: l1
Best C: 109.85411419875572
Best Number Of Components: 13

LogisticRegression(C=109.85411419875572, class_weight=None, dual=False,
          fit_intercept=True, intercept_scaling=1, max_iter=100,
          multi_class='warn', n_jobs=None, penalty='l1', random_state=None,
          solver='warn', tol=0.0001, verbose=0, warm_start=False)
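Once fitted, clf_GS can be used directly; two common follow-ups (a sketch, not part of the original output) are:

# Mean cross-validated score achieved by the best combination.
print('Best CV score:', clf_GS.best_score_)

# By default GridSearchCV refits the best pipeline on the full data,
# so it can predict on new samples directly.
print(clf_GS.predict(X[:5]))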
