How to create and optimize a baseline Ridge Regression model in Python

This recipe helps you create and optimize a baseline Ridge Regression model in Python

Recipe Objective

Often, when working on a dataset with a Machine Learning model, we don't know which set of hyperparameters will give us the best result. Passing every set of hyperparameters through the model manually and checking the results is tedious work and may not be feasible at all.

To find the best set of hyperparameters we can use Grid Search. Grid Search passes all combinations of hyperparameters one by one into the model and checks each result. Finally it gives us the set of hyperparameters that produces the best result.
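To see what "all combinations" means concretely, here is a minimal sketch using scikit-learn's ParameterGrid (the parameter names and values below are purely illustrative); GridSearchCV enumerates combinations in the same way internally:

from sklearn.model_selection import ParameterGrid

# 2 candidate values for "alpha" x 3 for "solver" = 6 combinations in total
grid = {"alpha": [0.1, 1.0], "solver": ["auto", "svd", "cholesky"]}
for params in ParameterGrid(grid):
    print(params)  # prints each of the 6 parameter dictionaries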

So this recipe is a short example of how we can create and optimize a baseline Ridge Regression model.

Step 1 - Import the library - GridSearchCV

from sklearn import decomposition, datasets
from sklearn import linear_model
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.preprocessing import StandardScaler

Here we have imported various modules like decomposition, datasets, linear_model, Pipeline, StandardScaler and GridSearchCV from different libraries. We will understand the use of each of these later, as they appear in the code snippets.
For now just have a look at these imports.

Step 2 - Setup the Data

Here we have used datasets to load the inbuilt Boston dataset, and we have created objects X and y to store the data and the target values respectively.

dataset = datasets.load_boston()
X = dataset.data
y = dataset.target
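Note that the Boston housing dataset was removed from scikit-learn in version 1.2, so load_boston only works on older releases. On a newer scikit-learn you can substitute another regression dataset; a minimal sketch using fetch_california_housing:

from sklearn.datasets import fetch_california_housing

# Drop-in regression dataset for scikit-learn >= 1.2, where load_boston no longer exists
dataset = fetch_california_housing()
X = dataset.data
y = dataset.target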

Step 3 - Using StandardScaler and PCA

StandardScaler is used to scale the data so that each feature has a mean of 0 and a standard deviation of 1 (note that it standardizes the data but does not remove outliers). So we are creating an object std_slc to use StandardScaler.

std_slc = StandardScaler()
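To verify what StandardScaler does, here is a quick sketch (the toy array is purely illustrative):

import numpy as np

toy = np.array([[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]])
scaled = StandardScaler().fit_transform(toy)
print(scaled.mean(axis=0))  # approximately [0. 0.]
print(scaled.std(axis=0))   # approximately [1. 1.]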

We are also using Principal Component Analysis (PCA), which reduces the dimensionality of the features by creating new features that capture most of the variance of the original data.

pca = decomposition.PCA()
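To see how much variance each principal component captures, you can fit PCA on the data and inspect explained_variance_ratio_; a minimal sketch using the X loaded above:

# Fraction of the total variance captured by each component, in decreasing order
pca_check = decomposition.PCA().fit(X)
print(pca_check.explained_variance_ratio_)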

Here, we are using Ridge Regression as the Machine Learning model to tune with GridSearchCV. So we have created an object ridge.

ridge = linear_model.Ridge()
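For reference, Ridge Regression fits a linear model with coefficients w by minimizing the penalized least-squares objective ||y - Xw||^2 + alpha * ||w||^2, where the regularization strength alpha (1.0 by default) shrinks the coefficients toward zero.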

Step 4 - Using Pipeline for GridSearchCV

Pipeline helps us pass the modules one by one through GridSearchCV, so that we can find the best parameters for all of them at once. So we are making an object pipe to create a pipeline of the three objects std_slc, pca and ridge.

pipe = Pipeline(steps=[("std_slc", std_slc), ("pca", pca), ("ridge", ridge)])

Now we have to define the parameters that we want to optimise for these three objects.
StandardScaler does not require any parameters to be optimised by GridSearchCV.
Principal Component Analysis requires the parameter "n_components" to be optimised. "n_components" signifies the number of components to keep after reducing the dimensionality.

n_components = list(range(1, X.shape[1] + 1, 1))

Ridge Regression requires two parameters, "normalize" and "solver", to be optimised by GridSearchCV. So we have set these two parameters as lists of values from which GridSearchCV will select the best values.

normalize = [True, False]
solver = ["auto", "svd", "cholesky", "lsqr", "sparse_cg", "sag", "saga"]

Now we are creating a dictionary to set all the parameter options for the different modules. Each key is the pipeline step name, followed by a double underscore and then the parameter name.

parameters = dict(pca__n_components=n_components,
                  ridge__normalize=normalize,
                  ridge__solver=solver)
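You can list the exact parameter names a pipeline exposes with pipe.get_params().keys(), which is handy for catching typos in the grid. Also note that the "normalize" parameter was removed from Ridge in scikit-learn 1.2; on a newer release you would drop it from the grid (the pipeline's StandardScaler already standardizes the data). A sketch:

# Inspect the parameter names the pipeline actually exposes
print(sorted(pipe.get_params().keys()))

# On scikit-learn >= 1.2 Ridge has no "normalize" parameter, so search only the rest
parameters = dict(pca__n_components=n_components, ridge__solver=solver)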

Step 5 - Using GridSearchCV and Printing Results

Before using GridSearchCV, let's have a look at its important parameters.

  • estimator: the model or pipeline on which we want to run GridSearchCV.
  • param_grid: a dictionary (or list of dictionaries) of parameter names and candidate values from which GridSearchCV has to select the best.
  • scoring: the evaluation metric used to judge model performance and decide the best hyperparameters; if not specified, the estimator's default score method is used.

Making an object clf_GS for GridSearchCV and fitting the dataset, i.e. X and y:

clf_GS = GridSearchCV(pipe, parameters)
clf_GS.fit(X, y)

Now we are using print statements to print the results. It will give the values of the best hyperparameters as a result.

print("Best Number Of Components:", clf_GS.best_estimator_.get_params()["pca__n_components"])
print(); print(clf_GS.best_estimator_.get_params()["ridge"])

CV_Result = cross_val_score(clf_GS, X, y, cv=10, n_jobs=-1, scoring="r2")
print(); print(CV_Result)
print(); print(CV_Result.mean())
print(); print(CV_Result.std())

As an output we get:

Best Number Of Components: 4

Ridge(alpha=1.0, copy_X=True, fit_intercept=True, max_iter=None,
   normalize=False, random_state=None, solver="saga", tol=0.001)

[ 0.7366215   0.74795635 -0.1688405   0.57370647  0.62934032  0.66902423
  0.28958882  0.10813156 -0.21149751  0.21868053]

0.35927117736640224

0.3465122691847129
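If you just want the winning hyperparameter values without digging through the fitted estimator, GridSearchCV also stores them directly; for example:

print(clf_GS.best_params_)  # e.g. {"pca__n_components": 4, "ridge__normalize": False, "ridge__solver": "saga"}
print(clf_GS.best_score_)   # mean cross-validated score of the best estimator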


