# How to create and optimize a baseline Ridge Regression model?

This recipe shows how to create and optimize a baseline Ridge Regression model.

This Python recipe does the following:

1. Reduces dimensionality with a decomposition method (PCA).
2. Fits Ridge Regression, tuning hyperparameters with GridSearchCV.
3. Standardizes the dataset.
4. Prints the final optimized output.


```
## How to create and optimize a baseline Ridge Regression model
def Snippet_148():
    print()
    print(format('How to create and optimize a baseline Ridge regression model', '*^82'))

    import warnings
    warnings.filterwarnings("ignore")

    # Load libraries
    from sklearn import decomposition, datasets
    from sklearn import linear_model
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.preprocessing import StandardScaler

    # Load the Boston housing data
    # (note: load_boston and Ridge's `normalize` parameter were removed in
    # scikit-learn 1.2, so this snippet requires an older version)
    dataset = datasets.load_boston()
    X = dataset.data
    y = dataset.target

    # Create a scaler object
    sc = StandardScaler()

    # Create a PCA object
    pca = decomposition.PCA()

    # Create a ridge regression object (linear regression with an L2 penalty)
    ridge = linear_model.Ridge()

    # Create a pipeline of three steps. First, standardize the data.
    # Second, transform the data with PCA.
    # Third, train a ridge regression model on the data.
    pipe = Pipeline(steps=[('sc', sc),
                           ('pca', pca),
                           ('ridge', ridge)])

    # Create the parameter space.
    # A list of integers from 1 to the number of features in X
    n_components = list(range(1, X.shape[1] + 1))

    # Candidate parameters for Ridge Regression
    normalize = [True, False]
    solver = ['auto', 'svd', 'cholesky', 'lsqr', 'sparse_cg', 'sag', 'saga']

    # Create a dictionary of all the parameter options.
    # Note that you can address the parameters of a pipeline step via '<step>__<param>'
    parameters = dict(pca__n_components=n_components,
                      ridge__normalize=normalize,
                      ridge__solver=solver)

    # Conduct parameter optimization with the pipeline:
    # create a grid search object and fit it
    clf = GridSearchCV(pipe, parameters)
    clf.fit(X, y)

    # View the best parameters
    print('Best Number Of Components:', clf.best_estimator_.get_params()['pca__n_components'])
    print(); print(clf.best_estimator_.get_params()['ridge'])

    # Use cross-validation to evaluate the model
    CV_Result = cross_val_score(clf, X, y, cv=10, n_jobs=-1, scoring='r2')
    print(); print(CV_Result)
    print(); print(CV_Result.mean())
    print(); print(CV_Result.std())

Snippet_148()
```
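The snippet above depends on APIs that were removed in scikit-learn 1.2 (`load_boston` and Ridge's `normalize` parameter). As a sketch of the same pipeline for current scikit-learn versions, the recipe below swaps in the bundled diabetes dataset and tunes `alpha` in place of `normalize`; the dataset choice and the alpha grid are illustrative assumptions, not part of the original recipe.

```python
# Same pattern on a current scikit-learn: scale -> PCA -> Ridge,
# with GridSearchCV tuning the PCA and Ridge hyperparameters.
from sklearn import decomposition, datasets, linear_model
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler

# Bundled dataset (no download needed); illustrative stand-in for Boston housing
X, y = datasets.load_diabetes(return_X_y=True)

pipe = Pipeline(steps=[('sc', StandardScaler()),
                       ('pca', decomposition.PCA()),
                       ('ridge', linear_model.Ridge())])

# Parameter space: PCA component count plus an assumed alpha grid and
# a subset of Ridge solvers
parameters = dict(pca__n_components=list(range(1, X.shape[1] + 1)),
                  ridge__alpha=[0.1, 1.0, 10.0],
                  ridge__solver=['auto', 'svd', 'cholesky'])

clf = GridSearchCV(pipe, parameters, cv=5, n_jobs=-1)
clf.fit(X, y)

print('Best Number Of Components:', clf.best_params_['pca__n_components'])
print(clf.best_estimator_.get_params()['ridge'])
```

Because `normalize` no longer exists, the regularization strength `alpha` is the natural Ridge hyperparameter to tune instead; the `'<step>__<param>'` naming convention for pipeline parameters is unchanged.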
