REGRESSION EXAMPLES
# How to create and optimize a baseline Lasso Regression model?

This recipe helps you create and optimize a baseline Lasso Regression model.

This data science Python source code does the following:

1. Imports datasets from the sklearn library
2. Creates a pipeline to lay out the roadmap of the code
3. Performs standard scaling and PCA decomposition for dimensionality reduction
4. Applies a Lasso Regression model and runs GridSearchCV to optimize its parameters


```
## How to create and optimize a baseline Lasso Regression model
# Note: load_boston and Lasso's `normalize` parameter were removed in
# scikit-learn 1.2, so this snippet requires an older version.
def Snippet_149():
    print()
    print(format('How to create and optimize a baseline Lasso regression model', '*^82'))

    import warnings
    warnings.filterwarnings("ignore")

    # Load libraries
    from sklearn import decomposition, datasets
    from sklearn import linear_model
    from sklearn.pipeline import Pipeline
    from sklearn.model_selection import GridSearchCV, cross_val_score
    from sklearn.preprocessing import StandardScaler

    # Load the Boston housing data
    dataset = datasets.load_boston()
    X = dataset.data
    y = dataset.target

    # Create a scaler object
    sc = StandardScaler()

    # Create a PCA object
    pca = decomposition.PCA()

    # Create a Lasso regression object
    lasso = linear_model.Lasso()

    # Create a pipeline of three steps. First, standardize the data.
    # Second, transform the data with PCA.
    # Third, train a Lasso regression model on the data.
    pipe = Pipeline(steps=[('sc', sc),
                           ('pca', pca),
                           ('lasso', lasso)])

    # Create the parameter space:
    # a sequence of integers from 1 to the number of features in X
    n_components = list(range(1, X.shape[1] + 1))

    # Create lists of parameter values for Lasso regression
    normalize = [True, False]
    selection = ['cyclic', 'random']

    # Create a dictionary of all the parameter options.
    # Note that you can access the parameters of a pipeline's steps using '__'.
    parameters = dict(pca__n_components=n_components,
                      lasso__normalize=normalize,
                      lasso__selection=selection)

    # Conduct parameter optimization with the pipeline:
    # create a grid search object and fit it
    clf = GridSearchCV(pipe, parameters)
    clf.fit(X, y)

    # View the best parameters
    print('Best Number Of Components:', clf.best_estimator_.get_params()['pca__n_components'])
    print(); print(clf.best_estimator_.get_params()['lasso'])

    # Use cross-validation to evaluate the model
    CV_Result = cross_val_score(clf, X, y, cv=10, n_jobs=-1, scoring='r2')
    print(); print(CV_Result)
    print(); print(CV_Result.mean())
    print(); print(CV_Result.std())

Snippet_149()
```
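The snippet above depends on `load_boston` and Lasso's `normalize` parameter, both of which were removed in scikit-learn 1.2. A minimal sketch of the same pipeline-plus-grid-search pattern on a current scikit-learn follows; the diabetes dataset and the `alpha` grid are illustrative substitutes, not part of the original recipe.

```python
# Same pattern on modern scikit-learn (>= 1.2): StandardScaler -> PCA -> Lasso,
# tuned with GridSearchCV. `alpha` replaces the removed `normalize` parameter,
# and the bundled diabetes dataset replaces the removed Boston housing data.
from sklearn import decomposition, datasets, linear_model
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV

from sklearn.preprocessing import StandardScaler

X, y = datasets.load_diabetes(return_X_y=True)

pipe = Pipeline(steps=[('sc', StandardScaler()),
                       ('pca', decomposition.PCA()),
                       ('lasso', linear_model.Lasso())])

# Grid values are illustrative; pipeline-step parameters are addressed
# with the 'step__param' naming convention.
parameters = {
    'pca__n_components': list(range(1, X.shape[1] + 1)),
    'lasso__alpha': [0.001, 0.01, 0.1, 1.0],
    'lasso__selection': ['cyclic', 'random'],
}

clf = GridSearchCV(pipe, parameters)
clf.fit(X, y)

print('Best Number Of Components:', clf.best_params_['pca__n_components'])
print('Best alpha:', clf.best_params_['lasso__alpha'])
```

The structure is identical to the recipe: only the dataset loader and the tuned Lasso parameters differ.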
