How to perform basic regression using a Keras model?

This recipe helps you perform basic regression using a Keras model.

Recipe Objective

In machine learning, our main goal is to create a model that relates the dependent variable (the target) to the independent variables (the data). The most common way to do this is regression analysis. Regression fits the best possible curve to the training data so that the same curve can be used to predict the target for new data.
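To make that idea concrete, here is a minimal, self-contained sketch of fitting a line with Keras, separate from the MNIST example below; the toy data, the variable name toy_model and the layer size are ours, made up purely for illustration:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical toy data: y = 3x + 2 plus a little noise
x = np.linspace(0.0, 1.0, 100).reshape(-1, 1)   # 100 samples, 1 feature
y = 3.0 * x[:, 0] + 2.0 + np.random.normal(0.0, 0.05, 100)

toy_model = Sequential()
toy_model.add(Dense(1, input_shape=(1,)))       # a single linear unit fits a straight line
toy_model.compile(optimizer='adam', loss='mean_squared_error')
toy_model.fit(x, y, epochs=200, verbose=0)
print(toy_model.predict(np.array([[0.5]])))     # approaches 3*0.5 + 2 = 3.5 as training converges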

So this recipe is a short example of how to perform basic regression using a Keras model.

Step 1 - Import the library

import pandas as pd
import numpy as np
from keras.datasets import mnist
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout

We have imported pandas, numpy, mnist (which provides the dataset), train_test_split, Sequential, Dense and Dropout. We will use these later in the recipe, although note that MNIST already comes split into train and test sets, so train_test_split is not actually needed here.

Step 2 - Loading the Dataset

Here we use the built-in MNIST dataset and store the training data in X_train and y_train, and the test data in X_test and y_test.

(X_train, y_train), (X_test, y_test) = mnist.load_data()
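Note that mnist.load_data() returns 28x28 integer images and integer class labels, while the compile and fit steps below use categorical cross-entropy and run 469 batches of 128 over the 60,000 training samples. That implies the arrays were flattened, scaled and one-hot encoded at some point; here is a sketch of the preprocessing this recipe appears to assume:

from keras.utils import to_categorical

# Flatten each 28x28 image into a 784-long vector and scale pixels to [0, 1]
X_train = X_train.reshape(60000, 784).astype('float32') / 255
X_test = X_test.reshape(10000, 784).astype('float32') / 255

# One-hot encode the integer labels so categorical_crossentropy can be used
y_train = to_categorical(y_train, 10)
y_test = to_categorical(y_test, 10)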

Step 3 - Creating Regression Model

We have created an object, model, for the sequential model. Sequential accepts two optional arguments, layers and name.

model = Sequential()

Now we add the layers using 'add'. While adding a layer we can specify its type, its activation function and many other things. We build the model as a linear stack of layers and use the 'relu' (rectified linear unit) activation, which has the advantage of being non-linear.

model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.25))
model.add(Dense(10))
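No input shape is given above, so Keras builds the layers the first time data flows through the model; that is why model.summary() in Step 5 is called only after fitting. If you wanted the summary earlier, you could give the first layer an explicit shape instead (784 here assumes the flattened preprocessing sketched in Step 2):

# Equivalent first layer with an explicit input shape (assumes 784 flattened features)
model.add(Dense(512, activation='relu', input_shape=(784,)))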

Step 4 - Compiling the model

We can compile a model using the compile method. Let us first look at its parameters before using it.

  • optimizer : the optimizer we want to use. There are various optimizers like SGD, Adam etc.
  • loss : the loss function we want the model to minimise.
  • metrics : the metrics on which we want the model to be scored.

model.compile(optimizer='Adam', loss='categorical_crossentropy', metrics=['accuracy'])
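Note that categorical cross-entropy with an accuracy metric is a classification setup for the ten MNIST digit classes. For a genuinely continuous regression target, you would typically compile with a regression loss instead, for example:

# A compile configuration for a continuous target: mean squared error loss,
# mean absolute error as the reported metric
model.compile(optimizer='Adam', loss='mean_squared_error', metrics=['mae'])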

Step 5 - Fitting the model

We can fit the model to the data we have and use it afterwards. Here we fit it on the training split loaded in Step 2. While fitting we can pass various parameters like batch_size, epochs, verbose, validation_data and so on.

model.fit(X_train, y_train, batch_size=128, epochs=2, verbose=1, validation_data=(X_test, y_test))
model.summary()
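fit also returns a History object whose history dictionary records the per-epoch numbers printed below. If we had captured the return value, we could inspect it like this (history is our variable name):

history = model.fit(X_train, y_train, batch_size=128, epochs=2, verbose=1, validation_data=(X_test, y_test))
print(history.history['loss'])          # training loss per epoch
print(history.history['val_accuracy'])  # validation accuracy per epoch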

Step 6 - Evaluating the model

After fitting a model we want to evaluate it. Here we use model.evaluate, which returns the loss and the accuracy, and we print both scores.

score = model.evaluate(X_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
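The values in score follow the order of model.metrics_names, so if you are ever unsure which index is which, you can check it directly:

print(model.metrics_names)  # typically ['loss', 'accuracy'] for this compile setup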

Step 7 - Predicting the output

Finally we predict the output. For this we use the other part of the data, the test split returned by mnist.load_data, and pass it to model.predict.

y_pred = model.predict(X_test)
print(y_pred)

As an output we get:

Epoch 1/2
469/469 [==============================] - 7s 14ms/step - loss: 0.3174 - accuracy: 0.9033 - val_loss: 0.1212 - val_accuracy: 0.9630
Epoch 2/2
469/469 [==============================] - 6s 14ms/step - loss: 0.1560 - accuracy: 0.9534 - val_loss: 0.0918 - val_accuracy: 0.9720
Test loss: 0.09184003621339798
Test accuracy: 0.972000002861023

[[8.92436292e-10 1.32853462e-09 6.39653945e-06 ... 9.99989152e-01
  1.79315840e-09 2.44941958e-07]
 [9.11153306e-11 1.03196271e-05 9.99982357e-01 ... 1.89035987e-09
  9.82423032e-09 8.40081246e-14]
 [1.10766098e-06 9.99514341e-01 1.26151179e-04 ... 1.44331687e-04
  4.99823145e-05 6.05678633e-06]
 [2.03985762e-09 1.29704825e-08 2.95020914e-08 ... 1.23884201e-05
  6.87194824e-06 1.75449488e-04]
 [5.91818647e-08 1.97798578e-08 7.46679774e-10 ... 5.06311437e-09
  1.96506153e-04 1.14137793e-08]
 [1.13083731e-09 5.45665553e-12 2.54836174e-09 ... 3.70580059e-13
  6.02386641e-10 3.15489106e-12]]
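Each row of this prediction matrix holds ten scores, one per digit class. To turn them into predicted digit labels you can take the argmax of each row (pred_classes is our name; np is the numpy import from Step 1):

pred_classes = np.argmax(y_pred, axis=1)  # index of the highest score per row
print(pred_classes[:10])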
