How to run a basic RNN model using PyTorch?
This PyTorch recipe feeds a dataset into a basic RNN (recurrent neural network) model and makes image classification predictions.

This recipe uses the MNIST handwritten digits dataset for image classification: the RNN model predicts which digit each handwritten image shows. The recipe works through the following steps:
- Import Libraries
- Prepare Dataset
- Create RNN Model
- Instantiate Model Class
- Instantiate Loss Class
- Instantiate Optimizer Class
- Train the Model
- Prediction

This recipe uses the helpful PyTorch utility DataLoader, which provides the ability to batch, shuffle, and load the data in parallel using multiprocessing workers.
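
As a minimal sketch (the toy dataset, batch size, and num_workers value here are illustrative, not part of the recipe), a DataLoader wraps any Dataset and yields shuffled mini-batches; note that in a standalone script on some platforms, num_workers > 0 also requires an if __name__ == '__main__': guard:

import torch
from torch.utils.data import DataLoader, TensorDataset

# toy dataset: 100 random 28x28 "images" with integer labels (illustrative only)
toy_data = TensorDataset(torch.randn(100, 28, 28), torch.randint(0, 10, (100,)))

# batch, shuffle, and load in parallel with 2 worker processes
loader = DataLoader(toy_data, batch_size=16, shuffle=True, num_workers=2)

for images, labels in loader:
    print(images.shape, labels.shape)   # torch.Size([16, 28, 28]) torch.Size([16])
    break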

What is an RNN?
A recurrent neural network (RNN) is a type of deep learning artificial neural network commonly used in speech recognition and natural language processing (NLP). An RNN processes sequential data: at each step it takes in both the new input and its own output (or hidden state) from the previous step. Because of these recurrent connections in their hidden layers, RNNs maintain memory states.
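
At its core, the recurrence can be written as h_t = tanh(W_ih x_t + W_hh h_(t-1) + b): the new hidden state mixes the current input with the previous hidden state. A minimal sketch of one such step (the randomly initialized weights are illustrative; a real RNN learns them):

import torch

input_size, hidden_size = 28, 64

# illustrative randomly-initialized weights (a trained RNN would learn these)
W_ih = torch.randn(hidden_size, input_size) * 0.1   # input-to-hidden
W_hh = torch.randn(hidden_size, hidden_size) * 0.1  # hidden-to-hidden
b = torch.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    # one vanilla RNN step: output depends on the current input AND the previous state
    return torch.tanh(W_ih @ x_t + W_hh @ h_prev + b)

h = torch.zeros(hidden_size)        # zero initial hidden state
for t in range(5):                  # unroll over 5 time steps
    x_t = torch.randn(input_size)   # stand-in for the next element of a sequence
    h = rnn_step(x_t, h)            # h carries the network's "memory" forward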

What is PyTorch?
PyTorch is a Python deep learning library that harnesses the power of graphics processing units. Its strengths compared to other tools like TensorFlow are its flexibility and speed. You can use other Python packages such as NumPy and SciPy to extend PyTorch's functionality.
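
For instance (a minimal sketch), a PyTorch tensor and a NumPy array can share the same CPU memory, so the two libraries interoperate cheaply:

import numpy as np
import torch

a = np.arange(6.0).reshape(2, 3)   # a NumPy array
t = torch.from_numpy(a)            # zero-copy: the tensor shares the array's memory
t *= 2                             # an in-place change is visible from both sides
print(a)                           # [[ 0.  2.  4.] [ 6.  8. 10.]]
print(t.numpy())                   # back to NumPy, still sharing memory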

In [79]:
import torch
from torch import nn
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
In [80]:
torch.manual_seed(1)    # reproducible
Out[80]:
<torch._C.Generator at 0x12214f310>
In [81]:
# Hyper Parameters
EPOCH = 1               # number of passes over the training data; train just 1 epoch to save time
BATCH_SIZE = 64
TIME_STEP = 28          # rnn time step / image height
INPUT_SIZE = 28         # rnn input size / image width
LR = 0.01               # learning rate
DOWNLOAD_MNIST = True   # set to True if you haven't downloaded the data yet
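
TIME_STEP and INPUT_SIZE encode how the recipe turns each 28 x 28 image into a sequence: every image row becomes one 28-dimensional input vector, fed to the RNN over 28 time steps. A minimal sketch of that reshaping (the random batch is illustrative), mirroring the x.view(-1, 28, 28) call used later in training:

import torch

images = torch.randn(64, 1, 28, 28)     # stand-in MNIST batch: (batch, channel, H, W)
sequences = images.view(-1, 28, 28)     # (batch, time_step, input_size): each row is one time step
print(sequences.shape)                  # torch.Size([64, 28, 28])
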
In [82]:
# MNIST handwritten digits dataset
train_data = dsets.MNIST(
    root='./mnist/',
    train=True,                         # this is training data
    transform=transforms.ToTensor(),    # converts a PIL.Image or numpy.ndarray to a
                                        # torch.FloatTensor of shape (C x H x W) and normalizes values to [0.0, 1.0]
    download=DOWNLOAD_MNIST,            # download it if you don't have it
)
In [83]:
# plot one example
print(train_data.train_data.size())     # (60000, 28, 28)
print(train_data.train_labels.size())   # (60000)
plt.imshow(train_data.train_data[0].numpy(), cmap='gray')
plt.title('%i' % train_data.train_labels[0])
plt.show()
torch.Size([60000, 28, 28])
torch.Size([60000])
In [84]:
# Data Loader for easy mini-batch return in training
train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                           batch_size=BATCH_SIZE, shuffle=True)
In [85]:
# pick 2000 test samples to speed up evaluation
test_data = dsets.MNIST(root='./mnist/', train=False, transform=transforms.ToTensor())
test_x = test_data.test_data.type(torch.FloatTensor)[:2000]/255.
# shape (2000, 28, 28), values in the range [0, 1]
test_y = test_data.test_labels.numpy().squeeze()[:2000]    # convert to a NumPy array
In [86]:
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.LSTM(         # a plain nn.RNN() hardly learns here, so we use an LSTM
            input_size=INPUT_SIZE,
            hidden_size=64,         # rnn hidden unit
            num_layers=1,           # number of rnn layer
            batch_first=True,       # input & output have batch size as the first dimension, e.g. (batch, time_step, input_size)
        )

        self.out = nn.Linear(64, 10)

    def forward(self, x):
        # x shape (batch, time_step, input_size)
        # r_out shape (batch, time_step, hidden_size)
        # h_n shape (n_layers, batch, hidden_size)
        # h_c shape (n_layers, batch, hidden_size)
        r_out, (h_n, h_c) = self.rnn(x, None)   # None represents zero initial hidden state

        # choose r_out at the last time step
        out = self.out(r_out[:, -1, :])
        return out
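
To make the shapes in forward() concrete, here is a minimal sketch with random data (the dummy batch is illustrative): the LSTM emits one 64-dimensional output per time step, and only the last time step's output is kept, since by then the network has seen the whole image.

import torch
from torch import nn

lstm = nn.LSTM(input_size=28, hidden_size=64, num_layers=1, batch_first=True)
dummy = torch.randn(2, 28, 28)          # (batch=2, time_step=28, input_size=28)
r_out, (h_n, h_c) = lstm(dummy)         # initial hidden/cell states default to zeros
print(r_out.shape)                      # torch.Size([2, 28, 64]): one output per time step
print(r_out[:, -1, :].shape)            # torch.Size([2, 64]): last step only, fed to the classifier
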
In [87]:
rnn = RNN()
print(rnn)
RNN(
  (rnn): LSTM(28, 64, batch_first=True)
  (out): Linear(in_features=64, out_features=10, bias=True)
)
In [88]:
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all rnn parameters
loss_func = nn.CrossEntropyLoss()                       # the target label is not one-hotted
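
As the comment notes, nn.CrossEntropyLoss expects raw logits and integer class labels (it applies log-softmax internally), not one-hot targets. A minimal illustration with random values:

import torch
from torch import nn

logits = torch.randn(4, 10)             # raw, unnormalized model outputs for a batch of 4
labels = torch.tensor([3, 7, 0, 9])     # integer class indices, not one-hot vectors
print(nn.CrossEntropyLoss()(logits, labels).item())
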
In [89]:
# training and testing
for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):        # gives batch data
        b_x = x.view(-1, 28, 28)                        # reshape x to (batch, time_step, input_size)
        b_y = y                                         # batch y

        output = rnn(b_x)                               # rnn output
        loss = loss_func(output, b_y)                   # cross entropy loss
        optimizer.zero_grad()                           # clear gradients for this training step
        loss.backward()                                 # backpropagation, compute gradients
        optimizer.step()                                # apply gradients

        if step % 50 == 0:
            test_output = rnn(test_x)                   # (samples, time_step, input_size)
            pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
            accuracy = sum(pred_y == test_y) / float(test_y.size)
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.item(), '| test accuracy: %.2f' % accuracy)
Epoch:  0 | train loss: 2.2883 | test accuracy: 0.10
Epoch:  0 | train loss: 0.8138 | test accuracy: 0.62
Epoch:  0 | train loss: 0.9010 | test accuracy: 0.78
Epoch:  0 | train loss: 0.6608 | test accuracy: 0.83
Epoch:  0 | train loss: 0.3150 | test accuracy: 0.85
Epoch:  0 | train loss: 0.2186 | test accuracy: 0.91
Epoch:  0 | train loss: 0.4511 | test accuracy: 0.90
Epoch:  0 | train loss: 0.4673 | test accuracy: 0.90
Epoch:  0 | train loss: 0.2014 | test accuracy: 0.93
Epoch:  0 | train loss: 0.2198 | test accuracy: 0.93
Epoch:  0 | train loss: 0.0439 | test accuracy: 0.93
Epoch:  0 | train loss: 0.1979 | test accuracy: 0.95
Epoch:  0 | train loss: 0.0518 | test accuracy: 0.95
Epoch:  0 | train loss: 0.1723 | test accuracy: 0.94
Epoch:  0 | train loss: 0.1908 | test accuracy: 0.94
Epoch:  0 | train loss: 0.0576 | test accuracy: 0.95
Epoch:  0 | train loss: 0.0414 | test accuracy: 0.96
Epoch:  0 | train loss: 0.3591 | test accuracy: 0.95
Epoch:  0 | train loss: 0.2465 | test accuracy: 0.95
In [90]:
# print 10 predictions from test data
test_output = rnn(test_x[:10].view(-1, 28, 28))
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
[7 2 1 0 4 1 4 9 5 9] prediction number
[7 2 1 0 4 1 4 9 5 9] real number
