How to run a basic RNN model using Pytorch?
MACHINE LEARNING RECIPES DATA CLEANING PYTHON DATA MUNGING PANDAS CHEATSHEET     ALL TAGS

How to run a basic RNN model using Pytorch?

How to run a basic RNN model using Pytorch?

This Pytorch recipe inputs a dataset into a basic RNN (recurrent neural net) model and makes image classification predictions.

0

This recipe uses the MNIST handwritten digits dataset for image classification. The RNN model predicts what the handwritten digit is. The recipe uses the following steps to accurately predict the handwritten digits:
- Import Libraries
- Prepare Dataset
- Create RNN Model
- Instantiate Model Class
- Instantiate Loss Class
- Instantiate Optimizer Class
- Tran the Model
- Prediction

This recipe uses the helpful PyTorch utility DataLoader - which provide the ability to batch, shuffle and load the data in parallel using multiprocessing workers.

What is RNN ?
A recurrent neural network (RNN) is a type of deep learning artificial neural network commonly used in speech recognition and natural language processing (NLP). This neural net processes sequential data, and takes in as input both the new input and the output (or a hidden layer) of the net in the previous step. Since they have backward connection in their hidden layers they have memory states.

What is PyTorch ?
Pytorch is a Python deep learning library that uses the power of graphics processing units. Its strengths compared to other tools like tensorflow are its flexibility and speed. You can use other Python packages such as NumPy, SciPy to extend PyTorch functionalities.

In [79]:
import torch
from torch import nn
from torch.autograd import Variable
import torchvision.datasets as dsets
import torchvision.transforms as transforms
import matplotlib.pyplot as plt
%matplotlib inline
In [80]:
torch.manual_seed(1)    # reproducible
Out[80]:
<torch._C.Generator at 0x12214f310>
In [81]:
# Hyper Parameters
EPOCH = 1               # train the training data n times, to save time, we just train 1 epoch
BATCH_SIZE = 64
TIME_STEP = 28          # rnn time step / image height
INPUT_SIZE = 28         # rnn input size / image width
LR = 0.01               # learning rate
DOWNLOAD_MNIST = True   # set to True if haven't download the data
In [82]:
# Mnist digital dataset
train_data = dsets.MNIST(
    root='./mnist/',
    train=True,                         # this is training data
    transform=transforms.ToTensor(),    # Converts a PIL.Image or numpy.ndarray to
                                        # torch.FloatTensor of shape (C x H x W) and normalize in the range [0.0, 1.0]
    download=DOWNLOAD_MNIST,            # download it if you don't have it
)
In [83]:
# plot one example
print(train_data.train_data.size())     # (60000, 28, 28)
print(train_data.train_labels.size())   # (60000)
plt.imshow(train_data.train_data[0].numpy(), cmap='gray')
plt.title('%i' % train_data.train_labels[0])
plt.show()
torch.Size([60000, 28, 28])
torch.Size([60000])
In [84]:
# Data Loader for easy mini-batch return in training
train_loader = torch.utils.data.DataLoader(dataset=train_data,
                                           batch_size=BATCH_SIZE, shuffle=True)
In [85]:
# convert test data into Variable, pick 2000 samples to speed up testing
test_data = dsets.MNIST(root='./mnist/', train=False, transform=transforms.ToTensor())
test_x = Variable(test_data.test_data, volatile=True).type(torch.FloatTensor)[:2000]/255.
# shape (2000, 28, 28) value in range(0,1)
test_y = test_data.test_labels.numpy().squeeze()[:2000]    # covert to numpy array
/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:3: UserWarning: volatile was removed and now has no effect. Use `with torch.no_grad():` instead.
  This is separate from the ipykernel package so we can avoid doing imports until
In [86]:
class RNN(nn.Module):
    def __init__(self):
        super(RNN, self).__init__()

        self.rnn = nn.LSTM(         # if use nn.RNN(), it hardly learns
            input_size=INPUT_SIZE,
            hidden_size=64,         # rnn hidden unit
            num_layers=1,           # number of rnn layer
            batch_first=True,       # input & output will has batch size as 1s dimension. e.g. (batch, time_step, input_size)
        )

        self.out = nn.Linear(64, 10)

    def forward(self, x):
        # x shape (batch, time_step, input_size)
        # r_out shape (batch, time_step, output_size)
        # h_n shape (n_layers, batch, hidden_size)
        # h_c shape (n_layers, batch, hidden_size)
        r_out, (h_n, h_c) = self.rnn(x, None)   # None represents zero initial hidden state

        # choose r_out at the last time step
        out = self.out(r_out[:, -1, :])
        return out
In [87]:
rnn = RNN()
print(rnn)
RNN(
  (rnn): LSTM(28, 64, batch_first=True)
  (out): Linear(in_features=64, out_features=10, bias=True)
)
In [88]:
optimizer = torch.optim.Adam(rnn.parameters(), lr=LR)   # optimize all cnn parameters
loss_func = nn.CrossEntropyLoss()                       # the target label is not one-hotted
In [89]:
# training and testing
for epoch in range(EPOCH):
    for step, (x, y) in enumerate(train_loader):        # gives batch data
        b_x = Variable(x.view(-1, 28, 28))              # reshape x to (batch, time_step, input_size)
        b_y = Variable(y)                               # batch y

        output = rnn(b_x)                               # rnn output
        loss = loss_func(output, b_y)                   # cross entropy loss
        optimizer.zero_grad()                           # clear gradients for this training step
        loss.backward()                                 # backpropagation, compute gradients
        optimizer.step()                                # apply gradients

        if step % 50 == 0:
            test_output = rnn(test_x)                   # (samples, time_step, input_size)
            pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
            accuracy = sum(pred_y == test_y) / float(test_y.size)
            print('Epoch: ', epoch, '| train loss: %.4f' % loss.data[0], '| test accuracy: %.2f' % accuracy)
/Applications/anaconda3/lib/python3.6/site-packages/ipykernel_launcher.py:17: UserWarning: invalid index of a 0-dim tensor. This will be an error in PyTorch 0.5. Use tensor.item() to convert a 0-dim tensor to a Python number
Epoch:  0 | train loss: 2.2883 | test accuracy: 0.10
Epoch:  0 | train loss: 0.8138 | test accuracy: 0.62
Epoch:  0 | train loss: 0.9010 | test accuracy: 0.78
Epoch:  0 | train loss: 0.6608 | test accuracy: 0.83
Epoch:  0 | train loss: 0.3150 | test accuracy: 0.85
Epoch:  0 | train loss: 0.2186 | test accuracy: 0.91
Epoch:  0 | train loss: 0.4511 | test accuracy: 0.90
Epoch:  0 | train loss: 0.4673 | test accuracy: 0.90
Epoch:  0 | train loss: 0.2014 | test accuracy: 0.93
Epoch:  0 | train loss: 0.2198 | test accuracy: 0.93
Epoch:  0 | train loss: 0.0439 | test accuracy: 0.93
Epoch:  0 | train loss: 0.1979 | test accuracy: 0.95
Epoch:  0 | train loss: 0.0518 | test accuracy: 0.95
Epoch:  0 | train loss: 0.1723 | test accuracy: 0.94
Epoch:  0 | train loss: 0.1908 | test accuracy: 0.94
Epoch:  0 | train loss: 0.0576 | test accuracy: 0.95
Epoch:  0 | train loss: 0.0414 | test accuracy: 0.96
Epoch:  0 | train loss: 0.3591 | test accuracy: 0.95
Epoch:  0 | train loss: 0.2465 | test accuracy: 0.95
In [90]:
# print 10 predictions from test data
test_output = rnn(test_x[:10].view(-1, 28, 28))
pred_y = torch.max(test_output, 1)[1].data.numpy().squeeze()
print(pred_y, 'prediction number')
print(test_y[:10], 'real number')
[7 2 1 0 4 1 4 9 5 9] prediction number
[7 2 1 0 4 1 4 9 5 9] real number

Relevant Projects

Natural language processing Chatbot application using NLTK for text classification
In this NLP AI application, we build the core conversational engine for a chatbot. We use the popular NLTK text classification library to achieve this.

Choosing the right Time Series Forecasting Methods
There are different time series forecasting methods to forecast stock price, demand etc. In this machine learning project, you will learn to determine which forecasting method to be used when and how to apply with time series forecasting example.

Machine Learning project for Retail Price Optimization
In this machine learning pricing project, we implement a retail price optimization algorithm using regression trees. This is one of the first steps to building a dynamic pricing model.

German Credit Dataset Analysis to Classify Loan Applications
In this data science project, you will work with German credit dataset using classification techniques like Decision Tree, Neural Networks etc to classify loan applications using R.

Mercari Price Suggestion Challenge Data Science Project
Data Science Project in Python- Build a machine learning algorithm that automatically suggests the right product prices.

Forecast Inventory demand using historical sales data in R
In this machine learning project, you will develop a machine learning model to accurately forecast inventory demand based on historical sales data.

Predict Employee Computer Access Needs in Python
Data Science Project in Python- Given his or her job role, predict employee access needs using amazon employee database.

Perform Time series modelling using Facebook Prophet
In this project, we are going to talk about Time Series Forecasting to predict the electricity requirement for a particular house using Prophet.

Deep Learning with Keras in R to Predict Customer Churn
In this deep learning project, we will predict customer churn using Artificial Neural Networks and learn how to model an ANN in R with the keras deep learning package.

Ensemble Machine Learning Project - All State Insurance Claims Severity Prediction
In this ensemble machine learning project, we will predict what kind of claims an insurance company will get. This is implemented in python using ensemble machine learning algorithms.