Transfer Learning Exercise

Introduction

Most of the time you won't want to train a whole convolutional network yourself. Training a modern ConvNet on a huge dataset like ImageNet takes weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor or as an initial network to fine-tune.

In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor.

VGGNet is a good choice because it's simple and performs well, coming in second in the 2014 ImageNet competition (ILSVRC). The idea here is that we keep all the convolutional layers but replace the final fully-connected layer with our own classifier. This way we can use VGGNet as a fixed feature extractor for our images and then easily train a simple classifier on top of it.

  • Use all but the last fully-connected layer as a fixed feature extractor.
  • Define a new, final classification layer and apply it to a task of our choice!

You can read more about transfer learning from the CS231n Stanford course notes.

Imports

# python
from collections import OrderedDict
from datetime import datetime
import os

# pypi
from dotenv import load_dotenv
from torch import nn
from sklearn.model_selection import train_test_split
from torch.utils.data.sampler import SubsetRandomSampler

import matplotlib
import numpy
import seaborn
import torch
import torch.optim as optimize
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as pyplot

# this project
from neurotic.tangles.data_paths import DataPathTwo

Plotting

get_ipython().run_line_magic('matplotlib', 'inline')
get_ipython().run_line_magic('config', "InlineBackend.figure_format = 'retina'")
seaborn.set(style="whitegrid",
            rc={"axes.grid": False,
                "font.size": 8,
                "font.family": ["sans-serif"],
                "font.sans-serif": ["Latin Modern Sans", "Lato"],
                "figure.figsize": (8, 6)},
            font_scale=3)

Flower power

Here we'll be using VGGNet to classify images of flowers. We'll start by importing our usual resources and checking whether we can train our model on the GPU.

Download the Data

Download the flower data from this link, save it in the home directory of this notebook, and extract the zip file to get the directory flower_photos/. Make sure the directory has this exact name for accessing the data: flower_photos.

load_dotenv()
path = DataPathTwo(folder_key="FLOWERS")
print(path.folder)
for target in path.folder.iterdir():
    print(target)
/home/hades/datasets/flower_photos
/home/hades/datasets/flower_photos/.DS_Store
/home/hades/datasets/flower_photos/train
/home/hades/datasets/flower_photos/test
/home/hades/datasets/flower_photos/LICENSE.txt

Check If CUDA Is Available

device = "cuda:0" if torch.cuda.is_available() else "cpu"
print(device)
cuda:0

CUDA runs out of memory and crashes on this machine, so we'll fall back to the CPU.

device = "cpu"
print(device)
cpu

Load and Transform our Data

We'll be using PyTorch's ImageFolder class, which makes it very easy to load data from a directory. For example, the training images are all stored in a directory path that looks like this:

root/class_1/xxx.png
root/class_1/xxy.png
root/class_1/xxz.png

root/class_2/123.png
root/class_2/nsdf3.png
root/class_2/asd932_.png

Where, in this case, the root folder for training is flower_photos/train/ and the classes are the names of flower types.
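
As a quick illustration (a sketch using the dataset path from above), ImageFolder infers the class labels from the sub-directory names, and its class_to_idx attribute shows the label-to-index mapping it builds:

flowers = datasets.ImageFolder("/home/hades/datasets/flower_photos/train")
print(flowers.classes)
print(flowers.class_to_idx)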

Define Training and Test Data Directories

train_dir = path.folder.joinpath('train/')
test_dir = path.folder.joinpath('test/')
print(train_dir)
print(test_dir)
/home/hades/datasets/flower_photos/train
/home/hades/datasets/flower_photos/test

Classes are folders in each directory with these names:

classes = ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips']
CLASS_COUNT = len(classes)

Transforming the Data

When we perform transfer learning, we have to shape our input data into the form that the pre-trained model expects. VGG16 expects 224x224 square images as input, so we resize each flower image to fit this mold.

Load And Transform Data Using ImageFolder

VGG-16 takes 224x224 images as input, so we resize all of them.

data_transform = transforms.Compose([transforms.RandomResizedCrop(224), 
                                      transforms.ToTensor()])

train_data = datasets.ImageFolder(train_dir, transform=data_transform)
test_data = datasets.ImageFolder(test_dir, transform=data_transform)
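
A side note: RandomResizedCrop is a training-time augmentation, and torchvision's pretrained models were trained on inputs normalized with the ImageNet channel statistics. This notebook keeps the simpler transform above for both sets, but a more conventional test-time transform would look something like this sketch:

eval_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet channel means
                         std=[0.229, 0.224, 0.225]),  # ImageNet channel standard deviations
])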

Print Out Some Data Stats

print('Num training images: ', len(train_data))
print('Num test images: ', len(test_data))
Num training images:  3130
Num test images:  540

Next, we set aside 20% of the training images for validation:

VALIDATION_FRACTION = 0.2
indices = list(range(len(train_data)))
training_indices, validation_indices = train_test_split(
    indices,
    test_size=VALIDATION_FRACTION)
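
Since train_test_split comes from scikit-learn, you could also stratify the split by label so each class is represented proportionally in the validation set. A sketch of that alternative (sample_labels is a name introduced here; ImageFolder stores (path, label) pairs in its samples attribute):

sample_labels = [label for _, label in train_data.samples]
training_indices, validation_indices = train_test_split(
    indices, test_size=VALIDATION_FRACTION, stratify=sample_labels)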

DataLoaders and Data Visualization

Define Dataloader Parameters

BATCH_SIZE = 20
NUM_WORKERS = 4
train_sampler = SubsetRandomSampler(training_indices)
valid_sampler = SubsetRandomSampler(validation_indices)

Prepare Data Loaders

train_loader = torch.utils.data.DataLoader(train_data, batch_size=BATCH_SIZE, 
                                           sampler=train_sampler,
                                           num_workers=NUM_WORKERS)
valid_loader = torch.utils.data.DataLoader(train_data, batch_size=BATCH_SIZE, 
                                           sampler=valid_sampler, num_workers=NUM_WORKERS)
test_loader = torch.utils.data.DataLoader(test_data, batch_size=BATCH_SIZE, 
                                          num_workers=NUM_WORKERS, shuffle=True)

Visualize some sample data

Obtain one batch of training images

dataiter = iter(train_loader)
images, labels = next(dataiter)
images = images.numpy() # convert images to numpy for display

Plot The Images In The Batch, Along With The Corresponding Labels

fig = pyplot.figure(figsize=(12, 10))
pyplot.rc("axes", titlesize=10)
for idx in numpy.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    pyplot.imshow(numpy.transpose(images[idx], (1, 2, 0)))
    ax.set_title(classes[labels[idx]])

sample_batches.png

Define the Model

To define a model for training we'll follow these steps:

  1. Load in a pre-trained VGG16 model
  2. "Freeze" all the parameters, so the net acts as a fixed feature extractor
  3. Remove the last layer
  4. Replace the last layer with a linear classifier of our own

Freezing simply means that the parameters in the pre-trained model will not change during training.

Load the pretrained model from pytorch

vgg16 = models.vgg16(pretrained=True)

Print Out The Model Structure

print(vgg16)
VGG(
  (features): Sequential(
    (0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (1): ReLU(inplace)
    (2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (3): ReLU(inplace)
    (4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (5): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (6): ReLU(inplace)
    (7): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (8): ReLU(inplace)
    (9): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (10): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (11): ReLU(inplace)
    (12): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (13): ReLU(inplace)
    (14): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (15): ReLU(inplace)
    (16): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (17): Conv2d(256, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (18): ReLU(inplace)
    (19): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (20): ReLU(inplace)
    (21): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (22): ReLU(inplace)
    (23): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
    (24): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (25): ReLU(inplace)
    (26): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (27): ReLU(inplace)
    (28): Conv2d(512, 512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    (29): ReLU(inplace)
    (30): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
  )
  (classifier): Sequential(
    (0): Linear(in_features=25088, out_features=4096, bias=True)
    (1): ReLU(inplace)
    (2): Dropout(p=0.5)
    (3): Linear(in_features=4096, out_features=4096, bias=True)
    (4): ReLU(inplace)
    (5): Dropout(p=0.5)
    (6): Linear(in_features=4096, out_features=1000, bias=True)
  )
)

Since we're only going to change the last (classification) layer, it might be helpful to see how many inputs and outputs it has.

print(vgg16.classifier[6].in_features) 
print(vgg16.classifier[6].out_features) 
4096
1000

So, the original model outputs 1,000 classes; we're going to need to change that to our five classes (eventually).

Freeze training for all "features" layers

for param in vgg16.features.parameters():
    param.requires_grad = False
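
As a sanity check (a quick sketch, not part of the original exercise), we can confirm that only the classifier's parameters are still trainable:

trainable = sum(parameter.numel() for parameter in vgg16.parameters()
                if parameter.requires_grad)
total = sum(parameter.numel() for parameter in vgg16.parameters())
print("Trainable parameters: {:,} of {:,}".format(trainable, total))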

Final Classifier Layer

Once you have the pre-trained feature extractor, you just need to modify and/or add to the final, fully-connected classifier layers. In this case, we suggest that you replace the last layer in the vgg classifier group of layers.

This layer should see as input the number of features produced by the portion of the network that you are not changing, and produce an appropriate number of outputs for the flower classification task.

You can access any layer in a pretrained network by name and (sometimes) index; e.g. vgg16.classifier[6] is the layer at index 6 (the seventh and final layer) in the group of layers named "classifier".

classifier = nn.Sequential(OrderedDict([
    ("Fullly Connected Classifier", nn.Linear(in_features=4096, out_features=CLASS_COUNT, bias=True)),
]))
vgg16.classifier[6] = classifier
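
To make sure the surgery worked, a dummy forward pass (a sketch, run while the model is still on the CPU) should now produce one output per flower class:

with torch.no_grad():
    dummy_batch = torch.randn(1, 3, 224, 224)  # one fake 224x224 RGB image
    print(vgg16(dummy_batch).shape)  # expecting torch.Size([1, 5])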

After completing your model, move it to the GPU if one is available (here we're using the CPU).

vgg16.to(device)

Specify Loss Function and Optimizer

Below we'll use cross-entropy loss and stochastic gradient descent with a small learning rate. Note that the optimizer should only be given the trainable parameters: vgg16.classifier.parameters().

Specify Loss Function (Categorical Cross-Entropy)

criterion = nn.CrossEntropyLoss()

Specify Optimizer (Stochastic Gradient Descent) and Learning Rate = 0.001

optimizer = optimize.SGD(vgg16.classifier.parameters(), lr=0.001)
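
If you later decide to unfreeze some of the feature layers, a common pattern (a sketch, not used in this notebook) is to hand the optimizer only the parameters that still require gradients:

trainable_optimizer = optimize.SGD(
    (parameter for parameter in vgg16.parameters() if parameter.requires_grad),
    lr=0.001)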

Training

Here, we'll train the network.

Exercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help.

Number of Epochs to Train the Model

EPOCHS = 2
def train(model: nn.Module, epochs: int=EPOCHS, model_number: int=0,
          epoch_offset: int=1, print_every: int=10) -> tuple:
    """Train, validate, and save the model
    This trains the model and validates it, saving the best 
    (based on validation loss) as =model_<number>_vgg.pth=

    Args:
     model: the network to train
     epochs: number of times to repeat training
     model_number: an identifier for the saved hyperparameters file
     epoch_offset: amount of epochs that have occurred previously
     print_every: how often to print output
    Returns:
     filename, training-loss, validation-loss, improvements: the outcomes for the training
    """
    optimizer = optimize.SGD(model.classifier.parameters(), lr=0.001)
    criterion = nn.CrossEntropyLoss()
    output_file = "model_{}_vgg.pth".format(model_number)
    training_losses = []
    validation_losses = []
    improvements = []
    valid_loss_min = numpy.Inf # track change in validation loss
    epoch_start = epoch_offset
    last_epoch = epoch_start + epochs
    for epoch in range(epoch_start, last_epoch):

        # keep track of training and validation loss
        train_loss = 0.0
        valid_loss = 0.0

        model.train()
        for data, target in train_loader:
            # move tensors to GPU if CUDA is available            
            data, target = data.to(device), target.to(device)
            # clear the gradients of all optimized variables
            optimizer.zero_grad()
            # forward pass: compute predicted outputs by passing inputs to the model
            output = model(data)
            # calculate the batch loss
            loss = criterion(output, target)
            # backward pass: compute gradient of the loss with respect to model parameters
            loss.backward()
            # perform a single optimization step (parameter update)
            optimizer.step()
            # update training loss
            train_loss += loss.item() * data.size(0)

        model.eval()
        # no_grad switches off autograd bookkeeping, which we don't need for validation
        with torch.no_grad():
            for data, target in valid_loader:
                # move tensors to GPU if CUDA is available
                data, target = data.to(device), target.to(device)
                # forward pass: compute predicted outputs by passing inputs to the model
                output = model(data)
                # calculate the batch loss
                loss = criterion(output, target)
                # update total validation loss 
                valid_loss += loss.item() * data.size(0)

        # calculate average losses over the samples each loader actually draws
        train_loss = train_loss/len(train_sampler)
        valid_loss = valid_loss/len(valid_sampler)

        # print training/validation statistics 
        if not (epoch % print_every):
            print('Epoch: {} \tTraining Loss: {:.6f} \tValidation Loss: {:.6f}'.format(
                epoch, train_loss, valid_loss))
        training_losses.append(train_loss)
        validation_losses.append(valid_loss)
        # save model if validation loss has decreased
        if valid_loss <= valid_loss_min:
            print('Validation loss decreased ({:.6f} --> {:.6f}).  Saving model ...'.format(
            valid_loss_min,
            valid_loss))
            torch.save(model.state_dict(), output_file)
            valid_loss_min = valid_loss
            improvements.append(epoch - epoch_offset)
    return output_file, training_losses, validation_losses, improvements
def test(best_model):
    """Test the model and report per-class and overall accuracy"""
    criterion = nn.CrossEntropyLoss()
    # track test loss
    test_loss = 0.0
    class_correct = list(0. for i in range(CLASS_COUNT))
    class_total = list(0. for i in range(CLASS_COUNT))

    best_model.to(device)
    best_model.eval()
    # iterate over test data (no_grad: we don't need gradients for testing)
    with torch.no_grad():
        for data, target in test_loader:
            # move tensors to GPU if CUDA is available
            data, target = data.to(device), target.to(device)
            # forward pass: compute predicted outputs by passing inputs to the model
            output = best_model(data)
            # calculate the batch loss
            loss = criterion(output, target)
            # update test loss 
            test_loss += loss.item() * data.size(0)
            # convert output probabilities to predicted class
            _, pred = torch.max(output, 1)
            # compare predictions to true label
            # (.cpu() is a no-op on CPU tensors, so this works on either device)
            correct = numpy.squeeze(pred.eq(target.data.view_as(pred)).cpu().numpy())
            # calculate test accuracy for each object class
            # (use data.size(0) rather than BATCH_SIZE: the last batch may be smaller)
            for i in range(data.size(0)):
                label = target.data[i]
                class_correct[label] += correct[i].item()
                class_total[label] += 1

    # average test loss
    test_loss = test_loss/len(test_loader.dataset)
    print('Test Loss: {:.6f}\n'.format(test_loss))

    for i in range(CLASS_COUNT):
        if class_total[i] > 0:
            print('Test Accuracy of %5s: %2d%% (%2d/%2d)' % (
                classes[i], 100 * class_correct[i] / class_total[i],
                numpy.sum(class_correct[i]), numpy.sum(class_total[i])))
        else:
            print('Test Accuracy of %5s: N/A (no training examples)' % (classes[i]))

    print('\nTest Accuracy (Overall): %2d%% (%2d/%2d)' % (
        100. * numpy.sum(class_correct) / numpy.sum(class_total),
        numpy.sum(class_correct), numpy.sum(class_total)))
output_file, training_losses, validation_losses, improvements = train(vgg16, print_every=1)

Testing

Reload the best weights that train saved and see how the model does on the test set.

vgg16.load_state_dict(torch.load(output_file))
test(vgg16)

Visualize Sample Test Results

Obtain one batch of test images, get the model's predictions, and plot the images with the predicted (and true) labels, colored green when the prediction is correct and red when it isn't.

dataiter = iter(test_loader)
images, labels = next(dataiter)

# move the batch to the same device as the model and get predictions
images = images.to(device)
output = vgg16(images)

# convert output probabilities to predicted classes
_, preds_tensor = torch.max(output, 1)
preds = numpy.squeeze(preds_tensor.cpu().numpy())

# plot the images in the batch along with predicted and true labels
images = images.cpu().numpy()
fig = pyplot.figure(figsize=(25, 4))
for idx in numpy.arange(20):
    ax = fig.add_subplot(2, 20//2, idx+1, xticks=[], yticks=[])
    pyplot.imshow(numpy.transpose(images[idx], (1, 2, 0)))
    ax.set_title("{} ({})".format(classes[preds[idx]], classes[labels[idx]]),
                 color=("green" if preds[idx] == labels[idx].item() else "red"))