Basic Numpy for Neural Networks

Beginning

Numpy is the main package for scientific computing in Python. It is maintained by a large community (www.numpy.org). In this exercise you will learn several key numpy functions such as np.exp, np.reshape, and np.dot.

Imports

# python
from collections import namedtuple
from functools import partial

import math
import time

# pypi
from expects import be_true, equal, expect

import hvplot.pandas
import numpy
import pandas

# my stuff
from graeae import EmbedHoloviews

Set Up

slug = "basic-numpy-for-neural-networks"
Embed = partial(EmbedHoloviews, folder_path=f"files/posts/first-course/{slug}")

Plot = namedtuple("Plot", ["width", "height", "fontscale", "tan", "blue", "red"])
PLOT = Plot(
    width=900,
    height=750,
    fontscale=2,
    tan="#ddb377",
    blue="#4687b7",
    red="#ce7b6d",
 )

Building basic functions with numpy

sigmoid function, np.exp()

Before using np.exp(), you will use math.exp() to implement the sigmoid function. You will then see why np.exp() is preferable to math.exp(). (see python's power and logarithmic functions)

Build a function that returns the sigmoid of a real number x using math.exp(x) for the exponential function.

\(sigmoid(x) = \frac{1}{1+e^{-x}}\) is sometimes also known as the logistic function. It is a non-linear function used not only in Machine Learning (Logistic Regression), but also in Deep Learning.

TOLERANCE = 0.000001
def basic_sigmoid(x: float) -> float:
    """Compute sigmoid of x.

    Args:
     x: A scalar

    Returns:
     s: sigmoid(x)
    """
    return 1/(1 + math.exp(-x))
expected = 0.9525741268224334
actual = basic_sigmoid(3)

expect(math.isclose(expected, actual, rel_tol=TOLERANCE)).to(be_true)
print(f"sigmoid of 3: {actual:0.3f}")

We rarely use the "math" library in deep learning because its functions only accept real numbers (scalars). In deep learning we mostly work with matrices and vectors, which is why numpy is more useful.

One reason why we use "numpy" instead of "math" in Deep Learning

x = [1, 2, 3]
try:
    basic_sigmoid(x) # you will see this give an error when you run it, because x is a vector.
except TypeError as error:
    print(str(error))

In fact, if \(x = (x_1, x_2, ..., x_n)\) is a row vector then \(np.exp(x)\) will apply the exponential function to every element of x. The output will thus be: \(np.exp(x) = (e^{x_1}, e^{x_2}, ..., e^{x_n})\). (see numpy.exp).

x = numpy.array([1, 2, 3])
print(numpy.exp(x))
[ 2.71828183  7.3890561  20.08553692]

Furthermore, if x is a vector, then a Python operation such as \(s = x + 3\) or \(s = \frac{1}{x}\) will output s as a vector of the same size as x.

x = numpy.array([1, 2, 3])
print (x + 3)
[4 5 6]
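Element-wise division works the same way; a quick sketch reusing the x defined just above:

print(1 / x)  # element-wise reciprocals: 1, 0.5, 0.333...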

Any time you need more info on a numpy function, we encourage you to look at the official documentation.

Implement the sigmoid function using numpy.

Instructions: x could now be either a real number, a vector, or a matrix. The data structures we use in numpy to represent these shapes (vectors, matrices…) are called numpy arrays. You don't need to know more for now.

\[ \text{For } x \in \mathbb{R}^n \text{, } sigmoid(x) = sigmoid\begin{pmatrix} x_1 \\ x_2 \\ ... \\ x_n \\ \end{pmatrix} = \begin{pmatrix} \frac{1}{1+e^{-x_1}} \\ \frac{1}{1+e^{-x_2}} \\ ... \\ \frac{1}{1+e^{-x_n}} \\ \end{pmatrix}\tag{1} \]

def sigmoid(x):
    """
    Compute the sigmoid of x

    Args:
     x: A scalar or numpy array of any size

    Returns:
     s: sigmoid(x)
    """
    return 1/(1 + numpy.exp(-x))
x = numpy.array([1, 2, 3])
expected = numpy.array([ 0.73105858,  0.88079708,  0.95257413])
actual = sigmoid(x)
print(actual)
expect(numpy.allclose(expected, actual, TOLERANCE)).to(be_true)
[0.73105858 0.88079708 0.95257413]

Sigmoid gradient

You will need to compute gradients to optimize loss functions using backpropagation. Let's code your first gradient function.

The formula is:

\[ sigmoid\_derivative(x) = \sigma'(x) = \sigma(x) (1 - \sigma(x))\tag{2} \]
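For reference (this derivation isn't part of the original exercise), the identity follows directly from differentiating the sigmoid:

\[ \sigma'(x) = \frac{d}{dx}\left(\frac{1}{1+e^{-x}}\right) = \frac{e^{-x}}{\left(1+e^{-x}\right)^2} = \frac{1}{1+e^{-x}} \cdot \frac{e^{-x}}{1+e^{-x}} = \sigma(x)\bigl(1 - \sigma(x)\bigr) \]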

You often code this function in two steps:

  1. Set s to be the sigmoid of x. You might find your sigmoid(x) function useful.
  2. Compute \(\sigma'(x) = s(1-s)\)

(An aside: numpy.random.randn generates samples from the standard normal distribution. Adding the (2, 3) array a to the (2, 1) array b below works because of broadcasting, which we'll cover further down.)

a = numpy.random.randn(2, 3)
b = numpy.random.randn(2, 1)
c = a + b
def sigmoid_derivative(x):
    """
    Compute the gradient (also called the slope or derivative) of the sigmoid
    function with respect to its input x.

    Args:
     x: A scalar or numpy array

    Returns:
     ds: Your computed gradient.
    """
    s = sigmoid(x)
    return s * (1 - s)
x = numpy.array([1, 2, 3])
expected = numpy.array([0.19661193, 0.10499359, 0.04517666])
actual = sigmoid_derivative(x)
print (f"sigmoid_derivative(x) = {actual}")
expect(numpy.allclose(expected, actual, TOLERANCE)).to(be_true)
sigmoid_derivative(x) = [0.19661193 0.10499359 0.04517666]

Plotting The Sigmoid and Its Derivative

x = numpy.linspace(-10, 10)
siggy = sigmoid(x)
siggy_slope = sigmoid_derivative(x)
frame = pandas.DataFrame.from_dict(dict(Sigmoid=siggy, Slope=siggy_slope))
frame = frame.set_index(x)
plot = frame.hvplot(title="Sigmoid and Derivative").opts(
    height=PLOT.height,
    width=PLOT.width,
    fontscale=PLOT.fontscale,
    ylim=(0, 1),
)

output = Embed(plot=plot, file_name="sigmoid")()
print(output)

(Figure: Sigmoid and Derivative)

Reshaping arrays

Two common numpy functions used in deep learning are np.shape and np.reshape().

  • X.shape is used to get the shape (dimension) of a matrix/vector X.
  • X.reshape(…) is used to reshape X into some other dimension.

For example, in computer science, an image is represented by a 3D array of shape \((length, height, depth = 3)\). However, when you read an image as the input of an algorithm you convert it to a vector of shape \((length \times height \times 3, 1)\). In other words, you "unroll", or reshape, the 3D array into a 1D vector.
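A quick sketch of those two operations (the array here is made up purely for illustration):

fake_image = numpy.zeros((4, 5, 3))            # a 4x5 "image" with 3 channels
print(fake_image.shape)                        # (4, 5, 3)
unrolled = fake_image.reshape((4 * 5 * 3, 1))
print(unrolled.shape)                          # (60, 1)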

We'll implement image2vector(), a function that takes an input of shape (length, height, 3) and returns a vector of shape \((length\times height\times 3, 1)\). For example, if you would like to reshape an array v of shape (a, b, c) into an array of shape (\(a \times b, c\)) you would do:

v = v.reshape((v.shape[0] * v.shape[1], v.shape[2]))
def image2vector(image: numpy.ndarray) -> numpy.ndarray:
    """Unroll the image

    Args:
     image: array of shape (length, height, depth)

    Returns:
     v: vector of shape (length*height*depth, 1)
    """
    length, height, depth = image.shape
    return image.reshape((length * height * depth, 1))

Our image will be a 3 by 3 by 2 array. Typically, images will be \((\textrm{number of pixels}_x, \textrm{number of pixels}_y, 3)\), where the 3 represents the RGB values.

image = numpy.array([[[ 0.67826139,  0.29380381],
                      [ 0.90714982,  0.52835647],
                      [ 0.4215251 ,  0.45017551]],

                     [[ 0.92814219,  0.96677647],
                      [ 0.85304703,  0.52351845],
                      [ 0.19981397,  0.27417313]],

                     [[ 0.60659855,  0.00533165],
                      [ 0.10820313,  0.49978937],
                      [ 0.34144279,  0.94630077]]])

expected = numpy.array([[ 0.67826139],
                        [ 0.29380381],
                        [ 0.90714982],
                        [ 0.52835647],
                        [ 0.4215251 ],
                        [ 0.45017551],
                        [ 0.92814219],
                        [ 0.96677647],
                        [ 0.85304703],
                        [ 0.52351845],
                        [ 0.19981397],
                        [ 0.27417313],
                        [ 0.60659855],
                        [ 0.00533165],
                        [ 0.10820313],
                        [ 0.49978937],
                        [ 0.34144279],
                        [ 0.94630077]])

actual = image2vector(image)
print (f"image2vector(image) = {actual}")
length, height, depth = image.shape
expect(actual.shape == (length * height * depth, 1)).to(be_true)
expect(numpy.allclose(actual, expected, TOLERANCE)).to(be_true)
image2vector(image) = [[0.67826139]
 [0.29380381]
 [0.90714982]
 [0.52835647]
 [0.4215251 ]
 [0.45017551]
 [0.92814219]
 [0.96677647]
 [0.85304703]
 [0.52351845]
 [0.19981397]
 [0.27417313]
 [0.60659855]
 [0.00533165]
 [0.10820313]
 [0.49978937]
 [0.34144279]
 [0.94630077]]

Normalizing rows

Another common technique we use in Machine Learning and Deep Learning is to normalize our data. It often leads to better performance because gradient descent converges faster after normalization. Here, by normalization we mean changing x to \( \frac{x}{\| x \|} \) (dividing each row vector of x by its norm).

For example, if \[ x = \begin{bmatrix} 0 & 3 & 4 \\ 2 & 6 & 4 \\ \end{bmatrix}\tag{3} \]

then

\[ \| x\| = np.linalg.norm(x, axis = 1, keepdims = True) = \begin{bmatrix} 5 \\ \sqrt{56} \\ \end{bmatrix}\tag{4} \] and

\[ x\_normalized = \frac{x}{\| x\|} = \begin{bmatrix} 0 & \frac{3}{5} & \frac{4}{5} \\ \frac{2}{\sqrt{56}} & \frac{6}{\sqrt{56}} & \frac{4}{\sqrt{56}} \\ \end{bmatrix}\tag{5} \]
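As a quick check of equation (4) (a minimal sketch using the example matrix above):

x = numpy.array([[0, 3, 4],
                 [2, 6, 4]])
print(numpy.linalg.norm(x, axis=1, keepdims=True))  # [[5.] [7.48331477]], i.e. [5, sqrt(56)]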

Note that you can divide matrices of different sizes and it works fine: this is called broadcasting and you're going to learn about it further down.

Now we'll implement normalizeRows() to normalize the rows of a matrix. After applying this function to an input matrix x, each row of x should be a vector of unit length (meaning length 1).

See: numpy.linalg.norm

def normalizeRows(x: numpy.ndarray) -> numpy.ndarray:
    """
    Implement a function that normalizes each row of the matrix x 
    (to have unit length).

    Args:
     x: A numpy matrix of shape (n, m)

    Returns:
     x: The normalized (by row) numpy matrix.
    """
    x_norm = numpy.linalg.norm(x, ord=2, axis=1, keepdims=True)    
    x = x/x_norm
    return x
x = numpy.array([
    [0, 3, 4],
    [1, 6, 4]])

expected = numpy.array([[ 0., 0.6, 0.8],
                        [ 0.13736056,  0.82416338,  0.54944226]])
actual = normalizeRows(x)

print(f"normalizeRows(x) = {actual}")
expect(numpy.allclose(expected, actual, TOLERANCE)).to(be_true)
normalizeRows(x) = [[0.         0.6        0.8       ]
 [0.13736056 0.82416338 0.54944226]]

We can check that each row is a unit vector by calculating its Euclidean norm (length).

\[ \|x\| = \sqrt{\sum_j x_j^2} \]

SUM_ROWS = 1
print(numpy.sqrt(numpy.sum(actual**2, axis=SUM_ROWS)))
[1. 1.]

Note: x_norm and x have different shapes. This is expected, given that x_norm holds the norm of each row of x, so it has the same number of rows as x but only one column. (As an aside, x /= x_norm would fail here, but because x holds integers and the in-place division can't cast the result back to an integer array, not because of the shapes.) So how did dividing x by x_norm work when the shapes differ? This is called broadcasting, and we'll talk about it next.
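Here is a minimal sketch of those shapes, recomputing the intermediate x_norm from the test matrix above just to print it:

x = numpy.array([
    [0, 3, 4],
    [1, 6, 4]])
x_norm = numpy.linalg.norm(x, ord=2, axis=1, keepdims=True)
print(x.shape)             # (2, 3)
print(x_norm.shape)        # (2, 1)
print((x / x_norm).shape)  # (2, 3) - x_norm is stretched across the columns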

Broadcasting and the softmax function

A very important concept to understand in numpy is "broadcasting". It is very useful for performing mathematical operations between arrays of different shapes. For the full details on broadcasting, you can read the official broadcasting documentation.
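As a small, self-contained illustration (the arrays here are made up): numpy lines up the trailing dimensions of the two operands and stretches any dimension of size 1, so a single row can be added to every row of a matrix, and a scalar to every element.

m = numpy.array([[1., 2., 3.],
                 [4., 5., 6.]])       # shape (2, 3)
row = numpy.array([10., 20., 30.])    # shape (3,)
print(m + row)   # the row is broadcast to both rows of m
print(m * 2)     # the scalar is broadcast to every element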

We'll implement a softmax function using numpy. You can think of softmax as a normalizing function used when your algorithm needs to classify two or more classes.

The Mathy Definitions: \[ \text{for } x \in \mathbb{R}^{1\times n} \text{, } softmax(x) = softmax(\begin{bmatrix} x_1 && x_2 && \ldots && x_n \end{bmatrix}) = \begin{bmatrix} \frac{e^{x_1}}{\sum_{j}e^{x_j}} && \frac{e^{x_2}}{\sum_{j}e^{x_j}} && \ldots && \frac{e^{x_n}}{\sum_{j}e^{x_j}} \end{bmatrix} \]

For a matrix \(x \in \mathbb{R}^{m \times n}\), \(x_{ij}\) maps to the element in the \(i^{th}\) row and \(j^{th}\) column of x, thus we have: \[ softmax(x) = softmax\begin{bmatrix} x_{11} & x_{12} & x_{13} & \dots & x_{1n} \\ x_{21} & x_{22} & x_{23} & \dots & x_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_{m1} & x_{m2} & x_{m3} & \dots & x_{mn} \end{bmatrix} = \begin{bmatrix} \frac{e^{x_{11}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{12}}}{\sum_{j}e^{x_{1j}}} & \frac{e^{x_{13}}}{\sum_{j}e^{x_{1j}}} & \dots & \frac{e^{x_{1n}}}{\sum_{j}e^{x_{1j}}} \\ \frac{e^{x_{21}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{22}}}{\sum_{j}e^{x_{2j}}} & \frac{e^{x_{23}}}{\sum_{j}e^{x_{2j}}} & \dots & \frac{e^{x_{2n}}}{\sum_{j}e^{x_{2j}}} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ \frac{e^{x_{m1}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m2}}}{\sum_{j}e^{x_{mj}}} & \frac{e^{x_{m3}}}{\sum_{j}e^{x_{mj}}} & \dots & \frac{e^{x_{mn}}}{\sum_{j}e^{x_{mj}}} \end{bmatrix} = \begin{pmatrix} softmax\text{(first row of x)} \\ softmax\text{(second row of x)} \\ \ldots \\ softmax\text{(last row of x)} \\ \end{pmatrix} \]

See also: numpy.sum

ROW_SUMS = 1

def softmax(x: numpy.ndarray) -> numpy.ndarray:
    """Calculates the softmax for each row of the input x.

    Args:
     x: A numpy matrix of shape (n,m)

    Returns:
     s: A numpy matrix equal to the softmax of x, of shape (n,m)
    """
    x_exp = numpy.exp(x)
    x_sum = numpy.sum(x_exp, axis=ROW_SUMS, keepdims=True)

    return x_exp/x_sum

expected = numpy.array([[ 9.80897665e-01, 8.94462891e-04, 1.79657674e-02,
                          1.21052389e-04, 1.21052389e-04],
                        [ 8.78679856e-01, 1.18916387e-01, 8.01252314e-04,
                          8.01252314e-04, 8.01252314e-04]])

x = numpy.array([
    [9, 2, 5, 0, 0],
    [7, 5, 0, 0 ,0]])

actual = softmax(x)
print(f"softmax(x) = {actual}")
expect(numpy.allclose(expected, actual, TOLERANCE)).to(be_true)
softmax(x) = [[9.80897665e-01 8.94462891e-04 1.79657674e-02 1.21052389e-04
  1.21052389e-04]
 [8.78679856e-01 1.18916387e-01 8.01252314e-04 8.01252314e-04
  8.01252314e-04]]

Note:

  • If you print the shapes of x_exp, x_sum, and the returned ratio (see the sketch below), you will see that x_sum has shape (2, 1) while x_exp and the result have shape (2, 5). x_exp/x_sum works because of python broadcasting.
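A minimal sketch of that check (x_exp and x_sum are recomputed here, since the function above doesn't expose them):

x_exp = numpy.exp(x)
x_sum = numpy.sum(x_exp, axis=ROW_SUMS, keepdims=True)
print(x_exp.shape)            # (2, 5)
print(x_sum.shape)            # (2, 1)
print((x_exp / x_sum).shape)  # (2, 5)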

What you need to remember:

  • np.exp(x) works for any np.array x and applies the exponential function to every coordinate
  • the sigmoid function and its gradient
  • Some equivalent of image2vector is commonly used in deep learning
  • np.reshape is widely used. In the future, you'll see that keeping your matrix/vector dimensions straight will go toward eliminating a lot of bugs.
  • numpy has efficient built-in functions
  • broadcasting is extremely useful

Vectorization

In deep learning, you deal with very large datasets. Hence, a non-computationally-optimal function can become a huge bottleneck in your algorithm and can result in a model that takes ages to run. To make sure that your code is computationally efficient, you will use vectorization. For example, try to tell the difference between the following implementations of the dot/outer/elementwise product.

x1 = [9, 2, 5, 0, 0, 7, 5, 0, 0, 0, 9, 2, 5, 0, 0]
x2 = [9, 2, 2, 9, 0, 9, 2, 5, 0, 0, 9, 2, 5, 0, 0]
CLASSIC = dict()

Classic (Non-Vectorized)

Dot Product Of Vectors Implementation

tic = time.process_time()
dot = 0
for i in range(len(x1)):
    dot+= x1[i]*x2[i]
toc = time.process_time()
CLASSIC["dot"] = 1000 * (toc - tic)
print (f"dot = {dot} \n ----- Computation time = {CLASSIC['dot']} ms")
dot = 278 
 ----- Computation time = 0.09222100000005895 ms

Outer Product Implementation

tic = time.process_time()
outer = numpy.zeros((len(x1),len(x2)))
for i in range(len(x1)):
    for j in range(len(x2)):
        outer[i,j] = x1[i]*x2[j]
toc = time.process_time()
CLASSIC["outer"] = 1000*(toc - tic)
print (f"outer = {outer}\n ----- Computation time = {CLASSIC['outer']} ms")
outer = [[81. 18. 18. 81.  0. 81. 18. 45.  0.  0. 81. 18. 45.  0.  0.]
 [18.  4.  4. 18.  0. 18.  4. 10.  0.  0. 18.  4. 10.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [63. 14. 14. 63.  0. 63. 14. 35.  0.  0. 63. 14. 35.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [81. 18. 18. 81.  0. 81. 18. 45.  0.  0. 81. 18. 45.  0.  0.]
 [18.  4.  4. 18.  0. 18.  4. 10.  0.  0. 18.  4. 10.  0.  0.]
 [45. 10. 10. 45.  0. 45. 10. 25.  0.  0. 45. 10. 25.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]
 [ 0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.  0.]]
 ----- Computation time = 0.2285300000002266 ms

Elementwise Implementation

tic = time.process_time()
mul = numpy.zeros(len(x1))
for i in range(len(x1)):
    mul[i] = x1[i]*x2[i]
toc = time.process_time()
CLASSIC["elementwise"] = 1000*(toc - tic)
print(f"elementwise multiplication = {mul}\n ----- Computation time = {CLASSIC['elementwise']} ms")
elementwise multiplication = [81.  4. 10.  0.  0. 63. 10.  0.  0.  0. 81.  4. 25.  0.  0.]
 ----- Computation time = 0.10630600000016699 ms

General Dot Product Implementation

W = numpy.random.rand(3,len(x1))
tic = time.process_time()
gdot = numpy.zeros(W.shape[0])
for i in range(W.shape[0]):
    for j in range(len(x1)):
        gdot[i] += W[i,j]*x1[j]
toc = time.process_time()
CLASSIC["general_dot"] = 1000*(toc - tic)
print(f"gdot = {gdot}\n ----- Computation time = {CLASSIC['general_dot']} ms")
gdot = [26.7997887  21.98533453 17.23427487]
 ----- Computation time = 0.14043400000041117 ms

Vectorized

Dot Product Of Vectors

tic = time.process_time()
dot = numpy.dot(x1,x2)
toc = time.process_time()
DOT = 1000*(toc - tic)
print(f"dot = {dot}\n ----- Computation time = {DOT} ms")
print(f"Difference: {CLASSIC['dot'] - DOT} ms")
dot = 278
 ----- Computation time = 0.11425399999964725 ms
Difference: -0.0220329999995883 ms

So for this small input, the pure Python version is actually faster.

Outer Product

tic = time.process_time()
outer = numpy.outer(x1,x2)
toc = time.process_time()
OUTER = 1000*(toc - tic)
print(f"outer = {outer}\n ----- Computation time = {OUTER} ms")
print(f"Difference: {CLASSIC['outer'] - OUTER} ms")
outer = [[81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [63 14 14 63  0 63 14 35  0  0 63 14 35  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [81 18 18 81  0 81 18 45  0  0 81 18 45  0  0]
 [18  4  4 18  0 18  4 10  0  0 18  4 10  0  0]
 [45 10 10 45  0 45 10 25  0  0 45 10 25  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]
 [ 0  0  0  0  0  0  0  0  0  0  0  0  0  0  0]]
 ----- Computation time = 0.09857899999943243 ms
Difference: 0.12995100000079418 ms

Now numpy is a little faster.

Elementwise Multiplication

tic = time.process_time()
mul = numpy.multiply(x1,x2)
toc = time.process_time()
ELEMENTWISE = 1000*(toc - tic)
print(f"elementwise multiplication = {mul}\n ----- Computation time = {ELEMENTWISE} ms")
print(f"Difference: {CLASSIC['elementwise'] - ELEMENTWISE} ms")
elementwise multiplication = [81  4 10  0  0 63 10  0  0  0 81  4 25  0  0]
 ----- Computation time = 0.07506199999962604 ms
Difference: 0.03124400000054095 ms

General Dot Product

tic = time.process_time()
dot = numpy.dot(W,x1)
toc = time.process_time()
GENERAL = 1000*(toc - tic)
print(f"gdot = {dot}\n ----- Computation time = {GENERAL} ms")
print(f"Difference: {CLASSIC['general_dot'] - GENERAL} ms")
gdot = [26.7997887  21.98533453 17.23427487]
 ----- Computation time = 0.10962399999936423 ms
Difference: 0.030810000001046944 ms

As you may have noticed, the vectorized implementation is much cleaner and somewhat more efficient. For bigger vectors/matrices, the differences in running time become even bigger.
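To see that, here is a minimal timing sketch with longer vectors (the length is arbitrary and the exact numbers will vary from run to run and machine to machine):

big_1 = numpy.random.rand(100000)
big_2 = numpy.random.rand(100000)

tic = time.process_time()
slow_dot = sum(big_1[index] * big_2[index] for index in range(len(big_1)))
toc = time.process_time()
loop_milliseconds = 1000 * (toc - tic)

tic = time.process_time()
fast_dot = numpy.dot(big_1, big_2)
toc = time.process_time()
vectorized_milliseconds = 1000 * (toc - tic)

print(f"loop: {loop_milliseconds:0.3f} ms, numpy.dot: {vectorized_milliseconds:0.3f} ms")
expect(math.isclose(slow_dot, fast_dot, rel_tol=TOLERANCE)).to(be_true)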

Note that np.dot() performs a matrix-matrix or matrix-vector multiplication. This is different from np.multiply() and the * operator (which is equivalent to .* in Matlab/Octave), which performs an element-wise multiplication.
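A quick illustration of that difference with a made-up 2 by 2 example:

a = numpy.array([[1., 2.],
                 [3., 4.]])
b = numpy.array([[5., 6.],
                 [7., 8.]])
print(numpy.dot(a, b))  # matrix product:          [[19. 22.] [43. 50.]]
print(a * b)            # element-wise (Hadamard): [[ 5. 12.] [21. 32.]]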

The L1 and L2 loss functions

L1

Now we'll implement the numpy vectorized version of the L1 loss.

Reminder: The loss is used to evaluate the performance of your model. The bigger your loss is, the more different your predictions (\( \hat{y} \)) are from the true values (y). In deep learning, you use optimization algorithms like Gradient Descent to train your model and to minimize the cost. L1 loss is defined as:

\[ \begin{align*} & L_1(\hat{y}, y) = \sum_{i=0}^m \left|y^{(i)} - \hat{y}^{(i)}\right| \end{align*}\tag{6} \]

def L1(yhat: numpy.ndarray, y: numpy.ndarray) -> numpy.ndarray:
    """L1 Loss

    Args:
     yhat: vector of size m (predicted labels)
     y: vector of size m (true labels)

    Returns:
     loss: the value of the L1 loss function defined above
    """
    return numpy.sum(numpy.abs(y - yhat))
yhat = numpy.array([.9, 0.2, 0.1, .4, .9])
y = numpy.array([1, 0, 0, 1, 1])
expected = 1.1
actual = L1(yhat, y)
print(f"L1 = {actual}")
expect(actual).to(equal(expected))
L1 = 1.1

L2 Loss

Next we'll implement the numpy vectorized version of the L2 loss. There are several ways of implementing it; here we square the error and sum it, but np.dot() works just as well, since if \(x = [x_1, x_2, \ldots, x_n]\), then np.dot(x,x) = \(\sum_{j} x_j^{2}\) (see the sketch after the implementation).

L2 loss is defined as \[ L_2(\hat{y},y) = \sum_{i=0}^m\left(y^{(i)} - \hat{y}^{(i)}\right)^2\tag{7} \]

def L2(yhat: numpy.ndarray, y: numpy.ndarray) -> numpy.ndarray:
    """Calculate the L2 Loss

    Args:
     yhat: vector of size m (predicted labels)
     y: vector of size m (true labels)

    Returns:
     loss: the value of the L2 loss function defined above
    """
    return numpy.sum((y - yhat)**2)
yhat = numpy.array([.9, 0.2, 0.1, .4, .9])
y = numpy.array([1, 0, 0, 1, 1])
expected = 0.43
actual = L2(yhat,y)
print(f"L2 = {actual}")
expect(actual).to(equal(expected))
L2 = 0.43
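Since np.dot() was mentioned as an option, here is an equivalent sketch that uses it (L2_dot is a name made up for this example; it gives the same result as L2 above):

def L2_dot(yhat: numpy.ndarray, y: numpy.ndarray) -> float:
    """L2 loss as the dot product of the error vector with itself."""
    difference = y - yhat
    return numpy.dot(difference, difference)

expect(math.isclose(L2_dot(yhat, y), expected, rel_tol=TOLERANCE)).to(be_true)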

End

What to remember:

  • Vectorization is very important in deep learning. It provides computational efficiency and clarity.
  • You have reviewed the L1 and L2 loss.
  • You are familiar with many numpy functions such as np.sum, np.dot, np.multiply, np.maximum, etc…

Source

This was an exercise from DeepLearning.ai's first Coursera course. (link to come)