Word Embeddings: Data Preparation

Data Preparation

This is a look at the kinds of things we'll need to do in order to convert our Twitter data into a form that we can use for machine learning.

Python Imports

This is pretty standard stuff. Only the re module for regular expressions is, strictly speaking, needed here.

from functools import partial
from pprint import pprint
from typing import Dict, Generator, List, Tuple

import re

PyPi Imports

These are libraries installed from the Python Package Index (PyPI).

  • emoji: Convert emoji to text aliases and vice-versa
  • nltk: The Natural Language Toolkit
  • numpy: We'll use it for its arrays.
import emoji
import nltk
import numpy

And Now the Processing

The basic problem we have is that our data is made up of tweets (strings) with emojis, punctuation, and so on, but we need to convert it to something numeric. The first step in that process is to standardize the strings and break them up into tokens.

Our Corpus

Here's a fake tweet that we can use to work our way through the steps.

corpus = emoji.emojize((
    "My :ogre: :red_heart: :moyai:, but "
    "my :goblin: must :poop:! Ho, :zany_face:. Human-animals $??"),
                       use_aliases=True)
print(corpus)

My πŸ‘Ή ❀ πŸ—Ώ, but my πŸ‘Ί must πŸ’©! Ho, πŸ€ͺ. Human-animals $??

Cleaning and Tokenization

Punctuation

The first thing we're going to do is replace the punctuation with periods (.) using re.sub.

ONE_OR_MORE = "+"
PERIOD = "."
PUNCTUATION = ",!?;-"
EXPRESSION = f"[{PUNCTUATION}]" + ONE_OR_MORE
data = re.sub(EXPRESSION, PERIOD, corpus)

print(f"First cleaning:  '{data}'")

First cleaning: 'My πŸ‘Ή ❀ πŸ—Ώ. but my πŸ‘Ί must πŸ’©. Ho. πŸ€ͺ. Human.animals $.'

Tokenize

Next, use NLTK's word_tokenize to break our corpus into tokens. Under the hood it uses the punkt sentence tokenizer to find sentence boundaries and a Treebank-style tokenizer to split out the words. Punkt is German for "period" and is the name of a system created by Tibor Kiss and Jan Strunk; their original paper is "Unsupervised Multilingual Sentence Boundary Detection".

# note: word_tokenize needs the punkt tokenizer data (nltk.download("punkt"))
print(f" - Before: {data}")
data = nltk.word_tokenize(data)
print(f" - After tokenization:  {data}")
  • Before: My πŸ‘Ή ❀ πŸ—Ώ. but my πŸ‘Ί must πŸ’©. Ho. πŸ€ͺ. Human.animals $.
  • After tokenization: ['My', 'πŸ‘Ή', '❀', 'πŸ—Ώ', '.', 'but', 'my', 'πŸ‘Ί', 'must', 'πŸ’©', '.', 'Ho', '.', 'πŸ€ͺ', '.', 'Human.animals', '$', '.']

Lower-Case And More Cleaning

Now we'll reduce the tokens a little more:

  • lower-case everything
  • filter out everything but letters, periods, and emoji

We're going to use emoji.get_emoji_regexp, which returns a compiled regular expression object that matches emoji, so we can call its search method to see whether a token contains an emoji.

print(f" - Before:  {data}")

emoji_expression = emoji.get_emoji_regexp()
data = [token.lower() for token in data
        if any((token.isalpha(),
                token == '.',
                emoji_expression.search(token)))]

print(f" - After:  {data}")
  • Before: ['My', 'πŸ‘Ή', '❀', 'πŸ—Ώ', '.', 'but', 'my', 'πŸ‘Ί', 'must', 'πŸ’©', '.', 'Ho', '.', 'πŸ€ͺ', '.', 'Human.animals', '$', '.']
  • After: ['my', 'πŸ‘Ή', '❀', 'πŸ—Ώ', '.', 'but', 'my', 'πŸ‘Ί', 'must', 'πŸ’©', '.', 'ho', '.', 'πŸ€ͺ', '.', '.']

One thing to notice is that it got rid of Human.animals: the hyphen was replaced with a period, the merged token isn't purely alphabetic, and so it was filtered out. Because of the way we're doing it, any hyphenated words are going to be eliminated.
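
If we wanted to keep hyphenated words instead, one option (not what we'll use in the rest of this post; the names hyphen_split and alternative are just for illustration) would be to turn hyphens into spaces before substituting the rest of the punctuation. This sketch reuses the corpus and PERIOD defined above.

# a sketch of one alternative: turn hyphens into spaces first so "Human-animals"
# splits into two words instead of being dropped later by the isalpha() filter
hyphen_split = corpus.replace("-", " ")
alternative = re.sub("[,!?;]+", PERIOD, hyphen_split)
print(nltk.word_tokenize(alternative))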

Wrap It Together

While the separate steps were useful for illustration, it'll be more convenient to put them into a function for later.

def tokenize(corpus: str) -> List[str]:
    """clean and tokenize the corpus

    Args:
     corpus: original source text

    Returns:
     list of cleaned tokens from the corpus
    """
    ONE_OR_MORE = "+"
    PUNCTUATION = ",!?;-"
    EXPRESSION = f"[{PUNCTUATION}]" + ONE_OR_MORE
    PERIOD = "."

    data = re.sub(EXPRESSION, PERIOD, corpus)
    expression = emoji.get_emoji_regexp()
    data = nltk.word_tokenize(data)
    data = [token.lower() for token in data
            if any((token.isalpha(),
                    token == '.',
                    expression.search(token)))]
    return data

Now we can test it out.

corpus = emoji.emojize(
    ("Able was :clown_face:; ere :clown_face: saw :frog_face:! "
    "Rejoice! :cigarette: for $9?"), use_aliases=True)

# Print new corpus
print(f" - Corpus:  {corpus}")

# Save tokenized version of corpus into 'words' variable
words = tokenize(corpus)

# Print the tokenized version of the corpus
print(f" - Words (tokens):  {words}")
  • Corpus: Able was 🀑; ere 🀑 saw :frog_face:! Rejoice! 🚬 for $9?
  • Words (tokens): ['able', 'was', '🀑', '.', 'ere', '🀑', 'saw', '.', 'rejoice', '.', '🚬', 'for', '.']

Check with an alternative sentence.

source = emoji.emojize(
    ("I'm tired of being a token! Where's all the other "
     ":cheese_wedge:-sniffing"
     " Gnomish at? I bet theres' at least 2 of us :gorilla: "
     "out there, or maybe more..."),
    use_aliases=True)
print(f" - Before: {source}")
print(f" - After: {tokenize(source)}")
  • Before: I'm tired of being a token! Where's all the other πŸ§€-sniffing Gnomish at? I bet theres' at least 2 of us 🦍 out there, or maybe more…
  • After: ['i', 'tired', 'of', 'being', 'a', 'token', '.', 'where', 'all', 'the', 'other', 'πŸ§€.sniffing', 'gnomish', 'at', '.', 'i', 'bet', 'theres', 'at', 'least', 'of', 'us', '🦍', 'out', 'there', '.', 'or', 'maybe', 'more']

Interestingly, it drops the "'m" from the contraction "I'm" (and the "'s" from "Where's") but keeps "a". That's because word_tokenize splits contractions into separate tokens, and our filter then removes any piece that isn't purely alphabetic, like "'m".
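
A quick look at the tokenizer on its own shows where the pieces go; the "'m" token is what our alphabetic filter throws away later.

# word_tokenize splits the contraction into "I" and "'m" (expected: ['I', "'m", 'tired'])
print(nltk.word_tokenize("I'm tired"))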

Sliding Window of Words

The idea behind word embeddings is that the words around a word (its context) are what give the word its meaning, so we create vectors whose distance to the vectors of words with similar contexts is smaller than their distance to the vectors of words with different contexts. Our data is therefore made up of lists of the words around each word. If, for example, our sentence is:

Fruit flies like a banana.

And we get the context for the word "like" with a half-window (number of tokens on either side of the word) of 2, then our window will be:

["fruit", "flies", "a", "banana"]

GetWindowYield = Tuple[List[str], str]

def get_windows(words: List[str],
                half_window: int) -> Generator[GetWindowYield, None, None]:
    """Generates windows of words

    Args:
     words: cleaned tokens
     half_window: number of words in the half-window

    Yields:
     the next window
    """
    for center_index in range(half_window, len(words) - half_window):
        center_word = words[center_index]
        context_words = (words[(center_index - half_window) : center_index]
                         + words[(center_index + 1):(center_index + half_window + 1)])
        yield context_words, center_word
    return

The first argument of this function, words, is a list of words (or tokens). The second argument, half_window, is the context half-size. As I mentioned, for a given center word, the context words are made of half_window words to the left and half_window words to the right of the center word.
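
As a quick sanity check, here's the fruit-flies sentence from above run through tokenize and get_windows (because of the trailing period it also yields a second window, for the center word "a").

# the window for "like" should be ['fruit', 'flies', 'a', 'banana'], matching the example above
for context, word in get_windows(tokenize("Fruit flies like a banana."), 2):
    print(f" - {context}\t{word}")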

Now let's try it on the words we defined earlier using a window of 2.

for context, word in get_windows(words, 2):
    print(f" - {context}\t{word}")
  • ['able', 'was', '.', 'ere'] 🀑
  • ['was', '🀑', 'ere', '🀑'] .
  • ['🀑', '.', '🀑', 'saw'] ere
  • ['.', 'ere', 'saw', '.'] 🀑
  • ['ere', '🀑', '.', 'rejoice'] saw
  • ['🀑', 'saw', 'rejoice', '.'] .
  • ['saw', '.', '.', '🚬'] rejoice
  • ['.', 'rejoice', '🚬', 'for'] .
  • ['rejoice', '.', 'for', '.'] 🚬

The first example is made up of:

  • the context words "able", "was", ".", "ere",
  • and the center word to be predicted is a clown-face.

Once more with feeling.

for context, word in get_windows(tokenize("My baloney has a first name, it's Gerald."), 2):
    print(f" - {context}\t{word}")
  • ['my', 'baloney', 'a', 'first'] has
  • ['baloney', 'has', 'first', 'name'] a
  • ['has', 'a', 'name', '.'] first
  • ['a', 'first', '.', 'it'] name
  • ['first', 'name', 'it', 'gerald'] .
  • ['name', '.', 'gerald', '.'] it

It's a little more obvious now that, the way we wrote it, the first two tokens and the last two tokens (here gerald and the final period) never get a turn as the center word, so if we wanted them to we'd probably have to pad the token list or figure out some other scheme.
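
As an illustration of the padding idea (the "<pad>" token here is a made-up sentinel, not something we'll use later in this series), we could wrap the token list before windowing so that every token gets a turn as the center word.

# pad both ends with half_window sentinel tokens so the edge words get contexts too
PADDING = ["<pad>"] * 2
padded = PADDING + tokenize("My baloney has a first name, it's Gerald.") + PADDING
for context, word in get_windows(padded, 2):
    print(f" - {context}\t{word}")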

Words To Vectors

The next step is to convert the words to vectors using the contexts and words.

Mapping words to indices and indices to words

The center words will be represented as one-hot vectors (vectors of all zeros except in the cell representing the word), and the vectors that represent context words are also based on one-hot vectors.

To create one-hot word vectors, we can start by mapping each unique word to a unique integer (or index), using a function named get_dict that creates Python dictionaries mapping words to integers and back.

WordToIndex = Dict[str, int]
IndexToWord = Dict[int, str]
GetDictOutput = Tuple[WordToIndex, IndexToWord]

def get_dict(data: List[str]) -> GetDictOutput:
    """Creates index to word mappings

    The index is based on the sorted unique tokens in the data

    Args:
     data: the data you want to pull from

    Returns:
     word_to_index: returns dictionary mapping the word to its index
     index_to_word: returns dictionary mapping the index to its word
    """
    words = sorted(list(set(data)))

    word_to_index = {word: index for index, word in enumerate(words)}
    index_to_word = {index: word for index, word in enumerate(words)}
    return word_to_index, index_to_word

So, let's try it out on our tokenized corpus.

word_to_index, index_to_word = get_dict(words)
print(f" - {word_to_index}")
print(f" - {index_to_word}")
- {'.': 0, 'able': 1, 'ere': 2, 'for': 3, 'rejoice': 4, 'saw': 5, 'was': 6, '🚬': 7, '🀑': 8}
- {0: '.', 1: 'able', 2: 'ere', 3: 'for', 4: 'rejoice', 5: 'saw', 6: 'was', 7: '🚬', 8: '🀑'}

If it isn't obvious, the purpose of the word_to_index dictionary is to convert a word to an integer.

token = "ere"
print(f"Index of the word '{token}':  {word_to_index[token]}")
Index of the word 'ere':  2

And now in the other direction.

print(f"Word which has index 2:  '{index_to_word[2]}'")
Word which has index 2:  'ere'

Finally, we need to know how many unique tokens are in our data set. The unique tokens make up our "vocabulary".

vocabulary_size = len(word_to_index)
print(f"Size of vocabulary: {vocabulary_size}")
Size of vocabulary: 9

One-Hot Word Vectors

Now let's look at creating one-hot vectors for the words. We'll start with one word - "rejoice".

word = "rejoice"
word_index = word_to_index[word]
print(f"Index for '{word}': {word_index}")
Index for 'rejoice': 4

Now we'll create a vector that has as many cells as there are tokens in the vocabulary and populate it with zeros (using numpy.zeros). This is why we needed the vocabulary size.

center_word_vector = numpy.zeros(vocabulary_size)

print(center_word_vector)
assert len(center_word_vector) == vocabulary_size
assert center_word_vector.sum() == 0.0
[0. 0. 0. 0. 0. 0. 0. 0. 0.]

Now, to make the vector represent our word, we need to set the cell that represents the word to 1.

center_word_vector[word_index] = 1

And now we have our one-hot word vector.

print(center_word_vector)

the_ones = numpy.where(center_word_vector == 1)[0]
for item in the_ones:
    print(f"{index_to_word[int(item)]}")
[0. 0. 0. 0. 1. 0. 0. 0. 0.]
rejoice
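
Since a one-hot vector has exactly one cell set to 1, numpy.argmax gives a more direct way to get back to the word.

# argmax returns the index of the single 1 in the one-hot vector
print(index_to_word[int(numpy.argmax(center_word_vector))])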

So, like before, let's put everything into a function.

def word_to_one_hot_vector(word: str,
                           word_to_index: WordToIndex=word_to_index,
                           vocabulary_size: int=vocabulary_size) -> numpy.ndarray:
    """Create a one-hot-vector with a 1 where the word is


    Args:
     word: known token to add to the vector
     word_to_index: dict mapping word: index
     vocabulary_size: how long to make the vector

    Returns:
     vector with zeros everywhere except where the word is
    """
    one_hot_vector = numpy.zeros(vocabulary_size)
    one_hot_vector[word_to_index[word]] = 1
    return one_hot_vector

Now we can check that it worked out.

actual = word_to_one_hot_vector(word)
print(actual)
assert all(actual == center_word_vector)
[0. 0. 0. 0. 1. 0. 0. 0. 0.]

Context Word Vectors

So, now we come to the context words. It may not be quite as obvious what this is, since we said we're going to use one-hot vectors, but each context is made up of multiple words. What we'll do is calculate the average of the one-hot vectors representing the individual words.

As an illustration let's start with one set of context words.

contexts = get_windows(words, 2)

context_words, word = next(contexts)
print(f" - Word: {word}")
print(f" - Context: {context_words}")
  • Word: 🀑
  • Context: ['able', 'was', '.', 'ere']

To build the context vector, we first create a list of the one-hot vectors for each of the words in the context.

context_words_vectors = [word_to_one_hot_vector(word)
                         for word in context_words]
pprint(context_words_vectors)
[array([0., 1., 0., 0., 0., 0., 0., 0., 0.]),
 array([0., 0., 0., 0., 0., 0., 1., 0., 0.]),
 array([1., 0., 0., 0., 0., 0., 0., 0., 0.]),
 array([0., 0., 1., 0., 0., 0., 0., 0., 0.])]

And now we can get the average of these vectors using numpy's mean function, to get the vector representation of the context words.

ROWS, COLUMNS = 0, 1
first = numpy.mean(context_words_vectors, axis=ROWS)
print(first)
[0.25 0.25 0.25 0.   0.   0.   0.25 0.   0.  ]

Once again, let's wrap those separate code blocks back into a single function.

def context_words_to_vector(context_words: List[str],
                            word_to_index: WordToIndex=word_to_index) -> numpy.ndarray:
    """Create vector with the mean of the one-hot-vectors

    Args:
     context_words: words to convert to one-hot vectors
     word_to_index: dict mapping word to index

    Returns:
     array with the mean of the one-hot vectors for the context_words
    """
    vocabulary_size = len(word_to_index)
    context_words_vectors = [
        word_to_one_hot_vector(word, word_to_index, vocabulary_size)
        for word in context_words]
    return numpy.mean(context_words_vectors, axis=ROWS)

second = context_words_to_vector(context_words)
print(second)
assert all(first==second)
[0.25 0.25 0.25 0.   0.   0.   0.25 0.   0.  ]

So, there you go. It isn't really a one-hot vector but is just based on one.
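
One property it does keep, since it's the mean of vectors that each sum to one, is that its entries still add up to one, so it behaves like a distribution over the vocabulary.

# the mean of one-hot vectors still sums to 1
print(second.sum())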

Building the training set

Now we can put them all together and create a training data set for a Continuous Bag of Words model.

print(words)
['able', 'was', '🀑', '.', 'ere', '🀑', 'saw', '.', 'rejoice', '.', '🚬', 'for', '.']
for context_words, center_word in get_windows(words, half_window=2):
    print(f'Context words:  {context_words} -> {context_words_to_vector(context_words)}')
    print(f"Center word:  {center_word} -> "
          f"{word_to_one_hot_vector(center_word)}")
    print()
Context words:  ['able', 'was', '.', 'ere'] -> [0.25 0.25 0.25 0.   0.   0.   0.25 0.   0.  ]
Center word:  🀑 -> [0. 0. 0. 0. 0. 0. 0. 0. 1.]

Context words:  ['was', '🀑', 'ere', '🀑'] -> [0.   0.   0.25 0.   0.   0.   0.25 0.   0.5 ]
Center word:  . -> [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words:  ['🀑', '.', '🀑', 'saw'] -> [0.25 0.   0.   0.   0.   0.25 0.   0.   0.5 ]
Center word:  ere -> [0. 0. 1. 0. 0. 0. 0. 0. 0.]

Context words:  ['.', 'ere', 'saw', '.'] -> [0.5  0.   0.25 0.   0.   0.25 0.   0.   0.  ]
Center word:  🀑 -> [0. 0. 0. 0. 0. 0. 0. 0. 1.]

Context words:  ['ere', '🀑', '.', 'rejoice'] -> [0.25 0.   0.25 0.   0.25 0.   0.   0.   0.25]
Center word:  saw -> [0. 0. 0. 0. 0. 1. 0. 0. 0.]

Context words:  ['🀑', 'saw', 'rejoice', '.'] -> [0.25 0.   0.   0.   0.25 0.25 0.   0.   0.25]
Center word:  . -> [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words:  ['saw', '.', '.', '🚬'] -> [0.5  0.   0.   0.   0.   0.25 0.   0.25 0.  ]
Center word:  rejoice -> [0. 0. 0. 0. 1. 0. 0. 0. 0.]

Context words:  ['.', 'rejoice', '🚬', 'for'] -> [0.25 0.   0.   0.25 0.25 0.   0.   0.25 0.  ]
Center word:  . -> [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words:  ['rejoice', '.', 'for', '.'] -> [0.5  0.   0.   0.25 0.25 0.   0.   0.   0.  ]
Center word:  🚬 -> [0. 0. 0. 0. 0. 0. 0. 1. 0.]

Next we'll create a generator that yields the context vectors along with the matching one-hot center-word vectors.

def get_training_example(
        words: List[str], half_window: int=2,
        word_to_index: WordToIndex=word_to_index) -> Generator[numpy.ndarray,
                                                               None, None]:
    """generates training examples

    Args:
     words: source of words
     half_window: half the window size
     word_to_index: dict with word to index mapping

    Yields:
     array with the mean of the one-hot vectors for the context words
    """
    vocabulary_size = len(word_to_index)
    for context_words, center_word in get_windows(words, half_window):
        yield (context_words_to_vector(context_words, word_to_index),
               word_to_one_hot_vector(center_word, word_to_index,
                                      vocabulary_size))
    return

The output of this function can be iterated on to get successive context word vectors and center word vectors, as demonstrated in the next cell.

for context_words_vector, center_word_vector in get_training_example(words):
    print(f'Context words vector:  {context_words_vector}')
    print(f'Center word vector:  {center_word_vector}')
    print()
Context words vector:  [0.25 0.25 0.25 0.   0.   0.   0.25 0.   0.  ]
Center word vector:  [0. 0. 0. 0. 0. 0. 0. 0. 1.]

Context words vector:  [0.   0.   0.25 0.   0.   0.   0.25 0.   0.5 ]
Center word vector:  [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words vector:  [0.25 0.   0.   0.   0.   0.25 0.   0.   0.5 ]
Center word vector:  [0. 0. 1. 0. 0. 0. 0. 0. 0.]

Context words vector:  [0.5  0.   0.25 0.   0.   0.25 0.   0.   0.  ]
Center word vector:  [0. 0. 0. 0. 0. 0. 0. 0. 1.]

Context words vector:  [0.25 0.   0.25 0.   0.25 0.   0.   0.   0.25]
Center word vector:  [0. 0. 0. 0. 0. 1. 0. 0. 0.]

Context words vector:  [0.25 0.   0.   0.   0.25 0.25 0.   0.   0.25]
Center word vector:  [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words vector:  [0.5  0.   0.   0.   0.   0.25 0.   0.25 0.  ]
Center word vector:  [0. 0. 0. 0. 1. 0. 0. 0. 0.]

Context words vector:  [0.25 0.   0.   0.25 0.25 0.   0.   0.25 0.  ]
Center word vector:  [1. 0. 0. 0. 0. 0. 0. 0. 0.]

Context words vector:  [0.5  0.   0.   0.25 0.25 0.   0.   0.   0.  ]
Center word vector:  [0. 0. 0. 0. 0. 0. 0. 1. 0.]
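
If we eventually want the training set as arrays rather than a generator, one minimal sketch (the names contexts and centers are just for illustration) is to stack the generated vectors, one row per training example.

# collect the generator output into two arrays: one row per training example
pairs = list(get_training_example(words))
contexts = numpy.stack([context for context, center in pairs])
centers = numpy.stack([center for context, center in pairs])
print(contexts.shape, centers.shape)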

End

Now that we know how to create the training set, we can move on to the CBOW model itself, which will be covered in the next post. This is part of a series of posts looking at some preliminaries for creating word embeddings. There is a table-of-contents post here.