Posts about NLP
- FastAI Quickstart: Movie Review Sentiment
- Neural Machine Translation: Helper Functions
- Neural Machine Translation: Testing the Model
- Neural Machine Translation: Training the Model
- Neural Machine Translation: The Attention Model
- Neural Machine Translation: The Data
- Neural Machine Translation
- Stack Semantics
- BLEU Score
- Siamese Networks: New Questions
- Siamese Networks: Evaluating the Model
- Siamese Networks: Training the Model
- Siamese Networks: Hard Negative Mining
- Siamese Networks: Defining the Model
- Siamese Networks: The Data Generator
- Siamese Networks: The Data
- Siamese Networks: Duplicate Questions
- Evaluating a Siamese Model
- Modified Triplet Loss
- Siamese Networks With Trax
- NER: Testing the Model
- NER: Evaluating the Model
- NER: Training the Model
- NER: Building the Model
- NER: Data
- Named Entity Recognition
- NER: Pre-Processing the Data
- RNNs and Vanishing Gradients
- Deep N-Grams: Batch Generation
- Deep N-Grams: Generating Sentences
- Deep N-Grams: Evaluating the Model
- Deep N-Grams: Training the Model
- Deep N-Grams: Creating the Model
- Deep N-Grams: Loading the Data
- Deep N-Grams
- Trax GRU Model
- Vanilla RNNs and GRUs
- JAX, NumPy, and Perplexity
- Hidden State Activation
- Sentiment Analysis: Testing the Model
- Sentiment Analysis: Training the Model
- Sentiment Analysis: Defining the Model
- Sentiment Analysis: Pre-processing the Data
- Sentiment Analysis: Deep Learning Model
- Data Generators
- Introducing Trax
- Word Embeddings: Visualizing the Embeddings
- Word Embeddings: Training the Model
- Word Embeddings: Build a Model
- Extracting Word Embeddings
- Training the CBOW Model
- Introducing the CBOW Model
- Word Embeddings: Data Preparation
- Word Embeddings with the CBOW Model
- Auto-Complete: Building the Auto-Complete System
- Auto-Complete: Perplexity
- Auto-Complete: The N-Gram Model
- Auto-Complete: Pre-Process the Data II
- Auto-Complete: Pre-Process the Data I
- Auto-Complete
- N-Grams: Out-of-Vocabulary Words
- N-Gram: Building the Language Model
- N-Gram Pre-Processing
- POS Tagging: Checking the Accuracy of the Model
- Parts-of-Speech: Viterbi Algorithm
- Parts-of-Speech Tagging: Hidden Markov Model
- Parts-of-Speech Tagging: Most Frequent Class Baseline
- Parts-of-Speech Tagging: Training
- Parts-of-Speech Tagging: The Data
- Parts-of-Speech Tagging
- Parts-of-Speech Tagging: NumPy
- Parts-of-Speech Tagging: Creating a Vocabulary
- Autocorrect: Minimum Edit Distance Backtrace
- Autocorrect: Minimum Edit Distance
- Autocorrect System: Combining the Edits
- Autocorrect System: Edits
- Autocorrect System: Data Preprocessing
- Autocorrect: The System
- Autocorrect: Finding Candidates Using Edits
- Autocorrect: Building the Vocabulary
- Locality-Sensitive Hashing (LSH) for Machine Translation
- Implementing k-Nearest Neighbors for Machine Translation
- Training the Machine Translation Transformation Matrix
- Machine Translation
- Building the Machine Translation Training Set
- Loading the English and French Word Embeddings
- Approximate kNN for Machine Translation
- Hash Tables
- PCA Dimensionality Reduction and Word Vectors
- PCA Exploration
- Word Embeddings
- Tweet Classifier Class
- Visualizing Naive Bayes
- Class-Based Naive Bayes Tweet Sentiment Classifier
- Implementing a Naive Bayes Twitter Sentiment Classifier
- Using Naive Bayes to Classify Tweets by Sentiment
- Text Data Management and Analysis
- Sentiment Analysis In Social Networks
- Speech and Language Processing
- The Tweet Vectorizer
- Implementing Logistic Regression for Tweet Sentiment Analysis
- Twitter Word Frequencies
- Twitter Preprocessing With NLTK
- Hand-rolling a CountVectorizer
- Topic Modeling With Matrix Decomposition
- NLP Classification Exercise
- Embeddings from Scratch
- IMDB GRU With Tokenization
- He Used Sarcasm
- Multi-Layer LSTM
- IMDB Reviews Tensorflow Dataset
- BBC News Classification
- Cleaning the BBC News Archive