Word Similarity: A Website Interface for Word2Vec Models in 89 Languages

I launched WordSimilarity in April; it focuses on computing the similarity between two words with word2vec models trained on Wikipedia data. The website has an English Word2Vec model for English word similarity: Exploiting Wikipedia Word Similarity by … Continue reading →
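
For readers who want to reproduce this kind of query locally, here is a minimal sketch of what the site computes, assuming a gensim word2vec model trained on a Wikipedia dump; the model file name is hypothetical:

```python
from gensim.models import Word2Vec

# Load a previously trained model; the file name is hypothetical.
model = Word2Vec.load("enwiki_word2vec.model")

# Cosine similarity between two in-vocabulary words (gensim 4.x API).
print(model.wv.similarity("car", "automobile"))

# The ten nearest neighbours of a word in the embedding space.
for word, score in model.wv.most_similar("language", topn=10):
    print(word, score)
```

In gensim versions before 4.0, the same calls are also available directly on the model object (model.similarity, model.most_similar).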

Updated Korean, Russian, French, German, and Spanish Wikipedia Word2Vec Models for Word Similarity

I launched WordSimilarity in April; it focuses on computing the similarity between two words with word2vec models trained on Wikipedia data. The website has an English Word2Vec model for English word similarity: Exploiting Wikipedia Word Similarity by … Continue reading →

Training a Japanese Wikipedia Word2Vec Model by Gensim and Mecab

After “Training a Chinese Wikipedia Word2Vec Model by Gensim and Jieba”, we continue with “Training a Japanese Wikipedia Word2Vec Model by Gensim and Mecab”, using the related “Wikipedia_Word2vec” scripts. As before, download the latest Japanese Wikipedia dump first: https://dumps.wikimedia.org/jawiki/latest/jawiki-latest-pages-articles.xml.bz2. You can use … Continue reading →
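
Because Japanese is written without spaces, the text has to be segmented before training, which is what MeCab does in this pipeline. A minimal sketch under the assumption that gensim and mecab-python3 are installed; the file names and parameters are illustrative, not the post's exact scripts:

```python
import MeCab
from gensim.corpora import WikiCorpus
from gensim.models import Word2Vec

tagger = MeCab.Tagger("-Owakati")  # wakati mode: space-separated tokens

# Extract article text from the dump; relax the ASCII-oriented defaults
# (token length limits, lowercasing) so CJK text survives extraction.
wiki = WikiCorpus("jawiki-latest-pages-articles.xml.bz2", dictionary={},
                  token_min_len=1, token_max_len=100, lower=False)
with open("jawiki_segmented.txt", "w", encoding="utf-8") as out:
    for words in wiki.get_texts():
        # Re-segment the raw runs of Japanese text with MeCab.
        out.write(tagger.parse(" ".join(words)).strip() + "\n")

# Train word2vec on the segmented, one-article-per-line file.
model = Word2Vec(corpus_file="jawiki_segmented.txt",
                 vector_size=200, window=5, min_count=5, workers=4)
model.save("jawiki_word2vec.model")
```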

Training a Chinese Wikipedia Word2Vec Model by Gensim and Jieba

We have posted two methods for training a word2vec model on English Wikipedia data: “Training Word2Vec Model on English Wikipedia by Gensim” and “Exploiting Wikipedia Word Similarity by Word2Vec”. Based on that pipeline and the related Wikipedia_Word2vec scripts, we can train … Continue reading →
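
The Chinese pipeline has the same shape: Chinese text must first be segmented into words, and jieba plays the role MeCab plays for Japanese. A minimal sketch assuming gensim and jieba are installed; file names and parameters are illustrative (the post's actual scripts live in the Wikipedia_Word2vec repository):

```python
import jieba
from gensim.corpora import WikiCorpus
from gensim.models import Word2Vec

# Extract article text from the dump, keeping CJK tokens intact.
wiki = WikiCorpus("zhwiki-latest-pages-articles.xml.bz2", dictionary={},
                  token_min_len=1, token_max_len=100, lower=False)
with open("zhwiki_segmented.txt", "w", encoding="utf-8") as out:
    for words in wiki.get_texts():
        # Rejoin the raw runs and let jieba segment them into words.
        out.write(" ".join(jieba.cut("".join(words))) + "\n")

model = Word2Vec(corpus_file="zhwiki_segmented.txt",
                 vector_size=200, window=5, min_count=5, workers=4)
model.save("zhwiki_word2vec.model")
```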

Dive Into NLTK, Part XI: From Word2Vec to WordNet

This is the eleventh article in the series “Dive Into NLTK”; here is an index of all the articles in the series published to date: Part I: Getting Started with NLTK, Part II: Sentence Tokenize and Word … Continue reading →
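
Where word2vec derives similarity from corpus statistics, WordNet offers taxonomy-based similarity measures through NLTK, which is the bridge this part of the series covers. A minimal sketch of the WordNet side, assuming the corpus has been fetched with nltk.download("wordnet"):

```python
from nltk.corpus import wordnet

# Compare the first noun senses of "dog" and "cat".
dog = wordnet.synset("dog.n.01")
cat = wordnet.synset("cat.n.01")

# Similarity based on the shortest path in the hypernym hierarchy.
print(dog.path_similarity(cat))

# Wu-Palmer similarity, based on the depth of the lowest common hypernym.
print(dog.wup_similarity(cat))
```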

Exploiting Wikipedia Word Similarity by Word2Vec

We wrote “Training Word2Vec Model on English Wikipedia by Gensim” before, and it got a lot of attention. Recently, I reviewed Word2Vec-related materials again and tested a new method to process the English Wikipedia data and train Word2Vec … Continue reading →
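
Whatever preprocessing method the post settles on, the training and querying steps look roughly like this; a sketch assuming a one-article-per-line plain-text file produced from the dump (the file name is hypothetical):

```python
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Stream the preprocessed Wikipedia text line by line.
sentences = LineSentence("enwiki_preprocessed.txt")
model = Word2Vec(sentences, vector_size=200, window=5, min_count=5, workers=4)

# Word similarity, plus the classic analogy query.
print(model.wv.similarity("king", "queen"))
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=5))
```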

Dive Into NLTK, Part X: Play with Word2Vec Models based on NLTK Corpus

This is the tenth article in the series “Dive Into NLTK”; here is an index of all the articles in the series published to date: Part I: Getting Started with NLTK, Part II: Sentence Tokenize and Word … Continue reading →
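
Training on an NLTK corpus is a quick way to experiment before committing to a full Wikipedia run. A minimal sketch on the Brown corpus, assuming nltk.download("brown") has been run; the hyperparameters are illustrative:

```python
from nltk.corpus import brown
from gensim.models import Word2Vec

# brown.sents() yields tokenized sentences, exactly what gensim expects.
model = Word2Vec(brown.sents(), vector_size=100, window=5, min_count=5, workers=4)
print(model.wv.most_similar("money", topn=5))
```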

Training Word2Vec Model on English Wikipedia by Gensim

After learning word2vec and GloVe, a natural next step is to train a related model on a larger corpus, and English Wikipedia is an ideal choice for this task. After googling related keywords like “word2vec wikipedia”, “gensim … Continue reading →
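
The step that comes first in all of these pipelines is turning the compressed XML dump into plain text. A minimal sketch with gensim's WikiCorpus; the file names are illustrative:

```python
from gensim.corpora import WikiCorpus

# Stream article text out of the compressed dump; passing dictionary={}
# skips the vocabulary-building pass, which extraction does not need.
wiki = WikiCorpus("enwiki-latest-pages-articles.xml.bz2", dictionary={})
with open("enwiki_text.txt", "w", encoding="utf-8") as out:
    for i, words in enumerate(wiki.get_texts(), start=1):
        out.write(" ".join(words) + "\n")
        if i % 10000 == 0:
            print(f"processed {i} articles")
```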

Getting Started with Word2Vec and GloVe in Python

We have talked about “Getting Started with Word2Vec and GloVe”, but how do you use them in a pure Python environment? Here we will show you how to use word2vec and GloVe in Python. Word2Vec in Python: The great topic modeling … Continue reading →
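
A minimal taste of each in pure Python: gensim for word2vec, and the glove-python package (maciejkula/glove-python) for GloVe. The sentences below are toy data, and the glove-python calls follow its README, whose API may have changed since:

```python
from gensim.models import Word2Vec
from glove import Corpus, Glove  # pip install glove_python

sentences = [["human", "interface", "computer"],
             ["survey", "user", "computer", "system", "response", "time"],
             ["graph", "minors", "trees"]]

# word2vec with gensim (min_count=1 so the toy data is not filtered out).
w2v = Word2Vec(sentences, vector_size=50, window=5, min_count=1)
print(w2v.wv.most_similar("computer", topn=3))

# GloVe: build the co-occurrence matrix, then fit the embeddings.
corpus = Corpus()
corpus.fit(sentences, window=5)
glove = Glove(no_components=50, learning_rate=0.05)
glove.fit(corpus.matrix, epochs=10, no_threads=1)
glove.add_dictionary(corpus.dictionary)
print(glove.most_similar("computer", number=3))
```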

Getting Started with Word2Vec and GloVe

Word2Vec and GloVe are two popular recent word embedding algorithms that construct vector representations of words. These vector representations can then be used to compute the semantic similarity between words mathematically. The C/C++ tools for … Continue reading →
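
That "similarity from vector representations" point boils down to the cosine of the angle between two embedding vectors; a toy illustration with numpy, using made-up 3-dimensional vectors (real models use hundreds of dimensions):

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up vectors purely for demonstration.
king = np.array([0.5, 0.7, 0.1])
queen = np.array([0.45, 0.72, 0.15])
print(cosine_similarity(king, queen))
```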