Simple Document Classification using Word Vectors

This example shows how to transform a document into a vector using a word vector model and how to use these vectors for classification. First, we read some test and train documents which are divided into three topics. We use the train dataset to train a Doc2Vec model, using the topic as the class attribute. The Word Vector Learner then creates a vector for each word and for each label. Next, we use a Vocabulary Extractor to extract the words and vectors from the model. On its second output port, the Vocabulary Extractor outputs the vectors for each label, which we can then use as a kind of 'cluster center' for classification. The next step is to convert our test documents into vectors using the word vector model. This can be done with the Word Vector Apply node, which takes in documents and replaces every word with its corresponding word vector, if present in the word vector model. We additionally configure the node to calculate the mean of all vectors, so that each test document is represented by a single vector. Finally, we can use a K Nearest Neighbor node with our previously created 'cluster centers'. In the context of word vectors, the cosine distance is often used.

Workflow Requirements:
- KNIME Analytics Platform 3.4.0
- KNIME Deeplearning4J Integration
- KNIME Deeplearning4J Integration Text Processing Extension
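Outside of KNIME, the core idea can be sketched in a few lines of Python. This is a minimal illustration, not the workflow itself: the toy word vectors and label vectors below are hypothetical stand-ins for what a trained Doc2Vec model and the Vocabulary Extractor's second output port would provide. It shows the two key steps, averaging word vectors into a single document vector and assigning the label whose 'cluster center' has the smallest cosine distance.

```python
import numpy as np

# Hypothetical word vectors, standing in for a trained Doc2Vec/word vector model.
word_vectors = {
    "goal":   np.array([0.9, 0.1, 0.0]),
    "match":  np.array([0.8, 0.2, 0.1]),
    "stock":  np.array([0.1, 0.9, 0.0]),
    "market": np.array([0.0, 0.8, 0.2]),
}

# Hypothetical label vectors, the 'cluster centers' a Vocabulary Extractor
# would output for each class label.
label_vectors = {
    "sports":  np.array([0.85, 0.15, 0.05]),
    "finance": np.array([0.05, 0.85, 0.10]),
}

def doc_vector(tokens):
    """Replace each known word with its vector and take the mean,
    mirroring the Word Vector Apply node's mean-vector option."""
    vecs = [word_vectors[t] for t in tokens if t in word_vectors]
    return np.mean(vecs, axis=0)

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(tokens):
    """Assign the label whose center is nearest in cosine distance (1-NN)."""
    v = doc_vector(tokens)
    return min(label_vectors, key=lambda lbl: cosine_distance(v, label_vectors[lbl]))

print(classify(["goal", "match"]))    # -> sports
print(classify(["stock", "market"]))  # -> finance
```

With realistic data, the word vectors would come from a trained model (e.g. via gensim) rather than a hand-written dictionary, but the averaging and cosine-based nearest-center assignment work the same way.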

This is a companion discussion topic for the original entry at