Chapter 4/Exercise 1. Compute Frequencies from the Movie Review Dataset and Keep the Most Frequent Terms

Read the Large Movie Review Dataset [1] (sampled), available at the path Thedata/MoviereviewDataset_sampled.table. The dataset contains reviews labeled as positive or negative, as well as unlabeled reviews. Use the Strings to Document node to transform the strings into documents. Tag the words in the documents and pre-process them: filter numbers, erase punctuation, filter stop words, convert the words to lower case, apply the Snowball stemmer, and use the Tag Filter node to keep only nouns and verbs. Create a bag of words for the tagged terms. Continue the analysis by transforming the terms into strings with the Term To String node and by filtering the bag of words to keep only the terms that occur at least 5 times in the documents. Compute the TF frequencies for the terms and bin them using sample quantiles (0.0, 0.25, 0.5, 0.75, 1.0). Then keep only Bin4 with the Row Filter node and continue filtering by keeping terms with a TF lower bound > 0.2. Finally, group the most frequent terms with the GroupBy node.

[1] Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. (2011). Learning Word Vectors for Sentiment Analysis. The 49th Annual Meeting of the Association for Computational Linguistics (ACL 2011).
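The exercise itself is solved with KNIME nodes, but the core logic of the pipeline can be sketched in plain Python to make each step concrete. This is a simplified stand-in, not the KNIME implementation: it uses a tiny hypothetical corpus and stop-word list, skips POS tagging and Snowball stemming, and computes document-level relative TF with quartile binning.

```python
import string
from collections import Counter
from statistics import quantiles

# Hypothetical sample documents standing in for the sampled movie reviews
docs = [
    "The movie was great, great acting and a great plot!",
    "A terrible film. The acting was bad and the plot was worse.",
]

# Tiny illustrative stop-word list (KNIME's Stop Word Filter uses a full list)
STOP_WORDS = {"the", "a", "an", "and", "was", "is", "it"}

def preprocess(text):
    """Rough equivalent of the pre-processing chain: lower-case,
    erase punctuation, filter numbers, filter stop words.
    (No POS tagging or stemming here -- a deliberate simplification.)"""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    tokens = [t for t in text.split() if not t.isdigit()]
    return [t for t in tokens if t not in STOP_WORDS]

# Bag of words: term counts per document
bows = [Counter(preprocess(d)) for d in docs]

# Relative term frequency (TF) per document
tfs = [{term: n / sum(bow.values()) for term, n in bow.items()}
       for bow in bows]

# Bin all TF values by sample quartiles (cut points at 0.25, 0.5, 0.75)
all_tf = sorted(v for tf in tfs for v in tf.values())
cuts = quantiles(all_tf, n=4)  # three cut points -> four bins

def bin_of(v):
    """Assign a TF value to Bin1..Bin4 based on the quartile cut points."""
    return sum(v > c for c in cuts) + 1

# Keep only terms in the top bin (Bin4) whose TF exceeds 0.2,
# then group the surviving terms (GroupBy equivalent)
frequent = sorted({term for tf in tfs for term, v in tf.items()
                   if bin_of(v) == 4 and v > 0.2})
print(frequent)
```

With the two toy reviews above, only "great" survives the Bin4 and TF > 0.2 filters; on the real dataset, the same logic isolates the handful of nouns and verbs that dominate individual reviews.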


This is a companion discussion topic for the original entry at https://kni.me/w/0OZr9sGuzeGC4E6W