Calculating Word Statistics from the Gutenberg Corpus

Following on from the previous article about scanning text files for word statistics, I shall extend the approach to large real-world corpora. First I shall use the script to create statistics for the entire Gutenberg English-language corpus; next I shall do the same with the entire English-language Wikipedia.
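
As a starting point, here is a minimal sketch of the kind of statistics pass involved, using NLTK's bundled Gutenberg sample as a stand-in for the full Project Gutenberg corpus (the corpus reader and the file-by-file iteration are the same; only the scale differs):

import nltk
from collections import Counter
from nltk.corpus import gutenberg

nltk.download("gutenberg", quiet=True)  # fetch the small sample corpus if needed

counts = Counter()
for fileid in gutenberg.fileids():
    # gutenberg.words() yields pre-tokenised words for each file
    counts.update(w.lower() for w in gutenberg.words(fileid) if w.isalpha())

total = sum(counts.values())
for word, n in counts.most_common(10):
    print(word, n, n / total)  # word, raw count, relative frequency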

Calculating Word Frequency Tables

Now that we can segment words and sentences, it is possible to produce word and tuple (n-gram) frequency tables. Here I show you how to create a word frequency table for a large collection of text files.
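
A hedged sketch of one way to do this follows; the directory layout, file extension, and output filename are illustrative assumptions, not details from the original script:

import os
import nltk
from collections import Counter

nltk.download("punkt", quiet=True)  # sentence/word tokeniser models

def frequency_tables(corpus_dir):
    """Count word and bigram (tuple) frequencies over a directory of .txt files."""
    words, bigrams = Counter(), Counter()
    for root, _dirs, files in os.walk(corpus_dir):
        for name in files:
            if not name.endswith(".txt"):
                continue
            path = os.path.join(root, name)
            with open(path, encoding="utf-8", errors="ignore") as f:
                for sent in nltk.sent_tokenize(f.read()):
                    tokens = [w.lower() for w in nltk.word_tokenize(sent) if w.isalpha()]
                    words.update(tokens)
                    bigrams.update(nltk.bigrams(tokens))  # 2-tuples of adjacent words
    return words, bigrams

words, bigrams = frequency_tables("corpus/")  # hypothetical corpus directory
with open("wordfreq.tsv", "w", encoding="utf-8") as out:
    for word, n in words.most_common():
        out.write("%s\t%d\n" % (word, n))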

Extracting Noun Phrases from Parsed Trees

Following on from my previous post about NLTK Trees, here is a short Python function to extract phrases from an NLTK Tree structure.
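
The post's exact function is not reproduced here, but a short sketch of the idea, written against the NLTK 3 Tree API (tree.label() rather than the older tree.node), might look like this; the parse string is purely illustrative:

from nltk.tree import Tree

def extract_phrases(tree, label="NP"):
    """Return the leaf text of every subtree carrying the given label."""
    return [" ".join(subtree.leaves())
            for subtree in tree.subtrees(lambda t: t.label() == label)]

parse = Tree.fromstring(
    "(S (NP (DT the) (NN cat)) (VP (VBD sat) (PP (IN on) (NP (DT the) (NN mat)))))"
)
print(extract_phrases(parse))  # ['the cat', 'the mat']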

Support for SciPy in NLTK’s Maximum Entropy methods

Recently I have been working with the Maximum Entropy classifiers in NLTK. Maximum entropy models are similar to the well-known Naive Bayes models, but they allow for dependencies between the features – i.e. they are not “naive”. SciPy has had some problems with its Maximum Entropy code, so v0.8 must be used; v0.9 crashes ...
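
For reference, here is a minimal training sketch with NLTK's MaxentClassifier. It uses the pure-Python 'iis' algorithm, which avoids the SciPy dependency entirely; the toy feature extractor and training data are assumptions for illustration:

from nltk.classify import MaxentClassifier

def features(word):
    # toy feature extractor: just the word's final letter
    return {"last_letter": word[-1]}

train = [(features(w), label) for w, label in
         [("running", "verb"), ("jumped", "verb"),
          ("table", "noun"), ("garden", "noun")]]

# 'iis' is implemented in pure Python inside NLTK, so no SciPy is involved;
# older NLTK releases could instead delegate to SciPy's maxentropy module,
# which is where the version trouble described above comes from.
classifier = MaxentClassifier.train(train, algorithm="iis", max_iter=10)
print(classifier.classify(features("walking")))  # -> 'verb'
classifier.show_most_informative_features(5)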