Calculating N-Gram Frequency Tables

The Word Frequency Table scripts can be easily expanded to calculate N-Gram frequency tables. This post explains how.
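
As a rough sketch of the idea (using nltk.util.ngrams and collections.Counter rather than the actual scripts from the earlier posts):

    from collections import Counter
    from nltk.util import ngrams
    from nltk.tokenize import word_tokenize   # requires the 'punkt' data package

    def ngram_frequencies(text, n=2):
        """Count the n-grams in a piece of text."""
        tokens = word_tokenize(text.lower())
        return Counter(ngrams(tokens, n))

    bigrams = ngram_frequencies("the cat sat on the mat and the cat slept")
    for gram, count in bigrams.most_common(3):
        print(gram, count)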

Calculating Word and N-Gram Statistics from a Wikipedia Corpus

As well as using the Gutenberg Corpus, it is possible to create a word frequency table for the English text of the Wikipedia encyclopedia.

Calculating Word Statistics from the Gutenberg Corpus

Following on from the previous article about scanning text files for word statistics, I shall extend the script to genuinely large corpora. First I shall use it to generate statistics for the entire Gutenberg English-language corpus; then I shall do the same with the entire English-language Wikipedia.

Calculating Word Frequency Tables

Now that we can segment words and sentences, it is possible to produce word and tuple frequency tables. Here I show you how to create a word frequency table for a large collection of text files.
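
The full post walks through the real script; as a minimal sketch, assuming a folder of plain-text files and a simple regular-expression tokenizer:

    from collections import Counter
    from pathlib import Path
    import re

    def word_frequencies(folder):
        """Build a combined word frequency table for all .txt files in a folder."""
        counts = Counter()
        for path in Path(folder).glob("*.txt"):
            text = path.read_text(encoding="utf-8", errors="ignore")
            counts.update(re.findall(r"[a-z']+", text.lower()))
        return counts

    # freq = word_frequencies("corpus")   # 'corpus' is a hypothetical folder
    # print(freq.most_common(20))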

Segmenting Words and Sentences

Even simple NLP tasks such as tokenizing words and segmenting sentences can have their complexities. Punctuation characters could be used to segment sentences, but this requires the punctuation marks to be treated as separate tokens. A naive implementation would then split abbreviations such as "Dr." into separate words and sentences. This post uses a classification approach to create ...
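
NLTK's Punkt tokenizer is one example of such a trained approach; a quick illustration (not the classifier built in the post itself):

    import nltk
    # nltk.download("punkt")   # one-off download of the trained Punkt model

    from nltk.tokenize import sent_tokenize, word_tokenize

    text = "Dr. Smith arrived at 9 a.m. He brought the results."
    print(sent_tokenize(text))
    # typically: ['Dr. Smith arrived at 9 a.m.', 'He brought the results.']

    print(word_tokenize("Dr. Smith arrived."))
    # typically: ['Dr.', 'Smith', 'arrived', '.'] -- the abbreviation survives,
    # while the sentence-final period becomes its own token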

Extracting Noun Phrases from Parsed Trees

Following on from my previous post about NLTK Trees, here is a short Python function to extract phrases from an NLTK Tree structure.
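
In the same spirit (though not necessarily the post's exact function), subtrees can be filtered by label:

    from nltk import Tree

    def extract_phrases(tree, label="NP"):
        """Yield the word sequence of every subtree carrying the given label."""
        for subtree in tree.subtrees(lambda t: t.label() == label):
            yield " ".join(subtree.leaves())

    parsed = Tree.fromstring(
        "(S (NP (DT the) (NN dog)) (VP (VBD chased) (NP (DT a) (NN cat))))"
    )
    print(list(extract_phrases(parsed)))   # ['the dog', 'a cat']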

NLTK Trees

A number of NLTK functions work with Tree objects. For example, part-of-speech tagging and chunking classifiers naturally return trees, and sentence manipulation functions also work with them. Although Natural Language Processing with Python (Bird et al.) includes a couple of pages about NLTK’s Tree module, coverage is generally sparse. The online documentation actually contains ...
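
For orientation, a few of the basic operations (assuming NLTK 3's API; older releases used Tree.node instead of Tree.label()):

    from nltk import Tree

    # Trees can be built directly from bracketed (Penn Treebank style) notation
    t = Tree.fromstring("(S (NP I) (VP (V saw) (NP him)))")

    print(t.label())    # 'S'
    print(t.leaves())   # ['I', 'saw', 'him']
    print(t[1])         # the VP subtree: (VP (V saw) (NP him))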

Part of Speech Tags

A frequently asked question is “What do the Part of Speech tags (VB, JJ, etc) mean?” The bottom line is that these tags mean whatever they meant in your original training data. You are free to invent your own tags in your training data, as long as you are consistent in their usage. Training data ...
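
For example, NLTK's default English tagger was trained on Penn Treebank data, so it emits Penn Treebank tags:

    import nltk
    # nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")
    # (data package names for recent NLTK versions)

    tokens = nltk.word_tokenize("The quick brown fox jumps")
    print(nltk.pos_tag(tokens))
    # e.g. [('The', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'),
    #       ('fox', 'NN'), ('jumps', 'VBZ')]
    # DT = determiner, JJ = adjective, NN = noun, VBZ = 3rd-person singular verb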

Support for SciPy in NLTK’s Maximum Entropy methods

Recently I have been working with the Maximum Entropy classifiers in NLTK. Maximum entropy models are similar to the well-known Naive Bayes models, but they do not assume independence between the features – i.e. they are not “naive”. SciPy has had some problems with its Maximum Entropy code, and v0.8 must be used. v0.9 crashes ...
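
A toy training run using NLTK's own pure-Python trainers, which need no SciPy at all (the feature names here are invented for illustration):

    from nltk.classify import MaxentClassifier

    # Feature dictionaries paired with labels
    train = [
        ({"contains_money": True,  "length": "long"},  "spam"),
        ({"contains_money": False, "length": "short"}, "ham"),
        ({"contains_money": True,  "length": "short"}, "spam"),
        ({"contains_money": False, "length": "long"},  "ham"),
    ]

    # 'iis' and 'gis' are NLTK's built-in (non-SciPy) training algorithms
    classifier = MaxentClassifier.train(train, algorithm="iis",
                                        trace=0, max_iter=10)
    print(classifier.classify({"contains_money": True, "length": "long"}))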

Voting Machines in the Florida 2000 Election

This example uses Caliper® Maptitude® to analyze the Florida results of the 2000 US Presidential Election. During that election, the results for Florida came under close scrutiny and dispute, with the final outcome being decided by the courts. Amongst the accusations was the charge that many voters in the county of ...