I recently encountered the problem of having to extract the main body content from a series of web pages, and to discard all of the ‘boilerplate’ — i.e. header, menus, footer, and advertising. The application was performing statistical comparisons between web pages, and although it was producing the correct answers for my test data, ...
The OpenLayers Cookbook by Antonio Santiago Perez is a good description of the more sophisticated functionality supported by the OpenLayers open source library.
If you haven’t heard of it yet, the Raspberry Pi is a $25/$35 barebones computer intended to excite kids with programming and hardware projects. It is very much modeled on the British experience of home computing in the early 1980s and even has a “Model A” and a “Model B” in homage to the BBC ...
Previously, I showed you how to segment words and sentences whilst also taking into account full stops (periods) and abbreviations. The problem with that implementation is that it is easily confused by contiguous punctuation characters. For example, “).” is not recognized as the end of a sentence. This article shows you how to correct this.
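The fix described above can be sketched with a regular expression that treats a terminator optionally followed by closing quotes or brackets as the end of a sentence, so sequences like “).” still match. This is a minimal illustration of the idea, not the article’s actual implementation:

```python
import re

# Hypothetical sketch: one or more terminators (. ! ?) may be followed
# by closing quotes/brackets and must be trailed by whitespace, so
# contiguous punctuation such as ")." or '?"' still ends a sentence.
SENTENCE_END = re.compile(r'[.!?]+[\'")\]]*\s+')

def split_sentences(text):
    """Split text at sentence boundaries, keeping the punctuation."""
    sentences = []
    start = 0
    for m in SENTENCE_END.finditer(text):
        sentences.append(text[start:m.end()].strip())
        start = m.end()
    if start < len(text):
        sentences.append(text[start:].strip())
    return sentences
```

For example, `split_sentences('He left (finally). She stayed.')` yields two sentences even though the first one ends in “).”.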
Previously, I showed you how to create N-Gram frequency tables from large text datasets. Unfortunately, when used on very large datasets such as the English language Wikipedia and Gutenberg corpora, memory constraints restricted these scripts to unigrams. Here, I show you how to use the BerkeleyDB database to create N-gram tables of these large datasets.
NLTK 2.0 has officially been released as “v2.0.1”, and can be downloaded here: http://pypi.python.org/pypi/nltk/2.0.1 NLTK 2.0 was previously released as a “Release Candidate” – this is the first official release.
“Foundations of Statistical Natural Language Processing” by Christopher D. Manning and Hinrich Schutze has a relatively old publication date of 1999, but do not let this deter you from reading this useful book. It continues to be an important foundation text in a fast-moving field.
The Word Frequency Table scripts can be easily expanded to calculate N-Gram frequency tables. This post explains how.
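The expansion is straightforward: the same counter that tallies individual words can tally sliding windows of N words. A minimal sketch, with illustrative names only:

```python
from collections import Counter

# Hypothetical sketch: a word frequency table is just the n=1 case of
# counting every window of n consecutive words as a tuple key.
def ngram_counts(words, n):
    return Counter(tuple(words[i:i + n]) for i in range(len(words) - n + 1))
```

For instance, `ngram_counts('to be or not to be'.split(), 2)` counts the bigram `('to', 'be')` twice.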
As well as using the Gutenberg Corpus, it is possible to create a word frequency table for the English text of the Wikipedia encyclopedia.
Following on from the previous article about scanning text files for word statistics, I shall extend this to use genuinely large corpora. First we shall use this script to create statistics for the entire Gutenberg English language corpus. Next I shall do the same with the entire English language Wikipedia.
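The key to handling corpora of this size is to stream the text line by line rather than loading whole files into memory. A minimal sketch of that word-statistics loop, assuming a simple lowercase tokenizer (the real script’s tokenization rules may differ):

```python
import re
from collections import Counter

# Hypothetical sketch: consume any iterable of lines (e.g. an open file),
# so Gutenberg-sized corpora never need to fit in memory at once.
def word_frequencies(lines):
    counts = Counter()
    for line in lines:
        # Lowercase and keep runs of letters/apostrophes as words.
        counts.update(re.findall(r"[a-z']+", line.lower()))
    return counts
```

Because the function accepts any iterable of lines, the same code works on a single test string, one Gutenberg text, or a concatenated dump of the whole corpus.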