Ling/CSE 472: Assignment 3: N-grams and Corpora

Due May 8th, by 11:59pm (NB: No questions will be answered after 5pm)


Part 1: Corpora

This is a Treasure Hunt -- it is designed to introduce you to a range of linguistic corpora and to give you an idea of what is available to you on our server.

Please start at the UW Linguistics Treehouse Wiki: http://depts.washington.edu/uwcl/twiki/bin/view.cgi/Main/WebHome

Begin by finding and reading the Corpus Usage Guidelines. Note: Anyone with a Patas account is a lab member. Then explore the information in the CompLing Database and answer the following questions about corpora. Some of these questions will be easier than others; many will require only a single-word answer, while others may require a few sentences. Be sure to use your own words in your short answers -- you will not get credit for copied answers! (For certain types of answers, e.g., yes/no, numbers, or paths, this admonition obviously does not apply.)

Note: You may need to follow web links from the database entry pages to get some of the information. For example, for data from the LDC, you may find it helpful to use the LDC's Catalog Search feature. In other cases, you may need to investigate actual corpus files on Patas. This exercise will be most beneficial to you if you take this opportunity to really explore the corpora.

To Turn In

The file CorporaQ.txt has an outline of the questions to be answered (also below). Modify CorporaQ.txt by adding your answers and turn in the modified file (as a plain text file) via Canvas.

  1. Are you permitted to copy a corpus onto your own machine for research purposes?
  2. What is the process for requesting 'Available' corpora?
  3. Is there a single set of license conditions for all of these corpora?
  4. Are all of the 'Installed' corpora accessible immediately?
  5. How many corpora are Installed on our server?
  6. How many more are Available for installation upon request?
  7. What is the LDC? Specifically, what does the acronym stand for and what is its mission/purpose (in your own words)?
  8. How many of the Installed or Available corpora include:
  9. Are all of the corpora collections of written text? If not, what other kind of content is there?
  10. Find the corpus with this description: "TIMIT contains broadband recordings of 630 speakers of eight major dialects of American English, each reading ten phonetically rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic and word transcriptions as well as a 16-bit, 16kHz speech waveform file for each utterance."
  11. Find the Europarl Parallel Corpus. Hint: It includes Finnish data.
  12. Find the Google N-gram corpus (note: ignore the GALE version of the corpus).
  13. List the names of 4 installed corpora that include dialogue act annotation and the languages they include.
  14. Find a POS-tagged version of the Brown Corpus.
  15. There are two English corpora installed that have syntactically annotated text: Treebank-2 and CCG.
  16. In a paragraph, describe three other things of interest you found among the corpora, i.e., something you were not asked about above.

Part 2: N-grams

The SRI Language Modeling Toolkit (SRILM) is a toolkit for creating and using N-gram language models. It is installed on Patas, at /NLP_TOOLS/ml_tools/lm/srilm. In this part of the exercise, you will use it to train a series of language models and see how well they model various sets of test data.

The Data

Copy these files to a directory on Patas.

holmes.txt (614,774 words) -- The complete Sherlock Holmes novels and short stories by A. Conan Doyle, with the exception of the collection of stories His Last Bow (see below) and the collection The Case Book of Sherlock Holmes (which is not yet in the public domain in this country). We will use this corpus to train the language models.
hislastbow.txt (91,144 words) -- The collection of Sherlock Holmes short stories His Last Bow by A. Conan Doyle.
lostworld.txt (89,600 words) -- The novel The Lost World by A. Conan Doyle.
otherauthors.txt (52,516 words) -- Stories by English Authors: London, a collection of short stories written around the same time as the Sherlock Holmes canon and The Lost World.

We will use two utilities, ngram-count and ngram, both found in /NLP_TOOLS/ml_tools/lm/srilm/srilm-1.5.3/bin/i686-m64/. I suggest setting your PATH variable to include this path, at least for the duration of this assignment, by adding the following to the end of the file .bashrc in your home directory:

PATH=/NLP_TOOLS/ml_tools/lm/srilm/srilm-1.5.3/bin/i686-m64:$PATH

Then open a new shell (or run source ~/.bashrc in your current one) and type

echo $PATH

to make sure /NLP_TOOLS/ml_tools/lm/srilm/srilm-1.5.3/bin/i686-m64 appears in the output.
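
If you prefer to do the whole thing from the command line, something like the following should work (a sketch assuming the default bash shell; it appends the PATH line to ~/.bashrc, reloads it, and checks the result):

# The single quotes keep $PATH from being expanded when the line is written to the file.
echo 'PATH=/NLP_TOOLS/ml_tools/lm/srilm/srilm-1.5.3/bin/i686-m64:$PATH' >> ~/.bashrc
# Reload .bashrc in the current shell, then verify that the SRILM directory shows up.
source ~/.bashrc
echo $PATH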

You can find basic documentation for ngram and ngram-count here, and more extensive documentation here.

Step 1: Build a language model

The following command will create a bigram language model called wbbigram.bo, using Witten-Bell discounting, from the text file holmes.txt:

ngram-count -text holmes.txt -order 2 -wbdiscount -lm wbbigram.bo

Step 2: Test the model

The following command will evaluate the language model wbbigram.bo against the test file hislastbow.txt:

ngram -lm wbbigram.bo -order 2 -ppl hislastbow.txt

To Turn In

The file NgramQ.txt has an outline of the items to turn in. Modify the file by adding your answers and then turn it in (as a plain text file) via Canvas. Note: Questions 1, 5, and 8-10 require answers in the form of one or more paragraphs.

  1. What is each of the flags you used with ngram and ngram-count for (in your own words)?

Evaluate this language model against the other test sets, lostworld.txt and otherauthors.txt (example commands follow the list below). In your writeup, tell us:

  2. The perplexity (ppl) against hislastbow.txt
  3. The perplexity against lostworld.txt
  4. The perplexity against otherauthors.txt
  5. Why do you think the files with the higher perplexity got the higher perplexity?
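
For the perplexity items, you can reuse the Step 2 command with each of the other test files, for example:

ngram -lm wbbigram.bo -order 2 -ppl lostworld.txt
ngram -lm wbbigram.bo -order 2 -ppl otherauthors.txt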

Now build trigram and 4-gram language models from the same training data (still using Witten-Bell discounting); a sketch of the corresponding commands follows the list below. Tell us:

  6. The six perplexity figures: one for each combination of language model and test set.
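
For instance, the trigram case repeats Step 1 and Step 2 with -order 3 (and likewise with -order 4 for the 4-gram model); the model file names here are just suggestions:

ngram-count -text holmes.txt -order 3 -wbdiscount -lm wbtrigram.bo
ngram -lm wbtrigram.bo -order 3 -ppl hislastbow.txt
ngram -lm wbtrigram.bo -order 3 -ppl lostworld.txt
ngram -lm wbtrigram.bo -order 3 -ppl otherauthors.txt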

Build more language models using different smoothing methods. In particular, use "Ristad's natural discounting law" (the -ndiscount flag) and Kneser-Ney discounting (the -kndiscount flag); a sketch of one such command pair follows the list below. Tell us:

  7. Which combination of N-gram order, discounting method, and test file gives the best perplexity result?
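
As a sketch, building and testing a bigram model with Kneser-Ney discounting might look like this (swap in -ndiscount for the natural-discounting models and vary -order as before; the model file name is again just a suggestion):

ngram-count -text holmes.txt -order 2 -kndiscount -lm knbigram.bo
ngram -lm knbigram.bo -order 2 -ppl hislastbow.txt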

In addition, answer these questions:

  8. How are the data files you used formatted, i.e., what preprocessing was done on the texts?
  9. What additional preprocessing step should have been taken?
  10. Discuss whether (and how) this affects the quality of the language models built.
