Ling 573 - Natural Language Processing Systems and Applications
Spring 2011
Deliverable #4: Final Question-Answering Systems:
Code, Outputs, and Scores: Due May 31, 2011: 09:00
Final reports: Due June 7, 2011: 23:59


Goals

In this deliverable, you will complete development of your question-answering system. You will:
- refine your passage retrieval approach to perform more targeted answer extraction,
- evaluate your system on the TREC-2005 factoid questions,
- complete your project report, and
- prepare a final presentation.

Answer Extraction

For this deliverable, you will need to refine your previous passage retrieval approach to achieve more targeted answer extraction. Given the limited time in the course, we will not require Jeopardy!-style exact answers; instead, you will aim to improve your results and produce more targeted answer spans. Specifically, for each question, you should produce the best 20 answer snippets at each of the three required lengths. You may build on techniques presented in class, described in the reading list, and proposed in other research articles.
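As a point of reference, here is a minimal sketch of one way to produce fixed-length snippets from ranked passages. The function names, the (doc_id, passage_text) input layout, and the truncate-at-whitespace strategy are all illustrative assumptions, not a required design:

    # A minimal sketch: truncate ranked passages to a character budget and
    # keep the top 20 snippets per question. All names here are assumed.

    def trim_to_length(text, max_chars):
        """Truncate a passage to at most max_chars characters, breaking at
        the last whitespace so the snippet does not end mid-word."""
        if len(text) <= max_chars:
            return text
        cut = text[:max_chars]
        return cut.rsplit(None, 1)[0] if ' ' in cut else cut

    def make_snippets(ranked_passages, max_chars, top_n=20):
        """ranked_passages: (doc_id, passage_text) pairs, best first.
        Returns up to top_n (doc_id, snippet) pairs at the given length."""
        return [(doc_id, trim_to_length(text, max_chars))
                for doc_id, text in ranked_passages[:top_n]]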

Data

Document Collection
The AQUAINT Corpus was employed as the document collection for the TREC question-answering task for a number of years, and will form the basis of retrieval for this deliverable. The collection can be found on patas in /corpora/LDC/LDC02T31/.
Training Data and Development Test Data
You may use any of the pre-2005 TREC question collections for training and tuning. For the 2003 and 2004 question sets, prepared gold-standard documents and answer patterns are provided to allow you to train and tune your question-answering system.

All pattern files appear in /dropbox/10-11/573/Data/patterns.

All question files appear in /dropbox/10-11/573/Data/Questions.

Test Data
You should perform your final evaluation on the TREC-2005 questions and their corresponding documents and answer string patterns. You are required to test only on the factoid questions.
NOTE: Please do NOT tune on these questions. (I know this is hard since you've seen them before.)

Evaluation

You will compute MRR (Mean Reciprocal Rank), strict and lenient, of the results for each of the three answer-length runs. These scores should be placed in files named QA.results_year_length, where year and length are replaced by the question-set year and the answer string length, in the results directory.

A simple script for calculating MRR based on the Litkowski pattern files and your outputs is provided in /dropbox/10-11/573/code/compute_mrr.py. It should be called as follows:

    python2.6 compute_mrr.py pattern_file QA.outputs {type}

where pattern_file is the answer-pattern file for the question set, QA.outputs is your output file, and {type} selects strict or lenient scoring.
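For reference, MRR averages the reciprocal rank of the highest-ranked correct answer over all questions: MRR = (1/|Q|) * sum over questions q of 1/rank_q, where 1/rank_q is taken as 0 if no correct answer appears in the ranked list. The sketch below is an illustrative lenient-MRR computation, not the provided compute_mrr.py; the input data structures are assumed for the example, and strict scoring (which additionally checks document support) is omitted:

    # Illustrative lenient MRR computation (a sketch, not compute_mrr.py;
    # the input dictionaries below are assumed for the example).
    import re

    def mrr(answers_by_qid, patterns_by_qid):
        """answers_by_qid: {qid: [answer_string, ...]}, ranked best first.
        patterns_by_qid:  {qid: [regex_string, ...]} answer patterns.
        Returns lenient MRR: the mean reciprocal rank of the first
        pattern match, counting 0 for questions with no match."""
        total = 0.0
        for qid, answers in answers_by_qid.items():
            regexes = [re.compile(p) for p in patterns_by_qid.get(qid, [])]
            for rank, answer in enumerate(answers, 1):
                if any(r.search(answer) for r in regexes):
                    total += 1.0 / rank
                    break
        return total / len(answers_by_qid)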

Outputs

Create six (6) output files in the outputs directory, based on running your question-answering system on the 2004 training question file and the 2005 test question file, at each of the required answer string lengths. The files should be named QA.outputs_year_length, following the same convention as the results files above.
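As a quick sanity check on the naming convention, the snippet below generates the six expected file names; the LENGTHS list is a placeholder to be filled in with the three required answer string lengths from the specification:

    # Generate the six expected output file names. LENGTHS is deliberately
    # left empty: fill in the three required answer string lengths.
    YEARS = [2004, 2005]
    LENGTHS = []  # the three lengths given in the assignment

    for year in YEARS:
        for length in LENGTHS:
            print("QA.outputs_%s_%s" % (year, length))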

Completing the project report

This final version should include all required sections, as well as a complete system architecture description and a proper bibliography that includes all, and only, the papers you have actually referenced. See this document for full details. Please name your report D4.pdf.

Presentation

Your presentation may be prepared in any computer-projectable format, including HTML, PDF, PPT, and Word. It should take about 10 minutes to cover your main content. Deposit the presentation in your doc directory; it is not due until the actual presentation time, and you may continue working on it after the main deliverable is due.

Summary