Due 5:00PM Monday 11/24. You may submit your exam on paper (either to me or to my mailbox on the second floor of the CMC) or by placing it into your hand-in folder. Any code you write should go into your hand-in folder in any case. Please put final-related electronic submissions in a folder called hand-in/final/. For electronic text, I prefer .txt, but .doc or .rtf are OK if you have something that doesn't work in straight text.
This is an open-book, open-Internet, open-library take-home exam. You may not consult with any person other than me (Jeff Ondich, to be precise) about this exam.
Several of the questions on this exam concern a tiny grammar and lexicon restricted to the domain of Ernie's breakfast.
Suppose we want to model the difference between mass nouns and count nouns ("some food" vs. "three pancakes") using feature structures and unification. Our goal in the context of Ernie's breakfast is to prevent "Ernie eats pancake" and "Ernie eats a food" from being accepted.
Add suitable feature structures to the grammar and lexicon. Note that you will not need to add feature structures to every rule.
Show the parse trees for "Ernie eats a pancake" and "Ernie eats pancake" with unified feature structures drawn next to each constituent in the tree. Again, this is the tree where all the unification operations have been performed from the bottom up, rather than a simple labeling of the non-terminals with their associated feature structures from the grammar and lexicon. Note that some constituents will have no feature structure, while some may be labeled "failure to unify" or something like that.
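In case you want to sanity-check your feature structures before drawing the trees, unification over flat feature structures is easy to simulate in a few lines of Python. The sketch below is only an illustration, not part of the required answer: the COUNT feature name and the toy lexicon entries are assumptions of mine, not the grammar's actual notation.

    # Minimal sketch of unification over flat attribute/value feature
    # structures. The COUNT feature and the sample lexicon entries are
    # illustrative assumptions, not the course grammar's notation.
    def unify(fs1, fs2):
        """Return the unification of two flat feature structures, or None on failure."""
        result = dict(fs1)
        for feature, value in fs2.items():
            if feature in result and result[feature] != value:
                return None                  # conflicting values: unification fails
            result[feature] = value
        return result

    pancake = {'COUNT': '+'}                 # count noun: wants a determiner
    food    = {'COUNT': '-'}                 # mass noun: no determiner
    a       = {'COUNT': '+'}                 # the determiner "a" selects count nouns

    print(unify(a, pancake))                 # {'COUNT': '+'}  ("a pancake" unifies)
    print(unify(a, food))                    # None            ("a food" fails to unify)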
We didn't discuss smoothing in much detail this term, so here's an exercise to give you a little hands-on experience with smoothing. These questions refer to the hickory corpus.
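If you would like to see the arithmetic concretely before starting, here is a minimal add-one (Laplace) smoothing sketch for bigram probabilities. The file name hickory.txt and the whitespace tokenization are assumptions; substitute however the hickory corpus is actually stored and tokenized.

    # Minimal add-one (Laplace) smoothing sketch for bigram probabilities.
    # The corpus file name 'hickory.txt' and whitespace tokenization are
    # assumptions.
    from collections import defaultdict

    unigram_counts = defaultdict(int)
    bigram_counts = defaultdict(int)

    tokens = open('hickory.txt').read().split()
    for i in range(len(tokens) - 1):
        unigram_counts[tokens[i]] += 1
        bigram_counts[(tokens[i], tokens[i + 1])] += 1
    unigram_counts[tokens[-1]] += 1

    vocabulary_size = len(unigram_counts)

    def smoothed_bigram_probability(w1, w2):
        """P(w2 | w1) = (count(w1 w2) + 1) / (count(w1) + V), with add-one smoothing."""
        return float(bigram_counts[(w1, w2)] + 1) / (unigram_counts[w1] + vocabulary_size)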
We have installed WordNet 3.0 at /usr/local/WordNet-3.0. Before working on these exercises, you should add /usr/local/WordNet-3.0/bin to your path.
Using the "User Commands" documentation in the WordNet 3.0 Reference Manual, try to answer the following questions.
PyWordNet is a Python module for WordNet, and it is installed on the Macs in our labs. The PyWordNet home page has examples, but not much documentation beyond that. If you are so inclined, you can study the source code in /Library/Python/2.5/site-packages/wordnet.py and /Library/Python/2.5/site-packages/wntools.py.
You may find it helpful to read and run this small test of PyWordNet.
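If it helps to see what a minimal PyWordNet session looks like, here is a short sketch. The N['...'] lookup and the getSenses() call follow the examples on the PyWordNet home page; treat the exact names as assumptions and check wordnet.py in site-packages if anything differs.

    # Minimal PyWordNet sketch. The N dictionary lookup and getSenses()
    # follow the PyWordNet home-page examples; verify the exact method
    # names against /Library/Python/2.5/site-packages/wordnet.py.
    from wordnet import N          # N is the noun dictionary

    word = N['pancake']
    for sense in word.getSenses():
        print(sense)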
I like to keep track of what college students think is interesting and cool, even though I'm old and clueless. I'd like your help in my ongoing education. What do you think I should read or watch or listen to if I want to know what's interesting in the world these days?
I promised long ago that we would investigate the mechanisms involved in transforming ordinary sentences into sentences more likely to be spoken by either Yoda or Rob Oden. Unfortunately, I have not been able to collect a sufficient number of really characteristic Oden sentences, so I'll have to keep working on that for the next time I teach this course.
Fortunately, there are plenty of Yoda quotes available. Here are a few interesting sentences:
Your job for this problem is to create a detailed outline of a computational process you would use to translate normal sentences into Yoda sentences (or vice versa, if you prefer). You will probably need to describe relevant grammar rules, along with transformation operations to be performed on parse trees once they are created. You should also note the kinds of errors your process is likely to make.
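To make the kind of tree transformation I have in mind a little more concrete, here is a minimal sketch that fronts a verb's NP complement in a toy parse tree ("you must have patience" becomes "patience you must have"). The nested-list tree format and the single rule are illustrative assumptions; your outline should propose and justify its own rules rather than lean on this code.

    # Minimal sketch of one Yoda-style transformation: move the VP's final
    # NP complement to the front of the sentence. The nested-list tree
    # format and this single rule are illustrative assumptions.
    def yodify(tree):
        """['S', subj, ['VP', ..., obj]] -> ['S', obj, subj, VP-without-obj].
        Returns the tree unchanged if the pattern does not match."""
        if tree[0] != 'S' or len(tree) != 3 or tree[2][0] != 'VP':
            return tree
        subject, vp = tree[1], tree[2]
        complement = vp[-1]
        if complement[0] != 'NP':
            return tree
        return ['S', complement, subject, vp[:-1]]

    sentence = ['S',
                ['NP', 'you'],
                ['VP', ['Aux', 'must'], ['V', 'have'], ['NP', 'patience']]]

    print(yodify(sentence))
    # ['S', ['NP', 'patience'], ['NP', 'you'], ['VP', ['Aux', 'must'], ['V', 'have']]]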