Homework 3 - Analyzing interactions

Andrew J. Ko

In homework 1, you learned to do a literature review, and in homework 2, you learned to extract the generalizable knowledge in research papers. This generalizable, abstract knowledge can be powerful, because it can help you generate other ideas, or manifest an idea in a new way.

While a high-level understanding of an idea can be useful, some insight about ideas can only come from a detailed, low-level understanding of exactly how an interface works. For example, this level of detail can help you get a precise understanding of a system's limitations, suggesting for whom and in what contexts an idea will not work. A low-level understanding of an interface can also help you surface uncertainties in an idea, which might have to be addressed before the idea would actually work in practice. This low-level understanding consists of a subset of the details we've talked about in our theory of user interfaces: a system's functional affordances and its feedback.

Let's consider Apple's FaceID implementation in the iPhone X, and in particular, its use on the lock screen to unlock the phone. As a user interface, what are its functional affordances and feedback? We can look at Apple's FaceID Security guide for low-level details on its implementation. First, we learn that on the lock screen, FaceID is a passive input device: it is continuously scanning. Second, it uses several input streams to do this scanning:

  1. An infrared proximity sensor, which detects when something is near the phone.
  2. An infrared depth camera, which projects a pattern of infrared dots onto a face and reads their reflections to build a depth map.
  3. Machine-learned classifiers, which detect whether there is a face in view and whether its gaze is directed at the screen.

This level of detail exposes a number of limitations and uncertainties. For example, if infrared light doesn't bounce off of a face, neither the proximity sensor nor the depth camera will work. Glass, Plexiglas, wood, brick, stone, asphalt, and paper all absorb infrared, so if someone were wearing a mask made of these materials, FaceID wouldn't detect a face at all, even if the mask looked like one. Moreover, if you're sitting in an environment full of other infrared radiation (for example, next to another iPhone X user), these sources would interfere. And what about the face-detection classifiers? They should work well for whatever data was included in the training set, but not for data that wasn't. Whose faces weren't included? And what about gaze detection for people with artificial eyes, or people without eyes? Finally, this whole process only supports one face, so people who share phones will have to decide who gets to use FaceID and who has to use a password. None of these limitations and uncertainties are clear until you really get into the details of exactly how FaceID works.
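One way to see how each input stream gates the next is to sketch the unlock decision as code. This is a minimal, hypothetical sketch of the pipeline described above; every function name, field, and threshold here is invented for illustration, since Apple's actual implementation is proprietary and far more sophisticated:

```python
# Hypothetical sketch of FaceID's lock-screen pipeline. All names and
# thresholds are invented; this only models the gating structure.

from dataclasses import dataclass

@dataclass
class Scan:
    ir_reflected: bool        # did infrared light bounce back from the subject?
    face_detected: bool       # did the face-detection classifier fire?
    gaze_on_screen: bool      # did the attention/gaze classifier fire?
    depth_match_score: float  # similarity of depth map to the enrolled face

MATCH_THRESHOLD = 0.9  # invented value; Apple does not publish this

def should_unlock(scan: Scan) -> bool:
    """Decide whether a single passive scan unlocks the phone.

    Each stage gates the next: without reflected infrared, the proximity
    sensor and depth camera have nothing to work with; without a detected,
    attending face, no depth comparison happens; and only one enrolled
    face is ever compared against.
    """
    if not scan.ir_reflected:
        return False  # e.g., an infrared-absorbing mask defeats detection entirely
    if not scan.face_detected:
        return False  # classifier limits: faces unlike its training data
    if not scan.gaze_on_screen:
        return False  # attention check; may fail for some artificial eyes
    return scan.depth_match_score >= MATCH_THRESHOLD

# A scan that passes every gate unlocks; knocking out any one input fails.
print(should_unlock(Scan(True, True, True, 0.95)))   # True
print(should_unlock(Scan(False, True, True, 0.95)))  # False
```

Notice that each limitation in the paragraph above maps to a specific gate in this sketch, which is exactly the kind of mapping a low-level description makes possible.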

The task

I want you to practice doing the same analysis I did above. For one of the research contributions you summarized in homework 2 (or a different one if you're no longer interested in those):

  1. Write a detailed description of exactly how the system works. It should contain enough explanation that someone unfamiliar with all of the techniques (e.g., a supervisor or colleague) can still understand how it works. Think of this writing as teaching. You want someone to understand the behavior of the user interface.
  2. Use your description to generate a list of at least ten limitations or uncertainties for the technique. They could all be limitations or all be uncertainties; the distribution doesn't matter.

To achieve the above, you may need to become more expert in technologies you're unfamiliar with. For example, to write the explanation above, I had to relearn how a range of sensors and machine learning technologies work. The more expertise you gain, the more limitations and uncertainties you'll be able to generate.


This homework is worth 5 points: