
Two human hands in the sun facing a blue sky Hands are versatile tools for input. Credit: Luisfi (Own work) [CC BY-SA 3.0] via Wikimedia Commons

Hands

Andrew J. Ko

Thus far, we have discussed two forms of input to computers: pointing and text entry. Both are sufficient for operating most forms of computers. But, as we discussed in our chapter on history, interfaces have always been about augmenting human ability and cognition, and so researchers have pushed far beyond pointing and text to explore many new forms of input. In this chapter, we focus on the use of hands to interact with computers, including touch screens, pens, gestures, and hand tracking.

One of the central motivations for exploring hand-based input came from new visions of interactive computing. For instance, in 1991, Mark Weiser, who at the time led the Computer Science Laboratory at the very same Xerox PARC that created the first GUI, wrote in Scientific American about a vision of ubiquitous computing (Weiser 1991). In this vision, computing would disappear, become invisible, and become a seamless part of everyday tasks:

Hundreds of computers in a room could seem intimidating at first, just as hundreds of volts coursing through wires in the walls did at one time. But like the wires in the walls, these hundreds of computers will come to be invisible to common awareness. People will simply use them unconsciously to accomplish everyday tasks... There are no systems that do well with the diversity of inputs to be found in an embodied virtuality.

Within this vision, input must move beyond the screen, supporting a wide range of embodied forms of computing. We'll begin by focusing on input techniques that rely on hands, just as pointing and text-entry largely have: physically touching a surface, using a pen-shaped object to touch a surface, and moving the hand or wrist to convey a gesture. Throughout, we will discuss how each of these forms of interaction imposes unique gulfs of execution and evaluation.

A diagram of a finger touching a touch screen surface. A 5-wire resistive touch screen for sensing position. Credit: Mercury13 [CC BY-SA 3.0]

Touch

Perhaps the most ubiquitous and familiar form of hand-based input is using our fingers to touch screens. The first touch screens originated in the mid-1960's. They worked similarly to modern touchscreens, just with less fidelity. The earliest screens were capacitive, consisting of an insulator panel with a transparent conductive coating. When a conductive object such as a finger made contact, it closed a circuit, flipping a binary input from off to on. It didn't read position, pressure, or other features of a touch, just that the surface was being touched. Resistive touch screens came next; rather than using capacitance to close a circuit, they relied on pressure to measure voltage between X wires and Y wires, allowing a position to be read. In the 1980's, HCI researcher Bill Buxton (my academic grandfather) invented the first multi-touch screen while at the University of Toronto, placing a camera behind a frosted glass panel, and using machine vision to detect the black spots where fingers occluded the light. This led to several other advancements in sensing technologies that did not require a camera, and in the 1990's, touch screens launched on consumer devices, including single-touch handhelds like the Apple Newton and the Palm Pilot. The 2000's brought even more innovation in sensing technology, eventually making multi-touch screens small enough to embed in the smartphones we use today. (You can see a more detailed history in this nice Ars Technica feature on the history of multi-touch.)
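To make the position-sensing idea concrete, here is a minimal Python sketch of how a resistive controller might convert raw analog readings into screen coordinates. The 4-wire read-out scheme, ADC resolution, screen dimensions, and no-touch threshold are illustrative assumptions rather than details of any particular device.

```python
# Conceptual sketch of position read-out for a 4-wire resistive touch screen.
# Raw readings come from a hypothetical analog-to-digital converter (ADC);
# real controllers add settling delays, filtering, and pressure measurement.

ADC_MAX = 4095                  # full-scale value of a 12-bit ADC
SCREEN_W, SCREEN_H = 320, 480   # target coordinate space in pixels
NO_TOUCH_THRESHOLD = 10         # near-zero readings mean the layers never met

def touch_position(x_raw, y_raw):
    """Convert raw ADC readings (one per axis) to pixel coordinates, or None.

    x_raw is measured while a voltage gradient is driven across the X layer
    and the Y layer is used as a probe; y_raw is measured with the roles of
    the two layers swapped. Where the layers are pressed together, the probe
    picks up a voltage proportional to the touch's position along that axis.
    """
    if x_raw < NO_TOUCH_THRESHOLD or y_raw < NO_TOUCH_THRESHOLD:
        return None  # the layers are not in contact, so nothing is touching
    # The voltage ratio maps (approximately) linearly onto screen coordinates.
    x = x_raw / ADC_MAX * SCREEN_W
    y = y_raw / ADC_MAX * SCREEN_H
    return (x, y)

# Example: readings near mid-scale map to roughly the center of the screen.
print(touch_position(2048, 2048))
```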

As you are probably already aware, touch screens impose a wide range of gulfs of execution and evaluation on users. On first use, for example, it is difficult to know whether a surface is touchable. One will often see children who are used to everything being a touch screen attempt to touch non-touchscreens, confused that the screen isn't providing any feedback. Then, of course, touch screens often operate via complex multi-fingered gestures. These have to be somehow taught to users, and learned, before someone can successfully operate a touch interface. This learning requires careful feedback to address gulfs of evaluation, especially if a gesture isn't accurately performed. Most operating systems rely on the fact that people will learn how to operate touchscreens from other people, such as through a tutorial at a store.

Spherical multitouch, from Microsoft Research (Benko et al. 2008).

While touchscreens might seem ubiquitous and well understood, HCI researchers have been pushing their limits even further. Some of this work has invented new types of touch sensors. For example, researchers have worked on materials that allow touch surfaces to be cut into arbitrary shapes and sizes other than rectangles (Olberding et al. 2013). Some have worked on touch surfaces made of printed piezoelectric foil that can sense bending and pressure (Rendl et al. 2012), or thin, stretchable, transparent surfaces that can detect force, pinching, and dragging (Sugiura et al. 2012). Others have made 3-dimensional spherical touch surfaces (Benko et al. 2008) and explored using any surface as a touch screen using depth-sensing cameras and projectors (Harrison et al. 2011a).

Other researchers have explored more precise ways of sensing how a touch screen is touched. Some have added speakers to detect how something was grasped or touched (Ono et al. 2013), or leveraged variations in the resonance of people's finger anatomy to recognize different fingers and parts of different fingers (Harrison et al. 2011b), or used the resonance of surfaces to detect and classify different types of surface scratching (Harrison and Hudson 2008), including through fabric (Saponas et al. 2011). Depth cameras can also be used to detect the posture and handedness of touch (Murugappan et al. 2012). All of these represent new channels of input that go beyond position, allowing for new, richer, more powerful interfaces.

Multi-user multi-touch, by MERL (Dietz and Leigh 2001).

Commercial touch screens still focus on single-user interaction, treating every touch as if it came from the same person. Research, however, has explored many ways to differentiate between multiple people using a single touch screen. One approach is to have users sit on a surface that determines their identity, differentiating touch input (Dietz and Leigh 2001). Another approach uses wearables to differentiate users (Webb et al. 2016). Less obtrusive techniques have successfully used variation in users' bone density, muscle mass, and footwear (Harrison et al. 2012), or fingerprint detection embedded in a display (Holz and Baudisch 2013).

While these inventions have richly explored many possible new forms of interaction, there has so far been little appetite for touch screen innovation in industry. Apple's force-sensitive touch screen interaction (called "3D Touch") is one example of an innovation that made it to market, but there are some indicators that Apple will abandon it after just a few short years, largely because users were not able to discover it (a classic gulf of execution).

An Apple Pencil input device. The Apple Pencil. Credit: Brett Jordan (https://flic.kr/p/BHRaD1) [CC BY 2.0]

Pens

In addition to fingers, many researchers have explored the unique benefits of pen-based interactions to support handwriting, sketching, diagramming, or other touch-based interactions. These leverage the skill of grasping a pen or pencil that many are familiar with from manual writing.

Some of these pen-based interactions are simply replacements for fingers. For example, the Palm Pilot, popular in the 1990's, required the use of a stylus for its resistive touch screen, but the styluses themselves were just plastic. They merely served to prevent fatigue from applying pressure to the screen with a finger and to increase the precision of touch during handwriting or interface interactions.

However, pens impose their own unique gulfs of execution and evaluation. For example, many pens are not active until a device is set to a mode that receives pen input. The Apple Pencil, for example, only works in particular modes and interfaces, and so it is up to a person to experiment with an interface to discover whether it is pencil-compatible. Pens themselves can also have buttons and switches that control modes in software, which require people to learn what the modes control and what effect they have on input and interaction. Pens also sometimes fail to play well with the need to enter text, as typing is faster than tapping one character at a time with a pen. One consequence of these gulfs of execution and efficiency issues is that pens are often used for specific applications such as drawing or sketching, where someone can focus on learning the pen's capabilities and is unlikely to be entering much text.

Researchers have explored new types of pen interactions that attempt to break beyond these niche applications. For example, some techniques combine touch input with the non-dominant hand and pen input with the dominant hand (Hinckley et al. 2010, Hamilton et al. 2012), affording new forms of bi-manual input that have higher throughput than just one hand. Others have investigated ways of using the rolling of a pen's barrel to add another channel of input (Bi et al. 2008), or even using a physical pen barrel with a virtual head, freeing drawing from pen occlusion and visual parallax while allowing software-based customization (Lee et al. 2012).

Other pen-based innovations are purely software based. For example, some interactions improve handwriting recognition by allowing users to correct recognition errors while writing (Shilman et al. 2006), attempting to make the interplay between pen input and text input more seamless. Others have explored techniques for interacting with large displays for sketching and brainstorming activities (Guimbretière et al. 2001). Researchers have developed interactions for particular sketching media, such as algorithms that allow painting that respects edges within images (Olsen, Jr. and Harris 2008) and diagramming tools that follow the paradigms of pencil-based architectural sketching (Zeleznik et al. 2008). More recent techniques use software-based motion tracking and a camera to support pen input with six degrees of freedom and sub-millimeter accuracy (Wu et al. 2017).

Sixteen unistroke gestures. Credit: (Wobbrock et al. 2007)

Gestures

Whereas touch and pens involve traditional pointing, gesture-based interactions involve recognizing patterns in hand position or movement. Some gesture recognizers still work from a time-series of points in a 2-dimensional plane, such as multi-touch gestures like pinching and dragging on a touchscreen, or symbol recognition in handwriting or text entry. This type of gesture recognition can be done with a relatively simple recognition algorithm (Wobbrock et al. 2007).
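To give a sense of how simple such a recognizer can be, here is a minimal sketch in the spirit of the $1 unistroke recognizer cited above (it omits the rotation invariance and angle search of the full algorithm): resample the stroke to a fixed number of points, normalize its scale and position, and report the stored template whose points are closest on average.

```python
# A simplified unistroke recognizer in the spirit of the $1 recognizer
# (Wobbrock et al. 2007). This sketch omits the rotation invariance and
# golden-section angle search of the full algorithm.
import math

N = 64  # number of points each stroke is resampled to

def path_length(points):
    return sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))

def resample(points, n=N):
    """Resample a stroke to n points evenly spaced along its path."""
    interval = path_length(points) / (n - 1)
    pts, resampled, accumulated = list(points), [points[0]], 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and accumulated + d >= interval:
            t = (interval - accumulated) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            resampled.append(q)
            pts.insert(i, q)      # the interpolated point starts the next segment
            accumulated = 0.0
        else:
            accumulated += d
        i += 1
    while len(resampled) < n:     # guard against floating-point shortfall
        resampled.append(pts[-1])
    return resampled[:n]

def normalize(points):
    """Scale the stroke to a unit box and move its centroid to the origin."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    w, h = (max(xs) - min(xs)) or 1e-9, (max(ys) - min(ys)) or 1e-9
    cx, cy = sum(xs) / len(xs), sum(ys) / len(ys)
    return [((x - cx) / w, (y - cy) / h) for x, y in points]

def recognize(stroke, templates):
    """Return the name of the template stroke closest to the candidate stroke."""
    candidate = normalize(resample(stroke))
    best_name, best_score = None, float("inf")
    for name, template in templates.items():
        prepared = normalize(resample(template))
        score = sum(math.dist(a, b) for a, b in zip(candidate, prepared)) / N
        if score < best_score:
            best_name, best_score = name, score
    return best_name
```

A caller would record one example stroke per gesture name as a list of (x, y) points, store them in the templates dictionary, and pass each completed touch stroke to recognize.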

Other gestures rely on 3-dimensional input about the position of fingers and hands in space. Some recognition algorithms seek to recognize single hand positions, such as when the user brings their thumb and forefinger together (a pinch gesture) (Wilson 2006). Researchers have developed tools to make it easier for developers to build applications that respond to in-air hand gestures (Krupka et al. 2017). Other techniques try to model hand gestures using alternative sensing approaches such as Electrical Impedance Tomography (EIT) (Zhang and Harrison 2015), radio frequencies (Wang et al. 2016), the electromagnetic field pulsed by a phone's GSM radio (Zhao et al. 2014), or full machine vision of in-air hand gestures (Colaço et al. 2013, Song et al. 2014). Some researchers have leveraged wearables to simplify recognition and increase recognition accuracy, including sensors mounted on fingers (Gupta et al. 2016), and smartwatches sensed through wrist rotations (Zhang et al. 2016), while walking (Gong et al. 2016), or while being tapped or scratched (Laput et al. 2016).
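At the simplest end of this spectrum, a pinch-style pose can be detected directly from tracked fingertip positions. The sketch below is a generic illustration of that idea, not an implementation of the cited technique: it flags a pinch when the thumb and index fingertips come within a threshold distance, with a hysteresis band so the state does not flicker at the boundary. The fingertip coordinates and threshold values are assumptions.

```python
# Minimal pinch detection from tracked 3D fingertip positions.
# Positions are assumed to arrive each frame from some hand tracker,
# in meters; the thresholds below are illustrative, not tuned values.
import math

PINCH_ON = 0.02    # fingertips closer than 2 cm -> pinch begins
PINCH_OFF = 0.04   # fingertips farther than 4 cm -> pinch ends (hysteresis)

class PinchDetector:
    def __init__(self):
        self.pinching = False

    def update(self, thumb_tip, index_tip):
        """Feed one frame of (x, y, z) fingertip positions; returns True while pinching."""
        distance = math.dist(thumb_tip, index_tip)
        if not self.pinching and distance < PINCH_ON:
            self.pinching = True    # fingers came together: start the pinch
        elif self.pinching and distance > PINCH_OFF:
            self.pinching = False   # fingers separated well past the threshold: end it
        return self.pinching
```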

While all of these inventions are exciting in their potential, gestures have significant gulfs of execution and evaluation. How does someone learn the gestures? How do we create tutorials that give feedback on correct gesture "posture"? When someone performs a gesture incorrectly, how can they undo it if it had an unintended effect? What if the undo gesture is performed incorrectly? These questions ultimately arise from the unreliability of gesture classification.

Hand Tracking

Color glove used for hand tracking A glove used to facilitate hand tracking with cameras. Credit: (Wang et al. 2009)

Gesture-based systems look at patterns in hand motion to recognize a set of discrete poses or gestures. This is often appropriate when the user wants to trigger some action, but what about tasks in 3D that require pointing or spatial manipulation? Hand tracking systems are better suited for these tasks because they treat the hand as a continuous input device and estimate the hand’s real-time position and orientation.

Most hand tracking systems use cameras and computer vision techniques to track the hand in space. These systems often rely on an approximate model of the hand skeleton, including bones and joints, and solve for the joint angles and hand pose that best fit the observed data. Researchers have used gloves with unique color patterns, shown above, to make the hand easier to identify and to simplify the process of pose estimation (Wang et al. 2009).
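As a toy version of this model-fitting idea, the sketch below fits the two joint angles of a planar, two-segment "finger" so that its predicted fingertip lands on an observed fingertip position. Real trackers fit dozens of skeletal parameters to depth images or keypoints, with anatomical constraints and temporal smoothing; the segment lengths and observed point here are made-up values, and only the "optimize the pose that best explains the data" structure is meant to carry over.

```python
# Toy model-based pose estimation: fit the two joint angles of a planar,
# two-segment finger so its predicted fingertip matches an observed point.
# Real hand trackers fit full skeletons to depth or RGB data; this sketch
# only illustrates the "optimize pose to explain observations" structure.
import numpy as np
from scipy.optimize import least_squares

L1, L2 = 0.05, 0.03   # segment lengths in meters (illustrative values)

def forward_kinematics(angles):
    """Fingertip position for joint angles (theta1, theta2), with the base at the origin."""
    t1, t2 = angles
    knuckle = np.array([L1 * np.cos(t1), L1 * np.sin(t1)])
    tip = knuckle + np.array([L2 * np.cos(t1 + t2), L2 * np.sin(t1 + t2)])
    return tip

def fit_pose(observed_tip, initial_angles=(0.1, 0.1)):
    """Solve for the joint angles whose predicted fingertip best matches the observation."""
    residual = lambda angles: forward_kinematics(angles) - observed_tip
    result = least_squares(residual, initial_angles)
    return result.x

# Example: an observed fingertip keypoint (e.g., from a glove or depth detector).
observed = np.array([0.06, 0.03])
theta1, theta2 = fit_pose(observed)
print(f"estimated joint angles: {theta1:.2f}, {theta2:.2f} rad")
```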

Since then, researchers have developed and refined techniques that use depth cameras like the Kinect to track the hand without markers or gloves (Wang et al. 2011, Oberweger et al. 2015, Taylor et al. 2016, Mueller et al. 2017). Commercial devices, such as the Leap Motion, have been developed that bring hand tracking to computers and virtual reality devices. These tracking systems have been used for interaction on large displays (Liu et al. 2015) and with mid-air haptic devices (Long et al. 2014).

Hand tracking system from Microsoft Research. With precise tracking, hands can be used to manipulate virtual widgets. Credit: (Taylor et al. 2016)

For head-mounted virtual and augmented reality systems, a common way to track the hands is through the use of positionally tracked controllers. Systems such as the Oculus Rift or HTC Vive use cameras and infrared LEDs to track both the position and orientation of the controllers.

Like gesture interactions, the potential for classification error in hand tracking interactions can impose significant gulfs of execution and evaluation. However, because the applications of hand tracking often involve manipulation of 3D objects rather than invoking commands, the severity of these gulfs may be lower in practice. This is because object manipulation is essentially the same as direct manipulation: it's easy to see what effect the hand tracking is having and correct it if the tracking is failing.


While there has been incredible innovation in hand-based input, there are still many open challenges. These techniques can be hard for new users to learn, requiring careful attention to tutorials and training. And, because of the potential for recognition error, interfaces need some way of helping people correct errors, undo commands, and try again. Moreover, because all of these input techniques use hands, few are accessible to people with severe motor impairments in their hands, to people lacking hands altogether, or, when interfaces use visual feedback to bridge gulfs of evaluation, to people lacking sight. In the next chapter, we will discuss techniques that rely on other parts of a human body for input, and therefore can be more accessible to people with motor impairments.

Next chapter: Body

Further reading

Hrvoje Benko, Andrew D. Wilson, and Ravin Balakrishnan. 2008. Sphere: multi-touch interactions on a spherical display. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 77-86.

Xiaojun Bi, Tomer Moscovich, Gonzalo Ramos, Ravin Balakrishnan, and Ken Hinckley. 2008. An exploration of pen rolling for pen-based interaction. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 191-200.

Xiaojun Bi and Shumin Zhai. 2016. Predicting Finger-Touch Accuracy Based on the Dual Gaussian Distribution Model. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 313-319.

Andrea Colaço, Ahmed Kirmani, Hye Soo Yang, Nan-Wei Gong, Chris Schmandt, and Vivek K. Goyal. 2013. Mime: compact, low power 3D gesture sensing for interaction with head mounted displays. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA, 227-236.

Paul Dietz and Darren Leigh. 2001. DiamondTouch: a multi-user touch technology. In Proceedings of the 14th annual ACM symposium on User interface software and technology (UIST '01). ACM, New York, NY, USA, 219-226.

Jun Gong, Xing-Dong Yang, and Pourang Irani. 2016. WristWhirl: One-handed Continuous Smartwatch Input using Wrist Gestures. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 861-872.

François Guimbretière, Maureen Stone, and Terry Winograd. 2001. Fluid interaction with high-resolution wall-size displays. In Proceedings of the 14th annual ACM symposium on User interface software and technology (UIST '01). ACM, New York, NY, USA, 21-30.

Aakar Gupta, Antony Irudayaraj, Vimal Chandran, Goutham Palaniappan, Khai N. Truong, and Ravin Balakrishnan. 2016. Haptic Learning of Semaphoric Finger Gestures. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 219-226.

William Hamilton, Andruid Kerne, and Tom Robbins. 2012. High-performance pen + touch modality interactions: a real-time strategy game eSports context. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 309-318.

Chris Harrison and Scott E. Hudson. 2008. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 205-208.

Chris Harrison, Hrvoje Benko, and Andrew D. Wilson. 2011a. OmniTouch: wearable multitouch interaction everywhere. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 441-450.

Chris Harrison, Julia Schwarz, and Scott E. Hudson. 2011b. TapSense: enhancing finger interaction on touch surfaces. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 627-636.

Chris Harrison, Munehiko Sato, and Ivan Poupyrev. 2012. Capacitive fingerprinting: exploring user differentiation by sensing electrical properties of the human body. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 537-544.

Ken Hinckley, Koji Yatani, Michel Pahud, Nicole Coddington, Jenny Rodenhouse, Andy Wilson, Hrvoje Benko, and Bill Buxton. 2010. Pen + touch = new tools. In Proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 27-36.

Christian Holz and Patrick Baudisch. 2013. Fiberio: a touchscreen that senses fingerprints. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA, 41-50.

Eyal Krupka, Kfir Karmon, Noam Bloom, Daniel Freedman, Ilya Gurvich, Aviv Hurvitz, Ido Leichter, Yoni Smolin, Yuval Tzairi, Alon Vinnikov, and Aharon Bar-Hillel. 2017. Toward Realistic Hands Gesture Interface: Keeping it Simple for Developers and Machines. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 1887-1898.

Gierad Laput, Robert Xiao, and Chris Harrison. 2016. ViBand: High-Fidelity Bio-Acoustic Sensing Using Commodity Smartwatch Accelerometers. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 321-333.

David Lee, KyoungHee Son, Joon Hyub Lee, and Seok-Hyung Bae. 2012. PhantomPen: virtualization of pen head for digital drawing free from pen occlusion & visual parallax. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 331-340.

Mathieu Le Goc, Pierre Dragicevic, Samuel Huron, Jeremy Boy, and Jean-Daniel Fekete. 2015. SmartTokens: Embedding Motion and Grip Sensing in Small Tangible Objects. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15). ACM, New York, NY, USA, 357-362.

Mingyu Liu, Mathieu Nancel, and Daniel Vogel. 2015. Gunslinger: Subtle Arms-down Mid-air Interaction. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15). ACM, New York, NY, USA, 63-71.

Benjamin Long, Sue Ann Seah, Tom Carter, and Sriram Subramanian. 2014. Rendering volumetric haptic shapes in mid-air using ultrasound. ACM Trans. Graph. 33, 6, Article 181 (November 2014), 10 pages.

Tomer Moscovich. 2009. Contact area interaction with sliding widgets. In Proceedings of the 22nd annual ACM symposium on User interface software and technology (UIST '09). ACM, New York, NY, USA, 13-22.

Franziska Mueller, Dushyant Mehta, Oleksandr Sotnychenko, Srinath Sridhar, Dan Casas, and Christian Theobalt. 2017. Real-time hand tracking under occlusion from an egocentric RGB-D sensor. In Proceedings of the IEEE International Conference on Computer Vision (ICCV).

Sundar Murugappan, Vinayak, Niklas Elmqvist, and Karthik Ramani. 2012. Extended multitouch: recovering touch posture and differentiating users using a depth camera. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 487-496.

Markus Oberweger, Paul Wohlhart, and Vincent Lepetit. 2015. Hands deep in deep learning for hand pose estimation. arXiv preprint arXiv:1502.06807.

Simon Olberding, Nan-Wei Gong, John Tiab, Joseph A. Paradiso, and Jürgen Steimle. 2013. A cuttable multi-touch sensor. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA, 245-254.

Dan R. Olsen, Jr. and Mitchell K. Harris. 2008. Edge-respecting brushes. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 171-180.

Makoto Ono, Buntarou Shizuki, and Jiro Tanaka. 2013. Touch & activate: adding interactivity to existing objects using active acoustic sensing. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA, 31-40.

Patrick Paczkowski, Julie Dorsey, Holly Rushmeier, and Min H. Kim. 2014. Paper3D: bringing casual 3D modeling to a multi-touch interface. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 23-32.

Christian Rendl, Patrick Greindl, Michael Haller, Martin Zirkl, Barbara Stadlober, and Paul Hartmann. 2012. PyzoFlex: printed piezoelectric pressure sensing foil. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12).

T. Scott Saponas, Chris Harrison, and Hrvoje Benko. 2011. PocketTouch: through-fabric capacitive touch input. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 303-308.

Michael Shilman, Desney S. Tan, and Patrice Simard. 2006. CueTIP: a mixed-initiative interface for correcting handwriting errors. In Proceedings of the 19th annual ACM symposium on User interface software and technology (UIST '06). ACM, New York, NY, USA, 323-332.

Jie Song, Gábor Sörös, Fabrizio Pece, Sean Ryan Fanello, Shahram Izadi, Cem Keskin, and Otmar Hilliges. 2014. In-air gestures around unmodified mobile devices. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 319-329.

Yuta Sugiura, Masahiko Inami, and Takeo Igarashi. 2012. A thin stretchable interface for tangential force measurement. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 529-536.

Jonathan Taylor, Lucas Bordeaux, Thomas Cashman, Bob Corish, Cem Keskin, Toby Sharp, Eduardo Soto, David Sweeney, Julien Valentin, Benjamin Luff, Arran Topalian, Erroll Wood, Sameh Khamis, Pushmeet Kohli, Shahram Izadi, Richard Banks, Andrew Fitzgibbon, and Jamie Shotton. 2016. Efficient and precise interactive hand tracking through joint, continuous optimization of pose and correspondences. ACM Trans. Graph. 35, 4, Article 143 (July 2016), 12 pages.

Robert Y. Wang and Jovan Popović. 2009. Real-time hand-tracking with a color glove. ACM Trans. Graph. 28, 3, Article 63 (July 2009), 8 pages.

Robert Wang, Sylvain Paris, and Jovan Popović. 2011. 6D hands: markerless hand-tracking for computer aided design. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 549-558.

Saiwen Wang, Jie Song, Jaime Lien, Ivan Poupyrev, and Otmar Hilliges. 2016. Interacting with Soli: Exploring Fine-Grained Dynamic Gesture Recognition in the Radio-Frequency Spectrum. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 851-860.

Andrew M. Webb, Michel Pahud, Ken Hinckley, and Bill Buxton. 2016. Wearables as Context for Guiard-abiding Bimanual Touch. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 287-300.

Mark Weiser. 1991. The computer for the 21st century. Scientific American, 265(3), 94-104.

Andrew D. Wilson. 2006. Robust computer vision-based detection of pinching for one and two-handed gesture input. In Proceedings of the 19th annual ACM symposium on User interface software and technology (UIST '06). ACM, New York, NY, USA, 255-258.

Jacob O. Wobbrock, Andrew D. Wilson, and Yang Li. 2007. Gestures without libraries, toolkits or training: a $1 recognizer for user interface prototypes. In Proceedings of the 20th annual ACM symposium on User interface software and technology (UIST '07). ACM, New York, NY, USA, 159-168.

Robert C. Zeleznik, Andrew Bragdon, Chu-Chi Liu, and Andrew Forsberg. 2008. Lineogrammer: creating diagrams by drawing. In Proceedings of the 21st annual ACM symposium on User interface software and technology (UIST '08). ACM, New York, NY, USA, 161-170.

Yang Zhang and Chris Harrison. 2015. Tomo: Wearable, Low-Cost Electrical Impedance Tomography for Hand Gesture Recognition. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15). ACM, New York, NY, USA, 167-173.

Yang Zhang, Robert Xiao, and Chris Harrison. 2016. Advancing Hand Gesture Recognition with High Resolution Electrical Impedance Tomography. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 843-850.

Chen Zhao, Ke-Yu Chen, Md Tanvir Islam Aumi, Shwetak Patel, and Matthew S. Reynolds. 2014. SideSwipe: detecting in-air gestures around mobile devices using actual GSM signal. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 527-534.