The Effect of Offset Correction and Cursor on Mid-Air Pointing in Real and Virtual Environments
Pointing at remote objects to direct others' attention is a fundamental human ability. Previous work explored methods for remote pointing to select targets. Absolute pointing techniques that cast a ray from the user to a target are affected by humans' limited pointing accuracy. Recent work suggests that accuracy can be improved by compensating for systematic offsets between the targets a user aims at and the rays cast from the user to the target. In this paper, we investigate mid-air pointing in the real world and in virtual reality. Through a pointing study, we model the offsets to improve pointing accuracy and show that being in a virtual environment affects how users point at targets. In a second study, we validate the developed model and analyze the effect of compensating for systematic offsets. We show that the model can significantly improve pointing accuracy when no cursor is provided. We further show that a cursor improves pointing accuracy but also increases selection time.
Sven Mayer et al., University of Stuttgart. CHI 2018. Topics: Full-Body Interaction & Embodied Input; Eye Tracking & Gaze Interaction.
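
The abstract does not spell out the offset model, so the following is a minimal sketch of the general idea only: fit one polynomial correction per angular axis from observed (ray angle, target angle) pairs, then apply it to new pointing rays. The sample data, the quadratic degree, and all function names are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Hypothetical calibration data: horizontal/vertical ray angles (degrees)
# recorded while participants aimed at known target angles.
ray_angles = np.array([[-20.0, 5.0], [0.0, 10.0], [15.0, -3.0], [30.0, 8.0]])
target_angles = np.array([[-18.5, 4.2], [1.2, 9.1], [16.8, -3.9], [31.9, 7.0]])

def fit_offset_model(rays, targets, degree=2):
    """Fit one polynomial per axis mapping raw ray angles to target angles."""
    return [np.polyfit(rays[:, axis], targets[:, axis], degree)
            for axis in range(rays.shape[1])]

def correct(ray, coeffs):
    """Apply the fitted per-axis correction to a new raw pointing ray."""
    return np.array([np.polyval(c, a) for c, a in zip(coeffs, ray)])

model = fit_offset_model(ray_angles, target_angles)
print(correct(np.array([10.0, 2.0]), model))  # corrected pointing direction
```
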
Understanding Face and Eye Visibility in Front-Facing Cameras of Smartphones used in the Wild
Commodity mobile devices are now equipped with high-resolution front-facing cameras, allowing applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, or gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this impacts detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings impact mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
Mohamed Khamis et al., LMU Munich. CHI 2018. Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition.
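
As a rough illustration of the visibility categories discussed above, the sketch below labels a front-camera photo using OpenCV's stock Haar cascades. This is not the state-of-the-art detector evaluated in the paper; the input path and the three-way labeling are assumptions.

```python
import cv2

# Hypothetical input: a photo taken by a smartphone's front-facing camera.
img = cv2.imread("front_camera_photo.jpg")  # path is an assumption
if img is None:
    raise SystemExit("photo not found; substitute a real front-camera image")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Coarse visibility label in the spirit of the paper's categories.
if len(faces) > 0:
    print("full face visible")
elif len(eyes) > 0:
    print("eyes visible, face only partially visible")
else:
    print("neither face nor eyes detected")
```
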
PalmTouch: Using the Palm as an Additional Input Modality on Commodity Smartphones
Touchscreens are the most successful input method for smartphones. Despite their flexibility, touch input is limited to the location of taps and gestures. We present PalmTouch, an additional input modality that differentiates between touches of fingers and the palm. Touching the display with the palm can be a natural gesture, since moving the thumb towards the device's top edge implicitly places the palm on the touchscreen. We present different use cases for PalmTouch, including its use as a shortcut and for improving reachability. To evaluate these use cases, we developed a model that differentiates between finger and palm touches with an accuracy of 99.53% in realistic scenarios. Results of the evaluation show that participants perceive the input modality as intuitive and natural to perform. Moreover, they appreciate PalmTouch as an easy and fast solution to the reachability issue during one-handed smartphone interaction, compared to thumb stretching or grip changes.
Huy Viet Le et al., University of Stuttgart. CHI 2018. Topics: Mid-Air Haptics (Ultrasonic); Hand Gesture Recognition; Foot & Wrist Interaction.
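
The abstract does not describe the 99.53% model itself, so the following is a toy sketch of the underlying idea only: classify each touch contact as finger or palm from simple blob features, exploiting that palms tend to produce large, elongated contacts. The features, the tiny training set, and the random-forest choice are illustrative assumptions, not the authors' model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per touch contact: blob area (mm^2), eccentricity,
# and distance to the nearest screen edge (mm).
X = np.array([
    [40.0, 0.2, 30.0],   # fingertip: small, round, mid-screen
    [55.0, 0.3, 12.0],   # fingertip
    [420.0, 0.8, 5.0],   # palm: large, elongated, near the edge
    [380.0, 0.7, 8.0],   # palm
])
y = np.array([0, 0, 1, 1])  # 0 = finger, 1 = palm

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[300.0, 0.75, 6.0]]))  # -> [1], classified as palm
```
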
Physical Keyboards in Virtual Reality: Analysis of Typing Performance and Effects of Avatar Hands
Entering text is one of the most common tasks when interacting with computing systems. Virtual Reality (VR) presents a challenge, as neither the user's hands nor the physical input devices are directly visible; hence, conventional desktop peripherals become slow, imprecise, and cumbersome. We developed an apparatus that tracks the user's hands and a physical keyboard and visualizes both in VR. In a text input study with 32 participants, we investigated the achievable text entry speed and the effect of hand representations and transparency on typing performance, workload, and presence. With our apparatus, experienced typists benefited from seeing their hands and reached almost their outside-VR performance. Inexperienced typists profited from semi-transparent hands, which enabled them to type just 5.6 WPM slower than with a regular desktop setup. We conclude that optimizing the visualization of hands in VR is important, especially for inexperienced typists, to enable high typing performance.
Pascal Knierim et al., Ludwig-Maximilian University of Munich. CHI 2018. Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research.
Evaluating the Disruptiveness of Mobile Interactions: A Mixed-Method Approach
While the proliferation of mobile devices has rendered mobile notifications ubiquitous, researchers are only slowly beginning to understand how these technologies affect everyday social interactions. In particular, the negative social influence of mobile interruptions remains unexplored from a methodological perspective. This paper contributes a mixed-method evaluation procedure for assessing the disruptive impact of mobile interruptions in conversation. The approach combines quantitative eye tracking, qualitative analysis, and a simulated conversation environment to enable fast assessment of disruptiveness. It is intended to be used as part of an iterative interaction design process. We describe our approach in detail, present an example of its use to study a new call-declining technique, and reflect upon the pros and cons of our approach.
Sven Mayer et al., University of Stuttgart. CHI 2018. Topics: Notification & Interruption Management; Computational Methods in HCI.
Pac-Many: Movement Behavior when Playing Collaborative and Competitive Games on Large Displays
Previous work has shown that large high-resolution displays (LHRDs) can enhance collaboration between users. As LHRDs allow free movement in front of the screen, an understanding of movement behavior is required to build successful interfaces for these devices. This paper presents Pac-Many, a multiplayer version of the classic computer game Pac-Man, designed to study group dynamics when using LHRDs. We utilized smartphones as game controllers to enable free movement while playing the game. In a lab study using a 4 m × 1 m LHRD, 24 participants (12 pairs) played Pac-Many in collaborative and competitive conditions. The results show that players in the collaborative condition divided the screen space evenly. In contrast, competing players stood closer together to deny each other an advantage. We discuss how the nature of the task matters when designing and analyzing collaborative interfaces for LHRDs. Our work shows how to account for the spatial aspects of interaction with LHRDs to build immersive experiences.
Sven Mayer et al., University of Stuttgart. CHI 2018. Topics: Game UX & Player Behavior; Multiplayer & Social Games.
Reading on Smart Glasses: The Effect of Text Position, Presentation Type and Walking
Smart glasses are increasingly being used in professional contexts. With key applications such as short messaging and news readers, they enable continuous access to textual information. In particular, smart glasses allow reading while performing other activities, as they do not occlude the user's view of the world. For efficient reading, it is necessary to understand how text should be presented on them. We therefore conducted a study with 24 participants using a Microsoft HoloLens to investigate how to display text on smart glasses while walking and sitting. We compared text presentation in the top-right, center, and bottom-center positions with Rapid Serial Visual Presentation (RSVP) and line-by-line scrolling. We found that text displayed in the top-right of smart glasses increases subjective workload and reduces comprehension. RSVP yields higher comprehension while sitting; conversely, scrolling yields higher comprehension while walking. Insights from our study inform the design of reading interfaces for smart glasses.
Rufat Rzayev et al., University of Stuttgart. CHI 2018. Topics: Eye Tracking & Gaze Interaction; AR Navigation & Context Awareness.
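
RSVP itself is simple to sketch: words are flashed one at a time at a fixed rate, so reading speed is set by a words-per-minute parameter rather than by eye movements. A minimal terminal-based sketch follows; the display details and the default rate are assumptions, not the study's HoloLens implementation.

```python
import time

def rsvp(text, wpm=250):
    """Present text word by word at a fixed rate (RSVP)."""
    delay = 60.0 / wpm  # seconds per word
    for word in text.split():
        print(f"\r{word:^20}", end="", flush=True)  # overwrite in place
        time.sleep(delay)
    print()

rsvp("Rapid Serial Visual Presentation shows one word at a time", wpm=250)
```
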
Fingers' Range and Comfortable Area for One-Handed Smartphone Interaction Beyond the Touchscreen
Previous research and recent smartphone development have presented a wide range of input controls beyond the touchscreen. Fingerprint scanners, silent switches, and Back-of-Device (BoD) touch panels offer additional ways to perform input. However, with an increasing number of input controls on the device, unintentional input or limited reachability can hinder interaction. We conducted a study of one-handed use to investigate the areas that can be reached without losing grip stability (comfortable area) and with stretched fingers (maximum range) on four different phone sizes. We describe the characteristics of the comfortable area and maximum range for the different phone sizes and derive four design implications for the placement of input controls to support one-handed BoD and edge interaction. Among others, we show that the index and middle fingers are best suited for BoD interaction and that the grip shifts towards the top edge with increasing phone size.
Huy Viet Le et al., University of Stuttgart. CHI 2018. Topics: Foot & Wrist Interaction; Prototyping & User Testing.
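
One plausible way to quantify a "comfortable area" from logged touch points is the area of their convex hull; the sketch below does this with SciPy. The sample coordinates are made up, and the hull-based definition is an assumption, not necessarily the paper's analysis.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical back-of-device touch samples (x, y in mm), recorded while a
# participant moved the index finger without losing grip stability.
points = np.array([[10, 40], [25, 55], [40, 60], [55, 50],
                   [50, 30], [30, 20], [15, 25]])

hull = ConvexHull(points)
print(f"comfortable area: {hull.volume:.1f} mm^2")  # 2D hull: volume == area
print("boundary points:", points[hull.vertices])
```
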
The Effect of Road Bumps on Touch Interaction in Cars
Touchscreens are a common fixture in current vehicles. With autonomous driving, we can expect touch interaction with in-vehicle media systems to increase considerably. In spite of vehicle suspension systems, road perturbations will continue to exert forces that can render in-vehicle touch interaction challenging. Using a motion simulator, we investigate how different vehicle speeds interact with road features (i.e., speed bumps) to influence touch interaction, and we determine their effect on pointing accuracy and task completion time. We show that road bumps have a significant effect on touch input and can decrease accuracy by 19%. In light of this, we developed a Random Forest (RF) model that improves touch accuracy by 32.0% on our test set and by 22.5% on our validation set. As the lightweight model uses only features that can easily be determined through inertial measurement units, it could readily be deployed in current automobiles.
Sven Mayer et al., University of Stuttgart. AutoUI 2018. Topics: In-Vehicle Haptic, Audio & Multimodal Feedback.
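
A hedged sketch of the general approach the abstract describes: train a random-forest regressor that predicts the 2D touch offset from IMU-derived features, then subtract the predicted offset from raw touch points. The synthetic data and the feature choice are assumptions; the paper's actual features and training setup are not given in the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical training data: per-touch IMU features (e.g., accelerometer
# z-peak, gyroscope magnitude, vehicle speed) and the observed 2D touch
# offset (mm). Here both are synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, :2] + rng.normal(scale=0.3, size=(200, 2))

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Correct a new touch: subtract the predicted offset from the raw position.
raw_touch = np.array([120.0, 80.0])       # mm, screen coordinates
features = np.array([[0.5, -0.2, 1.1]])   # IMU features at touch time
corrected = raw_touch - model.predict(features)[0]
print(corrected)
```
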
Designing Consistent Gestures Across Device Types: Eliciting RSVP Controls for Phone, Watch, and Glasses
In the era of ubiquitous computing, people expect applications to work across different devices. To provide a seamless user experience, it is therefore crucial that interfaces and interactions are consistent across device types. In this paper, we present a method to create gesture sets that are consistent and easily transferable. Our proposed method entails (1) eliciting gestures on each device type, (2) consolidating a unified gesture set, and (3) a final validation by calculating a transferability score. We tested our approach by eliciting a set of user-defined gestures for reading with Rapid Serial Visual Presentation (RSVP) of text on three device types: phone, watch, and glasses. We present the resulting unified gesture set for RSVP reading and show the feasibility of our method for eliciting gesture sets that are consistent across device types with different form factors.
Tilman Dingler et al., Osaka Prefecture University. CHI 2018. Topics: Hand Gesture Recognition; Ubiquitous Computing.
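
The abstract does not define the transferability score, so the following is only one hypothetical reading: the share of gestures in the unified set that was also elicited on every device type. The gesture names and the formula itself are illustrative assumptions, not the paper's metric.

```python
# Hypothetical elicitation results per device type.
elicited = {
    "phone":   {"swipe_left", "swipe_right", "double_tap", "shake"},
    "watch":   {"swipe_left", "swipe_right", "twist"},
    "glasses": {"swipe_left", "swipe_right", "nod"},
}
unified = {"swipe_left", "swipe_right", "double_tap"}

# A gesture counts as transferable if every device type elicited it.
transferable = {g for g in unified
                if all(g in gestures for gestures in elicited.values())}
score = len(transferable) / len(unified)
print(f"transferability score: {score:.2f}")  # 0.67 in this toy example
```
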
InfiniTouch: Finger-Aware Interaction on Fully Touch Sensitive Smartphones
Smartphones are the most successful mobile devices and offer intuitive interaction through touchscreens. Current devices treat all fingers equally and only sense touch contacts on the front of the device. In this paper, we present InfiniTouch, the first system that enables touch input on the whole device surface and identifies the fingers touching the device, without external sensors and while keeping the form factor of a standard smartphone. We first developed a prototype with capacitive sensors on the front, the back, and three sides. We then conducted a study to train a convolutional neural network that identifies fingers with an accuracy of 95.78% while estimating their position with a mean absolute error of 0.74 cm. We demonstrate the usefulness of InfiniTouch through multiple use cases, including finger-aware gestures and using the finger flexion state as an action modifier.
Huy Viet Le et al., University of Stuttgart. UIST 2018. Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Hand Gesture Recognition; Full-Body Interaction & Embodied Input.
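
As a structural sketch of the kind of network the abstract describes, the following defines a small convolutional classifier that maps a low-resolution capacitive image of the device surface to one of five finger labels. The input resolution, layer sizes, and classification-only output (the paper's network also regresses finger position) are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class FingerNet(nn.Module):
    """Toy CNN: capacitive image of the device surface -> finger label."""
    def __init__(self, n_fingers=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        # 28x15 input -> 7x3 feature maps after two 2x2 poolings
        self.classifier = nn.Linear(32 * 7 * 3, n_fingers)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = FingerNet()
capacitive_image = torch.randn(1, 1, 28, 15)  # dummy batch of one frame
print(net(capacitive_image).shape)  # -> torch.Size([1, 5])
```
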