Smooth as Steel Wool: Effects of Visual Stimuli on the Haptic Perception of Roughness in Virtual Reality
Haptic feedback is essential for lifelike Virtual Reality (VR) experiences. To provide a wide range of matching sensations of being touched or stroked, current approaches typically require large numbers of different physical textures. However, even advanced devices can only accommodate a limited number of textures to remain wearable. Therefore, a better understanding of how expectations elicited by different visualizations affect haptic perception is necessary to balance physical constraints against a wide variety of matching physical textures. In this work, we conducted an experiment (N=31) assessing how the perception of roughness is affected within VR. We designed a prototype for arm stroking and compared the effects of different visualizations on the perception of physical textures with distinct roughnesses. Additionally, we used the visualizations' real-world materials, no haptics, and vibrotactile feedback as baselines. Among other results, we found that two levels of roughness can be sufficient to convey a realistic illusion.
2022 | Sebastian Günther et al. | Technical University of Darmstadt | Vibrotactile Feedback & Skin Stimulation | Immersion & Presence Research | CHI

Squeezy-Feely: Investigating Lateral Thumb-Index Pinching as an Input Modality
From zooming on smartphones and mid-air gestures to deformable user interfaces, thumb-index pinching grips are used in many interaction techniques. However, there is still a lack of systematic understanding of how the accuracy and efficiency of such grips are affected by factors such as counterforce, grip span, and grip direction. Therefore, in this paper, we contribute an evaluation (N=18) of thumb-index pinching performance in a visual targeting task using scales of up to 75 items. As part of our findings, we conclude that pinching interaction between the thumb and index finger is also a promising modality for one-dimensional input at higher scales. Furthermore, we discuss and outline implications for future user interfaces that benefit from pinching as an additional and complementary interaction modality.
2022 | Martin Schmitz et al. | Technical University of Darmstadt | In-Vehicle Haptic, Audio & Multimodal Feedback | Hand Gesture Recognition | CHI

SkyPort: Investigating 3D Teleportation Methods in Virtual Environments
Teleportation has become the de facto standard of locomotion in Virtual Reality (VR) environments. However, teleportation with parabolic and linear target aiming methods is restricted to horizontal 2D planes, and it is unknown how these methods transfer to 3D space. In this paper, we propose six 3D teleportation methods for virtual environments based on the combination of two existing aiming methods (linear and parabolic) and three types of transitioning to a target (instant, interpolated, and continuous). To investigate the performance of the proposed teleportation methods, we conducted a controlled lab experiment (N=24) with a mid-air coin collection task to assess accuracy, efficiency, and VR sickness. We discovered that the linear aiming method leads to faster and more accurate target selection. Moreover, a combination of linear aiming and instant transitioning leads to the highest efficiency and accuracy without increasing VR sickness.
2022 | Andrii Matviienko et al. | Technical University of Darmstadt | Social & Collaborative VR | Immersion & Presence Research | CHI

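The two aiming methods compared in SkyPort differ in how a pointer maps to a target point. As a rough illustration only (not the authors' implementation; the launch speed, gravity constant, time step, and ground-plane assumption are hypothetical), a linear pointer intersects a straight ray with a surface, while a parabolic pointer traces a projectile arc:

```python
def parabolic_target(origin, direction, speed=10.0, g=9.81, steps=200, dt=0.02):
    """Numerically trace a parabolic pointer arc until it crosses the ground plane y=0."""
    x, y, z = origin
    vx, vy, vz = (c * speed for c in direction)  # initial velocity along the aim direction
    for _ in range(steps):
        nx, ny, nz = x + vx * dt, y + vy * dt, z + vz * dt
        if ny <= 0.0:  # crossed the ground plane: interpolate the exact hit point
            t = y / (y - ny)
            return (x + (nx - x) * t, 0.0, z + (nz - z) * t)
        vy -= g * dt  # gravity bends the arc downward
        x, y, z = nx, ny, nz
    return None  # arc never reached the ground within the traced horizon

def linear_target(origin, direction):
    """Intersect a straight aiming ray with the ground plane y=0 (if aimed downward)."""
    if direction[1] >= 0:
        return None  # ray points up or parallel: no ground hit
    t = -origin[1] / direction[1]
    return (origin[0] + direction[0] * t, 0.0, origin[2] + direction[2] * t)
```

The sketch shows why both classic methods are tied to a 2D plane: each resolves the aim to a point on the ground, which is exactly the restriction the paper's six 3D variants lift for mid-air targets.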
CameraReady: Assessing the Influence of Display Types and Visualizations on Posture Guidance
Computer-supported posture guidance is used in sports, dance training, artistic expression with movement, and learning gestures for interaction. At present, the influence of display types and visualizations has not been investigated in the literature. These factors are important as they directly impact perception and cognitive load, and hence influence participants' performance. In this paper, we conducted a controlled experiment with 20 participants to compare five display types with different screen sizes: smartphones, tablets, desktop monitors, TVs, and large displays. On each device, we compared three common visualizations for posture guidance: skeletons, silhouettes, and 3D body models. To conduct our assessment, we developed a mobile, cross-platform system that only requires a single camera. Our results show that, compared to a smartphone display, larger displays yield a lower error (12%). Regarding the choice of visualization, participants rated 3D body models as significantly more usable than a skeleton visualization.
2021 | Hesham Elsayed et al. | Human Pose & Activity Recognition | Dance & Body Movement Computing | DIS

Let’s Frets! Assisting Guitar Students during Practice via Capacitive Sensing
Learning a musical instrument requires regular exercise. However, students are often on their own during their practice sessions due to the limited time with their teachers, which increases the likelihood of mislearning playing techniques. To address this issue, we present Let's Frets: a modular guitar learning system that provides visual indicators and captures finger positions on a 3D-printed capacitive guitar fretboard. We based the design of Let's Frets on requirements collected through in-depth interviews with professional guitarists and teachers. In a user study (N=24), we evaluated the feedback modules of Let's Frets against fretboard charts. Our results show that visual indicators require the least time to realize new finger positions, while a combination of visual indicators and position capturing yielded the highest playing accuracy. We conclude how Let's Frets enables independent practice sessions that can be translated to other musical instruments.
2021 | Karola Marky et al. | Technische Universität Darmstadt | Electrical Muscle Stimulation (EMS) | Haptic Wearables | Programming Education & Computational Thinking | CHI

Oh, Snap! A Fabrication Pipeline to Magnetically Connect Conventional and 3D-Printed Electronics
3D printing has revolutionized rapid prototyping by speeding up the creation of custom-shaped objects. With the rise of multi-material 3D printers, these custom-shaped objects can now be made interactive in a single pass through passive conductive structures. However, connecting conventional electronics to these conductive structures often still requires time-consuming manual assembly involving many wires, soldering, or gluing. To alleviate these shortcomings, we propose Oh, Snap!: a fabrication pipeline and interfacing concept to magnetically connect a 3D-printed object equipped with passive sensing structures to conventional sensing electronics. To this end, Oh, Snap! utilizes ferromagnetic and conductive 3D-printed structures, printable in a single pass on standard printers. We further present a proof-of-concept capacitive sensing board that enables easy and robust magnetic assembly to quickly create interactive 3D-printed objects. We evaluate Oh, Snap! by assessing the robustness and quality of the connection and demonstrate its broad applicability through a series of example applications.
2021 | Martin Schmitz et al. | Technical University of Darmstadt | Desktop 3D Printing & Personal Fabrication | Circuit Making & Hardware Prototyping | CHI

Itsy-Bits: Fabrication and Recognition of 3D-Printed Tangibles with Small Footprints on Capacitive Touchscreens
Tangibles on capacitive touchscreens are a promising approach to overcome the limited expressiveness of touch input. While research has suggested many approaches to detect tangibles, the corresponding tangibles are either costly or have a considerable minimum size. This makes them bulky and unattractive for many applications. At the same time, they obscure valuable display space for interaction. To address these shortcomings, we contribute Itsy-Bits: a fabrication pipeline for 3D printing and recognition of tangibles on capacitive touchscreens with a footprint as small as a fingertip. Each Itsy-Bit consists of an enclosing 3D object and a unique conductive 2D shape on its bottom. Using only raw data from commodity capacitive touchscreens, Itsy-Bits reliably identifies and locates a variety of shapes in different sizes and estimates their orientation. Through example applications and a technical evaluation, we demonstrate the feasibility and applicability of Itsy-Bits for tangibles with small footprints.
2021 | Martin Schmitz et al. | Technical University of Darmstadt | Circuit Making & Hardware Prototyping | Customizable & Personalized Objects | CHI

3D-Auth: Two-Factor Authentication with Personalized 3D-Printed Items
Two-factor authentication is a widely recommended security mechanism and is already offered by many services. However, known methods and physical realizations exhibit considerable usability and customization issues. In this paper, we propose 3D-Auth, a new concept for two-factor authentication. 3D-Auth is based on customizable 3D-printed items that combine two authentication factors in one object. The object's bottom contains a uniform grid of conductive dots that are connected to a unique embedded structure inside the item. Based on the interaction with the item, different dots turn into touch-points and form an authentication pattern. This pattern can be recognized by a capacitive touchscreen. Based on an expert design study, we present an interaction space with six categories of possible authentication interactions. In a user study, we demonstrate the feasibility of 3D-Auth items and show that the items are easy to use and the interactions are easy to remember.
2020 | Karola Marky et al. | Technische Universität Darmstadt & Keio University | Passwords & Authentication | Customizable & Personalized Objects | CHI

Podoportation: Foot-Based Locomotion in Virtual Reality
Virtual Reality (VR) allows for infinitely large environments. However, the physically traversable space is always limited by real-world boundaries. This discrepancy between physical and virtual dimensions renders traditional locomotion methods used in the real world infeasible. To alleviate these limitations, research has proposed various artificial locomotion concepts such as teleportation, treadmills, and redirected walking. However, these concepts occupy the user's hands, require complex hardware, or demand large physical spaces. In this paper, we contribute nine foot-based VR locomotion concepts, relying on the 3D position of the user's feet and the pressure applied to the sole as input modalities. We evaluate our concepts and compare them to the state-of-the-art point & teleport technique in a controlled experiment with 20 participants. The results confirm the viability of our approaches for foot-based and engaging locomotion. Further, based on the findings, we contribute a wireless hardware prototype implementation.
2020 | Julius von Willich et al. | Technische Universität Darmstadt | Full-Body Interaction & Embodied Input | Foot & Wrist Interaction | CHI

Walk The Line: Leveraging Lateral Shifts of the Walking Path as an Input Modality for Head-Mounted Displays
Recent technological advances have made head-mounted displays (HMDs) smaller and untethered, fostering the vision of ubiquitous interaction in a digitally augmented physical world. Consequently, a major part of the interaction with such devices will happen on the go, calling for interaction techniques that allow users to interact while walking. In this paper, we explore lateral shifts of the walking path as a hands-free input modality. The available input options are visualized as lanes on the ground parallel to the user's walking path. Users can select options by shifting the walking path sideways to the respective lane. We contribute the results of a controlled experiment with 18 participants, confirming the viability of our approach for fast, accurate, and joyful interactions. Further, based on the findings of the controlled experiment, we present three example applications.
2020 | Florian Müller et al. | Technische Universität Darmstadt | Full-Body Interaction & Embodied Input | Eye Tracking & Gaze Interaction | CHI

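The lane-selection idea behind Walk The Line can be reduced to a simple mapping from the walker's signed lateral offset to a lane index. The following sketch is purely illustrative (the lane width, lane count, and clamping behavior are assumptions, not values from the paper):

```python
def select_lane(lateral_offset, lane_width=0.5, num_lanes=5):
    """Map a signed lateral offset (meters) from the original walking path
    to a lane index; lane 0 is centered on the path, negative is left."""
    idx = round(lateral_offset / lane_width)  # nearest lane center
    half = num_lanes // 2
    # clamp to the lanes actually visualized on either side of the path
    return max(-half, min(half, idx))
```

In practice, such a mapping would typically be combined with a dwell time or hysteresis so that small gait wobbles do not trigger unintended selections.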
./trilaterate: A Fabrication Pipeline to Design and 3D Print Hover-, Touch-, and Force-Sensitive Objects
Hover, touch, and force are promising input modalities that are increasingly integrated into screens and everyday objects. However, these interactions are often limited to flat surfaces, and the integration of suitable sensors is time-consuming and costly. To alleviate these limitations, we contribute Trilaterate: a fabrication pipeline to 3D print custom objects that detect the 3D position of a finger hovering, touching, or forcing them by combining multiple capacitance measurements via capacitive trilateration. Trilaterate places and routes actively-shielded sensors inside the object and operates on consumer-level 3D printers. We present technical evaluations and example applications that validate and demonstrate the wide applicability of Trilaterate.
2019 | Martin Schmitz et al. | Technische Universität Darmstadt | Shape-Changing Interfaces & Soft Robotic Materials | Circuit Making & Hardware Prototyping | CHI

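Trilateration itself, once per-sensor distance estimates are available, is a standard least-squares problem. The sketch below shows the geometric core only under assumed inputs: it takes sensor positions and distances directly, whereas the paper derives distances from capacitance measurements via a device-specific mapping that is omitted here:

```python
import numpy as np

def trilaterate(sensor_pos, distances):
    """Estimate a 3D finger position from >= 4 non-coplanar sensor positions
    and per-sensor distance estimates (e.g., mapped from capacitance)."""
    p = np.asarray(sensor_pos, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Linearize the sphere equations |x - p_i|^2 = d_i^2 by subtracting
    # the first one:  2 (p_i - p_0) . x = (d_0^2 - d_i^2) + (|p_i|^2 - |p_0|^2)
    A = 2.0 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + (np.sum(p[1:] ** 2, axis=1) - np.sum(p[0] ** 2))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

Using least squares rather than an exact intersection makes the estimate tolerant of noisy distance readings, which matters for capacitance-derived distances.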
Mind the Tap: Assessing Foot-Taps for Interacting with Head-Mounted Displays
From voice commands and air taps to touch gestures on frames: various techniques for interacting with head-mounted displays (HMDs) have been proposed. While these techniques have benefits and drawbacks depending on the user's current situation, research on interacting with HMDs has not yet concluded. In this paper, we add to the body of research on interacting with HMDs by exploring foot-tapping as an input modality. Through two controlled experiments with a total of 36 participants, we first explore direct interaction with interfaces that are displayed on the floor and require the user to look down to interact. Second, we investigate indirect interaction with interfaces that, although operated by the user's feet, are always visible as they float in front of the user. Based on the results of the two experiments, we provide design recommendations for direct and indirect foot-based user interfaces.
2019 | Florian Müller et al. | Technische Universität Darmstadt | Foot & Wrist Interaction | CHI

Off-Line Sensing: Memorizing Interactions in Passive 3D-Printed Objects
Embedding sensors into objects allows them to recognize various interactions. However, sensing usually requires active electronics that are often costly, need time to be assembled, and constantly draw power. Thus, we propose off-line sensing: passive 3D-printed sensors that detect one-time interactions, such as accelerating or flipping, but require neither active electronics nor power at the time of the interaction. They memorize a pre-defined interaction via an embedded structure filled with a conductive medium (e.g., a liquid). Whether a sensor was exposed to the interaction can be read out via a capacitive touchscreen. Sensors are printed in a single pass on a consumer-level 3D printer. Through a series of experiments, we show the feasibility of off-line sensing.
2018 | Martin Schmitz et al. | Technische Universität Darmstadt | Desktop 3D Printing & Personal Fabrication | Circuit Making & Hardware Prototyping | CHI