Kinethreads: Soft Full-Body Haptic Exosuit using Low-Cost Motor-Pulley Mechanisms

Our bodies experience a wide variety of kinesthetic forces as we go about our daily lives, including the weight of held objects, contact with surfaces, gravitational loads, and acceleration and centripetal forces while driving, to name just a few. These forces are crucial to realism, yet are simply not possible to render with today's consumer haptic suits, which primarily rely on arrays of vibration actuators built into vests. Rigid exoskeletons have more kinesthetic capability to apply forces directly to users' joints, but are generally cumbersome to wear and cost many thousands of dollars. In this work, we present Kinethreads: a new full-body haptic exosuit design built around string-based motor-pulley mechanisms, which keeps our suit lightweight (<5 kg), soft and flexible, quick-to-wear (<30 seconds), comparatively low-cost (~$400), and yet capable of rendering expressive, distributed, and forceful (up to 120 N) effects. We detail our system design, implementation, and results from a multi-part performance evaluation and user study.

2025 · Vivian Shen et al. · UIST · Topics: Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input

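The force-rendering path in a motor-pulley string mechanism reduces to simple motor physics. Below is a minimal Python sketch (not the authors' code; the pulley radius, motor torque constant, and current limit are hypothetical values chosen for illustration) mapping a desired string tension to a motor current command:

```python
PULLEY_RADIUS_M = 0.01           # 1 cm pulley (assumed)
TORQUE_CONSTANT_NM_PER_A = 0.3   # motor Kt (assumed)
MAX_CURRENT_A = 5.0              # driver limit (assumed)

def current_for_tension(tension_n: float) -> float:
    """Map a desired string tension (N) to a motor current command (A).

    Torque required at the pulley is tension * radius; an ideal DC motor
    produces torque = Kt * current, so current = tension * r / Kt.
    """
    torque_nm = tension_n * PULLEY_RADIUS_M
    return max(0.0, min(torque_nm / TORQUE_CONSTANT_NM_PER_A, MAX_CURRENT_A))

# The paper's stated 120 N maximum, under these assumed constants:
print(f"{current_for_tension(120.0):.2f} A")   # -> 4.00 A
```
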
EclipseTouch: Touch Segmentation on Ad Hoc Surfaces using Worn Infrared Shadow Casting

The ability to detect touch events on uninstrumented, everyday surfaces has been a long-standing goal for mixed reality systems. Prior work has shown that virtual interfaces bound to physical surfaces offer performance and ergonomic benefits over tapping at interfaces floating in the air. A wide variety of approaches have been previously developed, to which we contribute a new headset-integrated technique called EclipseTouch. We use a combination of a computer-triggered camera and one or more infrared emitters to create structured shadows, from which we can accurately estimate hover distance (mean error of 6.9 mm) and touch contact (98.0% accuracy). We discuss how our technique works across a range of conditions, including surface material, interaction orientation, and environmental lighting.

2025 · Vimal Mollyn et al. · UIST · Topics: Haptic Wearables; Prototyping & User Testing

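The shadow-casting principle admits a compact geometric reading. The Python sketch below is illustrative only (the symbols b, H, and s are mine, and the paper's pipeline presumably learns from the structured shadows rather than applying this formula directly): with an emitter at height H above the surface and lateral baseline b from the fingertip's surface projection, a fingertip hovering at height h casts a shadow offset s = b*h / (H - h), which can be inverted to estimate hover distance.

```python
def hover_height_mm(shadow_offset_mm: float,
                    emitter_height_mm: float,
                    baseline_mm: float) -> float:
    """Invert s = b*h / (H - h) to recover hover height h (similar triangles)."""
    s, H, b = shadow_offset_mm, emitter_height_mm, baseline_mm
    return s * H / (b + s)

# Shadow observed 5 mm from the fingertip; emitter ~300 mm above the surface
# and ~100 mm to the side (hypothetical headset geometry):
print(f"{hover_height_mm(5.0, 300.0, 100.0):.1f} mm")  # ~14.3 mm hover
```
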
Reel Feel: Rich Haptic XR Experiences Using an Active, Worn, Multi-String Device

While many haptic systems have been demonstrated for use in virtual and augmented reality, they most often enable a single category of feedback (e.g., kinematic braking, object compliance, textures). Combining prior systems to achieve multi-dimensional effects is unwieldy, expensive, and often physically impossible. We believe this is holding back the ubiquity of rich haptics in both the consumer and industrial AR/VR/XR domains. In this work, we describe Reel Feel, a novel, shoulder-worn haptic system capable of rendering rigid geometry, object-bound haptic animations, impulsive forces, surface compliance, and fine-grained spatial effects, all in one unified, worn device. Because a system's mass is felt most on the hands, our design minimizes hand-borne weight (<10 g), in contrast to many prior systems that are heavy gloves and exoskeletons. Finally, we sought to keep the device practical: self-contained, low-cost, and low-power enough to be feasible for consumer adoption with a high degree of mobility. In a user evaluation, our device was rated higher than a conventional vibrotactile baseline on all qualitative measures (immersion, realism, etc.) and allowed participants to more accurately discern object compliance and fine-grained spatial effects.

2025 · Nathan DeVrio et al. · Carnegie Mellon University, Human-Computer Interaction Institute · CHI · Topics: Haptic Wearables; Immersion & Presence Research; VR Medical Training & Rehabilitation

PatternTrack: Multi-Device Tracking Using Infrared, Structured-Light Projections from Built-in LiDAR

As augmented reality devices (e.g., smartphones and headsets) proliferate in the market, multi-user AR scenarios are set to become more common. Co-located users will want to share coherent and synchronized AR experiences, but this is surprisingly cumbersome with current methods. In response, we developed PatternTrack, a novel tracking approach that repurposes the structured infrared light patterns emitted by VCSEL-driven depth sensors, like those found in the Apple Vision Pro, iPhone, iPad, and Meta Quest 3. Our approach is infrastructure-free, requires no pre-registration, works on featureless surfaces, and provides the real-time 3D position and orientation of other users' devices. In our evaluation --- tested on six different surfaces and with inter-device distances of up to 260 cm --- we found a mean 3D positional tracking error of 11.02 cm and a mean angular error of 6.81°.

2025 · Daehwa Kim et al. · Carnegie Mellon University, Human-Computer Interaction Institute · CHI · Topics: AR Navigation & Context Awareness; Context-Aware Computing; Ubiquitous Computing

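One way to make the pose-recovery step concrete is to frame it as an inverse-camera PnP problem; whether this matches the paper's actual solver is an assumption. The sketch below (Python with OpenCV) supposes the emitter's dot-pattern ray directions are known, and that the dots have already been detected and matched against the 3D points where they land on the surface, as measured by the observer's own depth camera:

```python
import numpy as np
import cv2

def estimate_emitter_pose(landing_points_3d, pattern_rays):
    """Recover an emitting device's 6DOF pose from its observed dot pattern.

    landing_points_3d: (N, 3) dot positions on the surface, measured in the
                       observer's frame (e.g., via its own depth camera).
    pattern_rays:      (N, 2) matched dots' normalized ray coordinates
                       (x/z, y/z) in the emitter's own frame.
    Returns (R, t): emitter orientation and position w.r.t. the observer.
    """
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(landing_points_3d, np.float64),
        np.asarray(pattern_rays, np.float64),
        np.eye(3),   # identity intrinsics: rays are already normalized
        None)        # no lens distortion for an idealized emitter
    if not ok:
        raise RuntimeError("PnP solve failed")
    R, _ = cv2.Rodrigues(rvec)
    # solvePnP yields the world->"camera" transform; invert it to get the
    # emitter's pose in the observer's coordinate system.
    return R.T, (-R.T @ tvec).ravel()
```
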
EgoTouch: On-Body Touch Input Using AR/VR Headset Cameras

In augmented and virtual reality (AR/VR) experiences, a user’s arms and hands can provide a convenient and tactile surface for touch input. Prior work has shown on-body input to have significant speed, accuracy, and ergonomic benefits over in-air interfaces, which are common today. In this work, we demonstrate high-accuracy, bare-hands (i.e., no special instrumentation of the user) skin input using just an RGB camera, like those already integrated into all modern XR headsets. Our results show this approach can be accurate and robust across diverse lighting conditions, skin tones, and body motion (e.g., input while walking). Finally, our pipeline also provides rich input metadata, including touch force, finger identification, angle of attack, and rotation. We believe these are the requisite technical ingredients to more fully unlock on-skin interfaces that have been well motivated in the HCI literature but have lacked robust and practical methods.

2024 · Vimal Mollyn et al. · UIST · Topics: Mid-Air Haptics (Ultrasonic); On-Skin Display & On-Skin Input

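A generic per-frame formulation of this kind of camera-based touch segmentation is easy to sketch. The PyTorch snippet below is hypothetical (the model architecture, crop size, and smoothing window are stand-ins, not the authors' pipeline): crop a patch around the tracked fingertip, classify touch versus hover, and smooth decisions over recent frames.

```python
import torch
import torch.nn as nn

class TouchNet(nn.Module):
    """Tiny binary touch/hover classifier over fingertip-centered crops."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, crop):                  # crop: (B, 3, 64, 64)
        x = self.features(crop).flatten(1)
        return torch.sigmoid(self.head(x))    # P(touch)

def classify(model, crop, history, k=5):
    """Majority-vote over the last k frames to debounce contact events."""
    with torch.no_grad():
        history.append(model(crop).item() > 0.5)
    return sum(history[-k:]) > k // 2
```
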
Power-over-Skin: Full-Body Wearables Powered By Intra-Body RF Energy

Powerful computing devices are now small enough to be easily worn on the body. However, batteries pose a major design and user experience obstacle, adding weight and volume, and generally requiring periodic device removal and recharging. In response, we developed Power-over-Skin, an approach using the human body itself to deliver power to many distributed, battery-free, worn devices. We demonstrate power delivery from on-body distances as far as from head-to-toe, with sufficient energy to power microcontrollers capable of sensing and wireless communication. We share results from a study campaign that informed our implementation, as well as experiments that validate our final system. We conclude with several demonstration devices, ranging from input controllers to longitudinal bio-sensors, which highlight the efficacy and potential of our approach.

2024 · Andy Kong et al. · UIST · Topics: Haptic Wearables; Biosensors & Physiological Monitoring; On-Skin Display & On-Skin Input

Mites: Design and Deployment of a General-Purpose Sensing Infrastructure for Buildings

There is increasing interest in deploying building-scale, general-purpose, and high-fidelity sensing to drive emerging smart building applications. However, the real-world deployment of such systems is challenging due to the lack of system and architectural support. Most existing sensing systems are purpose-built, consisting of hardware that senses a limited set of environmental facets, typically at low fidelity and for short-term deployment. Furthermore, prior systems with high-fidelity sensing and machine learning fail to scale effectively and have fewer primitives, if any, for privacy and security. For these reasons, IoT deployments in buildings are generally short-lived or done as a proof of concept. We present the design of Mites, a scalable end-to-end hardware-software system for supporting and managing distributed general-purpose sensors in buildings. Our design includes robust primitives for privacy and security, essential features for scalable data management, as well as machine learning to support diverse applications in buildings. We deployed our Mites system and 314 Mites devices in Tata Consultancy Services (TCS) Hall at Carnegie Mellon University (CMU), a fully occupied, five-story university building. We present a set of comprehensive evaluations of our system using a series of microbenchmarks and end-to-end evaluations to show how we achieved our stated design goals. We include five proof-of-concept applications to demonstrate the extensibility of the Mites system to support compelling IoT applications. Finally, we discuss the real-world challenges we faced and the lessons we learned over the five-year journey of our stack's iterative design, development, and deployment.
https://dl.acm.org/doi/10.1145/3580865

2023 · Sudershan Boovaraghavan et al. · UbiComp · Topics: Context-Aware Computing; Smart Home Privacy & Security; Smart Cities & Urban Sensing

Pantœnna: Mouth Pose Estimation for AR/VR Headsets Using Low-Profile Antenna and Impedance Characteristic Sensing

Methods for faithfully capturing a user's holistic pose have immediate uses in AR/VR, ranging from multimodal input to expressive avatars. Although body-tracking has received the most attention, the mouth is also of particular importance, given that it is the channel for both speech and facial expression. In this work, we describe a new RF-based approach for capturing mouth pose using an antenna integrated into the underside of a VR/AR headset. Our approach side-steps privacy issues inherent in camera-based methods, while simultaneously supporting silent facial expressions that audio-based methods cannot. Further, compared to bio-sensing methods such as EMG and EIT, our method requires no contact with the wearer's body and can be fully self-contained in the headset, offering a high degree of physical robustness and user practicality. We detail our implementation along with results from two user studies, which show a mean 3D error of 2.6 mm for 11 mouth keypoints across worn sessions without re-calibration.

2023 · Daehwa Kim et al. · UIST · Topics: Eye Tracking & Gaze Interaction; AR Navigation & Context Awareness; Immersion & Presence Research

Fluid Reality: Electroosmotic Pump Arrays for Fine-Grained AR/VR Haptics

Virtual and augmented reality headsets are making significant progress in audio-visual immersion and consumer adoption. However, their haptic immersion remains low, due in part to the limitations of vibrotactile actuators which dominate the AR/VR market. In this work, we present a new approach to create high-resolution shape-changing fingerpad arrays with 20 haptic pixels/cm². Unlike prior pneumatic approaches, our actuators are low-profile (5 mm thick), low-power (approximately 10 mW/pixel), and entirely self-contained, with no tubing or wires running to external infrastructure. We show how multiple actuator arrays can be built into a five-finger, 160-actuator haptic glove that is untethered, lightweight (207 g, including all drive electronics and battery), and has the potential to reach consumer price points at volume production. We describe the results from a technical performance evaluation and a suite of eight user studies, quantifying the diverse capabilities of our system. This includes recognition of object properties such as complex contact geometry, texture, and compliance, as well as expressive spatiotemporal effects.

2023 · Vivian Shen et al. · UIST · Topics: Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials

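To make the haptic-pixel framing concrete: the stated 20 pixels/cm² implies roughly a 2.2 mm center-to-center pitch. The Python sketch below rasterizes a virtual sphere's contact patch onto a small per-finger grid (the rectangular 8x4 layout and binary raised/flush actuation are simplifying assumptions, not the paper's actual fingerpad-shaped array):

```python
import numpy as np

ROWS, COLS = 8, 4     # hypothetical per-finger grid
PITCH_MM = 2.2        # ~20 pixels/cm^2 -> ~2.2 mm center-to-center

def sphere_contact_frame(center_xy_mm, radius_mm):
    """Boolean frame: raise every pixel inside the sphere's contact circle."""
    ys = (np.arange(ROWS) - (ROWS - 1) / 2) * PITCH_MM
    xs = (np.arange(COLS) - (COLS - 1) / 2) * PITCH_MM
    yy, xx = np.meshgrid(ys, xs, indexing="ij")
    cx, cy = center_xy_mm
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= radius_mm ** 2

# A 4 mm contact circle, slightly above the fingerpad center:
print(sphere_contact_frame((0.0, 1.0), 4.0).astype(int))
```
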
SmartPoser: Arm Pose Estimation With a Smartphone and Smartwatch Using UWB and IMU Data

The ability to track a user's arm pose could be valuable in a wide range of applications, including fitness, rehabilitation, augmented reality input, life logging, and context-aware assistants. Unfortunately, this capability is not readily available to consumers. Systems either require cameras, which carry privacy issues, or utilize multiple worn IMUs or markers. In this work, we describe how an off-the-shelf smartphone and smartwatch can work together to accurately estimate arm pose. Moving beyond prior work, we take advantage of more recent ultra-wideband (UWB) functionality on these devices to capture absolute distance between the two devices. This measurement is the perfect complement to inertial data, which is relative and suffers from drift. We quantify the performance of our software-only approach using off-the-shelf devices, showing it can estimate the wrist and elbow joints with a median positional error of 11.0 cm, without the user having to provide training data.

2023 · Nathan DeVrio et al. · UIST · Topics: Human Pose & Activity Recognition; Fitness Tracking & Physical Activity Monitoring; Biosensors & Physiological Monitoring

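The complementary nature of the two signals can be shown with a toy correction step. The Python sketch below is a hypothetical filter, not the authors' pipeline: it nudges a drifting IMU-integrated wrist estimate back toward the sphere defined by the absolute UWB range between phone and watch.

```python
import numpy as np

def fuse(wrist_est, phone_pos, uwb_range_m, gain=0.5):
    """Project a drifting wrist estimate toward the measured UWB range sphere."""
    v = wrist_est - phone_pos
    d = np.linalg.norm(v)
    if d < 1e-6:
        return wrist_est   # degenerate geometry; leave the estimate alone
    # Move a fraction of the radial error along the phone->wrist direction.
    return wrist_est + gain * (uwb_range_m - d) * (v / d)

wrist = np.array([0.30, 0.10, 0.05])   # drifted IMU estimate (meters)
phone = np.zeros(3)                    # phone treated as the range anchor
print(fuse(wrist, phone, uwb_range_m=0.40))   # pulled outward toward 40 cm
```
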
"An Instructor is [already] able to keep track of 30 students": Students’ Perceptions of Smart Classrooms for Improving Teaching & Their Emergent Understandings of Teaching and LearningMulti-modal classroom sensing systems can collect complex behaviors in the classroom at a scale and precision far greater than human observers to capture learning insights and provide personalized teaching feedback. As students are critical stakeholders in the adoption of smart classrooms for the improvement of teaching, open questions remain in understanding student perspectives on the use of their data to provide insights to instructors. We conducted a Speed Dating with storyboards study to explore student values and boundaries regarding the acceptance of classroom sensing systems in STEM college courses. We found that students have several emergent beliefs about teaching and learning that influence their views towards smart classroom technologies. Students also held contextual views on the boundaries of data use depending on the outcome. Our findings have implications for the design and communication of classroom sensing systems that reconcile student and instructor beliefs around teaching and learning.2023TNTricia J. Ngoon et al.Intelligent Tutoring Systems & Learning AnalyticsSTEM Education & Science CommunicationDIS
Surface I/O: Creating Devices with Functional Surface Geometry for Haptics and User Input

Surface I/O is a novel interface approach that functionalizes the exterior surface of devices to provide haptic and touch sensing without dedicated mechanical components. Achieving this requires a unique combination of surface features spanning the macro-scale (5 cm to 1 mm), meso-scale (1 mm to 200 μm), and micro-scale (<200 μm). This approach simplifies interface creation, allowing designers to iterate on form geometry, haptic feeling, and sensing functionality without the limitations of mechanical mechanisms. We believe this can contribute to the concept of "invisible ubiquitous interactivity at scale", where the simplicity and easy implementation of the technique allows it to blend with objects around us. While we prototyped our designs using 3D printers and laser cutters, our technique is applicable to mass production methods, including injection molding and stamping, enabling passive goods with new levels of interactivity.

2023 · Yuran Ding et al. · Carnegie Mellon University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials; Circuit Making & Hardware Prototyping

Flat Panel Haptics: Embedded Electroosmotic Pumps for Scalable Shape Displays

Flat touch interfaces, with or without screens, pervade the modern world. However, their haptic feedback is minimal, prompting much research into haptic and shape-changing display technologies that are self-contained, fast-acting, and offer millimeters of displacement while being only millimeters thick. We present a new, miniaturizable type of shape-changing display using embedded electroosmotic pumps (EEOPs). Our pumps, controlled and powered directly by applied voltage, are 1.5 mm in thickness, and allow complete stackups under 5 mm. Nonetheless, they can move their entire volume's worth of fluid in 1 second, and generate pressures of ±50 kPa, enough to create dynamic, millimeter-scale tactile features on a surface that can withstand typical interaction forces (<1 N). These are the requisite technical ingredients to enable, for example, a pop-up keyboard on a flat smartphone. We experimentally quantify the mechanical and psychophysical performance of our displays and conclude with a set of example interfaces.

2023 · Craig Shultz et al. · Carnegie Mellon University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials

IMUPoser: Full-Body Pose Estimation using IMUs in Phones, Watches, and Earbuds

Tracking body pose on-the-go could have powerful uses in fitness, mobile gaming, context-aware virtual assistants, and rehabilitation. However, users are unlikely to buy and wear special suits or sensor arrays to achieve this end. Instead, in this work, we explore the feasibility of estimating body pose using IMUs already in devices that many users own --- namely smartphones, smartwatches, and earbuds. This approach has several challenges, including noisy data from low-cost commodity IMUs, and the fact that the number of instrumentation points on a user's body is both sparse and in flux. Our pipeline receives whatever subset of IMU data is available, potentially from just a single device, and produces a best-guess pose. To evaluate our model, we created the IMUPoser Dataset, collected from 10 participants wearing or holding off-the-shelf consumer devices and across a variety of activity contexts. We provide a comprehensive evaluation of our system, benchmarking it on both our own and existing IMU datasets.

2023 · Vimal Mollyn et al. · Carnegie Mellon University · CHI · Topics: Human Pose & Activity Recognition; Biosensors & Physiological Monitoring

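A common way to handle sparse, in-flux inputs like this is to give the model a fixed-size input covering every candidate device location and zero-fill absentees. The sketch below illustrates that packing step (the location list, feature count, and presence-flag layout are my assumptions, not the released code):

```python
import torch

LOCATIONS = ["left_wrist", "right_wrist", "left_pocket", "right_pocket", "head"]
FEATS = 12   # e.g., 9 for a flattened rotation matrix + 3 for acceleration

def pack(available: dict) -> torch.Tensor:
    """available: {location: (FEATS,) tensor} for devices present this frame."""
    x = torch.zeros(len(LOCATIONS), FEATS + 1)   # +1 presence flag per slot
    for i, loc in enumerate(LOCATIONS):
        if loc in available:
            x[i, :FEATS] = available[loc]
            x[i, FEATS] = 1.0
    return x.flatten()   # fixed-length input regardless of device subset

# A frame with only a watch and earbuds present:
frame = pack({"left_wrist": torch.randn(FEATS), "head": torch.randn(FEATS)})
print(frame.shape)   # torch.Size([65])
```
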
EtherPose: Continuous Hand Pose Tracking with Wrist-Worn Antenna Impedance Characteristic Sensing

EtherPose is a continuous hand pose tracking system employing two wrist-worn antennas, from which we measure the real-time dielectric loading resulting from different hand geometries (i.e., poses). Unlike worn camera-based methods, our RF approach is more robust to occlusion from clothing and avoids capturing potentially sensitive imagery. Through a series of simulations and empirical studies, we designed a proof-of-concept, worn implementation built around compact vector network analyzers. Sensor data is then interpreted by a machine learning backend, which outputs a fully-posed 3D hand. In a user study, we show how our system can track hand pose with a mean Euclidean joint error of 11.6 mm, even when covered in fabric. We also studied 2DOF wrist angle and micro-gesture tracking. In the future, our approach could be miniaturized and extended to include more and different types of antennas, operating at different self-resonances.

2022 · Daehwa Kim et al. · UIST · Topics: Force Feedback & Pseudo-Haptic Weight; Foot & Wrist Interaction

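The sensing-to-pose mapping can be sketched as straightforward supervised regression. The Python example below is illustrative only (sweep length, joint count, featurization, and model are all assumptions; the paper's ML backend may differ): featurize a complex S11 frequency sweep and regress flattened 3D joint positions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

N_FREQS, N_JOINTS = 101, 21   # sweep points and hand joints (assumed)

def featurize(s11: np.ndarray) -> np.ndarray:
    """Stack magnitude and phase of a complex S11 sweep into a real vector."""
    return np.concatenate([np.abs(s11), np.angle(s11)])

# Stand-in random data in place of recorded, mocap-labeled training pairs:
sweeps = np.random.randn(256, N_FREQS) + 1j * np.random.randn(256, N_FREQS)
X = np.stack([featurize(s) for s in sweeps])
y = np.random.randn(256, N_JOINTS * 3)        # flattened joint xyz labels

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=200).fit(X, y)
joints = model.predict(X[:1]).reshape(N_JOINTS, 3)   # posed 3D hand estimate
```
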
DiscoBand: Multiview Depth-Sensing Smartwatch Strap for Hand, Arm and Environment Tracking

Real-time tracking of a user’s hands, arms and environment is valuable in a wide variety of HCI applications, from context awareness to virtual reality. Rather than rely on fixed and external tracking infrastructure, the most flexible and consumer-friendly approaches are mobile, self-contained, and compatible with popular device form factors (e.g., smartwatches). In this vein, we contribute DiscoBand, a thin sensing strap not exceeding 1 cm in thickness. Sensors operating so close to the skin inherently face issues with occlusion. To overcome this, our strap uses eight distributed depth sensors imaging the hand from different viewpoints, creating a sparse 3D point cloud. An additional eight depth sensors image outwards from the band to track the user’s body and surroundings. In addition to evaluating arm and hand pose tracking, we also describe a series of supplemental applications powered by our band's data, including held object recognition and environment mapping.

2022 · Nathan DeVrio et al. · UIST · Topics: Full-Body Interaction & Embodied Input; Foot & Wrist Interaction; Eye Tracking & Gaze Interaction

ElectriPop: Low-Cost, Shape-Changing Displays Using Electrostatically Inflated Mylar Sheets

We describe how sheets of metalized mylar can be cut and then “inflated” into complex 3D forms with electrostatic charge for use in digitally-controlled, shape-changing displays. This is achieved by placing and nesting various cuts, slits and holes such that mylar elements repel from one another to reach an equilibrium state. Importantly, our technique is compatible with industrial and hobbyist cutting processes, from die and laser cutting to handheld exacto-knives and scissors. Given that mylar film costs <$1 per m², we can create self-actuating 3D objects for just a few cents, opening new uses in low-cost consumer goods. We describe a design vocabulary, interactive simulation tool, fabrication guide, and proof-of-concept electrostatic actuation hardware. We detail our technique's performance metrics along with qualitative feedback from a design study. We present numerous examples generated using our pipeline to illustrate the rich creative potential of our method.

2022 · Cathy Mengying Fang et al. · Carnegie Mellon University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials; Shape-Changing Materials & 4D Printing

TriboTouch: Micro-Patterned Surfaces for Low Latency Touchscreens

Touchscreen tracking latency, often 80 ms or more, creates a rubber-banding effect in everyday direct manipulation tasks such as dragging, scrolling, and drawing. This has been shown to decrease system preference, user performance, and overall realism of these interfaces. In this research, we demonstrate how the addition of a thin, 2D micro-patterned surface with 5 micron spaced features can be used to reduce motor-visual touchscreen latency. When a finger, stylus, or tangible is translated across this textured surface, frictional forces induce acoustic vibrations which naturally encode sliding velocity. This acoustic signal is sampled at 192 kHz using a conventional audio interface pipeline with an average latency of 28 ms. When fused with conventional low-speed, but high-spatial-accuracy 2D touch position data, our machine learning model can make accurate predictions of real-time touch location.

2022 · Craig Shultz et al. · Carnegie Mellon University · CHI · Topics: Mid-Air Haptics (Ultrasonic); Vibrotactile Feedback & Skin Stimulation

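The latency-hiding step can be illustrated with simple dead reckoning, though the paper describes a learned model for the final prediction. In the hedged Python sketch below (hypothetical function and units), the high-rate sliding velocity recovered from the acoustic channel extrapolates the most recent, stale touchscreen report forward to the present:

```python
def predict_position(last_report_xy, report_age_s, velocity_xy):
    """Extrapolate a stale touch report to the present to hide latency.

    last_report_xy: most recent touchscreen position report (px)
    report_age_s:   time since that report arrived (s)
    velocity_xy:    sliding velocity (px/s); the acoustic channel encodes
                    speed, with direction taken from the recent trajectory
    """
    x, y = last_report_xy
    vx, vy = velocity_xy
    return (x + vx * report_age_s, y + vy * report_age_s)

# A finger dragging at 600 px/s with an 80 ms-old position report:
print(predict_position((100.0, 200.0), 0.080, (600.0, 0.0)))  # (148.0, 200.0)
```
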
ControllerPose: Inside-Out Body Capture with VR Controller Cameras

We present a new and practical method for capturing user body pose in virtual reality experiences: integrating cameras into handheld controllers, where batteries, computation and wireless communication already exist. By virtue of the hands operating in front of the user during many VR interactions, our controller-borne cameras can capture a superior view of the body for digitization. Our pipeline composites multiple camera views together, performs 3D body pose estimation, uses this data to control a rigged human model with inverse kinematics, and exposes the resulting user avatar to end user applications. We developed a series of demo applications illustrating the potential of our approach, including more leg-centric interactions, such as balancing games and kicking soccer balls. We describe our proof-of-concept hardware and software, as well as results from our user study, which point to imminent feasibility.

2022 · Karan Ahuja et al. · Carnegie Mellon University · CHI · Topics: Full-Body Interaction & Embodied Input; Human Pose & Activity Recognition

Mouth Haptics in VR using a Headset Ultrasound Phased Array

Today’s consumer virtual reality (VR) systems offer limited haptic feedback via vibration motors in handheld controllers. Rendering haptics to other parts of the body is an open challenge, especially in a practical and consumer-friendly manner. The mouth is of particular interest, as it is a close second in tactile sensitivity to the fingertips, offering a unique opportunity to add fine-grained haptic effects. In this research, we developed a thin, compact, beamforming array of ultrasonic transducers, which can render haptic effects onto the mouth. Importantly, all components are integrated into the headset, meaning the user does not need to wear an additional accessory, or place any external infrastructure in their room. We explored several effects, including point impulses, swipes, and persistent vibrations. Our haptic sensations can be felt on the lips, teeth and tongue, which can be incorporated into new and interesting VR experiences.

2022 · Vivian Shen et al. · Carnegie Mellon University · CHI · Topics: Mid-Air Haptics (Ultrasonic)
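
Focusing an ultrasound phased array is textbook beamforming: each element's phase is advanced to cancel its extra propagation delay to the focal point. The Python sketch below uses hypothetical geometry (a 4x4 grid at 10.5 mm pitch) and standard acoustic constants, not the authors' firmware:

```python
import numpy as np

SPEED_OF_SOUND = 346.0   # m/s in air at ~25 C
FREQ_HZ = 40_000         # typical ultrasonic transducer frequency

def focus_phases(transducer_xyz: np.ndarray, focal_xyz: np.ndarray) -> np.ndarray:
    """Per-element phase offsets (radians) that focus the array at a point.

    Advancing each element's phase by k*d (its path length in radians)
    makes all wavefronts arrive at the focal point in phase.
    """
    dists = np.linalg.norm(transducer_xyz - focal_xyz, axis=1)
    wavelength = SPEED_OF_SOUND / FREQ_HZ          # ~8.7 mm at 40 kHz
    return (2 * np.pi * dists / wavelength) % (2 * np.pi)

# A 4x4 grid at 10.5 mm pitch, focusing 60 mm in front of the array center:
grid = np.array([[i * 0.0105, j * 0.0105, 0.0] for i in range(4) for j in range(4)])
grid -= grid.mean(axis=0)
print(focus_phases(grid, np.array([0.0, 0.0, 0.060])).round(2))
```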