TH-Wood: Developing Thermo-Hygro-Coordinating Driven Wood Actuators to Enhance Human-Nature Interaction
Wood has become increasingly popular in shape-changing interfaces for its eco-friendly and smart responsive properties, but its applications face challenges because actuation remains driven primarily by humidity. We propose TH-Wood, a biodegradable actuator system composed of wood veneer and microbial polymers, driven by both temperature and humidity, and capable of functioning in complex outdoor environments. This dual-factor-driven approach enhances the sensing and response channels, allowing for more sophisticated coordinated control methods. To assist in designing and utilizing the system more effectively, we developed a structure library inspired by dynamic plant forms, conducted extensive technical evaluations, created an educational platform accessible to users, and provided a design tool for deformation adjustments and behavior previews. Finally, several ecological applications demonstrate the potential of TH-Wood to significantly enhance human interaction with natural environments and expand the boundaries of human-nature relationships.
2025 · Guanyun Wang et al. · Zhejiang University · CHI · Shape-Changing Interfaces & Soft Robotic Materials; Human-Nature Relationships (More-than-Human Design)
Vibrosight++: City-Scale Sensing Using Existing Retroreflective Signs and Markers
Today's smart cities use thousands of physical sensors distributed across the urban landscape to support decision making in areas such as infrastructure monitoring, public health, and resource management. These weather-hardened devices require power and connectivity, and often cost thousands of dollars just to install, let alone maintain. In this paper, we show how long-range laser vibrometry can be used for low-cost, city-scale sensing. Although typically limited to just a few meters of sensing range, the use of retroreflective markers can boost this to 1 km or more. Fortuitously, cities already make extensive use of retroreflective materials for street signs, construction barriers, road studs, license plates, and many other markings. We describe how our prototype system can co-opt these existing markers at very long ranges and use them as unpowered accelerometers for use in a wide variety of sensing applications.
2021 · Yang Zhang et al. · Carnegie Mellon University · CHI · Context-Aware Computing; Smart Cities & Urban Sensing
Wireality: Enabling Complex Tangible Geometries in Virtual Reality with Worn Multi-String Haptics
Today's virtual reality (VR) systems allow users to explore immersive new worlds and experiences through sight. Unfortunately, most VR systems lack haptic feedback, and even high-end consumer systems use only basic vibration motors. This clearly precludes realistic physical interactions with virtual objects. Larger obstacles, such as walls, railings, and furniture are not simulated at all. In response, we developed Wireality, a self-contained worn system that allows for individual joints on the hands to be accurately arrested in 3D space through the use of retractable wires that can be programmatically locked. This allows for convincing tangible interactions with complex geometries, such as wrapping fingers around a railing. Our approach is lightweight, low-cost, and low-power, criteria important for future, worn consumer uses. In our studies, we further show that our system is fast-acting, spatially-accurate, high-strength, comfortable, and immersive.
2020 · Cathy Fang et al. · Carnegie Mellon University · CHI · Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials
Sozu: Self-Powered Radio Tags for Building-Scale Activity Sensing
Robust, wide-area sensing of human environments has been a long-standing research goal. We present Sozu, a new low-cost sensing system that can detect a wide range of events wirelessly, through walls and without line of sight, at whole-building scale. To achieve this in a battery-free manner, Sozu tags convert energy from activities that they sense into RF broadcasts, acting like miniature self-powered radio stations. We describe the results from a series of iterative studies, culminating in a deployment study with 30 instrumented objects. Results show that Sozu is very accurate, with true positive event detection exceeding 99%, with almost no false positives. Beyond event detection, we show that Sozu can be extended to detect the state, intensity, count, and rate of events.
2019 · Yang Zhang et al. · UIST · Context-Aware Computing; Ubiquitous Computing
Sensing Posture-Aware Pen+Touch Interaction on Tablets
Many status-quo interfaces for tablets with pen + touch input capabilities force users to reach for device-centric UI widgets at fixed locations, rather than sensing and adapting to the user-centric posture. To address this problem, we propose sensing techniques that transition between various nuances of mobile and stationary use via postural awareness. These postural nuances include shifting hand grips, varying screen angle and orientation, planting the palm while writing or sketching, and detecting what direction the hands approach from. To achieve this, our system combines three sensing modalities: 1) raw capacitance touchscreen images, 2) inertial motion, and 3) electric field sensors around the screen bezel for grasp and hand proximity detection. We show how these sensors enable posture-aware pen+touch techniques that adapt interaction and morph user interface elements to suit fine-grained contexts of body-, arm-, hand-, and grip-centric frames of reference.
2019 · Yang Zhang et al. · Microsoft Research & Carnegie Mellon University · CHI · Hand Gesture Recognition; Human Pose & Activity Recognition
True Touch: Precise Touch Detection for On-Skin AR/VR Interfaces
Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free space interactions. We developed RFTouch, an electrical method that enables very precise touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, requiring only a single wristband and sensors integrated into the headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin.
2019 · Yang Zhang et al. · UIST · Hand Gesture Recognition; Immersion & Presence Research; On-Skin Display & On-Skin Input
Interferi: Gesture Sensing using On-Body Acoustic Interferometry
Interferi is an on-body gesture sensing technique using acoustic interferometry. We use ultrasonic transducers resting on the skin to create acoustic interference patterns inside the wearer's body, which interact with anatomical features in complex, yet characteristic ways. We focus on two areas of the body with great expressive power: the hands and face. For each, we built and tested a series of worn sensor configurations, which we used to identify useful transducer arrangements and machine learning features. We created final prototypes for the hand and face, which our study results show can support eleven- and nine-class gesture sets at 93.4% and 89.0% accuracy, respectively. We also evaluated our system in four continuous tracking tasks, including smile intensity and weight estimation, which never exceed 9.5% error. We believe these results show great promise and illuminate an interesting sensing technique for HCI applications.
2019 · Yasha Iravantchi et al. · Carnegie Mellon University · CHI · Force Feedback & Pseudo-Haptic Weight; Hand Gesture Recognition
Pulp Nonfiction: Low-Cost Touch Tracking for Paper
Paper continues to be a versatile and indispensable material in the 21st century. Of course, paper is a passive medium with no inherent interactivity, precluding us from computationally-enhancing a wide variety of paper-based activities. In this work, we present a new approach for bringing the digital and paper worlds closer together, specifically by enabling paper to track finger input and also drawn input with writing implements. Importantly, for paper to still be considered paper, our method had to be very low cost. This necessitated much research into materials, fabrication methods and sensing techniques. We describe the outcome of our investigations and show that our novel method can be sufficiently low-cost and accurate to enable new interactive opportunities with this pervasive material.
2018 · Yang Zhang et al. · Carnegie Mellon University · CHI · Context-Aware Computing; Circuit Making & Hardware Prototyping
Vibrosight: Long-Range Vibrometry for Smart Environment Sensing
Smart and responsive environments rely on the ability to detect physical events, such as appliance use and human activities. Currently, to sense these types of events, one must either upgrade to "smart" appliances, or attach aftermarket sensors to existing objects. These approaches can be expensive, intrusive and inflexible. In this work, we present Vibrosight, a new approach to sense activities across entire rooms using long-range laser vibrometry. Unlike a microphone, our approach can sense physical vibrations at one specific point, making it robust to interference from other activities and noisy environments. This property enables detection of simultaneous activities, which has proven challenging in prior work. Through a series of evaluations, we show that Vibrosight can offer high accuracies at long range, allowing our sensor to be placed in an inconspicuous location. We also explore a range of additional uses, including data transmission, sensing user input and modes of appliance operation, and detecting human movement and activities on work surfaces.
2018 · Yang Zhang et al. · UIST · Biosensors & Physiological Monitoring; Context-Aware Computing
Wall++: Room-Scale Interactive and Context-Aware Sensing
Human environments are typified by walls – homes, offices, restaurants, schools, museums and pretty much every indoor context one can imagine. In many cases, they make up a majority of readily accessible indoor surface area, and yet they are static – their primary function is to be a wall, separating spaces and hiding infrastructure. We present Wall++, a low-cost sensing approach that allows walls to become a smart infrastructure. Instead of merely separating spaces, walls can now enhance rooms with sensing, interactivity and computation. Our wall treatment and sensing hardware can track users' touch and gestures, as well as estimate body pose if they are close. By capturing airborne electromagnetic noise, we can also recognize what appliances are active and where they are located, and track and identify signal-emitting tags carried by users. Through a series of evaluations, we demonstrate Wall++ can enable robust room-scale interactive and context sensing applications.
2018 · Yang Zhang et al. · Carnegie Mellon University · CHI · Human Pose & Activity Recognition; Context-Aware Computing; Ubiquitous Computing
LumiWatch: On-Arm Projected Graphics and Touch InputCompact, worn computers with projected, on-skin touch interfaces have been a long-standing yet elusive goal, largely written off as science fiction. Such devices offer the potential to mitigate the significant human input/output bottleneck inherent in worn devices with small screens. In this work, we present the first fully functional and self-contained projection smartwatch implementation, containing the requisite compute, power, projection and touch-sensing capabilities. Our watch offers roughly 40 sq. cm of interactive surface area – more than five times that of a typical smartwatch display. We demonstrate continuous 2D finger tracking with interactive, rectified graphics, transforming the arm into a touchscreen. We discuss our hardware and software implementation, as well as evaluation results regarding touch accuracy and projection visibility.2018RXRobert Xiao et al.Carnegie Mellon UniversityHaptic WearablesOn-Skin Display & On-Skin InputCHI