ProxiCycle: Passively Mapping Cyclist Safety Using Smart Handlebars for Near-Miss Detection
Active transportation is a valuable tool to prevent some of the most common causes of mortality worldwide, but it is severely underutilized. The primary barrier to cyclist adoption is safety concerns, specifically the fear of collision with automobiles. One solution to address this concern is to direct cyclists to known safe routes to minimize risk and stress, thus making cycling more approachable. However, few localized safety priors are available, hindering safety-based routing; in particular, road user behavior is unknown. To address this issue, we develop a novel handlebar attachment to passively monitor the proximity of passing cars as an indicator of cycling safety along historically traveled routes. We deploy this sensor with 15 experienced cyclists in a two-month longitudinal study to source a citywide map of car passing distance. We then compare this signal to both historic collisions and the perceived safety reported by experienced and inexperienced cyclists.
2025 · Joseph Breda et al. · University of Washington, Paul G. Allen School of Computer Science & Engineering · Motion Sickness & Passenger Experience · Pedestrian & Cyclist Safety · CHI
Incorporating Sustainability in Electronics Design: Obstacles and Opportunities
Life cycle assessment (LCA) is a methodology for holistically measuring the environmental impact of a product from initial manufacturing to end-of-life disposal. However, the extent to which LCA informs the design of computing devices remains unclear. To understand how this information is collected and applied, we interviewed 17 industry professionals with experience in LCA or electronics design, systematically coded the interviews, and investigated common themes. These themes highlight the challenge of LCA data collection and reveal distributed decision-making processes where responsibility for sustainable design choices—and their associated costs—is often ambiguous. Our analysis identifies opportunities for HCI technologies to support LCA computation and its integration into the design process to facilitate sustainability-oriented decision-making. While this work provides a nuanced discussion about sustainable design in the information and communication technologies (ICT) hardware industry, we hope our insights will also be valuable to other sectors.
2025 · Zachary Englhardt et al. · University of Washington, Computer Science and Engineering · Sustainable HCI · Ecological Design & Green Computing · CHI
PPG Earring: Wireless Smart Earring for Heart Health Monitoring
Heart rate is a key vital sign for cardiovascular health and fitness. However, the photoplethysmography (PPG) sensors that monitor heart rate in wearables struggle with accuracy during motion. Our day-long in-the-wild study shows that Fitbit measures valid heart rates only 54.88% of the time. To address this, we developed PPG Earring, which measures 14 mm in diameter, weighs 2.0 g, and offers 21 hours of continuous sensing. Our eight-user exercise study shows that PPG Earring captures valid heart rate data 91.74% of the time during exercise and 86.29% of the time in our day-long in-the-wild study. All participants found the PPG Earring as comfortable as their regular earrings, and most expressed a strong willingness to wear it every day. Our results validate the signal quality and comfort of the PPG Earring, highlighting its potential as a daily health monitoring device.
2025 · Qiuyue (Shirley) Xue et al. · University of Washington, Paul G. Allen School of Computer Science and Engineering · Smartwatches & Fitness Bands · Biosensors & Physiological Monitoring · CHI
IRIS: Wireless Ring for Vision-based Smart Home Interaction
Integrating cameras into wireless smart rings has been challenging due to size and power constraints. We introduce IRIS, the first wireless vision-enabled smart ring system for smart home interactions. Equipped with a camera, Bluetooth radio, inertial measurement unit (IMU), and an onboard battery, IRIS meets the small size, weight, and power (SWaP) requirements for ring devices. IRIS is context-aware, adapting its gesture set to the detected device, and can last for 16-24 hours on a single charge. IRIS leverages the scene semantics to achieve instance-level device recognition. In a study involving 23 participants, IRIS consistently outpaced voice commands, with a higher proportion of participants expressing a preference for IRIS over voice commands regarding toggling a device's state, granular control, and social acceptability. Our work pushes the boundary of what is possible with ring form-factor devices, addressing system challenges and opening up novel interaction capabilities.
2024 · Maruchi Kim et al. · Foot & Wrist Interaction · Context-Aware Computing · Smart Home Interaction Design · UIST
WatchLink: Enhancing Smartwatches with Sensor Add-Ons via ECG Interface
We introduce a low-power communication method that lets smartwatches leverage existing electrocardiogram (ECG) hardware as a data communication interface. Our unique approach enables the connection of external, inexpensive, and low-power "add-on" sensors to the smartwatch, expanding its functionalities. These sensors cater to specialized user needs beyond those offered by pre-built sensor suites, at a fraction of the cost and power of traditional communication protocols, including Bluetooth Low Energy. To demonstrate the feasibility of our approach, we conduct a series of exploratory and evaluative tests to characterize the ECG interface as a communication channel on commercial smartwatches. We design a simple transmission scheme using commodity components, demonstrating cost and power benefits. Further, we build and test a suite of add-on sensors, including UV light, body temperature, buttons, and breath alcohol, all of which achieved testing objectives at low material cost and power usage. This research paves the way for personalized and user-centric wearables by offering a cost-effective solution to expand their functionalities.
2024 · Anandghan Waghmare et al. · Smartwatches & Fitness Bands · Biosensors & Physiological Monitoring · UIST
MMTSA: Multi-Modal Temporal Segment Attention Network for Efficient Human Activity Recognition
Multimodal sensors provide complementary information for developing accurate machine-learning methods for human activity recognition (HAR), but they introduce significantly higher computational load, which reduces efficiency. This paper proposes an efficient multimodal neural architecture for HAR using an RGB camera and inertial measurement units (IMUs), called the Multimodal Temporal Segment Attention Network (MMTSA). MMTSA first transforms IMU sensor data into a temporal- and structure-preserving gray-scale image using the Gramian Angular Field (GAF), representing the inherent properties of human activities. MMTSA then applies a multimodal sparse sampling method to reduce data redundancy. Lastly, MMTSA adopts an inter-segment attention module for efficient multimodal fusion. Using three well-established public datasets, we evaluated MMTSA's effectiveness and efficiency in HAR. Results show that our method achieves superior performance (an 11.13% cross-subject F1-score improvement on the MMAct dataset) over previous state-of-the-art (SOTA) methods. The ablation study and analysis demonstrate MMTSA's effectiveness in fusing multimodal data for accurate HAR. The efficiency evaluation on an edge device showed that MMTSA achieved significantly better accuracy, lower computational load, and lower inference latency than SOTA methods. https://doi.org/10.1145/3610872
2023 · Ziqi Gao et al. · Human Pose & Activity Recognition · UbiComp
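The GAF step described above is easy to make concrete. Below is a minimal sketch of the standard Gramian Angular (Summation) Field encoding, assuming min-max normalization per channel; the function name and demo signal are ours, not the paper's:

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D sensor channel as a Gramian Angular Summation Field image."""
    # Rescale to [-1, 1] so the angular encoding (arccos) is defined.
    x_scaled = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x_scaled, -1.0, 1.0))
    # G[i, j] = cos(phi_i + phi_j); temporal order is preserved along the diagonal.
    return np.cos(phi[:, None] + phi[None, :])

gaf = gramian_angular_field(np.sin(np.linspace(0, 2 * np.pi, 64)))
print(gaf.shape)  # (64, 64)
```

Each IMU channel thus yields an n-by-n grayscale image that a 2-D backbone can consume alongside RGB frames.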
GlucoScreen: A Smartphone-based Readerless Glucose Test Strip for Prediabetes Screening
Blood glucose measurement is commonly used to screen for and monitor diabetes, a chronic condition characterized by the inability to effectively modulate blood glucose that can lead to heart disease, vision loss, and kidney failure. Early detection of prediabetes can forestall or reverse more serious illness if healthy lifestyle adjustments or medical interventions are made in a timely manner. Current diabetes screening methods require visits to a healthcare facility and use of over-the-counter glucose-testing devices (glucometers), both of which are costly or inaccessible for many populations, reducing the chances of early disease detection. We therefore developed GlucoScreen, a readerless glucose test strip that enables affordable, single-use, at-home glucose testing, leveraging the user's touchscreen cellphone for reading and displaying results. By integrating minimal, low-cost electronics with commercially available blood glucose testing strips, the GlucoScreen prototype introduces a new type of low-cost, battery-free glucose testing tool that works with any smartphone, obviating the need to purchase a separate dedicated reader. Our key innovation is using the phone's capacitive touchscreen to read out minimally modified, commercially available glucose test strips. In an in vitro evaluation with artificial glucose solutions, we tested GlucoScreen with five different phones and compared the findings to two common glucometers, Accu-Chek and True Metrix. The mean absolute error (MAE) for our GlucoScreen prototype was 4.52 mg/dl (Accu-Chek test strips) and 3.7 mg/dl (True Metrix test strips), compared to 4.98 mg/dl and 5.44 mg/dl for the Accu-Chek and True Metrix glucometers, respectively. In a clinical investigation with 75 patients, GlucoScreen had an MAE of 10.47 mg/dl, while the Accu-Chek glucometer had an MAE of 9.88 mg/dl. These results indicate that GlucoScreen's performance is comparable to that of commonly available over-the-counter blood glucose testing devices. With further development and validation, GlucoScreen has the potential to facilitate large-scale and lower-cost diabetes screening. This work employs GlucoScreen's smartphone-based technology for glucose testing, but it could be extended to build other readerless electrochemical assays in the future. https://dl.acm.org/doi/10.1145/3580855
2023 · Anandghan Waghmare et al. · Chronic Disease Self-Management (Diabetes, Hypertension, etc.) · Biosensors & Physiological Monitoring · UbiComp
FeverPhone: Accessible Core-Body Temperature Sensing for Fever Monitoring Using Commodity Smartphones
Smartphones contain thermistors that ordinarily monitor the temperature of the device's internal components; however, these sensors are also sensitive to warm entities in contact with the device, presenting opportunities for measuring human body temperature and detecting fevers. We present FeverPhone, a smartphone app that estimates a person's core body temperature when the user places the capacitive touchscreen of the phone against their forehead. During the assessment, the phone logs the temperature sensed by a thermistor and the raw capacitance sensed by the touchscreen to capture features describing the rate of heat transfer from the body to the device. These features are then used in a machine learning model to infer the user's core body temperature. We validate FeverPhone through both a lab simulation with a skin-like controllable heat source and a clinical study with real patients. We found that FeverPhone's temperature estimates are comparable to those of commercial off-the-shelf peripheral and tympanic thermometers. In a clinical study with 37 participants, FeverPhone readings achieved a mean absolute error of 0.229 °C, a limit of agreement of ±0.731 °C, and a Pearson's correlation coefficient of 0.763. Using these results for fever classification yields a sensitivity of 0.813 and a specificity of 0.904. https://dl.acm.org/doi/10.1145/3580850
2023 · Joseph Breda et al. · Biosensors & Physiological Monitoring · UbiComp
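The heat-transfer features mentioned above can be sketched for a thermistor trace. This is a hedged illustration assuming Newton's-law warming; the function, feature choices, and synthetic trace are plausible stand-ins of ours, not FeverPhone's published feature set:

```python
import numpy as np

def heat_transfer_features(temps: np.ndarray, fs: float = 1.0):
    """Features describing body-to-phone heat transfer during a forehead press:
    early warming slope (deg C/s), total temperature rise (deg C), and a
    Newton's-law time constant (s), treating the final sample as equilibrium."""
    t = np.arange(len(temps)) / fs
    slope = np.polyfit(t[:5], temps[:5], 1)[0]
    rise = temps[-1] - temps[0]
    # Log-linear fit of the residual toward equilibrium gives the time constant.
    resid = np.clip(temps[-1] + 1e-3 - temps[:-1], 1e-6, None)
    tau = -1.0 / np.polyfit(t[:-1], np.log(resid), 1)[0]
    return slope, rise, tau

# Synthetic press: phone warms from 30 to ~37 deg C with a 3 s time constant.
demo = 37 - 7 * np.exp(-np.arange(30.0) / 3)
print(heat_transfer_features(demo))
```

Features of this kind, plus the touchscreen capacitance, would then feed the machine learning model that maps contact dynamics to core temperature.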
GLOBEM: Cross-Dataset Generalization of Longitudinal Human Behavior Modeling
There is a growing body of research revealing that longitudinal passive sensing data from smartphones and wearable devices can capture daily behavior signals for human behavior modeling, such as depression detection. Most prior studies build and evaluate machine learning models using data collected from a single population. However, to ensure that a behavior model can work for a larger group of users, its generalizability needs to be verified on multiple datasets from different populations. We present the first work evaluating cross-dataset generalizability of longitudinal behavior models, using depression detection as an application. We collect multiple longitudinal passive mobile sensing datasets with over 500 users from two institutes over a two-year span, leading to four institute-year datasets. Using these datasets, we closely re-implement and evaluate nine prior depression detection algorithms. Our experiment reveals the lack of model generalizability of these methods. We also implement eight recently popular domain generalization algorithms from the machine learning community. Our results indicate that these methods also do not generalize well on our datasets, with barely any advantage over the naive baseline of majority-class guessing. We then present two new algorithms with better generalizability. Our new algorithm, Reorder, significantly and consistently outperforms existing methods on most cross-dataset generalization setups. However, the overall advantage is incremental and still leaves great room for improvement. Our analysis reveals that individual differences (both within and between populations) may play the most important role in the cross-dataset generalization challenge. Finally, we provide an open-source benchmark platform, GLOBEM (short for Generalization of Longitudinal BEhavior Modeling), to consolidate all 19 algorithms. GLOBEM can support researchers in using, developing, and evaluating different longitudinal behavior modeling methods. We call for researchers' attention to model generalizability evaluation in future longitudinal human behavior modeling studies. https://dl.acm.org/doi/10.1145/3569485
2023 · Xuhai Xu et al. · Human Pose & Activity Recognition · Mental Health Apps & Online Support Communities · Biosensors & Physiological Monitoring · UbiComp
Modeling the Trade-off of Privacy Preservation and Activity Recognition on Low-Resolution Images
A computer vision system using low-resolution image sensors can provide intelligent services (e.g., activity recognition) while preserving visual privacy at the hardware level, since unnecessary visual detail is never captured. However, preserving visual privacy and enabling accurate machine recognition place adversarial demands on image resolution. Modeling the trade-off between privacy preservation and machine recognition performance can guide future privacy-preserving computer vision systems that use low-resolution image sensors. In this paper, using at-home activities of daily living (ADLs) as the scenario, we first obtained the most important visual privacy features through a user survey. We then quantified and analyzed the effects of image resolution on human and machine recognition performance in activity recognition and privacy awareness tasks. We also investigated how modern image super-resolution techniques influence these effects. Based on the results, we propose a method for modeling the trade-off of privacy preservation and activity recognition on low-resolution images.
2023 · Yuntao Wang et al. · Tsinghua University · Human Pose & Activity Recognition · Privacy Perception & Decision-Making · CHI
Understanding People's Concerns and Attitudes Toward Smart Cities
Designing privacy-respecting and human-centric smart cities requires a careful investigation of people's attitudes and concerns toward city-wide data collection scenarios. To capture a holistic view, we carried out this investigation in two phases. We first surfaced people's understanding, concerns, and expectations toward smart city scenarios by conducting 21 semi-structured interviews with people in underserved communities. We complemented this in-depth qualitative study with a 348-participant online survey of the general population to quantify the significance of smart city factors (e.g., type of collected data) on attitudes and concerns. Privacy and ethics were the two most common types of concerns among participants, with their prevalence varying by demographics. We found the type of collected data to have the most impact, and the retention time the least, on participants' perceptions of and concerns about smart cities. We highlight key takeaways and recommendations for city stakeholders to consider when designing inclusive and protective smart cities.
2023 · Pardis Emami-Naeini et al. · Duke University · Privacy by Design & User Control · Smart Cities & Urban Sensing · Sustainable HCI · CHI
ARDW: An Augmented Reality Workbench for Printed Circuit Board Debugging
Debugging printed circuit boards (PCBs) can be a time-consuming process, requiring frequent context switching between PCB design files (schematic and layout) and the physical PCB. To assist electrical engineers in debugging PCBs, we present ARDW, an augmented reality workbench consisting of a monitor interface featuring PCB design files, a projector-augmented workspace for PCBs, tracked test probes for selection and measurement, and a connected test instrument. The system supports common debugging workflows for augmented visualization on the physical PCB as well as augmented interaction with the tracked probes. We quantitatively and qualitatively evaluate the system with 10 electrical engineers from industry and academia, finding that ARDW speeds up board navigation and provides engineers with greater confidence in debugging. We discuss practical design considerations and paths for improvement to future systems. A video demo of the system may be accessed here: https://youtu.be/RbENbf5WIfc.
2022 · Ishan Chatterjee et al. · AR Navigation & Context Awareness · UIST
FaceOri: Tracking Head Position and Orientation Using Ultrasonic Ranging on Earphones
Face orientation can often indicate users' intended interaction target. In this paper, we propose FaceOri, a novel face tracking technique based on acoustic ranging using earphones. FaceOri leverages the speaker on a commodity device to emit an ultrasonic chirp, which is picked up by the microphones on the user's earphones and processed to calculate the distance from each microphone to the device. These measurements are used to derive the user's face orientation and distance with respect to the device. We conduct a ground truth comparison and a user study to evaluate FaceOri's performance. The results show that the system can determine whether the user is oriented toward the device with 93.5% accuracy within a 1.5 m range. Furthermore, FaceOri can continuously track the user's head orientation with a median absolute error of 10.9 mm in distance, 3.7° in yaw, and 5.8° in pitch. FaceOri enables convenient hands-free control of devices and more intelligent context-aware interaction.
2022 · Yuntao Wang et al. · Tsinghua University · Eye Tracking & Gaze Interaction · Context-Aware Computing · CHI
Augmented Silkscreen: Designing AR Interactions for Debugging Printed Circuit Boards
Debugging printed circuit boards (PCBs) requires frequent context switching and spatial pattern matching between software design files and physical boards. To reduce this overhead, we conduct a series of interviews with electrical engineers to understand their workflows, around which we design a set of AR interaction techniques, which we call Augmented Silkscreen, to streamline identification, localization, annotation, and measurement tasks. We then run a set of remote user studies with illustrative video sketches and simulated PCB tasks to compare our interactions with current practices, finding that our techniques reduce completion times. Based on these quantitative results, as well as qualitative feedback from our participants, we offer design recommendations for the implementation of these interactions in a future, deployable AR system.
2021 · Ishan Chatterjee et al. · AR Navigation & Context Awareness · Circuit Making & Hardware Prototyping · DIS
Understanding the Design Space of Mouth Microgestures
As wearable devices move toward the face (e.g., smart earbuds, glasses), there is an increasing need to facilitate intuitive interactions with these devices. Current sensing techniques can already detect many mouth-based gestures; however, users' preferences for these gestures are not fully understood. In this paper, we investigate the design space and usability of mouth-based microgestures. We first conducted brainstorming sessions (N=16) and compiled an extensive set of 86 user-defined gestures. Then, with an online survey (N=50), we assessed the physical and mental demand of our gesture set and identified a subset of 14 gestures that can be performed easily and naturally. Finally, we conducted a remote Wizard-of-Oz usability study (N=11) mapping gestures to various daily smartphone operations in sitting and walking contexts. From these studies, we develop a taxonomy for mouth gestures, finalize a practical gesture set for common applications, and provide design guidelines for future mouth-based gesture interactions.
2021 · Victor Chen et al. · Haptic Wearables · Hand Gesture Recognition · DIS
Facilitating Text Entry on Smartphones with QWERTY Keyboard for Users with Parkinson's Disease
QWERTY is the primary smartphone text input keyboard configuration. However, insertion and substitution errors caused by hand tremors, often experienced by users with Parkinson's disease, can severely affect typing efficiency and user experience. In this paper, we investigated the typing behavior of users with Parkinson's disease on smartphones. In particular, we identified and compared the typing characteristics generated by users with and without Parkinson's symptoms. We then proposed an elastic probabilistic model for input prediction. By incorporating both spatial and temporal features, this model generalizes the classical statistical decoding algorithm to correct insertion, substitution, and omission errors while maintaining a direct physical interpretation. User study results confirmed that the proposed algorithm outperformed baseline techniques: users reached a typing speed of 22.8 WPM with a significantly lower error rate and higher user-perceived performance and preference. We conclude that our method can effectively improve the text entry experience on smartphones for users with Parkinson's disease.
2021 · Yuntao Wang et al. · Tsinghua University, University of Washington · Motor Impairment Assistive Input Technologies · Shape-Changing Materials & 4D Printing · CHI
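The classical statistical decoding that the elastic model above generalizes scores each candidate word by a language prior times a spatial likelihood of the observed touch points. A toy sketch with a made-up key layout, vocabulary, prior, and Gaussian noise model (none of these values come from the paper):

```python
import math

# Hypothetical key centers (x, y) in key-width units and a made-up unigram prior.
KEYS = {'c': (2.5, 2.0), 'a': (0.5, 1.0), 't': (4.0, 0.0), 'r': (3.0, 0.0), 'o': (8.5, 0.0)}
VOCAB = {'cat': 0.6, 'car': 0.3, 'cot': 0.1}

def decode(touches, sigma=0.7):
    """Return the vocabulary word maximizing log prior + Gaussian spatial log likelihood."""
    def score(word):
        if len(word) != len(touches):
            return -math.inf
        s = math.log(VOCAB[word])
        for ch, (tx, ty) in zip(word, touches):
            kx, ky = KEYS[ch]
            s -= ((tx - kx) ** 2 + (ty - ky) ** 2) / (2 * sigma ** 2)
        return s
    return max(VOCAB, key=score)

# Tremor pushes the last tap toward 'r', but the language prior recovers "cat".
print(decode([(2.6, 2.1), (0.4, 1.2), (3.4, 0.1)]))  # cat
```

The paper's model goes further by folding in temporal features and handling insertions and omissions, which this fixed-length sketch cannot.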
Drunk User Interfaces: Determining Blood Alcohol Level through Everyday Smartphone Tasks
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. However, not everyone has access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examine five drunk user interfaces and combine them to form the "DUI app". DUI uses machine learning models trained on human performance metrics and sensor data to estimate a person's blood alcohol level (BAL). We evaluated DUI with 14 individuals in a week-long longitudinal study wherein each participant used DUI at various BALs. We found that with a global model that accounts for user-specific learning, DUI can estimate a person's BAL with a mean absolute error of 0.005% ± 0.007% and a Pearson's correlation coefficient of 0.96 with breathalyzer measurements.
2018 · Alex Mariakakis et al. · University of Washington · Biosensors & Physiological Monitoring · Context-Aware Computing · CHI
Convey: Exploring the Use of a Context View for Chatbots
Text messaging-based conversational systems, popularly called chatbots, have seen massive growth lately. Recent work on evaluating chatbots has found a mismatch between the chatbot's state of understanding (also called context) and the user's perception of that understanding. Users found it difficult to use chatbots for complex tasks because they were uncertain of the chatbots' intelligence level and contextual state. In this work, we propose Convey (CONtext View), a window added to the chatbot interface that displays the conversational context and provides interactions with the context values. We conducted a usability evaluation of Convey with 16 participants. Participants preferred using a chatbot with Convey and found it easier to use, less mentally demanding, faster, and more intuitive than a default chatbot without Convey. The paper concludes with a discussion of the design implications offered by Convey.
2018 · Mohit Jain et al. · University of Washington, IBM Research · Conversational Chatbots · CHI
Seismo: Blood Pressure Monitoring using Built-in Smartphone Accelerometer and Camera
Although cost-effective at-home blood pressure monitors are available, a complementary mobile solution can ease the burden of measuring BP at critical points throughout the day. In this work, we developed and evaluated a smartphone-based BP monitoring application called Seismo. The technique relies on measuring the time between the opening of the aortic valve and the pulse later reaching a peripheral arterial site. It uses the smartphone's accelerometer to measure the vibration caused by the heart valve movements and the smartphone's camera to measure the pulse at the fingertip. The system was evaluated in a nine-participant longitudinal BP perturbation study. Each participant completed four sessions that involved stationary biking at multiple intensities. The Pearson correlation coefficient of the blood pressure estimation across participants is 0.20-0.77 (μ=0.55, σ=0.19), with an RMSE of 3.3-9.2 mmHg (μ=5.2, σ=2.0).
2018 · Edward Jay Wang et al. · University of Washington · Smartwatches & Fitness Bands · Biosensors & Physiological Monitoring · CHI
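The timing at the core of Seismo, the interval between aortic valve opening (an accelerometer spike) and pulse arrival at the fingertip (the camera's PPG peak), can be sketched per beat on synthetic signals. A real pipeline would filter the signals and segment individual beats first; the waveforms below are ours:

```python
import numpy as np

def pulse_transit_time(scg: np.ndarray, ppg: np.ndarray, fs: float) -> float:
    """Seconds between the SCG peak (valve opening) and the later PPG peak
    (pulse reaching the fingertip), within one heartbeat window."""
    t_valve = int(np.argmax(scg))
    # Look for the pulse only after the valve-opening event.
    t_pulse = t_valve + int(np.argmax(ppg[t_valve:]))
    return (t_pulse - t_valve) / fs

fs = 200  # Hz
t = np.arange(fs) / fs
scg = np.exp(-((t - 0.10) ** 2) / 2e-4)  # synthetic valve-opening spike at 100 ms
ppg = np.exp(-((t - 0.28) ** 2) / 2e-3)  # synthetic fingertip pulse at 280 ms
print(pulse_transit_time(scg, ppg, fs))  # 0.18
```

Shorter transit times generally correspond to higher blood pressure, which is the relationship the application exploits when mapping this interval to a BP estimate.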