Foody Talk: Exploring Opportunities for Conversational Food Journaling
Digital food journaling can support reflection on and improvement of eating-related wellbeing. However, it is often viewed as burdensome and abandoned before its benefits are realized. Advances in conversational user interfaces (CUIs) have the potential to support journaling in a natural and interactive manner, but we lack an understanding of how people would ideally prefer to use CUIs when journaling. We conducted 33 co-design sessions with 18 participants to ideate CUI interactions that support their health goals across everyday situations. Our findings reveal that participants expect CUIs to be adaptive, learning their goals and personal references, and to support depth in detail and goal alignment while respecting situational constraints and intent. While participants expressed concern about navigating long-term data solely through conversation, they envisioned that CUIs could provide empathetic, non-judgmental feedback. We discuss opportunities for CUIs to support empathetic food journaling and accountability while following guardrails for delegated tasks.
2025 · Lucas M. Silva et al. · University of Iowa, Computer Science · Conversational Chatbots; Diet Tracking & Nutrition Management · CHI
Peerspective: A Study on Reciprocal Tracking for Self-awareness and Relational Insight
Personal informatics (PI) helps individuals understand themselves, but it often struggles to capture non-conscious behaviors such as stress responses, habitual actions, and communication styles. Incorporating social aspects into PI systems offers new perspectives on self-understanding, yet prior research has largely focused on unidirectional approaches that center benefits on the primary tracker. To address this gap, we introduce the Peerspective study, which explores reciprocal tracking: a bidirectional practice in which two participants observe and provide feedback to each other, fostering mutual self-understanding and collaboration. In a week-long study with eight peer dyads, we explored how reciprocal observation and feedback influence self-awareness and interpersonal relationships. Our findings reveal that reciprocal tracking not only helps participants uncover blind spots and expand their self-concepts but also enhances empathy, deepens communication, and promotes sustained engagement. We discuss key facilitators and challenges of integrating reciprocity into personal informatics systems and offer design considerations for supporting collaborative tracking in everyday contexts.
2025 · Kwangyoung Lee et al. · KAIST, Department of Industrial Design · Collaborative Learning & Peer Teaching; Mental Health Apps & Online Support Communities; Context-Aware Computing · CHI
Meditating Together: Practices, Benefits and Challenges of Meditation on Social Virtual Reality
Meditation and mind-body practices offer many benefits for both mental and physical well-being. Recently, social virtual reality (VR) has emerged as a promising platform to support well-being activities. While Human-Computer Interaction (HCI) research has explored technologies for meditation, little is known about how users appropriate social VR for meditation, particularly group practice, and how it shapes their experiences. To bridge this gap, we interviewed 13 regular social VR meditators to explore their practices, perceived benefits, and challenges. We found that meditators utilized platform features to engage in community-driven group practices, manage session flow, employ avatars and body tracking for kinetic practices, and experiment with novel forms of meditation. Participants reported benefits and challenges related to the individual and social aspects of their meditation experiences. Based on these findings, we discuss the implications of using social VR for meditation, including how avatars and virtual others positively affect the practice, as well as emerging tensions and opportunities.
2025 · Lika Haizhou Liu et al. · University of California Irvine, Informatics · Social & Collaborative VR; Immersion & Presence Research; Mental Health Apps & Online Support Communities · CHI
Towards Hormone Health: An Autoethnography of Long-Term Holistic Tracking to Manage PCOS
Polycystic ovary syndrome (PCOS) is a common hormonal disorder affecting 11-13% of women of reproductive age, characterized by a wide range of symptoms (e.g., menstrual irregularity, acne, and obesity) that vary among individuals. While self-tracking tools help PCOS patients monitor their symptoms and find personalized treatment, they are often designed around the regular cycles of healthy women, with inadequate support for 1) the personalization and 2) the long-term holistic tracking necessary for managing complex chronic conditions like PCOS. To bridge this gap, the first author (who has PCOS) conducted an autoethnographic study of holistic self-tracking over a period of ten months in an effort to manage her condition. Our results highlight the challenges of personalized, holistic, long-term tracking in medical, socio-cultural, temporal, technical, and spatial contexts. Based on these insights, we provide design implications for tracking tools that are more inclusive and sustainable.
2025 · Daye Kang et al. · Cornell, Information Science · Chronic Disease Self-Management (Diabetes, Hypertension, etc.); Diet Tracking & Nutrition Management · CHI
HAIGEN: Towards Human-AI Collaboration for Facilitating Creativity and Style Generation in Fashion Design
Jiang et al. propose HAIGEN, a human-AI collaboration framework that uses AI to assist fashion designers with creative idea generation and style exploration, improving design efficiency and the diversity of innovation.
2024 · Jianan Jiang et al. · Generative AI (Text, Image, Music, Video); AI-Assisted Creative Writing · UbiComp
LT-Fall: The Design and Implementation of a Life-threatening Fall Detection and Alarming System
Falls are the leading cause of fatal injuries to elders in modern society, which has motivated researchers to propose various fall detection technologies. We observe that most existing fall detection solutions diverge from the purpose of fall detection: promptly alerting family members, medical staff, or first responders so they can save the life of a person severely injured by a fall. Instead, they focus on detecting the act of falling, which does not necessarily mean a person is in real danger. The truly critical situation is when a person cannot get up without assistance and lies on the ground after the fall because of lost consciousness or incapacitation due to severe injury. In this paper, we define a life-threatening fall as a fall followed by a long lie on the ground, and for the first time argue that a fall detection system should focus on detecting life-threatening falls instead of any random fall. Accordingly, we design and implement LT-Fall, a mmWave-based life-threatening fall detection and alarming system. LT-Fall detects and reports both fall and fall-like behaviors in the first stage, then identifies life-threatening falls by continuously monitoring the person's status after the fall in the second stage. We propose a joint spatio-temporal localization technique to detect and locate human micro-motions, which addresses mmWave's insufficient spatial resolution when the person is static, i.e., lying on the ground. Extensive evaluation on 15 volunteers demonstrates that, compared to the state-of-the-art (92% precision and 94% recall), LT-Fall achieves zero false alarms, a precision of 100%, and a recall of 98.8%.
https://dl.acm.org/doi/10.1145/3580835
2023 · Duo Zhang et al. · Elderly Care & Dementia Support; Biosensors & Physiological Monitoring · UbiComp
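LT-Fall's core idea (detect a fall first, then confirm a long lie before alarming) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold value and function names are hypothetical.

```python
LONG_LIE_SECONDS = 60.0  # hypothetical threshold; the abstract does not specify a value

def is_life_threatening(fall_detected: bool, lying_timestamps: list[float]) -> bool:
    """Stage 2 of a two-stage pipeline: a detected fall is life-threatening
    only if the person remains lying on the ground past the long-lie
    threshold (i.e., cannot get up without assistance)."""
    if not fall_detected or len(lying_timestamps) < 2:
        return False
    # lying_timestamps holds the times (in seconds) at which the person was
    # still observed lying on the ground after the fall.
    return lying_timestamps[-1] - lying_timestamps[0] >= LONG_LIE_SECONDS
```

The point of the two-stage design is that stage 1 may fire on fall-like behaviors (sitting down hard, dropping onto a sofa) without raising an alarm; only the post-fall monitoring in stage 2 triggers one.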
SignRing: Continuous American Sign Language Recognition Using IMU Rings and Virtual IMU Data
Sign language is a natural language widely used by Deaf and hard of hearing (DHH) individuals. Advanced wearables have been developed to recognize sign language automatically, but they are limited by the lack of labeled data, which leads to small vocabularies and unsatisfactory performance despite laborious data collection. Here we propose SignRing, an IMU-based system that goes beyond traditional data augmentation by using online videos to generate virtual IMU (v-IMU) data, pushing the boundary of wearable-based systems to a vocabulary of 934 glosses with sentences up to 16 glosses long. The v-IMU data is generated by reconstructing 3D hand movements from two-view videos and calculating 3-axis acceleration data. With it, we achieve a word error rate (WER) of 6.3% with a mix of half v-IMU and half IMU training data (2339 samples each), and a WER of 14.7% with 100% v-IMU training data (6048 samples), compared with a baseline WER of 8.3% (trained on 2339 samples of IMU data). We compare v-IMU and IMU data to demonstrate the reliability and generalizability of the v-IMU data. This interdisciplinary work spans wearable sensor development, computer vision, deep learning, and linguistics, and can provide valuable insights to researchers with similar research objectives.
https://doi.org/10.1145/3610881
2023 · Jiyang Li et al. · Hand Gesture Recognition; Foot & Wrist Interaction; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · UbiComp
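The word error rate (WER) figures above follow the standard definition: word-level edit distance between hypothesis and reference, divided by the reference length. A minimal sketch of that standard metric (not SignRing's code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of edit distances between prefixes.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` is 0.5: one substitution plus one deletion over four reference words. Note that WER can exceed 1.0 when the hypothesis contains many insertions.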
VibPath: Two-Factor Authentication with Your Hand’s Vibration Response to Unlock Your Phone
Technical advances in the smart-device market have placed smartphones at the heart of our lives, warranting ever more secure means of authentication. Although most smartphones have adopted biometrics-based authentication, after a couple of failed attempts most users are given the option to quickly bypass the system with a passcode. To add a layer of security, two-factor authentication (2FA) has been implemented but has proven vulnerable to various attacks. In this paper, we introduce VibPath, a simultaneous 2FA scheme that characterizes the user's hand neuromuscular system through touch behavior. VibPath captures an individual's vibration-path responses between the hand and the wrist with an attention-based encoder-decoder network, unobtrusively distinguishing genuine users from impostors. In a user study with 30 participants, VibPath achieved an average performance of 0.98 accuracy, 0.99 precision, 0.98 recall, and 0.98 F1-score for user verification, and 94.3% accuracy for user identification across five passcodes. We also conducted several extensive studies, including in-the-wild, permanence, vulnerability, usability, and system-overhead studies, to assess the practicability and viability of VibPath from multiple aspects.
https://doi.org/10.1145/3610894
2023 · Seokmin Choi et al. · Vibrotactile Feedback & Skin Stimulation; Passwords & Authentication · UbiComp
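The verification metrics quoted above (accuracy, precision, recall, F1) follow the standard confusion-matrix definitions; a minimal sketch of those standard formulas (not VibPath's code):

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard binary-classification metrics from confusion-matrix counts.

    In an authentication setting: tp = genuine attempts accepted,
    fp = impostor attempts accepted, fn = genuine attempts rejected.
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # of accepted attempts, how many were genuine
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # of genuine attempts, how many were accepted
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```

For authentication, precision matters for security (few impostors accepted) while recall matters for usability (few genuine users rejected); F1 balances the two.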
SmartASL: “Point-of-Care” Comprehensive ASL Interpreter Using Wearables
Sign language is an important bridge between d/Deaf and hard-of-hearing (DHH) people and hearing people. Regrettably, most hearing people face challenges in comprehending sign language, necessitating sign language translation. However, state-of-the-art wearable-based techniques mainly concentrate on recognizing manual markers (e.g., hand gestures) while frequently overlooking non-manual markers such as negative head shaking, question markers, and mouthing. This oversight loses substantial grammatical and semantic information in sign language. To address this limitation, we introduce SmartASL, a novel proof-of-concept system that can 1) recognize both manual and non-manual markers simultaneously using a combination of earbuds and a wrist-worn IMU, and 2) translate the recognized American Sign Language (ASL) glosses into spoken language. Our experiments demonstrate SmartASL's significant potential to accurately recognize manual and non-manual markers in ASL, effectively bridging the communication gap between ASL signers and hearing people using commercially available devices.
https://dl.acm.org/doi/10.1145/3596255
2023 · Yincheng Jin et al. · Foot & Wrist Interaction; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration); Augmentative & Alternative Communication (AAC) · UbiComp
NeuralGait: Assessing Brain Health Using Your Smartphone
Brain health has attracted more attention as the population ages. Smartphone-based gait sensing and analysis can help identify the risks of brain diseases in daily life for prevention. Existing gait-analysis approaches mainly hand-craft temporal gait features or develop CNN-based feature extractors, but they are either prone to losing inconspicuous pathological information or dedicated to screening a single brain disease. We discover that the relationships between gait segments can serve as a principled, generic indicator for quantifying multiple pathological patterns. In this paper, we propose NeuralGait, a pervasive smartphone-cloud system that passively captures and analyzes gait-segment relationships for brain health assessment. On the smartphone, inertial gait data are collected while the phone sits in the user's pants pocket. We then craft local temporal-frequency gait domain features and develop a self-attention-based gait-segment relationship encoder. The domain features and relation features are then fed to a scalable RiskNet in the cloud for brain health assessment. We also design a pathology hot-update protocol to efficiently add new brain diseases to the RiskNet. NeuralGait is practical because it assesses brain health with no added burden in daily life. In our experiments, we recruited 988 healthy people and 417 patients with one or a combination of PD, TBI, and stroke, and evaluated brain health assessment using a set of specifically designed metrics including global accuracy, exact accuracy, sensitivity, and false alarm rate. We also demonstrate the generalization (e.g., analysis of feature effectiveness and model efficiency) and inclusiveness of NeuralGait.
https://dl.acm.org/doi/10.1145/3569476
2023 · Huining Li et al. · Fitness Tracking & Physical Activity Monitoring; Biosensors & Physiological Monitoring · UbiComp
WavoID: Robust and Secure Multi-modal User Identification via mmWave-voice Mechanism
With the increasing deployment of voice-controlled devices in homes and enterprises, there is an urgent demand for voice identification to prevent unauthorized access to sensitive information and property loss. However, due to the broadcast nature of sound waves, a voice-only system is vulnerable to adverse conditions and malicious attacks. We observe that combining millimeter waves (mmWave) and voice signals can significantly improve the effectiveness and security of user identification. Based on these properties, we propose WavoID, a multi-modal user identification system that fuses the uniqueness of mmWave-sensed vocal vibration and the mic-recorded voice of users. To estimate fine-grained waveforms, WavoID splits signals and adaptively combines useful decomposed signals according to correlative content in both mmWave and voice. An anti-spoofing module comprising bimodal biometric information defends against attacks. WavoID produces and fuses the response maps of mmWave and voice to improve the representation power of the fused features, benefiting accurate identification even in adverse circumstances. We evaluate WavoID with commercial sensors in extensive experiments. WavoID identifies users with over 98% accuracy on a 100-user dataset.
2023 · Tiantian Liu et al. · Eye Tracking & Gaze Interaction; Brain-Computer Interface (BCI) & Neurofeedback; Passwords & Authentication · UIST
“I Don't Even Remember What I Read”: How Design Influences Dissociation on Social Media
Many people have experienced mindlessly scrolling on social media. We investigated these experiences through the lens of normative dissociation: total cognitive absorption, characterized by diminished self-awareness and a reduced sense of agency. To explore user experiences of normative dissociation and how design affects its likelihood, we deployed Chirp, a custom Twitter client, to 43 U.S. participants. Experience sampling and interviews revealed that becoming absorbed in normative dissociation on social media sometimes felt like a beneficial break. However, people also reported passively slipping into normative dissociation, such that they failed to absorb any content and were left feeling they had wasted their time. We found that designed interventions (custom lists, reading-history labels, time-limit dialogs, and usage statistics) reduced normative dissociation. Our findings demonstrate that interaction designs intended to capture attention likely do so by harnessing people's natural inclination to seek normative dissociation experiences. This suggests that normative dissociation may be a more productive framing than addiction for discussing social media overuse.
2022 · Amanda Baughan et al. · University of Washington · Privacy by Design & User Control; Online Harassment & Counter-Tools; Social Platform Design & User Behavior · CHI
A Method to Analyze Multiple Social Identities in Twitter Bios
Twitter users signal social identity in their profile descriptions, or bios, in a number of important but complex ways that are not well captured by existing characterizations of how identity is expressed in language. Better ways of defining and measuring these expressions may therefore be useful both in understanding how social identity is expressed in text, and how the self is presented on Twitter. To this end, the present work makes three contributions. First, using qualitative methods, we identify and define the concept of a personal identifier, which is more representative of the ways in which identity is signaled in Twitter bios. Second, we propose a method to extract all personal identifiers expressed in a given bio. Finally, we present a series of validation analyses that explore the strengths and limitations of our proposed method. Our work opens up exciting new opportunities at the intersection between the social psychological study of social identity and the study of how we compose the self through markers of identity on Twitter and in social media more generally.
2021 · Arjunil Pathak et al. · Online Identities · CSCW
“@alex, this fixes #9”: Analysis of Referencing Patterns in Pull Request Discussions
Pull Requests (PRs) are a frequently used method for proposing changes to source code repositories. When discussing proposed changes in a PR discussion, stakeholders often reference a wide variety of information objects to establish shared awareness and common ground. Previous work has not considered how this referential behavior impacts collaborative software development via PRs; this knowledge gap is the major barrier to evaluating and improving current support for referencing in PRs. We conducted an exploratory analysis of ~7K references, collected from 450 public PRs on GitHub, and constructed taxonomies of referent types and expressions. Using our annotated dataset, we identified several patterns in the use of references. Referencing source code elements was prevalent, but the authoring interface lacks support for it. Three classes of contextual factors influence referencing behavior: referent type, discussion thread, and project attributes. Referencing patterns may indicate PR outcomes (e.g., merged PRs frequently reference issues, users, and tests). We conclude with design implications for supporting more effective referencing in PR discussion interfaces.
2021 · Ashish Chopra et al. · Computer-Supported Conversation and Communication · CSCW
X-Droid: A Quick and Easy Android Prototyping Framework with a Single App Illusion
We present X-Droid, a framework that lets Android app developers quickly produce functional prototypes. Our work is motivated by the need for such an ability and the lack of tools that provide it. Developers want to produce a functional prototype rapidly in order to test potential features in real-life situations, but current prototyping tools for mobile apps are limited to non-functional UI mockups that do not demonstrate actual features. With X-Droid, developers can create a new app that imports various kinds of functionality from other existing Android apps, without needing to understand how those apps are implemented and without access to their source code. X-Droid provides a developer tool that enables developers to use the UIs of other Android apps and import desired functions into their prototypes, and a run-time system that executes other apps' functionality in the background on off-the-shelf Android devices for seamless integration. Our evaluation shows that, with the help of X-Droid, a developer imported a function from an existing Android app into a new prototype with only 55 lines of Java code, while implementing the same function from scratch required 10,334 lines of Java code.
2019 · Donghwi Kim et al. · Prototyping & User Testing · UIST