ViFeed: Promoting Slow Eating and Food Awareness through Strategic Video Manipulation during Screen-Based Dining
Given the widespread presence of screens during meals, we challenge the notion that digital engagement is inherently incompatible with mindfulness. We demonstrate how the strategic design of digital content can enhance two core aspects of mindful eating: slow eating and food awareness. Our research unfolded in three sequential studies. (1) Zoom Eating Study: Contrary to the assumption that video-watching leads to distraction and overeating, this study revealed that subtle video speed manipulations can promote slower eating (by 15.31%) and controlled food intake (by 9.65%) while maintaining meal satiation and satisfaction. (2) Co-design workshop: Informed the development of ViFeed, a video playback system strategically incorporating subtle speed adjustments and glanceable visual cues. (3) Field Study: A week-long deployment of ViFeed in daily eating demonstrated its efficacy in fostering food awareness, food appreciation, and sustained engagement. By bridging the gap between ideal mindfulness practices and screen-based behaviors, this work offers insights for designing digital-wellbeing interventions that align with, rather than against, existing habits.
2025 | Yang Chen et al. | National University of Singapore, College of Design and Engineering | CHI | Tags: Diet Tracking & Nutrition Management; Food Culture & Food Interaction
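The mechanism the Zoom Eating Study hinges on, slowing playback so gradually that viewers do not notice, can be illustrated with a small rate-easing schedule. The sketch below is a minimal illustration under assumed values (a 0.02 step every 5 seconds toward a 0.9x target); the paper's actual manipulation parameters are not reproduced here.

```python
# Minimal sketch (not the authors' implementation) of a subtle playback-rate
# schedule: the rate drifts toward a slower target in steps small enough to
# stay unnoticeable. All constants are illustrative assumptions.

def playback_rate_schedule(base_rate: float = 1.0,
                           target_rate: float = 0.9,
                           step: float = 0.02,
                           interval_s: float = 5.0):
    """Yield (time_offset_seconds, rate) pairs easing from base to target."""
    rate, t = base_rate, 0.0
    yield t, rate
    while abs(rate - target_rate) > 1e-9:
        # Move one small step toward the target, clamping at the target.
        delta = max(-step, min(step, target_rate - rate))
        rate += delta
        t += interval_s
        yield round(t, 2), round(rate, 2)

if __name__ == "__main__":
    for t, r in playback_rate_schedule():
        print(f"t={t:>5}s  rate={r}")
```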
PilotAR: Streamlining Pilot Studies with OHMDs from Concept to Insight
Janaka et al. developed PilotAR, a system that streamlines the pilot-study workflow for optical head-mounted display (OHMD) research, providing end-to-end support from concept design to data analysis.
2024 | Nuwan Janaka et al. | UbiComp | Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS)
AudioXtend: Assisted Reality Visual Accompaniments for Audiobook Storytelling During Everyday Routine Tasks
The rise of multitasking in contemporary lifestyles has positioned audio-first content as an essential medium for information consumption. We present AudioXtend, an approach to augment audiobook experiences during daily tasks by integrating glanceable, AI-generated visuals through optical see-through head-mounted displays (OHMDs). Our initial study showed that these visual augmentations not only preserved users' primary task efficiency but also dramatically enhanced immediate auditory content recall by 33.3% and 7-day recall by 32.7%, alongside a marked improvement in narrative engagement. Through participatory design workshops involving digital arts designers, we crafted a set of design principles for visual augmentations that are attuned to the requirements of multitaskers. Finally, a 3-day take-home field study further revealed new insights for everyday use, underscoring the potential of assisted reality (aR) to enhance heads-up listening and incidental learning experiences.
2024 | Felicia Tan et al. | National University of Singapore | CHI | Tags: AR Navigation & Context Awareness; Generative AI (Text, Image, Music, Video)
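As a rough illustration of one stage such a pipeline needs, the sketch below pulls a few salient terms from a transcript chunk and turns them into an image-generation prompt. The stopword list and frequency heuristic are assumptions for illustration only; the abstract does not describe AudioXtend's actual AI pipeline at this level.

```python
# Hedged sketch: turn a transcript chunk into a glanceable image prompt.
# The term-scoring heuristic here is a crude stand-in for a real keyword
# extractor or LLM-based summarizer.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "was", "it",
             "he", "she", "while", "through", "against", "with"}

def salient_terms(chunk: str, k: int = 3) -> list[str]:
    words = [w.lower() for w in re.findall(r"[a-zA-Z']+", chunk)]
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 3)
    return [w for w, _ in counts.most_common(k)]

def image_prompt(chunk: str) -> str:
    return "simple glanceable illustration of " + ", ".join(salient_terms(chunk))

if __name__ == "__main__":
    chunk = ("The captain steered the small boat through the storm while the "
             "lighthouse keeper watched the waves crash against the rocks.")
    print(image_prompt(chunk))
```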
Heads-Up Multitasker: Simulating Attention Switching On Optical Head-Mounted Displays
Optical Head-Mounted Displays (OHMDs) allow users to read digital content while walking. A better understanding of how users allocate attention between these two tasks is crucial for improving OHMD interfaces. This paper introduces a computational model for simulating users' attention switches between reading and walking. We model users' decision to deploy visual attention as a hierarchical reinforcement learning problem, wherein a supervisory controller optimizes attention allocation while considering both reading activity and walking safety. Our model simulates the control of eye movements and locomotion as an adaptation to the given task priority, design of digital content, and walking speed. The model replicates key multitasking behaviors during OHMD reading while walking, including attention switches, changes in reading and walking speeds, and reading resumptions.
2024 | Yunpeng Bai et al. | National University of Singapore | CHI | Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Eye Tracking & Gaze Interaction
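To make the modeling idea concrete, here is a toy sketch of attention allocation as a sequential decision problem. It is far simpler than the paper's hierarchical reinforcement-learning controller: a myopic cost comparison stands in for a learned policy, and all constants are assumptions.

```python
# Toy sketch (assumptions throughout, not the paper's model): the controller
# keeps reading until the expected collision cost of one more unchecked step
# outweighs the reading reward, then glances at the walking path.

def simulate(steps=30, hazard_growth=0.05, collision_cost=10.0, read_reward=1.0):
    uncertainty = 0.0            # belief that an unseen obstacle is near
    progress, log = 0.0, []
    for t in range(steps):
        expected_collision = uncertainty * collision_cost
        if expected_collision < read_reward:
            log.append((t, "read"))
            progress += read_reward
            uncertainty = min(1.0, uncertainty + hazard_growth)
        else:
            log.append((t, "check-path"))
            uncertainty = 0.0    # a glance at the path resolves uncertainty
    return progress, log

if __name__ == "__main__":
    progress, log = simulate()
    switches = sum(1 for _, a in log if a == "check-path")
    print(f"reading progress: {progress}, attention switches: {switches}")
```

Raising hazard_growth (a busier path) makes the simulated user check the path more often and read less, which is the qualitative trade-off the model captures.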
Facilitating Virtual Reality Integration in Medical Education: A Case Study of Acceptability and Learning Impact in Childbirth Delivery Training
Advancements in Virtual Reality (VR) technology have opened new frontiers in medical education, igniting interest among medical educators in incorporating it into mainstream curricula, complementing traditional training modalities such as manikin training. Despite numerous VR simulators on the market, their uptake in medical education remains limited. This paper explores the acceptability and educational effectiveness of VR in the context of vaginal childbirth delivery training, with the simulator providing a walkthrough of the second and third stages of labour, contrasting it with established manikin-based methods. We conducted a large-scale empirical study with 117 medical students, revealing a significant 24.9% improvement in knowledge scores when using VR as compared to the manikin. However, VR received significantly lower self-reported feasibility scores in Confidence, Usability, Enjoyment, Feedback and Presence, indicating low acceptance. The study provides critical insights into the relationship between technological innovation and educational impact, guiding the future integration of VR into medical training curricula.
2024 | Chang Liu et al. | National University of Singapore | CHI | Tags: VR Medical Training & Rehabilitation
Navigating Real-World Challenges: A Quadruped Robot Guiding System for Visually Impaired People in Diverse Environments
Blind and Visually Impaired (BVI) people face challenges when navigating unfamiliar environments, even when using assistive tools such as white canes or smart devices. Increasingly affordable quadruped robots offer opportunities to design autonomous guides that could improve how BVI people find their way around unfamiliar environments and maneuver therein. In this work, we designed RDog, a quadruped robot guiding system that supports BVI individuals' navigation and obstacle avoidance in indoor and outdoor environments. RDog combines an advanced mapping and navigation system to guide users with force feedback and preemptive voice feedback. Using this robot as an evaluation apparatus, we conducted experiments to investigate differences in BVI people's ambulatory behaviors when using a white cane, a smart cane, and RDog. Results illustrated the benefits of RDog-based ambulation, including faster and smoother navigation with fewer collisions and limitations, and reduced cognitive load. We discuss the implications of our work for multi-terrain assistive guidance systems.
2024 | Shaojun Cai et al. | National University of Singapore | CHI | Tags: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Social Robot Interaction
PANDALens: Towards AI-Assisted In-Context Writing on OHMD During Travels
While effective for recording and sharing experiences, traditional in-context writing tools are relatively passive and unintelligent, serving more like instruments than companions. This reduces enjoyment of the primary task (e.g., travel) and hinders high-quality writing. Through a formative study and iterative development, we introduce PANDALens, a Proactive AI Narrative Documentation Assistant built on an Optical See-Through Head-Mounted Display that supports personalized documentation in everyday activities. PANDALens observes multimodal contextual information from user behaviors and the environment to confirm interests and elicit contemplation, and employs Large Language Models to transform such multimodal information into coherent narratives with significantly reduced user effort. A real-world travel scenario comparing PANDALens with a smartphone alternative confirmed its effectiveness in improving writing quality and travel enjoyment while minimizing user effort. Accordingly, we propose design guidelines for AI-assisted in-context writing, highlighting the potential of transforming such tools into intelligent companions.
2024 | Runze Cai et al. | National University of Singapore | CHI | Tags: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration
GlassMessaging: Towards Ubiquitous Messaging Using OHMDs
https://doi.org/10.1145/3610931
2023 | Nuwan Janaka et al. | UbiComp | Tags: Context-Aware Computing; Ubiquitous Computing
Mindful Moments: Exploring On-the-go Mindfulness Practice On Smart-glasses
Mindfulness technologies have gained research interest in recent years. We explore the use of smart-glasses (Optical Head-Mounted Displays, or OHMDs) for breath-based mindfulness practice as a well-being technology for everyday users. Since OHMDs do not occlude the wearer's view, practitioners can access the digital environment while performing daily activities. Through our pilot series, we identified suitable visual and auditory attributes for OHMD mindfulness sessions in casual walking settings, and combined user-preferred features into our proposed Mindful Moments design. Results on physiological, sustained attention, and self-reported mindfulness measures suggest that Mindful Moments facilitates higher state mindfulness than the Control. Its results proved comparable to the state-of-the-art Walking Meditation, while also being more accessible, convenient, and easy for novice practitioners to implement in everyday environments. We further evaluate Mindful Moments in a realistic setting, enhancing current understanding of mindfulness practice on OHMDs and thereby contributing a technique for improved health and well-being.
2023 | Felicia Fang-Yi Tan et al. | DIS | Tags: Fitness Tracking & Physical Activity Monitoring; Sleep & Stress Monitoring; Smartwatches & Fitness Bands
Not all spacings are created equal: The Effect of Text Spacings in On-the-go Reading Using Optical See-Through Head-Mounted Displays
The emergent Optical Head-Mounted Display (OHMD) platform has made mobile reading possible by superimposing digital text onto users' view of the environment. However, mobile reading through OHMDs needs to be effectively balanced with the user's environmental awareness. Hence, a series of studies was conducted to explore how text spacing strategies facilitate such balance. Through these studies, it was found that increasing spacing within the text can significantly enhance mobile reading on OHMDs in both simple and complex navigation scenarios, and that such benefits mainly come from increasing the inter-line spacing, not the inter-word spacing. Compared with existing positioning strategies, increasing inter-line spacing improves mobile OHMD information reading in terms of reading speed (11.9% faster), walking speed (3.7% faster), and switching between reading and navigation (106.8% more accurate and 33% faster).
2023 | Chen Zhou et al. | National University of Singapore | CHI | Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Eye Tracking & Gaze Interaction
ParaGlassMenu: Towards Social-Friendly Subtle Interactions in Conversations
Interactions with digital devices during social settings can reduce social engagement and interrupt conversations. To overcome these drawbacks, we designed ParaGlassMenu, a semi-transparent circular menu that can be displayed around a conversation partner's face on an Optical See-Through Head-Mounted Display (OHMD) and operated subtly using a ring mouse. We evaluated ParaGlassMenu against several alternative approaches (Smartphone, Voice assistant, and Linear OHMD menus) by manipulating Internet-of-Things (IoT) devices in a simulated conversation setting with a digital partner. Results indicated that ParaGlassMenu offered the best overall performance in balancing social engagement and digital interaction needs in conversations. To validate these findings, we conducted a second study in a realistic conversation scenario involving commodity IoT devices. Results confirmed the utility and social acceptance of ParaGlassMenu. Based on the results, we discuss implications for designing attention-maintaining subtle interaction techniques on OHMDs.
2023 | Runze Cai et al. | National University of Singapore | CHI | Tags: AR Navigation & Context Awareness; Mixed Reality Workspaces; Context-Aware Computing
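The core layout problem of a menu anchored around a partner's face reduces to placing items evenly on a circle in screen space. A minimal geometry sketch follows; the function name and parameters are assumptions, not the released system's code.

```python
# Minimal sketch: evenly space n menu items on a circle around a face anchor
# (cx, cy) in screen coordinates, starting from the 12 o'clock position.
import math

def circular_layout(cx: float, cy: float, radius: float, n_items: int):
    """Return (x, y) positions for n_items spaced evenly around the anchor."""
    positions = []
    for i in range(n_items):
        angle = -math.pi / 2 + 2 * math.pi * i / n_items
        positions.append((cx + radius * math.cos(angle),
                          cy + radius * math.sin(angle)))
    return positions

# e.g., 6 items around a face detected at (640, 360), radius 180 px:
# circular_layout(640, 360, 180, 6)
```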
Can Icons outperform Text? Understanding the Role of Pictograms in OHMD Notifications
Optical see-through head-mounted displays (OHMDs) can provide just-in-time digital assistance to users while they are engaged in ongoing tasks. However, given users' limited attentional resources when multitasking, there is a need to concisely and accurately present information in OHMDs. Existing approaches for digital information presentation involve using either text or pictograms. While pictograms have enabled rapid recognition and easier use in warning messages and traffic signs, most studies using pictograms for digital notifications have exhibited unfavorable results. We thus conducted a series of four iterative studies to understand how we can support effective notification presentation on OHMDs during multitasking scenarios. We find that while icon-augmented notifications can outperform text-only notifications, their effectiveness depends on icon familiarity, encoding density, and environmental brightness. We reveal design implications for using icon-augmented notifications in OHMDs and present plausible reasons for the observed disparity in the literature.
2023 | Nuwan Janaka et al. | National University of Singapore | CHI | Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Eye Tracking & Gaze Interaction; Notification & Interruption Management
Does Dynamically Drawn Text Improve Learning? Investigating the Effect of Text Presentation Styles in Video Learning
Dynamically drawn content (e.g., handwritten text) in learning videos is believed to improve users' engagement and learning over static PowerPoint-based content. However, evidence from the existing literature is inconclusive. With the emergence of Optical Head-Mounted Displays (OHMDs), recent work has shown that video learning can be adapted for on-the-go scenarios. To better understand the role of dynamic drawing, we decoupled dynamically drawn text into two factors (font style and motion of appearance) and studied their impact on learning performance under two usage scenarios (seated with a desktop and walking with an OHMD). We found that although letter-traced text was more engaging for some users, most preferred learning with typeface text that displayed the entire word at once and achieved better recall (46.7% higher), regardless of the usage scenario. Insights from these studies can better inform designers on how to present text in videos for ubiquitous access.
2022 | Ashwin Ram et al. | National University of Singapore | CHI | Tags: AR Navigation & Context Awareness; Online Learning & MOOC Platforms; STEM Education & Science Communication
Paracentral and near-peripheral visualizations: Towards attention-maintaining secondary information presentation on OHMDs during in-person social interactions
Optical see-through Head-Mounted Displays (OST HMDs, OHMDs) are known to facilitate situational awareness while accessing secondary information. However, information displayed on OHMDs can cause attention shifts, which distract users from natural social interactions. We hypothesize that information displayed in paracentral and near-peripheral vision can be better perceived while the user is maintaining eye contact during face-to-face conversations. Leveraging this idea, we designed a circular progress bar to provide progress updates in paracentral and near-peripheral vision. We compared it with textual and linear progress bars under two conversation settings: a simulated one with a digital conversation partner and a realistic one with a real partner. Results show that a circular progress bar can effectively reduce notification distractions without losing eye contact and is more preferred by users. Our findings highlight the potential of utilizing paracentral and near-peripheral vision for secondary information presentation on OHMDs.
2022 | Nuwan Nanayakkarawasam Peru Kandage Janaka et al. | National University of Singapore | CHI | Tags: Mixed Reality Workspaces; Context-Aware Computing
From Lost to Found: Discover Missing UI Design Semantics through Recovering Missing Tags
Design sharing sites provide UI designers with a platform to share their work and an opportunity to draw inspiration from others' designs. To facilitate the management and search of millions of UI design images, many design sharing sites adopt collaborative tagging systems, distributing the work of categorization to the community. However, designers often do not know how to properly tag a design image with a compact textual description, resulting in unclear, incomplete, and inconsistent tags for uploaded examples that impede retrieval, according to our empirical study and interviews with four professional designers. Building on deep neural networks, we introduce a novel approach that encodes both visual and textual information to recover the missing tags of existing UI examples so that they can be more easily found by text queries. We achieve 82.72% accuracy in tag prediction. In a simulation test of 5 queries, our system on average returned hundreds of times more results than the default Dribbble search, leading to better relatedness, diversity, and satisfaction.
2020 | Kun-Ting Chen et al. | CSCW | Tags: UX of AI
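A minimal sketch, assuming PyTorch, of the general fusion idea the abstract describes: encode visual and textual features, concatenate them, and predict per-tag logits for multi-label tag recovery. The architecture and dimensions here are illustrative assumptions, not the authors' network.

```python
# Hedged sketch of visual-textual fusion for multi-label tag prediction.
import torch
import torch.nn as nn

class TagRecoveryNet(nn.Module):
    def __init__(self, img_dim=512, txt_dim=128, hidden=256, n_tags=1000):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_tags),   # one logit per candidate tag
        )

    def forward(self, img_feat, txt_feat):
        # img_feat: (B, img_dim), e.g., from a CNN over the UI screenshot;
        # txt_feat: (B, txt_dim), e.g., embedded existing tags/description.
        return self.fuse(torch.cat([img_feat, txt_feat], dim=-1))

# Multi-label training would use a sigmoid per tag, e.g.:
# loss = nn.BCEWithLogitsLoss()(model(img, txt), tag_targets)
```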
EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input
On-the-go text editing is difficult, yet frequently done in everyday life. Using smartphones for editing text forces users into a heads-down posture, which can be undesirable and unsafe. We present EYEditor, a heads-up, smartglass-based solution that displays text on a see-through peripheral display and allows text editing with voice and manual input. The choices of output modality (visual and/or audio) and content presentation were made after a controlled experiment, which showed that sentence-by-sentence, visual-only presentation is best for optimizing users' editing and path-navigation capabilities. A second experiment formally evaluated EYEditor against the standard smartphone-based solution for tasks with varied editing complexities and navigation difficulties. The results showed that EYEditor outperformed smartphones as either the path or the task became more difficult. Yet the advantage of EYEditor became less salient when both the editing and the navigation were difficult. We discuss trade-offs and insights gained for future heads-up text-editing solutions.
2020 | Debjyoti Ghosh et al. | National University of Singapore | CHI | Tags: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Voice User Interface (VUI) Design
Learn with Haptics: Improving Vocabulary Recall with Free-form Digital Annotation on Touchscreen Mobiles
Mobile vocabulary learning interfaces typically present material only in auditory and visual channels, underutilizing the haptic modality. We explored haptic-integrated learning by adding free-form digital annotation to mobile vocabulary learning interfaces. Through a series of pilot studies, we identified three design factors that influence recall in haptic-integrated vocabulary interfaces: annotation mode, presentation sequence, and vibrotactile feedback. These factors were then evaluated in a within-subject comparative study using a digital flashcard interface as the baseline. Results using an 84-item vocabulary showed that the 'whole word' annotation mode is highly effective, yielding a 24.21% increase in immediate recall scores and a 30.36% increase in 7-day delayed scores. Effects of presentation sequence and vibrotactile feedback were more transient; they affected the results of immediate tests but not the delayed tests. We discuss the implications of these factors for designing future mobile learning applications.
2020 | Smitha Sheshadri et al. | National University of Singapore | CHI | Tags: Haptic Wearables; Motor Impairment Assistive Input Technologies
Virtually-Extended Proprioception: Providing Spatial Reference in VR through an Appended Virtual Limb
Selecting targets directly in the virtual world is difficult due to the lack of haptic feedback and inaccurate estimation of egocentric distances. Proprioception, the sense of self-movement and body position, can be utilized to improve virtual target selection by placing targets on or around one's body, but its effective scope is limited to the area closely around one's body. We explore the concept of virtually-extended proprioception by appending virtual body parts that mimic real body parts to users' avatars, providing spatial reference for virtual targets. Our studies suggest that this approach facilitates more efficient target selection in VR compared to no reference or an everyday object as reference. Moreover, by cultivating users' sense of ownership of the appended virtual body part, we can further enhance target selection performance. The effects of the transparency and granularity of the virtual body part on target selection performance are also discussed.
2020 | Yang Tian et al. | The Chinese University of Hong Kong | CHI | Tags: Immersion & Presence Research; Identity & Avatars in XR
Gallery D.C.: Design Search and Knowledge Discovery through Auto-created GUI Component Gallery
Online communities like Dribbble and GraphicBurger allow GUI designers to share their design artwork and learn from each other. These design sharing platforms are important sources of design inspiration, but our survey with GUI designers suggests additional information needs unmet by existing platforms. First, designers need to see the practical use of certain GUI designs in real applications, rather than just artworks. Second, designers want to see not only the overall designs but also the detailed design of the GUI components. Third, designers need advanced GUI design search abilities (e.g., multi-facet search) and knowledge discovery support (e.g., demographic investigation, cross-company design comparison). This paper presents Gallery D.C. (http://mui-collection.herokuapp.com/), a gallery of GUI design components that harnesses GUI designs crawled from millions of real-world applications using reverse-engineering and computer vision techniques. Through a process of invisible crowdsourcing, Gallery D.C. supports novel ways for designers to collect, analyze, search, summarize, and compare GUI designs on a massive scale. We quantitatively evaluate the quality of Gallery D.C. and demonstrate that it offers additional support for design sharing and knowledge discovery beyond existing platforms.
2019 | Kun-Ting Chen et al. | CSCW | Tags: Expert Work
AVEID: Automatic Video System For Measuring Engagement in Dementia
Engagement in dementia is typically measured using behavior observational scales (BOS), which are tedious to annotate, involve intensive manual labor, and are therefore not easily scalable. We present AVEID, a low-cost and easy-to-use video-based engagement measurement tool that determines the level of engagement of a person with dementia (PwD) when interacting with a target object. We show that the objective behavioral measures computed via AVEID correlate well with subjective expert impressions for the popular MPES and OME BOS, confirming its viability and effectiveness.
2018 | Pin Sym Foong et al. | IUI | Tags: Elderly Care & Dementia Support; Biosensors & Physiological Monitoring
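The validation step the abstract describes, checking that automatically computed measures track expert BOS impressions, amounts to a correlation test. A sketch with made-up data, assuming SciPy is available (this is not AVEID's code):

```python
# Illustrative sketch: correlate automatic engagement scores with expert
# BOS ratings. All data values below are hypothetical.
from scipy.stats import pearsonr

auto_scores   = [0.2, 0.5, 0.4, 0.8, 0.7, 0.9]   # hypothetical AVEID outputs
expert_scores = [1,   2,   2,   4,   3,   5]     # hypothetical BOS ratings

r, p = pearsonr(auto_scores, expert_scores)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```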