NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences
Generative AI is reshaping education by enabling personalized, on-demand learning experiences. However, current AI systems lack awareness of the learner’s cognitive state, limiting their adaptability. In parallel, electroencephalography (EEG)-based neuroadaptive systems have shown promise in enhancing engagement through real-time physiological feedback. This paper introduces NeuroChat, a neuroadaptive AI tutor that integrates real-time EEG-based engagement tracking with a large language model to adapt its conversational responses. By continuously monitoring learners’ cognitive engagement, NeuroChat dynamically adjusts content complexity, tone, and response style in a closed-loop interaction. In a within-subjects study (n=24), NeuroChat significantly increased both EEG-measured and self-reported engagement compared to a non-adaptive chatbot. However, no significant differences in short-term learning outcomes were observed. These findings demonstrate the feasibility of real-time brain–AI interaction for education and highlight opportunities for deeper personalization, longer-term adaptation, and richer learning assessment in future neuroadaptive systems.
2025 · Dunya Baradari et al. · Topics: Brain-Computer Interface (BCI) & Neurofeedback; Human-LLM Collaboration; Intelligent Tutoring Systems & Learning Analytics · CUI

From Synthetic to Human: The Gap Between AI-Predicted and Actual Pro-Environmental Behavior Change After Chatbot Persuasion
Pro-environmental behavior (PEB) is vital to combat climate change, yet turning awareness into intention and action remains elusive. We explore large language models (LLMs) as tools to promote PEB, comparing their impact across 3,600 participants: real humans (n=1,200), simulated humans based on actual participant data (n=1,200), and fully synthetic personas (n=1,200). All three participant groups faced either personalized chatbots, standard chatbots, or static statements, employing four persuasion strategies (moral foundations, future self-continuity, action orientation, or "freestyle" chosen by the LLM). Results reveal a "synthetic persuasion paradox": synthetic and simulated participants significantly change their post-intervention PEB stance, while human attitudes barely shift. Simulated participants better approximate human behavior but still overestimate effects. This disconnect underscores LLMs’ potential for pre-evaluating PEB interventions but warns of their limits in predicting human responses. We call for refined synthetic modeling and sustained, extended human trials to align conversational AI’s promise with tangible sustainability outcomes.
2025 · Alexander Doudkin et al. · Topics: AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias; Sustainable HCI · CUI

MemPal: Leveraging Multimodal AI and LLMs for Voice-Activated Object Retrieval in Homes of Older Adults
Older adults have increasing difficulty with retrospective memory, hindering their ability to perform daily activities and placing stress on caregivers to ensure their wellbeing. Recent developments in Artificial Intelligence (AI) and large context-aware multimodal models offer an opportunity to create memory support systems that assist older adults with common issues like object finding. This paper discusses the development of an AI-based, wearable memory assistant, MemPal, that helps older adults with a common problem, finding lost objects at home, and presents results from tests of the system in older adults' own homes. Using visual context from a wearable camera, the multimodal LLM system creates a real-time automated text diary of the person's activities for memory support purposes, offering object retrieval assistance through a voice-based interface. The system is designed to support additional use cases like context-based proactive safety reminders and recall of past actions. We report on a quantitative and qualitative study with N=15 older adults within their own homes that showed improved object-finding performance with audio-based assistance compared to no aid, and positive overall user perceptions of the designed system. We discuss further applications of MemPal’s design as a multi-purpose memory aid and future design guidelines to adapt memory assistants to older adults’ unique needs.
2025 · Natasha Maniar et al. · Topics: Smart Home Interaction Design; Aging-in-Place Assistance Systems · IUI

Synthetic Human Memories: AI-Edited Images and Videos Can Implant False Memories and Distort Recollection
AI is increasingly used to enhance images and videos, both intentionally and unintentionally. As AI editing tools become more integrated into smartphones, users can modify or animate photos into realistic videos. This study examines the impact of AI-altered visuals on false memories—recollections of events that didn’t occur or deviate from reality. In a pre-registered study, 200 participants were divided into four conditions of 50 each. Participants viewed original images, completed a filler task, then saw stimuli corresponding to their assigned condition: unedited images, AI-edited images, AI-generated videos, or AI-generated videos of AI-edited images. AI-edited visuals significantly increased false recollections, with AI-generated videos of AI-edited images having the strongest effect (2.05x compared to control). Confidence in false memories was also highest for this condition (1.19x compared to control). We discuss potential applications in HCI, such as therapeutic memory reframing, and challenges in ethical, legal, political, and societal domains.
2025 · Pat Pataranutaporn et al. (Massachusetts Institute of Technology, MIT Media Lab) · Topics: Generative AI (Text, Image, Music, Video); Explainable AI (XAI); AI Ethics, Fairness & Accountability · CHI

Leveraging AI-Generated Emotional Self-Voice to Nudge People towards their Ideal Selves
Emotions, shaped by past experiences, significantly influence decision-making and goal pursuit. Traditional cognitive-behavioral techniques for personal development rely on mental imagery to envision ideal selves, but may be less effective for individuals who struggle with visualization. This paper introduces Emotional Self-Voice (ESV), a novel system combining emotionally expressive language models and voice cloning technologies to render customized responses in the user's own voice. We investigate the potential of ESV to nudge individuals towards their ideal selves in a study with 60 participants. Across all three conditions (ESV, text-only, and mental imagination), we observed an increase in resilience, confidence, motivation, and goal commitment, and the ESV condition was perceived as uniquely engaging and personalized. We discuss the implications of designing generated self-voice systems as a personalized behavioral intervention for different scenarios.
2025 · Cathy Mengying Fang et al. (MIT Media Lab) · Topics: Intelligent Voice Assistants (Alexa, Siri, etc.); Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability · CHI

Talk to the Hand: an LLM-powered Chatbot with Visual Pointer as Proactive Companion for On-Screen Tasks
This paper presents Pointer Assistant, a novel human-AI interaction technique for on-screen tasks. The design features a chatbot displayed as an extra mouse pointer, alongside the user's, which proactively gives feedback on user actions while directing them to relevant areas on the screen and responding to the user's direct chat messages. The effectiveness of the design's key characteristics, pointer form and proactivity, was investigated in a study involving 220 participants in a financial budget planning task. Results demonstrated that, compared to a traditional passive chat log design, the pointer design and interaction reduced task load, improved satisfaction with the experience, and increased the number of budget categories ideated during the task. Participants viewed Pointer Assistant as a fun, innovative, and helpful visual guide while noting that its assertiveness could be improved. Future developments could further enhance the user experience of human-AI collaboration and task outcomes.
2025 · Thanawit Prasongpongchai et al. (KASIKORN Business-Technology Group, Beacon Interface) · Topics: Voice User Interface (VUI) Design; Human-LLM Collaboration; Interactive Data Visualization · CHI

Putting Things into Context: Generative AI-Enabled Context Personalization for Vocabulary Learning Improves Learning Motivation
Fostering students' interests in learning is considered to have many positive downstream effects. Large language models have opened up new horizons for generating content tuned to one's interests, yet it is unclear in what ways and to what extent this customization could have positive effects on learning. To explore this novel dimension, we conducted a between-subjects online study (n=272) featuring different variations of a generative AI vocabulary learning app that enables users to personalize their learning examples. Participants were randomly assigned to control (sentence sourced from pre-existing text) or experimental conditions (generated sentence or short story based on users’ text input). While we did not observe a difference in learning performance between the conditions, the analysis revealed that generative AI-driven context personalization positively affected learning motivation. We discuss how these results relate to previous findings and underscore their significance for the emerging field of using generative AI for personalized learning.
2024 · Joanne Leong et al. (MIT) · Topics: Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Online Learning & MOOC Platforms · CHI

Improving Attention Using Wearables via Haptic and Multimodal Rhythmic Stimuli
Rhythmic light, sound, and haptic stimuli can improve cognition through neural entrainment and by modifying autonomic nervous system function. However, the effects and user experience of using wearables to induce such rhythmic stimuli have been under-investigated. We conducted a study with 20 participants to understand the effects of rhythmic stimulation wearables on attention. We found that combined sound and light stimuli from a glasses device provided the strongest improvement to attention but were the least usable and socially acceptable. Haptic vibration stimuli from a wristband also improved attention and were the most usable and socially acceptable. Our field study (N=12) with haptic stimuli from a smartwatch showed that such systems can be easy to use and were used frequently in a range of contexts, but more exploration is needed to improve comfort. Our work contributes to developing future wearables to support attention and cognition.
2024 · Nathan W Whitmore et al. (Massachusetts Institute of Technology) · Topics: Vibrotactile Feedback & Skin Stimulation; Haptic Wearables; Foot & Wrist Interaction · CHI

An Accessible, Three-Axis Plotter for Enhancing Calligraphy Learning through Generated Motion
2024 · Cathy Mengying Fang et al. (MIT Media Lab) · Topics: Special Education Technology; Shape-Changing Materials & 4D Printing · CHI

Memoro: Using Large Language Models to Realize a Concise Interface for Real-Time Memory Augmentation
People have to remember an ever-expanding volume of information. Wearables that use information capture and retrieval for memory augmentation can help but can be disruptive and cumbersome in real-world tasks, such as in social settings. To address this, we developed Memoro, a wearable audio-based memory assistant with a concise user interface. Memoro uses a large language model (LLM) to infer the user’s memory needs in a conversational context, semantically search memories, and present minimal suggestions. The assistant has two interaction modes: Query Mode for voicing queries and Queryless Mode for on-demand predictive assistance, without explicit query. Our study, in which participants (N=20) engaged in a real-time conversation, demonstrated that using Memoro reduced device interaction time and increased recall confidence while preserving conversational quality. We report quantitative results and discuss the preferences and experiences of users. This work contributes towards utilizing LLMs to design wearable memory augmentation systems that are minimally disruptive.
2024 · Wazeer Deen Zulfikar et al. (MIT Media Lab) · Topics: Brain-Computer Interface (BCI) & Neurofeedback; Human-LLM Collaboration; Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · CHI

AI Comes Out of the Closet: Using AI-Generated Virtual Characters to Help Individuals Practice LGBTQIA+ Advocacy
Despite significant historical progress, discrimination and social stigma continue to impact the lives of LGBTQIA+ individuals. The use of AI-generated virtual characters offers a unique opportunity to facilitate advocacy by engaging individuals in simulated conversations that can foster understanding, education, and empathy. This paper explores the potential of AI simulations in helping individuals practice LGBTQIA+ advocacy, while also acknowledging the need for ethical considerations and addressing concerns about oversimplification or perpetuation of stereotypes. By combining technological innovation with a commitment to inclusivity, we aim to contribute to the ongoing struggle for equality in both the legal framework and the hearts and minds of the community. We present a study evaluating virtual characters driven by generative conversational AI simulating the social interactions surrounding ‘coming out of the closet’, a rite of passage associated with LGBTQIA+ communities. In our study, virtual characters embodied as queer individuals engage with users in a text-based conversation simulation paired with visual representations. We investigate how the interactions between the virtual characters and a user influence the user’s comfort, confidence, empathy, and sympathy. We developed an AI simulation with distinct visual personas and deployed a series of conditions, exploring the potential of these interfaces for simulating queer social interactions to enhance LGBTQIA+ advocacy and cultural acceptance. We present findings from such deployments involving 323 users. Finally, we discuss the design implications of our work on the potential future of embodied, self-actuated, and openly LGBTQIA+ intelligent agents.
2024 · Daniel Pillis et al. · Topics: Agent Personality & Anthropomorphism; Generative AI (Text, Image, Music, Video); Gender & Race Issues in HCI · IUI

Joie: a Joy-based BCI
The size and cost of electroencephalography (EEG) headsets have been decreasing at a steady pace. Cortical frontal activity is a promising input method that is also important for affect regulation. We created Joie, a joy-based EEG brain-computer interface (BCI) which uses prefrontal asymmetries associated with joyful thoughts as input to an endless runner video game. The more these prefrontal asymmetries are activated, the more coins the character collects in response. In a lab study (20 participants, 15 training sessions per participant, up to two weeks of training), we found that our experiment group, instructed to imagine positive music, winning awards, and similar strategies, demonstrated significantly greater ability to activate asymmetries compared to our placebo and control groups. In our analysis, Joie demonstrates that frontal asymmetries can serve as input to an affective BCI and builds upon prior work in this area. In the future, training these asymmetries can teach mental strategies that have applications in mental health.
2023 · Angela Vujic et al. · Topics: Brain-Computer Interface (BCI) & Neurofeedback; Game UX & Player Behavior; Mental Health Apps & Online Support Communities · UIST

"Picture the Audience...": Exploring Private AR Face Filters for Online Public SpeakingFaced with public speaking anxiety, one common piece of advice is to picture the audience in a new light, using your mind’s eye. With Augmented Reality (AR) face filters, it becomes possible to literally change how one sees oneself or others. In this paper, we explore privately applied AR filters during online public speaking. Private means that these effects are only visible to the speaker. To investigate this possibly controversial concept, we conducted an online survey with 100 respondents to gather a diverse set of initial impressions, possible boundaries, and guidelines. Following this, we built a prototype of a private AR web-based video-calling application, and pilot-tested it with 16 participants to gain more in-depth insights. Based on our results, we outline key user perspectives and opportunities for the private application of AR face filters during online public speaking and discuss them in the context of previous literature on this topic.2023JLJoanne Leong et al.MIT Media LabAR Navigation & Context AwarenessInteractive Narrative & Immersive StorytellingCHI
Don’t Just Tell Me, Ask Me: AI Systems that Intelligently Frame Explanations as Questions Improve Human Logical Discernment Accuracy over Causal AI Explanations
Critical thinking is an essential human skill. Despite its importance, research reveals that our reasoning ability suffers from personal biases and cognitive resource limitations, leading to potentially dangerous outcomes. This paper presents the novel idea of AI-framed Questioning, which turns information relevant to the AI classification into questions to actively engage users' thinking and scaffold their reasoning process. We conducted a study with 204 participants comparing the effects of AI-framed Questioning on a critical thinking task: discerning the logical validity of socially divisive statements. Our results show that, compared to no feedback and even to causal AI explanations from an always-correct system, AI-framed Questioning significantly increases human discernment of logically flawed statements. Our experiment exemplifies a future style of human-AI co-reasoning system, where the AI becomes a critical thinking stimulator rather than an information teller.
2023 · Valdemar Danry et al. (MIT) · Topics: Explainable AI (XAI); Privacy by Design & User Control · CHI

Olfactory Wearables for Mobile Targeted Memory Reactivation
This paper investigates how a smartphone-controlled olfactory wearable might improve memory recall. We conducted a within-subjects experiment in which 32 participants completed tasks with the device and without it (control). In the experimental condition, bursts of odor were released during visuo-spatial memory navigation tasks, and replayed during sleep the following night in the subjects' homes. We found that, compared to control, memory performance improved when using the scent wearable in memory tasks that involved walking in a physical space. Furthermore, participants recalled more objects and translations when re-exposed to the same scent during the recall test, in addition to during sleep. These effects were statistically significant, and, in the object recall task, they also persisted for more than one week. This experiment demonstrates a potential practical application of olfactory interfaces that can interact with a user during wake as well as sleep to support memory.
2023 · Judith Amores Fernandez et al. (Microsoft, MIT) · Topics: Electronic Textiles (E-textiles); Context-Aware Computing · CHI

On-Face Olfactory Interfaces
On-face wearables are currently limited to piercings, tattoos, or interactive makeup that aesthetically enhances the user, and have been minimally used for scent delivery. However, on-face scent interfaces could provide an advantage for personal scent delivery in comparison with other modalities or body locations, since they are closer to the nose. In this paper, we present the mechanical and industrial design details of a series of form factors for on-face olfactory wearables that are lightweight and can be adhered to the skin or attached to glasses or piercings. We assessed the usability of three prototypes by testing with 12 participants in a within-subject study design while they were interacting in pairs at a close personal distance. We compare two of these designs with an "off-face" olfactory necklace and evaluate their social acceptance, comfort, and perceived odor intensity for both the wearer and observer.
2020 · Yanan Wang et al. (Zhejiang University) · Topics: On-Skin Display & On-Skin Input · CHI

Next Steps for Human-Computer Integration
Human-Computer Integration (HInt) is an emerging paradigm in which computational and human systems are closely interwoven. Integrating computers with the human body is not new; however, we believe that with rapid technological advancements, increasing real-world deployments, and growing ethical and societal implications, it is critical to identify an agenda for future research. We present a set of challenges for HInt research, formulated over the course of a five-day workshop with 29 experts who have designed, deployed, and studied HInt systems. This agenda aims to guide researchers in a structured way towards a more coordinated and conscientious future of human-computer integration.
2020 · Florian Floyd Mueller et al. (Monash University) · Topics: Brain-Computer Interface (BCI) & Neurofeedback; Technology Ethics & Critical HCI; User Research Methods (Interviews, Surveys, Observation) · CHI

AttentivU: Biofeedback Glasses to Monitor and Improve Engagement and Vigilance in the Car
Several research projects have recently explored the use of physiological sensors such as electroencephalography (EEG) or electrooculography (EOG) to measure the engagement and vigilance of a user in the context of car driving. However, these systems still suffer from limitations such as the absence of a socially acceptable form factor and the use of impractical, gel-based electrodes. We present AttentivU, a device using both EEG and EOG for real-time monitoring of physiological data. The device is designed as a socially acceptable pair of glasses and employs silver electrodes. It also supports real-time delivery of feedback in the form of an auditory signal via a bone conduction speaker embedded in the glasses. A detailed description of the hardware design and proof-of-concept prototype is provided, as well as preliminary data collected from 20 users performing a driving task in a simulator to evaluate the signal quality of the physiological data.
2019 · Nataliya Kosmyna et al. · Topics: Eye Tracking & Gaze Interaction; Biosensors & Physiological Monitoring · AutoUI

Adding Proprioceptive Feedback to Virtual Reality Experiences Using Galvanic Vestibular Stimulation
We present a small and lightweight wearable device that enhances virtual reality experiences and reduces cybersickness by means of galvanic vestibular stimulation (GVS). GVS is a specific way to elicit vestibular reflexes that has been used for over a century to study the function of the vestibular system. In addition to GVS, we support physiological sensing by connecting heart rate, electrodermal activity, and other sensors to our wearable device using a plug-and-play mechanism. An accompanying Android app communicates with the device over Bluetooth Low Energy (BLE) to transmit the GVS stimulus to the user through electrodes attached behind the ears. Our system supports multiple categories of virtual reality applications with different types of virtual motion, such as driving, navigating by flying, teleporting, or riding. We present a user study in which participants (N=20) experienced significantly lower cybersickness when using our device and rated experiences with GVS-induced haptic feedback as significantly more immersive than a no-GVS baseline.
2019 · Misha Sra et al. (Massachusetts Institute of Technology) · Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research; Sleep & Stress Monitoring · CHI

AlterEgo: a Personalised Wearable Silent Speech Interface
We present a wearable interface that allows a user to silently converse with a computing device without any voice or any discernible movements, thereby enabling the user to communicate with devices, AI assistants, applications, or other people in a silent, concealed, and seamless manner. A user’s intention to speak and internal speech is characterised by neuromuscular signals and subtle movements in internal speech articulators, which are captured by the AlterEgo system to reconstruct this speech. We use this to facilitate a natural language user interface, where users can silently communicate in natural language and receive aural output (e.g., via bone conduction headphones), thereby enabling a discreet, bi-directional interface with a computing device and providing a seamless form of intelligence augmentation. The paper describes the architecture, design, implementation, and operation of the entire system. Furthermore, we demonstrate the robustness of the system through user studies and report accuracies above 91%.
2018 · Arnav Kapur et al. · Topics: Haptic Wearables; Brain-Computer Interface (BCI) & Neurofeedback; Voice User Interface (VUI) Design · IUI