AiModerator: A Co-Pilot for Hyper-Contextualization in Political Debate Video

Political debates are essential to political discourse in democratic societies. Advancements in technology have significantly transformed the structure of political debates, the ways in which politicians communicate, and the platforms through which audiences engage with them. Originally a forum for improving understanding, political debates have increasingly favored theatrics over substance, risking the disengagement of young adults. To bring substance back to this medium, we developed AiModerator, a political debate co-pilot acting as a Multimodal Conversational Agent (MCA). AiModerator aims to promote engagement while improving understanding by analyzing video content to provide contextually relevant information. This consolidated information facilitates understanding while keeping users synchronized with the debate viewing experience. Our system builds upon multimodal techniques, integrating computer vision and large language models, to demonstrate ways of improving content delivery and engagement. AiModerator's backend extracts events from identified speech data, allowing the user to interact with these events through a touch interface in an iPad application. We address three key topics: evaluating young adults' engagement, satisfaction, and preference compared to traditional second screening, and determining whether AiModerator can improve subjective understanding. To evaluate these measures, we conducted a mixed-method, within-group A-B study (n=20). Our analysis found that AiModerator excelled in promoting engagement and satisfaction while delivering clear, contextually relevant information, improving participants' understanding of debate topics more than the second-screening mode. Our qualitative analysis offers broader insights, particularly regarding a trade-off between automation and information consolidation versus autonomy and control.

2025 · Peter Andrews et al. · Agent Personality & Anthropomorphism · Human-LLM Collaboration · Context-Aware Computing · IUI
RedirectedStepper: Exploring Walking-In-Place Locomotion in VR Using a Mini Stepper for Ascents

Walking on inclined surfaces is common in some Virtual Reality (VR) scenarios, for instance, when moving between floors of a building, climbing a tower, or ascending a virtual mountain. Existing approaches enabling realistic walking experiences in such settings typically require the user to use bulky walking-in-place hardware or to walk in a physical area. Addressing this challenge, we present RedirectedStepper, a locomotion technique leveraging a novel device based on a mini exercise stepper to provide realistic VR staircase walking experiences by alternating the tilt of the two stepper pedals. RedirectedStepper employs a new exponential mapping function to visually morph the user's real foot motion to a corresponding curved path in the virtual environment (VE). Combining this stepper and the visual mapping function provides an in-place locomotion technique allowing users to virtually ascend an infinite staircase or slope while walking-in-place (WIP). We conducted three within-subject user studies (n=36) comparing RedirectedStepper with a WIP locomotion technique using the Kinect. Our studies indicate that RedirectedStepper improves the users' sense of realism in walking on staircases in VR. Based on a set of design implications derived from the user studies, we developed SnowRun, a VR exergame application, demonstrating the use of the RedirectedStepper concept.

2025 · Quang-Tri Le et al. · University of Science, VNU-HCM · Full-Body Interaction & Embodied Input · Immersion & Presence Research · Serious & Functional Games · CHI
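The abstract does not publish the paper's actual mapping function, so the following is a purely illustrative sketch of what an "exponential mapping" from in-place stepping to a curved virtual path could look like: the virtual heading saturates exponentially with the cumulative real step distance, so early steps feel straight and the redirection tightens gradually. The function name, the gain constant `k`, and the saturation form are all assumptions, not the authors' implementation.

```python
import math

def curved_path_point(real_distance: float, k: float = 0.1,
                      max_turn: float = math.pi / 2) -> tuple[float, float]:
    """Map cumulative real (in-place) step distance, in metres, to a
    point (x, y) on a curved virtual path. The heading saturates
    exponentially toward max_turn: theta(d) = max_turn * (1 - e^(-k*d)).
    Hypothetical form; the published mapping may differ."""
    x = y = 0.0
    steps = max(1, int(real_distance / 0.01))
    ds = real_distance / steps
    # Integrate position numerically along the curving heading.
    for i in range(steps):
        theta = max_turn * (1.0 - math.exp(-k * (i * ds)))
        x += ds * math.sin(theta)  # lateral drift grows as the path curves
        y += ds * math.cos(theta)  # forward progress
    return x, y
```

A VE could apply this per rendered frame, feeding it the distance accumulated from pedal tilt alternations, so that physically stationary stepping produces a smoothly curving virtual ascent.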
SIM2VR: Towards Automated Biomechanical Testing in VR

Automated biomechanical testing has great potential for the development of VR applications, as initial insights into user behaviour can be gained in silico early in the design process. In particular, it allows prediction of user movements and ergonomic variables, such as fatigue, prior to conducting user studies. However, there is a fundamental disconnect between simulators hosting state-of-the-art biomechanical user models and simulators used to develop and run VR applications. Existing user simulators often struggle to capture the intricacies of real-world VR applications, reducing ecological validity of user predictions. In this paper, we introduce SIM2VR, a system that aligns user simulation with a given VR application by establishing a continuous closed loop between the two processes. This, for the first time, enables training simulated users directly in the same VR application that real users interact with. We demonstrate that SIM2VR can predict differences in user performance, ergonomics and strategies in a fast-paced, dynamic arcade game. In order to expand the scope of automated biomechanical testing beyond simple visuomotor tasks, advances in cognitive models and reward function design will be needed.

2024 · Florian Fischer et al. · Human Pose & Activity Recognition · VR Medical Training & Rehabilitation · UIST
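The "continuous closed loop" between user simulation and VR application can be sketched as a lockstep episode: each step, the rendered observation is fed to the biomechanical user model, and the model's motor action is fed back to the application. This is a minimal structural sketch, not the SIM2VR API; `vr_app`, `user_model`, and their method names are hypothetical placeholders.

```python
def closed_loop_episode(vr_app, user_model, max_steps: int = 1000) -> None:
    """Run one simulated-user episode: the user model and the VR
    application advance in lockstep, closing the perception-action loop.
    Interface names are illustrative, not the actual SIM2VR interface."""
    obs = vr_app.reset()
    for _ in range(max_steps):
        action = user_model.act(obs)              # predicted motor command
        obs, reward, done = vr_app.step(action)   # app advances one frame
        user_model.learn(obs, reward)             # e.g., an RL update
        if done:
            break
```

The key point the abstract makes is that both sides of this loop run against the *same* application real users see, rather than against a simplified stand-in task.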
When the Body Became Data: Historical Data Cultures and Anatomical Illustration

With changing attitudes around knowledge, medicine, art, and technology, the human body has become a source of information and, ultimately, shareable and analyzable data. Centuries of illustrations and visualizations of the body occur within particular historical, social, and political contexts. These contexts are enmeshed in different so-called data cultures: ways that data, knowledge, and information are conceptualized and collected, structured and shared. In this work, we explore how information about the body was collected as well as the circulation, impact, and persuasive force of the resulting images. We show how mindfulness of data cultural influences remains crucial for today's designers, researchers, and consumers of visualizations. We conclude with a call for the field to reflect on how visualizations are not timeless and contextless mirrors on objective data, but as much a product of our time and place as the visualizations of the past.

2024 · Michael Correll et al. · Northeastern University · Visualization Perception & Cognition · Museum & Cultural Heritage Digitization · CHI
AiCommentator: A Multimodal Conversational Agent for Embedded Visualization in Football Viewing

Traditionally, sports commentators provide viewers with diverse information, encompassing in-game developments and player performances. Yet young adult football viewers increasingly use mobile devices for deeper insights during football matches. Such insights into players on the pitch and performance statistics support viewers' understanding of game stakes, creating a more engaging viewing experience. Inspired by commentators' traditional roles and to incorporate information into a single platform, we developed AiCommentator, a Multimodal Conversational Agent (MCA) for embedded visualization and conversational interactions in football broadcast video. AiCommentator integrates embedded visualization, either with an automated non-interactive or with a responsive interactive commentary mode. Our system builds upon multimodal techniques, integrating computer vision and large language models, to demonstrate ways for designing tailored, interactive sports-viewing content. AiCommentator's event system infers game states based on a multi-object tracking algorithm and computer vision backend, facilitating automated responsive commentary. We address three key topics: evaluating young adults' satisfaction and immersion across the two viewing modes, enhancing viewer understanding of in-game events and players on the pitch, and devising methods to present this information in a usable manner. In a mixed-method evaluation (n=16) of AiCommentator, we found that the participants appreciated aspects of both system modes but preferred the interactive mode, expressing a higher degree of engagement and satisfaction. Our paper reports on our development of AiCommentator and presents the results from our user study, demonstrating the promise of interactive MCA for a more engaging sports viewing experience. Systems like AiCommentator could be pivotal in transforming the interactivity and accessibility of sports content, revolutionizing how sports viewers engage with video content.

2024 · Peter Andrews et al. · Intelligent Voice Assistants (Alexa, Siri, etc.) · Social & Collaborative VR · Interactive Data Visualization · IUI
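The abstract says the event system "infers game states based on a multi-object tracking algorithm" but does not describe the inference rules. As a hedged illustration only, one simple game-state inference over tracker output is possession detection: attribute the ball to the team of the nearest tracked player, and fire an event when possession flips. The class, function names, and the distance threshold below are invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int   # stable identity assigned by the multi-object tracker
    team: str       # 'home' or 'away'
    x: float        # pitch coordinates, metres
    y: float

def infer_possession(ball: tuple[float, float], players: list[Track],
                     radius: float = 2.0):
    """Infer which team has possession: the team of the player nearest
    the ball, if that player is within `radius` metres; else None."""
    bx, by = ball
    nearest = min(players, key=lambda p: (p.x - bx) ** 2 + (p.y - by) ** 2)
    dist2 = (nearest.x - bx) ** 2 + (nearest.y - by) ** 2
    return nearest.team if dist2 <= radius ** 2 else None

def detect_turnover(prev_team, curr_team) -> bool:
    """A possession-change event fires when the possessing team flips."""
    return prev_team is not None and curr_team is not None \
        and prev_team != curr_team
```

Events like these could then be handed to a language model to generate the responsive commentary the abstract describes.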
Conversations with the News: Co-speculation into Conversational Interactions with News Content

Conversational agents have limited conversational capabilities and there is a debate as to whether interactions with conversational user interfaces (CUIs) are truly conversational. Currently, most news and journalistic content is presented in a monologic form. Simultaneously, there is an expectation that CUIs can change how we interact with news content. To explore what conversational interactions with the news could look like, two co-speculation workshops were arranged. The design-led inquiries focus on how conversations can be used as a resource for designing interactions with CUIs for news. Three different prototyping techniques were used in the design explorations: storyboarding, scripting and role-playing. Our work offers two main contributions: 1) we identify three dimensions relevant to the design space of CUI for news: the CUIs' role, conversational capabilities, and locus of control, and 2) a critical reflection on the potential of different techniques for prototyping CUIs.

2023 · Oda Elise Nordberg et al. · Conversational Chatbots · Agent Personality & Anthropomorphism · CUI
Designing for Control in Nurse-AI Collaboration During Emergency Medical Calls

AI-powered symptom checkers are automating the work of telephone triage nurses in assessing patient urgency. Yet, these systems exclude several vulnerable patient groups and overlook telenurses' competent interaction with their patients. This study, conducted in collaboration with telenurses, examines how AI can support their clinical assessment and was carried out in four phases: 1) interviews that revealed telenurses' challenge of juggling decision-support and documentation interfaces, 2) a co-design workshop that conceptualized continuous nurse-AI interaction, 3) development of a prototype that suggested questions for nurses to ask callers, and 4) a role-play workshop that demonstrated nurse-AI interaction in practice. The study addresses how we can design for control in human-AI collaboration in order to enhance, rather than replace, human decision-making processes.

2023 · Arngeir Berge et al. · AI-Assisted Decision-Making & Automation · AI Ethics, Fairness & Accountability · User Research Methods (Interviews, Surveys, Observation) · DIS
"This is the story of me": Designing audiovisual narratives to support reflection on cancer journeysRecovering from serious illness involves a bodily and psychosocial reorientation in everyday life. Survivors of gynecological cancer often experience bodily changes, fear of cancer recurrence, and changes in sexual health. This paper explores how we can use audiovisual narratives based on experiences of gynecological cancer survival in the design of an online intervention. From a typology of cancer survival, we designed three audiovisual narratives in an experience-centered design process involving gynecological cancer survivors. The narratives were evaluated by 10 participants formerly treated for gynecological cancer. In a thematic analysis, we explore how these narratives set the stage for identification and reflection by being experienced as relatable, provoking, and realistic. Finally, we discuss how the survivors' experience of the narratives can be construed as meaningful, and how accounts of experiences can be included in a design process to create narrative content for online interventions.2021EFEivind Flobak et al.Mental Health Apps & Online Support CommunitiesPrototyping & User TestingDIS
Show, don't tell: Using Go-along Interviews in Immersive Virtual Reality

Go-along interviewing is an emerging qualitative research method where researcher and interviewee go together to a location relevant for the research. Usually employed in ethnographic studies, the method is used to provide a contextualized understanding of a participant's experience. This paper explores performing Go-along interviews in Immersive Virtual Reality (VR). Through an analysis of ten interviews conducted inside our participants' Virtual Mind Palaces, we show how the interlocutors' shared presence in the virtual environment established a common ground beneficial for communication. Being in VR enabled our participants to demonstrate interactions spontaneously, and, by providing a guided tour, show us relevant objects and locations in their Virtual Mind Palace. Benefits and challenges of adapting this method to VR are discussed, and recommendations are provided for researchers who want to conduct VR Go-along interviews. Finally, we argue that the method is an effective tool for eliciting contextual, phenomenological accounts of virtual environments.

2021 · Joakim Vindenes et al. · Immersion & Presence Research · User Research Methods (Interviews, Surveys, Observation) · DIS
Conversational Futures: Emancipating Conversational Interactions for Futures Worth Wanting

We present a vision for conversational user interfaces (CUIs) as probes for speculating with, rather than as objects to speculate about. Popular CUIs, e.g., Alexa, are changing the way we converse, narrate, and imagine the world(s) to come. Yet, current conversational interactions may normatively promote undesirable ends, delivering a restricted range of request-response interactions with sexist and digital colonialist tendencies. Our critical design approach envisions alternatives by considering how future voices can reside in CUIs as enabling probes. We present novel explorations that illustrate the potential of CUIs as critical design material, by critiquing present norms and conversing with imaginary species. As micro-level interventions, we show that conversations with diverse futures through CUIs can persuade us to critically shape our discourse on macro-scale concerns of the present, e.g., sustainability. We reflect on how conversational interactions with pluralistic, imagined futures can contribute to how being human stands to change.

2021 · Minha Lee et al. · Eindhoven University of Technology · Multilingual & Cross-Cultural Voice Interaction · Agent Personality & Anthropomorphism · Technology Ethics & Critical HCI · CHI
Participatory Design of VR Scenarios for Exposure Therapy

Virtual reality (VR) applications for exposure therapy predominantly use computer-generated imagery to create controlled environments in which users can be exposed to their fears. Creating 3D animations, however, is demanding and time-consuming. This paper presents a participatory approach for prototyping VR scenarios that are enabled by 360° video and grounded in lived experiences. We organized a participatory workshop with adolescents to prototype such scenarios, consisting of iterative phases of ideation, storyboarding, live-action plays recorded by a 360° camera, and group evaluation. Through an analysis of the participants' interactions, we outline how they worked to design prototypes that depict situations relevant to those with a fear of public speaking. Our analysis also explores how participants used their experiences and reflections as resources for design. Six clinical psychologists evaluated the prototypes from the workshop and concluded they were viable therapeutic tools, emphasizing the immersive, realistic experience they presented. We argue that our approach makes the design of VR scenarios more accessible.

2019 · Eivind Flobak et al. · University of Bergen · VR Medical Training & Rehabilitation · Live Streaming & Content Creators · Interactive Narrative & Immersive Storytelling · CHI
Making the News: Digital Creativity Support for Journalists

This paper reports the design and first evaluations of new digital support for journalists to discover and examine creative angles on news stories under development. The support integrated creative news search algorithms, interactive creative sparks and reusable concept cards into one daily work tool of journalists. The first evaluations of INJECT by journalists in their places of work to write published news stories revealed that the journalists generated new angles on existing stories rather than new stories, changed their writing behaviour, and reported evidence that INJECT use had the potential to increase the objectivity and the boldness of journalism methods used.

2018 · Neil Maiden et al. · City University London · AI-Assisted Creative Writing · Prototyping & User Testing · CHI