Nods of Agreement: Webcam-Driven Avatars Improve Meeting and Avatar Satisfaction Over Audio-Driven or Static Avatars in All-Avatar Work Videoconferencing
Avatars are edging into mainstream videoconferencing, but evaluation of how avatar animation modalities contribute to work meeting outcomes has been limited. We report a within-group videoconferencing experiment in which 68 employees of a global technology company, in 16 groups, used the same stylized avatars in three modalities (static picture, audio-animation, and webcam-animation) to complete collaborative decision-making tasks. Quantitatively, for meeting outcomes, webcam-animated avatars improved meeting effectiveness over the picture modality and were also reported to be more comfortable and inclusive than both other modalities. In terms of avatar satisfaction, there was a similar preference for webcam animation as compared to both other modalities. Our qualitative analysis shows participants expressing a preference for the holistic motion of webcam animation, and that meaningful movement outweighs realism for meeting outcomes, as evidenced through a systematic overview of ten thematic factors. We discuss implications for research and commercial deployment and conclude that webcam-animated avatars are a plausible alternative to video in work meetings.
Fang Ma et al. · CSCW 2025 · Topics: Making Work Meetings Better

To Each Their Own: Exploring Highly Personalised Audiovisual Media Accessibility Interventions with People with Aphasia
Digital audiovisual media (e.g., TV, streamed video) is an essential aspect of our modern lives, yet it lacks accessibility -- people living with disabilities can experience significant barriers. While accessibility interventions can improve access to audiovisual media, people living with complex communication needs have been under-represented in research and are potentially left behind. Future visions of accessible digital audiovisual media posit highly personalised content that meets complex accessibility needs. We explore the impact of such a future by conducting bespoke co-design sessions with people with aphasia -- a language impairment common post-stroke -- creating four highly personal accessibility interventions that leverage audiovisual media personalisation. We then trialled these prototypes with 11 users with aphasia, examining the effects on shared social experiences, creative intent, interaction complexity, and feasibility for content producers. We conclude by critically reflecting on future implementations, raising open questions and suggesting future research directions.
Alexandre Nevsky et al. · DIS 2025 · Topics: Voice Accessibility; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration)

Trusting Tracking: Perceptions of Non-Verbal Communication Tracking in Videoconferencing
Videoconferencing is integral to modern work and living. Recently, technologists have sought to leverage captured data -- e.g. from cameras and microphones -- to augment communication. This might mean capturing information about verbal exchanges (e.g. speech, chat messages) or non-verbal exchanges (e.g. body language, gestures, tone of voice) and using it to mediate -- and potentially improve -- communication. However, such tracking has implications for user experience and raises wider concerns (e.g. privacy). To design tools which account for user needs and preferences, this study investigates perspectives on communication tracking through a global survey and interviews, exploring how daily behaviours and the impact of specific features influence user perspectives. We examine user preferences on non-verbal communication tracking, preferred methods of conveying this information, and to whom it should be communicated. Our findings aim to guide the development of user-centred non-verbal communication tools which augment videoconferencing.
Carlota Vazquez Gonzalez et al. (King's College London) · CHI 2025 · Topics: Privacy by Design & User Control; Notification & Interruption Management

Sounds Accessible: Envisioning Accessible Audio Media Futures with People with Aphasia
Audio-media, such as radio and podcasts, are a vital means to engage with global events, access education, or offer entertainment. However, for people with complex communication needs, such as aphasia, they can present accessibility challenges. While accessibility research has largely focused on audiovisual media, little work has considered audio-media, particularly for users with complex communication needs. To address this gap, we undertook six co-design workshops with 10 people with aphasia to re-imagine access to audio-media. We uncover how our co-designers perceive audio-media not merely as a tool but as a part of daily intimacies, shaping social relationships and contributing to therapeutic recovery. Through a Research-through-Design process culminating in one low-fidelity and three high-fidelity technology probes that embody novel accessibility interventions, our findings further challenge conventional approaches to audio-media accessibility and signal new directions for future design.
Filip Bircanin et al. (King's College London, Department of Informatics) · CHI 2025 · Topics: Voice Accessibility; Augmentative & Alternative Communication (AAC); Universal & Inclusive Design

Friend or Foe? Navigating and Re-configuring "Snipers' Alley"
In a 'digital by default' society, essential services must be accessed online. This opens users to digital deception not only from criminal fraudsters but from a range of actors in a marketised digital economy. Using grounded empirical research from northern England, we show how supposedly 'trusted' actors, such as governments, (re)produce the insecurities and harms that they seek to prevent. Enhanced by a weakening of social institutions amid a drive for efficiency and scale, this has built a constricted, unpredictable digital channel. We conceptualise this as a "snipers' alley". Four key snipers articulated by participants' lived experiences are examined: 1) Governments; 2) Business; 3) Criminal Fraudsters; and 4) Friends and Family, to explore how snipers are differentially experienced and transfigure through this constricted digital channel. We discuss strategies to re-configure the alley, and how crafting and adopting opportunity models can enable more equitable forms of security for all.
Andrew Carl Dwyer et al. (Royal Holloway, University of London, Information Security Group) · CHI 2025 · Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Technology Ethics & Critical HCI

VisUnit: Literate Visualisation Studies Assembled from Reusable Test-Suites
We make four contributions to lower the overhead of conducting visualisation user studies and promote the reuse and extension of their materials. (i) A declarative JavaScript specification lets experimenters describe how studies are assembled from tested visualisations, datasets, tasks and chosen evaluation strategies. (ii) A VisUnit library translates these into sequences of visual stimuli and delivers them to participants. We move away from the monolithic evaluation stimuli typical of previous work and construct studies around three ingredients -- visual encodings, datasets, and tasks -- that can be developed independently and recombined flexibly. (iii) This paves the way for developing benchmark data+tasks test-suites as independent, reusable resources to support multiple studies. (iv) Structuring user studies as "literate" visualisation notebooks brings together in the open all ingredients necessary for replication and scrutiny: formal design specification; underlying materials; participant-facing views; and narratives justifying design and supporting reuse.
Radu Jianu et al. (City, University of London) · CHI 2025 · Topics: Interactive Data Visualization; User Research Methods (Interviews, Surveys, Observation); Prototyping & User Testing

"The Internet is Hard. Is Words": Investigating Information Search Difficulties Experienced by People with Aphasia and Strategies for Combatting Them
People rely on online information for important life tasks such as managing personal finances and understanding medical symptoms. However, due to its intrinsically language-focused nature, online search poses considerable difficulties for people with language impairments. Currently these difficulties are poorly understood. We report findings from an observation of the information search behavior of 12 people with aphasia. We identify a wide range of difficulties and strategies aimed at combating them, spanning the entire information search process. Findings include previously unreported difficulties and strategies that highlight the importance of designing search technologies to better support the complex needs of people who find language challenging, such as by facilitating word-finding cueing strategies, error prevention and recovery, browsing, appropriation, and text interpretation, and by decreasing reliance on language competency in general. This has the potential not only to benefit searchers with language impairments, but to make information search easier for all.
Vasiliki Kladouchou et al. (City St George's, University of London) · CHI 2025 · Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Augmentative & Alternative Communication (AAC); Universal & Inclusive Design

Making data work count
In this paper, we examine the work of data annotation. Specifically, we focus on the role of counting or quantification in organising annotation work. Based on an ethnographic study of data annotation in two outsourcing centres in India, we observe that counting practices and their associated logics are an integral part of day-to-day annotation activities. In particular, we call attention to the presumption of total countability observed in annotation -- the notion that everything, from tasks, datasets and deliverables, to workers, work time, quality and performance, can be managed by applying the logics of counting. To examine this, we draw on sociological and socio-technical scholarship on quantification and develop the lens of a 'regime of counting' that makes explicit the specific counts, practices, actors and structures that underpin the pervasive counting in annotation. We find that within the AI supply chain and data work, counting regimes aid the assertion of authority by the AI clients (also called requesters) over annotation processes, constituting them as reductive, standardised, and homogeneous. We illustrate how this has implications for i) how annotation work and workers get valued, ii) the role human discretion plays in annotation, and iii) broader efforts to introduce accountable and more just practices in AI. Through these implications, we illustrate the limits of operating within the logic of total countability. Instead, we argue for a view of counting as partial -- located in distinct geographies, shaped by specific interests and accountable in only limited ways. This, we propose, sets the stage for a fundamentally different orientation to counting and what counts in data annotation.
Srravya Chandhiramowuli et al. · CSCW 2024 · Session 1e: Empowering Data Work

Mid-Air Haptic Feedback Improves Implicit Agency and Trust in Gesture-Based Automotive Infotainment Systems: a Driving Simulator Study
Gesture-based interactions for automotive infotainment systems pose advantages over touchscreens, such as reducing visual demand. While the focus of these advantages is on supporting the driving task, it is also important that a user feels in control and perceives influence over the in-vehicle system. This is known as the user's sense of agency in psychology, and sensory feedback is a key aspect of it. The current study involved a dual-task driving (simulator) and gesture-controlled infotainment interaction, accompanied by mid-air haptic or audio feedback. With 30 participants, we utilized an experimental approach with implicit and explicit measures of agency, as well as trust and usability. Results illustrated no difference in explicit judgements of agency; however, mid-air haptic feedback improved the implicit feeling of agency. More trust was also reported in the system with mid-air haptics. Our findings provide empirical evidence for mid-air haptics fostering user agency and trust in gesture-based automotive UI.
George Evangelou et al. · AutoUI 2024 · Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Mid-Air Haptics (Ultrasonic); Hand Gesture Recognition

Sonic Entanglements with Electromyography: Between Bodies, Signals, and Representations
This paper investigates sound and music interactions arising from the use of electromyography (EMG) to instrumentalise signals from muscle exertion of the human body. We situate EMG within a family of embodied interaction modalities, where it occupies a middle ground, considered as a "signal from the inside" compared with external observations of the body (e.g., motion capture), but also seen as more volitional than neurological states recorded by brain electroencephalogram (EEG). To understand the messiness of gestural interaction afforded by EMG, we revisit the phenomenological turn in HCI, reading Paul Dourish's work on the transparency of "ready-to-hand" technologies against the grain of recent posthumanist theories, which offer a performative interpretation of musical entanglements between bodies, signals, and representations. We take music performance as a use case, reporting on the opportunities and constraints posed by EMG in workshop-based studies of vocal, instrumental, and electronic practices. We observe that across our diverse range of musical subjects, they consistently challenged notions of EMG as a transparent tool that directly registered the state of the body, reporting instead that it took on "present-at-hand" qualities, defamiliarising the performer's own sense of themselves and reconfiguring their embodied practice.
Courtney N. Reed et al. · DIS 2024 · Topics: Electrical Muscle Stimulation (EMS); Conversational Chatbots; Agent Personality & Anthropomorphism

Guidelines for Integrating Value Sensitive Design in Responsible AI Toolkits
Value Sensitive Design (VSD) is a framework for integrating human values throughout the technology design process. In parallel, Responsible AI (RAI) advocates for the development of systems aligning with ethical values, such as fairness and transparency. In this study, we posit that a VSD approach is not only compatible, but also advantageous to the development of RAI toolkits. To empirically assess this hypothesis, we conducted four workshops involving 17 early-career AI researchers. Our aim was to establish links between VSD and RAI values while examining how existing toolkits incorporate VSD principles in their design. Our findings show that collaborative and educational design features within these toolkits, including illustrative examples and open-ended cues, facilitate an understanding of human and ethical values, and empower researchers to incorporate values into AI systems. Drawing on these insights, we formulated six design guidelines for integrating VSD values into the development of RAI toolkits.
Malak Sadek et al. (Imperial College London) · CHI 2024 · Topics: AI-Assisted Decision-Making & Automation; AI Ethics, Fairness & Accountability; Privacy by Design & User Control

User Characteristics in Explainable AI: The Rabbit Hole of Personalization?
As Artificial Intelligence (AI) becomes ubiquitous, the need for Explainable AI (XAI) has become critical for transparency and trust among users. A significant challenge in XAI is catering to diverse users, such as data scientists, domain experts, and end-users. Recent research has started to investigate how users' characteristics impact interactions with and user experience of explanations, with a view to personalizing XAI. However, are we heading down a rabbit hole by focusing on unimportant details? Our research aimed to investigate how user characteristics are related to using, understanding, and trusting an AI system that provides explanations. Our empirical study with 149 participants who interacted with an XAI system that flagged inappropriate comments showed that very few user characteristics mattered; only age and the personality trait openness influenced actual understanding. Our work provides evidence to reorient user-focused XAI research and question the pursuit of personalized XAI based on fine-grained user characteristics.
Robert Nimmo et al. (University of Glasgow) · CHI 2024 · Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation

Not All the Same: Understanding and Informing Similarity Estimation in Tile-Based Video Games
Similarity estimation is essential for many game AI applications, from the procedural generation of distinct assets to automated exploration with game-playing agents. While similarity metrics often substitute human evaluation, their alignment with our judgement is unclear. Consequently, the result of their application can fail to meet human expectations, leading to e.g. unappreciated content or unbelievable agent behaviour. We alleviate this gap through a multi-factorial study of two tile-based games in two representations, where participants (N=456) judged the similarity of level triplets. Based on this data, we construct domain-specific perceptual spaces, encoding similarity-relevant attributes. We compare 12 metrics to these spaces and evaluate their approximation quality through several quantitative lenses. Moreover, we conduct a qualitative labelling study to identify the features underlying the human similarity judgement in this popular genre. Our findings inform the selection of existing metrics and highlight requirements for the design of new similarity metrics benefiting game development and research.
Sebastian Berns et al. (Queen Mary University of London) · CHI 2024 · Topics: Game UX & Player Behavior; Role-Playing & Narrative Games

Communication, Collaboration, and Coordination in a Co-located Shared Augmented Reality Game: Perspectives From Deaf and Hard of Hearing People
Co-located collaborative shared augmented reality (CS-AR) environments have gained considerable research attention, mainly focusing on design, implementation, accuracy, and usability. Yet, a gap persists in our understanding regarding the accessibility and inclusivity of such environments for diverse user groups, such as Deaf and Hard of Hearing (DHH) people. To investigate this domain, we used Urban Legends, a multiplayer game in a co-located CS-AR setting. We conducted a user study followed by one-on-one interviews with 17 DHH participants. Our findings revealed the usage of multimodal communication (verbal and non-verbal) before and during the game, impacting the amount of collaboration among participants and how their coordination with AR components, their surroundings, and other participants improved throughout the rounds. We utilize our data to propose design enhancements, including onscreen visuals and speech-to-text transcription, centered on participant perspectives and our analysis.
Sanzida Mojib Luna et al. (Rochester Institute of Technology) · CHI 2024 · Topics: Social & Collaborative VR; Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration); Accessible Gaming

Lights, Camera, Access: A Closeup on Audiovisual Media Accessibility and Aphasia
The presence of audiovisual media is a mainstay in the lives of many, increasingly so with technological progress. Accessing video and audio content, however, can be challenging for people with diverse needs. Existing research has explored a wide range of accessibility challenges and worked with disabled communities to design technologies that help bridge the access gap. Despite this work, our understanding of the challenges faced by communities with complex communication needs (CCNs) remains poor. To address this shortcoming, we present the first study that investigates the viewing experience of people with the communication impairment aphasia through an online survey (N=41) and two focus group sessions (N=10), with the aim of understanding their specific access challenges. We find that aphasia significantly impacts the viewing experience, and we present a taxonomy of access barriers and facilitators, with suggestions for future research.
Alexandre Nevsky et al. (King's College London) · CHI 2024 · Topics: Augmentative & Alternative Communication (AAC); Universal & Inclusive Design

CASES: A Cognition-Aware Smart Eyewear System for Understanding How People Read
The process of reading has attracted decades of scientific research. Work in this field primarily focuses on using eye gaze patterns to reveal cognitive processes while reading. However, eye gaze patterns suffer from limited resolution, jitter noise, and cognitive biases, resulting in limited accuracy in tracking cognitive reading states. Moreover, using sequential eye gaze data alone neglects the linguistic structure of text, undermining attempts to provide semantic explanations for cognitive states during reading. Motivated by the impact of the semantic context of text on the human cognitive reading process, this work uses both the semantic context of text and visual attention during reading to more accurately predict the temporal sequence of cognitive states. To this end, we present a Cognition-Aware Smart Eyewear System (CASES), which fuses semantic context and visual attention patterns during reading. The two feature modalities are time-aligned and fed to a temporal convolutional network based multi-task classification deep model to automatically estimate and further semantically explain the reading state timeseries. CASES is implemented in eyewear and its use does not interrupt the reading process, thus reducing subjective bias. Furthermore, the real-time association between visual and semantic information enables the interactions between visual attention and semantic context to be better interpreted and explained. Ablation studies with 25 subjects demonstrate that CASES improves multi-label reading state estimation accuracy by 20.90% for sentences compared to eye tracking alone. Using CASES, we develop an interactive reading assistance system. Three and a half months of deployment with 13 in-field studies enable several observations relevant to the study of reading. In particular, we observed how individual visual history interacts with the semantic context at different text granularities. Furthermore, CASES enables just-in-time intervention when readers encounter processing difficulties, thus promoting self-awareness of the cognitive process involved in reading and helping to develop more effective reading habits.
Xiangyao Qi et al. · UbiComp 2023 · https://doi.org/10.1145/3610910 · Topics: Eye Tracking & Gaze Interaction; Mental Health Apps & Online Support Communities

Comparing Measures of Perceived Challenge and Demand in Video Games: Exploring the Conceptual Dimensions of CORGIS and VGDS
Measuring perceived challenge and demand in video games is crucial as these player experiences are essential to creating enjoyable games. Two recent measures that identified seemingly distinct structures of challenge (Challenge Originating from Recent Gameplay Interaction Scale (CORGIS) - cognitive, emotional, performative, decision-making) and demand (Video Game Demand Scale (VGDS) - cognitive, emotional, controller, exertional, social) have been theorised to overlap, reflecting the five-factor demand structure. To investigate the overlap between these two scales, we compared a five-factor (complete overlap) and a nine-factor (no overlap) model by surveying 1,101 players, asking them to recall their last gaming experience before completing CORGIS and VGDS. After failing to confirm both models, we conducted an exploratory factor analysis. Our findings reveal seven dimensions, where the five-factor VGDS model holds alongside two additional CORGIS dimensions of performative and decision-making, ultimately providing a more holistic understanding of the concepts whilst highlighting unique aspects of each approach.
Alex Flint et al. (City, University of London) · CHI 2023 · Topics: Game UX & Player Behavior; Serious & Functional Games; Role-Playing & Narrative Games

"My Perfect Platform Would Be Telepathy" - Reimagining the Design of Social Media with Autistic Adults
In this paper, we critically examine the design of mainstream social media platforms from the point of view of autistic experiences and perspectives, drawing inspiration from the neurodiversity movement, the notion of autism as neurodivergence, and the concept of autistic sociality. We conducted 12 participatory design sessions with 20 autistic adult collaborators. Through thematic analysis of qualitative data, we identify seven challenges our participants experienced when using social media, and a set of imagined features that represent their vision of how design could better support their social media use. We discuss how mainstream social media platforms are primarily designed to address neurotypical sensitivities, and fail autistic adults through lack of user control, inadequate mechanisms for expressing tone and intention, and an orientation towards phatic interactions. To close, we outline how autistic sociality can inspire the design of kinder and more considerate social media platforms.
Belén Barros Pena et al. (City, University of London) · CHI 2023 · Topics: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Gender & Race Issues in HCI; Empowerment of Marginalized Groups

Understanding Social Interactions in Location-based Games as Hybrid Spaces: Coordination and Collaboration in Raiding in Pokémon GO
The overlaying of physical spaces with digital information produces hybrid spaces, redefining people's experience of social interactions. Location-based games (LBGs) with social components are a case in point. Yet, the impact LBGs have on sociability remains under-researched. In April 2020, the new in-person/remote raiding format in the LBG Pokémon GO provided a lens to explore people's social interactions in hybrid spaces. We interviewed 41 Pokémon GO players to understand how players coordinate and collaborate for in-person/remote raids and other social patterns. Our findings demonstrate that new social dynamics occurred: participants' social interactions rely heavily on external social media groups bridging cyberspace and the physical world. In such external social media groups, spontaneously formed leadership roles and mentor-mentee relationships demonstrate autonomy among players in the hybrid space. However, we observed that interoperability issues challenge players' experience. Overall, this work sheds light on social interactions in LBGs as hybrid spaces.
Jiangnan Xu et al. (Rochester Institute of Technology) · CHI 2023 · Topics: Multiplayer & Social Games; Interactive Narrative & Immersive Storytelling

Complex Daily Activities, Country-Level Diversity, and Smartphone Sensing: A Study in Denmark, Italy, Mongolia, Paraguay, and UK
Smartphones enable understanding human behavior with activity recognition to support people's daily lives. Prior studies focused on using inertial sensors to detect simple activities (sitting, walking, running, etc.) and were mostly conducted in homogeneous populations within a country. However, people are more sedentary in the post-pandemic world with the prevalence of remote/hybrid work/study settings, making detecting simple activities less meaningful for context-aware applications. Hence, the understanding of (i) how multimodal smartphone sensors and machine learning models could be used to detect complex daily activities that can better inform about people's daily lives, and (ii) how models generalize to unseen countries, is limited. We analyzed in-the-wild smartphone data and ~216K self-reports from 637 college students in five countries (Italy, Mongolia, UK, Denmark, Paraguay). Then, we defined a 12-class complex daily activity recognition task and evaluated the performance with different approaches. We found that even though the generic multi-country approach provided an AUROC of 0.70, the country-specific approach performed better, with AUROC scores between 0.79 and 0.89. We believe that research along the lines of diversity awareness is fundamental for advancing human behavior understanding through smartphones and machine learning, for more real-world utility across countries.
Karim Assi et al. (École Polytechnique Fédérale de Lausanne) · CHI 2023 · Topics: Human Pose & Activity Recognition; Context-Aware Computing