People with Disabilities Redefining Identity through Robotic and Virtual Avatars: A Case Study in Avatar Robot Cafe

Robotic avatars and telepresence technology enable people with disabilities to engage in physical work. Despite the recent popularity of the metaverse, few studies have explored the use of virtual avatars and environments by people with disabilities. In this study, seven disabled participants working in a cafe where remote customer service is provided via robotic avatars were engaged in the development and use of personalized virtual avatars displayed on a large screen in situ in combination with existing physical robots, creating a hybrid cyber-physical space. We conducted longitudinal semi-structured interviews to investigate the psychological changes experienced by the participants. The results revealed that mass-produced robotic avatars allowed participants to not disclose their disability if they did not want to, but also backgrounded their identities; by contrast, customized virtual avatars, shaped without physical constraints, highlighted their personalities. The combined use of robotic and virtual avatars was complementary and can support pilots in redefining their identity.

2024 · Yuji Hatada et al. · The University of Tokyo · Identity & Avatars in XR; Social Robot Interaction; Teleoperation & Telepresence · CHI
Synlogue with Aizuchi-bot: Investigating the Co-Adaptive and Open-Ended Interaction Paradigm

In contrast to dialogue, wherein the exchange of completed messages occurs through turn-taking, synlogue is a mode of conversation characterized by co-creative processes, such as mutually complementing incomplete utterances and cooperative overlaps of backchannelings. Such co-creative conversations have the potential to alleviate social divisions in contemporary information environments. This study proposed the design concept of a synlogue based on literature in linguistics and anthropology and explored features that facilitate synlogic interactions in computer-mediated interfaces. In an experiment, we focused on aizuchi, an important backchanneling element that drives synlogic conversation, and compared the speech and perceptual changes of participants when a bot dynamically uttered aizuchi or remained silent in a situation simulating an online video call. We then discuss the implications for interaction design based on our qualitative and quantitative analysis of the experiment. The synlogic perspective presented in this study is expected to help HCI researchers achieve more convivial forms of communication.

2024 · Kazumi Yoshimura et al. · Waseda University · Conversational Chatbots; Agent Personality & Anthropomorphism · CHI
Cultivating Spoken Language Technologies for Unwritten Languages

We report on community-centered, collaborative research that weaves together HCI, natural language processing, linguistics, and design insights to develop spoken language technologies for unwritten languages. Across three visits to a Banjara farming community in India, we use participatory, technical, and creative methods to engage community members, collect spoken language photo annotations, and develop an information retrieval (IR) system. Drawing on orality theory, we interrogate assumptions and biases of current speech interfaces and create a simple application that leverages our IR system to match fluidly spoken queries with recorded annotations and surface corresponding photos. In-situ evaluations show how our novel approach returns reliable results and inspired the co-creation of media retrieval use cases that are more appropriate in oral contexts. The very low (< 4 h) spoken-data requirement makes our approach adaptable to other contexts where languages are unwritten or have no digital language resources available.

2024 · Thomas Reitmaier et al. · Swansea University · Voice User Interface (VUI) Design; Intelligent Voice Assistants (Alexa, Siri, etc.); Developing Countries & HCI for Development (HCI4D) · CHI
“I am both here and there”: Parallel Control of Multiple Robotic Avatars by Disabled Workers in a Cafe

Robotic avatars can help disabled people extend their reach in interacting with the world. Technological advances make it possible for individuals to embody multiple avatars simultaneously. However, existing studies have been limited to laboratory conditions and did not involve disabled participants. In this paper, we present a real-world implementation of a parallel control system allowing disabled workers in a café to embody multiple robotic avatars at the same time to carry out different tasks. Our data corpus comprises semi-structured interviews with workers, customer surveys, and videos of café operations. Results indicate that the system increases workers' agency, enabling them to better manage customer journeys. Parallel embodiment and transitions between avatars create multiple interaction loops where the links between disabled workers and customers remain consistent, but the intermediary avatar changes. Based on our observations, we theorize that disabled individuals possess specific competencies that increase their ability to manage multiple avatar bodies.

2023 · Giulia Barbareschi et al. · Keio University · Domestic Robots; Social Robot Interaction; Robots in Education & Healthcare · CHI
Dementia Eyes: Co-Design and Evaluation of a Dementia Education Augmented Reality Experience for Medical Workers

Dementia describes a syndrome of cognitive degeneration, and Behavioural and Psychological Symptoms of Dementia (BPSD) are its non-cognitive symptoms. BPSD can be improved by care services. To support better care services, we explore the potential of using Augmented Reality (AR) to support dementia education for medical workers in three steps: (1) we explore medical workers' perspectives on dementia care lived experience and XR, (2) we co-design an educational experience containing an AR-based application and a 5-min activity with medical workers, and (3) we evaluate the effectiveness of the system through a mixed-methods study. Our results show that the AR experience resonates with participants and motivates them to reflect on the provision of care services. On this basis, we discuss the elements and challenges of designing XR-enabled dementia education for users unfamiliar with novel technology, and the potential of using XR in clinical education.

2023 · Ximing Shen et al. · Keio University Graduate School of Media Design · AR Navigation & Context Awareness; VR Medical Training & Rehabilitation · CHI
Opportunities and Challenges of Automatic Speech Recognition Systems for Low-Resource Language Speakers

Automatic Speech Recognition (ASR) researchers are turning their attention towards supporting low-resource languages, such as isiXhosa or Marathi, with only limited training resources. We report and reflect on collaborative research across ASR & HCI to situate ASR-enabled technologies to suit the needs and functions of two communities of low-resource language speakers, on the outskirts of Cape Town, South Africa and in Mumbai, India. We build on longstanding community partnerships and draw on linguistics, media studies and HCI scholarship to guide our research. We demonstrate diverse design methods to: remotely engage participants; collect speech data to test ASR models; and ultimately field-test models with users. Reflecting on the research, we identify opportunities, challenges, and use-cases of ASR, in particular to support pervasive use of WhatsApp voice messaging. Finally, we uncover implications for collaborations across ASR & HCI that advance important discussions at CHI surrounding data, ethics, and AI.

2022 · Thomas Reitmaier et al. · Swansea University · Multilingual & Cross-Cultural Voice Interaction; Explainable AI (XAI); Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · CHI
Light in Light Out (LiLo) Displays: Harvesting and Manipulating Light to Provide Novel Forms of Communication

Many of us daily encounter shadow and reflected light patterns alongside macro-level changes in ambient light levels. These are caused by elements - opaque objects, glass, mirrors, even clouds - in our environment interfacing with sunlight or artificial indoor lighting. Inspired by these phenomena, we explored ways of creating digitally-supported displays that use light, shade and reflection for output and harness the energy they need to operate from the sun or indoor ambient light. Through a set of design workshops we developed exemplar devices: SolarPix, ShadMo and GlowBoard. We detail their function and implementation, as well as evidencing their technical viability. The designs were informed by material understandings from the Global North and Global South and demonstrated in a cross-cultural workshop run in parallel in India and South Africa, where community co-designers reflected on their uses and value given lived experience of their communication practices and unreliable energy networks.

2022 · Krishna Seunarine et al. · Swansea University · Sustainable HCI; Ecological Design & Green Computing; Digital Art Installations & Interactive Performance · CHI
Can't Touch This: Rethinking Public Technology in a COVID-19 Era

What do pedestrian crossings, ATMs, elevators and ticket machines have in common? These are just a few of the ubiquitous yet essential elements of public-space infrastructure that rely on physical buttons or touchscreens; common interactions that, until recently, were considered perfectly safe to perform. This work investigates how we might integrate touchless technologies into public-space infrastructure in order to minimise physical interaction with shared devices in light of the ongoing COVID-19 pandemic. Drawing on an ethnographic exploration into how public utilities are being used, adapted or avoided, we developed and evaluated a suite of technology probes that can be either retrofitted into, or replace, these services. In-situ community deployments of our probes demonstrate strong uptake and provide insight into how hands-free technologies can be adapted and utilised for the public domain; and, in turn, used to inform the future of walk-up-and-use public technologies.

2022 · Jennifer Pearson et al. · Swansea University · Context-Aware Computing; Ubiquitous Computing · CHI
Exploring a Makeup Support System for Transgender Passing based on Automatic Gender Recognition

How to handle gender with machine learning is a controversial topic. A growing critical body of research has brought attention to the numerous issues transgender communities face with the adoption of current automatic gender recognition (AGR) systems. In contrast, we explore how such technologies could potentially be appropriated to support transgender practices and needs, especially in non-Western contexts like Japan. We designed a virtual makeup probe to assist transgender individuals with passing, that is, being perceived as the gender they identify as. To understand whether and how such an application might support transgender individuals in expressing their gender identity, we interviewed 15 of them in Tokyo and found that, in the right context and under strict conditions, AGR-based systems could assist transgender passing.

2021 · Toby Chong et al. · The University of Tokyo · Gender & Race Issues in HCI; Empowerment of Marginalized Groups · CHI
PV-Pix: Slum Community Co-design of Self-Powered Deformable Smart Messaging Materials

Working with emergent users in two of Mumbai’s slums, we explored the value and uses of photovoltaic (PV) self-powering digital materials. Through a series of co-design workshops, a diary study and responses by artists and craftspeople, we developed the PV-Pix concept for inter-home connections. Each PV-Pix element consists of a deformable energy harvesting material that, when actuated by a person in one home, changes its physical state both there and in a connected home. To explore the concept we considered two forms of PV-Pix: one uses rigid materials and the other flexible ones. We deployed two low-fidelity prototypes, each constructed of a grid of one PV-Pix type, in four slum homes over a four-week period to further understand the usability and uses of the materials, eliciting interesting inter-family communication practices. Encouraged by these results we report on a first step towards working prototypes and demonstrate the technical viability of the approach.

2021 · Dani Kalarikalayil Raju et al. · Studio Hasi · Shape-Changing Interfaces & Soft Robotic Materials; Participatory Design; Sustainable HCI · CHI
Exploring Nudge Designs to Help Adolescent SNS Users Avoid Privacy and Safety Threats

A nudge is a method of influencing individual choices without taking away freedom of choice. We are interested in whether nudges can help adolescents avoid privacy and safety threats on social network services (SNS). We conducted an online survey to compare how 11 different nudge designs influence decisions in 9 scenarios featuring various privacy and safety threats. We collected 29,608 responses from adolescent SNS users (self-reported high school and university students) and found that nudges can help reduce potentially risky choices. Participants were more likely to avoid potentially risky choices when presented with negative frames (e.g., "90% of users would not share a photo without permission") than affirmative ones (e.g., "10% of users would"). Social nudges displaying statistics on how likely other people are to make potentially risky decisions can have a negative effect compared to a nudge with only general privacy and safety suggestions. We conclude by providing design considerations for privacy/safety nudges targeting adolescent SNS users.

2020 · Hiroaki Masaki et al. · University of Tokyo · Privacy by Design & User Control; Dark Patterns Recognition · CHI
An Honest Conversation: Transparently Combining Machine and Human Speech Assistance in Public Spaces

There is widespread concern over the ways speech assistant providers currently use humans to listen to users' queries without their knowledge. We report two iterations of the TalkBack smart speaker, which transparently combines machine and human assistance. In the first, we created a prototype to investigate whether people would choose to forward their questions to a human answerer if the machine was unable to help. Longitudinal deployment revealed that most users would do so when given the explicit choice. In the second iteration we extended the prototype to draw upon spoken answers from previous deployments, combining machine efficiency with human richness. Deployment of this second iteration shows that this corpus can help provide relevant, human-created instant responses. We distil lessons learned for those developing conversational agents or other AI-infused systems about how to appropriately enlist human-in-the-loop information services to benefit users, task workers and system performance.

2020 · Thomas Reitmaier et al. · Swansea University · Intelligent Voice Assistants (Alexa, Siri, etc.); Human-LLM Collaboration; Privacy by Design & User Control · CHI