The TAEG Questionnaire: Assessing Individual Affinity for Technology Across Different Countries

People have different levels of affinity for technology, which impacts their attitudes and behavior when using novel technologies. Capturing this difference requires a validated multi-language instrument. Hence, we translated and validated English, Japanese, and Spanish versions of the Affinity for Technology questionnaire (TAEG), which has so far only been available in German. The TAEG consists of four scales assessing enthusiasm, perceived competence, and positive and negative consequences of technology. After systematic translation, we collected and analyzed age- and gender-stratified samples from Germany, Mexico, Japan, and the US, with a total sample of N=1206. All TAEG versions showed an excellent fit with the four-factor model and good criterion validity. We also introduced a short scale (TAEG-S) that captures the global construct. We found significant cross-country variations, with Mexico reporting the highest TAEG scores on all scales. The validated versions of the TAEG provide a robust tool for assessing individuals' affinity for technology internationally.

2025 · Eileen Roesler et al. · George Mason University, Human-Agent Collaboration Lab · CHI
Topics: Multilingual & Cross-Cultural Voice Interaction; User Research Methods (Interviews, Surveys, Observation)
Imagination vs. Reality: Investigating the Acceptance and Preferred Anthropomorphism in Service HRI

While the use of robots in public spaces is increasing, few studies explore the resulting everyday human-robot interactions (HRI). The present study sought to bridge the gap between real-world interactions and the frequently examined hypothetical interactions. To do so, we investigated the imagined and actual interaction with an ice-cream-serving robot. In two studies and an exploratory study comparison, we investigated user acceptance and preference for the degree of anthropomorphic appearance. Although a typical human service task was taken over by a robot, an industrial robot was preferred according to participants' ratings in both studies. Moreover, both studies demonstrated that robot enthusiasm significantly relates to participants' acceptance of the robot for the task. Beyond these commonalities, the results also showed that while humans were preferred over robots in the imagined setting, no clear preference was found in the real-life setting. Additional analyses compared the free-text answers of the two studies and provided insights into participants' general attitudes toward robots in the workforce. In line with the higher preference for humans over robots in the imagined setting, considerably more participants mentioned a better customer experience with humans as important in the imagined study than among the participants who actually interacted with the robot. The studies strikingly demonstrated that imaginary settings yield outcomes similar to those where participants physically engage with the robot in certain aspects, such as the preference for anthropomorphism. However, this does not appear to hold for other facets, such as the favored service agent.

2024 · Katharina Wzietek et al. · HRI
Topics: Agent Personality & Anthropomorphism; AI Ethics, Fairness & Accountability; Social Robot Interaction
Classifying Head Movements to Separate Head-Gaze and Head Gestures as Distinct Modes of Input

Head movement is widely used as a uniform type of input for human-computer interaction. However, there are fundamental differences between head movements coupled with gaze in support of our visual system, and head movements performed as gestural expression. Both Head-Gaze and Head Gestures are of utility for interaction but differ in their affordances. To facilitate the treatment of Head-Gaze and Head Gestures as separate types of input, we developed HeadBoost as a novel classifier, achieving high accuracy in classifying gaze-driven versus gestural head movement (F1-score: 0.89). We demonstrate the utility of the classifier with three applications: gestural input while avoiding unintentional input by Head-Gaze; target selection with Head-Gaze while avoiding Midas Touch by head gestures; and switching of cursor control between Head-Gaze for fast positioning and Head Gestures for refinement. The classification of Head-Gaze and Head Gestures allows for seamless head-based interaction while avoiding false activation.

2023 · Baosheng James Hou et al. · Lancaster University · CHI
Topics: Eye Tracking & Gaze Interaction; Human Pose & Activity Recognition
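The abstract does not specify HeadBoost's features or model, so the following is only a toy sketch of the underlying task: separating smooth, gaze-supporting head movement from oscillatory gestural movement (e.g., a head shake). The single hand-picked feature, threshold, and function names are illustrative assumptions, not the authors' method.

```python
# Toy sketch only: HeadBoost's real features and model are not given in the
# abstract. A single made-up feature -- how often head velocity reverses
# direction -- separates a smooth drift from a head-shake-like oscillation.

def sign_changes(yaw):
    """Fraction of consecutive velocity samples that flip sign."""
    vel = [b - a for a, b in zip(yaw, yaw[1:])]
    flips = sum(1 for v0, v1 in zip(vel, vel[1:]) if v0 * v1 < 0)
    return flips / max(len(vel) - 1, 1)

def classify(yaw, threshold=0.2):
    """'gesture' for oscillatory traces, 'head-gaze' for smooth ones."""
    return "gesture" if sign_changes(yaw) > threshold else "head-gaze"

# A slow drift toward a target vs. a head-shake-like oscillation (degrees).
drift = [0.1 * t for t in range(50)]
shake = [5 * (-1) ** t for t in range(50)]
print(classify(drift), classify(shake))  # head-gaze gesture
```

A real classifier would of course use many features over windowed sensor data; this only illustrates why gaze-driven and gestural movement are separable in principle.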
The TAC Toolkit: Supporting Design for User Acceptance of Health Technologies from a Macro-Temporal Perspective

User acceptance is key to the successful uptake and use of health technologies, but it is also impacted by numerous factors that are not always easily accessible or operationalised by designers in practice. This work seeks to facilitate the application of acceptance theory in design practice through the Technology Acceptance (TAC) toolkit: a novel theory-based design tool and method comprising 16 cards, 3 personas, 3 scenarios, a virtual think-space, and a website, which we evaluated through workshops conducted with 21 designers of health technologies. Findings showed that the toolkit revised and extended designers' knowledge of technology acceptance, fostered their appreciation, empathy, and ethical values while designing for acceptance, and contributed towards shaping their future design practice. We discuss implications for considering user acceptance a dynamic, multi-stage process in design practice, and for better supporting designers in imagining distant acceptance challenges. Finally, we examine the generative value of the TAC toolkit and its possible future evolution.

2022 · Camille Nadal et al. · Trinity College Dublin · CHI
Topics: Mental Health Apps & Online Support Communities; Prototyping & User Testing
CASSIE: Curve and Surface Sketching in Immersive Environments

We present CASSIE, a conceptual modeling system in VR that leverages freehand mid-air sketching and a novel 3D optimization framework to create connected curve network armatures, predictively surfaced using patches with C0 continuity. Our system strikes a judicious balance of interactivity and automation, offering a homogeneous 3D drawing interface for a mix of freehand curves, curve networks, and surface patches. It encourages and aids users in drawing consistent networks of curves, easing the transition from freehand ideation to concept modeling. A comprehensive user study with professional designers as well as amateurs (N=12), and a diverse gallery of 3D models, show our armature and patch functionality to offer a user experience and expressivity on par with freehand ideation, while creating sophisticated concept models for downstream applications.

2021 · Emilie Yu et al. · Inria, Université Côte d'Azur · CHI
Topics: Immersion & Presence Research; 3D Modeling & Animation
Exploring Semi-Supervised Learning for Predicting Listener Backchannels

Developing human-like conversational agents is a prime area of HCI research and subsumes many tasks. Predicting listener backchannels is one such actively researched task. While many studies have used different approaches for backchannel prediction, they have all depended on manual annotation of a large dataset, a bottleneck that limits the scalability of development. To this end, we propose using semi-supervised techniques to automate the process of identifying backchannels, thereby easing the annotation process. To analyze the feasibility of our identification module, we compared backchannel prediction models trained on (a) manually annotated and (b) semi-supervised labels. Quantitative analysis revealed that the proposed semi-supervised approach could attain 95% of the former's performance. Our user-study findings revealed that almost 60% of participants found the backchannel responses predicted by the proposed model more natural. Finally, we also analyzed the impact of personality on the type of backchannel signals and validated our findings in the user study.

2021 · Vidit Jain et al. · Indraprastha Institute of Information Technology (IIIT) · CHI
Topics: Conversational Chatbots; Agent Personality & Anthropomorphism
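The paper's exact semi-supervised pipeline is not given in the abstract; a generic self-training loop of the kind it alludes to (fit on a few labels, pseudo-label confident unlabeled examples, refit) can be sketched as follows. The nearest-centroid model, 1-D features, and margin threshold are toy assumptions for illustration.

```python
# Generic self-training sketch; the paper's actual semi-supervised method is
# not reproduced here. Toy setup: a nearest-centroid classifier over a 1-D
# "backchannel cue" feature, with a confidence margin for pseudo-labeling.

def fit_centroids(examples):
    """Fit the 'model': the mean feature value per class."""
    sums, counts = {}, {}
    for x, y in examples:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    """Return (label, margin): margin is the distance gap to the runner-up."""
    label = min(model, key=lambda y: abs(x - model[y]))
    runner_up = min(abs(x - model[y]) for y in model if y != label)
    return label, runner_up - abs(x - model[label])

def self_train(labeled, unlabeled, min_margin=0.5, rounds=3):
    """Repeatedly pseudo-label confident pool items and refit."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit_centroids(labeled)
        scored = [(x, predict(model, x)) for x in pool]
        new = [(x, lab) for x, (lab, m) in scored if m >= min_margin]
        if not new:
            break
        labeled += new
        pool = [x for x, (lab, m) in scored if m < min_margin]
    return fit_centroids(labeled)

# Toy data: low feature values ~ no backchannel, high ~ backchannel.
model = self_train([(0.0, "none"), (1.0, "bc")], [0.1, 0.2, 0.9, 0.95, 0.5])
print(predict(model, 0.05)[0])  # none
```

Note how the ambiguous example (0.5) is never pseudo-labeled: only confident predictions are folded back into training, which is what keeps a self-training loop from amplifying its own early mistakes.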
MUBS: A Personalized Recommender System for Behavioral Activation in Mental Health

Depression is a leading cause of disability worldwide, which has inspired the design of mobile health (mHealth) applications for disease monitoring, prediction, and diagnosis. Less mHealth research has, however, focused on the treatment of depressive disorders. Clinical evidence shows that depressive symptoms can be reduced through a behavior change method known as Behavioral Activation (BA). This paper presents MUBS, a smartphone-based system for BA, which specifically contributes a personalized content-based activity recommendation model using a unique list of validated activities. An 8-week feasibility study with 17 patients with depression provided detailed insight into how MUBS provided inspiration and motivation for planning and engaging in more pleasant activities, thereby facilitating the core components of BA. Based on this study, the paper discusses how recommender technology can be used in the design of mHealth technology for BA.

2020 · Darius A. Rohani et al. · Technical University of Denmark · CHI
Topics: Recommender System UX; Mental Health Apps & Online Support Communities
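MUBS's recommendation model is described only as "personalized content-based"; a minimal sketch in that spirit, with made-up activities and tag weights, scores candidates by cosine similarity to a profile built from activities the user enjoyed. None of the names or weights below come from the paper.

```python
# Minimal content-based recommendation sketch (not the MUBS model): build a
# tag-weight profile from liked activities, then rank candidates by cosine
# similarity to that profile. Activities and tags are invented examples.
import math

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight dicts."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(activities, liked, top_n=2):
    """activities: {name: tag-vector}; liked: names the user enjoyed."""
    profile = {}
    for name in liked:
        for tag, w in activities[name].items():
            profile[tag] = profile.get(tag, 0.0) + w
    candidates = [n for n in activities if n not in liked]
    return sorted(candidates, key=lambda n: cosine(profile, activities[n]),
                  reverse=True)[:top_n]

acts = {
    "walk in park": {"outdoor": 1, "physical": 1},
    "jogging":      {"outdoor": 1, "physical": 1, "exercise": 1},
    "watch movie":  {"indoor": 1, "passive": 1},
    "board games":  {"indoor": 1, "social": 1},
}
print(recommend(acts, ["walk in park"]))  # 'jogging' ranks first
```

A content-based approach like this needs no other users' data, which matters in a clinical setting; the trade-off is that it can only recommend activities similar to ones already tried.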
The Low/High Index of Pupillary Activity

A novel eye-tracked measure of pupil diameter oscillation is derived as an indicator of cognitive load. The new metric, termed the Low/High Index of Pupillary Activity (LHIPA), is able to discriminate cognitive load (vis-à-vis task difficulty) in several experiments where the Index of Pupillary Activity (IPA) fails to do so. The rationale for the LHIPA is tied to the functioning of the human autonomic nervous system, yielding a hybrid measure based on the ratio of low/high frequencies of pupil oscillation. The paper's contribution is twofold. First, full documentation is provided for the calculation of the LHIPA; as with the IPA, researchers can apply this metric to their own experiments where a measure of cognitive load is of interest. Second, the robustness of the LHIPA is shown in analysis of three experiments: a restrictive fixed-gaze number counting task, a less restrictive fixed-gaze n-back task, and an applied eye-typing task.

2020 · Andrew Duchowski et al. · Clemson University · CHI
Topics: Eye Tracking & Gaze Interaction; Visualization Perception & Cognition
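The sketch below is not the published LHIPA calculation (the paper documents its own procedure); it is only a loose illustration of the core idea the abstract states, a ratio of low- to high-frequency power in the pupil-diameter signal. The band edges, sampling rate, and naive DFT are arbitrary assumptions for the example.

```python
# Loose illustration of a low/high band-power ratio over a pupil-diameter
# trace. NOT the paper's LHIPA: band edges and method are made up here.
import math

def band_power(signal, fs, lo_hz, hi_hz):
    """Naive DFT power summed over frequency bins in [lo_hz, hi_hz)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if lo_hz <= k * fs / n < hi_hz:
            re = sum(s * math.cos(2 * math.pi * k * t / n)
                     for t, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * t / n)
                     for t, s in enumerate(signal))
            power += re * re + im * im
    return power

def low_high_ratio(pupil, fs, split_hz=0.5, max_hz=4.0):
    """Illustrative low/high power ratio (band edges are assumptions)."""
    lo = band_power(pupil, fs, 0.05, split_hz)
    hi = band_power(pupil, fs, split_hz, max_hz)
    return lo / hi if hi else float("inf")

# A slow (0.2 Hz) vs. a fast (2 Hz) oscillation, 10 s sampled at 30 Hz:
# the slow trace should yield the larger low/high ratio.
fs = 30
slow = [math.sin(2 * math.pi * 0.2 * t / fs) for t in range(300)]
fast = [math.sin(2 * math.pi * 2.0 * t / fs) for t in range(300)]
print(low_high_ratio(slow, fs) > low_high_ratio(fast, fs))  # True
```

The directionality is the point: if high-frequency pupil oscillation grows with cognitive load, a load increase should push a ratio like this one down, which is the intuition behind a hybrid low/high measure.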