Mind Your Manners: The Dynamics of Politeness in Human-AI vs. Human-Human Interactions

The rapid integration of artificial intelligence (AI) into communication systems has significantly altered how users interact with digital tools and collaborate with AI agents. This study investigates the dynamics of politeness in human-AI interactions through a controlled experiment with 1,684 participants, each completing sequential text-based tasks with a conversational AI system. Participants were randomly assigned to one of several conditions that varied in the AI's visual identity (no icon, robot icon, or human face), allowing us to examine the role of perceived anthropomorphism through a minimal visual cue. Politeness was measured using linguistic markers and analyzed using statistical models that account for task sequence and individual differences. Our findings show that politeness toward AI declines over time, with a temporary increase at the start of a second task. Compared to human-human interactions in a benchmark dataset, politeness in human-AI interactions eroded more quickly. Younger participants were less polite overall, and although frequent AI users also appeared less polite descriptively, adjusted models showed a small positive association with daily AI use. Anthropomorphic visual cues, especially human-like avatars, led to more sustained polite behavior. These results offer insight into how users adapt social norms in AI-mediated collaboration and suggest design strategies for fostering respectful and effective human-AI communication.

2025 · Teddy Lazebnik et al. · Communicating properly, interpreting signs · CSCW
Interpersonal Synchrony Over a Distance: The Effect of Network Noise on Synchronization and its Prosocial Consequences

Interpersonal motor synchronization (IMS) occurs when people move together in temporal alignment. Being in IMS can produce prosocial effects: increased liking, similarity, and trust. We address the possibility of remote IMS (rIMS) between people who are not co-located, through mobile phone interactions. A threat to rIMS is the temporal noise inherent to communication networks. We created a mobile phone application in which a human participant tries to tap in synchrony with a remote participant that is, in fact, a responsive computer algorithm. We introduced three levels of synthetic network noise into the joint tapping. We show that prosociality can be created in rIMS, but that as network noise increases, the prosocial effects decrease. Participants' textual answers are analyzed thematically to learn about the effects of remote synchronization. Our findings motivate the creation of remote interactions with elements of IMS and inform the network requirements for successful rIMS.

2025 · Michal Rinott et al. · Ben Gurion University of the Negev, Software and Information Systems Engineering; Shenkar, Kadar Design and Technology Center · Full-Body Interaction & Embodied Input · Knowledge Management & Team Awareness · CHI
AI-Augmented Brainwriting: Investigating the Use of LLMs in Group Ideation

The growing availability of generative AI technologies such as large language models (LLMs) has significant implications for creative work. This paper explores two aspects of integrating LLMs into the creative process: the divergence stage of idea generation, and the convergence stage of evaluation and selection of ideas. We devised a collaborative group-AI Brainwriting ideation framework, which incorporated an LLM as an enhancement to the group ideation process, and evaluated both the idea generation process and the resulting solution space. To assess the potential of using LLMs in the idea evaluation process, we designed an evaluation engine and compared it to idea ratings assigned by three expert and six novice evaluators. Our findings suggest that integrating an LLM into Brainwriting can enhance both the ideation process and its outcome. We also provide evidence that LLMs can support idea evaluation. We conclude by discussing implications for HCI education and practice.

2024 · Orit Shaer et al. · Wellesley College · Generative AI (Text, Image, Music, Video) · Human-LLM Collaboration · CHI
Driving from a Distance: Challenges and Guidelines for Autonomous Vehicle Teleoperation Interfaces

Autonomous vehicle (AV) technologies are rapidly evolving with the vision of having self-driving cars moving safely with no human input. However, it is clear that at least in the near and foreseeable future, AVs will not be able to resolve all road incidents, and in some situations remote human assistance will be required. Yet remote driving is not trivial and introduces many challenges, stemming mostly from the physical disconnect of the remote operator. To highlight these challenges and understand how to better design AV teleoperation interfaces, we conducted several observations of AV teleoperation sessions as well as in-depth interviews with 14 experts. Based on these interviews, we provide an investigation and analysis of the major AV teleoperation challenges, followed by design suggestions for the development of future teleoperation interfaces for the assistance and driving of AVs.

2022 · Felix Tener et al. · University of Haifa · Teleoperated Driving · CHI
Physicality As an Anchor for Coordination: Examining Collocated Collaboration in Physical and Mobile Augmented Reality Settings

Design and co-creation activities around 3D artifacts often require close collocated coordination between multiple users. Augmented reality (AR) technology can support collocated work by enabling users to flexibly work with digital objects while still being able to use the physical space for coordination. With most current research focusing on remote AR collaboration, less is known about collocated collaboration in AR, particularly in relation to interpersonal dynamics between collocated collaborators. Our study aims to understand how shared augmented reality facilitated by mobile devices (mobile augmented reality, or MAR) affects collocated users' coordination. We compare the coordination behaviors that emerged in a MAR setting with those in a comparable fully physical setting by simulating the same task: co-creation of a 3D artifact. Our results demonstrate the importance of the shared physical dimension for participants' ability to coordinate in the context of collaborative co-creation. Namely, participants working in a fully physical setting were better able to leverage the work artifact itself for their coordination needs, working in a mode that we term artifact-oriented coordination. Conversely, participants collaborating around an AR artifact leveraged the shared physical workspace for their coordination needs, working in what we refer to as space-oriented coordination. We discuss implications for AR-based collaboration and propose directions for designers of AR tools.

2021 · Lev Poretski et al. · VR and Immersive Interfaces · CSCW
Exploring Visual Information Flows in Infographics

Infographics are engaging visual representations that tell an informative story using a fusion of data and graphical elements. The large variety of infographic designs poses a challenge for their high-level analysis. We use the concept of Visual Information Flow (VIF), the underlying semantic structure that links graphical elements to convey the information and story to the user. To explore VIF, we collected a repository of over 13K infographics. We use a deep neural network to identify visual elements related to information, agnostic to their various artistic appearances. We construct the VIF by automatically chaining these visual elements together based on Gestalt principles. Using this analysis, we characterize the VIF design space with a taxonomy of 12 design patterns. Exploring a real-world infographic dataset, we discuss the design space and potential of VIF in light of this taxonomy.

2020 · Min Lu et al. · Shenzhen University · Interactive Data Visualization · Data Storytelling · CHI
Evaluating Expert Curation in a Baby Milestone Tracking App

Early childhood developmental screening is critical for timely detection and intervention. babyTRACKS (formerly Baby CROINC: CROwd INtelligence Curation) is a free, live, interactive developmental tracking mobile app with over 3,000 children's diaries. Parents write or select short milestone texts, like "began taking first steps," to record their babies' developmental achievements, and receive crowd-based percentiles to evaluate development and catch potential delays.

Currently, an expert-based Curated Crowd Intelligence (CCI) process manually groups incoming novel parent-authored milestone texts according to their similarity to existing milestones in the database (for example, starting to walk), or determines that a milestone represents a new developmental concept not seen before in another child's diary. CCI cannot scale well, however, and babyTRACKS is mature enough, with a rich enough database of existing milestone texts, to now consider machine learning tools to replace or assist the human curators. Three new studies explore (1) the usefulness of automation, by analyzing the human cost of CCI and how the work is currently broken down; (2) the validity of automation, by testing the inter-rater reliability of curators; and (3) the value of automation, by appraising the "real world" clinical value of milestones when assessing child development.

We conclude that automation can indeed be appropriate and helpful for a large percentage, though not all, of CCI work. We further establish realistic upper bounds for algorithm performance; confirm that the babyTRACKS milestones dataset is valid for training and testing purposes; and verify that it represents clinically meaningful developmental information.

2019 · Ayelet Ben-Sasson et al. · University of Haifa · Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia) · Special Education Technology · Mental Health Apps & Online Support Communities · CHI
Virtual Objects in the Physical World: Relatedness and Psychological Ownership in Augmented Reality

As technology advances, people increasingly interact with virtual objects in settings such as augmented reality (AR), where a virtual layer is superimposed on top of the physical world. As with physical objects, users may assign virtual objects value, experience a sense of relatedness, and develop psychological ownership over them. The objective of this study is to understand how AR's unique characteristics influence the emergence of meaning and ownership perceptions among users. We conducted a study of users' interactions with a virtual dog over a three-week period, comparing AR and fully virtual settings. Our findings show that engagement with the application is a key determinant of the relation users develop with virtual objects. However, the effect of the background layer, whether physical or virtual, dominates the development of relatedness and ownership feelings, highlighting the importance of the "real" physical layer in shaping users' perceptions.

2019 · Lev Poretski et al. · University of Haifa · AR Navigation & Context Awareness · Identity & Avatars in XR · CHI
Normative Tensions in Shared Augmented Reality

Novel collaborative technologies afford new modes of behavior, which are often not regulated by established social norms. In particular, shared augmented reality (AR), where multiple users can create, attach, and interact with the same virtual elements embedded in the physical environment, has the potential to disrupt current social norms of behavior. The objective of our study is to shed light on the ways in which shared AR challenges existing behavioral expectations. Using a simulated lab experimental design, we performed a study of users' interactions in a shared AR setting. Content analysis of participants' interviews reveals users' concerns over the preservation of their self- and social identity, as well as concerns related to personal space and the sense of psychological ownership over one's body and belongings. Our findings also point to the need for regulation of shared AR spaces and for the design of the technology's control mechanisms.

2018 · Lev Poretski et al. · Sharing and Collaboration · CSCW
Helping customers make the most out of product reviews: A framework for visualizing service comparisons – A case study using restaurants

Online customers' opinions about products and services, in the form of reviews, are a major part of today's web culture. However, customers looking for a product or service do not have the time or the desire to read even a small part of the available product reviews. Moreover, they often would like to examine reviews of similar products and get a comprehensive picture of how different aspects of these products compare. In this work, by introducing a generic framework for analyzing and presenting a visual summary based on comparative sentences extracted from customer reviews, we offer the user an easy and intuitive understanding of the differences between a set of products. The contribution of this study is twofold: first, it focuses on reviews of services (using the restaurant domain as a case study), unlike most related studies that consider tangible products; second, it combines state-of-the-art text analysis techniques with an intuitive visualization in an easy-to-use prototype that presents summarized service comparisons to users.

2018 · Yaakov Danone et al. · Interactive Data Visualization · Data Storytelling · IUI
It was fun, but did it last? The dynamic interplay between fun motives and contributors' activity in peer production

Peer production communities often struggle to retain contributors beyond initial engagement. This may be a result of contributors' level of motivation, as motivation is deeply intertwined with activity. Existing studies on participation focus on activity dynamics but overlook the accompanying changes in motivation. To fill this gap, this study examines the interplay between contributors' fun motives and activity over time. We combine motivational data from two surveys of Wikipedia newcomers with data from two periods of editing activity. We find that persistence in editing is related to fun, while the amount of editing is not: individuals who persist in editing are characterized by higher fun motives early on (when compared to dropouts), though their motives are not related to the number of edits made. Moreover, we found that newcomers' experience of fun was reinforced by their amount of activity over time: editors who were initially motivated by fun entered a virtuous cycle, whereas those who initially had low fun motives entered a vicious cycle. Our findings shed new light on the importance of early experiences and reveal that the relationship between motivation and participation levels is more complex than previously understood.

2018 · Martina Balestra et al. · Motivation in Online Collaboration · CSCW
Better Understanding of Foot Gestures: An Elicitation Study

We present a study aimed at better understanding users' perceptions of foot gestures performed on a horizontal surface. We applied a user elicitation methodology, in which participants were asked to suggest foot gestures for actions (referents) in three conditions: standing in front of a large display, sitting in front of a desktop display, and standing on a projected surface. Based on majority count and agreement scores, we identified three gesture sets, one for each condition. Each gesture set shows a mapping between a common action and its chosen gesture. As a further contribution, we suggest a new measure called the specification score, which indicates the degree to which a gesture is specific, preferable, and intuitive for an action in a specific condition of use. Finally, we present measurable insights that can serve as guidelines for future development and research on foot interaction.

2018 · Yasmin Felberbaum et al. · University of Haifa · Foot & Wrist Interaction · CHI