Detecting Interaction Patterns in Educational Collaborative Writing

Writing and collaboration are crucial skills in professional and academic settings. However, assessments of collaborative writing often focus only on the final text, overlooking individual contributions and the diverse strategies students employ during the writing process. To support teachers in interpreting and assessing student behaviour, we propose analysing writing data at a granular level, down to individual characters, and using user sessions as observation units to capture coherent interaction patterns. This paper introduces three methods for identifying interaction patterns in collaborative writing. The first method classifies session types by analysing features such as writing, reading, and communication behaviours, session length, and the number of group members collaborating synchronously. The second method identifies frequent sequences of session types by examining their order, enabling process analyses at an abstract yet manageable level compared to using log data. The third method focuses on text-level collaboration by evaluating the frequency of text passage modifications made by the original author or other group members. This approach quantifies individual collaboration and, at the group level, identifies isolated versus closely connected group members, shedding light on the mode of collaboration and degree of group cohesion. We demonstrate these three methods in a case study involving two cohorts, K_A=294 and K_B=242 groups of up to 9 learners (N_A=1,848, N_B=1,463). The interaction patterns identified using these methods are intended to help teachers understand collaborative writing processes and identify situations where the participating learners require support.

Niels Seidel et al., 2025 · Enhancing Learning · CSCW

From Pegs to Pixels: A Comparative Analysis of the Nine Hole Peg Test and a Digital Copy Drawing Test for Fine Motor Control Assessment

User interaction with digital systems requires Fine Motor Control (FMC), especially if the interfaces are complex or require high fidelity and fine-grained interactions. Despite its importance, Fine Motor Control is often overlooked in interactive system design, partly because of its complex assessment. Measuring changes in fine motor abilities due to prolonged use or fatigue currently requires repeated manual testing. This paper analyzes the concept of using input behavior on digital mobile devices to assess the user's Fine Motor Control. For this, we show that Fine Motor Control can be assessed for touch and stylus-based interaction with a digital mobile system. We conducted a user study, where participants performed a Nine Hole Peg Test and a predefined Copy Drawing Test before and after exercises that affect fine motor skills. Based on this data, we investigated how metrics such as pressure, velocity, and entropy for touch and stylus input can be used to predict Fine Motor Control.

Dominik Schön et al., 2025 · Motor Impairment Assistive Input Technologies · Prototyping & User Testing · MobileHCI

Understanding the Influence of Electrical Muscle Stimulation on Motor Learning: Enhancing Motor Learning or Disrupting Natural Progression?

Electrical Muscle Stimulation (EMS) induces muscle movement through external currents, offering a novel approach to motor learning. Researchers investigated using EMS as an alternative to conventional non-movement-inducing feedback techniques, such as vibrotactile and electrotactile feedback. While EMS shows promise in areas such as dance, sports, and motor skill acquisition, neurophysiological models of motor learning conflict about the impact of externally induced movements on sensorimotor representations. This study evaluated EMS against electrotactile feedback and a control condition in a two-session experiment assessing fast learning, consolidation, and learning transfer. Our results suggest an overall positive impact of EMS in motor learning. Although traditional electrotactile feedback had a higher learning rate, EMS increased the learning plateau, as measured by a three-factor exponential decay model. This study provides empirical evidence supporting EMS as a plausible method for motor augmentation and skill transfer, contributing to understanding its role in motor learning.

Steeven Villa et al., 2025 · LMU Munich · Vibrotactile Feedback & Skin Stimulation · Electrical Muscle Stimulation (EMS) · CHI

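The three-factor exponential decay model named in this abstract is a standard learning-curve form. A minimal sketch of that form (not the authors' implementation; the parameter values below are invented for illustration), assuming performance error follows p(t) = a + b·exp(−c·t) with plateau a, initial offset b, and learning rate c:

```python
import math

def exp_decay(t, a, b, c):
    """Three-factor exponential decay learning curve.

    p(0) = a + b, and p(t) approaches a as t grows, so `a` is the
    learning plateau, `b` the initial extra error, and `c` the
    learning rate (higher c = faster approach to the plateau).
    """
    return a + b * math.exp(-c * t)

# Hypothetical movement-error curve over 10 practice trials:
# plateau error 2.0, initial extra error 8.0, rate 0.5 per trial.
errors = [exp_decay(t, a=2.0, b=8.0, c=0.5) for t in range(10)]
```

Fitting a, b, and c per condition (e.g. with a nonlinear least-squares routine) then allows comparing the learning rate (c) and learning plateau (a) between conditions, which is the kind of comparison the abstract reports between EMS and electrotactile feedback.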
Ad-Blocked Reality: Evaluating User Perceptions of Content Blocking Concepts Using Extended Reality

Inspired by the concepts of diminishing reality and ad-blocking in browsers, this study investigates the perceived benefits and concerns of blocking physical, real-world content, particularly ads, through Extended Reality (XR). To understand how users perceive this concept, we first conducted a user study (N=18) with an ad-blocking prototype to gather initial insights. The results revealed a mixed willingness to adopt XR blockers, with participants appreciating aspects such as customizability, convenience, and privacy. Expected benefits included enhanced focus and reduced stress, while concerns centered on missing important information and increased feelings of isolation. Hence, we investigated the user acceptance of different ad-blocking visualizations through a follow-up online survey (N=120), comparing six concepts based on related work. The results indicated that the XR ad-blocker visualizations play a significant role in how and for what kinds of advertisements such a concept might be used, paving the way for future feedback-driven prototyping.

Christopher Katins et al., 2025 · HU Berlin · Privacy by Design & User Control · Social Platform Design & User Behavior · CHI

"Create a Fear of Missing Out" – ChatGPT Implements Unsolicited Deceptive Designs in Generated Websites Without Warning

With the recent advancements in Large Language Models (LLMs), web developers increasingly apply their code-generation capabilities to website design. However, since these models are trained on existing designerly knowledge, they may inadvertently replicate bad or even illegal practices, especially deceptive designs (DD). This paper examines whether users can accidentally create DD for a fictitious webshop using GPT-4. We recruited 20 participants, asking them to use ChatGPT to generate functionalities (product overview or checkout) and then modify these using neutral prompts to meet a business goal (e.g., "increase the likelihood of us selling our product"). We found that all 20 generated websites contained at least one DD pattern (mean: 5, max: 9), with GPT-4 providing no warnings. When reflecting on the designs, only 4 participants expressed concerns, while most considered the outcomes satisfactory and not morally problematic, despite the potential ethical and legal implications for end-users and those adopting ChatGPT's recommendations.

Veronika Krauß et al., 2025 · TU Darmstadt · Explainable AI (XAI) · Privacy by Design & User Control · Dark Patterns Recognition · CHI

The Illusion of Privacy: Investigating User Misperceptions in Browser Tracking Protection

Third parties track users' web browsing activities, raising privacy concerns. Tracking protection extensions prevent this, but their influence on privacy protection beliefs shaped by narratives remains uncertain. This paper investigates users' misperception of tracking protection offered by browser plugins. Our study explores how different narratives influence users' perceived privacy protection by examining three tracking protection extension narratives: no protection, functional protection, and a placebo. In a study (N=36), participants evaluated their anticipated protection during a hotel booking process, influenced by the narrative about the plugin's functionality. However, participants viewed the same website without tracking protection adaptations. We show that users feel more protected when informed they use a functional or placebo extension, compared to no protection. Our findings highlight the deceptive nature of misleading privacy tools, emphasizing the need for greater transparency to prevent users from developing a false sense of protection, as such misleading tools negatively affect user study results.

Maximiliane Windl et al., 2025 · LMU Munich; Munich Center for Machine Learning (MCML) · Privacy by Design & User Control · Privacy Perception & Decision-Making · CHI

Using Pupil Dilation to Adaptively Select Speed-Reading Parameters in Virtual Reality

Rapid Serial Visual Presentation (RSVP) improves reading speed on Virtual Reality (VR) devices by optimizing the user's information processing capabilities. Yet, the user's RSVP reading performance changes over time while the reading speed remains static. In this paper, we evaluate pupil dilation as a physiological metric to assess the mental workload of readers in real-time. We assess mental workload under different background lighting and RSVP presentation speeds to estimate the background color that best discriminates pupil diameter across varying RSVP presentation speeds. We discovered that a gray background provides the best contrast for reading at various presentation speeds. Then, we conducted a second study to evaluate the classification accuracy of mental workload for different presentation speeds. We find that pupil dilation relates to mental workload when reading with RSVP. We discuss how pupil dilation can be used to adapt the RSVP speed in future VR applications to optimize information intake.

Jesse W Grootjen et al., 2024 · Eye Tracking & Gaze Interaction · Immersion & Presence Research · MobileHCI

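The closed-loop adaptation this abstract envisions could take the shape of a simple workload-driven controller. A minimal sketch, not the authors' system; the thresholds, step size, and speed bounds below are invented for illustration:

```python
def adapt_rsvp_speed(wpm, workload, low=0.3, high=0.7,
                     step=25, min_wpm=100, max_wpm=600):
    """Nudge the RSVP presentation speed (words per minute) based on a
    pupil-derived workload estimate in [0, 1]: slow down when workload
    is high, speed up when it is low, and hold it steady otherwise."""
    if workload > high:
        wpm -= step
    elif workload < low:
        wpm += step
    # Clamp to a safe presentation range.
    return max(min_wpm, min(max_wpm, wpm))
```

In practice the workload estimate would come from a classifier over the pupil-diameter signal, as evaluated in the abstract's second study.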
"AI enhances our performance, I have no doubt this one will do the same": The Placebo effect is robust to negative descriptions of AI

Heightened AI expectations facilitate performance in human-AI interactions through placebo effects. While lowering expectations to control for placebo effects is advisable, overly negative expectations could induce nocebo effects. In a letter discrimination task, we informed participants that an AI would either increase or decrease their performance by adapting the interface, when in reality, no AI was present in any condition. A Bayesian analysis showed that participants had high expectations and performed descriptively better irrespective of the AI description when a sham-AI was present. Using cognitive modeling, we could trace this advantage back to participants gathering more information. A replication study verified that negative AI descriptions do not alter expectations, suggesting that performance expectations with AI are biased and robust to negative verbal descriptions. We discuss the impact of user expectations on AI interactions and evaluation.

Agnes Mercedes Kloft et al., 2024 · Aalto University · Explainable AI (XAI) · AI-Assisted Decision-Making & Automation · AI Ethics, Fairness & Accountability · CHI

Improving Electromyographic Muscle Response Times through Visual and Tactile Prior Stimulation in Virtual Reality

Electromyography (EMG) enables hands-free interactions by detecting muscle activity at different human body locations. Previous studies have demonstrated that input performance based on isometric contractions is muscle-dependent and can benefit from synchronous biofeedback. However, it remains unknown whether stimulation before interaction can help to localize and tense a muscle faster. In a response-based VR experiment (N=21), we investigated whether prior stimulation using visual or tactile cues at four different target muscles (biceps, triceps, upper leg, calf) can help reduce the time to perform isometric muscle contractions. The results show that prior stimulation decreases EMG reaction times with visual, vibrotactile, and electrotactile cues. Our experiment also revealed important findings regarding learning and fatigue at the different body locations. We provide qualitative insights into the participants' perceptions and discuss potential reasons for the improved interaction. We contribute with implications and use cases for prior stimulated muscle activation.

Jessica Sehrt et al., 2024 · Frankfurt University of Applied Sciences · Electrical Muscle Stimulation (EMS) · Full-Body Interaction & Embodied Input · VR Medical Training & Rehabilitation · CHI

Assessing User Apprehensions About Mixed Reality Artifacts and Applications: The Mixed Reality Concerns (MRC) Questionnaire

Current research in Mixed Reality (MR) presents a wide range of novel use cases for blending virtual elements with the real world. This yet-to-be-ubiquitous technology challenges how users currently work and interact with digital content. While offering many potential advantages, MR technologies introduce new security, safety, and privacy challenges. Thus, it is relevant to understand users' apprehensions towards MR technologies, ranging from security concerns to social acceptance. To address this challenge, we present the Mixed Reality Concerns (MRC) Questionnaire, designed to systematically assess users' concerns towards MR artifacts and applications. The development followed a structured process considering previous work, expert interviews, iterative refinements, and confirmatory tests to analytically validate the questionnaire. The MRC Questionnaire offers a new method of assessing users' critical opinions to compare and assess novel MR artifacts and applications regarding security, privacy, social implications, and trust.

Christopher Katins et al., 2024 · HU Berlin · Mixed Reality Workspaces · Privacy by Design & User Control · Smart Home Privacy & Security · CHI

Technical Design Space Analysis for Unobtrusive Driver Emotion Assessment Using Multi-Domain Context

Driver emotions play a vital role in driving safety and performance. Consequently, regulating driver emotions through empathic interfaces has been investigated thoroughly. However, the prerequisite, driver emotion sensing, is a challenging endeavor: Body-worn physiological sensors are intrusive, while facial and speech recognition only capture overt emotions. In a user study (N=27), we investigate how emotions can be unobtrusively predicted by analyzing a rich set of contextual features captured by a smartphone, including road and traffic conditions, visual scene analysis, audio, weather information, and car speed. We derive a technical design space to inform practitioners and researchers about the most indicative sensing modalities, the corresponding impact on users' privacy, and the computational cost associated with processing this data. Our analysis shows that contextual emotion recognition is significantly more robust than facial recognition, leading to an overall improvement of 7% using a leave-one-participant-out cross-validation.

https://dl.acm.org/doi/10.1145/3569466
David Bethge et al., 2023 · Automated Driving Interface & Takeover Design · Privacy by Design & User Control · Context-Aware Computing · UbiComp

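Leave-one-participant-out cross-validation, as named in this abstract, holds out all samples of one participant per fold so that the evaluation measures generalization to unseen people rather than to unseen samples. A minimal sketch of the splitting logic (the data and its shape are placeholders, not the authors' pipeline):

```python
from collections import defaultdict

def leave_one_participant_out(samples):
    """Yield (held_out_pid, train, test) splits where each test fold
    contains exactly one participant's samples.

    Each sample is a (participant_id, features, label) tuple; no
    participant ever appears in both the train and test sets of a fold.
    """
    by_pid = defaultdict(list)
    for s in samples:
        by_pid[s[0]].append(s)
    for held_out in by_pid:
        test = by_pid[held_out]
        train = [s for pid, group in by_pid.items()
                 if pid != held_out for s in group]
        yield held_out, train, test

# Hypothetical toy data: (participant, feature value, emotion label).
data = [("p1", 0.2, "calm"), ("p1", 0.8, "stressed"),
        ("p2", 0.3, "calm"), ("p3", 0.9, "stressed")]
folds = list(leave_one_participant_out(data))
```

Averaging a classifier's accuracy over these folds yields the participant-independent score the abstract reports.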
SensCon: Embedding Physiological Sensing into Virtual Reality Controllers

Virtual reality experiences increasingly use physiological data for virtual environment adaptations to evaluate user experience and immersion. Previous research required complex medical-grade equipment to collect physiological data, limiting real-world applicability. To overcome this, we present SensCon for skin conductance and heart rate data acquisition. To identify the optimal sensor location in the controller, we conducted a first study investigating users' controller grasp behavior. In a second study, we evaluated the performance of SensCon against medical-grade devices in six scenarios regarding user experience and signal quality. Users subjectively preferred SensCon in terms of usability and user experience. Moreover, the signal quality evaluation showed satisfactory accuracy across static, dynamic, and cognitive scenarios. Therefore, SensCon reduces the complexity of capturing and adapting the environment via real-time physiological data. By open-sourcing SensCon, we enable researchers and practitioners to adapt their virtual reality environment effortlessly. Finally, we discuss possible use cases for virtual reality-embedded physiological sensing.

Francesco Chiossi et al., 2023 · Immersion & Presence Research · Biosensors & Physiological Monitoring · Context-Aware Computing · MobileHCI

Tailor Twist: Assessing Rotational Mid-Air Interactions for Augmented Reality

Mid-air gestures, widely used in today's Augmented Reality (AR) applications, are prone to the "gorilla arm" effect, leading to discomfort with prolonged interactions. While prior work has proposed metrics to quantify this effect and means to improve comfort and ergonomics, these works usually only consider simplistic, one-dimensional AR interactions, like reaching for a point or pushing a button. However, interacting with AR environments also involves far more complex tasks, such as turning rotational knobs, potentially impacting ergonomics. This paper advances the understanding of the ergonomics of rotational mid-air interactions in AR. For this, we contribute the results of a controlled experiment exposing the participants to a rotational task in the interaction space defined by their arms' reach. Based on the results, we discuss how novel future mid-air gesture modalities benefit from our findings concerning ergonomic-aware rotational interaction.

Dominik Schön et al., 2023 · Technical University of Darmstadt · Full-Body Interaction & Embodied Input · AR Navigation & Context Awareness · CHI

TicTacToes: Assessing Toe Movements as an Input Modality

From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.

Florian Müller et al., 2023 · LMU Munich · Foot & Wrist Interaction · CHI