SpineLoft: Interactive Spine-based 2D-to-3D Modeling
3D artists (professionals and novices alike) often take inspiration from sketches or photos to guide their designs. Yet, existing modeling systems are not tailored to fully make use of such input. Consequently, significant effort and expertise are needed when creating model prototypes or exploring design options. In this work, we introduce a system to support the exploratory modeling process by enabling the transformation of 2D image elements into geometric 3D objects. Our solution relies on a novel d2 distance function, supporting a region-based lofting process, and delivers easily-editable 3D geometric "spine-rib" representations. The user draws a spine, and the system generates and modifies a generalized cylinder around it, taking image edges into account. The proposed approach, driven by simple user-defined scribbles, can robustly handle various image sources, ranging from photos to hand-drawn content.
Alexandre Thiault et al., Institut Polytechnique de Paris, Telecom Paris. CHI 2025. Topics: 3D Modeling & Animation; Customizable & Personalized Objects.

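For readers curious about the basic geometry behind such a spine-based loft, the sketch below builds a generalized cylinder by sweeping circular ribs along a spine polyline. It is a minimal illustration only: it omits the paper's d2 distance function and image-edge-driven rib fitting, and all radii, sampling parameters, and function names are assumptions rather than the authors' implementation.

```python
# Minimal sketch of the "spine-rib" idea: sweep circular ribs along a user-drawn
# spine polyline to obtain a generalized cylinder. Hypothetical example only.
import numpy as np

def generalized_cylinder(spine_xy, radii, segments=16):
    """spine_xy: (N, 2) polyline in the image plane; radii: (N,) rib radius per point.
    Returns (N * segments, 3) vertices of circular ribs swept along the spine."""
    spine = np.column_stack([spine_xy, np.zeros(len(spine_xy))])  # lift to 3D (z = 0)
    # Tangent along the spine via finite differences.
    tangents = np.gradient(spine, axis=0)
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)
    theta = np.linspace(0.0, 2.0 * np.pi, segments, endpoint=False)
    verts = []
    for p, t, r in zip(spine, tangents, radii):
        # Build an orthonormal frame (t, u, v) at each spine point.
        u = np.cross(t, [0.0, 0.0, 1.0])
        if np.linalg.norm(u) < 1e-8:              # spine locally parallel to z
            u = np.cross(t, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(t, u)
        ring = p + r * (np.outer(np.cos(theta), u) + np.outer(np.sin(theta), v))
        verts.append(ring)
    return np.concatenate(verts)

# Toy spine: a gentle arc with radii tapering toward the ends.
s = np.linspace(0, np.pi, 20)
spine = np.column_stack([s, 0.3 * np.sin(s)])
radii = 0.2 + 0.1 * np.sin(s)
print(generalized_cylinder(spine, radii).shape)   # (320, 3)
```
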
MyWebstrates: Webstrates as Local-first Software
Webstrates are web substrates, a practical realization of shareable dynamic media under which distributability, shareability, and malleability are fundamental software principles. Webstrates blur the distinction between application and document in a way that enables users to share, repurpose, and refit software across a variety of domains, but their reliance on a central server constrains their use, is at odds with personal and collective control of data, and limits applications to the web. We extend the fundamental principles to include interoperability and sovereignty over data and propose MyWebstrates, an implementation of Webstrates on top of a new, lower-level substrate for synchronization built around local-first software principles. MyWebstrates registers itself in the user’s browser and functions as a piece of local software that can selectively synchronize data over sync servers or peer-to-peer connections. We show how MyWebstrates extends Webstrates to enable offline collaborative use, interoperate with non-web technologies such as Unity, and maintain personal and collective sovereignty over data. We demonstrate how this enables new types of Webstrates applications and discuss the limitations of this approach and the new challenges it reveals.
Clemens Nylandsted Klokmose et al. UIST 2024. Topics: Distributed Team Collaboration; Prototyping & User Testing; Computational Methods in HCI.

“Oh my Dave, your face says it all!”: Exploring the Multimodal Practices of Tasting in Live Streaming
Research on the activity of tasting, examined through the lens of conversation analysis, reveals that in face-to-face contexts, tasting is an interactive and sequential process that combines individual sensory experience with a public, witnessable, accountable, and intersubjective dimension. This perspective can be extended to the realm of live-streamed tasting, where streamers demonstrate and communicate the taste of food products to online audiences in real time. By adopting multimodal conversation analysis to scrutinize the unfolding sequence moment by moment, my study aims to demonstrate: 1) three practices to achieve the configuration of "tasting heads," wherein the current taster's face is displayed on-screen; 2) patterns of gaze withdrawal from the screen and subsequent gaze back to the screen during the tasting process; 3) the performative and animated facial expressions; and 4) the structure of responses from both streamers and viewers after the tasting.
Le SONG. MobileHCI 2024. Topics: Live Streaming & Spectating Experience; Live Streaming & Content Creators.

SoftBioMorph: Fabricating Sustainable Shape-changing Interfaces using Soft Biopolymers
Bio-based and bio-degradable materials have shown promising results for sustainable Human-Computer Interaction (HCI) applications, including shape-changing interfaces. However, the diversity of shape-changing behaviors achievable with these materials remains unclear, as the fabrication knowledge is scattered across multiple research fields. This paper introduces SoftBioMorph, a fabrication framework that aims to integrate the fabrication know-how of sustainable soft shape-changing interfaces with biopolymers. Based on the example of Sodium Alginate, the framework contributes (1) a set of material synthesis processes that modify the biopolymer's properties to fulfill different functions; (2) a set of DIY crafting-based assembly techniques that functionalize the material and assembly properties to achieve three primitive types of shape change; and (3) a series of application cases that demonstrate the versatility of the framework. We further discuss limitations, research questions, and fabrication challenges, presenting a comprehensive approach to sustainable prototyping in HCI.
Madalina Nicolae et al. DIS 2024. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Shape-Changing Materials & 4D Printing; Sustainable HCI.

Input Visualization: Collecting and Modifying Data with Visual Representations
We examine input visualizations, visual representations that are designed to collect (and represent) new data rather than encode preexisting datasets. Information visualization is commonly used to reveal insights and stories within existing data. As a result, most contemporary visualization approaches assume existing datasets as the starting point for design, through which that data is mapped to visual encodings. Meanwhile, the implications of visualizations as inputs and as data sources have received little attention, despite the existence of visual and physical examples stretching back centuries. In this paper, we present a design space of 50 input visualizations, analyzing their visual representation, data, artifact, context, and input. Based on this, we identify input modalities, purposes of input visualizations, and a set of design considerations. Finally, we discuss the relationship between input visualization and traditional visualization design and suggest opportunities for future research to better understand these visual representations and their potential.
Nathalie Bressa et al., Télécom Paris, IP Paris. CHI 2024. Topics: Data Storytelling; Prototyping & User Testing.

Was it Real or Virtual? Confirming the Occurrence and Explaining Causes of Memory Source Confusion between Reality and Virtual Reality
Source confusion occurs when individuals attribute a memory to the wrong source (e.g., confusing a picture with an experienced event). Virtual Reality (VR) represents a new source of memories particularly prone to being confused with reality. While previous research identified causes of source confusion between reality and other sources (e.g., imagination, pictures), there is currently no understanding of which characteristics specific to VR (e.g., immersion, presence) could influence source confusion. Through a laboratory study (n=29), we 1) confirm the existence of VR source confusion with current technology, and 2) present a quantitative and qualitative exploration of factors influencing VR source confusion. Building on the Source Monitoring Framework, we identify VR characteristics and assumptions about VR capabilities (e.g., poor rendering) that are used to distinguish virtual from real memories. From these insights, we reflect on how the increasing realism of VR could leave users vulnerable to memory errors and perceptual manipulations.
Elise Bonnail et al., Institut Polytechnique de Paris. CHI 2024. Topics: Eye Tracking & Gaze Interaction; Immersion & Presence Research.

pARam: Leveraging Parametric Design in Extended Reality to Support the Personalization of Artifacts for Personal Fabrication
Extended Reality (XR) allows in-situ previewing of designs to be manufactured through Personal Fabrication (PF). These in-situ interactions exhibit advantages for PF, like incorporating the environment into the design process. However, design-for-fabrication in XR often happens through either highly complex 3D-modeling or is reduced to rudimentary adaptations of crowd-sourced models. We present pARam, a tool combining parametric designs (PDs) and XR, enabling in-situ configuration of artifacts for PF. In contrast to modeling- or search-focused approaches, pARam supports customization through embodied and practical inputs (e.g., gestures, recommendations) and evaluation (e.g., lighting estimation) without demanding complex 3D-modeling skills. We implemented pARam for HoloLens 2 and evaluated it (n=20), comparing XR and desktop conditions. Users succeeded in choosing context-related parameters and took their environment into account for their configuration using pARam. We reflect on the prospects and challenges of PDs in XR to streamline complex design methods for PF while retaining suitable expressivity.
Evgeny Stemasov et al., Ulm University. CHI 2024. Topics: AR Navigation & Context Awareness; Desktop 3D Printing & Personal Fabrication; Customizable & Personalized Objects.

AI is Entering Regulated Territory: Understanding the Supervisors' Perspective for Model Justifiability in Financial Crime Detection
Artificial intelligence (AI) has the potential to bring significant benefits to highly regulated industries such as healthcare or banking. Adoption, however, remains low. AI's entry into complex socio-techno-legal systems raises issues of transparency, specifically for regulators. However, the perspective of supervisors, regulators who monitor compliance with applicable financial regulations, has rarely been studied. This paper focuses on understanding the needs of supervisors in anti-money laundering (AML) to better inform the design of AI justifications and explanations in highly regulated fields. Through scenario-based workshops with 13 supervisors and 6 banking professionals, we outline the auditing practices and socio-technical context of the supervisor. By combining the workshops' insights with an analysis of compliance requirements, we identify the AML obligations that conflict with AI opacity. We then formulate seven needs that supervisors have for model justifiability. We discuss the role of explanations as reliable evidence on which to base justifications.
Astrid Bertrand et al., Télécom Paris, Institut Polytechnique de Paris. CHI 2024. Topics: Explainable AI (XAI); AI Ethics, Fairness & Accountability; Algorithmic Fairness & Bias.

DungeonMaker: Embedding Tangible Creation and Destruction in Hybrid Board Games through Personal Fabrication Technology
Hybrid board games (HBGs) augment their analog origins digitally (e.g., through apps) and are an increasingly popular pastime activity. Continuous world and character development and customization, known to facilitate engagement in video games, remain rare in HBGs. If present, they happen digitally or imaginarily, often leaving physical aspects generic. We developed DungeonMaker, a fabrication-augmented HBG bridging physical and digital game elements: 1) the setup narrates a story and projects a digital game board onto a laser cutter; 2) DungeonMaker assesses player-crafted artifacts; 3) DungeonMaker's modified laser head senses and moves player- and non-player figures, and 4) can physically damage figures. An evaluation (n=4x3) indicated that DungeonMaker provides an engaging experience, may support players' connection to their figures, and potentially spark novices' interest in fabrication. DungeonMaker provides a rich constellation to play HBGs by blending aspects of craft and automation to couple the physical and digital elements of an HBG tightly.
Evgeny Stemasov et al., Ulm University. CHI 2024. Topics: Digitalization of Board & Tabletop Games; Desktop 3D Printing & Personal Fabrication.

mmStress: Distilling Human Stress from Daily Activities via Contact-less Millimeter-wave Sensing
Long-term exposure to stress hurts humans' mental and even physical health, and stress monitoring is of increasing significance in the prevention, diagnosis, and management of mental illness and chronic disease. However, current stress monitoring methods are either burdensome or intrusive, which hinders their widespread usage in practice. In this paper, we propose mmStress, a contact-less and non-intrusive solution, which adopts a millimeter-wave radar to sense a subject's activities of daily living, from which it distills human stress. mmStress is built upon the psychologically-validated relationship between human stress and "displacement activities", i.e., subjects under stress unconsciously perform fidgeting behaviors like scratching, wandering around, tapping a foot, etc. Despite the conceptual simplicity, the key challenge in realizing mmStress lies in how to identify and quantify the latent displacement activities autonomously, as they are usually transitory and submerged in normal daily activities, and also exhibit high variation across different subjects. To address these challenges, we custom-design a neural network that learns human activities at both macro and micro timescales and exploits the continuity of human activities to extract features of abnormal displacement activities accurately. Moreover, we address the imbalanced stress distribution issue by incorporating a post-hoc logit adjustment procedure during model training. We prototype, deploy, and evaluate mmStress in ten volunteers' apartments for over four weeks, and the results show that mmStress achieves a promising accuracy of ~80% in classifying low, medium, and high stress. In particular, mmStress shows advantages under free human movement scenarios, advancing the state of the art, which has focused on stress monitoring in quasi-static scenarios.
Kun Liang et al. UbiComp 2023. https://doi.org/10.1145/3610926. Topics: Human Pose & Activity Recognition; Sleep & Stress Monitoring; Biosensors & Physiological Monitoring.

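The abstract mentions a post-hoc logit adjustment procedure to counter the imbalanced distribution of stress labels. The snippet below is a minimal sketch of the standard logit-adjustment idea (shifting logits by the log class priors inside the cross-entropy loss); the class counts, tensor shapes, and function names are hypothetical and not taken from mmStress's implementation.

```python
# Hypothetical sketch of logit adjustment for imbalanced classes, the general
# technique named in the abstract. Not the authors' code.
import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy with logits shifted by tau * log(class prior).

    logits:       (batch, num_classes) raw network outputs
    targets:      (batch,) integer stress labels, e.g. 0=low, 1=medium, 2=high
    class_counts: (num_classes,) training-set label counts
    """
    priors = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(priors).unsqueeze(0)  # penalize over-represented classes
    return F.cross_entropy(adjusted, targets)

# Toy usage with an imbalanced label distribution (e.g. few "high stress" windows).
counts = torch.tensor([500, 300, 80])          # low / medium / high
logits = torch.randn(16, 3)                    # stand-in for network outputs
labels = torch.randint(0, 3, (16,))
loss = logit_adjusted_loss(logits, labels, counts)
```
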
Surveying the Social Comfort of Body, Device, and Environment-Based Augmented Reality Interactions in Confined Passenger Spaces Using Mixed Reality Composite Videos
Augmented Reality (AR) headsets could significantly improve the passenger experience, freeing users from the restrictions of physical smartphones, tablets, and seatback displays. However, the confined space of public transport and the varying proximity to other passengers may restrict what interaction techniques are deemed socially acceptable for AR users, particularly considering the current reliance on mid-air interactions in consumer headsets. We contribute and utilize a novel approach to social acceptability video surveys, employing mixed reality composited videos to present a real user performing interactions across different virtual transport environments. This approach allows for controlled evaluation of perceived social acceptability whilst freeing researchers to present interactions in any simulated context. Our resulting survey (N=131) explores the social comfort of body, device, and environment-based interactions across seven transit seating arrangements. We reflect on the advantages of discreet inputs over mid-air and the unique challenges of face-to-face seating for passenger AR.
Daniel Medeiros et al. UbiComp 2023. https://doi.org/10.1145/3610923. Topics: AR Navigation & Context Awareness; Mixed Reality Workspaces; Immersion & Presence Research.

Embracing Consumer-level UWB-equipped Devices for Fine-grained Wireless Sensing
RF sensing has been actively exploited in the past few years to enable novel IoT applications. Among different wireless technologies, WiFi-based sensing is the most popular owing to the pervasiveness of WiFi infrastructure. However, one critical issue associated with WiFi sensing is that the information required for sensing cannot be obtained from consumer-level devices such as smartphones or smartwatches. The WiFi devices commonly seen in our everyday lives cannot actually be used for sensing; instead, dedicated hardware with a specific WiFi card (e.g., Intel 5300) is needed. This paper brings Ultra-Wideband (UWB) into the ecosystem of RF sensing and, for the first time, makes RF sensing work on consumer-level hardware such as smartphones and smartwatches. We propose a series of methods to realize UWB sensing on consumer-level electronics without any hardware modification. Using fine-grained human respiration monitoring as the application example, we demonstrate that the performance achieved on consumer-level electronics is comparable to that achieved with dedicated UWB hardware. We show that UWB sensing hosted on consumer-level electronics is able to achieve fine granularity, robustness against interference, and multi-target sensing, pushing RF sensing one step closer to real-life adoption.
Fusang Zhang et al. UbiComp 2023. https://dl.acm.org/doi/10.1145/3569487. Topics: Biosensors & Physiological Monitoring; Context-Aware Computing; Ubiquitous Computing.

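As a rough illustration of the kind of fine-grained sensing this paper targets, the sketch below estimates a respiration rate from a UWB channel impulse response by picking the range bin with the strongest temporal variation and reading off the dominant low-frequency component of its phase. This is a generic textbook pipeline, not the authors' method, and every parameter and variable name is an assumption.

```python
# Illustrative respiration-rate estimation from a UWB channel impulse response (CIR).
# Hypothetical example; not the paper's pipeline.
import numpy as np

def respiration_rate_bpm(cir, frame_rate_hz, min_bpm=6, max_bpm=30):
    """cir: complex array of shape (num_frames, num_range_bins)."""
    # 1) Pick the range bin where the body most likely is: largest variance over time.
    bin_idx = np.argmax(np.var(np.abs(cir), axis=0))
    # 2) Unwrap the phase of that bin; chest displacement modulates this phase.
    phase = np.unwrap(np.angle(cir[:, bin_idx]))
    phase -= phase.mean()
    # 3) Dominant frequency within the plausible breathing band.
    spectrum = np.abs(np.fft.rfft(phase * np.hanning(len(phase))))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / frame_rate_hz)
    band = (freqs >= min_bpm / 60.0) & (freqs <= max_bpm / 60.0)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: a 0.25 Hz (15 breaths/min) phase oscillation in range bin 7.
t = np.arange(0, 60, 1 / 20)                            # 60 s at 20 frames/s
cir = 0.01 * (np.random.randn(len(t), 16) + 1j * np.random.randn(len(t), 16))
cir[:, 7] += np.exp(1j * 0.5 * np.sin(2 * np.pi * 0.25 * t))
print(respiration_rate_bpm(cir, frame_rate_hz=20))      # ~15
```
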
Biohybrid Devices: Prototyping Interactive Devices with Growable Materials
Living bio-materials are increasingly used in HCI for fabricating objects by growing them. However, it remains unclear how to integrate electronics to make these objects interactive. This paper presents an exploration of the fabrication design space of Biohybrid Interactive Devices, a class of interactive devices fabricated by merging electronic components and living organisms. From the exploration of this space using bacterial cellulose, we outline a fabrication framework centered on the biomaterials' life cycle phases. We introduce a set of novel fabrication techniques for embedding conductive elements, sensors, and output components through biological (e.g., bio-fabrication and bio-assembling) and digital processes. We demonstrate the combinatory aspect of the framework by realizing three tangible, wearable, and shape-changing interfaces. Finally, we discuss the sustainability of our approach, its limitations, and the implications for bio-hybrid systems in HCI.
Madalina Nicolae et al. UIST 2023. Topics: Automated Driving Interface & Takeover Design; Shape-Changing Interfaces & Soft Robotic Materials.

Memory Manipulations in Extended Reality
Human memory has notable limitations (e.g., forgetting), which have necessitated a variety of memory aids (e.g., calendars). As we grow closer to mass adoption of everyday Extended Reality (XR), which frequently leverages perceptual limitations (e.g., redirected walking), it becomes pertinent to consider how XR could leverage memory limitations (forgetting, distorting, persistence) to induce memory manipulations. As memories highly impact our self-perception, social interactions, and behaviors, there is a pressing need to understand XR Memory Manipulations (XRMMs). We ran three speculative design workshops (n=12), with XR and memory researchers creating 48 XRMM scenarios. Through thematic analysis, we define XRMMs, present a framework of their core components, and reveal three classes (at encoding, pre-retrieval, at retrieval). Each class differs in terms of technology (AR, VR) and impact on memory (influencing the quality of memories, inducing forgetting, distorting memories). We raise ethical concerns and discuss opportunities of perceptual and memory manipulations in XR.
Elise Bonnail et al., Institut Polytechnique de Paris. CHI 2023. Topics: Electrical Muscle Stimulation (EMS); Immersion & Presence Research.

On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations
Explainability (XAI) has matured in recent years to provide more human-centered explanations of AI-based decision systems. While static explanations remain predominant, interactive XAI has gathered momentum as a way to support the human cognitive process of explaining. However, the evidence regarding the benefits of interactive explanations is unclear. In this paper, we map existing findings by conducting a detailed scoping review of 48 empirical studies in which interactive explanations are evaluated with human users. We also create a classification of interactive techniques specific to XAI and group the resulting categories according to their role in the cognitive process of explanation: "selective", "mutable", or "dialogic". We identify the effects of interactivity on several user-based metrics. We find that interactive explanations improve the perceived usefulness and performance of the human+AI team, but take longer. We highlight conflicting results regarding cognitive load and overconfidence. Lastly, we describe underexplored areas, including measuring curiosity or learning, and perturbing outcomes.
Astrid Bertrand et al., Telecom Paris, Institut Polytechnique de Paris. CHI 2023. Topics: Explainable AI (XAI); AI-Assisted Decision-Making & Automation.

FingerMapper: Mapping Finger Motions onto Virtual Arms to Enable Safe Virtual Reality Interaction in Confined Spaces
Whole-body movements enhance the presence and enjoyment of Virtual Reality (VR) experiences. However, using large gestures is often uncomfortable and impossible in confined spaces (e.g., public transport). We introduce FingerMapper, mapping small-scale finger motions onto virtual arms and hands to enable whole-body virtual movements in VR. In a first target selection study (n=13) comparing FingerMapper to hand tracking and ray-casting, we found that FingerMapper can significantly reduce physical motions and fatigue while having a similar degree of precision. In a consecutive study (n=13), we compared FingerMapper to hand tracking inside a confined space (the front passenger seat of a car). The results showed participants had significantly higher perceived safety and fewer collisions with FingerMapper while preserving a similar degree of presence and enjoyment as hand tracking. Finally, we present three example applications demonstrating how FingerMapper could be applied for locomotion and interaction for VR in confined spaces.
Wen-Jie Tseng et al., Institut Polytechnique de Paris. CHI 2023. Topics: Hand Gesture Recognition; Full-Body Interaction & Embodied Input.

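To make the core idea of FingerMapper concrete, the sketch below linearly remaps small finger flexion angles onto much larger virtual shoulder and elbow ranges. The joint limits and the linear transfer function are illustrative assumptions; the paper's actual mapping is not specified in the abstract.

```python
# Illustrative finger-to-arm motion amplification: a small finger flexion range drives
# a much larger virtual arm range. Hypothetical values and names, not FingerMapper's code.
import numpy as np

FINGER_RANGE = (0.0, 90.0)      # tracked finger flexion, degrees
ELBOW_RANGE = (0.0, 140.0)      # virtual elbow flexion, degrees
SHOULDER_RANGE = (0.0, 160.0)   # virtual shoulder elevation, degrees

def remap(value, src, dst):
    """Linearly remap value from range src to range dst, with clamping."""
    t = np.clip((value - src[0]) / (src[1] - src[0]), 0.0, 1.0)
    return dst[0] + t * (dst[1] - dst[0])

def finger_to_arm_pose(index_flexion_deg, thumb_flexion_deg):
    """Map two finger angles onto a two-joint virtual arm pose (degrees)."""
    return {
        "shoulder": remap(index_flexion_deg, FINGER_RANGE, SHOULDER_RANGE),
        "elbow": remap(thumb_flexion_deg, FINGER_RANGE, ELBOW_RANGE),
    }

print(finger_to_arm_pose(45.0, 30.0))  # half-raised shoulder, slightly bent elbow
```
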
Towards Understanding Diminished Reality
Diminished reality (DR) refers to the concept of removing content from a user's visual environment. While its implementation is becoming feasible, it is still unclear how users perceive and interact in DR-enabled environments and what applications it benefits. To address this challenge, we first conduct a formative study to compare user perceptions of DR and mediated reality effects (e.g., changing the color or size of target elements) in four example scenarios. Participants preferred removing objects through opacity reduction (i.e., the standard DR implementation) and appreciated mechanisms for maintaining a contextual understanding of diminished items (e.g., outlining). In a second study, we explore the user experience of performing tasks within DR-enabled environments. Participants selected which objects to diminish and the magnitude of the effects when performing two separate tasks (video viewing, assembly). Participants were comfortable with decreased contextual understanding, particularly for less mobile tasks. Based on the results, we define guidelines for creating general DR-enabled environments.
Yifei Cheng et al., Swarthmore College. CHI 2022. Topics: Mixed Reality Workspaces; Immersion & Presence Research; Context-Aware Computing.

ShapeFindAR: Exploring In-Situ Spatial Search for Physical Artifact Retrieval using Mixed Reality
Personal fabrication is made more accessible through repositories like Thingiverse, as they replace modeling with retrieval. However, they require users to translate spatial requirements to keywords, which paints an incomplete picture of physical artifacts: proportions or morphology are non-trivially encoded through text only. We explore a vision of in-situ spatial search for (future) physical artifacts, and present ShapeFindAR, a mixed-reality tool to search for 3D models using in-situ sketches blended with textual queries. With ShapeFindAR, users search for geometry, and not necessarily precise labels, while coupling the search process to the physical environment (e.g., by sketching in-situ, extracting search terms from objects present, or tracing them). We developed ShapeFindAR for HoloLens 2, connected to a database of 3D-printable artifacts. We specify in-situ spatial search, describe its advantages, and present walkthroughs using ShapeFindAR, which highlight novel ways for users to articulate their wishes, without requiring complex modeling tools or profound domain knowledge.
Evgeny Stemasov et al., Ulm University. CHI 2022. Topics: Mixed Reality Workspaces; Desktop 3D Printing & Personal Fabrication.

The Dark Side of Perceptual Manipulations in Virtual Reality
"Virtual-Physical Perceptual Manipulations" (VPPMs) such as redirected walking and haptics expand the user's capacity to interact with Virtual Reality (VR) beyond what would ordinarily physically be possible. VPPMs leverage knowledge of the limits of human perception to effect changes in the user's physical movements, becoming able to (perceptibly and imperceptibly) nudge their physical actions to enhance interactivity in VR. We explore the risks posed by the malicious use of VPPMs. First, we define, conceptualize and demonstrate the existence of VPPMs. Next, using speculative design workshops, we explore and characterize the threats/risks posed, proposing mitigations and preventative recommendations against the malicious use of VPPMs. Finally, we implement two sample applications to demonstrate how existing VPPMs could be trivially subverted to create the potential for physical harm. This paper aims to raise awareness that the current way we apply and publish VPPMs can lead to malicious exploits of our perceptual vulnerabilities.
Wen-Jie Tseng et al., Institut Polytechnique de Paris. CHI 2022. Topics: Immersion & Presence Research; Dance & Body Movement Computing.

Consent in the Age of AR: Investigating The Comfort With Displaying Personal Information in Augmented Reality
Social Media (SM) has shown that we adapt our communication and disclosure behaviors to available technological opportunities. Head-mounted Augmented Reality (AR) will soon make it possible to effortlessly display the information we disclosed not in isolation from our physical presence (e.g., on a smartphone) but visually attached to the human body. In this work, we explore how the medium (AR vs. smartphone), our role (being augmented vs. augmenting), and characteristics of information types (e.g., level of intimacy, self-disclosed vs. non-self-disclosed) impact users' comfort when displaying personal information. Conducting an online survey (N=148), we found that AR technology and being augmented negatively impacted this comfort. Additionally, we report that AR mitigated the effects of information characteristics compared to those they had on smartphones. In light of our results, we discuss that information augmentation should be built on consent and openness, focusing more on the comfort of the augmented rather than on the technological possibilities.
Jan Ole Rixen et al., Institute of Media Informatics. CHI 2022. Topics: AR Navigation & Context Awareness; Privacy by Design & User Control; Privacy Perception & Decision-Making.