Beyond the Artifact: Power as a Lens for Creativity Support Tools
Researchers who build creativity support tools (CSTs) define abstractions and software representations that align with user needs to give users the power to accomplish tasks. However, these specifications also structure and limit how users can and should think, act, and express themselves. Thus, tool designers unavoidably exert power over their users by enacting a "normative ground" through their tools. Drawing on interviews with 11 creative practitioners, tool designers, and CST researchers, we offer a definition of empowerment in the context of creative practice, build a preliminary theory of how power relationships manifest in CSTs, and explain why researchers have had trouble addressing these concepts in the past. We re-examine CST literature through a lens of power and argue that mitigating power imbalances at the level of technical design requires enabling users in both vertical movement along levels of abstraction as well as horizontal movement between tools through interoperable representations. A lens of power is one possible orientation that lets us recognize the methodological shifts required towards building "artistic support tools."
2023 · Jingyi Li et al. · UIST · Topics: Creative Collaboration & Feedback Systems; Technology Ethics & Critical HCI

Sensorimotor Simulation of Redirected Reaching using Stochastic Optimal Feedback Control
Illusory VR interaction techniques such as hand redirection work because humans use vision to adjust their motor commands during movement (e.g., reaching). Existing simulations of redirected reaching are limited, however, and have not yet incorporated important stochastic characteristics like sensorimotor noise, nor captured redirection's effect on movement duration. In this work, we propose adapting a stochastic optimal feedback control (SOFC) model of normal reach to simulate redirection by augmenting sensory feedback at run-time. We present a summary of our simulation and validate it against user data gathered in multiple redirection conditions. We also evaluate the impacts of visual attention on the effectiveness of redirection in real users and replicate the effects in simulation. Our results show that an infinite-horizon SOFC model is able to reproduce key characteristics of redirected reaches and highlight the benefits of SOFC as a tool for simulating, evaluating, and gaining insights about redirection techniques.
2023 · Eric J Gonzalez et al. · Stanford University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input

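The entry above hinges on the idea that reaching is guided by noisy visual feedback, so biasing that feedback shifts where the real hand ends up. As a toy illustration of that mechanism only (a plain proportional controller, not the paper's infinite-horizon SOFC model; all parameter values are invented):

```python
import random

def simulate_reach(target, redirection, gain=0.2, noise=0.01, steps=200, seed=0):
    """Toy 1D reach: each step, the controller corrects the hand toward
    the target using its *visual* hand position, which is offset by the
    redirection amount and corrupted by Gaussian sensory noise."""
    rng = random.Random(seed)
    hand = 0.0
    for _ in range(steps):
        seen = hand + redirection + rng.gauss(0.0, noise)  # biased visual estimate
        hand += gain * (target - seen)                     # proportional correction
    return hand
```

Because the controller nulls the error in visual coordinates, the real hand settles near `target - redirection`, which is the basic effect redirected reaching exploits.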
Supporting Accessible Data Visualization Through Audio Data Narratives
Online data visualizations play an important role in informing public opinion but are often inaccessible to screen reader users. To address the need for accessible data representations on the web that provide direct, multimodal, and up-to-date access to the data, we investigate audio data narratives, which combine textual descriptions and sonification (the mapping of data to non-speech sounds). We conduct two co-design workshops with screen reader users to define design principles that guide the structure, content, and duration of a data narrative. Based on these principles and relevant auditory processing characteristics, we propose a dynamic programming approach to automatically generate an audio data narrative from a given dataset. We evaluate our approach with 16 screen reader users. Findings show that, with audio narratives, users gain significantly more insights from the data. Users describe that data narratives help them better extract and comprehend the information in both the sonification and description.
2022 · Alexa F. Siu et al. · Stanford University, Adobe Research · CHI · Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Data Storytelling

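The entry above generates narratives with dynamic programming. The paper's actual cost model is not given here; the sketch below uses a generic within-segment-variance cost purely to show the optimal-segmentation DP pattern (the function name and cost are illustrative, not from the paper):

```python
def segment_series(values, max_segments):
    """Split a data series into contiguous segments minimizing total
    within-segment variance -- a generic DP pattern for choosing which
    spans of data a narrative should describe."""
    n = len(values)

    def cost(i, j):  # variance-style cost of segment values[i:j]
        seg = values[i:j]
        mean = sum(seg) / len(seg)
        return sum((v - mean) ** 2 for v in seg)

    INF = float("inf")
    # best[k][j]: minimal cost of covering values[:j] with k segments
    best = [[INF] * (n + 1) for _ in range(max_segments + 1)]
    back = [[0] * (n + 1) for _ in range(max_segments + 1)]
    best[0][0] = 0.0
    for k in range(1, max_segments + 1):
        for j in range(1, n + 1):
            for i in range(k - 1, j):  # last segment is values[i:j]
                c = best[k - 1][i] + cost(i, j)
                if c < best[k][j]:
                    best[k][j], back[k][j] = c, i
    # pick the segment count with the lowest cost, then backtrack
    k = min(range(1, max_segments + 1), key=lambda m: best[m][n])
    bounds, j = [], n
    while k > 0:
        i = back[k][j]
        bounds.append((i, j))
        j, k = i, k - 1
    return bounds[::-1]
```

For example, a series with two flat plateaus splits cleanly at the plateau boundary when two segments are allowed.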
Slide-Tone and Tilt-Tone: 1-DOF Haptic Techniques for Conveying Shape Characteristics of Graphs to Blind Users
We increasingly rely on up-to-date, data-driven graphs to understand our environments and make informed decisions. However, many of the methods blind and visually impaired (BVI) users rely on to access data-driven information do not convey important shape characteristics of graphs, are not refreshable, or are prohibitively expensive. To address these limitations, we introduce two refreshable, 1-DOF audio-haptic interfaces based on haptic cues fundamental to object shape perception. Slide-tone uses finger position with sonification, and Tilt-tone uses fingerpad contact inclination with sonification to provide shape feedback to users. Through formative design workshops (n = 3) and controlled evaluations (n = 8), we found that BVI participants appreciated the additional shape information, versatility, and reinforced understanding these interfaces provide; and that task accuracy was comparable to using interactive tactile graphics or sonification alone. Our research offers insight into the benefits, limitations, and considerations for adopting these haptic cues into a data visualization context.
2022 · Danyang Fan et al. · Stanford University · CHI · Topics: Vibrotactile Feedback & Skin Stimulation; Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille)

Beyond Being Real: A Sensorimotor Control Perspective on Interactions in Virtual Reality
We can create Virtual Reality (VR) interactions that have no equivalent in the real world by remapping spacetime or altering users' body representation, such as stretching the user's virtual arm for manipulation of distant objects or scaling up the user's avatar to enable rapid locomotion. Prior research has leveraged such approaches, which we call beyond-real techniques, to make interactions in VR more practical, efficient, ergonomic, and accessible. We present a survey categorizing prior movement-based VR interaction literature as reality-based, illusory, or beyond-real interactions. We survey relevant conferences (CHI, IEEE VR, VRST, UIST, and DIS) while focusing on selection, manipulation, locomotion, and navigation in VR. For beyond-real interactions, we describe the transformations that have been used by prior works to create novel remappings. We discuss open research questions through the lens of the human sensorimotor control system and highlight challenges that need to be addressed for effective utilization of beyond-real interactions in future VR applications, including plausibility, control, long-term adaptation, and individual differences.
2022 · Parastoo Abtahi et al. · Stanford University · CHI · Topics: Immersion & Presence Research

A Model Predictive Control Approach for Reach Redirection in Virtual Reality
Reach redirection is an illusion-based virtual reality (VR) interaction technique where a user's virtual hand is shifted during a reach in order to guide their real hand to a physical location. Prior works have not considered the underlying sensorimotor processes driving redirection. In this work, we propose adapting a sensorimotor model for goal-directed reach to obtain a model for visually-redirected reach, specifically by incorporating redirection as a sensory bias in the state estimate used by a minimum jerk motion controller. We validate and then leverage this model to develop a Model Predictive Control (MPC) approach for reach redirection, enabling the real-time generation of spatial warping according to desired optimization criteria (e.g., redirection goals) and constraints (e.g., sensory thresholds). We illustrate this approach with two example criteria -- redirection to a desired point and redirection along a desired path -- and compare our approach against existing techniques in a user evaluation.
2022 · Eric J Gonzalez et al. · Stanford University · CHI · Topics: Social & Collaborative VR; Immersion & Presence Research; Prototyping & User Testing

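The model above feeds its biased state estimate into a minimum jerk motion controller. For reference, this is the classic closed-form minimum-jerk position profile (Flash and Hogan's quintic, standard sensorimotor background rather than the paper's MPC code):

```python
def minimum_jerk(x0, xf, t, duration):
    """Position at time t of a minimum-jerk reach from x0 to xf.
    The 10-15-6 quintic minimizes integrated squared jerk and yields
    the bell-shaped velocity profile typical of human point-to-point
    reaches."""
    tau = max(0.0, min(1.0, t / duration))  # normalized time in [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
    return x0 + (xf - x0) * s
```

The profile starts and ends at rest (zero velocity and acceleration at both endpoints), which is why it is a common generative model for goal-directed reaches.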
Automated Accessory Rigs for Layered 2D Character Illustrations
Mix-and-match character creation tools enable users to quickly produce 2D character illustrations by combining various predefined accessories, like clothes and hairstyles, which are represented as separate, interchangeable artwork layers. However, these accessory layers are often designed to fit only the default body artwork, so users cannot modify the body without manually updating all the accessory layers as well. To address this issue, we present a method that captures and preserves important relationships between artwork layers so that the predefined accessories adapt with the character's body. We encode these relationships with four types of constraints that handle common interactions between layers: (1) occlusion, (2) attachment at a point, (3) coincident boundaries, and (4) overlapping regions. A rig is a set of constraints that allow a motion or deformation specified on the body to transfer to the accessory layers. We present an automated algorithm for generating such a rig for each accessory layer, but also allow users to select which constraints to apply to specific accessories. We demonstrate how our system supports a variety of modifications to body shape and pose using artwork from mix-and-match data sets.
2021 · Jingyi Li et al. · UIST · Topics: 3D Modeling & Animation; Customizable & Personalized Objects

Coupling Simulation and Hardware for Interactive Circuit Debugging
Simulation offers many advantages when designing analog circuits. Designers can explore alternatives quickly, without added cost or risk of hardware faults. However, it is challenging to use simulation as an aid during interactive debugging of physical circuits, due to difficulties in comparing simulated analyses with hardware measurements. Designers must continually configure simulations to match the state of the physical circuit (e.g. capturing sensor inputs), and must manually rework the hardware to replicate changes or analyses performed in simulation. We propose techniques leveraging instrumentation and programmable test hardware to create a tight coupling between a physical circuit and its simulated model. Bridging these representations helps designers to compare simulated and measured behaviors, and to quickly perform analytical techniques on hardware (e.g. parameter-response analysis) that are typically cumbersome outside of simulation. We implement these techniques in a prototype and show how it aids in efficiently debugging a variety of analog circuits.
2021 · Evan Strasnick et al. · Stanford University · CHI · Topics: Circuit Making & Hardware Prototyping

REACH+: Extending the Reachability of Encountered-type Haptics Devices through Dynamic Redirection in VR
Encountered-type haptic devices (EHDs) face a number of challenges when physically embodying content in a virtual environment, including workspace limits and device latency. To address these issues, we propose REACH+, a framework for dynamic visuo-haptic redirection to improve the perceived performance of EHDs during physical interaction in VR. Using this approach, we estimate the user's arrival time to their intended target and redirect their hand to a point within the EHD's spatio-temporally reachable space. We present an evaluation of this framework implemented with a desktop mobile robot in a 2D target selection task, tested at four robot speeds (20, 25, 30 and 35 cm/s). Results suggest that REACH+ can improve the performance of lower-speed EHDs, increasing their rate of on-time arrival to the point of contact by up to 25% and improving users' self-reported sense of realism.
2020 · Eric J. Gonzalez et al. · UIST · Topics: Mid-Air Haptics (Ultrasonic); Immersion & Presence Research

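REACH+ shifts the rendered hand so the real hand lands inside the robot's reachable space. A common way to make such a shift hard to notice, used widely in the redirection literature (this is generic body warping, not REACH+'s exact estimator), is to blend the offset in gradually with reach progress:

```python
import math

def redirect_hand(real_pos, start, real_target, offset):
    """Render the virtual hand at the real hand position plus a fraction
    of the desired offset; the fraction grows with reach progress, so
    the warp is zero at reach onset and complete at the target."""
    total = math.dist(start, real_target)
    travelled = math.dist(start, real_pos)
    progress = 0.0 if total == 0 else min(1.0, travelled / total)
    return tuple(p + progress * o for p, o in zip(real_pos, offset))
```

At the start of the reach the virtual and real hands coincide; by the time the real hand reaches its physical contact point, the virtual hand has accumulated the full offset.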
User-defined Swarm Robot Control
A swarm of robots can accomplish more than the sum of its parts, and swarm systems will soon see increased use in applications ranging from tangible interfaces to search and rescue teams. However, effective human control of robot swarms is demonstrably more difficult than controlling a single robot, and swarm-specific interaction methodologies are relatively underexplored. As we envision that even non-expert users will have more daily in-person encounters with different numbers of robots in the future, we present a user-defined set of control interactions for tabletop swarm robots derived from an elicitation study. We investigated the effects of the number of robots and of proximity on users' interactions and found significant effects. For instance, participants varied between using 1-2 fingers, one hand, and both hands depending on the group size. We also provide general design guidelines such as preferred interaction modality, common strategies, and a high-agreement interaction set.
2020 · Lawrence H. Kim et al. · Stanford University · CHI · Topics: Human-Robot Collaboration (HRC)

SwarmHaptics: Haptic Display with Swarm Robots
This paper seeks to better understand the use of haptic feedback in abstract, ubiquitous robotic interfaces. We introduce and provide preliminary evaluations of SwarmHaptics, a new type of haptic display using a swarm of small, wheeled robots. These robots move on a flat surface and apply haptic patterns to the user's hand, arm, or any other accessible body parts. We explore the design space of SwarmHaptics, including individual and collective robot parameters, and demonstrate example scenarios including remote social touch using the Zooids platform. To gain insights into human perception, we applied haptic patterns with varying numbers of robots, force types, frequencies, and amplitudes, and measured users' perception in terms of emotion, urgency, and Human-Robot Interaction metrics. In a separate elicitation study, users generated a set of haptic patterns for social touch. The results from the two studies help inform how users perceive and generate haptic patterns with SwarmHaptics.
2019 · Lawrence H. Kim et al. · Stanford University · CHI · Topics: Mid-Air Haptics (Ultrasonic); Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials

Editing Spatial Layouts through Tactile Templates for People with Visual Impairments
Spatial layout is a key component in graphic design. While people who are blind or visually impaired (BVI) can use screen readers or magnifiers to access digital content, these tools fail to fully communicate the content's graphic design information. Through semi-structured interviews and contextual inquiries, we identify the lack of this information and feedback as major challenges in understanding and editing layouts. Guided by these insights and a co-design process with a blind hobbyist web developer, we developed an interactive, multimodal authoring tool that lets blind people understand spatial relationships between elements and modify layout templates. Our tool automatically generates tactile print-outs of a web page's layout, which users overlay on top of a tablet that runs our self-voicing digital design tool. We conclude with design considerations grounded in user feedback for improving the accessibility of spatially encoded information and developing tools for BVI authors.
2019 · Jingyi Li et al. · Stanford University · CHI · Topics: Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design

Beyond The Force: Using Quadcopters to Appropriate Objects and the Environment for Haptics in Virtual Reality
Quadcopters have been used as hovering encountered-type haptic devices in virtual reality. We suggest that quadcopters can facilitate rich haptic interactions beyond force feedback by appropriating physical objects and the environment. We present HoverHaptics, an autonomous safe-to-touch quadcopter and its integration with a virtual shopping experience. HoverHaptics highlights three affordances of quadcopters that enable these rich haptic interactions: (1) dynamic positioning of passive haptics, (2) texture mapping, and (3) animating passive props. We identify inherent challenges of hovering encountered-type haptic devices, such as their limited speed, inadequate control accuracy, and safety concerns. We then detail our approach for tackling these challenges, including the use of display techniques, visuo-haptic illusions, and collision avoidance. We conclude by describing a preliminary study (n = 9) to better understand the subjective user experience when interacting with a quadcopter in virtual reality using these techniques.
2019 · Parastoo Abtahi et al. · Stanford University · CHI · Topics: Automated Driving Interface & Takeover Design; Drone Interaction & Control

Pinpoint: A PCB Debugging Pipeline Using Interruptible Routing and Instrumentation
Difficulties in accessing, isolating, and iterating on the components and connections of a printed circuit board (PCB) create unique challenges in PCB debugging. Manual probing methods are slow and error-prone, and even dedicated PCB testing equipment remains limited by its inability to modify the circuit during testing. We present Pinpoint, a tool that facilitates in-circuit PCB debugging through techniques such as programmatically probing signals, dynamically disconnecting components and subcircuits to test in isolation, and splicing in new elements to explore potential modifications. Pinpoint automatically instruments a PCB design and generates designs for a physical jig board that interfaces the user's PCB to our custom testing hardware and to software tools. We evaluate Pinpoint's ability to facilitate the debugging of various PCB issues by instrumenting and testing different classes of boards, as well as by characterizing its technical limitations and by soliciting feedback through a guided exploration with PCB designers.
2019 · Evan Strasnick et al. · Stanford University · CHI · Topics: Circuit Making & Hardware Prototyping

Visuo-Haptic Illusions for Improving the Perceived Performance of Shape Displays
In this work, we utilize visuo-haptic illusions to improve the perceived performance of encountered-type haptic devices, specifically shape displays, in virtual reality. Shape displays are matrices of actuated pins that travel vertically to render physical shapes; however, they have limitations such as low resolution, small display size, and low pin speed. To address these limitations, we employ illusions such as redirection, scaling, and retargeting that take advantage of the visual dominance effect, the idea that vision often dominates when senses conflict. Our evaluation of these techniques suggests that redirecting sloped lines with angles less than 40 degrees onto a horizontal line is an effective technique for increasing the perceived resolution of the display. Scaling up the virtual object onto the shape display by a factor less than 1.8x can also increase the perceived resolution. Finally, vertical redirection can achieve a perceived 3x increase in pin speed.
2018 · Parastoo Abtahi et al. · Stanford University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials; Visualization Perception & Cognition

A Functional Optimization Based Approach for Continuous 3D Retargeted Touch of Arbitrary, Complex Boundaries in Haptic Virtual Reality
Passive or actuated physical props can provide haptic feedback, leading to a satisfying sense of presence and realism in virtual reality. However, the mismatch between the physical and virtual surfaces (boundaries) can diminish user experience. Haptic retargeting can overcome this limitation by utilizing visuo-haptic effects. Previous investigations in haptic retargeting have focused on methods for point-based position retargeting and techniques for remapping 2D shapes or simple 3D shape changes. Our approach extends haptic retargeting to complex, arbitrary shapes that provide a continuous mapping across all points on a boundary. This new approach also allows for multi-finger interaction. We describe a functional optimization to find the ideal spatial warping function with different goals: maximizing mapping smoothness, minimizing mismatch between the real and virtual world, or a combination of the two. We report on a preliminary user study of different optimization goals and elaborate on potential applications through a set of demonstrations.
2018 · Yiwei Zhao et al. · Stanford University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials

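The entry above poses retargeting as a trade-off between warp smoothness and real-virtual mismatch. A minimal 1D analogue of that trade-off (the paper optimizes full 3D warping functions; the discretization, penalty form, and solver below are my own simplification) is the quadratic objective `sum((d_i - t_i)**2) + s * sum((d_{i+1} - d_i)**2)` over per-point offsets `d` given desired offsets `t`, whose normal equations form a tridiagonal system solvable with the Thomas algorithm:

```python
def solve_warp(desired, smoothness):
    """Minimize mismatch to the desired offsets plus a smoothness
    penalty on adjacent differences, by solving the tridiagonal
    normal equations (I + smoothness * L) d = desired, where L is
    the path-graph Laplacian."""
    n = len(desired)
    s = float(smoothness)
    sub = [-s] * n                                    # sub-diagonal
    diag = [1 + s] + [1 + 2 * s] * (n - 2) + [1 + s]  # main diagonal
    sup = [-s] * n                                    # super-diagonal
    rhs = list(desired)
    for i in range(1, n):                             # forward elimination
        w = sub[i] / diag[i - 1]
        diag[i] -= w * sup[i - 1]
        rhs[i] -= w * rhs[i - 1]
    d = [0.0] * n
    d[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):                    # back substitution
        d[i] = (rhs[i] - sup[i] * d[i + 1]) / diag[i]
    return d
```

With `smoothness = 0` the warp matches the desired offsets exactly; as it grows, the offsets flatten toward their mean, spreading the mismatch smoothly across the boundary.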
shapeShift: 2D Spatial Manipulation and Self-Actuation of Tabletop Shape Displays for Tangible and Haptic Interaction
We explore interactions enabled by 2D spatial manipulation and self-actuation of a tabletop shape display. To explore these interactions, we developed shapeShift, a compact, high-resolution (7 mm pitch), mobile tabletop shape display. shapeShift can be mounted on passive rollers, allowing for bimanual interaction where the user can freely manipulate the system while it renders spatially relevant content. shapeShift can also be mounted on an omnidirectional robot to provide both vertical and lateral kinesthetic feedback, display moving objects, or act as an encountered-type haptic device for VR. We present a study on haptic search tasks comparing spatial manipulation of a shape display for egocentric exploration of a map versus exploration using a fixed display and a touch pad. Results show a 30% decrease in navigation path lengths, 24% decrease in task time, 15% decrease in mental demand, and 29% decrease in frustration in favor of egocentric navigation.
2018 · Alexa F Siu et al. · Stanford University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials; Shape-Changing Materials & 4D Printing

Grand Challenges in Shape-Changing Interface Research
Shape-changing interfaces have emerged as a new method for interacting with computers, using dynamic changes in a device's physical shape for input and output. With the advances of research into shape-changing interfaces, we see a need to synthesize the main, open research questions. The purpose of this synthesis is to formulate common challenges across the diverse fields engaged in shape-change research, to facilitate progression from single prototypes and individual design explorations to grander scientific goals, and to draw attention to challenges that come with maturity, including those concerning ethics, theory-building, and societal impact. In this article we therefore present 12 grand challenges for research on shape-changing interfaces, derived from a three-day workshop with 25 shape-changing interface experts with backgrounds in design, computer science, human-computer interaction, engineering, robotics, and material science.
2018 · Jason Alexander et al. · Lancaster University · CHI · Topics: Shape-Changing Interfaces & Soft Robotic Materials