DesignMemo: Integrating Discussion Context into Online Collaboration with Enhanced Design Rationale Tracking

Remote collaborative design has become increasingly popular, but current design tools often overlook the contextual communication that occurs during synchronized design activities, which is critical for understanding the rationale behind design decisions. In this paper, we introduce DesignMemo, a proof-of-concept system that integrates the verbal context of remote discussions into visual design history tracking. The system automatically labels visual elements with annotations, each linked to the corresponding segment of the meeting transcript, so that users can recall the context behind a design choice by clicking the element. The system also integrates an LLM agent that produces annotation-oriented summaries based on global context tracking, so users can quickly follow the design rationale without reading the lengthy transcript. Our user study with 24 participants suggests that the ability to track communication context makes the iterative design process smoother and more efficient.

Boyu Li et al. CSCW 2025. Topics: Working together (with other people).
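To make the annotation-to-transcript link concrete, here is a minimal sketch of one way an annotation on a canvas element could carry both its linked transcript spans and an LLM-generated summary. The names (TranscriptSegment, Annotation, recall_context) are hypothetical; this is an illustration of the data structure, not DesignMemo's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class TranscriptSegment:
    start_sec: float
    end_sec: float
    speaker: str
    text: str

@dataclass
class Annotation:
    element_id: str                                # visual element on the canvas
    segments: list = field(default_factory=list)   # linked transcript spans
    summary: str = ""                              # LLM-generated summary

def recall_context(annotations, element_id):
    """Return the discussion context attached to a clicked element."""
    for ann in annotations:
        if ann.element_id == element_id:
            return ann.summary or " ".join(s.text for s in ann.segments)
    return None

seg = TranscriptSegment(120.0, 134.5, "Alice", "Let's make the header blue for contrast.")
ann = Annotation("header-1", [seg], summary="Header turned blue for contrast (Alice).")
print(recall_context([ann], "header-1"))
```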
MapStory: Prototyping Editable Map Animations with LLM Agents

We introduce MapStory, an LLM-powered animation prototyping tool that generates editable map animation sequences directly from natural language text by leveraging a dual-agent LLM architecture. Given a user-written script, MapStory automatically produces a scene breakdown that decomposes the text into key map animation primitives such as camera movements, visual highlights, and animated elements. Our system includes a researcher agent that queries geospatial information by combining an LLM with web search, automatically extracting relevant regions, paths, and coordinates; users can edit the results or query for changes and additional information to refine them. Users can also fine-tune the parameters of these primitive blocks through an interactive timeline editor. We detail the system's design and architecture, informed by formative interviews with professional animators and by an analysis of 200 existing map animation videos. Our evaluation, which includes expert interviews (N=5) and a usability study (N=12), demonstrates that MapStory enables users to create map animations with ease, facilitates faster iteration, encourages creative exploration, and lowers barriers to creating map-centric stories.

Aditya Gunturu et al. UIST 2025. Topics: Geospatial & Map Visualization; Computational Methods in HCI.
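As an illustration of what a scene breakdown might look like, the sketch below encodes the three primitive types named in the abstract (camera movements, visual highlights, animated elements) as plain dictionaries. The field names and the validate stub are assumptions, not MapStory's actual schema.

```python
# Hypothetical scene breakdown a dual-agent pipeline might emit for a script
# about a trip through Kyoto; field names are illustrative assumptions.
scene_breakdown = [
    {"type": "camera", "action": "fly_to", "target": "Kyoto, Japan",
     "zoom": 11, "duration_s": 3.0},
    {"type": "highlight", "region": "Kyoto Prefecture", "style": "outline"},
    {"type": "animated_element", "kind": "path",
     "waypoints": ["Kyoto Station", "Fushimi Inari Shrine"], "duration_s": 5.0},
]

def validate(block):
    """A researcher agent would resolve place names to coordinates here;
    this stub only checks that the block uses a known primitive type."""
    return block["type"] in {"camera", "highlight", "animated_element"}

assert all(validate(b) for b in scene_breakdown)
```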
InSituTale: Enhancing Augmented Data Storytelling with Physical Objects

Augmented data storytelling enhances narrative delivery by integrating visualizations with physical environments and presenter actions. Existing systems predominantly rely on body gestures or speech to control visualizations, leaving interactions with physical objects largely underexplored. We introduce augmented physical data storytelling, an approach that enables presenters to manipulate visualizations through interactions with physical objects. To inform this approach, we first conducted a survey of data-driven presentations to identify common visualization commands, and then held workshops with nine HCI/VIS researchers to collect mappings between physical manipulations and these commands. Guided by these insights, we developed InSituTale, a prototype that combines object tracking via a depth camera with a Vision-LLM for detecting real-world events. Through physical manipulations, presenters can dynamically execute various visualization commands, delivering cohesive data storytelling experiences that blend physical and digital elements. A user study with 12 participants demonstrated that InSituTale enables intuitive interactions, offers high utility, and facilitates an engaging presentation experience.

Kentaro Takahira et al. UIST 2025. Topics: Interactive Data Visualization; Context-Aware Computing; Interactive Narrative & Immersive Storytelling.
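A minimal sketch of the mapping layer such a system needs, assuming hypothetical object labels, event names, and command strings (the workshop-derived mappings mentioned in the abstract are not reproduced here):

```python
# Dispatch table from detected physical-object events to visualization
# commands; all labels and command names are illustrative assumptions.
EVENT_TO_COMMAND = {
    ("mug", "moved_onto_chart"): "filter_by_category",
    ("mug", "lifted"): "reset_filter",
    ("book", "opened"): "show_detail_view",
}

def dispatch(obj_label, event, command_log):
    """Look up and issue the visualization command for a detected event."""
    command = EVENT_TO_COMMAND.get((obj_label, event))
    if command:
        command_log.append(command)  # stand-in for sending to the chart
    return command

log = []
dispatch("mug", "moved_onto_chart", log)
print(log)  # ['filter_by_category']
```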
From Following to Understanding: Investigating the Role of Reflective Prompts in AR-Guided Tasks to Promote User Understanding

Augmented Reality (AR) is a promising medium for guiding users through tasks, yet its impact on fostering deeper task understanding remains underexplored. This paper investigates the impact of reflective prompts (strategic questions that encourage users to challenge assumptions, connect actions to outcomes, and consider hypothetical scenarios) on task comprehension and performance. We conducted a two-phase study: a formative survey and co-design sessions (N=9) to develop reflective prompts, followed by a within-subject evaluation (N=16) comparing AR instructions with and without these prompts in coffee-making and circuit-assembly tasks. Our results show that reflective prompts significantly improved objective task understanding and resulted in more proactive information acquisition during task completion. These findings highlight the potential of incorporating reflective elements into AR instructions to foster deeper engagement and learning. Based on data from both studies, we synthesized design guidelines for integrating reflective elements into AR systems to enhance user understanding without compromising task performance.

Nandi Zhang et al., University of Calgary. CHI 2025. Topics: AR Navigation & Context Awareness; Prototyping & User Testing.
InflatableBots: Inflatable Shape-Changing Mobile Robots for Large-Scale Encountered-Type Haptics in VR

We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction area, our approach combines mobile robots with fan-based inflatable structures, enabling safe, scalable, and deployable haptic interactions at large scale. We developed three coordinated inflatable mobile robots, each consisting of an omni-directional mobile base and a reel-based inflatable structure. Each robot can simultaneously and rapidly change its height and position (horizontal: 58.5 cm/s; vertical: 10.4 cm/s, over a height range of 40 cm to 200 cm), allowing quick, dynamic haptic rendering of multiple touch points to simulate various body-scale objects and surfaces in real time across large spaces (3.5 m x 2.5 m). We evaluated our system with a user study (N=12), which confirmed its unique advantages in safety, deployability, and large-scale interactability, significantly improving realism in VR experiences.

Ryota Gomi et al., Tohoku University. CHI 2024. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Social & Collaborative VR; Immersion & Presence Research.
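A quick back-of-the-envelope check of what the reported speeds imply for rendering latency; the speeds and workspace size come straight from the abstract, while the worst-case-diagonal framing is our own assumption:

```python
# How long a robot needs to cross the workspace and to traverse its
# full height range, given the speeds reported in the abstract.
h_speed = 58.5   # cm/s, horizontal
v_speed = 10.4   # cm/s, vertical

diag = (350**2 + 250**2) ** 0.5   # workspace diagonal in cm (3.5 m x 2.5 m)
print(f"worst-case horizontal travel: {diag / h_speed:.1f} s")       # ~7.4 s
print(f"full height change (40 -> 200 cm): {(200 - 40) / v_speed:.1f} s")  # ~15.4 s
```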
Teachable Reality: Prototyping Tangible Augmented Reality with Everyday Objects by Leveraging Interactive Machine Teaching

This paper introduces Teachable Reality, an augmented reality (AR) prototyping tool for creating interactive tangible AR applications with arbitrary everyday objects. Teachable Reality leverages vision-based interactive machine teaching (e.g., Teachable Machine) to capture real-world interactions for AR prototyping, identifying user-defined tangible and gestural interactions with an on-demand computer vision model. Based on this, users can create functional AR prototypes without programming through a trigger-action authoring interface. Our approach thus offers flexible, customizable, and generalizable tangible AR applications, addressing the limitations of current marker-based approaches. We explore the design space and demonstrate various AR prototypes, including tangible and deformable interfaces, context-aware assistants, and body-driven AR applications. Results from our user study and expert interviews confirm that our approach can lower the barrier to creating functional AR prototypes while allowing flexible and general-purpose prototyping experiences.

Kyzyl Monteiro et al., IIIT-Delhi and University of Calgary. CHI 2023. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Hand Gesture Recognition; AR Navigation & Context Awareness.
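To illustrate the trigger-action model, here is a hedged sketch with hypothetical rule and label names; in the real system the trigger label would come from the on-demand vision model trained through interactive machine teaching, not a hard-coded string:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    trigger: str   # class label predicted by the vision model
    action: str    # AR behavior to run

# Illustrative rules a user might author (labels are assumptions).
rules = [
    Rule(trigger="cup_flipped", action="show_timer_overlay"),
    Rule(trigger="hand_wave", action="dismiss_overlay"),
]

def on_prediction(label, confidence, threshold=0.8):
    """Fire the first matching rule once the model is confident enough."""
    if confidence < threshold:
        return None
    return next((r.action for r in rules if r.trigger == label), None)

print(on_prediction("cup_flipped", 0.93))  # show_timer_overlay
```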
RealitySketch: Embedding Responsive Graphics and Visualizations in AR through Dynamic Sketching

We present RealitySketch, an augmented reality interface for sketching interactive graphics and visualizations. In recent years, a growing number of AR sketching tools have enabled users to draw and embed sketches in the real world. With current tools, however, sketched content is inherently static, floating in mid-air without responding to the real world. This paper introduces a new way to embed dynamic and responsive graphics in the real world. In RealitySketch, the user draws graphical elements on a mobile AR screen and binds them to physical objects in real-time, improvisational ways, so that the sketched elements move with the corresponding physical motion. The user can also quickly visualize and analyze real-world phenomena through responsive graph plots or interactive visualizations. This paper contributes a set of interaction techniques for capturing, parameterizing, and visualizing real-world motion without predefined programs or configurations. Finally, we demonstrate our tool with several application scenarios, including physics education, sports training, and in-situ tangible interfaces.

Ryo Suzuki et al. UIST 2020. Topics: AR Navigation & Context Awareness; Interactive Data Visualization.
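A toy sketch of the core binding idea, with invented names: a sketched parameter (here, an angle around a pivot) is recomputed from tracked object positions every frame and fed to a responsive plot:

```python
import math

def bind_angle(pivot, tracked):
    """Angle of the pivot->tracked segment, recomputed per frame."""
    dx, dy = tracked[0] - pivot[0], tracked[1] - pivot[1]
    return math.degrees(math.atan2(dy, dx))

pivot = (0.0, 0.0)
frames = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # tracked positions per frame
series = [bind_angle(pivot, p) for p in frames]
print(series)  # [0.0, 45.0, 90.0] -> values a responsive graph plot would chart
```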
RoomShift: Room-scale Dynamic Haptics for VR with Furniture-moving Swarm Robots

RoomShift is a room-scale dynamic haptic environment for virtual reality that uses a small swarm of robots capable of moving furniture. RoomShift consists of nine shape-changing robots: Roombas with mechanical scissor lifts. These robots drive beneath a piece of furniture to lift, move, and place it. By augmenting virtual scenes with physical objects, users can sit on, lean against, place, and otherwise interact with furniture with their whole body, just as in the real world. When the virtual scene changes or users navigate within it, the swarm dynamically reconfigures the physical environment to match the virtual content. We describe the hardware and software implementation, applications in virtual tours and architectural design, and interaction techniques.

Ryo Suzuki et al., University of Colorado Boulder. CHI 2020. Topics: Shape-Changing Interfaces & Soft Robotic Materials; Mixed Reality Workspaces; Human-Robot Collaboration (HRC).
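As a sketch of the kind of dispatching such a system needs (not RoomShift's actual planner), here is a greedy nearest-free-robot assignment for when the virtual scene requests new furniture positions:

```python
def nearest_free(robots, busy, target):
    """Pick the closest robot not already assigned to a task."""
    free = [r for r in robots if r["id"] not in busy]
    return min(free, key=lambda r: (r["x"] - target[0])**2 + (r["y"] - target[1])**2)

robots = [{"id": 1, "x": 0, "y": 0}, {"id": 2, "x": 5, "y": 5}]
moves = {"chair": (4, 4), "table": (1, 1)}  # furniture -> requested position

busy, plan = set(), {}
for item, pos in moves.items():
    robot = nearest_free(robots, busy, pos)
    busy.add(robot["id"])
    plan[item] = robot["id"]
print(plan)  # {'chair': 2, 'table': 1}
```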
EnhancedTouchX: Smart Bracelets for Augmenting Interpersonal Touch Interactions

EnhancedTouchX is a bracelet-type interpersonal body-area-network device that not only detects but also quantifies hand-to-hand touch interactions. Without any wired connection, it can identify the direction and gesture of a touch. The device connects to an external device via Bluetooth Low Energy to monitor and log where, when, how long, with whom, and how touch interactions occur. The contextual information from these augmented daily touch interactions enables a variety of applications for facilitating social interactions. Our experiment, conducted with several pairs of participants, demonstrates that the devices can identify the direction of a touch (from the person initiating the touch, active touch, to the person being touched, passive touch) with 95% accuracy. The devices can also identify four types of touch gestures with 85% accuracy using a simple threshold classifier.

Taku Hachisu et al., University of Tsukuba. CHI 2019. Topics: Haptic Wearables.
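The "simple threshold classifier" could look roughly like the sketch below; the features, thresholds, and gesture names are illustrative assumptions, not the paper's actual values:

```python
def classify_touch(duration_s, contact_events_per_s):
    """Toy threshold classifier over two touch features:
    short vs. long contact, and low vs. high contact-event rate."""
    if duration_s < 0.3:
        return "tap" if contact_events_per_s <= 2 else "pat"
    return "hold" if contact_events_per_s <= 2 else "rub"

print(classify_touch(0.1, 1))   # tap
print(classify_touch(1.2, 6))   # rub
```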
Reactile: Programming Swarm User Interfaces through Direct Physical Manipulation

We explore a new approach to programming swarm user interfaces (Swarm UI) by leveraging direct physical manipulation. Existing Swarm UI applications are written using a robot programming framework: users work on a computer screen and think in terms of low-level controls. In contrast, our approach allows programmers to work in physical space by directly manipulating objects and to think in terms of high-level interface design. Inspired by current UI programming practices, we introduce a four-step workflow (create elements, abstract attributes, specify behaviors, and propagate changes) for Swarm UI programming. We propose a set of direct physical manipulation techniques to support each step in this workflow. To demonstrate these concepts, we developed Reactile, a Swarm UI programming environment that actuates a swarm of small magnets and displays spatial information of program states using a DLP projector. Two user studies (an in-class survey with 148 students and a lab interview with eight participants) confirm that our approach is intuitive and understandable for programming Swarm UIs.

Ryo Suzuki et al., University of Colorado Boulder. CHI 2018. Topics: Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS); Shape-Changing Interfaces & Soft Robotic Materials.
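One way to read the four-step workflow as data and functions, with entirely hypothetical names (Reactile's actual environment is magnet- and projector-based, not textual):

```python
element = {"shape": "bar", "height": 3}           # 1. create elements
attr = {"name": "height", "source": element}      # 2. abstract attributes

def behavior(value):                              # 3. specify behaviors
    """Toy layout: one magnet position per unit of the bar's height."""
    return [(i, value) for i in range(value)]

def propagate(new_value):                         # 4. propagate changes
    attr["source"]["height"] = new_value
    return behavior(new_value)                    # swarm reflows to match

print(propagate(5))  # [(0, 5), (1, 5), (2, 5), (3, 5), (4, 5)]
```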
Dynablock: Dynamic 3D Printing for Instant and Reconstructable Shape Formation

This paper introduces Dynamic 3D Printing, a fast and reconstructable shape-formation technique. Dynamic 3D Printing can assemble an arbitrary three-dimensional shape from a large number of small physical elements, then disassemble the shape back into elements and reconstruct a new shape. It combines the capabilities of 3D printers and shape displays: like conventional 3D printing, it can generate arbitrary and graspable three-dimensional shapes, while allowing shapes to be rapidly formed and reformed as in a shape display. To demonstrate the idea, we describe the design and implementation of Dynablock, a working prototype of a dynamic 3D printer. Dynablock can form a three-dimensional shape in seconds by assembling 3,000 9-mm blocks, leveraging a 24 x 16 pin-based shape display as a parallel assembler. Dynamic 3D Printing is a step toward our long-term vision in which 3D printing becomes an interactive medium, rather than the means of fabrication it is today. We explore this vision by illustrating application scenarios that are difficult to achieve with conventional 3D printing or shape display systems.

Ryo Suzuki et al. UIST 2018. Topics: Shape-Changing Materials & 4D Printing.
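A quick arithmetic check on the parallel-assembler claim, using only numbers from the abstract and assuming (our simplification) at most one block placed per pin per assembly pass:

```python
import math

pins = 24 * 16        # 384 pins -> up to 384 blocks placed per pass
blocks = 3000         # total blocks in the assembled shape
print(math.ceil(blocks / pins))  # ~8 parallel passes to place all blocks
```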
PEP (3D Printed Electronic Papercrafts): An Integrated Approach for 3D Sculpting Paper-Based Electronic Devices

We present PEP (Printed Electronic Papercrafts), a set of design and fabrication techniques for integrating electronics-based interactivity into printed papercrafts via 3D sculpting. We explore the design space of PEP by integrating four functions into 3D paper products, actuation, sensing, display, and communication, leveraging the expressive and technical opportunities enabled by stacking paper-like functional layers. We outline a seven-step workflow, introduce a design tool we developed as an add-on to an existing CAD environment, and demonstrate example applications that combine electronics-enabled functionality, the capability of 3D sculpting, and the unique creative affordances of paper as a material.

Hyunjoo Oh et al., University of Colorado Boulder. CHI 2018. Topics: Desktop 3D Printing & Personal Fabrication; Circuit Making & Hardware Prototyping.