Shape n’ Swarm: Hands-on, Shape-aware Generative Authoring with Swarm UI and LLMs (UIST 2025)
Matthew Jeung et al.
This paper introduces a novel authoring method for swarm user interfaces that combines hands-on shape manipulation and speech to convey intent for generative motion and interaction. We refer to this authoring method as shape-aware generative authoring, which generalizes to actuated tangible user interfaces. The proof-of-concept Shape n’ Swarm tool allows users to create diverse animations and interactions with tabletop robots by hand-arranging the robots and providing spoken instructions. The system employs multiple script-generating LLM agents that work together to handle user inputs for three major generative tasks: (1) thematically interpreting the shapes created by users; (2) creating animations for the manipulated shape; and (3) flexibly building interaction by mapping I/O. In a user study (n = 11), participants could easily create diverse physical animations and interactions without coding. To guide this novel research space, we also share limitations, research challenges, and design recommendations.
Tags: Shape-Changing Interfaces & Soft Robotic Materials; Generative AI (Text, Image, Music, Video); Human-LLM Collaboration
Buoyancé: Reeling Helium-Inflated Balloons with Mobile Robots on the Ground for Mid-Air Tangible Display, Interaction, and Assembly (UIST 2025)
Alan Pham et al.
We introduce a novel approach to spatially actuated tangible UI: controlling helium-inflated balloons (HIBs) in mid-air using mobile reeling robots, named ReelBots. Despite a relatively compact device form factor, the robots can manipulate HIBs over an extensive vertical range, reaching relatively high altitudes (20 m or more), thanks to their reeling mechanisms. The hardware offers diverse interactive functionalities and applications: representing abstract data in 3D space, reconfiguring lights and cameras in an everyday space, and assembling HIBs into diverse configurations. Our proof-of-concept implementation builds on omnidirectional mobile robots and a motion tracking system to demonstrate this novel approach to enriching 3D physical space. Our control software manipulates multiple robots to control the position of HIBs in real time through multiple options, ranging from GUI control to tangible and gesture-based controls.
Tags: Shape-Changing Interfaces & Soft Robotic Materials; Digital Art Installations & Interactive Performance
Shape-Kit: A Design Toolkit for Crafting On-Body Expressive Haptics (CHI 2025)
Ran Zhou et al., University of Chicago; KTH Royal Institute of Technology
Driven by the vision of everyday haptics, the HCI community is advocating for “design touch first” and investigating “how to touch well.” However, a gap remains between the exploratory nature of haptic design and technical reproducibility. We present Shape-Kit, a hybrid design toolkit embodying our “crafting haptics” metaphor, where hand touch is transduced into dynamic pin-based sensations that can be freely explored across the body. An ad-hoc tracking module captures and digitizes these patterns. Our study with 14 designers and artists demonstrates how Shape-Kit facilitates sensorial exploration for expressive haptic design. We analyze how designers collaboratively ideate, prototype, iterate, and compose touch experiences, and show the subtlety and richness of touch that can be achieved through diverse crafting methods with Shape-Kit. Reflecting on the findings, our work contributes key insights into haptic toolkit design and touch design practices centered on the “crafting haptics” metaphor. We discuss in depth how Shape-Kit’s simplicity, though constrained, enables focused crafting for deeper exploration, while its collaborative nature fosters shared sense-making of touch experiences.
Tags: Haptic Wearables; Shape-Changing Interfaces & Soft Robotic Materials
CARDinality: Interactive Card-shaped Robots with Locomotion and Haptics using Vibration (UIST 2024)
Aditya Retnanto et al.
This paper introduces a novel approach to interactive robots by leveraging the form factor of cards to create thin robots equipped with vibrational capabilities for locomotion and haptic feedback. The system is composed of flat-shaped robots with on-device sensing and wireless control, which offer lightweight portability and scalability. This research introduces a hardware prototype to explore the possibility of ‘vibration-based omni-directional sliding locomotion’. Applications include augmented card playing, educational tools, and assistive technology, which showcase CARDinality’s versatility in tangible interaction.
Tags: Vibrotactile Feedback & Skin Stimulation; Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials
TorqueCapsules: Fully-Encapsulated Flywheel Actuation Modules for Designing and Prototyping Movement-Based and Kinesthetic Interaction (UIST 2024)
Zhuolin Yang et al.
Flywheels are unique, versatile actuators that store kinetic energy and convert it to torque, widely utilized in aerospace, robotics, haptics, and more. However, prototyping interaction with flywheels is not trivial due to safety concerns, unintuitive operation, and implementation challenges. We present TorqueCapsules: self-contained, fully-encapsulated flywheel actuation modules that make flywheel actuators easy to control, safe to interact with, and quick to reconfigure and customize. Because each module fully encapsulates the actuator together with a wireless microcontroller, a battery, and other components, it can be readily attached, embedded, or stuck to everyday objects, worn on people’s bodies, or combined with other devices. With our custom GUI, both novice and expert users can easily control multiple modules to design and prototype movements and kinesthetic haptics unique to flywheel actuation. We demonstrate various applications, including actuated everyday objects, wearable haptics, and expressive robots. We also conducted workshops in which novices and experts employed TorqueCapsules, collecting qualitative feedback and further application examples.
Tags: In-Vehicle Haptic, Audio & Multimodal Feedback; Force Feedback & Pseudo-Haptic Weight
SHAPE-IT: Exploring Text-to-Shape-Display for Generative Shape-Changing Behaviors with LLMs (UIST 2024)
Wanli Qian et al.
This paper introduces text-to-shape-display, a novel approach to generating dynamic shape changes in pin-based shape displays through natural language commands. By leveraging large language models (LLMs) and AI-chaining, our approach allows users to author shape-changing behaviors on demand through text prompts without programming. We describe the foundational aspects necessary for such a system, including the identification of key generative elements (primitive, animation, and interaction) and design requirements to enhance user interaction, based on formative exploration and iterative design processes. Based on these insights, we develop SHAPE-IT, an LLM-based authoring tool for a 24 x 24 shape display, which translates the user's textual command into executable code and allows for quick exploration through a web-based control interface. We evaluate the effectiveness of SHAPE-IT in two ways: 1) performance evaluation and 2) user evaluation (N = 10). The findings highlight the ability to facilitate rapid ideation of a wide range of shape-changing behaviors with AI. However, the findings also expose accuracy-related challenges and limitations, prompting further exploration into refining the framework for leveraging AI to better suit the unique requirements of shape-changing systems.
Tags: Electrical Muscle Stimulation (EMS); Shape-Changing Interfaces & Soft Robotic Materials
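The AI-chaining pattern described in the SHAPE-IT abstract — a text prompt passing through staged generative elements (primitive, animation) before code generation — can be sketched as below. The stage functions are illustrative stubs standing in for real LLM calls, and all names (`primitive_stage`, `display.play`, etc.) are our own assumptions, not SHAPE-IT's actual implementation:

```python
# Each "agent" stage refines a structured spec before the final code-gen step.
# Stub logic replaces the LLM calls; in SHAPE-IT each stage would be a prompt.

def primitive_stage(prompt: str) -> dict:
    """Pick a shape primitive from the user's free-text command (stubbed)."""
    return {"prompt": prompt, "primitive": "wave" if "wave" in prompt else "flat"}

def animation_stage(spec: dict) -> dict:
    """Attach an animation behavior appropriate to the chosen primitive."""
    spec["animation"] = {"wave": "sine_sweep", "flat": "static"}[spec["primitive"]]
    return spec

def codegen_stage(spec: dict) -> str:
    """Emit an executable command for a hypothetical shape-display API."""
    return f"display.play('{spec['animation']}')"

def chain(prompt: str) -> str:
    """Run the prompt through the staged pipeline, then generate code."""
    spec = prompt
    for stage in (primitive_stage, animation_stage):
        spec = stage(spec)
    return codegen_stage(spec)
```

The value of chaining is that each stage sees only a small, structured sub-problem, which is easier to validate than one monolithic prompt-to-code step.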
“Push-That-There”: Tabletop Multi-robot Object Manipulation via Multimodal ‘Object-level Instruction’ (DIS 2024)
Keru Wang et al.
We present “Push-That-There”, an interaction method and system enabling multimodal object-level user interaction with a multi-robot system to autonomously and collectively manipulate objects on tabletop surfaces, inspired by “Put-That-There”. Rather than requiring users to instruct individual robots, users directly specify how they want objects to be moved, and the system responds by autonomously moving the objects via our generalizable multi-robot control algorithm. The system supports various user instruction modalities, including gestures, GUI, tangible manipulation, and speech input, allowing users to intuitively create object-level instructions. We outline a design space, highlight interaction design opportunities facilitated by “Push-That-There”, and provide an evaluation of our system's technical capabilities. While other recent HCI research has studied interaction with multi-robot systems (e.g., swarm UIs), our contribution is the design and technical implementation of intuitive object-level interaction for multi-robot systems that lets users work at a high level, rather than focusing on the movements of individual robots.
Tags: Cognitive Impairment & Neurodiversity (Autism, ADHD, Dyslexia); Human-Robot Collaboration (HRC)
Attention Receipts: Utilizing the Materiality of Receipts to Improve Screen-time Reflection on YouTube (CHI 2024)
Anup Sathya et al., University of Chicago
YouTube remains a site of problematic persuasive media consumption, often overriding users' goals while on the platform. In resistance, we present Attention Receipts: artifacts that materialize the cost of being persuaded by the engagement-driven design of YouTube. We design and build a browser plugin and a receipt printer that help users critically reflect on their time spent watching videos on YouTube. In a 3-week field deployment with 6 participants, we evaluate how the materiality of the receipts and users' agency in the reflection process affect both the quality of reflection and the time spent consuming media. We find that the materiality of the receipts positively influences time spent consuming internet media, and that users were split on having agency over when and how they reflect on their screen time. We conclude with design recommendations for domestic artifacts that utilize materiality to reveal the effects of persuasive technology.
Tags: Dark Patterns Recognition; Social Platform Design & User Behavior
Physica: Interactive Tangible Physics Simulation based on Tabletop Mobile Robots towards Explorable Physics Education (DIS 2023)
Jiatong Li et al.
In this paper, we introduce Physica, a tangible physics simulation system and approach based on tabletop mobile robots. In Physica, each tabletop robot can physically represent a distinct simulated object that is controlled through an underlying physics simulation, such as gravitational force, molecular movement, and spring force. It aims to bring the benefits of tangible and haptic interaction into explorable physics learning, which was traditionally only available on screen-based interfaces. The system utilizes off-the-shelf mobile robots (Sony Toio) and an open-source physics simulation tool (Teilchen). On top of these, we implement an interaction software pipeline that consists of 1) an event detector to reflect tangible interaction by users, and 2) target speed control to minimize the gap between the robot motion and the simulated moving objects. To present the potential for physics education, we demonstrate various application scenarios that illustrate different forms of learning using Physica. In our user study, we investigate the effect and potential of our approach through a perception study and interviews with physics educators.
Tags: Interactive Data Visualization; STEM Education & Science Communication; Desktop 3D Printing & Personal Fabrication
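The "target speed control" idea in the Physica abstract — commanding a robot speed that closes the gap between the robot and its simulated counterpart — can be sketched as a feed-forward term plus a proportional correction. The gains, speed limit, and function name below are illustrative assumptions, not the paper's actual controller:

```python
def target_speed(robot_pos: float, sim_pos: float, sim_vel: float,
                 gain: float = 2.0, max_speed: float = 0.3) -> float:
    """Speed command (m/s) for one axis of a tabletop robot.

    Feed-forward the simulated object's velocity, add a proportional
    correction on the position gap, and clamp to the robot's speed limit.
    All constants are illustrative, not from the Physica paper.
    """
    cmd = sim_vel + gain * (sim_pos - robot_pos)
    return max(-max_speed, min(max_speed, cmd))
```

Iterating this each control tick drives the robot toward the simulated object: the clamp dominates while the gap is large, and the proportional term shrinks the residual error geometrically once the command falls below the speed limit.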
AeroRigUI: Actuated TUIs for Spatial Interaction using Rigging Swarm Robots on Ceilings in Everyday Space (CHI 2023)
Lilith Yu et al., University of Chicago
We present AeroRigUI, an actuated tangible UI for 3D spatial embodied interaction. Using strings controlled by self-propelled swarm robots with a reeling mechanism on ceiling surfaces, our approach enables rigging (control through strings) of physical objects' position and orientation in the air. This enables novel interactions in 3D space, including dynamic physical affordances, 3D information displays, and haptics. Utilizing the ceiling, an often underused room area, AeroRigUI can serve a range of applications such as room organization, data physicalization, and animated expressions. We demonstrate the applications with our proof-of-concept prototype, which includes the hardware design of the rigging robots, named RigBots, and the software design for mid-air object control via interactive string manipulation. We also present a technical evaluation and analysis of our prototype to address hardware feasibility and safety. Overall, AeroRigUI enables a novel spatial and tangible UI system with great controllability and deployability.
Tags: Shape-Changing Interfaces & Soft Robotic Materials; Prototyping & User Testing
ThrowIO: Actuated TUIs that Facilitate "Throwing and Catching" Spatial Interaction with Overhanging Mobile Wheeled Robots (CHI 2023)
Ting-Han Lin et al., University of Chicago
We introduce ThrowIO, a novel style of actuated tangible user interface that facilitates throwing and catching spatial interaction, powered by mobile wheeled robots on overhanging surfaces. In our approach, users throw magnet-embedded objects that stick to an overhanging ferromagnetic surface, where wheeled robots can move them and drop them at desired locations, allowing users to catch them. The thrown objects are tracked with an RGBD camera system to perform closed-loop robotic manipulations. By computationally facilitating throwing and catching interaction, our approach can be applied in many applications, including kinesthetic learning, gaming, immersive haptic experiences, ceiling storage, and communication. We demonstrate the applications with a proof-of-concept system enabled by wheeled robots, ceiling hardware design, and software control. Overall, ThrowIO opens up novel spatial, dynamic, and tangible interaction via overhanging robots, which has great potential to be integrated into our everyday space.
Tags: Shape-Changing Interfaces & Soft Robotic Materials; Human-Robot Collaboration (HRC)
Sketched Reality: Sketching Bi-Directional Interactions Between Virtual and Physical Worlds with AR and Actuated Tangible UI (UIST 2022)
Hiroki Kaimoto et al.
This paper introduces Sketched Reality, an approach that combines AR sketching and actuated tangible user interfaces (TUIs) for bi-directional sketching interaction. Bi-directional sketching enables virtual sketches and physical objects to “affect” each other through physical actuation and digital computation. In existing AR sketching, the relationship between the virtual and physical worlds is only one-directional: while physical interaction can affect virtual sketches, virtual sketches have no return effect on physical objects or the environment. In contrast, bi-directional sketching interaction allows seamless coupling between sketches and actuated TUIs. In this paper, we employ tabletop-size small robots (Sony Toio) and an iPad-based AR sketching tool to demonstrate the concept. In our system, virtual sketches drawn and simulated on an iPad (e.g., lines, walls, pendulums, and springs) can move, actuate, collide with, and constrain physical Toio robots, as if the virtual sketches and physical objects existed in the same space, through seamless coupling between AR and robot motion. This paper contributes a set of novel interactions and a design space of bi-directional AR sketching. We demonstrate a series of potential applications, such as tangible physics education, explorable mechanisms, tangible gaming for children, and in-situ robot programming via sketching.
Tags: Automated Driving Interface & Takeover Design; Shape-Changing Interfaces & Soft Robotic Materials; AR Navigation & Context Awareness
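The coupling the Sketched Reality abstract describes — a sketched virtual spring exerting force on a physical robot — reduces to stepping a damped spring-mass simulation and feeding the resulting position to the robot each tick. The toy integrator below illustrates that idea; all constants and names are illustrative assumptions, not the paper's implementation:

```python
def spring_step(pos: float, vel: float, anchor: float,
                k: float = 10.0, damping: float = 1.0,
                mass: float = 1.0, dt: float = 0.01):
    """One semi-implicit Euler step of a sketched spring pulling a robot.

    `anchor` is where the virtual spring is drawn; `pos` would be sent to
    the robot as its motion target each tick. Constants are illustrative.
    """
    force = -k * (pos - anchor) - damping * vel  # Hooke's law + damping
    vel += force / mass * dt   # update velocity first (semi-implicit Euler)
    pos += vel * dt            # then position, using the new velocity
    return pos, vel
```

Semi-implicit Euler is chosen here because it stays stable for oscillatory systems at fixed time steps, which matters when the simulation must run in lockstep with a robot control loop.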