LoopBot: Representing Continuous Haptics of Grounded Objects in Room-scale VR
In room-scale virtual reality, providing continuous haptic feedback from touching grounded objects, such as walls and handrails, has been challenging due to the user's walking range and the required force. In this study, we propose LoopBot, a novel technique to provide continuous haptic feedback from grounded objects using only a single user-following robot. Specifically, LoopBot is equipped with a loop-shaped haptic prop attached to an omnidirectional robot; the prop scrolls to cancel out the robot's displacement, giving the user the haptic sensation that the prop is actually fixed in place, or "grounded." We first introduce the interaction design space of LoopBot and, as one of its promising interaction scenarios, implement a prototype for the experience of walking while grasping handrails. A performance evaluation shows that scrolling the prop cancels 77.5% of the robot's running speed on average. A preliminary user test (N = 10) also shows that the subjective realism of the experience and the sense of the virtual handrails being grounded were significantly higher than when the prop was not scrolled. Based on these findings, we discuss possible further developments of LoopBot.
2024 · Tetsushi Ikeda et al. · UIST
Tags: Force Feedback & Pseudo-Haptic Weight · Full-Body Interaction & Embodied Input · Smart Home Privacy & Security
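The grounding illusion rests on a simple velocity relation: the loop's surface must scroll opposite to the robot's motion so that a grasped point stays still in the world frame (the reported 77.5% average cancellation corresponds to an effective gain below 1 in practice). A minimal sketch of that relation, assuming a single scroll axis; the function name and gain parameter are mine, not the authors' controller:

```python
def prop_scroll_speed(robot_velocity: float, cancellation_gain: float = 1.0) -> float:
    """Surface speed at which the loop must scroll (opposite sign) so that a
    grasped point on the prop stays fixed in the world frame as the robot moves."""
    return -cancellation_gain * robot_velocity

# Example: the robot retreats at -0.4 m/s along the handrail axis, so the loop
# must scroll its surface at +0.4 m/s to keep the grasped section "grounded".
print(prop_scroll_speed(-0.4))  # 0.4
```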
InflatableBots: Inflatable Shape-Changing Mobile Robots for Large-Scale Encountered-Type Haptics in VR
We introduce InflatableBots, shape-changing inflatable robots for large-scale encountered-type haptics in VR. Unlike traditional inflatable shape displays, which are immobile and limited in interaction area, our approach combines mobile robots with fan-based inflatable structures. This enables safe, scalable, and deployable haptic interactions on a large scale. We developed three coordinated inflatable mobile robots, each consisting of an omnidirectional mobile base and a reel-based inflatable structure. Each robot can rapidly change its height and position simultaneously (horizontal: 58.5 cm/s; vertical: 10.4 cm/s, from 40 cm to 200 cm), which allows quick and dynamic haptic rendering of multiple touch points to simulate various body-scale objects and surfaces in real time across large spaces (3.5 m x 2.5 m). We evaluated our system with a user study (N = 12), which confirmed its unique advantages in safety, deployability, and large-scale interactability, significantly improving realism in VR experiences.
2024 · Ryota Gomi et al. (Tohoku University) · CHI
Tags: Shape-Changing Interfaces & Soft Robotic Materials · Social & Collaborative VR · Immersion & Presence Research
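As a rough feel for the reported actuation speeds, the sketch below estimates how long one robot needs to reach and render a touch point, assuming the horizontal and vertical axes move simultaneously (the max() is my simplification, not the authors' motion planner):

```python
def time_to_render(dx_cm: float, target_h_cm: float, current_h_cm: float = 40.0,
                   v_xy: float = 58.5, v_z: float = 10.4) -> float:
    """Time for one robot to drive dx_cm horizontally and inflate/deflate to
    target_h_cm, with both axes moving at once (hence max, not sum)."""
    return max(dx_cm / v_xy, abs(target_h_cm - current_h_cm) / v_z)

# Reaching a touch point 200 cm away at the full 200 cm height from the 40 cm
# rest state: the vertical travel dominates at roughly 15.4 s.
print(round(time_to_render(200, 200), 1))
```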
SwapVid: Integrating Video Viewing and Document Exploration with Direct Manipulation
Videos accompanied by documents ("document-based videos") enable presenters to share content beyond the video itself and audiences to use it for detailed content comprehension. However, concurrently exploring multiple channels of information can be taxing. We propose SwapVid, a novel interface for viewing and exploring document-based videos. SwapVid seamlessly integrates a video and a document into a single view and lets the content behave as both a video and a document; it adaptively switches a document-based video to act as a video or a document upon direct manipulation (e.g., scrolling the document, manipulating the video timeline). We conducted a user study with twenty participants, comparing SwapVid to side-by-side video/document views. Results showed that our interface reduces time and physical workload when exploring slide-based documents while referencing the video. Based on the study findings, we extended SwapVid with additional functionalities and demonstrated that they further extend its practical capabilities.
2024 · Taichi Murakami et al. (Tohoku University) · CHI
Tags: Interactive Data Visualization · Data Storytelling · Context-Aware Computing
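The adaptive switching can be pictured as a two-state toggle driven by which surface the user manipulates. A hedged sketch, with hypothetical event names ("scroll_document", "drag_timeline") standing in for the interface's actual manipulation events:

```python
from enum import Enum

class Mode(Enum):
    VIDEO = "video"
    DOCUMENT = "document"

def next_mode(current: Mode, event: str) -> Mode:
    """Switch the unified view's behavior based on the direct manipulation."""
    if event == "scroll_document":
        return Mode.DOCUMENT  # content now acts as a scrollable document
    if event == "drag_timeline":
        return Mode.VIDEO     # content now acts as a seekable video
    return current            # other events keep the current behavior

mode = Mode.VIDEO
for event in ["scroll_document", "hover", "drag_timeline"]:
    mode = next_mode(mode, event)
    print(event, "->", mode.value)
```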
TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors
Fingertip input allows for interactions that are natural, easy to perform, and socially acceptable. It also has advantages in terms of low physical demand, confidentiality, and haptic feedback. In this study, we propose TouchLog, a fingernail-type device that uses skin deformation of the fingertip to identify finger micro gestures written with the thumb on the index finger. TouchLog is attached to the index fingernail and allows for one-handed fingertip input without compromising the haptic feedback on the finger. To evaluate recognition accuracy over 11 types of finger micro gestures, we conducted a user study (N = 10) and obtained an average identification accuracy of 91.5% (SD = 3.1%). A continuous input method using skin deformation and contact pressure was also examined, and its usefulness as a wearable device was discussed.
2023 · Yoshifumi Kitamura et al. · UbiComp
Tags: Vibrotactile Feedback & Skin Stimulation · Haptic Wearables · Hand Gesture Recognition
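The abstract does not specify the recognition pipeline, so the following is purely illustrative: a nearest-centroid classifier over per-channel statistics of photo-reflective sensor windows, with the sensor count (8) and window length (50 samples) as assumptions:

```python
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """window: (samples, sensors) of photo-reflective intensities.
    Per-channel mean and std summarize the skin-deformation pattern."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

def fit(windows, labels):
    """Compute one feature centroid per gesture from labeled training windows."""
    feats = np.array([features(w) for w in windows])
    y = np.array(labels)
    return {c: feats[y == c].mean(axis=0) for c in set(labels)}

def predict(window, centroids):
    f = features(window)
    return min(centroids, key=lambda c: np.linalg.norm(f - centroids[c]))

# Tiny synthetic demo with two fake gestures that differ in mean intensity.
rng = np.random.default_rng(0)
train = [rng.normal(m, 0.1, (50, 8)) for m in (0.2, 0.8) for _ in range(5)]
labels = ["tap"] * 5 + ["swipe"] * 5
print(predict(rng.normal(0.8, 0.1, (50, 8)), fit(train, labels)))  # swipe
```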
BirdViewAR: Surroundings-aware Remote Drone Piloting Using an Augmented Third-person Perspective
We propose BirdViewAR, a surroundings-aware remote drone-operation system that gives pilots greater spatial awareness through an augmented third-person view (TPV) from an autopiloted secondary follower drone. The follower drone responds to the main drone's motions and directions using our optimization-based autopilot, allowing pilots to clearly observe the main drone and its imminent destination without extra input. To improve their understanding of the spatial relationships between the main drone and its surroundings, the TPV is visually augmented with AR-overlay graphics that highlight the main drone's spatial status: its heading, altitude, ground position, camera field of view (FOV), and proximity areas. We discuss BirdViewAR's design and implement a proof-of-concept prototype using programmable drones. Finally, we conduct a preliminary outdoor user study and find that BirdViewAR effectively increases spatial awareness and piloting performance.
2023 · Maakito Inoue et al. (Tohoku University) · CHI
Tags: Context-Aware Computing · Drone Interaction & Control
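The paper uses an optimization-based autopilot; as a simplified stand-in, the sketch below places the follower at a fixed offset behind and above the main drone along its heading, with the "back" and "up" distances as assumed parameters:

```python
import math

def follower_pose(main_x, main_y, main_z, heading_rad, back=4.0, up=2.0):
    """Keep the follower 'back' meters behind and 'up' meters above the main
    drone along its heading, yawed the same way, so the main drone and its
    imminent destination stay framed in the third-person view."""
    fx = main_x - back * math.cos(heading_rad)
    fy = main_y - back * math.sin(heading_rad)
    return fx, fy, main_z + up, heading_rad

# Main drone at (10, 5, 3) heading +y: the follower sits 4 m behind, 2 m up.
print(follower_pose(10.0, 5.0, 3.0, math.radians(90)))
```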
WaddleWalls: Room-scale Interactive Partitioning System using a Swarm of Robotic Partitions
We propose WaddleWalls, a room-scale interactive partitioning system that uses a swarm of robotic partitions to let occupants interactively reconfigure workspace partitions to satisfy their privacy and interaction needs. The system can automatically arrange a partition layout designed by the user on demand; the user specifies each target partition's position, orientation, and height using the controller's 3D manipulations. In this work, we discuss the design considerations of the interactive partition system and implement a proof-of-concept prototype of WaddleWalls assembled from off-the-shelf materials. We demonstrate the functionalities of WaddleWalls through several application scenarios in an open-plan office environment. We also conduct an initial user evaluation comparing WaddleWalls with conventional wheeled partitions, finding that WaddleWalls enables effective workspace partitioning and reduces the physical and temporal effort needed to fulfill ad hoc social and privacy requirements. Finally, we clarify the feasibility, potential, and future challenges of WaddleWalls through interviews with experts.
2022 · Yuki Onishi et al. · UIST
Tags: Domestic Robots · Human-Robot Collaboration (HRC)
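The per-partition goal a user specifies (position, orientation, height) and a naive way of handing goals to robots can be sketched as follows; the field names and the greedy nearest-robot assignment are my assumptions, not the system's planner:

```python
import math
from dataclasses import dataclass

@dataclass
class PartitionGoal:
    x_m: float       # target position on the floor plane
    y_m: float
    yaw_deg: float   # target orientation
    height_m: float  # target panel height

def assign(goals, robots):
    """Greedy one-to-one assignment: each goal gets the nearest free robot."""
    plan, free = [], list(robots)
    for g in goals:
        r = min(free, key=lambda rb: math.dist(rb["pos"], (g.x_m, g.y_m)))
        free.remove(r)
        plan.append((r["id"], g))
    return plan

robots = [{"id": "w1", "pos": (0.0, 0.0)}, {"id": "w2", "pos": (5.0, 5.0)}]
print(assign([PartitionGoal(2.0, 1.0, 90.0, 1.6)], robots))  # w1 is closer
```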
ModularHMD: A Reconfigurable Mobile Head-Mounted Display Enabling Ad-hoc Peripheral Interactions with the Real World
We propose ModularHMD, a new mobile head-mounted display concept that adopts a modular mechanism and allows a user to perform ad-hoc peripheral interactions with real-world devices or people during VR experiences. ModularHMD comprises a central HMD and three removable module devices installed in the periphery of the HMD cowl. Each module has four main states: occluding, extended VR view, video see-through (VST), and removed/reused. Among different combinations of module states, a user can quickly set up the necessary HMD forms, functions, and real-world views for ad-hoc peripheral interactions without removing the headset. For instance, an HMD user can see her surroundings by switching a module into the VST mode. She can also physically remove a module to obtain direct peripheral vision of the real world. The removed module can be reused as an instant interaction device (e.g., a touch keyboard) for subsequent peripheral interactions. Users can end the peripheral interaction and revert to a full VR experience by re-mounting the module. We design ModularHMD's configuration and peripheral interactions with real-world objects and people. We also implement a proof-of-concept prototype of ModularHMD to validate its interaction capabilities through a user study. Results show that ModularHMD is an effective solution that enables both immersive VR and ad-hoc peripheral interactions.
2021 · Isamu Endo et al. · UIST
Tags: Mixed Reality Workspaces · Immersion & Presence Research
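The four module states map naturally onto a small state machine. The states below come from the abstract; the transition triggers ("switch_vst", "detach", ...) are illustrative assumptions:

```python
from enum import Enum, auto

class ModuleState(Enum):
    OCCLUDING = auto()    # opaque: full VR immersion
    EXTENDED_VR = auto()  # renders extra peripheral VR imagery
    VST = auto()          # video see-through of the real surroundings
    REMOVED = auto()      # detached; reusable as an instant input device

TRANSITIONS = {
    (ModuleState.OCCLUDING, "extend_view"): ModuleState.EXTENDED_VR,
    (ModuleState.OCCLUDING, "switch_vst"): ModuleState.VST,
    (ModuleState.EXTENDED_VR, "switch_vst"): ModuleState.VST,
    (ModuleState.VST, "detach"): ModuleState.REMOVED,
    (ModuleState.REMOVED, "remount"): ModuleState.OCCLUDING,  # back to full VR
}

def transition(state: ModuleState, action: str) -> ModuleState:
    return TRANSITIONS.get((state, action), state)  # unknown actions: no-op

s = ModuleState.OCCLUDING
for action in ["switch_vst", "detach", "remount"]:
    s = transition(s, action)
    print(action, "->", s.name)
```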
Can Playing with Toy Blocks Reflect Behavior Problems in Children?
Although children's behavioral and mental problems are generally diagnosed in clinical settings, the prediction and awareness of children's mental wellness in daily settings are receiving increasing attention. Toy blocks are accessible in most children's daily lives and provide physicality as a unique non-verbal channel for expressing their inner world. In this paper, we propose a toy-block approach for predicting a range of behavior problems in young children (4-6 years old) as measured by the Child Behavior Checklist (CBCL). We defined and classified a set of quantitative play actions from IMU-embedded toy blocks. Play data collected from 78 preschoolers revealed that specific play actions and patterns indicate total problems, internalizing problems, and aggressive behavior in children. The results align with our qualitative observations and suggest the potential of predicting clinical behavior problems in children from short free-play sessions with sensor-embedded toy blocks.
2021 · Xiyue Wang et al. (Tohoku University) · CHI
Tags: Human Pose & Activity Recognition · Special Education Technology · Biosensors & Physiological Monitoring
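The paper defines its own set of quantitative play actions; as a purely illustrative stand-in, the sketch below flags high-energy events (e.g., banging a block) from the IMU acceleration norm, with the threshold and synthetic data as assumptions:

```python
import numpy as np

def detect_impacts(accel_g: np.ndarray, threshold_g: float = 2.5) -> np.ndarray:
    """accel_g: (samples, 3) gravity-compensated acceleration in g.
    Flags samples whose norm exceeds the threshold -- a crude proxy for
    forceful block actions such as banging or throwing."""
    return np.flatnonzero(np.linalg.norm(accel_g, axis=1) > threshold_g)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 0.2, (300, 3))
a[150] = (3.0, 0.5, 0.2)  # inject one sharp impact
print(detect_impacts(a))  # [150]
```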
PinpointFly: An Egocentric Position-control Drone Interface using Mobile AR
Accurate drone positioning is challenging because pilots have only a limited perception of a flying drone's position and direction from their viewpoint. This makes conventional joystick-based speed control inaccurate and complicated, and it significantly degrades piloting performance. We propose PinpointFly, an egocentric drone interface that allows pilots to arbitrarily position and rotate a drone using position-control direct interactions in see-through mobile AR, where the drone's position and direction are visualized with a virtual cast shadow (i.e., the drone's orthogonal projection onto the floor). Pilots can point to the next position or draw the drone's flight trajectory by manipulating the virtual cast shadow and the direction/height slider bar on the touchscreen. We design and implement a prototype of PinpointFly for indoor, visual-line-of-sight scenarios, comprising real-time and predefined motion-control techniques. We conduct two user studies with simple positioning and inspection tasks. Our results demonstrate that PinpointFly makes drone positioning and inspection operations faster, more accurate, and simpler, with a lower workload, than a conventional joystick interface with speed control.
2021 · Linfeng Chen et al. (Tohoku University) · CHI
Tags: AR Navigation & Context Awareness · Drone Interaction & Control
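The abstract defines the virtual cast shadow as the drone's orthogonal projection onto the floor, which makes the core geometry a one-liner; the z-up coordinate convention and the target-composition helper are my assumptions:

```python
def cast_shadow(drone_pos):
    """The virtual cast shadow is the drone's orthogonal projection onto the
    floor plane (z = 0, assuming z-up coordinates)."""
    x, y, _ = drone_pos
    return (x, y, 0.0)

def target_from_shadow(shadow_xy, height_slider_m):
    """Combine the dragged shadow position with the height-slider value to
    form the drone's next 3D target position."""
    x, y = shadow_xy
    return (x, y, height_slider_m)

# A drone at (2.0, 1.5, 1.2) casts its shadow at (2.0, 1.5, 0.0); dragging the
# shadow to (3.0, 1.5) with the slider at 1.0 m yields the next waypoint.
print(cast_shadow((2.0, 1.5, 1.2)))
print(target_from_shadow((3.0, 1.5), 1.0))
```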
TiltChair: Manipulative Posture Guidance by Actively Inclining the Seat of an Office Chair
We propose TiltChair, an actuated office chair that physically manipulates the user's posture by actively inclining the chair's seat to address problems associated with prolonged sitting. The system controls the inclination angle and motion speed with the aim of achieving manipulative yet unobtrusive posture guidance. To demonstrate its potential, we first built a prototype of TiltChair with a seat that can be tilted under pneumatic control. We then investigated the effects of the seat's inclination angle and motion on task performance and the overall sitting experience through two experiments. The results show that the inclination angle mainly affected the difficulty of maintaining one's posture, while the motion speed affected the conspicuousness and subjective acceptability of the motion; however, these seating conditions did not affect objective task performance. Based on these results, we propose a design space for facilitating effective seat-inclination behavior along the three dimensions of angle, speed, and continuity. Furthermore, we discuss promising applications.
2021 · Kazuyuki Fujita et al. (Tohoku University) · CHI
Tags: Force Feedback & Pseudo-Haptic Weight · Workplace Wellbeing & Work Stress
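The two manipulated variables, inclination angle and motion speed, suggest a simple rate-limited ramp toward a target angle. This control-tick sketch abstracts away the pneumatic actuation entirely and is not the authors' controller:

```python
def seat_angle_step(current_deg, target_deg, speed_deg_s, dt_s):
    """Advance the seat inclination one control tick toward the target at a
    fixed angular speed."""
    delta = target_deg - current_deg
    step = speed_deg_s * dt_s
    if abs(delta) <= step:
        return target_deg
    return current_deg + (step if delta > 0 else -step)

angle = 0.0
for _ in range(5):  # 0.5 s of control at 10 Hz, ramping toward 10 degrees
    angle = seat_angle_step(angle, 10.0, 2.0, 0.1)
print(round(angle, 2))  # 1.0 degree after 0.5 s at 2 deg/s
```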
ZoomWalls: Dynamic Walls that Simulate Haptic Infrastructure for Room-scale VR World
We focus on the problem of simulating the haptic infrastructure of a virtual environment (i.e., walls, doors). Our approach relies on multiple ZoomWalls, autonomous wall-shaped robotic props for encountered-type haptics, that coordinate to provide haptic feedback for room-scale virtual reality. Based on a user's movement through the physical space, ZoomWall props are coordinated through a predict-and-dispatch architecture to provide just-in-time haptic feedback for objects the user is about to touch. To refine our system, we conducted simulation studies of different prediction algorithms, which helped us refine our algorithmic approach and realize the physical ZoomWall prototype. Finally, we evaluated our system through a user experience study, which showed that participants found that ZoomWalls increased their sense of presence in the VR environment. ZoomWalls represents an instance of autonomous mobile reusable props, which we view as an important design direction for haptics in VR.
2020 · Yan Yixian et al. · UIST
Tags: Shape-Changing Interfaces & Soft Robotic Materials · Mixed Reality Workspaces
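The abstract names the predict-and-dispatch architecture but not a specific predictor; the sketch below pairs one plausible stand-in, linear extrapolation of the user's motion, with a nearest-idle-prop dispatch:

```python
import math

def predict_touch(user_pos, user_vel, walls, horizon_s=2.0):
    """Linearly extrapolate the user's motion and pick the wall segment
    closest to the predicted position."""
    px = user_pos[0] + user_vel[0] * horizon_s
    py = user_pos[1] + user_vel[1] * horizon_s
    return min(walls, key=lambda w: math.dist((px, py), w["center"]))

def dispatch(props, wall):
    """Send the nearest idle prop to the predicted wall just in time."""
    idle = [p for p in props if p["idle"]]
    if not idle:
        return None
    best = min(idle, key=lambda p: math.dist(p["pos"], wall["center"]))
    best["idle"] = False  # the prop is now committed to this wall
    return best

walls = [{"id": "north", "center": (0.0, 3.0)}, {"id": "east", "center": (3.0, 0.0)}]
props = [{"id": 1, "pos": (1.0, 1.0), "idle": True}]
wall = predict_touch((0.0, 0.0), (0.0, 0.8), walls)
print(wall["id"], dispatch(props, wall)["id"])  # north 1
```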
Third-Person Piloting: Increasing Situational Awareness using a Spatially Coupled Second Drone
We propose Third-Person Piloting, a novel drone manipulation interface that increases situational awareness using an interactive third-person perspective from a second, spatially coupled drone. The pilot uses a controller with a manipulatable miniature drone. Our algorithm tracks the relationship between the pilot's eye position and the miniature drone and ensures that the same spatial relationship is maintained between the two real drones in the sky. This allows the pilot to obtain various third-person perspectives by changing the orientation of the miniature drone while maintaining standard control of the primary drone with the conventional controller. We design and implement a working prototype with programmable drones and propose several representative operation scenarios. We gathered user feedback to obtain initial insights into our interface design from novices, advanced beginners, and experts. Results show that our interface was positively evaluated by all of them, and their feedback suggests that the additional interactive third-person perspective increases spatial awareness and helps with primary drone manipulation.
2019 · Ryotaro Temma et al. · UIST
Tags: Drone Interaction & Control · Teleoperation & Telepresence
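The spatial coupling can be read as: the (scaled) offset from the pilot's eye to the hand-held miniature is reproduced between the second drone and the primary drone, so the second drone views the primary the way the pilot views the miniature. A sketch under that reading, with the scale factor as an assumption:

```python
import numpy as np

def second_drone_pos(eye_pos, miniature_pos, primary_pos, scale=20.0):
    """Reproduce the pilot's eye -> miniature offset (scaled) as the second
    drone -> primary drone offset in the sky."""
    offset = np.asarray(miniature_pos, float) - np.asarray(eye_pos, float)
    return np.asarray(primary_pos, float) - scale * offset

# Miniature held 0.3 m in front of and 0.1 m below the eye; at scale 20 the
# second drone hovers 6 m away from and 2 m above the primary drone.
print(second_drone_pos((0, 0, 1.6), (0.3, 0, 1.5), (50, 20, 30)))  # [44. 20. 32.]
```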
Asian CHI Symposium: Emerging HCI Research Collection
This symposium showcases the latest work from Asia on interactive systems and user interfaces that address under-explored problems and demonstrate unique approaches. In addition to circulating ideas and sharing a vision of future research in human-computer interaction, this symposium aims to foster social networks among academics (researchers and students) and practitioners and to create a fresh research community in the Asian region.
2018 · Saki Sakaguchi et al. (The University of Tokyo) · CHI
Tags: Developing Countries & HCI for Development (HCI4D) · User Research Methods (Interviews, Surveys, Observation)