From Sci-Fi Imagination to Everyday Interaction: A Narrative Framework for the Self-Awakening Journey of a Smart Lamp
In recent years, digital technologies have become increasingly autonomous, offering "mind-like" experiences across a range of intelligent objects, including smart home devices, social robots, and voice assistants. Drawing inspiration from the classic "mind awakening" narratives of intelligent things in science fiction, this study employs design fiction to integrate such storylines into everyday contexts. We present EvoLumen, a conceptual lamp designed to explore the emergent self-awareness of a thing. The lamp was deployed in the homes of five participants for one week, generating daily first-person narratives that sequentially covered environmental perception, emotional emulation, dream states, self-reflection, and farewell. Analysis of participant feedback and observations revealed the influence of detection accuracy, emotional triggers, and science fiction elements on perceptions of the lamp's self-awareness. Additionally, we emphasize the pivotal role of time in shaping the agency of things and propose a "narrative framework" to guide the development of more immersive and experiential digital companions.
2025 · Bowen Kong et al. · Smart Home Interaction Design · Technology Ethics & Critical HCI · Design Fiction · DIS
Laser-Powered Vibrotactile Rendering
Su et al. propose a laser-based haptic rendering technique that uses lasers to elicit thermo-vibrotactile sensations on the skin surface, enabling contactless haptic feedback and offering a new interaction approach for VR/AR.
2024 · Yuning Su et al. · Mid-Air Haptics (Ultrasonic) · UbiComp
DrivingVibe: Enhancing VR Driving Experience using Inertia-based Vibrotactile Feedback around the Head
We present DrivingVibe, which explores vibrotactile feedback designs around the head to enhance VR driving motion experiences. We propose two approaches that use a 360-degree vibrotactile headband: 1) mirroring and 2) 3D inertia-based. The mirroring approach extends the vibrotactile patterns of handheld controllers to actuate the entire headband uniformly. The 3D inertia-based approach uses the acceleration telemetry data that driving games/simulators export to motion platforms to generate directional vibration patterns, including: i) centrifugal forces, ii) horizontal acceleration/deceleration, and iii) vertical motion due to rough terrain. The two approaches are complementary: the mirroring approach supports all driving games because it does not require telemetry data, while the 3D inertia-based approach provides higher feedback fidelity for games that provide such data. We conducted a 24-person user experience evaluation in both passive passenger mode and active driving mode. Study results showed that both DrivingVibe designs significantly improved realism, immersion, and enjoyment (p<.01) with large effect sizes for the VR driving experiences. For overall preference, 88% (21/24) of participants preferred DrivingVibe, with a 2:1 preference for 3D inertia-based vs. mirroring designs (14 vs. 7 participants). For immersion and enjoyment, 96% (23/24) of participants preferred DrivingVibe, with nearly a 3:1 preference (17 vs. 6 participants) for the 3D inertia-based design.
2023 · Neng-Hao Yu et al. · Head-Up Display (HUD) & Advanced Driver Assistance Systems (ADAS) · In-Vehicle Haptic, Audio & Multimodal Feedback · MobileHCI
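The inertia-based approach above maps an acceleration vector onto directional vibration around a ring of motors. A minimal sketch of one plausible mapping (the function name, motor count, and cosine falloff are illustrative assumptions, not the paper's implementation):

```python
import math

def ring_intensities(ax, ay, num_motors=8, max_g=1.0):
    """Map a horizontal acceleration vector (in g) to per-motor
    vibration intensities (0..1) on a circular headband.
    Motors facing the direction of the force vibrate hardest."""
    mag = math.hypot(ax, ay)
    if mag == 0:
        return [0.0] * num_motors
    heading = math.atan2(ay, ax)
    gain = min(mag / max_g, 1.0)  # clamp: full intensity at max_g and above
    out = []
    for i in range(num_motors):
        motor_angle = 2 * math.pi * i / num_motors
        # cosine falloff: only motors within 90 degrees of the force actuate
        alignment = max(0.0, math.cos(motor_angle - heading))
        out.append(gain * alignment)
    return out
```

Braking, for example, produces an acceleration vector pointing forward, so the motors at the front of the headband receive the strongest drive while the rear motors stay off.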
Designing with AI: An Exploration of Co-Ideation with Image Generators
This pictorial showcases the integration of AI into the design process through the use of image generators. Following a Research Through Design (RTD) approach, we recruited 6 designers to create design proposals with AI. They were invited to collect case studies, make annotated portfolios, collaborate with the generator for design ideation, and undergo semi-structured interviews. Through this process, we gathered empirical data to investigate how designers ideate with AI and what opportunities and challenges emerge in the collaboration. The results showed that the combination of human and AI input in the design process contributes to a fresh form of self-expression and communication. AI was found to bring a distinct perspective that opens up new avenues for artistic expression. The research highlights the potential of AI to augment human creativity and the need for ongoing exploration in this field.
2023 · Li-Yuan Chiou et al. · Generative AI (Text, Image, Music, Video) · Graphic Design & Typography Tools · Creative Collaboration & Feedback Systems · DIS
HeadWind: Enhancing Teleportation Experience in VR by Simulating Air Drag during Rapid Motion
Teleportation, which instantly moves users from their current location to the target location, has become the most popular locomotion technique in VR games. It enables fast navigation with reduced VR sickness but results in significantly reduced immersion. We present HeadWind, a novel approach to improve the experience of teleportation by simulating the haptic sensation of air drag when rapidly moving through the air in real life. Specifically, HeadWind modulates bursts of compressed air to the face and uses multiple nozzles to provide directional cues. To design the wearable device and to model airflow speed and duration for teleportation, we conducted three formative studies and a design session. User experience evaluation with 24 participants showed that HeadWind significantly improved realism, immersion, and enjoyment of teleportation in VR (p<.01) with large effect sizes (r>0.5), and was preferred by 96% of participants.
2022 · Chun-Miao Tseng et al. · National Taiwan University · Mid-Air Haptics (Ultrasonic) · Full-Body Interaction & Embodied Input · CHI
WalkingVibe: Reducing Virtual Reality Sickness and Improving Realism while Walking in VR using Unobtrusive Head-mounted Vibrotactile Feedback
Virtual Reality (VR) sickness is common with symptoms such as headaches, nausea, and disorientation, and is a major barrier to using VR. We propose WalkingVibe, which applies unobtrusive vibrotactile feedback for VR walking experiences, reducing VR sickness and discomfort while improving realism. Feedback is delivered through two small vibration motors behind the ears at a frequency that strikes a balance in inducing vestibular response while minimizing annoyance. We conducted a 240-person study to explore how visual, audio, and various tactile feedback designs affect the locomotion experience of users walking passively in VR while seated statically in reality. Results showed timing and location for tactile feedback have significant effects on VR sickness and realism. With WalkingVibe, the 2-sided step-synchronized design significantly reduces VR sickness and discomfort while significantly improving realism. Furthermore, its unobtrusiveness and ease of integration make WalkingVibe a practical approach for improving VR experiences with new and existing VR headsets.
2020 · Yi-Hao Peng et al. · National Taiwan University · Vibrotactile Feedback & Skin Stimulation · Immersion & Presence Research · CHI
Aarnio: Passive Kinesthetic Force Output for Foreground Interactions on an Interactive Chair
We propose a new type of haptic output for foreground interactions on an interactive chair, where input is carried out explicitly in the foreground of the user's consciousness. This type of force output restricts a user's motion by modulating the resistive force when rotating a seat, tilting the backrest, or rolling the chair. These interactions are useful for many applications in a ubiquitous computing environment, ranging from immersive VR games to rapid and private query of information for people who are occupied with other tasks (e.g. in a meeting). We carefully designed and implemented our proposed haptic force output on a standard office chair and determined the recognizability of five force profiles for rotating, tilting, and rolling the chair. We present the results of our studies, as well as a set of novel interaction techniques enabled by this new force output for chairs.
2019 · Shan-Yuan Teng et al. · National Taiwan University · Force Feedback & Pseudo-Haptic Weight · Immersion & Presence Research · Ubiquitous Computing · CHI
TilePoP: Tile-type Pop-up Prop for Virtual Reality
We present TilePoP, a new type of pneumatically-actuated interface deployed as floor tiles which dynamically pop up into large shapes to construct proxy objects for whole-body interactions in Virtual Reality. TilePoP consists of a 2D array of stacked cube-shaped airbags designed with specific folding structures, enabling each airbag to be inflated into a physical proxy and deflated back to a tile when not in use. TilePoP is capable of providing haptic feedback for the whole body and can even support human body weight. Thus it affords new interaction possibilities in VR. We describe the design and implementation in detail. Finally, we demonstrate its applications and report a preliminary user evaluation conducted to understand the experience of using TilePoP.
2019 · Shan-Yuan Teng et al. · Shape-Changing Interfaces & Soft Robotic Materials · Full-Body Interaction & Embodied Input · UIST
AutoFritz: Autocomplete for Prototyping Virtual Breadboard Circuits
We propose autocomplete for the design and development of virtual breadboard circuits using software prototyping tools. With our system, when a user inserts a component into the virtual breadboard, it automatically provides a list of suggested components. These suggestions complete or extend the electronic functionality of the inserted component to save the user's time and reduce circuit errors. To demonstrate the effectiveness of autocomplete, we implemented our system on Fritzing, a popular open-source breadboard circuit prototyping software used by novice makers. Our autocomplete suggestions were implemented based upon schematics from datasheets for standard components, as well as how components are used together in over 4000 circuit projects from the Fritzing community. We report the results of a controlled study with 16 participants, evaluating the effectiveness of autocomplete in the creation of virtual breadboard circuits, and conclude by sharing insights and directions for future research.
2019 · Jo-Yu Lo et al. · National Chiao Tung University · Circuit Making & Hardware Prototyping · CHI
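The community-driven half of the suggestion source described above amounts to ranking components by co-occurrence frequency. A minimal sketch of that idea (the component names, data layout, and `suggest` function are illustrative assumptions, not Fritzing's dataset or the paper's actual model):

```python
from collections import Counter

# Hypothetical corpus: each project is the set of components it uses.
projects = [
    {"ATmega328", "16MHz crystal", "100nF capacitor", "LED", "220R resistor"},
    {"ATmega328", "16MHz crystal", "100nF capacitor", "push button"},
    {"LED", "220R resistor", "9V battery"},
]

def suggest(inserted, projects, top_k=3):
    """Rank components by how often they co-occur with the
    inserted component across existing circuit projects."""
    counts = Counter()
    for parts in projects:
        if inserted in parts:
            counts.update(parts - {inserted})
    return [part for part, _ in counts.most_common(top_k)]
```

Inserting an `ATmega328` would then surface the crystal and decoupling capacitor it is almost always wired with, which is the kind of completion the paper reports deriving from community projects.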
Masque: Exploring Lateral Skin Stretch Feedback on the Face with Head-Mounted Displays
We propose integrating an array of skin stretch modules with a head-mounted display (HMD) to provide two-dimensional skin stretch feedback on the user's face. Skin stretch has been found effective in inducing the perception of force (e.g. weight or inertia) and in enabling directional haptic cues. However, its potential as an HMD output for virtual reality (VR) remains to be explored. Our exploratory study first investigated the design of shear tactors. Based on our results, Masque was implemented as an HMD prototype actuating six shear tactors positioned on the HMD's face interface. A comfort study was conducted to ensure that skin stretches generated by Masque are acceptable to all participants. The following two perception-based studies examined the minimum changes in skin stretch distance and stretch angle that are detectable by participants. The results helped us design haptic profiles as well as our prototype applications. Finally, the user evaluation indicates that participants welcomed Masque and regarded skin stretch feedback as a worthwhile addition to HMD output.
2019 · Chi Wang et al. · Mid-Air Haptics (Ultrasonic) · Force Feedback & Pseudo-Haptic Weight · Shape-Changing Interfaces & Soft Robotic Materials · UIST
RFIBricks: Interactive Building Blocks Based on RFID
We present RFIBricks, an interactive building block system based on ultrahigh frequency radio-frequency identification (RFID) sensing. The system enables geometry resolution based on a simple yet highly generalizable mechanism: an RFID contact switch, which is made by cutting each RFID tag into two parts, namely antenna and chip. A magnetic connector is then coupled with each part. When the antenna and chip connect, an interaction event with an ID is transmitted to the reader. On the basis of our design of RFID contact switch patterns, we present a system of interactive physical building blocks that resolves the stacking order and orientation when one block is stacked upon another, determines a three-dimensional (3D) geometry built on a two-dimensional base plate, and detects user inputs by incorporating electromechanical sensors. Because it is calibration-free and does not require batteries in each block, it facilitates straightforward maintenance when deployed at scale. Compared with other approaches, this RFID-based system resolves several critical challenges in human-computer interaction, such as 1) determining the identity and the built 3D geometry of passive building blocks, 2) enabling stackable token+constraint interaction on a tabletop, and 3) tracking in-hand assembly.
2018 · Meng-Ju Hsieh et al. · National Taiwan University · Desktop 3D Printing & Personal Fabrication · Circuit Making & Hardware Prototyping · CHI
PuPoP: Pop-up Prop on Palm for Virtual Reality
The sensation of being able to feel the shape of an object when grasping it in Virtual Reality (VR) enhances a sense of presence and the ease of object manipulation. Though most prior works focus on force feedback on fingers, the haptic emulation of grasping a 3D shape requires the sensation of touch using the entire hand. Hence, we present Pop-up Prop on Palm (PuPoP), a light-weight pneumatic shape-proxy interface worn on the palm that pops up airbags with predefined primitive shapes for grasping. When a user's hand encounters a virtual object, an airbag of appropriate shape, ready for grasping, is inflated by air pumps; the airbag then deflates when the object is no longer in play. Since PuPoP is a physical prop, it can provide the full sensation of touch to enhance the sense of realism for VR object manipulation. For this paper, we first explored the design and implementation of PuPoP with multiple shape structures. We then conducted two user studies to further understand its applicability. The first study shows that, when in conflict, visual sensation tends to dominate over touch sensation, allowing a prop with a fixed size to represent multiple virtual objects with similar sizes. The second study compares PuPoP with controllers and free-hand manipulation in two VR applications. The results suggest that utilization of dynamically-changing PuPoP, when grasped by users in line with the shapes of virtual objects, enhances enjoyment and realism. We believe that PuPoP is a simple yet effective way to convey haptic shapes in VR.
2018 · Shan-Yuan Teng et al. · Shape-Changing Interfaces & Soft Robotic Materials · Immersion & Presence Research · UIST
RollingStone: Using Single Slip Taxel for Enhancing Active Finger Exploration with a Virtual Reality Controller
We propose using a single slip tactile pixel on virtual reality controllers to produce sensations of finger sliding and textures. When a user moves the controller on a virtual surface, we add a slip opposite to the movement, creating an illusion of a finger sliding on the surface, while varying the slip feedback changes lateral forces on the fingertip. When coupled with hand motion, the lateral forces can be used to create perceptions of artificial textures. RollingStone has been implemented as a prototype VR controller consisting of a ball-based slip display positioned under the user's fingertip. Within the slip display, a pair of motors actuates the ball, which is capable of generating both short- and long-term two-degree-of-freedom slip feedback. An exploratory study was conducted to ensure that changing the relative motion between the finger and the ball could alter the perceptions conveying the properties of a texture. The following two perception-based studies examined the minimum changes in speed of slip and angle of slip that are detectable by users. The results helped us design haptic patterns as well as our prototype applications. Finally, our preliminary user evaluation indicated that participants welcomed RollingStone as a useful addition to the range of VR controllers.
2018 · Jo-Yu Lo et al. · Mid-Air Haptics (Ultrasonic) · Force Feedback & Pseudo-Haptic Weight · Haptic Wearables · UIST
Jetto: Using Lateral Force Feedback for Smartwatch Interactions
Interacting with media and games is a challenging user experience on smartwatches due to their small screens. We propose using lateral force feedback to enhance these experiences. When virtual objects on the smartwatch display visually collide or push the edge of the screen, we add haptic feedback so that the user also feels the impact. This addition creates the illusion of a virtual object that is physically hitting or pushing the smartwatch, from within the device itself. Using this approach, we extend virtual space and scenes into a 2D physical space. To create realistic lateral force feedback, we first examined the minimum change in force magnitude that is detectable by users in different directions and weight levels, finding an average JND of 49% across all tested conditions, with no significant effect of weight and force direction. We then developed a proof-of-concept hardware prototype called Jetto and demonstrated its unique capabilities through a set of impact-enhanced videos and games. Our preliminary user evaluations indicated the concept was welcomed and is regarded as a worthwhile addition to smartwatch output and media experiences.
2018 · Jun Gong et al. · Dartmouth College · Force Feedback & Pseudo-Haptic Weight · Smartwatches & Fitness Bands · CHI
SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations
Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speech-bubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display. From our evaluations, we further demonstrated that DHH participants preferred our prototype over traditional captions for group conversations.
2018 · Yi-Hao Peng et al. · National Taiwan University · AR Navigation & Context Awareness · Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration) · CHI