Exploring the Design of Collaborative Technological Systems to Assist Patients in Motivating Quitting Gambling with Family Members

Gambling addiction can have a profound impact on the mental and financial well-being of individuals and their families. This paper presents an in-depth study on the development of technologies aimed at promoting collaborative efforts between patients and family members to deal with gambling addiction. We interviewed ten pairs of gambling-addicted patients with a family member, six patients without family participation, and four treatment experts. Thematic analysis was conducted from the perspectives of patients and family members to identify key themes underlying the three antecedents of the Theory of Planned Behavior: attitude, subjective norms, and perceived behavioral control. Drawing on Prospect Theory, we sought to elucidate the process of attitude formation during editing and evaluation. We also conducted a thematic analysis of the opportunities and concerns in designing technologies aimed at overcoming gambling addiction. The identified themes provided a basis for assessing design implications from the perspective of the Theory of Planned Behavior. Our analysis revealed three directions for future development: 1) helping patients make informed gambling decisions through rational editing and evaluation, as suggested by Prospect Theory (attitude level); 2) promoting communication within families to enhance mutual understanding and trust (subjective norm level); and 3) helping patients develop personal capabilities while providing a realistic impression of their progress (perceived behavioral control level).

2025 · Chuang-Wen You et al. · CSCW · Facilitating Support and Belonging

GenTune: Toward Traceable Prompts to Improve Controllability of Image Refinement in Environment Design

Environment designers in the entertainment industry create imaginative 2D and 3D scenes for games, films, and television, requiring both fine-grained control of specific details and consistent global coherence. Designers have increasingly integrated generative AI into their workflows, often relying on large language models (LLMs) to expand user prompts for text-to-image generation, then iteratively refining those prompts and applying inpainting. However, our formative study with 10 designers surfaced two key challenges: (1) the lengthy LLM-generated prompts make it difficult to understand and isolate the keywords that must be revised for specific visual elements; and (2) while inpainting supports localized edits, it can struggle with global consistency and correctness. Based on these insights, we present GenTune, an approach that enhances human–AI collaboration by clarifying how AI-generated prompts map to image content. Our GenTune system lets designers select any element in a generated image, trace it back to the corresponding prompt labels, and revise those labels to guide precise yet globally consistent image refinement. In a summative study with 20 designers, GenTune significantly improved prompt-image comprehension, refinement quality and efficiency, and overall satisfaction (all p < .01) compared to current practice. A follow-up field study with two studios further demonstrated its effectiveness in real-world settings.

2025 · Wen-Fan Wang et al. · UIST · Generative AI (Text, Image, Music, Video); Human-LLM Collaboration; Prototyping & User Testing

Surrogate Avatar: Enhancing Situated Co-Presence and User Mobility in Symmetric Telepresence Conversations

We present Surrogate Avatar, an adaptive telepresence method that enhances user mobility and situated co-presence in symmetric avatar-mediated communication. The system enables a remote user’s avatar to autonomously position itself in socially and environmentally appropriate locations within the local user’s space—based on spatial affordances, interactional norms, and environmental constraints—supporting fluid interaction without requiring a shared environmental context. Through a formative study, we derived key adaptation objectives and implemented them using a distributed optimization framework based on the AUIT system. The framework distributes adaptation tasks across server and client to balance responsiveness and computational efficiency. A user study involving both stationary and nomadic scenarios demonstrated consistently high usability and presence, with some limitations observed under walking conditions. An additional exploratory field study in a semi-structured public setting demonstrated the system’s viability beyond controlled lab conditions. These findings motivate future designs of mobile telepresence systems that dynamically adapt to spatial and conversational context while mitigating misunderstandings that can arise from asymmetric environmental awareness and supporting privacy-sensitive interaction.

2025 · Sheng-Cian Lee et al. · MobileHCI · Teleoperation & Telepresence

AIdeation: Designing a Human-AI Collaborative Ideation System for Concept Designers

Concept designers in the entertainment industry create highly detailed, often imaginary environments for movies, games, and TV shows. Their early ideation phase requires intensive research, brainstorming, visual exploration, and combination of various design elements to form cohesive designs. However, existing AI tools focus on image generation from user specifications, lacking support for the unique needs and complexity of concept designers' workflows. Through a formative study with 12 professional designers, we captured their workflows and identified key requirements for AI-assisted ideation tools. Leveraging these insights, we developed AIdeation to support early ideation by brainstorming design concepts with flexible searching and recombination of reference images. A user study with 16 professional designers showed that AIdeation significantly enhanced creativity, ideation efficiency, and satisfaction (all p < .01) compared to current tools and workflows. A field study with 4 studios for 1 week provided insights into AIdeation's benefits and limitations in real-world projects. After the completion of the field study, two studios, covering films, television, and games, have continued to use AIdeation in their commercial projects to date, further validating AIdeation's improvement in ideation quality and efficiency.

2025 · Wen-Fan Wang et al. · National Taiwan University, Computer Science and Information Engineering · CHI · Human-LLM Collaboration; AI-Assisted Creative Writing

Blow Your Mind: Exploring the Effects of Scene-Switching and Visualization of Time Constraints on Brainstorming in Virtual Reality

Brainstorming, a creative activity that aims to generate ideas, plays a crucial part in problem-solving processes and is widely employed to explore innovative solutions across various contexts. However, it may suffer from repetitive results, inefficient time management, and stagnation in discussion. Given the rich opportunities that virtual reality (VR) offers for visualizing environments, we propose using (1) scene-switching and (2) creative visualization of time constraints to facilitate this process. The current study sought to explore the effects of these two factors on brainstorming performance in VR. By conducting a mixed-method study of 20 three-participant groups, we found that scene-switching and implicit time limitations can stimulate creativity, establish diverse atmospheres, and open up new conversations during brainstorming. Additionally, we provide suggestions for visualizing time constraints and various virtual environments based on our results.

2024 · Nanyi Bi et al. · CSCW · Session 3f: Embodiment and Experience: Social Behavior and Decision-Making in VR

TacNote: Tactile and Audio Note-Taking for Non-Visual Access

Blind and visually impaired (BVI) people primarily rely on non-visual senses to interact with a physical environment. Doing so requires a high cognitive load to perceive and memorize the presence of a large set of objects, such as at home or in a learning setting. In this work, we explored opportunities to enable object-centric note-taking by using a 3D printing pen for interactive, personalized tactile annotations. We first identified the benefits and challenges of self-created tactile graphics in a formative diary study. Then, we developed TacNote, a system that enables BVI users to annotate, explore, and memorize critical information associated with everyday objects. Using TacNote, the users create tactile graphics with a 3D printing pen and attach them to the target objects. They capture and organize the physical labels by using TacNote’s camera-based mobile app. In addition, they can specify locations, ordering, and hierarchy via finger-pointing interaction and receive audio feedback. Our user study with ten BVI participants showed that TacNote effectively alleviated the memory burden, offering a promising solution for enhancing users’ access to information.

2023 · Wan-Chen Lee et al. · UIST · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration); Motor Impairment Assistive Input Technologies

Understanding (Non-)Visual Needs for the Design of Laser-Cut Architecture

Laser-cutting is a promising fabrication method that empowers makers, including blind or visually-impaired (BVI) creators, to create technologies that fit their needs. Existing work on laser-cut accessibility has facilitated easier assembly as a workaround for existing models. However, laser-cut models are still not designed to accommodate the needs of BVI users. Integrating BVI needs can enrich the greater maker community by enabling cross-group discourse on laser-cut making. To investigate how laser-cut model design can be more accessible overall, we study laser-cut assembly as a process deeply intertwined with the fundamental design of laser-cut models. We present a study with seven sighted and seven BVI participants to compare their usage of laser-cut model affordances during assembly. Data for the BVI participants in this study originate from a previous work. We identify assembly cues common or unique to sighted and BVI users, and discuss implications to improve general accessibility in laser-cut design.

2023 · Ruei-Che Chang et al. · University of Michigan · CHI · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Universal & Inclusive Design; Laser Cutting & Digital Fabrication

NFCStack: Identifiable Physical Building Blocks that Support Concurrent Construction and Frictionless Interaction

In this paper, we propose NFCStack, a physical building block system that supports stacking and frictionless interaction and is based on near-field communication (NFC). The system consists of a portable station that can support and resolve the order of three types of passive identifiable stackables: bricks, boxes, and adapters. The bricks support stable and sturdy physical construction, whereas the boxes support frictionless tangible interactions. The adapters provide an interface between these two types of stackables and convert the top of a stack into a terminal for detecting interactions between NFC-tagged objects. In contrast to existing systems based on NFC or radio-frequency identification technologies, NFCStack is portable, supports simultaneous interactions, and resolves stacking and interaction events responsively, even when objects are not strictly aligned. Evaluation results indicate that the proposed system effectively supports 12 layers of rich-ID stacking with the three types of building blocks, even if every box is stacked with a 6-mm offset. The results also indicate possible generalized applications of the proposed system, including 2.5-dimensional construction. The interaction styles are described using several educational application examples, and the design implications of this research are explained.

2022 · Chi-Jung Lee et al. · UIST · Aging-Friendly Technology Design; Circuit Making & Hardware Prototyping

Daedalus in the Dark: Designing for Non-Visual Accessible Construction of Laser-Cut Architecture

Design tools and research for laser-cut architectures have been widely explored in the past decade. However, this discussion has mostly revolved around technical and structural design questions instead of another essential element of laser-cut models: assembly, a process that relies heavily on the visual affordances of components and is therefore less accessible to blind or low vision (BLV) people. To close this gap, we co-designed with 7 BLV people to examine their assembly experience with different laser-cut architectures. Distilled from their feedback, we proposed several design heuristics and guidelines for Daedalus, a generative design tool that can produce tactile aids for laser-cut assembly given a few high-level manual inputs. We validated the proposed aids in a user study with 8 new BLV participants. Our results revealed that BLV users can manage laser-cut assembly more efficiently with Daedalus. Building on this design iteration, we discuss implications for future research on accessible laser-cut assembly.

2021 · Ruei-Che Chang et al. · UIST · Visual Impairment Technologies (Screen Readers, Tactile Graphics, Braille); Deaf & Hard-of-Hearing Support (Captions, Sign Language, Vibration); Universal & Inclusive Design

HairTouch: Providing Stiffness, Roughness and Surface Height Differences Using Reconfigurable Brush Hairs on a VR Controller

Tactile feedback is widely used to enhance realism in virtual reality (VR). When touching virtual objects, stiffness and roughness are common and obvious factors perceived by the users. Furthermore, when touching a surface with a complicated structure, differences in not only stiffness and roughness but also surface height are crucial. To integrate these factors, we propose a pin-based handheld device, HairTouch, to provide stiffness differences, roughness differences, surface height differences and their combinations. HairTouch consists of two pins, one for each of the two finger segments closest to the index fingertip. By controlling the brush hairs' length and bending direction to change their elasticity and tip direction, each pin renders various levels of stiffness and roughness. By further independently controlling the hairs' configuration and the pins' height, versatile stiffness, roughness and surface height differences are achieved. We conducted a perception study to determine users' ability to distinguish stiffness and roughness on each of the segments. Based on the results, we performed a VR experience study to verify that the tactile feedback from HairTouch enhances VR realism.

2021 · Chi-Jung Lee et al. · National Taiwan University · CHI · Mid-Air Haptics (Ultrasonic); Haptic Wearables; Immersion & Presence Research

AccessibleCircuits: Adaptive Add-On Circuit Components for People with Blindness or Low Vision

In this paper, we propose designs for low-cost, 3D-printable add-on components that adapt existing breadboards, circuit components and electronics tools for blind or low-vision (BLV) users. Through an initial user study, we identified several barriers to entry for BLV beginners in electronics and circuit prototyping. These barriers guided the design and development of our add-on components. We focused on developing adaptations that provide additional information about specific component pins and breadboard holes, modify tools to make them easier for BLV users to use, and expand non-visual feedback (e.g., audio, tactile) for tasks that require vision. Through a second user study, we demonstrated that our adaptations can effectively overcome the accessibility barriers in breadboard circuit prototyping.

2021 · Ruei-Che Chang et al. · Dartmouth College · CHI · Motor Impairment Assistive Input Technologies; Circuit Making & Hardware Prototyping

GuideBand: Intuitive 3D Multilevel Force Guidance on a Wristband in Virtual Reality

Vibrotactile feedback is a commonly used mechanism for haptic guidance, but it requires users to interpret complicated patterns, especially in 3D guidance, which is not intuitive and increases mental effort. Furthermore, haptic guidance in virtual reality (VR) must consider not only guidance performance but also realism. Since vibrotactile feedback interferes with and reduces VR realism, it may not be suitable for VR haptic guidance. Therefore, we propose a wearable device, GuideBand, to provide intuitive 3D multilevel force guidance on the forearm, reproducing the effect of the forearm being pulled and guided by a virtual guide or telepresent person in VR. GuideBand uses three motors to pull a wristband at different force levels in 3D space. Such feedback usually requires much larger and heavier robotic arms or exoskeletons. We conducted a just-noticeable difference study to understand users' ability to distinguish force levels. Based on the results, we performed a study to verify that, compared with state-of-the-art vibrotactile guidance, GuideBand is more intuitive, requires less mental effort, and achieves similar guidance performance. We further conducted a VR experience study to observe how users combine and complement visual and force guidance, and to show that GuideBand enhances realism in VR guidance.

2021 · Hsin-Ruey Tsai et al. · National Chengchi University · CHI · Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input; Foot & Wrist Interaction

Combining Touchscreens with Passive Rich-ID Building Blocks to Support Context Construction in Touchscreen Interactions

This research investigates the design space of combining touchscreens with passive rich-ID building block systems to support the physical construction of contexts in touchscreen interactions. With two proof-of-concept systems, RFIPillars and RFITiles, we explore various schemes for using tangible inputs for context enrichment in touchscreen interactions. Instead of incorporating an electronic touchscreen module that requires per-module maintenance, this work intentionally makes each tangible object passive. We explore rear-projection solutions to integrate touchscreen interactions into these passive building blocks with capacitive touch sensing techniques and deliberate physical forgiveness to retain the merits of being both batteryless and wireless. The presented research artifacts embody the interaction designs and elucidate scalability challenges in integrating touchscreen interactions into this emerging tangible user interface.

2021 · Ken Pfeuffer et al. · Communication and Multimedia Lab · CHI · Circuit Making & Hardware Prototyping

Glissade: Generating Balance Shifting Feedback to Facilitate Auxiliary Digital Pen Input

This paper introduces Glissade, a digital pen that generates balance shifting feedback by changing the weight distribution of the pen. A pulley system shifts a brass mass inside the pen to change the pen's center of mass and moment of inertia. When the mass is stationary, the pen delivers a constant yet natural sensation of weight, which can be used to convey a status. The pen can also generate a variety of haptic cues by actuating the mass according to the tilt or rotation of the pen, two commonly used auxiliary pen input channels. Glissade demonstrates new possibilities that balance shifting feedback can bring to digital pen interactions. We validated the usability of this feedback by determining the recognizability of six balance patterns, a mix of static and dynamic patterns chosen based on our design considerations, in two controlled experiments. The results show that, on average, the participants could distinguish between the patterns with 94.25% accuracy. Finally, we demonstrate a set of novel interactions enabled by Glissade and discuss directions for future research.

2020 · Da-Yuan Huang et al. · National Chiao Tung University · CHI · Force Feedback & Pseudo-Haptic Weight; Full-Body Interaction & Embodied Input

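The balance-shifting principle Glissade describes (a pulley moving an internal mass to relocate the pen's center of mass) reduces to a one-line moment balance. The sketch below solves for the slider position that places the combined center of mass at a target point; the masses, lengths, and travel range are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of balance-shifting: solve for where an internal
# sliding mass must sit so a pen's overall center of mass (COM) lands at a
# desired point along its axis. All numbers are illustrative assumptions.

def mass_position(target_com, body_mass, body_com, slider_mass,
                  travel=(0.0, 0.14)):
    """Slider position (meters from the pen tip) so the combined COM equals
    target_com.

    Combined COM: x_c = (M*x_p + m*x_m) / (M + m), solved for x_m.
    The result is clamped to the slider's physical travel range.
    """
    x_m = ((body_mass + slider_mass) * target_com
           - body_mass * body_com) / slider_mass
    lo, hi = travel
    return min(max(x_m, lo), hi)

# Pen body: 20 g with COM 7 cm from the tip; slider: 10 g.
# Shifting the combined COM to 8 cm requires the slider at 10 cm.
print(mass_position(0.08, 0.020, 0.07, 0.010))  # ≈ 0.10
```

An unreachable target (e.g., a COM far beyond the pen body) simply pins the slider at the end of its travel, which matches the physical limit of any such mechanism.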
Gaiters: Exploring Skin Stretch Feedback on Legs for Enhancing Virtual Reality Experiences

We propose generating two-dimensional skin stretch feedback on the user's legs. Skin stretch is useful cutaneous feedback for inducing the perception of virtual textures and illusory forces and for delivering directional cues. This feedback has been applied to the head, body, and upper limbs to simulate rich physical properties in virtual reality (VR). However, how to extend the benefits of skin stretch feedback to the lower limbs remains to be explored. Our first two psychophysical studies examined the minimum changes in skin stretch distance and stretch angle that are perceivable by participants. We then designed and implemented Gaiters, a pair of ungrounded, leg-worn devices, each of which is able to generate multiple two-dimensional skin stretches on the skin of the user's leg. With Gaiters, we conducted an exploratory study to understand participants' experiences when coupling skin stretch patterns with various lower limb actions. The results indicate that rich haptic experiences can be created by our prototype. Finally, a user evaluation indicates that participants enjoyed the experiences when using Gaiters and considered skin stretch as compelling haptic feedback on the legs.

2020 · Chi Wang et al. · National Taiwan University & National Chiao Tung University · CHI · Vibrotactile Feedback & Skin Stimulation; Haptic Wearables

ElastOscillation: 3D Multilevel Force Feedback for Damped Oscillation on VR Controllers

Force feedback from damped oscillation is a common effect in our daily lives, especially when shaking an elastic object, an object with items hanging from or contained in it, or a container of liquid, e.g., casting with a fishing pole or swirling wine. Such a force, affected by complex physical variations and collisions, is difficult to properly simulate using current force feedback methods. Therefore, we propose ElastOscillation, a virtual reality (VR) controller that provides 3D multilevel force feedback for damped oscillation to enhance VR experiences. ElastOscillation consists of a proxy, six elastic bands and DC motors. It leverages the motors to control the bands' elasticity to restrain the movement of the proxy, which is connected with the bands. Therefore, when users shake the ElastOscillation device, the proxy shakes or moves within a corresponding range of movement, and users perceive the oscillation force at different levels. In addition, elastic force from the bands further reinforces the oscillation force feedback. We conducted a force perception study to understand users' ability to distinguish oscillation forces in 1D and 2D movement, respectively. Based on the results, we performed a VR experience study to show that the force feedback provided by ElastOscillation enhances VR realism.

2020 · Hsin-Ruey Tsai et al. · National Chengchi University · CHI · Force Feedback & Pseudo-Haptic Weight

Aarnio: Passive Kinesthetic Force Output for Foreground Interactions on an Interactive Chair

We propose a new type of haptic output for foreground interactions on an interactive chair, where input is carried out explicitly in the foreground of the user's consciousness. This type of force output restricts a user's motion by modulating the resistive force when rotating the seat, tilting the backrest, or rolling the chair. These interactions are useful for many applications in a ubiquitous computing environment, ranging from immersive VR games to rapid and private query of information for people who are occupied with other tasks (e.g., in a meeting). We carefully designed and implemented our proposed haptic force output on a standard office chair and determined the recognizability of five force profiles for rotating, tilting, and rolling the chair. We present the results of our studies, as well as a set of novel interaction techniques enabled by this new force output for chairs.

2019 · Shan-Yuan Teng et al. · National Taiwan University · CHI · Force Feedback & Pseudo-Haptic Weight; Immersion & Presence Research; Ubiquitous Computing

TilePoP: Tile-type Pop-up Prop for Virtual Reality

We present TilePoP, a new type of pneumatically-actuated interface deployed as floor tiles which dynamically pop up into large shapes to construct proxy objects for whole-body interactions in Virtual Reality. TilePoP consists of a 2D array of stacked cube-shaped airbags designed with specific folding structures, enabling each airbag to be inflated into a physical proxy and deflated back to a tile when not in use. TilePoP is capable of providing haptic feedback for the whole body and can even support human body weight, affording new interaction possibilities in VR. We describe the design and implementation in detail. Finally, we demonstrate applications and report a preliminary user evaluation conducted to understand the experience of using TilePoP.

2019 · Shan-Yuan Teng et al. · UIST · Shape-Changing Interfaces & Soft Robotic Materials; Full-Body Interaction & Embodied Input

ElastImpact: 2.5D Multilevel Instant Impact Using Elasticity on Head-Mounted Displays

Impact is a common effect in both daily life and virtual reality (VR) experiences, e.g., being punched, hit or bumped. Impact force is produced instantly, which distinguishes it from other force feedback, e.g., push and pull. We propose ElastImpact to provide 2.5D instant impact on a head-mounted display (HMD) for realistic and versatile VR experiences. ElastImpact consists of three impact devices. Each device uses a DC motor to extend an elastic band and block it with a mechanical brake to store the impact power. When the brake is released, the device delivers impact instantly. Two impact devices are affixed on both sides of the head and connected with the HMD to provide impact in the normal direction toward the face (i.e., 0.5D in the z-axis). The third device is connected with a proxy collider and barrel in front of the HMD and rotated by a motor in the tangential plane of the face to provide 2D impact (i.e., the xy-plane). Through a just-noticeable difference (JND) study, we determined users' ability to distinguish impact forces on the head in the normal direction and the tangential plane separately. Based on the results, we combined normal and tangential impact into 2.5D impact and performed a VR experience study to verify that the 2.5D impact from ElastImpact significantly enhances realism.

2019 · Hsin-Ruey Tsai et al. · UIST · Force Feedback & Pseudo-Haptic Weight; Shape-Changing Interfaces & Soft Robotic Materials

AutoFritz: Autocomplete for Prototyping Virtual Breadboard Circuits

We propose autocomplete for the design and development of virtual breadboard circuits using software prototyping tools. With our system, when a user inserts a component into the virtual breadboard, it automatically provides a list of suggested components. These suggestions complete or extend the electronic functionality of the inserted component to save the user's time and reduce circuit errors. To demonstrate the effectiveness of autocomplete, we implemented our system on Fritzing, a popular open-source breadboard circuit prototyping software used by novice makers. Our autocomplete suggestions were implemented based upon schematics from datasheets for standard components, as well as how components are used together in over 4000 circuit projects from the Fritzing community. We report the results of a controlled study with 16 participants evaluating the effectiveness of autocomplete in the creation of virtual breadboard circuits, and conclude by sharing insights and directions for future research.

2019 · Jo-Yu Lo et al. · National Chiao Tung University · CHI · Circuit Making & Hardware Prototyping
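The "how components are used together" signal that AutoFritz mines from community projects can be illustrated with a simple co-occurrence count: components that frequently appear in the same project as the one just placed are ranked as suggestions. This is a minimal sketch of that general idea under assumed toy data, not the authors' actual algorithm (which also incorporates datasheet schematics).

```python
from collections import Counter
from itertools import combinations

# Minimal sketch: rank suggestion candidates by how often they co-occur
# with the placed component across a corpus of projects. Toy data below is
# illustrative, not from the Fritzing community corpus.

def build_cooccurrence(projects):
    """Count how often each unordered pair of components shares a project."""
    counts = Counter()
    for components in projects:
        for a, b in combinations(sorted(set(components)), 2):
            counts[(a, b)] += 1
    return counts

def suggest(counts, placed, k=3):
    """Return the top-k components most often used alongside `placed`."""
    scores = Counter()
    for (a, b), n in counts.items():
        if a == placed:
            scores[b] += n
        elif b == placed:
            scores[a] += n
    return [name for name, _ in scores.most_common(k)]

projects = [
    ["LED", "resistor", "button"],
    ["LED", "resistor"],
    ["buzzer", "resistor", "transistor"],
]
counts = build_cooccurrence(projects)
print(suggest(counts, "LED"))  # "resistor" co-occurs twice, "button" once
```

A real system would add smarter scoring (e.g., weighting by functional compatibility from datasheets, as the paper describes), but the ranking backbone stays the same: count, score, take the top k.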