Designing Co-Creative AI for Virtual Environments

Co-creative AI tools provide a method of creative collaboration between a user and a machine. One form of co-creative AI, generative design, requires the user to input design parameters and then wait substantial periods of time while the system computes design solutions. We explore this interaction dynamic by providing an embodied experience in virtual reality (VR). Calliope is a VR system that enables users to explore and manipulate generative design solutions in real time. Calliope accounts for the typical idle times in the generative design process by using a virtual environment to encourage parallelized and embodied data exploration and synthesis, while maintaining a tight human-in-the-loop collaboration with the underlying algorithms. In this paper we discuss design considerations informed by formative studies with generative designers and artists, and provide design guidelines to aid others in the development of co-creative AI systems in virtual environments.

2021 · Josh Urban Davis et al. · C&C · Topics: Generative AI (Text, Image, Music, Video); Creative Collaboration & Feedback Systems
Think-Aloud Computing: Supporting Rich and Low-Effort Knowledge Capture

When users complete tasks on the computer, the knowledge they leverage and their intent are often lost because they are tedious or challenging to capture. This makes it harder to understand why a colleague designed a component a certain way or to remember requirements for software you wrote a year ago. We introduce think-aloud computing, a novel application of the think-aloud protocol where computer users are encouraged to speak while working to capture rich knowledge with relatively low effort. Through a formative study we find people shared information about design intent, work processes, problems encountered, to-do items, and other useful information. We developed a prototype that supports think-aloud computing by prompting users to speak and contextualizing speech with labels and application context. Our evaluation shows more subtle design decisions and process explanations were captured in think-aloud than via traditional documentation. Participants reported that think-aloud required similar effort as traditional documentation.

2021 · Rebecca Krosnick et al. · University of Michigan · CHI · Topics: Knowledge Worker Tools & Workflows; Prototyping & User Testing
MicroMentor: Peer-to-Peer Software Help Sessions in Three Minutes or Less

While synchronous one-on-one help for software learning is rich and valuable, it can be difficult to find and connect with someone who can provide assistance. Through a formative user study, we explore the idea of fixed-duration, one-on-one help sessions and find that 3 minutes is often enough time for novice users to explain their problem and receive meaningful help from an expert. To facilitate this type of interaction, we developed MicroMentor, an on-demand help system that connects users via video chat for 3-minute help sessions. MicroMentor automatically attaches relevant supplementary materials and uses contextual information, such as command history and expertise, to encourage the most qualified users to accept incoming requests. These help sessions are recorded and archived, building a bank of knowledge that can further help a broader audience. Through a user study, we find MicroMentor to be useful and successful in connecting users for short teaching moments.

2020 · Nikhita Joshi et al. · Autodesk Research & University of Waterloo · CHI · Topics: Collaborative Learning & Peer Teaching; Knowledge Worker Tools & Workflows
Instrumenting and Analyzing Fabrication Activities, Users, and Expertise

The recent proliferation of fabrication and making activities has introduced a large number of users to a variety of tools and equipment. Monitored, reactive and adaptive fabrication spaces are needed to provide personalized information, feedback and assistance to users. This paper explores the sensorization of making and fabrication activities, where the environment, tools, and users were considered to be separate entities that could be instrumented for data collection. From this exploration, we present the design of a modular system that can capture data from the varied sensors and infer contextual information. Using this system, we collected data from fourteen participants with varying levels of expertise as they performed seven representative making tasks. From the collected data, we predict which activities are being performed, which users are performing the activities, and what expertise the users have. We present several use cases of this contextual information for future interactive fabrication spaces.

2019 · Jun Gong et al. · Autodesk Research & Dartmouth College · CHI · Topics: Desktop 3D Printing & Personal Fabrication; Circuit Making & Hardware Prototyping; Computational Methods in HCI
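Predicting an activity from sensor features, as this abstract describes, can be illustrated with a toy nearest-neighbor classifier. This is only a sketch under assumed two-dimensional features (hypothetical accelerometer-variance and grip-force values), not the paper's actual model or feature set:

```python
def nearest_neighbor(sample, labeled):
    """Classify a feature vector by its closest labeled example
    (squared Euclidean distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(labeled, key=lambda lv: dist(sample, lv[1]))[0]

# Hypothetical labeled examples: (activity, feature vector).
labeled = [("sanding", (0.9, 0.2)), ("drilling", (0.1, 0.8))]

print(nearest_neighbor((0.8, 0.3), labeled))  # prints "sanding"
```

The same scheme extends to predicting users or expertise levels by swapping the labels attached to the training examples.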
Geppetto: Enabling Semantic Design of Expressive Robot Behaviors

Expressive robots are useful in many contexts, from industrial to entertainment applications. However, designing expressive robot behaviors requires editing a large number of unintuitive control parameters. We present an interactive, data-driven system that allows editing of these complex parameters in a semantic space. Our system combines a physics-based simulation that captures the robot's motion capabilities, and a crowd-powered framework that extracts relationships between the robot's motion parameters and the desired semantic behavior. These relationships enable mixed-initiative exploration of possible robot motions. We specifically demonstrate our system in the context of designing emotionally expressive behaviors. A user study found the system useful for developing desirable robot behaviors more quickly than manual parameter editing.

2019 · Ruta Desai et al. · Carnegie Mellon University · CHI · Topics: Social Robot Interaction; Human-Robot Collaboration (HRC)
HydroRing: Supporting Mixed Reality Haptics Using Liquid Flow

Current haptic devices are often bulky and rigid, making them unsuitable for ubiquitous interaction and scenarios where the user must also interact with the real world. To address this gap, we propose HydroRing, an unobtrusive, finger-worn device that can provide the tactile sensations of pressure, vibration, and temperature on the fingertip, enabling mixed-reality haptic interactions. Unlike previous explorations, HydroRing in active mode delivers sensations using liquid travelling through a thin, flexible latex tube worn across the fingerpad, and has minimal impact on a user’s dexterity and their perception of stimuli in passive mode. Two studies evaluated participants’ ability to perceive and recognize sensations generated by the device, as well as their ability to perceive physical stimuli while wearing the device. We conclude by exploring several applications leveraging this mixed-reality haptics approach.

2018 · Teng Han et al. · UIST · Topics: In-Vehicle Haptic, Audio & Multimodal Feedback; Haptic Wearables
Leveraging Community-Generated Videos and Command Logs to Classify and Recommend Software Workflows

Users of complex software applications often rely on inefficient or suboptimal workflows because they are not aware that better methods exist. In this paper, we develop and validate a hierarchical approach combining topic modeling and frequent pattern mining to classify the workflows offered by an application, based on a corpus of community-generated videos and command logs. We then propose and evaluate a design space of four different workflow recommender algorithms, which can be used to recommend new workflows and their associated videos to software users. An expert validation of the task classification approach found that 82% of the time, experts agreed with the classifications. We also evaluate our workflow recommender algorithms, demonstrating their potential and suggesting avenues for future work.

2018 · Xu Wang et al. · Carnegie Mellon University, Autodesk Research · CHI · Topics: Crowdsourcing Task Design & Quality Control; Knowledge Worker Tools & Workflows
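The frequent-pattern-mining half of the hierarchical approach described above can be sketched in miniature: count contiguous command n-grams across a set of command logs and keep those meeting a support threshold. The function name and the example logs are illustrative, not taken from the paper:

```python
from collections import Counter

def frequent_patterns(logs, n=2, min_support=2):
    """Count contiguous command n-grams across logs and keep
    those that appear at least min_support times."""
    counts = Counter()
    for log in logs:
        for i in range(len(log) - n + 1):
            counts[tuple(log[i:i + n])] += 1
    return {p: c for p, c in counts.items() if c >= min_support}

# Hypothetical command logs from a CAD-style application.
logs = [
    ["select", "extrude", "fillet"],
    ["select", "extrude", "shell"],
    ["sketch", "select", "extrude"],
]

print(frequent_patterns(logs))  # {('select', 'extrude'): 3}
```

In the full pipeline, topic modeling would first group videos and logs into coarse task topics, and mining like this would then surface recurring command patterns within each topic.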
Maestro: Designing a System for Real-Time Orchestration of 3D Modeling Workshops

Instructors of 3D design workshops for children face many challenges, including maintaining awareness of students’ progress, helping students who need additional attention, and creating a fun experience while still achieving learning goals. To help address these challenges, we developed Maestro, a workshop orchestration system that visualizes students’ progress, automatically detects and draws attention to common challenges faced by students, and provides mechanisms to address common student challenges as they occur. We present the design of Maestro, and the results of a case-study evaluation with an experienced facilitator and 13 children. The facilitator appreciated Maestro’s real-time indications of which students were successfully following her tutorial demonstration, and recognized the system’s potential to “extend her reach” while helping struggling students. Participant interaction data from the study provided support for our follow-along detection algorithm, and the capability to remind students to use 3D navigation.

2018 · Volodymyr Dziubak et al. · UIST · Topics: Programming Education & Computational Thinking; Prototyping & User Testing
Investigating How Online Help and Learning Resources Support Children's Use of 3D Design Software

3D design software is increasingly available to children through libraries, maker spaces, and for free on the web. This unprecedented availability has the potential to unleash children’s creativity in cutting-edge domains, but is limited by the steep learning curve of the software. Unfortunately, there is little past work studying the breakdowns faced by children in this domain; most past work has focused on adults in professional settings. In this paper, we present a study of online learning resources and help-seeking strategies available to children starting out with 3D design software. We find that children face a range of challenges when trying to learn 3D design independently: tutorials present instructions at a granularity that leads to overlooked and incorrectly performed actions, and online help-seeking is largely ineffective due to challenges with query formulation and evaluating found information. Based on our findings, we recommend design directions for next-generation help and learning systems tailored to children.

2018 · Nathaniel Hudson et al. · Autodesk Research, Ross Video · CHI · Topics: Desktop 3D Printing & Personal Fabrication; Makerspace Culture
Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets

This paper presents Dream Lens, an interactive visual analysis tool for exploring and visualizing large-scale generative design datasets. Unlike traditional computer-aided design, where users create a single model, with generative design, users specify high-level goals and constraints, and the system automatically generates hundreds or thousands of candidates all meeting the design criteria. Once a large collection of design variations is created, the designer is left with the task of finding the design, or set of designs, which best meets their requirements. This is a complicated task which could require analyzing the structural characteristics and visual aesthetics of the designs. Two studies are conducted which demonstrate the usability and usefulness of the Dream Lens system, and a generatively designed dataset of 16,800 designs for a sample design problem is described and publicly released to encourage advancement in this area.

2018 · Justin Matejka et al. · Autodesk Research · CHI · Topics: Generative AI (Text, Image, Music, Video); Interactive Data Visualization
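The core task the abstract describes, narrowing thousands of generated candidates down to the few that meet a designer's requirements, amounts to filtering designs by their performance attributes. A minimal sketch, assuming hypothetical mass and stress attributes rather than the dataset's actual schema:

```python
# Hypothetical generative design candidates with performance attributes.
designs = [
    {"id": 0, "mass": 1.2, "max_stress": 80.0},
    {"id": 1, "mass": 0.9, "max_stress": 120.0},
    {"id": 2, "mass": 0.7, "max_stress": 95.0},
]

def filter_designs(designs, mass_limit, stress_limit):
    """Keep only candidates within the given mass and stress limits."""
    return [d for d in designs
            if d["mass"] <= mass_limit and d["max_stress"] <= stress_limit]

print(filter_designs(designs, mass_limit=1.0, stress_limit=100.0))  # keeps only design 2
```

A tool like Dream Lens layers interactive visualization on top of exactly this kind of attribute filtering, so designers can adjust limits and see the surviving candidates update in real time.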
Forte: User-Driven Generative Design

Low-cost fabrication machines (e.g., 3D printers) offer the promise of creating custom-designed objects by a range of users. To maximize performance, generative design methods such as topology optimization can automatically optimize properties of a design based on high-level specifications. Though promising, such methods require people to map their design ideas, often unintuitively, to a small number of mathematical input parameters, and the relationship between those parameters and a generated design is often unclear, making it difficult to iterate on a design. We present Forte, a sketch-based, real-time interactive tool for people to directly express and iterate on their designs via 2D topology optimization. Users can ask the system to add structures, provide a variation with better performance, or optimize internal material layouts. Users can globally control how much to 'deviate' from the initial sketch, or perform local suggestive editing, which interactively prompts the system to update based on the new information. Design sessions with 10 participants demonstrate that Forte empowers designers to create and explore a range of optimized designs with custom forms and styles.

2018 · Xiang 'Anthony' Chen et al. · Carnegie Mellon University · CHI · Topics: Desktop 3D Printing & Personal Fabrication; Customizable & Personalized Objects
Blocks-to-CAD: A Cross-Application Bridge from Minecraft to 3D Modeling

Learning a new software application can be a challenge, requiring the user to enter a new environment where their existing knowledge and skills do not apply, or worse, work against them. To ease this transition, we propose the idea of cross-application bridges that start with the interface of a familiar application, and gradually change their interaction model, tools, conventions, and appearance to resemble that of an application to be learned. To investigate this idea, we developed Blocks-to-CAD, a cross-application bridge from Minecraft-style games to 3D solid modeling. A user study of our system demonstrated that our modifications to the game did not hurt enjoyment or increase cognitive load, and that players could successfully apply knowledge and skills learned in the game to tasks in a popular 3D solid modeling application. The process of developing Blocks-to-CAD also revealed eight design strategies that can be applied to design cross-application bridges for other applications and domains.

2018 · Benjamin Lafreniere et al. · UIST · Topics: Aging-Friendly Technology Design; Customizable & Personalized Objects
SymbiosisSketch: Combining 2D & 3D Sketching for Designing Detailed 3D Objects in Situ

We present SymbiosisSketch, a hybrid sketching system that combines drawing in air (3D) and on a drawing surface (2D) to create detailed 3D designs of arbitrary scale in an augmented reality (AR) setting. SymbiosisSketch leverages the complementary affordances of 3D (immersive, unconstrained, life-sized) and 2D (precise, constrained, ergonomic) interactions for in situ 3D conceptual design. A defining aspect of our system is the ongoing creation of surfaces from unorganized collections of 3D curves. These surfaces serve a dual purpose: as 3D canvases to map strokes drawn on a 2D tablet, and as shape proxies to occlude the physical environment and hidden curves in a 3D sketch. SymbiosisSketch users draw interchangeably on a 2D tablet or in 3D within an ergonomically comfortable canonical volume, mapped to arbitrary scale in AR. Our evaluation study shows this hybrid technique to be easy to use in situ and effective in transcending the creative potential of either traditional sketching or drawing in air.

2018 · Rahul Arora et al. · University of Toronto · CHI · Topics: Mixed Reality Workspaces; 3D Modeling & Animation