InfoPrint: Embedding Interactive Information in 3D Prints Using Low-Cost Readily-Available Printers and MaterialsJiang et al. propose InfoPrint, a method that uses low-cost, commonly available 3D printers and regular materials to embed interactive information inside printed objects, enabling digital augmentation and programmable functionality for physical objects.2024WJWeiwei Jiang et al.Desktop 3D Printing & Personal FabricationCustomizable & Personalized ObjectsUbiComp
Characteristics of Deep and Skim Reading on Smartphones vs. Desktop: A Comparative StudyDeep reading fosters text comprehension, memory, and critical thinking. The growing prevalence of digital reading on mobile interfaces raises concerns that deep reading is being replaced by skimming and sifting through information, but this is currently unmeasured. Traditionally, reading quality is assessed using comprehension tests, which require readers to explicitly answer a set of carefully composed questions. To quantify and understand reading behaviour in natural settings and at scale, however, implicit measures of deep versus skim reading are needed across desktop and mobile devices, the most prominent digital reading platforms. In this paper, we present an approach to systematically induce deep and skim reading and subsequently train classifiers to discriminate these two reading styles based on eye movement patterns and interaction data. Based on a user study with 29 participants, we created models that detect deep reading on both devices with up to 0.82 AUC. We present the characteristics of deep reading and discuss how our models can be used to measure the effect of reading UI design and monitor long-term changes in reading behaviours.2023XCXiuge Chen et al.The University of MelbourneEye Tracking & Gaze InteractionVisualization Perception & CognitionCHI
Bias-Aware Systems: Exploring Indicators for the Occurrences of Cognitive Biases when Facing Different OpinionsCognitive biases have been shown to play a critical role in creating echo chambers and spreading misinformation. They undermine our ability to evaluate information and can influence our behaviour without our awareness. To allow the study of occurrences and effects of biases on information consumption behaviour, we explore indicators for cognitive biases in physiological and interaction data. Therefore, we conducted two experiments investigating how people experience statements that are congruent or divergent from their own ideological stance. We collected interaction data, eye tracking data, hemodynamic responses, and electrodermal activity while participants were exposed to ideologically tainted statements. Our results indicate that people spend more time processing statements that are incongruent with their own opinion. We detected differences in blood oxygenation levels between congruent and divergent opinions, a first step towards building systems to detect and quantify cognitive biases.2023NBNattapat Boonprakong et al.University of MelbourneHuman Pose & Activity RecognitionVisualization Perception & CognitionChronic Disease Self-Management (Diabetes, Hypertension, etc.)CHI
Modeling Temporal Target Selection: A Perspective from Its Spatial CorrespondenceTemporal target selection requires users to wait and trigger the selection input within a bounded time window, with a selection cursor that is expected to be delayed. This task conceptualizes, for example, a variety of game scenarios such as determining the timing of shooting a projectile towards a moving object. In this work, we explore models that predict "when" users typically perform a selection (i.e., user selection distribution) and their selection error rates in such tasks. We hypothesize that users react to temporal factors including "distance", "width", and "delay" as how they treat the corresponding variables in spatial target selection. The derived models are evaluated in a controlled experiment and an MTurk-based online study. Our research contributes new knowledge on user behavior in temporal target selection tasks and its potential connection with its spatial correspondence. Our models and conclusions can benefit both users and designers of relevant interactive applications.2023DYDifeng Yu et al.University of MelbourneHuman Pose & Activity RecognitionVisualization Perception & CognitionGamification DesignCHI
Understanding How to Administer Voice Surveys through Smart SpeakersSmart speakers have become exceedingly popular and entered many people's homes due to their ability to engage users with natural conversations. Researchers have also looked into using smart speakers as an interface to collect self-reported health data through conversations. Responding to surveys prompted by smart speakers requires users to listen to questions and answer in voice without any visual stimuli. Compared to traditional web-based surveys, where users can see questions and answers visually, voice surveys may be more cognitively challenging. Therefore, to collect reliable survey data, it is important to understand what types of questions are suitable to be administered by smart speakers. We selected five common survey questionnaires and deployed them as voice surveys and web surveys in a within-subject study. Our 24 participants answered questions using voice and web questionnaires in one session. They then repeated the same study session after 1 week to provide a "retest" response. Our results suggest that voice surveys have comparable reliability to web surveys. We find that, when using 5-point or 7-point scales, voice surveys take about twice as long as web surveys. Based on objective measurements, such as response agreement and test-retest reliability, and subjective evaluations of user experience, we recommend that researchers consider adopting the binary scale and 5-point numerical scales for voice surveys on smart speakers.2022JWJing Wei et al.Human-AI collaborationCSCW
What Could Possibly Go Wrong When Interacting with Proactive Smart Speakers? A Case Study Using an ESM ApplicationVoice user interfaces (VUIs) have made their way into people's daily lives, from voice assistants to smart speakers. Although VUIs typically just react to direct user commands, increasingly, they incorporate elements of proactive behaviors. In particular, proactive smart speakers have the potential for many applications, ranging from healthcare to entertainment; however, their usability in everyday life is subject to interaction errors. To systematically investigate the nature of errors, we designed a voice-based Experience Sampling Method (ESM) application to run on proactive speakers. We captured 1,213 user interactions in a 3-week field deployment in 13 participants' homes. Through auxiliary audio recordings and logs, we identify substantial interaction errors and strategies that users apply to overcome those errors. We further analyze the interaction timings and provide insights into the time cost of errors. We find that, even for answering simple ESMs, interaction errors occur frequently and can hamper the usability of proactive speakers and user experience. Our work also identifies multiple facets of VUIs that can be improved in terms of the timing of speech.2022JWJing Wei et al.University of MelbourneVoice User Interface (VUI) DesignIntelligent Voice Assistants (Alexa, Siri, etc.)Explainable AI (XAI)CHI
Method for Appropriating the Brief Implicit Association Test to Elicit Biases in UsersImplicit tendencies and cognitive biases play an important role in how information is perceived and processed, a fact that can be both utilised and exploited by computing systems. The Implicit Association Test (IAT) has been widely used to assess people's associations of target concepts with qualitative attributes, such as the likelihood of being hired or convicted depending on race, gender, or age. The condensed version--the Brief IAT (BIAT)--aims to assess implicit biases by measuring reaction times to concept classifications. To use this measure in HCI research, however, we need a way to construct and validate target concepts, which tend to quickly evolve and depend on geographical and cultural interpretations. In this paper, we introduce and evaluate a new method to appropriate the BIAT using crowdsourcing to measure people's leanings on polarising topics. We present a web-based tool to test participants' bias on custom themes, where self-assessments often fail. We validated our approach with 14 domain experts and assessed the fit of crowdsourced test construction. Our method allows researchers of different domains to create and validate bias tests that can be geographically tailored and updated over time. We discuss how our method can be applied to surface implicit user biases and run studies where cognitive biases may impede reliable results.2022TDTilman Dingler et al.University of MelbourneAlgorithmic Fairness & BiasComputational Methods in HCICHI
A Critique of Electrodermal Activity Practices at CHIElectrodermal activity (EDA) data is widely used in HCI to capture rich and unbiased signals. Results from related fields, however, have suggested several methodological issues that can arise when practices do not follow established standards. In this paper, we present a systematic methodological review of CHI papers involving the use of EDA data according to best practices from the field of psychophysiology, where standards are well-established and mature. We found severe issues in our sample at all stages of the research process. To ensure the validity of future research, we highlight pitfalls and offer directions for how to improve community standards.2021EBEbrahim Babaei et al.University of MelbourneBiosensors & Physiological MonitoringResearch Ethics & Open ScienceCHI
Impact of Task on Attentional Tunneling in Handheld Augmented RealityAttentional tunneling describes a phenomenon in Augmented Reality (AR) where users excessively focus on virtual content while neglecting their physical surroundings. This leads to the concern that users could neglect hazardous situations when using AR applications. However, studies have often confounded the role of the virtual content with the role of the associated task in inducing attentional tunneling. In this paper, we disentangle the impact of the associated task and of the virtual content on the attentional tunneling effect by measuring reaction times to events in two user studies. We found that presenting virtual content did not significantly increase user reaction times to events, but adding a task to the content did. This work contributes towards our understanding of the attentional tunneling effect on handheld AR devices, and highlights the need to consider both task and context when evaluating AR application usage.2021BSBrandon Victor Syiem et al.The University of MelbourneAR Navigation & Context AwarenessImmersion & Presence ResearchCHI
Gaze-Supported 3D Object Manipulation in Virtual RealityThis paper investigates integration, coordination, and transition strategies of gaze and hand input for 3D object manipulation in VR. Specifically, this work aims to understand whether incorporating gaze input can benefit VR object manipulation tasks, and how it should be combined with hand input for improved usability and efficiency. We designed four gaze-supported techniques that leverage different combination strategies for object manipulation and evaluated them in two user studies. Overall, we show that gaze did not offer significant performance benefits for transforming objects in the primary working space, where all objects were located in front of the user and within the arm-reach distance, but can be useful for a larger environment with distant targets. We further offer insights regarding combination strategies of gaze and hand input, and derive implications that can help guide the design of future VR systems that incorporate gaze input for 3D object manipulation.2021DYDifeng Yu et al.The University of MelbourneEye Tracking & Gaze InteractionImmersion & Presence ResearchCHI
"Hi! I am the Crowd Tasker" Crowdsourcing through Digital Voice AssistantsInspired by the increasing prevalence of digital voice assistants, we demonstrate the feasibility of using voice interfaces to deploy and complete crowd tasks. We have developed Crowd Tasker, a novel system that delivers crowd tasks through a digital voice assistant. In a lab study, we validate our proof-of-concept and show that crowd task performance through a voice assistant is comparable to that of a web interface for voice-compatible and voice-based crowd tasks for native English speakers. We also report on a field study where participants used our system in their homes. We find that crowdsourcing through voice can provide greater flexibility to crowd workers by allowing them to work in brief sessions, enabling multi-tasking, and reducing the time and effort required to initiate tasks. We conclude by proposing a set of design guidelines for the creation of crowd tasks for voice and the development of future voice-based crowdsourcing systems.2020DHDanula Hettiachchi et al.University of MelbourneVoice User Interface (VUI) DesignConversational ChatbotsCHI
Faces of Focus: A Study on the Facial Cues of Attentional StatesAutomatically detecting attentional states is a prerequisite for designing interventions to manage attention — knowledge workers' most critical resource. As a first step towards this goal, it is necessary to understand how different attentional states are made discernible through visible cues in knowledge workers. In this paper, we demonstrate the important facial cues to detect attentional states by evaluating a data set of 15 participants that we tracked over a whole workday, which included their challenge and engagement levels. Our evaluation shows that gaze, pitch, and lips part action units are indicators of engaged work; while pitch, gaze movements, gaze angle, and upper-lid raiser action units are indicators of challenging work. These findings reveal a significant relationship between facial cues and both engagement and challenge levels experienced by our tracked participants. Our work contributes to the design of future studies to detect attentional states based on facial cues.2020EBEbrahim Babaei et al.University of MelbourneEye Tracking & Gaze InteractionHuman Pose & Activity RecognitionCHI
Does Smartphone Use Drive our Emotions or vice versa? A Causal AnalysisIn this paper, we demonstrate the existence of a bidirectional causal relationship between smartphone application use and user emotions. In a two-week long in-the-wild study with 30 participants we captured 502,851 instances of smartphone application use in tandem with corresponding emotional data from facial expressions. Our analysis shows that while in most cases application use drives user emotions, multiple application categories exist for which the causal effect is in the opposite direction. Our findings shed light on the relationship between smartphone use and emotional states. We furthermore discuss the opportunities for research and practice that arise from our findings and their potential to support emotional well-being.2020ZSZhanna Sarsenbayeva et al.University of MelbourneMental Health Apps & Online Support CommunitiesSleep & Stress MonitoringCHI
Context-Informed Scheduling and Analysis: Improving Accuracy of Mobile Self-ReportsMobile self-reports are a popular technique to collect participant labelled data in the wild. While literature has focused on increasing participant compliance to self-report questionnaires, relatively little work has assessed response accuracy. In this paper, we investigate how participant context can affect response accuracy and help identify strategies to improve the accuracy of mobile self-report data. In a 3-week study we collect over 2,500 questionnaires containing both verifiable and non-verifiable questions. We find that response accuracy is higher for questionnaires that arrive when the phone is not in ongoing or very recent use. Furthermore, our results show that long completion times are an indicator of lower accuracy. Using contextual mechanisms readily available on smartphones, we are able to explain up to 13% of the variance in participant accuracy. We offer actionable recommendations to assist researchers in their future deployments of mobile self-report studies.2019NBNiels van Berkel et al.The University of MelbournePrivacy by Design & User ControlContext-Aware ComputingNotification & Interruption ManagementCHI
Continuous Alertness Assessments: Using EOG Glasses to Unobtrusively Monitor Fatigue Levels In-The-WildAs the day progresses, cognitive functions are subject to fluctuations. While the circadian process results in diurnal peaks and drops, the homeostatic process manifests itself in a steady decline of alertness across the day. Awareness of these changes allows the design of proactive recommender and warning systems, which encourage demanding tasks during periods of high alertness and flag accident-prone activities in low alertness states. In contrast to conventional alertness assessments, which are often limited to lab conditions, bulky hardware, or interruptive self-assessments, we base our approach on eye blink frequency data known to directly relate to fatigue levels. Using electrooculography sensors integrated into regular glasses' frames, we recorded the eye movements of 16 participants over the course of two weeks in-the-wild and built a robust model of diurnal alertness changes. Our proposed method allows for unobtrusive and continuous monitoring of alertness levels throughout the day.2019BTBenjamin Tag et al.Keio UniversityEye Tracking & Gaze InteractionBiosensors & Physiological MonitoringCHI