M3BAT: Unsupervised Domain Adaptation for Multimodal Mobile Sensing with Multi-Branch Adversarial Training
Meegahapola et al. propose M3BAT, a multi-branch adversarial training framework for unsupervised domain adaptation of multimodal mobile sensing data, improving recognition accuracy on sensor data across domains.
2024 · Lakmal Meegahapola et al. · Context-Aware Computing; Computational Methods in HCI · UbiComp

Learning About Social Context From Smartphone Data: Generalization Across Countries and Daily Life Moments
Understanding how social situations unfold in people's daily lives is relevant to designing mobile systems that can support users in their personal goals, well-being, and activities. As an alternative to questionnaires, some studies have used passively collected smartphone sensor data to infer social context (i.e., being alone or not) with machine learning models. However, the few existing studies have focused on specific daily life occasions and limited geographic cohorts in one or two countries. This limits the understanding of how inference models generalize to everyday life occasions and multiple countries. In this paper, we used a novel, large-scale, and multimodal smartphone sensing dataset with over 216K self-reports collected from 581 young adults in five countries (Mongolia, Italy, Denmark, UK, Paraguay), first to understand whether social context inference is feasible with sensor data, and then to examine how behavioral and country-level diversity affects inferences. We found that several sensors are informative of social context, that partially personalized multi-country models (trained and tested with data from all countries) and country-specific models (trained and tested within countries) can achieve similar performance above 90% AUC, and that models do not generalize well to unseen countries regardless of geographic proximity. These findings confirm the importance of the diversity of mobile data for better understanding social context inference models in different countries.
2024 · Aurel Ruben Mäder et al. (Idiap Research Institute, EPFL) · Human Pose & Activity Recognition; Context-Aware Computing; Ubiquitous Computing · CHI

A System for Human-Robot Teaming through End-User Programming and Shared Autonomy
Many industrial tasks, such as sanding, installing fasteners, and wire harnessing, are difficult to automate due to task complexity and variability. We instead investigate deploying robots in an assistive role for these tasks, where the robot assumes the physical task burden and the skilled worker provides both the high-level task planning and low-level feedback necessary to effectively complete the task. In this article, we describe the development of a system for flexible human-robot teaming that combines state-of-the-art methods in end-user programming and shared autonomy and its implementation in sanding applications. We demonstrate the use of the system in two types of sanding tasks, situated in aircraft manufacturing, that highlight two potential workflows within the human-robot teaming setup. We conclude by discussing challenges and opportunities in human-robot teaming identified during the development, application, and demonstration of our system.
2024 · Michael Hagenow et al. · Human-Robot Collaboration (HRC); Computational Methods in HCI · HRI

Generalization and Personalization of Mobile Sensing-Based Mood Inference Models: An Analysis of College Students in Eight Countries
Mood inference with mobile sensing data has been studied in ubicomp literature over the last decade. This inference enables context-aware and personalized user experiences in general mobile apps and valuable feedback and interventions in mobile health apps. However, even though model generalization issues have been highlighted in many studies, the focus has always been on improving the accuracies of models using different sensing modalities and machine learning techniques, with datasets collected in homogeneous populations. In contrast, less attention has been given to studying the performance of mood inference models to assess whether models generalize to new countries. In this study, we collected a mobile sensing dataset with 329K self-reports from 678 participants in eight countries (China, Denmark, India, Italy, Mexico, Mongolia, Paraguay, UK) to assess the effect of geographical diversity on mood inference models. We define and evaluate country-specific (trained and tested within a country), continent-specific (trained and tested within a continent), country-agnostic (tested on a country not seen in training data), and multi-country (trained and tested with multiple countries) approaches trained on sensor data for two mood inference tasks with population-level (non-personalized) and hybrid (partially personalized) models. We show that partially personalized country-specific models perform best, yielding area under the receiver operating characteristic curve (AUROC) scores in the range 0.78--0.98 for two-class (negative vs. positive valence) and 0.76--0.94 for three-class (negative vs. neutral vs. positive valence) inference. Further, with the country-agnostic approach, we show that models do not perform well compared to country-specific settings, even when models are partially personalized. We also show that continent-specific models outperform multi-country models in the case of Europe. Overall, we uncover generalization issues of mood inference models to new countries and how the geographical similarity of countries might impact mood inference.
2023 · Lakmal Meegahapola et al. · Mental Health Apps & Online Support Communities; Context-Aware Computing · UbiComp · https://dl.acm.org/doi/10.1145/3569483

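Several entries in this list (the mood, social-context, and activity papers) share the same evaluation design: country-specific models trained and tested within one country versus country-agnostic models tested on a country held out of training. A minimal sketch of that design, using synthetic data and a generic scikit-learn classifier; the country codes, feature counts, and model choice are illustrative assumptions, not the papers' actual pipelines:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
countries = ["IT", "MN", "UK", "DK", "PY"]
# Synthetic sensor features with a country-dependent distribution shift;
# labels are random stand-ins for the self-reported target.
data = {c: (rng.normal(i * 0.5, 1.0, size=(400, 8)),
            rng.integers(0, 2, size=400))
        for i, c in enumerate(countries)}

def auroc(model, X_te, y_te):
    return roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])

# Country-specific: train and test within a single country.
X, y = data["IT"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
specific = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("country-specific AUROC:", auroc(specific, X_te, y_te))

# Country-agnostic: train on four countries, test on the held-out fifth.
X_tr = np.vstack([data[c][0] for c in countries[1:]])
y_tr = np.concatenate([data[c][1] for c in countries[1:]])
agnostic = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("country-agnostic AUROC:", auroc(agnostic, *data["IT"]))
```

With real (non-random) labels, the gap between the two printed scores is what the papers report as the generalization issue.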
Periscope: A Robotic Camera System to Support Remote Physical Collaboration
We investigate how robotic camera systems can offer new capabilities to computer-supported cooperative work through the design, development, and evaluation of a prototype system called Periscope. With Periscope, a local worker completes manipulation tasks with guidance from a remote helper who observes the workspace through a camera mounted on a semi-autonomous robotic arm that is co-located with the worker. Our key insight is that the helper, the worker, and the robot should all share responsibility for the camera view, an approach we call shared camera control. Using this approach, we present a set of modes that distribute the control of the camera between the human collaborators and the autonomous robot depending on task needs. We demonstrate the system's utility and the promise of shared camera control through a preliminary study where 12 dyads collaboratively worked on assembly tasks, and discuss design and research implications of our work for future robotic camera systems that facilitate remote collaboration.
2023 · Pragathi Praveena et al. · Human Robot Interaction · CSCW

Situated Participatory Design: A Method for In Situ Design of Robotic Interaction with Older Adults
We present a participatory design method to design human-robot interactions with older adults and its application through a case study of designing an assistive robot for a senior living facility. The method, called Situated Participatory Design (sPD), was designed considering the challenges of working with older adults and involves three phases that enable designing and testing use scenarios through realistic, iterative interactions with the robot. In design sessions with nine residents and three caregivers, we uncovered a number of insights about sPD that help us understand its benefits and limitations. For example, we observed how designs evolved through iterative interactions and how early exposure to the robot helped participants consider using the robot in their daily life. With sPD, we aim to help future researchers to increase and deepen the participation of older adults in designing assistive technologies.
2023 · Laura Stegner et al. (University of Wisconsin-Madison) · Domestic Robots; Aging-in-Place Assistance Systems; Participatory Design · CHI

Quantified Canine: Inferring Dog Personality From Wearables
Being able to assess dog personality can be used to, for example, match shelter dogs with future owners, and personalize dog activities. Such an assessment typically relies on experts or psychological scales administered to dog owners, both of which are costly. To tackle that challenge, we built a device called "Patchkeeper" that can be strapped on the pet's chest and measures activity through an accelerometer and a gyroscope. In an in-the-wild deployment involving 12 healthy dogs, we collected 1300 hours of sensor activity data and dog personality test results from two validated questionnaires. By matching these two datasets, we trained ten machine learning classifiers that predicted dog personality from activity data, achieving AUCs in [0.63-0.90], suggesting the value of tracking psychological signals of pets using wearable technologies.
2023 · Lakmal Meegahapola et al. (Idiap Research Institute, École Polytechnique Fédérale de Lausanne (EPFL)) · Human Pose & Activity Recognition; Biosensors & Physiological Monitoring · CHI

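As an illustration of the kind of pipeline behind classifiers trained on accelerometer/gyroscope streams like those above, the sketch below computes simple windowed statistical features from a synthetic accelerometer signal. The sampling rate (50 Hz), window length, and feature set are assumptions for illustration, not the Patchkeeper implementation:

```python
import numpy as np

def window_features(signal, fs=50, win_s=10):
    """signal: (n_samples, 3) accelerometer axes -> (n_windows, 8) features."""
    win = fs * win_s
    n = len(signal) // win
    feats = []
    for i in range(n):
        w = signal[i * win:(i + 1) * win]
        mag = np.linalg.norm(w, axis=1)      # overall movement intensity
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),   # per-axis mean and variability
            [mag.mean(), mag.std()],         # magnitude statistics
        ]))
    return np.array(feats)

# One minute of synthetic 50 Hz tri-axial data -> six 10-second windows.
accel = np.random.default_rng(1).normal(size=(50 * 60, 3))
X = window_features(accel)
print(X.shape)  # (6, 8)
```

Feature matrices of this shape, aggregated per dog, are the usual input to off-the-shelf classifiers evaluated with AUC, as in the study.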
Complex Daily Activities, Country-Level Diversity, and Smartphone Sensing: A Study in Denmark, Italy, Mongolia, Paraguay, and UK
Smartphones enable understanding human behavior with activity recognition to support people's daily lives. Prior studies focused on using inertial sensors to detect simple activities (sitting, walking, running, etc.) and were mostly conducted in homogeneous populations within a country. However, people are more sedentary in the post-pandemic world with the prevalence of remote/hybrid work/study settings, making detecting simple activities less meaningful for context-aware applications. Hence, the understanding of (i) how multimodal smartphone sensors and machine learning models could be used to detect complex daily activities that can better inform about people's daily lives, and (ii) how models generalize to unseen countries, is limited. We analyzed in-the-wild smartphone data and ~216K self-reports from 637 college students in five countries (Italy, Mongolia, UK, Denmark, Paraguay). Then, we defined a 12-class complex daily activity recognition task and evaluated the performance with different approaches. We found that even though the generic multi-country approach provided an AUROC of 0.70, the country-specific approach performed better with AUROC scores in [0.79-0.89]. We believe that research along the lines of diversity awareness is fundamental for advancing human behavior understanding through smartphones and machine learning, for more real-world utility across countries.
2023 · Karim Assi et al. (École Polytechnique Fédérale de Lausanne) · Human Pose & Activity Recognition; Context-Aware Computing · CHI

Declarative Variables in Online Dating: a Mixed-Method Analysis of a Mimetic-Distinctive Mechanism
Declarative variables of self-description have a long-standing tradition in matchmaking media. With the advent of online dating platforms and their brand positioning, the volume and semantics of variables vary greatly across apps. However, a variable landscape across multiple platforms, providing an in-depth understanding of the dating structure offered to users, has hitherto been absent in the literature. In this study, more than 300 declarative variables from 22 Anglophone and Francophone dating apps are examined. A mixed-method research design is used, combining hierarchical classification with an interview analysis of nine founders and developers in the industry. We present a new typology of variables in nine categories and a classification of dating apps, which highlights a double mimetic-distinctive mechanism in the variable definition and reflects the dating market. From the interviews, we extract three main factors concerning the economic and sociotechnical framework of coding practices, the actors' personal experience, and the development methodologies including user traces that influence this mechanism. This work, which to our knowledge is the most extensive thus far on dating app declarative variables, provides a new perspective on the analysis of the intersection between developers and users of online dating, and one that is mediated through variables, among other components.
2021 · Jessica Pidoux et al. · Connecting and Reaching Out · CSCW

My Own Private Nightlife: Understanding Youth Personal Spaces from Crowdsourced Video
Private nightlife environments of young people are likely characterized by particular ambiances, physical attributes, and activities, but little is known about them. For instance, previous studies have documented ambiance and physical characteristics of homes using pictures from Foursquare or Airbnb, but there is reasonable doubt that such staged data can reliably represent real-life situations. As a first attempt at describing the physical and ambiance features of homes using manual annotations and predicting ambiance characteristics using machine-extracted features, we used a unique dataset of 301 crowdsourced videos of home environments recorded in-situ by young people on weekend nights. Agreement among the five independent annotators was high for most features. Results of the annotation task revealed various patterns of youth home space features, such as the type of room attended (e.g., predominantly living room and bedroom), the number and gender of friends present, the type of ongoing activities (e.g., watching TV or computer alone; drinking, chatting, and eating in the presence of others), and ambiance attributes with their correlations. Then, object and scene features of places, extracted by deep learning, were found to highly correlate with ambiances, while sound features mostly recognized 'music' and 'speech' only. Finally, the results of a regression task for predicting ambiances from those features showed that six of the ambiance categories can be inferred with R² in the [0.21, 0.69] range. Our work is novel with regard to the type of data (i.e., crowdsourced videos of real-life homes) and the analytical design (i.e., the combined use of manual annotation and deep learning to identify relevant features). This work points to ways to automatically predict ambiances from videos of private home environments, as a contribution to the multimedia community researching ambiances at private residences.
2019 · Thanh-Trung Phan et al. · Youth and Resilience · CSCW
