Wisdom of Crowds: A Human-Machine-Things Cooperative Scheduling Method for Heterogeneous Mobile Crowdsensing
Relying on the development of crowdsourcing ideas and mobile crowd sensing (MCS) technology, many tasks that originally required substantial manpower and material resources have been solved efficiently. However, with ongoing urbanization, traditional MCS systems have gradually become unable to cope with the demands of massive sensing tasks and high spatio-temporal sensing coverage. The challenges are as follows: 1) the scarcity of participants and the limitations of human motion patterns leave spatio-temporal blind spots in the collected sensing data; 2) a single type of participant limits the sensing ability of the system and the types of data that can be collected, which affects sensing precision and quality. With the emergence of various intelligent sensing terminals in cities, heterogeneous crowd sensing that integrates human, machine, and thing participants has become a new generation of sensing mode. For spatio-temporally correlated sensing scenarios, this article designs a new human-machine-things cooperative scheduling (HMT-CS) algorithm framework that comprehensively considers the diverse sensing skills, spatio-temporal trajectories, and sensing costs of heterogeneous participants, as well as the system's total budget constraint. The algorithm matches suitable heterogeneous participants to each task, which greatly improves the sensing quality, sensing fairness, and overall utility of heterogeneous MCS systems. We combined multiple public real-world urban datasets to conduct an in-depth comparative analysis and comprehensive evaluation of the algorithm; the results show that our method outperforms the baselines on all indicators.
2024 · Yimeng Liu et al. · Session 2c: Blind Users and Collaborative Sensing · CSCW
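The abstract does not spell out the HMT-CS algorithm, but the core setting (matching participants to tasks under a total budget) can be illustrated with a minimal greedy sketch. This is a stand-in under stated assumptions, not the paper's method; the participant names and the utility-per-cost heuristic are hypothetical.

```python
def greedy_select(candidates, budget):
    """Illustrative budget-constrained participant selection:
    pick candidates greedily by utility-per-cost ratio until the
    total budget is exhausted.  candidates: (name, utility, cost)."""
    chosen, spent = [], 0.0
    for name, utility, cost in sorted(
            candidates, key=lambda c: c[1] / c[2], reverse=True):
        if spent + cost <= budget:  # skip anyone who would bust the budget
            chosen.append(name)
            spent += cost
    return chosen, spent

# Hypothetical heterogeneous participants: (name, utility, cost)
picked, cost = greedy_select(
    [("drone", 10, 5), ("camera", 6, 2), ("courier", 4, 4)], budget=7)
# picked == ["camera", "drone"], cost == 7.0
```

The ratio-greedy rule is only a baseline heuristic for this class of knapsack-like assignment problems; the paper's framework additionally accounts for spatio-temporal trajectories and fairness.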
EchoPFL: Asynchronous Personalized Federated Learning on Mobile Devices with On-Demand Staleness Control
Li et al. propose the EchoPFL framework, which enables efficient personalized federated learning on mobile devices through asynchronous local training and on-demand staleness control while protecting user privacy.
2024 · Xiaochen Li et al. · Context-Aware Computing · Computational Methods in HCI · UbiComp
GrainSense: A Wireless Grain Moisture Sensing System based on Wi-Fi Signals
Wang et al. develop the GrainSense system, which uses Wi-Fi signals to measure grain moisture without physical contact, enabling real-time wireless monitoring of stored-grain humidity.
2024 · Zhu Wang et al. · Context-Aware Computing · Ubiquitous Computing · Ecological Design & Green Computing · UbiComp
Grip-Reach-Touch-Repeat: A Refined Model of Grasp to Encompass One-Handed Interaction with Arbitrary Form Factor Devices
We extend grasp models to encompass one-handed interaction with arbitrarily shaped touchscreen devices. Current models focus on how objects are stably held by external forces. With touchscreen devices, however, we postulate that users trade off holding the device securely against exploring it interactively. To verify this, we first conducted a qualitative study that asked participants to grasp 3D-printed objects while considering different levels of interactivity. The results confirm our hypothesis and reveal clear changes in posture. To further verify this trade-off and design interactions, we developed simulation software capable of computing the stability of a grasp and its reachability. We then conducted a second study, based on the observed predominant grasps, to validate our software with an instrumented glove. The results again confirm a consistent trade-off between stability and reachability. We conclude by discussing how this research can inform computational tools for hand-held interaction with arbitrarily shaped touchscreen devices.
2024 · Kaixing Zhao et al. · Northwestern Polytechnical University · Haptic Wearables · Shape-Changing Interfaces & Soft Robotic Materials · Hand Gesture Recognition · CHI
Understanding the Mechanism of Through-Wall Wireless Sensing: A Model-based Perspective
Over the last few years, there has been growing interest in using Wi-Fi signals for human activity detection. A large number of Wi-Fi-based sensing systems have been developed, including respiration detection, gesture classification, identity recognition, etc. However, the usability and robustness of such systems are still limited by the complexity of practical environments. Various pioneering approaches have been proposed to solve this problem, among which the model-based approach is attracting increasing attention because it does not require a huge dataset for model training. Existing models are usually developed for Line-of-Sight (LoS) scenarios and cannot be applied to the design of wireless sensing systems in Non-Line-of-Sight (NLoS) scenarios (e.g., through-wall sensing). To fill this gap, we propose a through-wall wireless sensing model that characterizes the propagation laws and sensing mechanisms of Wi-Fi signals in through-wall scenarios. Specifically, based on the insight that Wi-Fi signals are refracted when there is a wall between the transceivers, we develop a refraction-aware Fresnel model and prove theoretically that the original Fresnel model is a special case of the proposed model. We find that the presence of a wall changes the distribution of Fresnel zones, which we call the "squeeze effect" of Fresnel zones. Moreover, our theoretical analysis indicates that the squeeze effect can improve the sensing capability (i.e., spatial resolution) of Wi-Fi signals. To validate the proposed model, we implement a through-wall respiration sensing system with a pair of transceivers. Extensive experiments in typical through-wall environments show that the respiration detection error is lower than 0.5 bpm when the subject's vertical distance to the line connecting the transceivers is less than 200 cm. To the best of our knowledge, this is the first theoretical model that reveals the Wi-Fi-based wireless sensing mechanism in through-wall scenarios. https://dl.acm.org/doi/10.1145/3569494
2023 · Hualei Zhang et al. · Context-Aware Computing · Ubiquitous Computing · UbiComp
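For context, the classic (no-wall) Fresnel model that the paper generalizes has a well-known closed form: the n-th Fresnel zone boundary is the ellipsoid where the reflected path exceeds the direct path by nλ/2, giving a radius of √(nλ·d1·d2 / (d1 + d2)) at a point d1 from the transmitter and d2 from the receiver. A minimal sketch of this free-space special case (the refraction-aware model itself is in the paper, not reproduced here; the link geometry below is a made-up example):

```python
import math

def fresnel_zone_radius(n, wavelength, d1, d2):
    """Radius of the n-th Fresnel zone boundary at a point d1 from
    the transmitter and d2 from the receiver, in free space -- the
    no-wall special case that the refraction-aware model reduces to."""
    return math.sqrt(n * wavelength * d1 * d2 / (d1 + d2))

wavelength = 3e8 / 5e9                              # ~6 cm at 5 GHz Wi-Fi
r1 = fresnel_zone_radius(1, wavelength, 2.0, 2.0)   # midpoint of a 4 m link
# r1 is roughly 0.24 m: sub-wavelength body motion crossing zone
# boundaries modulates the received signal, which is what makes
# respiration sensing possible.
```

The paper's "squeeze effect" says a wall's refraction compresses these zone boundaries, which this free-space formula does not capture.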
sUrban: Stable Prediction for Unseen Urban Data from Location-based Sensors
Recent machine learning research on smart cities has achieved great success in predicting future trends, under the key assumption that the test data follows the same distribution as the training data. Rapid urbanization, however, makes this assumption hard to hold in practice, because new data emerges from new environments (e.g., an emerging city or region) that may follow different distributions from data in existing environments. Unlike transfer-learning methods, which access target data during training, we often have no prior knowledge about the new environment. It is therefore critical to explore a predictive model that can be effectively adapted to unseen new environments. This work addresses this Out-of-Distribution (OOD) challenge for sustainable cities. We propose to identify two kinds of features that are useful for OOD prediction in each environment: (1) environment-invariant features that capture the commonalities shared across different environments; and (2) environment-aware features that characterize the unique information of each environment. Take bike riding as an example: bike demand in different cities often follows the same pattern of increasing significantly during rush hour on workdays, while each city also exhibits local patterns driven by its culture and its citizens' travel preferences. We introduce a principled framework, sUrban, that consists of an environment-invariant optimization module for learning invariant representations and an environment-aware optimization module for learning environment-aware representations. Evaluation on real-world datasets from various urban application domains corroborates the generalizability of sUrban. This work opens up new avenues for smart city development. https://doi.org/10.1145/3610877
2023 · Qianru Wang et al. · Smart Cities & Urban Sensing · Sustainable HCI · UbiComp
AdaEnlight: Energy-aware Low-light Video Stream Enhancement on Mobile Devices
The ubiquity of camera-embedded devices and advances in deep learning have stimulated various intelligent mobile video applications. These applications often demand on-device processing of video streams to deliver real-time, high-quality services, for privacy and robustness reasons. However, their performance is constrained by the raw video streams, which tend to be captured by the small-aperture cameras of ubiquitous mobile platforms in dim light. Despite extensive low-light video enhancement solutions, they are unfit for deployment on mobile devices due to their complex models and their disregard for system dynamics such as energy budgets. In this paper, we propose AdaEnlight, an energy-aware low-light video stream enhancement system for mobile devices. It achieves real-time video enhancement with competitive visual quality while adapting its runtime behavior to platform-imposed dynamic energy budgets. We report extensive experiments on diverse datasets, scenarios, and platforms and demonstrate the superiority of AdaEnlight compared with state-of-the-art low-light image and video enhancement solutions. https://doi.org/10.1145/3569464
2023 · Sicong Liu et al. · Generative AI (Text, Image, Music, Video) · Biosensors & Physiological Monitoring · Context-Aware Computing · UbiComp
Genie in the Model: Automatic Generation of Human-in-the-Loop Deep Neural Networks for Mobile Applications
Advances in deep neural networks (DNNs) have fostered a wide spectrum of intelligent mobile applications, from voice assistants on smartphones to augmented reality on smart glasses. To deliver high-quality services, these DNNs must operate on resource-constrained mobile platforms and yield consistent performance in open environments. However, DNNs are notoriously resource-intensive and often suffer performance degradation in real-world deployments. Existing research strives to optimize the resource-performance trade-off of DNNs by compressing the model without notably compromising its inference accuracy. Accordingly, the accuracy of these compressed DNNs is bounded by that of the original ones, leading to more severe accuracy drops in challenging yet common scenarios such as low resolution, small object size, and motion blur. In this paper, we propose to push the frontier of the DNN performance-resource trade-off by introducing human intelligence as a new design dimension. To this end, we explore human-in-the-loop DNNs (H-DNNs) and their automatic performance-resource optimization. We present H-Gen, an automatic H-DNN compression framework that incorporates human participation as a new hyperparameter for accurate and efficient DNN generation. It involves novel hyperparameter formulation, metric calculation, and search strategies in the context of automatic H-DNN generation. We also propose human participation mechanisms for three common DNN architectures to showcase the feasibility of H-Gen. Extensive experiments on twelve categories of challenging samples with three common DNN structures demonstrate the superiority of H-Gen in terms of the overall trade-off between performance (accuracy, latency) and resources (storage, energy, human labour). https://dl.acm.org/doi/10.1145/3580815
2023 · Yanfei Wang et al. · Human-LLM Collaboration · AI-Assisted Decision-Making & Automation · AutoML Interfaces · UbiComp
VPRNet: Voxel-based Efficient and Partial-to-Partial Point Cloud Registration on Mobile Devices
With the popularity of embedded devices such as LiDAR sensors and depth cameras, the resulting point clouds have become the main data format for representing the 3D world and have spawned various smart mobile applications. A key technology enabling these applications to furnish high-quality services is real-time point cloud registration on mobile devices, which synthesizes a complete model or a large-scale scene from multiple partial scans, delivering increased sensing range, faster 3D reconstruction, and more robust robot navigation. Unfortunately, the performance of these applications is limited by the scale and partial loss of raw point cloud frames. Existing solutions for point cloud registration are difficult to deploy on mobile devices due to their complex models and their assumption of point cloud pairs with large overlap, which cause significant delay and inaccuracy. This paper proposes VPRNet, the first voxel-based registration solution that achieves real-time partial-to-partial registration with competitive registration quality while being more advantageous for large-scale point clouds on mobile devices. We conduct real-world experiments and extensive simulations across various datasets and platforms to validate the efficacy of VPRNet and compare its performance with state-of-the-art approaches.
2023 · Zihao Yin et al. · Context-Aware Computing · Ubiquitous Computing · MobileHCI
Task Execution Quality Maximization for Mobile Crowdsourcing in Geo-Social Networks
With the rapid development of smart devices and high-quality wireless technologies, mobile crowdsourcing (MCS) has been drawing increasing attention for its great potential in collaboratively completing complicated tasks on a large scale. A key issue for successful MCS is participant recruitment, where an MCS platform directly recruits suitable crowd participants to execute outsourced tasks by physically traveling to specified locations. Recently, a novel recruitment strategy, Word-of-Mouth (WoM)-based MCS, has emerged to improve recruitment effectiveness by fully exploiting users' mobility traces and social relationships on geo-social networks. Against this background, we study a novel problem, Expected Task Execution Quality Maximization (ETEQM) for MCS in geo-social networks, which seeks a subset of seed users that maximizes the expected task execution quality of all recruited participants under a given incentive budget. To characterize the MCS task propagation process over geo-social networks, we first adopt a propagation tree structure to model the autonomous recruitment between referrers and referrals. Based on this model, we formalize the task execution quality and devise a novel incentive mechanism that harnesses the business strategy of multi-level marketing. We formulate the ETEQM problem as a combinatorial optimization problem and analyze its NP-hardness and high-dimensional characteristics. Based on a cooperative co-evolution framework, we propose a divide-and-conquer approach named ETEQM-CC. We conduct extensive simulation experiments and a case study, verifying the effectiveness of our proposed approach.
2021 · Liang Wang et al. · Crowds and Collaboration · CSCW
Human-Machine Cooperative Video Anomaly Detection
Detecting anomalous events in video sequences remains a challenge in computer vision due to heavy object occlusions, varying crowd densities, and complex situations. To address this, we propose a novel human-machine cooperative approach that uses human feedback on anomaly confirmation to inform and enhance video anomaly detection. Specifically, we analyze the spatio-temporal characteristics of sequential video frames from the appearance and motion perspectives, from which spatial and temporal features are identified and extracted. We then develop a convolutional autoencoder network that computes an abnormality score based on reconstruction errors. In this process, a group of experts provides human feedback on a certain proportion of classified frames, which is incorporated into the model, along with the final judgment of event anomalies for training and classification. The proposed approach is evaluated on three publicly available surveillance datasets, showing improved accuracy and competitive performance (93.7% AUC) with respect to the best-performing state-of-the-art approaches (90.6% AUC). To the best of our knowledge, this approach has not been previously explored.
2020 · Fan Yang et al. · Human-AI Collaboration / Images in AI · CSCW
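The reconstruction-error scoring step described above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's convolutional autoencoder: `reconstruct` is a hypothetical stand-in for the trained model, and the min-max normalisation is a common convention for turning raw errors into [0, 1] scores.

```python
import math

def abnormality_scores(frames, reconstruct):
    """Per-frame abnormality scores from L2 reconstruction error.
    A well-trained autoencoder reconstructs normal frames accurately,
    so large errors suggest anomalies.  Scores are min-max normalised
    into [0, 1]; high-scoring frames could then be routed to human
    experts for confirmation (the human-in-the-loop step)."""
    errors = [
        math.sqrt(sum((x - y) ** 2 for x, y in zip(f, reconstruct(f))))
        for f in frames
    ]
    lo, hi = min(errors), max(errors)
    return [(e - lo) / (hi - lo + 1e-8) for e in errors]

# Toy demo: a "model" that reconstructs every frame as zeros, so the
# frame with the most energy gets the highest abnormality score.
frames = [[0.0] * 16, [1.0] * 16, [2.0] * 16]
scores = abnormality_scores(frames, lambda f: [0.0] * len(f))
```

In the paper's pipeline the scores feed a threshold that decides which frames are shown to the expert group, whose judgments are folded back into training.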
CrowdNavi: Last-mile Outdoor Navigation for Pedestrians Using Mobile Crowdsensing
Navigation services using digital maps make travel much easier. However, these services often fail to provide specific routes to destinations that lack micro-level data in digital maps, such as a small laundry store in a shopping area. In this paper, we propose CrowdNavi, a last-mile outdoor navigation service that uses crowdsourcing based on a guider-follower model. First, we collect guiders' trajectories and images of reference objects along them. To guide followers by these reference objects, we design a Semantic Crowd Navigation model that generates fine-grained maps by integrating the guiders' data. Second, we design two score functions to fulfill the two main requirements and to plan hints. Last, we provide context-aware navigation for followers based on the fine-grained map and detect deviations in real time. Real-world experiments conducted in three different areas show that our proposed system, in combination with images of reference objects, is effective.
2018 · Qianru Wang et al. · Urban Spaces · CSCW
To Cross or Not to Cross: Urgency-Based External Warning Displays on Autonomous Vehicles to Improve Pedestrian Crossing Safety
Autonomous vehicles (AVs) may be able to show visual displays on their external surfaces to support pedestrian communication with the AV. Pedestrian crossing at uncontrolled locations is safety-critical, and clear communication between the pedestrian and the AV is important in this situation. However, research to date has not established how the AV should communicate with pedestrians. We designed two sets of warnings for AVs based on the perception of warning urgency. Each set consisted of three warnings that differed in color and flashing pattern and indicated distinct safety-related information. A survey investigated how people make crossing decisions, how they interpret the warnings within and outside of the driving context, and their perceived warning compliance. Results showed that people were risk-averse in crossing and that cars with warning displays were perceived as more urgent. This paper contributes a research-based approach to designing warnings that improve pedestrian crossing safety.
2018 · Yeti Li et al. · External HMI (eHMI) — Communication with Pedestrians & Cyclists · AutoUI