Libertas: Privacy-Preserving Collaborative Computation for Decentralised Personal Data Stores
Rui Zhao et al. · CSCW 2025 · Topics: Designing for Privacy

Data and their processing have become indispensable to our society. Insights drawn from collective data make invaluable contributions to scientific, societal and communal research and business. However, growing worries about privacy and data misuse have prompted the emergence of decentralised personal data stores (PDS) such as Solid. Existing PDS frameworks nonetheless face challenges in ensuring data privacy when performing collective computation that combines data from multiple users. At a glance, Secure Multi-Party Computation (MPC) offers input secrecy while performing collective computation without relying on any single party; yet issues emerge when MPC is applied directly in the context of PDS, particularly because of key factors such as autonomy and decentralisation. In this work, we discuss the essence of this issue, identify a potential solution, and introduce a modular system architecture, Libertas, that integrates MPC with PDS like Solid without requiring protocol-level changes. We introduce a paradigm shift from an 'omniscient' view to an individual-based, user-centric view of trust and security, and discuss the threat model of Libertas. Two realistic collaborative data processing use cases, empowering gig workers and generating differentially private synthetic data, are used to evaluate both technical feasibility and empirical performance. Our experiments underscore Libertas' linear scalability and provide valuable insights into compute optimisations, advancing the state of the art in privacy-preserving data processing. By offering practical solutions for maintaining both individual autonomy and privacy in collaborative data processing environments, Libertas contributes significantly to the ongoing discourse on privacy protection in data-driven decision-making contexts.
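To make the MPC primitive concrete, the input-secrecy guarantee the abstract refers to can be illustrated with additive secret sharing, sketched below in Python. This is an illustrative toy, not Libertas code; the party counts, values, and modulus are assumptions for the example.

```python
import secrets

PRIME = 2**61 - 1  # toy field modulus; all arithmetic is mod this prime


def share(value: int, n_parties: int) -> list[int]:
    """Split a private input into n additive shares that sum to value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares


def reconstruct(shares: list[int]) -> int:
    """Recover a shared value by summing all shares mod PRIME."""
    return sum(shares) % PRIME


# Three data owners (e.g., gig workers pooling wage data) each split their
# private value among three compute parties; no single party sees any input.
inputs = [120, 250, 80]
all_shares = [share(v, 3) for v in inputs]

# Each compute party locally sums the one share it holds from every owner...
partials = [sum(owner[p] for owner in all_shares) % PRIME for p in range(3)]

# ...and only the recombined partial sums reveal the aggregate, never an input.
assert reconstruct(partials) == sum(inputs)  # 450
```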
Access Denied: Meaningful Data Access for Quantitative Algorithm Audits
Juliette Zaccour et al., University of Oxford, Oxford Internet Institute · CHI 2025 · Topics: Explainable AI (XAI); AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability

Independent algorithm audits hold the promise of bringing accountability to automated decision-making. However, third-party audits are often hindered by access restrictions, forcing auditors to rely on limited, low-quality data. To study how these limitations impact research integrity, we conduct audit simulations on two realistic case studies for recidivism and healthcare coverage prediction. We examine the accuracy of estimating group parity metrics across three levels of access: (a) aggregated statistics, (b) individual-level data with model outputs, and (c) individual-level data without model outputs. Despite selecting one of the simplest tasks for algorithmic auditing, we find that data minimization and anonymization practices can strongly increase error rates on individual-level data, leading to unreliable assessments. We discuss implications for independent auditors, as well as potential avenues for HCI researchers and regulators to improve data access and enable both reliable and holistic evaluations.
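For reference, the group parity metrics the simulations estimate are straightforward to compute when the auditor has individual-level data with model outputs. The sketch below (hypothetical field names and data, not the paper's code) computes a demographic parity gap; under aggregated or anonymised access these per-group rates must be estimated instead, which is where the error rates the paper reports arise.

```python
from collections import defaultdict


def demographic_parity_gap(records: list[tuple[str, int]]) -> float:
    """Largest difference in positive-prediction rates across groups.

    Each record is a (group, model_output) pair with model_output in {0, 1}.
    """
    positives: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for group, output in records:
        totals[group] += 1
        positives[group] += output
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())


# With full individual-level access the metric is exact.
records = [("A", 1), ("A", 0), ("A", 1), ("B", 0), ("B", 0), ("B", 1)]
print(demographic_parity_gap(records))  # 2/3 - 1/3 ≈ 0.33
```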
Governance of Generative AI in Creative Work: Consent, Credit, Compensation, and Beyond
Lin Kyi et al., Max Planck Institute for Security and Privacy · CHI 2025 · Topics: Generative AI (Text, Image, Music, Video); AI Ethics, Fairness & Accountability

Since the emergence of generative AI, creative workers have spoken up about the career-based harms they have experienced arising from this new technology. A common theme in these accounts of harm is that generative AI models are trained on workers' creative output without their consent and without giving credit or compensation to the original creators. This paper reports findings from 20 interviews with creative workers in three domains: visual art and design, writing, and programming. We investigate the gaps between current AI governance strategies, what creative workers want out of generative AI governance, and the nuanced role of creative workers' consent, compensation and credit for training AI models on their work. Finally, we make recommendations for how generative AI can be governed and how operators of generative AI systems might more ethically train models on creative output in the future.
The Interaction Layer: An Exploration for Co-Designing User-LLM Interactions in Parental Wellbeing Support Systems
Sruthi Viswanathan et al., University of Oxford · CHI 2025 · Topics: Human-LLM Collaboration; Participatory Design

Parenting brings emotional and physical challenges, from balancing work, childcare, and finances to coping with exhaustion and limited personal time. Yet one in three parents never seeks support. AI systems potentially offer stigma-free, accessible, and affordable solutions, yet user adoption often fails due to issues with explainability and reliability. To see whether these issues could be addressed through co-design, we developed and tested NurtureBot, a wellbeing support assistant for new parents. 32 parents co-designed the system through the Asynchronous Remote Communities method, identifying the key challenge as achieving a "successful chat." As part of co-design, parents role-played as NurtureBot, rewriting its dialogues to improve user understanding, control, and outcomes. The refined prototype, featuring an Interaction Layer, was evaluated by 32 initial and 46 new parents, showing improved user experience and usability, with a final CUQ score of 91.3/100, demonstrating successful interaction patterns. Our process revealed useful interaction design lessons for effective AI parenting support.
"You are you and the app. There's nobody else.": Building Worker-Designed Data Institutions within Platform HegemonyInformation asymmetries create extractive, often harmful relationships between platform workers (e.g., Uber or Deliveroo drivers) and their algorithmic managers. Recent HCI studies have put forward more equitable platform designs but leave open questions about the social and technical infrastructures required to support them without the cooperation of platforms. We conducted a participatory design study in which platform workers deconstructed and re-imagined Uber's schema for driver data. We analyzed the data structures and social institutions participants proposed, focusing on the stakeholders, roles, and strategies for mitigating conflicting interests of privacy, personal agency, and utility. Using critical theory, we reflected on the capability of participatory design to generate bottom-up collective data infrastructures. Based on the plurality of alternative institutions participants produced and their aptitude to navigate data stewardship decisions, we propose user-configurable tools for lightweight data institution building, as an alternative to redesigning existing platforms or delegating control to centralized trusts.2023JSJake M L Stein et al.University of OxfordIoT Device PrivacyParticipatory DesignCHI
Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation
Nitin Agrawal et al., University of Oxford · CHI 2021 · Topics: Privacy by Design & User Control; Privacy Perception & Decision-Making; Smart Home Privacy & Security

Homomorphic encryption, secure multi-party computation, and differential privacy are part of an emerging class of Privacy Enhancing Technologies which share a common promise: to preserve privacy whilst also obtaining the benefits of computational analysis. Due to their relative novelty, complexity, and opacity, these technologies provoke a variety of novel questions for design and governance. We interviewed researchers, developers, industry leaders, policymakers, and designers involved in their deployment to explore motivations, expectations, perceived opportunities and barriers to adoption. This provided insight into several pertinent challenges facing the adoption of these technologies, including: how they might make a nebulous concept like privacy computationally tractable; how to make them more usable by developers; and how they could be explained and made accountable to stakeholders and wider society. We conclude with implications for the development, deployment, and responsible governance of these privacy-preserving computation techniques.
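Of the three technologies the interviews cover, differential privacy is the most compact to illustrate. Below is a minimal sketch of the textbook Laplace mechanism (a standard construction, not code from this study; the query and parameters are assumptions for the example):

```python
import random


def laplace_release(true_answer: float, sensitivity: float, epsilon: float) -> float:
    """Release a numeric query answer with epsilon-differential privacy.

    The difference of two i.i.d. exponential draws with rate epsilon/sensitivity
    is a Laplace sample with scale sensitivity/epsilon, as the mechanism requires.
    """
    rate = epsilon / sensitivity
    noise = random.expovariate(rate) - random.expovariate(rate)
    return true_answer + noise


# Counting queries have sensitivity 1: adding or removing one person changes
# the true count by at most 1. Smaller epsilon means more noise, more privacy.
print(laplace_release(true_answer=42, sensitivity=1.0, epsilon=0.5))
```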
Strangers in the Room: Unpacking Perceptions of 'Smartness' and Related Ethical Concerns in the Home
William Seymour et al. · DIS 2020 · Topics: AI Ethics, Fairness & Accountability; Smart Home Interaction Design; Smart Home Privacy & Security

The increasingly widespread use of 'smart' devices has raised multifarious ethical concerns regarding their use in domestic spaces. Previous work examining such ethical dimensions has typically either involved empirical studies of concerns raised by specific devices and use contexts, or alternatively expounded on abstract concepts like autonomy, privacy or trust in relation to 'smart homes' in general. This paper attempts to bridge these approaches by asking what features of smart devices users consider as rendering them 'smart' and how these relate to ethical concerns. Through a multimethod investigation including surveys with smart device users (n=120) and semi-structured interviews (n=15), we identify and describe eight types of smartness and explore how they engender a variety of ethical concerns including privacy, autonomy, and disruption of the social order. We argue that this middle ground, between concerns arising from particular devices and more abstract ethical concepts, can better anticipate potential ethical concerns regarding smart devices.
Informing the Design of Privacy-Empowering Tools for the Connected Home
William Seymour et al., University of Oxford · CHI 2020 · Topics: Privacy by Design & User Control; Smart Home Privacy & Security

Connected devices in the home represent a potentially grave new privacy threat due to their unfettered access to the most personal spaces in people's lives. Prior work has shown that despite concerns about such devices, people often lack sufficient awareness, understanding, or means of taking effective action. To explore the potential for new tools that support such needs directly we developed Aretha, a privacy assistant technology probe that combines a network disaggregator, personal tutor, and firewall, to empower end-users with both the knowledge and mechanisms to control disclosures from their homes. We deployed Aretha in three households over six weeks, with the aim of understanding how this combination of capabilities might enable users to gain awareness of data disclosures by their devices, form educated privacy preferences, and to block unwanted data flows. The probe, with its novel affordances—and its limitations—prompted users to co-adapt, finding new control mechanisms and suggesting new approaches to address the challenge of regaining privacy in the connected home.
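As a rough sketch of the disaggregator-plus-firewall combination such a probe provides (hypothetical device table and rules, not Aretha's implementation): flows are attributed to household devices, and user-authored rules block unwanted destinations.

```python
# Hypothetical mapping from source MAC addresses to friendly device names.
DEVICES = {"aa:bb:cc:dd:ee:ff": "smart_tv", "11:22:33:44:55:66": "speaker"}

# User-authored block rules: (device, destination host) pairs to drop.
RULES = {("smart_tv", "ads.example.net")}


def attribute(flow: dict) -> str:
    """Disaggregate: name the home device that originated this flow."""
    return DEVICES.get(flow["src_mac"], "unknown")


def allow(flow: dict) -> bool:
    """Firewall decision: permit unless a rule blocks this device/host pair."""
    return (attribute(flow), flow["dest_host"]) not in RULES


flow = {"src_mac": "aa:bb:cc:dd:ee:ff", "dest_host": "ads.example.net"}
print(allow(flow))  # False: the smart TV's flow to the ad host is dropped
```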
Self-Control in Cyberspace: Applying Dual Systems Theory to a Review of Digital Self-Control Tools
Ulrik Lyngs et al., University of Oxford · CHI 2019 · Topics: Chronic Disease Self-Management (Diabetes, Hypertension, etc.); Notification & Interruption Management

Many people struggle to control their use of digital devices. However, our understanding of the design mechanisms that support user self-control remains limited. In this paper, we make two contributions to HCI research in this space: first, we analyse 367 apps and browser extensions from the Google Play, Chrome Web, and Apple App stores to identify common core design features and intervention strategies afforded by current tools for digital self-control. Second, we adapt and apply an integrative dual systems model of self-regulation as a framework for organising and evaluating the design features found. Our analysis aims to help the design of better tools in two ways: (i) by identifying how, through a well-established model of self-regulation, current tools overlap and differ in how they support self-control; and (ii) by using the model to reveal underexplored cognitive mechanisms that could aid the design of new tools.
Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making
Michael Veale et al., University College London · CHI 2018 · Topics: AI-Assisted Decision-Making & Automation; AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability

Calls for heightened consideration of fairness and accountability in algorithmically-informed public decisions—like taxation, justice, and child protection—are now commonplace. How might designers support such human values? We interviewed 27 public sector machine learning practitioners across 5 OECD countries regarding challenges understanding and imbuing public values into their work. The results suggest a disconnect between organisational and institutional realities, constraints and needs, and those addressed by current research into usable, transparent and 'discrimination-aware' machine learning—absences likely to undermine practical initiatives unless addressed. We see design opportunities in this disconnect, such as in supporting the tracking of concept drift in secondary data sources, and in building usable transparency tools to identify risks and incorporate domain knowledge, aimed both at managers and at the 'street-level bureaucrats' on the frontlines of public service. We conclude by outlining ethical challenges and future directions for collaboration in these high-stakes applications.
X-Ray Refine: Supporting the Exploration and Refinement of Information Exposure Resulting from Smartphone Apps
Max Van Kleek et al., University of Oxford · CHI 2018 · Topics: Algorithmic Transparency & Auditability; Privacy by Design & User Control; IoT Device Privacy

Most smartphone apps collect and share information with various first and third parties; yet, such data collection practices remain largely unbeknownst to, and outside the control of, end-users. In this paper, we seek to understand the potential for tools to help people refine their exposure to third parties resulting from their app usage. We designed an interactive, focus-plus-context display called X-Ray Refine (Refine) that uses models of over 1 million Android apps to visualise a person's exposure profile based on their durations of app use. To support exploration of mitigation strategies, Refine can simulate actions such as app usage reduction, removal, and substitution. A lab study of Refine found participants achieved a high-level understanding of their exposure, and identified data collection behaviours that violated both their expectations and privacy preferences. Participants also devised bespoke strategies to achieve privacy goals, identifying the key barriers to achieving them.
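A minimal sketch of the exposure-profile idea (hypothetical app-to-host mapping; the paper's models additionally weight exposure by duration of use): a user's exposure is the set of third parties reachable through their apps, and a simulated removal shrinks it.

```python
# Hypothetical mapping from installed apps to third-party hosts they contact.
APP_HOSTS = {
    "weather": {"ads.example.net", "analytics.example.com"},
    "fitness": {"analytics.example.com", "telemetry.example.org"},
    "news": {"ads.example.net"},
}


def exposure(installed: set[str]) -> set[str]:
    """Union of third-party hosts reachable through the user's installed apps."""
    return set().union(*(APP_HOSTS[app] for app in installed))


before = exposure({"weather", "fitness", "news"})
after = exposure({"weather", "news"})  # simulate removing the fitness app
print(before - after)  # {'telemetry.example.org'}: no longer contacted
```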
'It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions
Reuben Binns et al., University of Oxford · CHI 2018 · Topics: Explainable AI (XAI); AI Ethics, Fairness & Accountability; Algorithmic Transparency & Auditability

Data-driven decision-making consequential to individuals raises important questions of accountability and justice. Indeed, European law provides individuals limited rights to 'meaningful information about the logic' behind significant, autonomous decisions such as loan approvals, insurance quotes, and CV filtering. We undertake three experimental studies examining people's perceptions of justice in algorithmic decision-making under different scenarios and explanation styles. Dimensions of justice previously observed in response to human decision-making appear similarly engaged in response to algorithmic decisions. Qualitative analysis identified several concerns and heuristics involved in justice perceptions, including arbitrariness, generalisation, and (in)dignity. Quantitative analysis indicates that explanation styles primarily matter to justice perceptions only when subjects are exposed to multiple different styles; under repeated exposure to one style, scenario effects obscure any explanation effects. Our results suggest there may be no 'best' approach to explaining algorithmic decisions, and that reflection on their automated nature both implicates and mitigates justice dimensions.