Not as easy as just update: Survey of System Administrators and Patching Behaviours
Patching software theoretically leads to improvements, including security-critical changes, but it can also introduce new issues. For system administrators (sysadmins), new issues can negatively impact operations at their organisation. While mitigation options such as test environments exist, little is known about their prevalence, or about how contextual factors such as organisation size shape the practice of patch management. We surveyed 220 sysadmins engaged in patch management to investigate self-reported behaviours. We found that dedicated testing environments are not as prevalent as previously assumed. We also expand on known behaviours that sysadmins perform when facing a troublesome patch, such as employing a range of problem-solving behaviours to inform their patching decisions.
2024 · Adam D G Jenkins et al. · King's College London · Privacy by Design & User Control; Workplace Monitoring & Performance Tracking · CHI
Twitter has a Binary Privacy Setting, are Users Aware of How It Works?
Twitter accounts are public by default, but Twitter gives the option to create protected accounts, where only approved followers can see their tweets. The publicly visible information changes based on the account type, and the visibility of tweets depends solely on the poster's account type, which can cause unintended disclosures, especially when users interact. We surveyed 336 Twitter users to understand their awareness of account information visibility, as well as of tweet visibility when users interact. We find that our participants are aware of the visibility of their profile information and individual tweets. However, the visibility of followed topics, lists, and interactions with protected accounts is confusing. Only 31% of the participants were aware that a reply by a public account to a protected account's tweet would be publicly visible. Surprisingly, having a protected account does not result in a better understanding of account information or tweet visibility.
2023 · Dilara Kekulluoglu et al. · Privacy · CSCW
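The visibility rule this abstract describes can be stated as a one-line predicate. A minimal sketch, assuming only the rule as summarised above (function and parameter names are illustrative, not from the paper or the Twitter API):

```python
# Toy model of the rule the abstract describes: whether a tweet (or a
# reply) is publicly visible depends solely on the *poster's* account
# type, not on whose tweet is being replied to.

def tweet_is_public(poster_is_protected: bool) -> bool:
    """A tweet is publicly visible iff its author's account is public."""
    return not poster_is_protected

# A public account replying to a protected account's tweet: the reply
# itself is still publicly visible -- the finding only 31% of
# participants were aware of.
reply_visible = tweet_is_public(poster_is_protected=False)
print(reply_visible)  # True
```

The point of the sketch is that the protected account being replied to never enters the predicate, which is exactly what makes the behaviour counter-intuitive.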
Understanding Privacy Switching Behaviour on Twitter
Changing a Twitter account's privacy setting between public and protected changes the visibility of past tweets. By inspecting the privacy setting of more than 100K Twitter users over 3 months, we noticed that over 40% of those users changed their privacy setting at least once, with around 16% changing it over 5 times. This observation motivated us to explore the reasons why people switch their privacy settings. We studied these switching phenomena quantitatively, by comparing the tweeting behaviour of users when public versus protected, and qualitatively, using two follow-up surveys (n=100, n=324) to understand the potential reasoning behind the observed behaviours. Our quantitative analysis shows that users who switch privacy settings mention others and share hashtags more when their setting is public. Our surveys highlighted that users turn protected to share personal content and regulate boundaries, while they turn public to interact with others in ways the protected setting prevents.
2022 · Dilara Kekulluoglu et al. · University of Edinburgh · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
Recruiting Participants With Programming Skills: A Comparison of Four Crowdsourcing Platforms and a CS Student Mailing List
Reliably recruiting participants with programming skills is an ongoing challenge for empirical studies involving software development technologies, often leading to the use of crowdsourcing platforms and computer science (CS) students. In this work, we use five existing survey instruments to explore the programming skills, privacy and security attitudes, and secure development self-efficacy of participants from a CS student mailing list and four crowdsourcing platforms (Appen, Clickworker, MTurk, and Prolific). We recruited 613 participants who claimed to have programming skills and assessed the recruitment channels regarding cost, quality, programming skills, and privacy and security attitudes. We find that 27% of crowdsourcing participants, 40% of crowdsourcing participants who self-report to be developers, and 89% of CS students answered all programming skill questions correctly. CS students were the most cost-effective recruitment channel, and rated their secure development self-efficacy lower than crowdsourcing participants did.
2022 · Mohammad Tahaei et al. · University of Bristol · Crowdsourcing Task Design & Quality Control; Computational Methods in HCI · CHI
A Case Study of Phishing Incident Response in an Educational Organization
Malicious communications aimed at tricking employees are a serious threat for organisations, necessitating the creation of procedures and policies for how to quickly respond to ongoing attacks. While automated measures provide some protection, they cannot completely protect an organisation. In this case study, we use interviews and observations to explore the processes staff at a large university use when handling reports of malicious communication, including how the help desk processes reports, who they escalate them to, and how the teams who manage protections such as firewalls and mail relays use reports to improve defences. We found that the process and work patterns form a distributed cognitive process requiring multiple distinct teams with narrow system access and tacit knowledge. Sudden large campaigns were found to overwhelm the help desk with reports, greatly impacting staff workflow and hindering both the effective application of mitigations and the potential for learning. We detail potential improvements to the current ticketing system, and reflect on ITIL, the framework of best practices that informed the full process.
2021 · Kholoud Althobaiti et al. · Privacy and Security · CSCW
Security Notifications in Static Analysis Tools: Developers' Attitudes, Comprehension, and Ability to Act on Them
Static analysis tools (SATs) have the potential to assist developers in finding and fixing vulnerabilities in the early stages of software development, requiring them to be able to understand and act on tools' notifications. To understand how helpful such SAT guidance is to developers, we ran an online experiment (N=132) where participants were shown four vulnerable code samples (SQL injection, hard-coded credentials, encryption, and logging sensitive data) along with SAT guidance, and were asked to indicate the appropriate fix. Participants had a positive attitude towards SAT notifications, and particularly liked the example solutions and vulnerable code samples. Seeing SAT notifications also led to more detailed open-ended answers and slightly improved code-correction answers. Still, most SAT (SpotBugs 67%, SonarQube 86%) and Control (96%) participants answered at least one code-correction question incorrectly. Prior software development experience, perceived vulnerability severity, and answer confidence all positively impacted answer accuracy.
2021 · Mohammad Tahaei et al. · University of Edinburgh · Explainable AI (XAI); Algorithmic Transparency & Auditability; Privacy by Design & User Control · CHI
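For readers unfamiliar with the first vulnerability class in this study, the kind of fix SAT guidance points developers towards looks like the following. This is a minimal illustrative sketch, not a sample from the paper; the schema and values are made up, and Python's `sqlite3` stands in for whatever database API the study's samples used:

```python
import sqlite3

# Sketch of the SQL-injection class of fix: never interpolate user input
# into a SQL string; bind it as a parameter instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled string

# Vulnerable: string formatting lets the input rewrite the query, so the
# WHERE clause becomes: name = 'alice' OR '1'='1' -- always true.
vulnerable = conn.execute(
    "SELECT role FROM users WHERE name = '%s'" % user_input
).fetchall()  # returns every row: injection succeeded

# Fixed: a '?' placeholder makes the driver treat the input as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()  # returns no rows: no user is literally named that string

print(vulnerable, safe)
```

The contrast between the two queries is the kind of "example solution" participants in the study reported finding most useful in SAT notifications.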
Privacy Champions in Software Teams: Understanding Their Motivations, Strategies, and Challenges
Software development teams are responsible for making and implementing software design decisions that directly impact end-user privacy, a challenging task to do well. Privacy Champions, people who strongly care about advocating privacy, play a useful role in supporting privacy-respecting development cultures. To understand their motivations, challenges, and strategies for protecting end-user privacy, we conducted 12 interviews with Privacy Champions in software development teams. We find that common barriers to implementing privacy in software design include: negative privacy culture, internal prioritisation tensions, limited tool support, unclear evaluation metrics, and technical complexity. To promote privacy, Privacy Champions regularly use informal discussions, management support, communication among stakeholders, and documentation and guidelines. They perceive code reviews and practical training as more instructive than general privacy awareness and on-boarding training. Our study is a first step towards understanding how Privacy Champions work to improve their organisation's privacy approaches and the privacy of end-user products.
2021 · Mohammad Tahaei et al. · University of Edinburgh · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
I don't need an expert! Making URL phishing features human comprehensible
Judging the safety of a URL is something that even security experts struggle to do accurately without additional information. In this work, we aim to make experts' tools accessible to non-experts and to assist general users in judging the safety of URLs by providing them with a usable report based on the information professionals use. We designed the report by iterating with 8 focus groups made up of end users, HCI experts, and security experts, to ensure that the report was usable and that its information was interpreted accurately. We also conducted an online evaluation with 153 participants to compare different report-length options. We find that the longer, comprehensive report allows users to accurately judge URL safety (93% accurate) and that summaries still provide benefit (83% accurate) compared to domain highlighting (65% accurate).
2021 · Kholoud Althobaiti et al. · The University of Edinburgh, Taif University · Algorithmic Transparency & Auditability; Privacy Perception & Decision-Making · CHI
RepliCueAuth: Validating the Use of a Lab-Based Virtual Reality Setup for Evaluating Authentication Systems
Evaluating novel authentication systems is often costly and time-consuming. In this work, we assess the suitability of using Virtual Reality (VR) to evaluate the usability and security of real-world authentication systems. To this end, we conducted a replication study and built a virtual replica of CueAuth [52], a recently introduced authentication scheme, and report on results from: (1) a lab-based in-VR usability study (N=20) evaluating user performance; (2) an online security study (N=22) evaluating the system's observation resistance through virtual avatars; and (3) a comparison between our results and those previously reported in the real-world evaluation. Our analysis indicates that VR can serve as a suitable test-bed for human-centred evaluations of real-world authentication schemes, but the VR technology used can have an impact on the evaluation. Our work is a first step towards augmenting the design and evaluation spectrum of authentication systems and offers groundwork for more research to follow.
2021 · Florian Mathis et al. · University of Glasgow, University of Edinburgh · Privacy by Design & User Control; Passwords & Authentication · CHI
Understanding Privacy-Related Questions on Stack Overflow
We analyse Stack Overflow (SO) to understand the challenges and confusions developers face while dealing with privacy-related topics. We apply topic modelling techniques to 1,733 privacy-related questions to identify topics, and then qualitatively analyse a random sample of 315 privacy-related questions. Identified topics include privacy policies, privacy concerns, access control, and version changes. Results show that developers do ask SO for support on privacy-related issues. We also find that platforms such as Apple and Google are defining privacy requirements for developers by specifying what "sensitive" information is and what types of information developers need to communicate to users (e.g. privacy policies). We also examine the accepted answers in our sample and find that 28% of them link to official documentation and more than half are answered by SO users without references to any external resources.
2020 · Mohammad Tahaei et al. · University of Edinburgh · Privacy by Design & User Control; Privacy Perception & Decision-Making · CHI
What is this URL's Destination? Empirical Evaluation of Users' URL Reading
Common anti-phishing advice tells users to mouse over links, look at the URL, and compare it to the expected destination, implicitly assuming that they are able to read the URL. To test this assumption, we conducted a survey with 1929 participants recruited from the Amazon Mechanical Turk and Prolific Academic platforms. Participants were shown 23 URLs with various URL structures. For each URL, participants were asked via a multiple-choice question where the URL would lead and how safe they would feel clicking on it. Using latent class analysis, participants were stratified by self-reported technology use. Participants were strongly biased towards answering that the URL would lead to the website of the organization whose name appeared in the URL, regardless of its position in the URL structure. The group with the highest technology use was only marginally better at URL reading.
2020 · Sara Albakry et al. · University of Edinburgh · Privacy by Design & User Control; Privacy Perception & Decision-Making; Dark Patterns Recognition · CHI
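The bias this abstract reports can be made concrete with a few lines of URL parsing. A minimal sketch using Python's standard `urllib.parse`; the URLs are invented examples, and the "last two labels" shortcut for the registrable domain is a deliberate simplification (real tools consult the Public Suffix List):

```python
from urllib.parse import urlsplit

# Only the host part of a URL determines the destination, and the
# registrable domain is read from the *right* end of the host -- the
# opposite of the left-to-right reading participants defaulted to.
urls = [
    "https://paypal.com.account-check.example/login",  # not PayPal
    "https://example.com/paypal.com/login",            # path, not host
    "https://www.paypal.com/signin",                   # actually PayPal
]
for url in urls:
    host = urlsplit(url).hostname
    # Naive registrable-domain guess: the last two host labels.
    registrable = ".".join(host.split(".")[-2:])
    print(f"{host} -> {registrable}")
```

In the first two URLs the familiar brand name appears prominently yet the destination is `example`-owned, which is exactly the structure that misled participants regardless of where the brand name sat.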