Will You Trust This TLS Certificate? Perceptions of People Working in IT [ACSAC 2019]

   Authors: Martin Ukrop, Lydia Kraus, Vashek Matyas and Heider Wahsheh

 Primary contact: Martin Ukrop <mukrop@mail.muni.cz>

 Conference: ACSAC 2019

Pre-print PDF   Artifacts   Presentation   BibTeX

@InProceedings{2019-acsac-ukrop,
  Title         = {Will You Trust This TLS Certificate? Perceptions of People Working in IT},
  Author        = {Martin Ukrop and Lydia Kraus and Vashek Matyas and Heider Ahmad Mutleq Wahsheh},
  BookTitle     = {35th Annual Computer Security Applications Conference (ACSAC'2019)},
  Year          = {2019},
  Publisher     = {ACM},
  crocsweb      = {https://crocs.fi.muni.cz/papers/acsac2019},
  Keywords      = {usablesec, Red-Hat},
}

Abstract

Flawed TLS certificates are not uncommon on the Internet. While they signal a potential issue, in most cases they have benign causes (e.g., misconfiguration or even deliberate deployment). This adds fuzziness to the decision on whether to trust a connection or not. Little is known about how IT professionals perceive flawed certificates, even though their decisions impact high numbers of end users. Moreover, it is unclear how much the content of error messages and documentation influences these perceptions.

To shed light on these issues, we observed 75 attendees of an industrial IT conference investigating different certificate validation errors. Furthermore, we focused on the influence of re-worded error messages and redesigned documentation. We find that people working in IT have very nuanced opinions regarding the tested certificate flaws, with trust decisions being far from binary. The self-signed and the name-constrained certificates seem to be over-trusted (the latter also being poorly understood). We show that even small changes in existing error messages and documentation can positively influence resource use, comprehension, and trust assessment. Our conclusions can be directly used in practice by adopting the re-worded error messages and documentation.
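One of the flaws examined in the study, the self-signed certificate, is easy to reproduce with OpenSSL. The sketch below is illustrative only and is not part of the study materials; the filenames and the subject name are hypothetical.

```shell
# Create a self-signed certificate (filenames and subject are hypothetical).
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout key.pem -out cert.pem -days 7 \
    -subj "/CN=example.test" 2>/dev/null

# Validation against the trust store fails: no trusted CA signed this
# certificate, so OpenSSL reports a self-signed certificate error.
openssl verify cert.pem
```

Participants in the study faced errors like this one (among others) and rated how much they would trust the connection.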

The content of this research was partially covered in the DevConf 2019 talk, which can be seen below.

The artifacts contain the full experimental setup (as described in Section 2.1 of the paper) and the complete anonymized dataset underlying the evaluation presented in Sections 3 and 4.

The experimental setup contains the documents accompanying the task: the informed consent, pre-task questionnaire, task description, trust scales, and the list of questions posed during the post-task interview (all as PDFs). We further include the custom website with certificate validation documentation for the “redesigned” condition (static HTML). While working on the task, participants in the “redesigned” condition could access this website via a link in the redesigned error messages. Furthermore, we provide the software with which the participants interacted; it contains the displayed error messages and the validated certificates. These artifacts are available both individually and incorporated in a snapshot of the virtual machine used in the experiment (importable directly into VirtualBox).

The collected data is presented in a single dataset (SPSS format; PSPP can be used as a free alternative). It includes the analysis syntax files needed to obtain the numerical results presented in the paper. For each participant, the dataset contains: 1) pre-task questionnaire answers, 2) reported trust ratings, 3) sub-task timing, 4) information on whether they browsed the Internet, and 5) the assigned interview codes. Note that we do not publish the interview transcripts to preserve participant privacy.

Go to artifacts repository (gDrive)