Overview

User trust is the user's belief that a system will support their tasks or goals appropriately in a given situation. Different aspects of user trust correlate with adoption of a system, how users interact with it (or not), and how much they comply with it. The goal of human-system interaction (HSI) is not to maximize trust: no system is perfect, and over-trust can be as dangerous as under-trust. The system should be made as trustworthy as possible, but users should trust it only insofar as that trustworthiness is achieved. User trust has complex interactions with user interface design, culture, age, attitude toward risk, situation awareness, training, workload, and usability. Trust is not a property of a system but a contextualized attitude of the user toward a specific system, given a set of tasks or goals in a particular environment.

A common way to measure user trust is through a survey, which measures an individual's conscious beliefs, perceptions, knowledge, and expectations. Other methods include physiological and behavioral measures, but only surveys probe what the user is consciously aware of. Best practice is to use a combination of methods. The Test Science team at IDA has developed, and is in the process of validating, the Trust in Automated Systems Test (TOAST). Additional scales that are already validated, or that are undergoing validation and have already produced promising results, are listed below.

Summary of Endorsed Scales

Scale Name                       | Acronym | Advantages          | Disadvantages                  | Subscales                  | Number of Items
Trust in Automated Systems Test  | TOAST   | Construct subscales | Currently undergoing validation | Understanding, Performance | 9

Trust in Automated Systems Test (TOAST)

TOAST provides a quick and easily administered assessment of whether people believe they understand the system and feel confident in its performance.

Administration

Instruct the respondent to read each statement carefully and indicate the extent to which they agree or disagree with each statement.

Survey

Key: U = Understanding subscale; P = Performance subscale.

Scoring

Each subscale is scored separately, resulting in two scores: one for Understanding and one for Performance. To calculate a subscale score, take the average of the responses to the items in that subscale. This process can be expressed formulaically as:

subscale score = (sum of responses to the subscale's items) / (number of items in the subscale)
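As a concrete sketch of this scoring procedure, the snippet below averages each subscale's items. The responses and the assignment of items to subscales are illustrative placeholders, not the published TOAST item key.

```python
# Sketch of TOAST-style subscale scoring: each subscale score is the mean
# of the responses to that subscale's items. Item groupings below are
# hypothetical placeholders, not the actual TOAST key.
def subscale_score(responses, item_indices):
    """Average the responses for the items belonging to one subscale."""
    return sum(responses[i] for i in item_indices) / len(item_indices)

# Hypothetical responses to the 9 items (e.g., on a 1-7 agreement scale).
responses = [6, 5, 7, 4, 6, 5, 6, 7, 5]

# Placeholder item-to-subscale assignment (illustrative only).
understanding_items = [0, 1, 2, 3]
performance_items = [4, 5, 6, 7, 8]

understanding = subscale_score(responses, understanding_items)  # 5.5
performance = subscale_score(responses, performance_items)      # 5.8
```

The two scores are kept separate rather than summed into a single total, matching the interpretation guidance below.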

Interpretation

Higher Performance scores indicate that the user trusts the system to help them perform their job duties. Higher Understanding scores indicate the user's confidence that their trust is well calibrated.

Reference

Wojton, H.M., Porter, D., Lane, S.T., Bieber, C., & Madhavan, P. (2020). Initial validation of the trust of automated systems test (TOAST). Journal of Social Psychology, 160(6), 735-750.

Additional Trust Scales

Razin, Y and Feigh, K.M. (2023). Converging Measures and an Emergent Model: A Meta-Analysis of Human-Automation Trust Questionnaires. [Manuscript submitted for publication]. School of Aerospace Engineering. Georgia Institute of Technology.

  • A meta-analysis of human-AI, human-computer, human-automation, and human-robot trust questionnaires, including a cross-survey subscale mapping and an assessment of the quality of many popular trust surveys.

Merritt, S.M. (2011). Affective processes in human-automation interactions. Human Factors, 53(4), 356-370.

  • This paper includes a short (6-item) "Trust" scale that loads onto a single factor and is well validated, including for repeated measures. The questionnaire items can be found in the Appendix (p. 367). Note: Trust here maps to Performance trust in TOAST.

Schaefer, K.E. (2016). Measuring trust in human-robot interactions: Development of the trust perception scale. In Robust intelligence and trust in autonomous systems (pp. 191-218). Boston, MA: Springer US.

  • This paper includes two different scales: a 14-item single-factor scale and a 40-item multi-factor scale. The surveys appear on pp. 213-214 (the 14-item scale is indicated by the b superscript) and are followed by instructions for use. These scales have some preliminary published validation, and more is ongoing.

McKnight, D.H., Carter, M., Thatcher, J.B., & Clay, P.F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems (TMIS), 2(2), 1-25.

  • This paper includes a well-validated multi-factor scale, available in Appendix B (p. 18). However, the scale is long at 39 items, and it captures not just trust but many of its factors and antecedents, in order to better understand why a user (dis)trusts a particular technology.