HuRST: Satisficing trust in human-robot teams

Project Overview
"Satisficing Trust in Human-Robot Teams" (HuRST) is an interdisciplinary research project that emerged from the EPSRC Robots for National Security Sandpit in May 2022. The project brings together leading researchers from the University of Birmingham, UCL, Loughborough University, and the University of Bristol, with expertise spanning robotics, machine learning, human factors, human-computer interaction, law, and criminology.
​
The Core Challenge
The development of human-robot systems in the emergency services promises enhanced safety and effectiveness, but it also raises critical questions about reliability, legality, and ethics. The central research question driving HuRST is: Is it possible to develop and measure the levels of trust needed in robots and humans, so that such teams behave in a trustworthy manner both in terms of functionality and within legal, ethical, and operational constraints?

The Satisficing Trust Concept
A key innovation of HuRST is the concept of "satisficing trust", which recognizes that in practice trust is not a binary issue. Instead, satisficing trust refers to achieving a sufficient level of trust over time as mission requirements and circumstances change. This nuanced approach acknowledges that trust in human-robot teams must be dynamic and context-dependent, adapting to evolving operational needs.
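To make this concrete, the minimal sketch below (purely illustrative, not HuRST project code) treats satisficing trust as a sufficiency check: the current trust estimate is compared against a threshold that shifts with the mission phase and risk level. The phases, threshold values, and 0-to-1 trust scale are hypothetical choices for illustration only.

```python
from dataclasses import dataclass

# Illustrative sketch of "satisficing trust": instead of asking whether a
# robot is trusted (binary), ask whether the current trust estimate is
# *sufficient* for the demands of the current mission phase. The phases,
# thresholds, and trust scale below are hypothetical, not HuRST values.

@dataclass
class MissionContext:
    phase: str          # e.g. "search", "rescue", "evacuation"
    risk_level: float   # 0.0 (low) to 1.0 (high)

def required_trust(context: MissionContext) -> float:
    """Context-dependent sufficiency threshold: riskier phases demand more trust."""
    base = {"search": 0.4, "rescue": 0.6, "evacuation": 0.7}.get(context.phase, 0.5)
    return min(1.0, base + 0.2 * context.risk_level)

def trust_is_satisficed(trust_estimate: float, context: MissionContext) -> bool:
    """Trust is 'good enough' when the estimate meets the current threshold."""
    return trust_estimate >= required_trust(context)

# The same trust level can be sufficient in one phase but not in another.
print(trust_is_satisficed(0.55, MissionContext("search", risk_level=0.3)))      # True
print(trust_is_satisficed(0.55, MissionContext("evacuation", risk_level=0.8)))  # False
```

The point of the sketch is simply that sufficiency, not some fixed level of trust, is what matters, and that the bar for "sufficient" moves as the mission and its risks evolve.
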
My Role
I have contributed to the HuRST project through software development and experimental design. I conducted a computer-based study investigating temporal trust dynamics in human-robot teaming during high-stakes scenarios. The research examined how trust recovers when a robot declines to respond immediately to user requests, and whether providing explanations for such refusals helps restore trust over time. In this study, 38 participants played an interactive firefighting game alongside a robot teammate, in which trust violations occurred and were followed by different communication strategies.

The findings demonstrated that while both the baseline and explanation conditions produced significant initial drops in trust, providing an explanation led to substantially better trust recovery over time. Participants who received explanations from the robot recovered to near pre-violation trust levels, whereas those in the baseline condition showed only partial recovery. These results have direct implications for designing robot communication protocols in emergency scenarios.
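As a rough illustration of how such recovery can be quantified, the sketch below computes, for each condition, the fraction of trust lost at the violation that is regained by the end of the session. The ratings, the 1-to-7 scale, and the condition labels are placeholder values invented for the example, not the study's data.

```python
import statistics

# Hypothetical example data (NOT the study's ratings): trust ratings on a
# 1-7 scale, collected before the violation, immediately after it, and at
# the end of the recovery period, for the two communication conditions.
ratings = {
    "baseline":    {"pre": [6.1, 5.8, 6.0], "post_violation": [3.2, 3.5, 3.0], "end": [4.4, 4.6, 4.2]},
    "explanation": {"pre": [6.0, 5.9, 6.2], "post_violation": [3.4, 3.1, 3.3], "end": [5.6, 5.8, 5.7]},
}

def recovery_ratio(phases: dict) -> float:
    """Fraction of the trust lost at the violation that is regained by the end."""
    pre = statistics.mean(phases["pre"])
    post = statistics.mean(phases["post_violation"])
    end = statistics.mean(phases["end"])
    return (end - post) / (pre - post)

for condition, phases in ratings.items():
    print(f"{condition}: recovered {recovery_ratio(phases):.0%} of lost trust")
```

On these placeholder numbers, the explanation condition regains most of the lost trust while the baseline regains less than half, mirroring the qualitative pattern reported above.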

Building on this research, I am currently designing and conducting an advanced experimental study using the Bristol Digital Futures Institute's Reality Emulator, a cutting-edge VR CAVE environment. This study simulates a high-rise building scenario in which participants must locate a gas leak while working with a robot teammate. This immersive virtual environment allows for much more realistic and ecologically valid testing of human-robot team dynamics than screen-based studies, enabling deeper investigation of trust, communication, and collaboration under realistic spatial and temporal constraints.