Medium-Level of Trust
At a medium level of trust, you may have some idea of how well the system works for a test population, but if you do not know the characteristics of that population, it is unclear whether the system should be applied to your individual patient. At this point, you would want to seek out more information about the system and how it was trained. A system that does not report its level of certainty to the user is one reason to hold off on placing a high level of trust in it. Additionally, even if a system initially performs well, an inability to audit the system should caution against a high level of trust, as performance may not remain high over time.
High-Level of Trust
To have a high level of trust in an AI system, you should have confidence in how well it functions generally and for your patient specifically. The output should show you the system's level of certainty, and experts should be able to audit the system over time to ensure that it continues to function well. It must also be recognized that a high level of trust does not always match actual system performance. Automation bias is a tendency to implicitly trust an automated system even when there is no real evidence that the system is trustworthy.
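The two properties described here, reporting certainty alongside each output and keeping a record that experts can audit, can be made concrete in code. The sketch below is purely illustrative: `AuditablePredictor`, its `threshold`, and the incoming `risk_score` are all hypothetical names, standing in for whatever model and inputs a real system would use.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List, Tuple

@dataclass
class AuditablePredictor:
    """Hypothetical wrapper that reports certainty and keeps an audit trail."""
    threshold: float = 0.7                      # assumed decision cutoff
    audit_log: List[dict] = field(default_factory=list)

    def predict(self, patient_id: str, risk_score: float) -> Tuple[str, float]:
        # Return the label together with the model's certainty,
        # rather than a bare prediction.
        label = "high-risk" if risk_score >= self.threshold else "low-risk"
        certainty = risk_score if label == "high-risk" else 1.0 - risk_score
        # Append every decision to an audit log so experts can later
        # review whether performance is holding up over time.
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "patient": patient_id,
            "label": label,
            "certainty": round(certainty, 3),
        })
        return label, certainty

model = AuditablePredictor()
label, certainty = model.predict("patient-001", 0.82)
print(label, round(certainty, 2))   # high-risk 0.82
print(len(model.audit_log))         # 1
```

A clinician seeing "high-risk, certainty 0.82" can weigh that output against their own judgment, and an auditor reviewing `audit_log` entries against eventual outcomes can check whether the system's certainty remains well calibrated.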
Low-Level of Trust
A low level of trust can reflect either a system that you know very little about or a system that performs poorly. For an unfamiliar system, low trust is a reasonable starting point when adopting it, and trust can grow as you learn more about the system. If a system performs poorly, it should not be used. However, because of a lack of transparency, it can sometimes be difficult to know when a system is performing poorly. The inability to audit a system is a reason to hold a lower level of trust in it, especially if there was never evidence that the system performed well in the first place.