
Enterprise AI Analysis

Propensity Matters – An Empirical Analysis on the Importance of Trust for the Intention to Use Artificial Intelligence

This study examines the critical role of trust in Artificial Intelligence (AI) adoption, focusing on how perceived trustworthiness and an individual's propensity to trust influence the intention to use AI systems such as ChatGPT. Using a conceptual path model, the research finds that although trustworthiness factors and user predisposition significantly shape trust in AI, the influence of trust on the intention to use AI is weaker than anticipated, suggesting new directions for improving human-AI interaction.

Executive Impact: Key Findings at a Glance

Understand the core metrics driving enterprise AI adoption and user interaction, directly from the research.

  • Participants in the study: 105
  • Variance explained in trust in AI
  • Variance explained in intention to use AI

Deep Analysis & Enterprise Applications

Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.

Understanding Trust as a Foundation

The study begins by grounding its analysis in established definitions of trust, emphasizing it as an attitude where a trustor willingly becomes vulnerable to a trustee in situations of uncertainty (Lee & See, 2004). This vulnerability implies taking a risk, especially when direct monitoring or control of the trustee is absent. The paper distinguishes trust from confidence, noting that while both involve expectations, trust uniquely incorporates the acceptance of risk. In an increasingly complex technological landscape, trust functions as a heuristic, simplifying decision-making and substituting for direct control, particularly relevant for understanding human-AI interaction.

The Evolving Landscape of Trust in AI

Trust is acknowledged as a critical construct for human-machine interaction and essential for AI adoption. The research highlights that while existing models of trust in automation (Lee & See, 2004) serve as a basis, the autonomous capabilities and 'black-box' nature of AI systems like ChatGPT introduce novel risks and uncertainties. This necessitates a re-evaluation of how trust translates to AI. The study references the adaptation of trust into models like TAM-2 for AI acceptance, and underscores that a user's inherent "propensity to trust" significantly shapes their reliance on a system, alongside the AI's demonstrable trustworthiness in performance, process (transparency), and purpose (benevolence).

Study Approach and Key Statistical Results

This empirical investigation used a path model based on Lee and See's (2004) trust-in-automation model, extended following Solberg et al. (2022) and Körber (2019), with intention to use as the endogenous variable. ChatGPT, as the most popular AI system, served as the unit of analysis, with data collected from 105 students. After re-specification, the model was tested against robust fit indices. Key findings include the following (a minimal sketch of the hypothesized model follows the list):

  • Performance and Purpose significantly influenced Trust in AI.
  • Propensity to Trust showed a significant, albeit smaller, direct effect on Trust in AI.
  • Crucially, Propensity to Trust also directly influenced all three perceived trustworthiness factors (Performance, Process, Purpose).
  • Trust in AI significantly influenced Intention to Use, but the effect was weaker than hypothesized.
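
The hypothesized structure can be illustrated with a short path-model specification. This is a minimal sketch, not the authors' analysis code: it assumes the semopy package and a pandas DataFrame of scale scores with hypothetical column names (propensity, performance, process, purpose, trust, intention).

```python
# Minimal sketch (not the authors' code): the hypothesized path model written in
# lavaan-style syntax and estimated with the semopy package. Column names are
# hypothetical scale scores assumed to exist in survey_scores.csv.
import pandas as pd
import semopy

MODEL_DESC = """
# Propensity to trust shapes all three perceived trustworthiness facets
performance ~ propensity
process     ~ propensity
purpose     ~ propensity

# Trustworthiness facets and propensity predict trust in the AI
trust ~ performance + process + purpose + propensity

# Trust predicts the endogenous outcome: intention to use
intention ~ trust
"""

df = pd.read_csv("survey_scores.csv")   # hypothetical file of per-participant scale scores

model = semopy.Model(MODEL_DESC)
model.fit(df)                           # maximum-likelihood estimation
print(model.inspect())                  # path coefficients, standard errors, p-values
print(semopy.calc_stats(model))         # fit indices such as CFI and RMSEA
```

Estimating the regressions jointly in this way mirrors the structure described above: propensity to trust feeds the three trustworthiness facets, which in turn (together with propensity) predict trust in AI, and trust predicts intention to use.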

Challenging Assumptions and Future Directions

The study's discussion points to several critical insights. Contrary to prevailing assumptions, particularly regarding black-box models and Explainable AI (XAI), the "process" factor (related to transparency and explainability) did not significantly influence trust in AI within the ChatGPT context. This challenges the notion that XAI is always paramount for trust. The findings underscore the central role of an individual's propensity to trust, which not only directly influences overall trust but also shapes how users perceive the AI's trustworthiness factors. The weaker-than-expected link between trust in AI and intention to use prompts a re-evaluation of whether trust, or perhaps 'confidence,' is the most appropriate construct for predicting AI adoption. Ultimately, the research suggests a shift in focus for trustworthy AI development from explainability towards prioritizing reliability and performance.

Enterprise Process Flow: AI Trust & Usage Path

User Propensity to Trust → Perceived AI Trustworthiness → Overall Trust in AI → Intention to Use AI

Strongest direct effect in the model: propensity to trust on perceived AI performance, β = 0.550 (p < .001).

Comparison: Established Trust vs. AI-Specific Findings

For each factor, traditional trust concepts are contrasted with the study's ChatGPT-specific findings.

Transparency/Explainability (XAI)
  Traditional trust concepts:
  • Often seen as critical for trust, especially for black-box models (Shin, 2021; Gebru et al., 2022).
  • Enhances user understanding and reduces uncertainty.
  Study findings with ChatGPT:
  • No significant influence on trust in this black-box AI, contrary to assumptions (Discussion).
  • Users prioritize performance over internal algorithmic transparency for opaque systems.

Propensity to Trust
  Traditional trust concepts:
  • Influences trust, depending on individual experiences and personality (Körber, 2019).
  • An inherent user disposition affecting initial trust.
  Study findings with ChatGPT:
  • Plays a central role: directly influences the perceived trustworthiness factors (performance, process, purpose) as well as overall trust in AI (Results, Discussion).
  • More significant than previously assumed in shaping AI trust.

Trust vs. Confidence
  Traditional trust concepts:
  • Trust involves risk-taking and vulnerability, distinct from control-based confidence (Mayer et al., 1995).
  • Relies on belief in positive intent and ability.
  Study findings with ChatGPT:
  • Questions whether 'trust' is the most adequate construct for AI; 'confidence' (a control-based expectation) may be more relevant given the low-risk context (Limitations, Discussion).
  • Implies a need to re-evaluate theoretical constructs for AI interaction.

Case Study: Trusting ChatGPT, a Black-Box AI

This study specifically utilized ChatGPT as the unit of analysis, representing a prevalent black-box AI model. While previous research often emphasized the importance of transparency and explainability (XAI) for building trust in AI, our findings, particularly in the context of ChatGPT, indicate a significant divergence. For this opaque system, perceived process transparency did not statistically influence trust in AI, challenging conventional wisdom. Instead, factors like performance and the user's inherent propensity to trust played a more dominant role. This suggests that for black-box AI, users may prioritize observable outcomes and their own predisposition over internal algorithmic transparency, redefining what 'trustworthy AI' means in practice and shifting the focus towards reliability and demonstrable ability.

Calculate Your Potential AI ROI

Estimate the efficiency gains and cost savings your enterprise could achieve by strategically integrating AI, informed by insights into human-AI trust and adoption.

Calculator outputs: estimated annual savings ($) and annual hours reclaimed.
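
As an illustration of the kind of arithmetic such a calculator performs, here is a minimal sketch. The formula and all input figures are assumptions for demonstration only; they are not taken from the study.

```python
# Minimal ROI sketch (illustrative assumptions only, not figures from the study):
# annual hours reclaimed and savings from automating part of a repetitive task.

def ai_roi(employees: int,
           hours_per_week_on_task: float,
           automation_share: float,     # fraction of the task AI can absorb (0..1)
           hourly_cost: float,
           weeks_per_year: int = 48) -> tuple[float, float]:
    """Return (annual_hours_reclaimed, annual_savings)."""
    hours_reclaimed = employees * hours_per_week_on_task * automation_share * weeks_per_year
    savings = hours_reclaimed * hourly_cost
    return hours_reclaimed, savings

# Example with hypothetical inputs
hours, savings = ai_roi(employees=50, hours_per_week_on_task=5,
                        automation_share=0.3, hourly_cost=40.0)
print(f"Annual hours reclaimed: {hours:,.0f}")        # 3,600
print(f"Estimated annual savings: ${savings:,.0f}")   # $144,000
```

In practice, the realistic automation share depends on the trust and adoption dynamics discussed above: users who do not trust the system will not rely on it, regardless of its theoretical time savings.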

Your Path to Trustworthy AI Implementation

Leverage our expertise to integrate AI effectively and build user trust, grounded in research-backed strategies.

AI Trust Audit & Strategy

Assess current human-AI interaction challenges and design a trust-centric AI adoption strategy based on your specific enterprise context and the latest research findings.

Performance-Driven AI Integration

Implement AI solutions with a focus on clear, demonstrable performance and reliability, aligning with the study's emphasis on tangible outcomes over opaque processes.

Propensity-Aware User Training

Develop training programs that consider inherent user propensity to trust, fostering appropriate reliance and addressing individual differences in AI acceptance.

Continuous Trust Monitoring & Optimization

Establish systems to continuously monitor user trust and AI performance, iterating on design and interaction to ensure sustained and effective AI integration.

Ready to Build Trustworthy AI in Your Enterprise?

Book a complimentary consultation with our AI strategists to explore how these insights can drive your organization's AI success.
