Based on surveys and interviews with AI users in Indonesian service-sector organizations, the study finds that transparency, explainability, and reliability significantly increase trust in AI systems, while perceived risk reduces it. Crucially, trust acts as a bridge between AI technology and real organizational outcomes. Without trust, even advanced AI tools fail to influence decisions.
Why Trust Has Become the Key AI Challenge
Organizations worldwide are investing heavily in artificial intelligence to support decisions ranging from customer service and logistics to strategic planning. In theory, AI can process vast amounts of data faster and more accurately than humans. In practice, many systems remain underused.
In Indonesia, these challenges are amplified by uneven digital literacy, hierarchical workplace cultures, and strong reliance on experience-based judgment. Service organizations, which depend on fast and accurate information, are among the earliest adopters of AI-based information systems. Yet adoption alone does not guarantee impact.
The study highlights a critical issue facing both developed and developing economies: AI systems do not deliver value simply by being accurate. They must also be trusted by the people expected to use them. This insight is increasingly relevant as governments and businesses promote AI-driven decision-making while also facing concerns about algorithmic bias, accountability, and ethical risk.
How the Researchers Examined Trust in AI
The research team used a mixed-methods approach to capture both measurable patterns and real-world experience. Quantitative data were collected through a survey of 60 employees who regularly use AI-based information systems in Indonesian service organizations. These results were then deepened through in-depth interviews with six key informants who manage or rely on AI in decision-making.
In simple terms, the researchers examined:
· How users perceive AI system features such as transparency, explainability, and reliability
· How much risk users associate with AI-based decisions
· How these perceptions shape trust
· How trust influences decision quality and willingness to rely on AI
Statistical analysis was combined with thematic interview analysis to ensure the findings reflected both numbers and lived experience.
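To make the quantitative side of this design concrete, here is a minimal sketch of the kind of regression such a survey typically supports. It is an illustration only: the column names, the tiny made-up dataset, and the use of ordinary least squares via statsmodels are assumptions for demonstration, not the authors' actual data or code.

```python
# Illustrative sketch only: hypothetical data and column names, not the study's dataset.
# A simple OLS model of trust on the four perception factors described above
# (trust ~ transparency + explainability + reliability + perceived risk).
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey responses on 1-5 Likert scales (kept tiny for readability).
df = pd.DataFrame({
    "trust":          [4, 3, 5, 2, 4, 3, 5, 2],
    "transparency":   [4, 3, 5, 2, 4, 2, 5, 1],
    "explainability": [4, 2, 5, 2, 3, 3, 4, 2],
    "reliability":    [5, 3, 5, 1, 4, 3, 5, 2],
    "perceived_risk": [2, 3, 1, 4, 2, 3, 1, 5],
})

# Fit the model and inspect which factors predict trust.
model = smf.ols(
    "trust ~ transparency + explainability + reliability + perceived_risk",
    data=df,
).fit()

print(model.summary())                # coefficients and p-values per factor
print("R-squared:", model.rsquared)   # share of variance in trust explained
```

The R-squared of a model like this is the quantity behind the "62 percent of the variation in user trust" figure reported in the findings below.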
What Builds — and Breaks — Trust in AI Systems
The results show that trust in AI is shaped by a small number of clear and practical factors.
Key findings include:
· Perceived reliability is the strongest driver of trust. Users trust AI systems that produce consistent and accurate results over time.
· Transparency increases trust when users can see how data are processed and how recommendations are generated.
· Explainability matters when systems can clearly explain why a particular output or recommendation appears.
· Perceived risk reduces trust. Concerns about errors, bias, or ethical consequences make users hesitant to rely on AI.
Together, these factors explain 62 percent of the variation in user trust, a high figure for behavioral research. This shows that trust is not vague or subjective but closely tied to how AI systems are designed and communicated.
Interview participants reinforced these findings. Users reported greater confidence when they understood how systems worked and saw consistent results. At the same time, fears of hidden bias or invisible mistakes quickly undermined trust.
Trust Improves Decisions — and Sustains AI Use
Trust does not stop at perception. The study finds that it directly affects how organizations use AI in practice.
When users trust AI systems:
· Decision-making quality improves, with faster, more data-driven choices
· Intention to rely on AI increases, making AI use sustainable rather than experimental
Statistical analysis shows that trust explains more than half of the improvement in both decision quality and willingness to keep using AI. Interviews confirm this pattern. Users who trusted AI were more likely to integrate recommendations into daily operations rather than treating the system as a secondary reference.
Most importantly, trust acts as a mediating factor. AI transparency, explainability, reliability, and risk perceptions do not directly improve decision outcomes on their own. Their influence passes through trust first. Without trust, technical strengths fail to translate into organizational value.
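One simple way to picture that mediation claim is a two-step regression decomposition, sketched below with assumed variable names and made-up data (the study's own mediation procedure may differ): the effect of a system factor such as reliability on decision quality is split into an indirect path that runs through trust and a remaining direct path.

```python
# Illustrative mediation sketch with hypothetical data:
# reliability -> trust -> decision quality.
# A generic two-regression decomposition, not the study's exact method.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "reliability":      [5, 3, 5, 1, 4, 3, 5, 2, 4, 2],
    "trust":            [4, 3, 5, 2, 4, 3, 5, 2, 4, 2],
    "decision_quality": [4, 3, 5, 2, 4, 2, 5, 1, 4, 3],
})

# Path a: does reliability predict trust?
a = smf.ols("trust ~ reliability", data=df).fit().params["reliability"]

# Paths b and c': does trust predict decision quality once reliability is controlled for?
m = smf.ols("decision_quality ~ trust + reliability", data=df).fit()
b, c_prime = m.params["trust"], m.params["reliability"]

# Indirect (mediated) effect a*b versus the remaining direct effect c'.
print("indirect effect via trust:", a * b)
print("direct effect of reliability:", c_prime)
```

If the indirect effect (a × b) carries most of the total effect, the pattern matches the study's conclusion that technical strengths translate into better decisions mainly by building trust first.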
Implications for Organizations and Policymakers
The findings carry important implications for AI deployment strategies.
For organizations:
· Investing in explainable and transparent AI systems is as important as improving accuracy
· Reliability over time matters more than impressive one-off performance
· Risk communication and governance frameworks help protect user trust
For policymakers and regulators:
· AI guidelines should address human trust, not only technical standards
· Ethical AI frameworks can reduce perceived risk and improve adoption
· Workforce training should focus on understanding AI, not just using it
The study suggests that trust-centered AI design may be especially important in developing economies, where skepticism toward automated decision-making remains high.
Academic Perspective
According to the authors from Universitas Sulawesi Barat, AI systems only create organizational value when users believe in them. They emphasize that “the system itself is not enough — trust determines whether AI recommendations are actually used in decisions,” highlighting trust as the psychological mechanism that connects technology with human judgment.
Author Profile
· Indra is a lecturer at Universitas Sulawesi Barat, Indonesia, specializing in artificial intelligence, information systems, and organizational decision-making.
· Muh Fuad Mansyur is affiliated with Universitas Sulawesi Barat and focuses on digital transformation and technology adoption in organizations.
· Adi Heri is also a researcher at Universitas Sulawesi Barat, with expertise in information systems and service-sector technology management.
Source
Article title: The Dynamics of User Trust in Artificial Intelligence–Based Information Systems for Organizational Decision Making
Journal: Formosa Journal of Science and Technology
Publication year: 2026
DOI: https://doi.org/10.55927/fjst.v5i1.384