From Trust to Transparency: Understanding the Influence of Explainability on AI Systems

Authors

  • Renato Costa, Department of Computer Engineering, Pontifical Catholic University of Rio de Janeiro, Brazil

Abstract

This paper explores the crucial role of explainability in enhancing the trustworthiness of artificial intelligence (AI) systems. As AI systems become more deeply integrated into critical sectors such as finance, healthcare, and autonomous vehicles, understanding their decision-making processes is essential for ensuring their reliability and earning user trust. This study examines three dimensions of explainability, transparency, interpretability, and accountability, and analyzes how each influences trustworthiness from both technical and user perspectives. The findings highlight the need for robust explainability frameworks to foster trust and facilitate the broader adoption of AI technologies.

Published

2023-04-15

Section

Articles