Benchmarking Explainability Methods: A Framework for Evaluating Model Transparency and Interpretability

Authors

  • Jorge Navarro, Department of Information Technology, Pontifical Catholic University of Peru, Peru

Abstract

The rise of machine learning models has created a pressing need for interpretability and transparency, especially in critical domains. This paper presents a comprehensive benchmarking study of explainability methods used in machine learning. We evaluate the performance, strengths, and weaknesses of popular techniques, including LIME, SHAP, and Integrated Gradients. Our goal is to provide a comparative analysis that guides practitioners in selecting appropriate methods for different applications.
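As an illustration of the kind of usage the benchmark compares, the sketch below shows how two of the evaluated methods, SHAP and LIME, are commonly invoked on a tabular classifier using the public shap and lime packages. The dataset, model, and parameter choices are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
# Minimal sketch (not the paper's benchmark code): generating SHAP and LIME
# explanations for a single prediction of a tree-based classifier.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Toy dataset and model, used here only as stand-ins for a real application.
data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: additive feature attributions from a tree-specific explainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])

# LIME: a local surrogate model fitted around one instance.
lime_explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
lime_explanation = lime_explainer.explain_instance(
    X[0], model.predict_proba, num_features=5
)
print(lime_explanation.as_list())  # top local feature contributions
```

Both calls yield per-feature attribution scores for the same instance, which is the level at which methods like these are typically compared in a benchmark.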

Published

2023-07-18

Section

Articles