Benchmarking Explainability Methods: A Framework for Evaluating Model Transparency and Interpretability
Abstract
The growing deployment of machine learning models has heightened the need for interpretability and transparency, especially in critical domains. This paper presents a comprehensive benchmarking study of explainability methods used in machine learning. We evaluate the performance, strengths, and weaknesses of popular techniques, including LIME, SHAP, and Integrated Gradients. Our goal is to provide a comparative analysis that guides practitioners in selecting appropriate methods for different applications.
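To make the comparison concrete, the sketch below shows one way the per-feature attributions produced by SHAP and LIME for the same model and instance might be compared, using top-k feature overlap as a simple agreement measure. This is an illustrative example, not the paper's benchmark code: the dataset, model, and agreement metric are assumptions, and it requires the scikit-learn, shap, and lime packages.

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Illustrative dataset and model (binary classification, single-output tree ensemble).
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
n_features = X_train.shape[1]

# SHAP: TreeExplainer returns per-feature attributions for tree ensembles.
shap_explainer = shap.TreeExplainer(model)
sv = np.asarray(shap_explainer.shap_values(X_test[:1]))
# Assumes a single-output model; collapse any extra leading dimension.
shap_attr = np.abs(sv).reshape(-1, n_features).sum(axis=0)

# LIME: fits a local surrogate model around the same instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names), class_names=list(data.target_names)
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=n_features, labels=(1,)
)
lime_pairs = dict(lime_exp.as_map()[1])  # {feature index: weight}
lime_attr = np.array([abs(lime_pairs.get(i, 0.0)) for i in range(n_features)])

# Compare the two methods by the overlap of their top-5 features.
# This is one simple agreement metric; the paper's evaluation criteria may differ.
k = 5
top_shap = set(np.argsort(shap_attr)[-k:])
top_lime = set(np.argsort(lime_attr)[-k:])
print(f"Top-{k} feature overlap between SHAP and LIME: {len(top_shap & top_lime)}/{k}")

A rank-correlation measure (e.g., Spearman correlation over the full attribution vectors) could be substituted for the top-k overlap if a finer-grained comparison is desired.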
License
Copyright (c) 2023 Academic Journal of Science and Technology
This work is licensed under a Creative Commons Attribution 4.0 International License.