Saliency Maps
Ashwin Srinivasan
As complex deep neural networks are developed to solve an ever-wider range of tasks, the explainability of their predictions has become a critical concern for real-world use. Many approaches have been proposed to address this issue. Saliency maps have emerged as popular tools for highlighting the input features that are most relevant to a learned model's predictions. However, relying solely on a visual assessment of features in the input space can be misleading. In this article, we will first survey several saliency map techniques, explain how they work, and examine how their assessments can mislead. We will then turn our attention to understanding "concepts" using a technique called Testing with Concept Activation Vectors (TCAV).
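To make the idea concrete before the detailed discussion, the sketch below computes a simple vanilla gradient saliency map with PyTorch: the gradient of the top predicted class score with respect to the input pixels. The resnet18 model, the random placeholder input x, and the channel-wise reduction are illustrative assumptions, not details taken from this article.

import torch
import torchvision.models as models

# A minimal sketch of a vanilla gradient saliency map, assuming a
# pretrained image classifier and a preprocessed input tensor.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

x = torch.rand(1, 3, 224, 224)   # placeholder input; a real image would be preprocessed here
x.requires_grad_(True)

logits = model(x)
top_class = logits.argmax(dim=1)

# Backpropagate the top-class score to get gradients w.r.t. the input pixels.
logits[0, top_class].backward()

# Saliency: the maximum absolute gradient over the colour channels, per pixel.
saliency = x.grad.abs().max(dim=1).values.squeeze(0)   # shape [H, W]

The resulting saliency tensor can be visualised as a heatmap over the input image; pixels with larger gradient magnitude are the ones the model's prediction is most sensitive to.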