APP Centre for AI Research

Simplification Blogs

Saliency Maps

Ashwin Srinivasan

As increasingly complex deep neural networks are applied to a growing range of tasks, explaining their predictions has become a critical requirement for real-world use. Many approaches have been proposed to tackle this issue. Saliency maps have emerged as popular tools for highlighting the input features that are most relevant to a learned model's predictions. However, relying solely on a visual assessment of features in the input space can be misleading. In this article, we first survey popular saliency map techniques, explain how they work, and examine how their assessments can mislead. We then turn our attention to understanding "concepts" using a technique called Testing with Concept Activation Vectors (TCAV).
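As a rough illustration of the idea, the sketch below computes a vanilla gradient saliency map with PyTorch: the score of the predicted class is backpropagated to the input pixels, and the magnitude of the gradient serves as a per-pixel relevance estimate. The pretrained ResNet-18 model and the file name "cat.jpg" are placeholder assumptions for this sketch, not the specific setup discussed in the article.

import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained classifier; any differentiable image model works the same way.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# "cat.jpg" is a placeholder input image.
image = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
image.requires_grad_(True)

# Forward pass; take the score of the top predicted class.
logits = model(image)
top_class = logits.argmax(dim=1).item()

# Backpropagate that score to the input pixels.
logits[0, top_class].backward()

# Vanilla gradient saliency: maximum absolute gradient across colour channels.
saliency = image.grad.detach().abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # a (224, 224) heat map over the input pixels

Taking the maximum over colour channels is one common convention for collapsing the gradient into a single heat map; such raw gradients are typically noisy, which is part of why visual inspection alone can mislead.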

Why do we need Machine Learning that can be trusted?

Ashwin Srinivasan

Neural networks are black boxes, unlike GOFAI

The nuts and bolts of Spiking Neural Networks

Ashwin Srinivasan

Neural networks are black boxes, unlike GOFAI

Safety in Reinforcement Learning

Ashwin Srinivasan

Neural networks are black boxes, unlike GOFAI

Human Level Learning

Ashwin Srinivasan

Neural networks are black boxes, unlike GOFAI

Knowledge Distillation

Ashwin Srinivasan

Neural networks are black boxes, unlike GOFAI