
NLP Transformers' Interpretability

The purpose of this repository is to demonstrate how to use NLP explanation/interpretability tools. In this project, I use the stance detection task, but you can adapt it to your own custom NLP task if you wish. This repository will be updated in the future; for now, I only use SHAP as an explanation tool.

Model Explanation (SHAP)

The figure above shows the result of a SHAP explanation on the Persian stance detection task. A red area increases the probability of the corresponding class, and a blue area decreases it (SHAP).