TEACHER is an open-source Python library that incorporates several state-of-the-art explainability techniques for model interpretation and explanation. The objective of the library is to be extensible with new tools and algorithms while remaining compatible with widely used machine learning libraries such as scikit-learn.
This project was started in 2020 as the Ph.D. thesis of Guillermo Tomás Fernández Martín, advised by José Antonio Gámez Martín and José Miguel Puerta Callejón.
Website: https://xai-teacher.readthedocs.io/en/latest/
Teacher requires:
* Python (>=3.9)
* scikit-learn
* scikit-fuzzy
* matplotlib (for plotting functions)
* deap (for compatibility with the LORE algorithm)
* imblearn (for compatibility with the LORE algorithm)
IMPORTANT: Install scikit-fuzzy from its GitHub repository, as the PyPI version is obsolete:
pip install git+https://github.com/scikit-fuzzy/scikit-fuzzy
If you already have a working Python environment, you can install Teacher with
pip install -U teacher-xai
The documentation includes more detailed instructions.
For detailed instructions on how to use Teacher, please refer to the API Reference.
The following list summarizes the models and explainers currently supported:
- Fuzzy Factuals and Counterfactuals: Explainer obtained from a fuzzy tree that can be used for global or local explanations
- LORE: Local explainer generated from a neighborhood
- FLARE: Fuzzy local explainer generated from a neighborhood
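The core idea behind neighborhood-based local explainers such as LORE can be sketched with scikit-learn alone: perturb the instance to build a synthetic neighborhood, label it with the black-box model, and fit an interpretable surrogate tree on that neighborhood. This is a simplified illustration of the general technique, not Teacher's actual API (LORE additionally uses a genetic neighborhood generator, which is why `deap` is a dependency):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black-box model whose prediction we want to explain
X, y = load_iris(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)
instance = X[0]

# Build a synthetic neighborhood by perturbing the instance
# (LORE itself evolves this neighborhood with a genetic algorithm)
rng = np.random.default_rng(0)
neighborhood = instance + rng.normal(scale=0.3, size=(500, X.shape[1]))

# Label the neighborhood with the black-box model
neighborhood_labels = black_box.predict(neighborhood)

# Fit an interpretable surrogate on the labeled neighborhood
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(neighborhood, neighborhood_labels)

# The path the instance takes through the surrogate's rules
# serves as the factual part of the local explanation
print(export_text(surrogate))
```

The surrogate is only trusted locally: its rules describe the black box around `instance`, which is exactly what the fidelity metrics below quantify.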
The following list summarizes the metrics and scores that can be extracted from the explainers:
- Coverage: How many instances are covered by the rules forming the explanation
- Precision: How many of the instances covered by the rules forming the explanation are properly classified
- Fidelity: How good is the local explainer at mimicking the global classifier in the neighborhood
- L-fidelity (Rule fidelity): How good is the local explainer at mimicking the global classifier in the instances of the neighborhood covered by the factual explanation
- Cl-fidelity: How good is the local explainer at mimicking the global classifier in the instances of the neighborhood covered by the counterfactual explanation (To be implemented)
- Hit: Does the local explainer match the global classifier result? (To be implemented)
- C-hit: Does the local explainer match the global classifier result for an instance built from the counterfactual rule? (To be implemented)
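To make these definitions concrete, here is a small self-contained sketch of coverage, precision, fidelity, and L-fidelity for a single rule, using plain NumPy. The rule representation (a threshold condition on one feature) and the toy data are assumptions for illustration only, not Teacher's internal format:

```python
import numpy as np

# Toy setup: five instances, predictions from the "global" classifier
# and from the local surrogate explainer
X = np.array([[0.2, 1.0], [0.8, 0.3], [0.9, 0.7], [0.1, 0.4], [0.7, 0.9]])
global_pred = np.array([0, 1, 1, 0, 1])  # global classifier output
local_pred = np.array([0, 1, 0, 0, 1])   # local explainer output

# Hypothetical factual rule "feature 0 > 0.5" predicting class 1;
# an instance is covered when it satisfies the rule's antecedent
covered = X[:, 0] > 0.5

# Coverage: fraction of instances covered by the rule
coverage = covered.mean()

# Precision: fraction of covered instances whose class matches
# the rule's consequent (class 1) under the global classifier
precision = (global_pred[covered] == 1).mean()

# Fidelity: agreement between local and global predictions
# over the whole neighborhood
fidelity = (local_pred == global_pred).mean()

# L-fidelity: the same agreement, restricted to the instances
# covered by the factual rule
l_fidelity = (local_pred[covered] == global_pred[covered]).mean()

print(coverage, precision, fidelity, l_fidelity)
```

Here the rule covers 3 of 5 instances (coverage 0.6) and all covered instances are classified as class 1 by the global model (precision 1.0), while the surrogate disagrees with the global model on one covered instance, so L-fidelity drops below overall fidelity.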
- Fuzzy Factuals and Counterfactuals (Fernandez et al., 2022)
- Documentation https://xai-teacher.readthedocs.io/en/latest/
- Experiments: https://github.com/Kaysera/teacher-experiments
- LORE (Guidotti et al., 2018)
- Documentation and examples: https://doi.org/10.1109/MIS.2019.2957223
- FLARE (Fernandez et al., 2023 preprint)