Overview
- I see a better understanding of neural networks, their decision making, and their training processes as a key requirement for applying them in the real world.
- I’m especially interested in going beyond single-instance explanations to explain the global behavior of complex deep-learning models.
PyPremise
PyPremise makes it easy to identify patterns that explain where a machine learning classifier performs well and where it fails. It is independent of any specific classifier or architecture and has been evaluated both on NLP text tasks and on data with binary features. For a recent Visual Question Answering model, for example, it identifies that the model struggles with counting, visual orientation, and higher reasoning questions.
You can check out our Python library on GitHub.
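To make the inputs and outputs concrete, here is a toy sketch of the kind of analysis PyPremise automates. This is not PyPremise's actual API or algorithm (see the GitHub README for the real interface); the naive frequency-ratio scoring and all names and data below are illustrative stand-ins. The idea it demonstrates: instances are binary feature vectors plus a flag for whether the classifier handled them correctly, and the output surfaces features that are over-represented among failures.

```python
# Illustrative sketch only: a naive frequency-ratio stand-in for the kind of
# failure-pattern analysis PyPremise performs. Not PyPremise's API or algorithm.
import numpy as np

# Toy VQA-style setup: binary bag-of-words features per question, plus a flag
# marking whether the model answered that question correctly.
vocabulary = ["how", "many", "color", "what", "left", "of"]
features = np.array([
    [1, 1, 0, 0, 0, 0],  # "how many ..."       -> wrong (counting)
    [1, 1, 0, 1, 0, 0],  # "how many ..."       -> wrong (counting)
    [0, 0, 1, 1, 0, 0],  # "what color ..."     -> correct
    [0, 0, 1, 1, 0, 0],  # "what color ..."     -> correct
    [0, 0, 0, 0, 1, 1],  # "left of ..."        -> wrong (orientation)
    [0, 0, 1, 1, 0, 1],  # "what color ... of"  -> correct
])
answered_correctly = np.array([0, 0, 1, 1, 0, 1])

# Score each feature by how much more often it appears among failures than
# among successes (add-one smoothing); high ratios point to failure patterns
# such as counting ("how many") or orientation ("left of").
fail = features[answered_correctly == 0]
succ = features[answered_correctly == 1]
ratio = ((fail.sum(0) + 1) / len(fail)) / ((succ.sum(0) + 1) / len(succ))
for token, score in sorted(zip(vocabulary, ratio), key=lambda p: -p[1]):
    print(f"{token}\t{score:.2f}")
```

PyPremise goes well beyond such single-token statistics: it mines whole patterns of co-occurring features rather than scoring features in isolation, which is what this simplistic ratio cannot capture.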
Publications
- Michael A. Hedderich, Jonas Fischer, Dietrich Klakow, and Jilles Vreeken
  In Abstract at BlackboxNLP @EMNLP’23, 2023
- Michael A. Hedderich, Jonas Fischer, Dietrich Klakow, and Jilles Vreeken
  In International Conference on Machine Learning (ICML), 2022
- Marius Mosbach, Anna Khokhlova, Michael A. Hedderich, and Dietrich Klakow
  In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, Nov 2020
Fine-tuning pre-trained contextualized embedding models has become an integral part of the NLP pipeline. At the same time, probing has emerged as a way to investigate the linguistic knowledge captured by pre-trained models. Very little is understood, however, about how fine-tuning affects the representations of pre-trained models and thereby the linguistic knowledge they encode. This paper contributes towards closing this gap. We study three different pre-trained models, BERT, RoBERTa, and ALBERT, and investigate through sentence-level probing how fine-tuning affects their representations. We find that for some probing tasks, fine-tuning leads to substantial changes in accuracy, possibly suggesting that fine-tuning introduces or even removes linguistic knowledge from a pre-trained model. These changes, however, vary greatly across models, fine-tuning tasks, and probing tasks. Our analysis reveals that while fine-tuning indeed changes the representations of a pre-trained model, and these changes are typically larger for higher layers, only in very few cases does fine-tuning have a positive effect on probing accuracy that is larger than just using the pre-trained model with a strong pooling method. Based on our findings, we argue that both positive and negative effects of fine-tuning on probing require a careful interpretation.
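As a concrete illustration of the sentence-level probing setup this abstract describes, here is a minimal sketch assuming the Hugging Face transformers and scikit-learn libraries. It is not the paper's experimental code: the model checkpoint, the toy probing task, and the data are placeholder assumptions, and mean pooling stands in for the "strong pooling method" mentioned above (the paper compares BERT, RoBERTa, and ALBERT across several probing tasks and pooling methods).

```python
# Minimal sentence-level probing sketch, assuming the Hugging Face
# `transformers` and scikit-learn libraries. Model, pooling, task, and data
# are placeholder assumptions, not the paper's exact experimental setup.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")  # swap in a fine-tuned
model.eval()                                            # checkpoint to compare

def embed(sentence: str) -> np.ndarray:
    """Mean-pool final-layer token representations into one sentence vector."""
    inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state    # (1, seq_len, dim)
    mask = inputs["attention_mask"].unsqueeze(-1)     # (1, seq_len, 1)
    return ((hidden * mask).sum(1) / mask.sum(1)).squeeze(0).numpy()

# Placeholder probing task (subject number); a real study would use a
# standard probing dataset with thousands of sentences.
train = [("the cat sleeps on the mat", 0), ("the cats sleep on the mat", 1)]
test = [("a dog runs in the park", 0), ("the dogs run in the park", 1)]

probe = LogisticRegression(max_iter=1000)
probe.fit([embed(s) for s, _ in train], [y for _, y in train])
preds = probe.predict([embed(s) for s, _ in test])
print("probing accuracy:", accuracy_score([y for _, y in test], preds))
```

Running the same probe once on the pre-trained checkpoint and once on a fine-tuned one, layer by layer, is the kind of comparison that underlies the findings summarized above.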