Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Explaining Non-Parametric Additive Models
Published:
This fourth blog post discusses Non-Parametric Additive Models, and more specifically Explainable Boosting Machines (EBMs for short). EBMs are state-of-the-art interpretable models invented by Rich Caruana from Microsoft Research. We will see how these models provide built-in explanations for their decisions, how they can be edited to be more intuitive, and how they can explain disparities between demographic subgroups.
Explaining Parametric Additive Models
Published:
In this third blog post, we discuss Parametric Additive Models, which are simply Linear Models applied to univariate basis functions rather than the original features. As with Linear Models, viewing local explainability as a relative concept (explaining a prediction relative to a baseline) is necessary to avoid mathematical inconsistencies.
Explaining Linear Models
Published:
In this second blog post, we will introduce linear models and how to interpret/explain their predictions. Although such models are rarely the most performant, understanding how to explain them is the first step toward explaining more complex models.
The Disagreement Problem in Explainability
Published:
In this first blog post, we will discuss post-hoc explanation methods and whether they are just another black box on top of the Machine Learning model.
Portfolio
Portfolio item number 1
Published:
Short description of portfolio item number 1
Portfolio item number 2
Published:
Short description of portfolio item number 2
Publications
How to certify machine learning based safety-critical systems? A systematic literature review
Published in Automated Software Engineering, 2022
A review of the challenges of certifying machine learning components in safety-critical systems.
Recommended citation: Tambon, F., Laberge, G., An, L., Nikanjam, A., Mindom, P. S. N., Pequignot, Y., ... & Laviolette, F. (2022). How to certify machine learning based safety-critical systems? A systematic literature review. Automated Software Engineering, 29(2), 38.
Fooling SHAP with Stealthily Biased Sampling
Published in ICLR, 2023
Manipulating Shapley values by cherry-picking the reference samples.
Recommended citation: Laberge, G., Aïvodji, U., Hara, S., Marchand, M., & Khomh, F. (2023, May). Fooling SHAP with Stealthily Biased Sampling. In The Eleventh International Conference on Learning Representations.
Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set
Published in Journal of Machine Learning Research, 2023
Computing the consensus of local/global feature importance across all models in the Rashomon Set.
Recommended citation: Laberge, G., Pequignot, Y., Mathieu, A., Khomh, F., & Marchand, M. (2023). Partial Order in Chaos: Consensus on Feature Attributions in the Rashomon Set. Journal of Machine Learning Research, 24(364), 1-50.
Tackling the XAI Disagreement Problem with Regional Explanations
Published in AISTATS, 2024
Computing explanations on regions defined by a decision tree.
Recommended citation: Laberge, G., Pequignot, Y., Marchand, M., & Khomh, F. (2024, May). Tackling the XAI Disagreement Problem with Regional Explanations. In International Conference on Artificial Intelligence and Statistics (AISTATS) (Vol. 238).
Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods
Published in TMLR, 2024
Exploring the transparency-accuracy tradeoff.
Recommended citation: Ferry, J., Laberge, G., & Aïvodji, U. (2024). Learning Hybrid Interpretable Models: Theory, Taxonomy, and Methods. Transactions on Machine Learning Research, 2835-8856.
Talks
Teaching
Teaching experience 1
Undergraduate course, University 1, Department, 2014
This is a description of a teaching experience. You can use markdown like any other post.
Teaching experience 2
Workshop, University 1, Department, 2015
This is a description of a teaching experience. You can use markdown like any other post.