Posts by Tags

Additive Models

Explaining Non-Parametric Additive Models

12 minute read

Published:

This fourth blog post discusses Non-Parametric Additive Models, and more specifically Explainable Boosting Machines (EBMs for short). EBMs are state-of-the-art interpretable models invented by Rich Caruana from Microsoft Research. We will see how these models provide built-in explanations for their decisions, how they can be edited to be more intuitive, and how they can explain disparities between demographic subgroups.
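
To make the built-in explanations concrete, here is a minimal sketch of training an EBM with the open-source interpret package (the InterpretML implementation of EBMs). The dataset and settings are illustrative choices, not taken from the post itself.

```python
# Illustrative sketch (not from the post): training an Explainable Boosting Machine
# with the open-source `interpret` package and inspecting its built-in explanations.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0)
ebm.fit(X_train, y_train)

# Global explanation: one shape function per feature (plus optional pairwise terms).
global_exp = ebm.explain_global()

# Local explanation: per-feature contributions for individual predictions.
local_exp = ebm.explain_local(X_test[:5], y_test[:5])
```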

Explaining Parametric Additive Models

10 minute read

Published:

In this third blog post, we discuss Parametric Additive Models, which are simply Linear Models applied to univariate basis functions rather than to the original features. As with Linear Models, viewing local explainability as a relative concept (explaining a prediction relative to a baseline) is necessary to avoid mathematical inconsistencies.
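
As a rough illustration of the idea (a sketch with assumed data and hyperparameters, not the post's exact setup), a parametric additive model can be built in scikit-learn by expanding each feature into a univariate spline basis and fitting a Linear Model on top:

```python
# Illustrative sketch (not the post's exact setup): a parametric additive model
# built as a Linear Model on univariate spline basis functions of each feature.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import SplineTransformer
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 2))                    # two numeric features
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 500)

# Each feature is expanded into its own B-spline basis, so the fitted model is
# a sum of univariate functions: f(x) = f_1(x_1) + f_2(x_2) + intercept.
model = make_pipeline(
    SplineTransformer(degree=3, n_knots=8),
    Ridge(alpha=1e-3),
)
model.fit(X, y)
```

Because every basis function depends on a single feature, the fitted model decomposes into a sum of univariate effects that can be plotted and inspected individually.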

Black Boxes

The Disagreement Problem in Explainability

6 minute read

Published:

In this first blog post, we will discuss post-hoc explanation methods and whether they are just another black-box on top of the Machine Learning model.

Categorical Features

Explaining Linear Models with Categorical Features

9 minute read

Published:

This fifth blog post demonstrates how Linear Models can be adapted to work with Categorical Features, that is, features that are not naturally represented with numbers. Machine Learning models require numerical input features to work properly, and so Categorical Features must be preprocessed. While One-Hot-Encoding is the go-to practice when a linear model is used downstream, the interpretation of the resulting model coefficients is not trivial. We advocate viewing linear models fitted on One-Hot-Encoded features as a particular instance of Parametric Additive Models, which we know how to explain faithfully.
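
For illustration only (the toy data and pipeline below are assumptions, not taken from the post), here is a minimal scikit-learn sketch of One-Hot-Encoding a categorical feature before a Linear Model and reading back one coefficient per category:

```python
# Illustrative sketch (not from the post): one-hot encoding a categorical feature
# before a Linear Model, then mapping coefficients back to (feature, category) pairs.
import pandas as pd
from sklearn.compose import make_column_transformer
from sklearn.preprocessing import OneHotEncoder
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

df = pd.DataFrame({
    "color": ["red", "blue", "green", "blue", "red", "green"],
    "size": [1.0, 2.0, 3.0, 2.5, 1.5, 3.5],
})
y = [10, 12, 15, 13, 11, 16]

pre = make_column_transformer(
    (OneHotEncoder(), ["color"]),
    ("passthrough", ["size"]),
)
model = make_pipeline(pre, LinearRegression())
model.fit(df, y)

# One coefficient per (feature, category) pair, plus one for the numeric feature.
names = model[:-1].get_feature_names_out()
coefs = model[-1].coef_
print(dict(zip(names, coefs)))
```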

Contrastive Question

Explaining Parametric Additive Models

10 minute read

Published:

In this third blog post, we discuss Parametric Additive Models, which are simply Linear Models applied to univariate basis functions rather than to the original features. As with Linear Models, viewing local explainability as a relative concept (explaining a prediction relative to a baseline) is necessary to avoid mathematical inconsistencies.

Explaining Linear Models

9 minute read

Published:

In this second blog post, we will introduce Linear Models and show how to interpret and explain their predictions. Although such models are rarely the most performant, understanding how to explain them is the first step toward explaining more complex models.
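
As a small illustration of the relative view of explanations discussed in the series (a sketch with made-up data, not the post's example), each feature's contribution can be taken as its coefficient times its deviation from a baseline input:

```python
# Illustrative sketch (not from the post): explaining a Linear Model's prediction
# relative to a baseline by attributing w_i * (x_i - baseline_i) to each feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(0, 0.1, 200)

model = LinearRegression().fit(X, y)

x = X[0]                      # instance to explain
baseline = X.mean(axis=0)     # reference point (e.g. the average input)

contributions = model.coef_ * (x - baseline)
# The contributions sum to the gap between the prediction and the baseline prediction.
assert np.isclose(contributions.sum(),
                  model.predict([x])[0] - model.predict([baseline])[0])
```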

Disagreement Problem

The Disagreement Problem in Explainability

6 minute read

Published:

In this first blog post, we will discuss post-hoc explanation methods and whether they are just another black-box on top of the Machine Learning model.

Explainability

Explaining Linear Models with Categorical Features

9 minute read

Published:

This fifth blog post demonstrates how Linear Models can be adapted to work with Categorical Features, that is, features that are not naturally represented with numbers. Machine Learning models require numerical input features to work properly, and so Categorical Features must be preprocessed. While One-Hot-Encoding is the go-to practice when a linear model is used downstream, the interpretation of the resulting model coefficients is not trivial. We advocate viewing linear models fitted on One-Hot-Encoded features as a particular instance of Parametric Additive Models, which we know how to explain faithfully.

Explaining Non-Parametric Additive Models

12 minute read

Published:

This fourth blog post discusses Non-Parametric Additive Models, and more specifically Explainable Boosting Machines (EBMs for short). EBMs are state-of-the-art interpretable models invented by Rich Caruana from Microsoft Research. We will see how these models provide built-in explanations for their decisions, how they can be edited to be more intuitive, and how they can explain disparities between demographic subgroups.

Explaining Parametric Additive Models

10 minute read

Published:

In this third blog post, we discuss Parametric Additive Models, which are simply Linear Models applied to univariate basis functions rather than to the original features. As with Linear Models, viewing local explainability as a relative concept (explaining a prediction relative to a baseline) is necessary to avoid mathematical inconsistencies.

Explaining Linear Models

9 minute read

Published:

In this second blog post, we will introduce Linear Models and show how to interpret and explain their predictions. Although such models are rarely the most performant, understanding how to explain them is the first step toward explaining more complex models.

Explainable Boosting Machines

Explaining Non-Parametric Additive Models

12 minute read

Published:

This fourth blog post discusses Non-Parametric Additive Models, and more specifically Explainable Boosting Machines (EBMs for short). EBMs are state-of-the-art interpretable models invented by Rich Caruana from Microsoft Research. We will see how these models provide built-in explanations for their decisions, how they can be edited to be more intuitive, and how they can explain disparities between demographic subgroups.

Linear Models

Explaining Linear Models with Categorical Features

9 minute read

Published:

This fifth blog post demonstrates how Linear Models can be adapted to work with Categorical Features, that is, features that are not naturally represented with numbers. Machine Learning models require numerical input features to work properly, and so Categorical Features must be preprocessed. While One-Hot-Encoding is the go-to practice when a linear model is used downstream, the interpretation of the resulting model coefficients is not trivial. We advocate viewing linear models fitted on One-Hot-Encoded features as a particular instance of Parametric Additive Models, which we know how to explain faithfully.

Explaining Linear Models

9 minute read

Published:

In this second blog post, we will introduce Linear Models and show how to interpret and explain their predictions. Although such models are rarely the most performant, understanding how to explain them is the first step toward explaining more complex models.

Post-hoc Explainers

The Disagreement Problem in Explainability

6 minute read

Published:

In this first blog post, we will discuss post-hoc explanation methods and whether they are just another black-box on top of the Machine Learning model.