What Is Meant By Explainable AI (XAI)? Definition And Examples

Explainable AI (XAI) describes the challenge of understanding why an artificial intelligence algorithm makes a decision. The question of “why” should be answered in an understandable and interpretable way.

As a solution, some algorithms are transparent by design, while others have to be made explainable in retrospect (post-hoc). We define XAI, show why explainability matters, and present some possible solutions.

Definition of explainable AI (XAI)

Explainable AI (XAI) describes the question of whether and how the decisions of an artificial intelligence can be explained. With the increasing use of artificial intelligence, interest in its “internal mechanics” also increases. This question – “how does the AI work?” or “how does the AI arrive at this result?” – is the basis of XAI.

Explainable AI (XAI) deals with the challenge that the result of algorithms should be explainable.

In general, the problem of explainable AI concerns so-called “black box” algorithms such as neural networks and deep learning. With this type of AI, both the structure of the algorithm and the final values of the parameters are known, but not why a particular result was produced.

With millions of parameters adjusted during training, the individual weights can no longer be traced back to the bigger picture.

Consequently, the relationship between why a weight has a certain value and how it contributes to the overall model can no longer be explained. This is the core question of explainable AI: why does the artificial intelligence output this particular result?

Examples of the importance of explainable AI

Explainable AI is playing an increasingly important role in several areas. Broadly, it is always about understanding how and why decisions are made. If you want to use this knowledge to interpret the results, you need a “transparent” algorithm. Explainability matters in particular for the following topics:

OPTIMIZING THE ALGORITHM

The better you understand how a model is constructed, the easier it is to improve it. Iterative improvement through more data, higher variance, better training material, and the like is part of the standard data science process. These tasks become much easier when you can understand how the current model behaves.

CONFIDENCE IN THE RESULTS

One of the main questions about black-box models is: “Can we trust these results?” Being able to trace the calculations, in particular, provides a certain degree of assurance.

This is no longer the case with a multi-layered deep learning model, which is why some data scientists even refrain from using such algorithms.

EFFECT ON SUBSEQUENT PROCESSES

One of the core aspects of advanced analytics is that you want to understand processes and improve them. To do this, data is analyzed, primarily to identify levers for improvement.

In a black-box model, however, you are limited to the output. Potential for improvement cannot be derived from it, which makes such opaque models unattractive.

EXPLAINABLE AI AND THE ETHICS OF AI

Another aspect of why explainable AI is gaining in relevance is the question of ethics in applying artificial intelligence. A recruitment model that discriminates based on gender or skin color is often cited as a simple example.

Not because it was deliberately designed to do so, but simply because the training data is biased with respect to these factors.

The challenge is to detect and correct such “errors” during model optimization. From an ethical point of view, centrally deployed algorithms should also ensure traceability, especially when they make decisions about or concerning people. Consequently, the requirement of XAI is central here as well: making the models understandable. A minimal sketch of such a bias check is shown below.
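As a minimal sketch of such a check, one could compare the rate of positive predictions across the groups of a protected attribute. The column name, the dummy data, and the interpretation of the gap are illustrative assumptions, not part of the article:

```python
# Minimal sketch of a bias check: compare positive-prediction rates across
# groups of a protected attribute (demographic parity). Column name, dummy
# predictions, and group labels are illustrative placeholders.
import numpy as np
import pandas as pd

def selection_rates(predictions: np.ndarray, groups: pd.Series) -> pd.Series:
    """Share of positive predictions per group."""
    df = pd.DataFrame({"pred": predictions, "group": groups})
    return df.groupby("group")["pred"].mean()

# Dummy hiring predictions (1 = invite to interview) and a hypothetical gender column
preds = np.array([1, 0, 1, 1, 0, 0, 1, 0])
gender = pd.Series(["f", "f", "f", "m", "m", "m", "m", "f"])

rates = selection_rates(preds, gender)
print(rates)                                   # positive rate per group
print("max gap:", rates.max() - rates.min())   # a large gap may indicate bias
```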

Solution approaches for XAI

There are two categories of solution approaches to ensure that artificial intelligence can be explained: ante-hoc and post-hoc. Ante-hoc means “before”: the models are interpretable from the ground up. Post-hoc approaches try to make black-box models explainable in retrospect.

ANTE-HOC XAI: TRANSPARENT MODELS

There are some inherently interpretable models. The idea in all of them is to quantify the calculation and parameters directly and to keep them at an interpretable level. A distinction is usually made between the following categories:

  • Explainable classic models: Well-known models in data science are, for example, regressions, decision trees, and random forests. Here, for example, the coefficients and explained variance of a linear regression are used to understand the influencing factors (see the sketch after this list).
  • Generalized Additive Models (GAMs): GAMs make the contribution of each input variable identifiable. The results are often visualized as a heatmap, which makes them particularly accessible to people.
  • Hybrid models: In hybrid systems, rule-based methods are often combined with machine learning methods. Individual sub-tasks are solved with non-transparent models, while the interpretation is handled by transparent methods.
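As an illustration of the ante-hoc idea, the sketch below fits a plain linear regression whose coefficients can be read directly as feature influences. The toy data and feature names are made up for this example:

```python
# Minimal sketch of an ante-hoc interpretable model: the coefficients of a
# fitted linear regression can be read directly as the influence of each
# input feature. Toy data and feature names are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # three input features
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

model = LinearRegression().fit(X, y)
for name, coef in zip(["feature_a", "feature_b", "feature_c"], model.coef_):
    print(f"{name}: weight {coef:+.2f}")               # sign and size explain the effect
print("explained variance (R^2):", round(model.score(X, y), 3))
```

The same reading applies to a decision tree, whose learned split rules can be printed and followed by hand.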

POST-HOC XAI: EXPLANATION OF BLACK-BOX MODELS

The challenge of post-hoc XAI is to make a black-box model explainable afterward. Various methods are used here: some “log” information during training, others run data through the finished model again in order to quantify its behavior.

The following methods are commonly used to explain black-box models: 

  • LIME: “Local Interpretable Model-Agnostic Explanations” claim to make any model explainable. The idea is to make an existing model understandable for a person in the neighborhood of a single prediction (“local”, “interpretable”), without requiring knowledge of the specific model (“model-agnostic”). In practice, for example, a linear classifier is fitted to the outputs of a neural network to make them interpretable. Although this surrogate is less accurate than the original model, it allows an interpretation in line with XAI (a simplified sketch follows this list).
  • Counterfactual method: The counterfactual method exploits the fact that the output of a model is a direct result of its input. In concrete terms, input elements (for example, an attribute or an image region) are manipulated in a targeted manner until a change in the output (e.g., a different classification) can be observed. Repeating this systematically reveals which subtleties in the input explain the outcome.
  • Layer-wise Relevance Propagation (LRP): While the counterfactual approach manipulates the input, LRP aims for explainability through a backward pass: the output is propagated back through the weighted nodes of the previous layers of a neural network. This makes it possible to identify the most important node-edge combinations and thus to mark which parts of the input have the greatest influence.
  • Partial Dependence Plot (PDP): This method was developed in 2001 by J. H. Friedman and shows the effect individual features have on the model’s output. More precisely, a PDP can show whether the relationship between a feature and the target is linear, monotonic, or more complex. Put simply, one or two input features are plotted against the output on a graph, which makes the dependence between them very easy to explain (see the from-scratch sketch after this list).
  • Rationalization: Approaches in which black-box machines (e.g., a robot) justify their actions themselves are also particularly interesting. This requires an additional computing layer that logs why an action is triggered and explains this information to people.
  • Other methods: In addition to these well-known methods, there is a whole range of different approaches for explainable AI, for example, Individual Conditional Expectation (ICE), Accumulated Local Effects (ALE), Feature Interaction, Permutation Feature Importance, Global Surrogates, Scoped Rules, Shapley Values, Shapley Additive exPlanations (SHAP) and some more.
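To make the surrogate idea behind LIME more concrete, the following simplified sketch perturbs a single instance, queries a black-box model, weights the perturbed samples by their proximity to the original, and fits a small linear model whose coefficients serve as a local explanation. This is a from-scratch illustration under those assumptions, not the actual implementation of the lime library:

```python
# Simplified sketch of the local-surrogate idea behind LIME: perturb one
# instance, ask the black-box model for predictions, weight perturbed samples
# by their closeness to the original, and fit a small linear model whose
# coefficients act as a local explanation. Toy model and data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor   # stands in for any black box
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=50, random_state=1).fit(X, y)

instance = X[0]
perturbed = instance + rng.normal(scale=0.3, size=(300, 4))    # local neighborhood
preds = black_box.predict(perturbed)
distances = np.linalg.norm(perturbed - instance, axis=1)
weights = np.exp(-(distances ** 2))                            # closer samples count more

surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
print("local feature weights:", np.round(surrogate.coef_, 3))  # local explanation
```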

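Similarly, the partial dependence idea can be sketched without any plotting library: fix one feature at a grid of values, average the model’s predictions over the data, and inspect how that average changes. The model, data, and choice of feature are again illustrative:

```python
# From-scratch sketch of a partial dependence computation: for each grid value
# of one feature, overwrite that feature in every row and average the model's
# predictions. Toy model and data are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 4))
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
model = RandomForestRegressor(n_estimators=50, random_state=2).fit(X, y)

feature = 0
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 20)
partial_dependence = []
for value in grid:
    X_mod = X.copy()
    X_mod[:, feature] = value            # fix the feature at this grid value
    partial_dependence.append(model.predict(X_mod).mean())

for v, pd_val in zip(grid, partial_dependence):
    print(f"feature_0 = {v:+.2f} -> average prediction {pd_val:.3f}")
```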
Summary of the article “Explainable AI”

The goal of “explainable” or “interpretable” AI is that people can understand why an algorithm came to a result. Some methods are explainable by design (for example, linear regression or a decision tree), while others have to be made explainable post-hoc, i.e., afterward.

Examples of the latter are, above all, neural networks or very complex systems, as are common in reinforcement learning. In all cases, explainable AI is important to make the decision-making process of an artificial intelligence traceable, to answer ethical questions, and to make the model accessible.
