What Is Explainable AI (XAI)?

With every advance and its ever-increasing complexity, the field of AI becomes harder to grasp and less tangible.

Thanks to groundbreaking successes in this field, especially in machine learning, research into artificial intelligence attracts immense interest. Developing AI so that programs can learn independently and solve complex problems remains one of the most important research fields despite this growing complexity. That makes it all the more important to keep the decisions and results of artificial intelligence as understandable as possible.

This is precisely where Explainable Artificial Intelligence (XAI) comes in: users want to, and should, understand how the AI of a program works and how to evaluate the results it produces. Otherwise, there is no basis for trusting its calculations. The transparency created by Explainable AI is therefore enormously important for the acceptance of artificial intelligence. What exactly is behind this approach?

What Is Explainable Artificial Intelligence (XAI)?

Explainable Artificial Intelligence (XAI) is a neologism that has been used in machine learning research and discussions since 2004. There is still no generally accepted definition of what exactly Explainable AI is. However, the XAI program of DARPA (Defense Advanced Research Projects Agency) defines the goals of explainable artificial intelligence with the following requirements:

There should be explainable models without having to forego high learning performance. Future users should also be able to understand the emerging generation of artificially intelligent partners, to trust them to an appropriate extent, and to manage and work with them effectively.

What Is The Goal Of XAI?

Artificial intelligence has long ceased to concern only researchers and scientists. Instead, it is already an integral part of everyday life. It is therefore all the more important that the way artificial intelligence works is explained not only to designers and direct users: decision-makers in particular must be given as clear an understanding as possible of how AI works in order to create a basis for trust in the technology.

Some well-known companies are already setting a good example in this regard: in 2017, Nvidia published a post on its developer blog titled Explaining How End-to-End Deep Learning Steers a Self-Driving Car. In it, four developers explain how their artificial intelligence learns to drive autonomously. The company describes its research results transparently and uses several easy-to-follow examples to show how the system learns.

In the same year, Accenture published the guide Responsible AI: Why we need Explainable AI. In it, the technology service provider addresses aspects such as ethics and trust in machines (especially in the area of autonomous driving).

What Methods Does Explainable AI Include?

There are various methods and approaches for creating transparency and understanding around artificial intelligence. We have summarized the most important ones in the following paragraphs:

Layer-wise relevance propagation (LRP) was first described in 2015. It is a technique for determining which features of an input vector contribute most strongly to the output of a neural network.
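
To make the idea concrete, here is a minimal sketch of the epsilon variant of LRP for a tiny, randomly initialized fully connected ReLU network. The network, weights, and input are purely illustrative assumptions, not a reference implementation.

```python
# Minimal sketch of layer-wise relevance propagation (LRP, epsilon rule)
# for a tiny fully connected ReLU network. Weights and input are random
# placeholders; this is not tied to any particular library.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))        # 4 inputs -> 3 hidden units
W2 = rng.normal(size=(3, 1))        # 3 hidden units -> 1 output
x = rng.normal(size=4)              # illustrative input vector

# Forward pass, keeping activations for the backward relevance pass
a1 = np.maximum(0.0, x @ W1)        # hidden activations (ReLU)
out = a1 @ W2                       # network output (single value)

def lrp_epsilon(a_prev, W, relevance, eps=1e-6):
    """Redistribute relevance from a layer back to the previous one (epsilon rule)."""
    z = a_prev @ W                              # contributions of a_prev to each unit
    z = z + eps * np.where(z >= 0, 1.0, -1.0)   # stabilizer to avoid division by zero
    s = relevance / z                           # relevance per unit of contribution
    return a_prev * (W @ s)                     # each neuron's share of the relevance

r_hidden = lrp_epsilon(a1, W2, out)             # output layer -> hidden layer
r_input = lrp_epsilon(x, W1, r_hidden)          # hidden layer -> input layer

print("input relevances:", r_input)             # which inputs drove the output most
print("relevance sum:", r_input.sum(), "network output:", out.item())
```

The relevance scores on the inputs sum approximately to the network output, which is exactly the conservation property that makes LRP attributions interpretable.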

The counterfactual method describes how data input (text, images, diagrams, etc.) is changed in a targeted way after a result has been obtained. It is then observed how much the output changes as a result.
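
The following sketch illustrates the idea on a toy classifier: one feature of an already-classified input is nudged step by step until the prediction flips. The dataset, model, and choice of feature are illustrative assumptions.

```python
# Minimal sketch of the counterfactual method: after the model has classified
# an input, one feature is nudged step by step until the prediction flips.
# Dataset, model, and the perturbed feature are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=4, n_informative=4,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]               # prediction before any change

feature = 2                                     # feature to perturb (arbitrary here)
# Step in the direction that pushes the decision towards the class boundary
coef = model.coef_[0, feature]
direction = -np.sign(coef) if original == 1 else np.sign(coef)

counterfactual = x.copy()
for _ in range(500):
    counterfactual[feature] += 0.05 * direction
    if model.predict([counterfactual])[0] != original:
        break

print("original prediction:", original)
print("prediction after change:", model.predict([counterfactual])[0])
print("change applied to the feature:", counterfactual[feature] - x[feature])
```

The size of the change needed to flip the prediction is itself informative: a small change suggests the feature strongly drives this particular decision.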

Local Interpretable Model-Agnostic Explanations (LIME) is an explanatory model with a holistic approach. It aims to explain any machine-learning classifier and the resulting prediction, making the data and procedures understandable even for non-specialist users.
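
The sketch below reproduces the core LIME recipe by hand rather than through a specific LIME library: sample perturbations around one instance, weight them by proximity, and fit a small weighted linear surrogate whose coefficients act as the local explanation. The black-box model and data are assumptions made for illustration.

```python
# Rough, by-hand sketch of the LIME idea (not a specific LIME library API):
# perturb around one instance, weight by proximity, fit a linear surrogate.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)   # model to explain

x = X[0]                                    # the instance whose prediction we explain
rng = np.random.default_rng(0)

# 1. Sample perturbations in the neighbourhood of x
Z = x + rng.normal(scale=0.5, size=(500, X.shape[1]))

# 2. Ask the black box for its predicted probability on every perturbation
p = black_box.predict_proba(Z)[:, 1]

# 3. Weight each perturbation by its proximity to x (Gaussian kernel)
distances = np.linalg.norm(Z - x, axis=1)
weights = np.exp(-(distances ** 2) / (2 * 0.75 ** 2))

# 4. Fit an interpretable surrogate: a weighted linear model
surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=weights)

# The coefficients serve as the local, per-feature explanation
for i, coef in enumerate(surrogate.coef_):
    print(f"feature {i}: local weight {coef:+.3f}")
```

The surrogate is only valid in the neighbourhood of the chosen instance, which is the point of the method: locally, even a very complex model can often be approximated by something a human can read.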

Rationalization is a process used specifically in AI-based robots. The machine is designed so that it can explain its actions on its own.
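
As a very rough illustration, a rationalizing agent can be thought of as returning a human-readable justification alongside each action it chooses; the rule and wording below are purely hypothetical.

```python
# Toy sketch of rationalization: along with each action, the agent returns a
# human-readable justification. The rule and wording are purely hypothetical.
def choose_action(distance_to_obstacle_m: float) -> tuple[str, str]:
    """Pick an action and produce a natural-language rationale for it."""
    if distance_to_obstacle_m < 0.5:
        action = "brake"
        rationale = (f"I braked because the obstacle is only "
                     f"{distance_to_obstacle_m:.1f} m away.")
    else:
        action = "continue"
        rationale = (f"I kept going because the nearest obstacle is "
                     f"{distance_to_obstacle_m:.1f} m away, which is a safe distance.")
    return action, rationale

action, rationale = choose_action(0.3)
print(action, "->", rationale)
```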

Where And When Is Explainable Artificial Intelligence Used?

Since artificial intelligence is already being used in many areas, it is essential to ensure transparency. AI is being used more and more frequently, especially in certain industry and service sectors, which is why accountability is essential there.

There are some areas of application or industries on which there is a particular focus:

  • Antenna design
  • High-frequency trading (algorithmic trading)
  • Medical diagnostics
  • Autonomous driving (self-driving vehicles)
  • Neural network imaging
  • Training in military strategies

Anyone who has ever used a parking assistant knows how skeptical and tense you sit behind the wheel, amazed that something like this can work at all. As a user, you would like to know how a car can park by itself. It is therefore understandable that there is particular interest in transparency and an explanation of how the AI works, especially in the areas mentioned above.

