AI algorithms are often perceived as black boxes that make inexplicable decisions. Explainability (also referred to as "interpretability") is the concept that a machine learning model and its output can be explained in a way that "makes sense" to a human being at an acceptable level. Certain classes of algorithms, including more traditional machine learning algorithms such as linear models and decision trees, tend to be more readily explainable but potentially less performant. Others, such as deep learning systems, are more performant but remain much harder to explain. Improving our ability to explain AI systems is an area of active research.
Unlike traditional software, a machine learning model may offer no "if/then" logic to explain its outcome to a business stakeholder, regulator, or customer. This lack of transparency can lead to significant losses if AI models – misunderstood and improperly applied – are used to make bad business decisions, and it can also result in user distrust and refusal to adopt AI applications.
Certain use cases – for instance, leveraging AI to support a loan decision-making process – may present a reasonable financial services tool if properly vetted for bias. But the financial services institution may require that the algorithm be auditable and explainable in order to pass regulatory inspections and to allow ongoing control over the decision support agent. The European Union's General Data Protection Regulation (Regulation (EU) 2016/679) gives consumers the "right to explanation of the decision reached after such assessment and to challenge the decision" if it was affected by AI algorithms.
C3 AI software incorporates multiple capabilities to address explainability requirements. These include, for example, automated generation of “evidence packages” to document and support model output, as well as the ability to deploy “interpreter modules” that can deduce what factors the AI model considered important for any particular prediction.
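To make the idea of an interpreter module concrete, the sketch below illustrates one generic technique such modules commonly build on: permutation importance. This is not C3 AI's actual implementation; the loan-approval model, feature names, and data here are all hypothetical. The idea is to shuffle one input feature at a time and measure how much the model's predictions change – features whose shuffling changes nothing were evidently ignored by the model.

```python
import random

# Hypothetical "black box": a loan-approval rule that secretly weights
# income and debt, and ignores the applicant's zip code entirely.
def approve(income, debt, zip_code):
    return income * 0.8 - debt * 0.5 > 10

# Synthetic applicant data (illustrative values only).
random.seed(0)
data = [
    (random.uniform(0, 50), random.uniform(0, 30),
     random.choice([94103, 10001, 60601]))
    for _ in range(500)
]
# Baseline predictions of the model on the unmodified data.
baseline = [approve(*row) for row in data]

def agreement(rows):
    """Fraction of rows where the model still matches its baseline output."""
    return sum(approve(*r) == b for r, b in zip(rows, baseline)) / len(rows)

def permutation_importance(feature_idx):
    """Shuffle one feature's values across applicants and report how much
    the model's agreement with its own baseline drops (0 = ignored)."""
    col = [row[feature_idx] for row in data]
    random.shuffle(col)
    permuted = [row[:feature_idx] + (v,) + row[feature_idx + 1:]
                for row, v in zip(data, col)]
    return 1.0 - agreement(permuted)

for name, i in [("income", 0), ("debt", 1), ("zip_code", 2)]:
    print(f"{name}: importance = {permutation_importance(i):.2f}")
```

Running this shows a zero importance for `zip_code`, correctly revealing that the model never consulted it – the kind of evidence an auditor or regulator could use to verify what factors actually drove a decision.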