Explainable artificial intelligence isn’t enough: we need understandable AI


From insurance claims and loans to medical diagnoses and employment, enterprises are using AI and machine learning (ML) systems with increasing frequency. However, consumers are becoming increasingly wary of AI. In insurance, for example, a mere 17% of consumers trust AI to review their claims because they can’t comprehend how these black-box systems reach their decisions.

Explainability for AI systems is a concern practically as old as the field itself. In recent years, academic research has produced many promising XAI techniques, and a variety of software companies have emerged to supply XAI tools to the market. The issue, though, is that each of these approaches treats explainability as a purely technical problem. In reality, the need for explainability and interpretability in AI is a much larger business and social problem, one that requires a more comprehensive solution than XAI offers.

XAI Only Approximates the Black Box

It is perhaps easiest to understand how XAI works through an analogy. Consider another black box: the human mind. We all make decisions, and we are more or less aware of the reasons behind them (even if we struggle to articulate them when asked!). Now imagine yourself (the XAI) observing another person’s (the original AI model’s) actions and inferring the rationale behind them. How well does that usually work for you?
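To make the analogy concrete, here is a minimal, hypothetical sketch of one common XAI pattern: a post-hoc surrogate, where a simple, human-readable model is trained to mimic a complex model’s outputs. The dataset, model choices, and library (scikit-learn) are illustrative assumptions, not anything from this article; the point is only that the “explanation” describes an approximation, and the fidelity score measures how far that approximation can drift from the black box itself.

# Illustrative sketch (assumes scikit-learn and numpy are installed); the data
# and models are placeholders, not the systems discussed in the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# A synthetic dataset and an opaque "black box" model.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# A shallow decision tree trained to mimic the black box's predictions,
# standing in for a post-hoc explainer.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the simple "explanation" agrees with the black box.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"Surrogate agrees with the black box on {fidelity:.0%} of test cases")

Any disagreement between the two models is exactly the gap at issue here: the explanation reflects an approximation of the model, not the model itself.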

Explainable to Whom?

Another quick thought experiment: imagine the imperfect explanations of XAI were, instead, perfect. Now, invite someone who isn’t a data scientist to review the model’s decisions: say, an executive responsible for a billion-dollar line of business who must decide whether to greenlight a high-impact ML model.
