Why the ‘why way’ is the right way to restore trust in AI
As more organizations rely on AI to deliver services and consumer experiences, establishing public trust in AI is crucial as these systems begin to make harder decisions that impact customers.
By Dr. Alain Briançon, VP of Data Science and CTO for Cerebri AI.
“Why? - Because I am your mother, that’s why.” - My mom/your mom/everyone’s mom.
Artificial Intelligence is growing in sophistication, autonomy, and market reach, offering transformational opportunities for businesses and their customers. AI relies on the collection and smart processing of personal information to function. However, the privacy scandals of social media and recent breaches of consumer data have eroded consumer confidence, not only around data usage but also around the implications of AI’s seeming omnipotence.
We live in an age of maximum customer empowerment and, as a result, maximum business anxiety. Today, customers use multiple channels to interact with your brand, spend more, and have access to more information about you than ever before. Customers are two swipes away from a list of reasons why they should switch to your competitor.
Banks, global electronics and vehicle OEMs, service providers, and governments used to wait and react to customer interactions once customers visited their stores, dealerships, branches, or offices. Today’s world is one where customers expect proactive experiences. Enterprises cannot afford to cast a broad promotional net, sit back, and hope for the best. They must observe and adapt to each customer.
In this era, organizations cannot afford to be perceived as rigid or dogmatic. For these reasons, many organizations have been relying on AI. Besides the obvious benefits in personalization, AI also inherently creates ‘looseness’ or ‘fuzziness’, because it uses multiple factors, events, variables, and features. When leveraging that ‘fuzziness’, brands and businesses need to tread carefully, avoid harming consumer goodwill… and manage the all-important trust.
The goodwill consequences of AI deployment have been minimal because, so far, the most visible use cases of AI have been in the realm of “consumer lifestyle” (CLS). There are many types of AI systems affecting CLS: recommendations (‘you should like this’ or ‘take the second exit on the roundabout’); interpretations (‘this is a picture of your aunt Helen in your photo collection’); recognition (‘this is what you said’); and more.
Things change as AI deployment reaches the “Consumer Life Critical” (CLC) stage, when businesses make hard decisions about their customers that affect those customers’ life patterns. As a gigabyte of prevention is worth a terabyte of cure, AI must protect itself against an impending backlash of consumer concerns and the resulting reactions from politicians.
THE key defense here is explainability. Unlike dealing with our mothers, in any customer-facing activity – whether it is retail, finance, healthcare, or hiring and firing – when AI is used to classify types/genres and to restrict choices and transactions that impact CLC, goodwill must be earned and trust kept.
Explaining explainability
Within the European Union, any AI decision that has an impact on customers is legally required to be ‘explainable’ (GDPR Recital 71). In the US, the Equal Credit Opportunity Act, Title 12, requires sharing the reasons for an adverse action. Regardless of the regulatory landscape, AI outcomes must be explained.
Explainability must not be designed as a single-audience add-on. Data scientists use machine learning solutions to create value, increase engagement, and drive better key performance indicators (KPIs) for their business. Too often, data scientists slap feature importance on as a substitute for explainability. “This is what drove the decision” is a cheap line. As an aside, care should be taken with continuous versus discrete variables, because, at times, bad encoding is really what “drove the decision.”
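As a hedged illustration of that last point (synthetic data and a hypothetical “noise_id” column standing in for a badly encoded identifier), the sketch below shows how impurity-based feature importance can be inflated by a high-cardinality variable that carries no real signal, while permutation importance on held-out data exposes it:

```python
# Sketch: why raw feature importance is a cheap substitute for explainability.
# A high-cardinality, badly encoded column with no real signal can still rank
# highly on impurity-based importance.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=5, n_informative=3,
                           random_state=0)
rng = np.random.default_rng(0)
# "noise_id": e.g. a customer ID accidentally encoded as a numeric feature.
noise_id = rng.integers(0, 1000, size=len(y)).reshape(-1, 1)
X = np.hstack([X, noise_id])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

print("impurity-based importances:", model.feature_importances_.round(3))
# Permutation importance on held-out data tends to expose the spurious column.
perm = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
print("permutation importances:   ", perm.importances_mean.round(3))
```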
When dealing with regulators, machine learning models are at times mapped to decision trees (albeit, sometimes, very large and deep ones). Decision trees are deemed "interpretable": a subject matter expert can look at one and trace the path of decisions the AI followed to arrive at an outcome. But is interpretability the same as explainability? Explaining explainability can be challenging. These steps are necessary, but far from enough.
Some decision trees might just as well be a forest (picture courtesy Alain Briancon).
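A rough sketch of that mapping, on synthetic data and with an illustrative depth limit: fit a shallow “surrogate” tree to a complex model’s own predictions. The tree’s paths are traceable, but its fidelity score shows how much of the black box’s behavior the interpretable stand-in actually captures:

```python
# Sketch: a global surrogate decision tree approximating a complex model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=5000, n_features=8, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=1)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the model it explains.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2%}")
print(export_text(surrogate))  # human-readable decision paths
```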
Data scientists’ designs must be built in terms of business impact, not model impact. If you cannot put a dollar measurement or market-impact measurement on your results, you have not done enough for quality or explainability.
Explainability must be designed as a quality metric AND an audit method. Model quality is important: every model deployed should be measured against more than a dozen technical performance metrics. The impact of missing data, data-source quality, the customers expected to be affected, the usage of variables and features across the lifecycle of deployment, and the valuation of impact should all be part of the design from the get-go.
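One minimal way to make this concrete, sketched below with hypothetical model and field names, is to persist an audit record alongside each deployed model that captures its quality metrics, missing-data rates, the features it actually consumes, and the population it was trained on:

```python
# Sketch: an audit record persisted with each deployed model (illustrative values).
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class ModelAuditRecord:
    model_id: str
    trained_at: str
    metrics: dict             # e.g. AUC, log-loss, calibration error, ...
    missing_data_rates: dict  # share of nulls per input feature
    features_used: list       # features the deployed model actually consumes
    training_population: str  # which customers the model was fitted on

record = ModelAuditRecord(
    model_id="churn_v3",  # hypothetical
    trained_at=datetime.now(timezone.utc).isoformat(),
    metrics={"auc": 0.81, "log_loss": 0.43},
    missing_data_rates={"income": 0.12, "tenure_months": 0.0},
    features_used=["income", "tenure_months", "last_contact_days"],
    training_population="retail customers, 2018-2019",
)
with open("churn_v3_audit.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```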
While reviews by subject matter experts are the first line of defense to vet the rationale behind AI-powered decisions, the explanation must be expressed outside the language of AI geeks.
Explaining is quality control on Cerebri Values CX platform (picture courtesy Alain Briancon)
A sound level of understanding needs to be adapted to CEOs, IT, SMEs, and, of course, customers. For CEOs and CTOs, understanding the impact of missing and imputed data, of protected and private attributes, and of balancing multiple KPIs should be integral to the rollout of AI across their businesses. Care needs to be taken to ensure model training is unbiased and fair (whether it uses supervised, unsupervised, or reinforcement learning) and avoids forcing a fit to preconceived behaviors.
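A hedged sketch of one such basic check, with illustrative column names: compare the model’s positive-decision rate across groups of a protected attribute (a demographic-parity gap), and flag large gaps for review:

```python
# Sketch: one simple fairness check on model scores (illustrative data).
import pandas as pd

def demographic_parity_gap(scores: pd.Series, group: pd.Series,
                           threshold: float = 0.5) -> float:
    """Largest gap in approval rate between any two groups."""
    approve = (scores >= threshold).groupby(group).mean()
    return float(approve.max() - approve.min())

df = pd.DataFrame({
    "score":  [0.9, 0.4, 0.7, 0.2, 0.8, 0.3],  # model outputs (illustrative)
    "gender": ["F", "F", "F", "M", "M", "M"],   # protected attribute
})
gap = demographic_parity_gap(df["score"], df["gender"])
print(f"approval-rate gap across groups: {gap:.2f}")
```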
Large-scale organizations often gather customer data as one “customer journey” per function, and then analyze these journeys for insights that drive engagement and financial results. However, true customer journeys cut across sales, marketing, support, and every other function that touches the customer. That means any AI-driven decision impacts multiple departments and P&L centers.
Goodwill within an organization is important as well. Robbing Q4 product sales to fund Q3 service sales might be OK if everyone knows about it.
The importance of UX
Audit means inspection and traceability of decisions. It implies a friendly user interface integrated with normal AI operations. Designing for explainability can require trading some performance for it. Data scientists must strive for best-in-class design, and then pull back performance just enough to provide explainability.
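A minimal sketch of that trade-off on synthetic data: train a best-effort model, then a depth-constrained one that is easier to walk through, and measure how much predictive performance the constraint actually costs on held-out data:

```python
# Sketch: quantifying the performance cost of an explainability constraint.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=10000, n_features=12, n_informative=6,
                           random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

best_effort = GradientBoostingClassifier(max_depth=6, random_state=2).fit(X_tr, y_tr)
explainable = GradientBoostingClassifier(max_depth=2, random_state=2).fit(X_tr, y_tr)

for name, m in [("best-in-class", best_effort), ("explainable", explainable)]:
    auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
    print(f"{name:>14}: AUC = {auc:.3f}")
```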
Focusing on explainability as a quality metric has additional benefits that compensate for potential performance issues, especially when dealing with systems that leverage customer journeys, in contrast to factor-based or demographic-based systems, which only look at static variables.
Explaining is inherently a causal interaction. New techniques are emerging to deal with causality, which in turn improve the performance of models based on customer journeys. They include Shapley-value analysis, do-calculus, interventional logic, counterfactual analysis, Granger causality, and graph inference. These techniques can also be used for feature engineering and improve modeling significantly.
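As one hedged example of the techniques listed above, the sketch below uses the open-source SHAP library to compute Shapley-value attributions for a model trained on synthetic data; each row of attributions reads as “why did the model score this customer this way?”:

```python
# Sketch: per-prediction Shapley-value attributions with the SHAP library.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=3000, n_features=6, random_state=3)
model = GradientBoostingClassifier(random_state=3).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # one row of attributions per customer
print(shap_values[0])  # contribution of each feature to the first prediction
```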
There are significant benefits in building explainability and interpretability into an AI system. Alongside helping to address business pressures, adopting good practices around accountability and ethics improves confidence in AI, thus hastening deployment for CLS applications. An enterprise will be in a stronger position to foster innovation and to move ahead of its competitors in developing and adopting new AI-driven capabilities and insights.
For AI to be adopted thoroughly, the backlash against the obvious abuses of privacy by social media (in the Western world) and against the ‘slap-happy’ approach to data security has to be worked out. To succeed in the long term, AI must be impact- and outcome-centric. That means being stakeholder-explanation-centric. Above all, AI must be customer-centric, and that means explainability embedded from the beginning.
“Why? - Because I am your customer, that is why.”
Bio: Dr. Alain Briançon, Chief Technology Officer and VP Data Science of Cerebri AI, is a serial entrepreneur and inventor (over 250 patents worldwide) with vast experience in data science, enterprise software, and the mobile space.