Why Explainable AI Must be Part of Your AI Solution


In February 2021, Zillow began using its Artificial Intelligence (AI)-generated "Zestimate" to make initial offers on homes. The idea behind this portion of Zillow's business, Zillow Offers, was to buy up homes for resale. But this arm of the business collapsed less than a year later because it was not able to sell off enough of the properties it had purchased. One of the issues with this high-stakes application of AI was the inaccuracy of the Zestimate system's predictions of prices 3 to 6 months into the future.

Wait, you're thinking, isn't this in conflict with the promise of AI, that automated predictive power can make business decisions airtight? The reality is that predictive AI models, even when developed by experts with the best of intentions, are vulnerable to everything from gross miscalculations to misuse to other unforeseen consequences.

Zillow might have prevented the mishap, or better yet achieved great success, with a platform based on Explainable AI. Here's a primer on how AI transparency lets users better interpret what their data models are saying, and why it's key to responsible practices as the use of no-code AI blows up and rapidly transforms aspects of society.


No-code AI adoption is about to blow up

The latest trend in applied AI is "no-code AI," which aims to put the power of data scientists into more hands. Tools to build your own predictive systems without writing code are sprouting up, and everyday Joes and Jills are beginning to uncover all the potential use cases.

As Eye on A.I. podcaster Craig S. Smith puts it, “Eventually, the broader public will be able to create AI-enabled software in much the same way that teenagers today can create sophisticated video effects that would have required a professional studio a decade or two ago.”[1]

No-code AI is going to lead to many novel applications of artificial intelligence over the next few years. It will accelerate adoption of and reliance on the technology. It's on everyone involved in building and using AI to ensure that the resulting changes are beneficial and that unintended consequences are minimized.

There are important underpinnings for the broader public to understand. In a world where no-code AI is becoming more accessible for both enterprises and consumers, proper and responsible use must be aligned with creative intentionality. Explainable AI is the way to achieve this.


What is Explainable AI and why does it matter?

Would you trust a doctor who recommended a prescription but couldn’t say why? How would you feel if they used a bunch of medical jargon to explain? What if the doctor couldn’t point to trials or examples of the treatment being effective for similar situations? Trusting an AI system and feeling comfortable with its recommendations work the same way.

Using AI to generate visualizations is like being able to see your own medical records through the eyes of a doctor. Think of it as taking all your relevant measurements like weight, heart rate, and lab test results and providing the insight that comes with medical school, years of experience, and access to research. Then see how you and your current state of health look in context with potentially millions of others.

What this means for medicine is the ability to get more clarity around a predicted outcome or recommended action based on everything that has been learned from similar people in similar health. Providers can show a patient how each condition or detail in their case may be playing a role in a prediction, not just give a statistical probability of how many years someone has to live.

Explainable AI (XAI) is a set of processes and methods that allows human users to comprehend and question the results created by machine learning algorithms.[2] Ultimately, it enables humans to trust all the data crunching done by AI, because people can understand what its outputs are based on.

Here’s another example to illustrate. Consider a bank making predictions about the probability of a customer defaulting on their credit card. If the probability of default gets too high, the customer management team may be alerted to review the case more closely or make contact with the customer. Ideally, an AI platform provides explanations of these predictions, because first, the customer will want to know specifics of why they are being flagged. And second, the people who are expected to take action should do so based on the data and calculations that were used to drive them, so they take the right action for that particular case. Beyond that, we should leave it to the human experts consuming these recommendations to exercise judgment and even overrule the model at times.
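To make the credit card example concrete, here is a minimal sketch of how per-feature explanations can be produced for a simple logistic default model. The feature names, weights, and customer values below are fictitious illustrations, not any bank's actual model or the Virtualitics platform's API:

```python
# Minimal sketch: per-feature explanations for a linear default-risk model.
# All feature names, weights, and values are fictitious.
import math

def predict_default_probability(weights, bias, customer):
    """Logistic model: probability of default from weighted features."""
    score = bias + sum(weights[f] * v for f, v in customer.items())
    return 1.0 / (1.0 + math.exp(-score))

def explain(weights, customer):
    """Contribution of each feature to the log-odds, largest impact first."""
    contributions = {f: weights[f] * v for f, v in customer.items()}
    return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

weights = {"utilization_ratio": 2.0, "late_payments": 0.8, "years_as_customer": -0.1}
customer = {"utilization_ratio": 0.9, "late_payments": 3, "years_as_customer": 4}

prob = predict_default_probability(weights, bias=-3.0, customer=customer)
for feature, impact in explain(weights, customer):
    print(f"{feature}: {impact:+.2f} to log-odds")
print(f"predicted probability of default: {prob:.2f}")
```

An explanation like this tells the review team not just that the customer scored 0.69, but that the score is driven mostly by late payments and credit utilization, which is exactly the context a human needs to decide whether to act or overrule.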

[1] The New York Times



Example of explanation and context behind the predicted probability of default. The explanation points to which factors had a significant impact on the prediction and why.*


Reality checking AI predictions

A practical AI platform allows you to model scenarios using different input values, because conditions may have changed since the data was originally collected, or may be likely to change soon. This lets decision-makers properly assess situations and take action accordingly.
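This kind of scenario modeling can be sketched in a few lines: rerun the model with one input changed and report how the prediction moves. The scoring function and values below are fictitious stand-ins for illustration only:

```python
# Sketch of what-if scenario planning: rerun a model with one input changed
# and observe how the prediction moves. Model and values are fictitious.
def default_probability(customer):
    # Toy stand-in scoring model for illustration only.
    score = 0.002 * customer["balance"] - 0.000005 * customer["estimated_home_value"]
    return max(0.0, min(1.0, score))

def what_if(customer, feature, new_value):
    """Return (baseline prediction, scenario prediction, change)."""
    baseline = default_probability(customer)
    scenario = dict(customer, **{feature: new_value})
    changed = default_probability(scenario)
    return baseline, changed, changed - baseline

customer = {"balance": 400, "estimated_home_value": 90_000}
base, new, delta = what_if(customer, "estimated_home_value", 40_000)
print(f"baseline {base:.2f} -> scenario {new:.2f} (change {delta:+.2f})")
```

The value for decision-makers is the delta: seeing that a drop in estimated home value moves the default probability up by 25 points is far more actionable than either number alone.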

Return to the bank predicting the probability of credit card default. Before the customer management team reviews a case or contacts a customer, the platform should explain the prediction in terms of the underlying data and calculations, so the people expected to act can take the right action for that particular case.


What-if scenario planning enables users to change input values and observe how an AI prediction would change. In this case, the user has changed the Estimated Home Value.*


Impact explanation of an AI workflow. While the probability of default has increased by almost 10%, built-in guardrails in the system warn that the model was not trained to handle this scenario.*

Why a prediction has been made (context) and whether the scenario is within the expected use of the AI model are key items to evaluate before you take action on it. Whether the AI was built by a professional data scientist or a rookie analyst leveraging no-code AI, it would be irresponsible—and financially risky—to omit these types of checks and blindly consume AI outputs.
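A guardrail like the one described above can start as something very simple: a check that the what-if inputs stay within the range the model saw during training. This is a minimal sketch with made-up ranges, not how any particular platform implements it:

```python
# Sketch of a training-range guardrail: warn when a what-if input falls
# outside the range the model saw during training. Ranges are fictitious.
TRAINING_RANGES = {
    "estimated_home_value": (50_000, 900_000),
    "credit_score": (300, 850),
}

def out_of_range_features(scenario):
    """Return a warning message for each input outside its training range."""
    warnings = []
    for feature, value in scenario.items():
        low, high = TRAINING_RANGES[feature]
        if not (low <= value <= high):
            warnings.append(f"{feature}={value} outside training range [{low}, {high}]")
    return warnings

scenario = {"estimated_home_value": 40_000, "credit_score": 700}
for w in out_of_range_features(scenario):
    print("WARNING:", w)
```

Surfacing this warning next to the prediction tells the user the model is extrapolating, which is precisely when blind trust is most dangerous.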


Responsible AI is a requirement, not an option

With no-code AI making it possible for anyone with data to build predictive systems, there is growing potential for more mishaps like Zillow Offers, and more pressure for guardrails around the risks. Adaptive AI systems, which continuously update as new data comes in, especially need sustained monitoring and review: they are susceptible to data drift that compromises the accuracy of their predictions.
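As a rough illustration of the monitoring adaptive systems need, here is a minimal drift check that compares the mean of each incoming feature against its training-time statistics. All numbers and the threshold are fictitious; production systems use more robust tests, but the principle is the same:

```python
# Sketch of a simple data-drift check: flag features whose incoming batch
# mean has shifted from the training mean by more than a threshold,
# measured in training standard deviations. All stats are fictitious.
TRAINING_STATS = {"income": (60_000.0, 15_000.0), "age": (40.0, 12.0)}  # (mean, std)

def drifted_features(batch, threshold=0.5):
    """Return names of features whose batch mean drifted beyond threshold sigmas."""
    flagged = []
    for feature, values in batch.items():
        mean, std = TRAINING_STATS[feature]
        batch_mean = sum(values) / len(values)
        if abs(batch_mean - mean) / std > threshold:
            flagged.append(feature)
    return flagged

incoming = {"income": [95_000, 102_000, 88_000], "age": [41, 38, 43]}
print(drifted_features(incoming))  # income has drifted well above training levels
```

When a check like this fires, the right response is the one the article argues for throughout: pause, review with humans, and retrain or retire the model rather than keep consuming its outputs blindly.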

In fact, the data used to train AI models can unintentionally contain bias that then flows into harmful predictions. For example, in 2017, Google apologized after its Natural Language Processing (NLP)-based sentiment analysis tool gave negative sentiment scores to words like "gay" and "homosexual" while scoring "straight" as neutral. An application originally meant to democratize NLP instead exposed how important it is to review training data for pre-existing biases, even at an organization taking measures to avoid them. So it's critical to thoroughly explore the data used to create your models, with both data scientists and subject matter experts, before putting them into production. No-code AI makes this task easier by letting you use AI assistance to look at the data the same way a model would.

Explainable AI enables responsible data use, and this can help with burgeoning corporate responsibility and regulatory requirements around the technology. The European Union's tough General Data Protection Regulation (GDPR) sets requirements for data privacy and for automated decision-making, including a right to meaningful information about the logic involved in automated decisions. Violations can mean hefty fines in the tens of millions of euros.

And in the U.S., at least 17 states considered AI-related bills or resolutions in 2021, with some passed into law in Alabama, Colorado, Illinois, and Mississippi. Expect more regulation requiring organizations to explain just what’s in the black box behind their models as AI gets incorporated into more operations.

These risks and regulations don’t have to be a deterrent. It is possible to foster full trust in AI outputs and deliver exactly what is promised without compromising security or data privacy. What’s important is to set yourself up to create and use AI in a responsible manner on the right platform.


A practical AI platform makes things explainable and transparent

The simplicity and accessibility of no-code AI substantially reduce the barriers to starting a new experimental application of AI. But then what? Businesses still struggle to get beyond the initial use case because no-code AI alone doesn’t mean your solution is meeting other practical needs.

Data visualizations and explanations are two such needs. They provide the necessary context and rigor that make AI trustworthy and responsible. A practical AI platform delivers on these and offers the capacity to scale, as no-code AI increases the volume of projects.

Returning to our Zillow Offers example, imagine if XAI had been paired with visualizations and made available to consumers of the Zillow Offers and Zestimate system. These insights could have brought transparency to bad assumptions or biased data. Stakeholders would have had a more complete picture of the story playing out in this predictive ecosystem. The information could have served as a crucial aid for decisions about buying, selling, and flipping homes. And this likely would have led to a much more successful outcome for the business.

AI-generated predictions need to be presented with sound reasoning and with transparency about the intended use of the application. Those explanations need to be presented in a manner that’s easily interpretable by common users of the system. If a no-code AI system is intended to be used by anyone, then the model and predictions should be easy to follow for everyone.

*All screenshots are from the Virtualitics AI Platform and contain fictitious data.

Author: Aakash Indurkhya