Machine Learning Models and the “Black Box Problem”

Wallaroo.AI
Nov 30, 2022


Many machine learning models are built from hidden layers of nodes and processes, each layer transforming data and passing it forward to the next. One of the biggest issues facing AI/ML is the lack of explainability that results. As models become more sophisticated, they evolve from simple linear models into non-linear combinations, sometimes of other, more complicated models, and most out-of-the-box machine learning systems make only a model’s inputs and outputs observable. This is known as the “Black Box Problem” in machine learning.

Because a model’s internal computations are hidden within its many operational layers, the model becomes what is often referred to as a “black box.” This makes the use of AI problematic in highly regulated industries like healthcare and financial services, but even at a basic level, the lack of explainability makes it difficult for data scientists to identify sources of bias in their data. And beyond the data scientist working on the model, each ML system has a variety of stakeholders who need different levels of visibility into what models are doing and why. For example, compliance teams want to understand which variables are driving model predictions in order to ensure they comply with regulations, particularly in areas such as retail financial services for products like loan approvals.

Figure 1: Stakeholders and the MLOps ecosystem

MLOps and the “Black Box Problem”

An ML model suitable for production applications typically has so many coefficients that it becomes difficult to diagnose, and untangling the various threads to discern what is driving predictions is a near-impossible task. Model explainability is the practice of using tools and methods within a model deployment platform to explain the effects of different features on the specific predictions provided to the end user. As the use of data science through model deployment continues to grow, ML engineers are becoming increasingly focused on finding suitable solutions to the “black box problem.”

A platform whose basic functionality is built with explainability in mind is a key element in supporting model transparency. To understand models as they become more complex, you need to take a more indirect approach, such as using Shapley values, which estimate the effect of a specific feature on a specific prediction. This kind of insight into a model’s behavior, using real-world data, helps data teams identify odd behavior in the model inside the environment where it runs. A toy worked example of the Shapley value idea follows below.
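
As a toy illustration (the two-feature model, its coefficients, and the baseline values below are invented for this example), a feature’s Shapley value is its marginal contribution to the prediction, averaged over every possible coalition of the other features:

```python
from itertools import combinations
from math import factorial

# Toy "model": a made-up risk score from two features.
def predict(income, debt):
    return 0.3 * income - 0.5 * debt

# The prediction we want to explain, and a baseline (e.g. an average customer).
instance = {"income": 80, "debt": 30}
baseline = {"income": 50, "debt": 20}
features = list(instance)
n = len(features)

def value(coalition):
    """Model output when only the coalition's features take the instance's values."""
    inputs = {f: (instance[f] if f in coalition else baseline[f]) for f in features}
    return predict(**inputs)

shapley = {}
for f in features:
    others = [g for g in features if g != f]
    contribution = 0.0
    for size in range(len(others) + 1):
        for subset in combinations(others, size):
            # Weight each coalition by how often it appears across orderings.
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            contribution += weight * (value(set(subset) | {f}) - value(set(subset)))
    shapley[f] = contribution

print(shapley)  # {'income': 9.0, 'debt': -5.0}
```

The two values sum to the difference between the explained prediction and the baseline prediction, which is what makes them useful for attributing a specific outcome to specific features.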

SHAP and Explainable Models

SHAP (SHapley Additive exPlanations) is commonly employed to analyze complex models. By quantifying the influence of individual input features within the model, SHAP provides clear explainability for a model’s predictions: given an input, it assesses the relative importance of each piece of input information to the resulting prediction. SHAP can even be employed to provide transparency and explainability for models deployed to analyze large amounts of complex data efficiently and at scale.
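
A minimal sketch of what this looks like with the open-source shap package, assuming a scikit-learn model trained on synthetic tabular data (the dataset, feature names, and model here are stand-ins for your own):

```python
import shap
import pandas as pd
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for tabular training data; replace with your own features.
X, y = make_regression(n_samples=500, n_features=5, noise=0.1, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Build an explainer from the model and a background sample of the data.
explainer = shap.Explainer(model, X.sample(100, random_state=0))

# Explain a handful of individual predictions.
shap_values = explainer(X.iloc[:5])

# One row per prediction, one column per feature: each value is the estimated
# contribution of that feature to that prediction relative to the baseline.
print(pd.DataFrame(shap_values.values, columns=X.columns))
```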

SHAP is not the only explainability solution in use, however; methods that provide explanations after predictions have already been made are also a popular topic in some machine learning circles. A family of procedures known as “post-hoc explainability” attempts to explain inferences by reverse engineering the predictions. Even for a “black box” model, post-hoc explainability can reconstruct the details behind a prediction after it is made to provide transparency. The main drawback of this approach, of course, is its inability to provide real-time explainability for a model in production.
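
One common post-hoc technique, offered here only as an illustration and not as the method this post recommends, is a global surrogate: after the black-box model has made its predictions, an interpretable model is fit to mimic them and the explanation is read off the surrogate. A minimal sketch with stand-in models and data:

```python
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in "black box" model trained on synthetic data.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(6)])
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Post hoc: collect the black box's predictions and reverse engineer them
# by fitting a small, interpretable surrogate tree to reproduce them.
predictions = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, predictions)

# How faithfully the surrogate mimics the black box, plus its readable rules.
print("fidelity:", surrogate.score(X, predictions))
print(export_text(surrogate, feature_names=list(X.columns)))
```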

This lack of real-time explainability is why Wallaroo decided to build SHAP-based model explainability into our platform. It is better suited to providing real-time explainability to users and delivers far more useful data to analysts monitoring models in production. To integrate SHAP explainability into an MLOps deployment system, you need to run a SHAP analysis against a model or pipeline while predictions are being made in production. The Wallaroo platform has an intuitive UI designed for doing just that: submitting SHAP job requests and analyzing the results. You can then use explainability to understand the effects of specific features on whichever predictions you want to analyze, with minimal impact on your production systems.
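
The sketch below is a generic illustration of that workflow rather than the Wallaroo API itself: it takes a batch of recent inference inputs (simulated here, since real ones would come from the pipeline’s logged requests) and runs the SHAP analysis as an offline job, keeping the impact on the serving path minimal.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Stand-ins for the deployed model and its training data.
X, y = make_regression(n_samples=1000, n_features=5, noise=0.1, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Stand-in for inputs captured from production inference logs.
rng = np.random.default_rng(1)
production_inputs = X.sample(n=200, random_state=1) + rng.normal(0, 0.1, size=(200, 5))

# Run the SHAP analysis as an offline batch job, away from the serving path.
explainer = shap.Explainer(model, X.sample(100, random_state=0))
explanation = explainer(production_inputs)

# Aggregate per-feature impact across the batch to spot features that are
# driving production predictions more (or less) than expected.
mean_abs_impact = (
    pd.DataFrame(explanation.values, columns=X.columns)
    .abs()
    .mean()
    .sort_values(ascending=False)
)
print(mean_abs_impact)
```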

To harness explainability for your ML models using SHAP, reach out to us at deployML@wallaroo.ai to speak with one of our experts and schedule a demo. You can also join the Wallaroo community and give our platform a try with Wallaroo: Community Edition.

Written by Wallaroo.AI

90% of AI projects fail to deliver ROI. We change that. Wallaroo solves operational challenges for production ML so you stay focused on business outcomes.