Bias in AI and Steps to Remediate Its Impact

Wallaroo.AI
4 min read · Aug 22, 2022


Franz Kline — Sabra

As businesses and governments turn to AI and Machine Learning, the underlying assumption is that since these models are based on mathematical relationships in the data, these algorithms, unlike humans, are not biased.

However, models will codify biases contained in the training data. Consider:

  • Facial recognition software continues to be more accurate in correctly identifying white, male faces, meaning it disproportionately misidentifies minorities, including misidentifying black politicians as criminals.
  • Models for recommending healthcare management during the early days of the COVID-19 outbreak led to discriminatory decisions and some patients receiving worse care “because the datasets used to train the algorithms reflected a record of historical anomalies and inequities” (via HBR).
  • In financial services, we see evidence of historical redlining influencing current lending algorithms, leading to higher denial rates for minority applicants that can’t be explained by risk variables like credit score or income.

In short, if the training data comes from a biased environment, the model will reflect those biases in the inferences it generates. Law enforcement is a case in point: agencies are trying to correct for human biases by turning to data-centric approaches, but historical biases in which types of crimes were pursued (e.g., property crimes but not white-collar crimes) and which areas had the heaviest police presence (e.g., searching for drugs in inner cities but not in professional centers) mean that crime-prediction algorithms are being trained on biased datasets. For example, if police are more likely to re-arrest a black offender than a white offender who committed the same crime, the algorithm will learn that race is more predictive than crime type and conclude that black people are “riskier”.

Concrete Steps to Minimize the Harm of Biased Algorithms in Production

So what can be done? Even defining “bias” is difficult and open to interpretation across different industries and products. However, the worst time to act to remove bias in algorithms is once lawyers and regulators are involved. Based on our experiences in academia and industry, we’ve come up with three recommendations to either reduce the likelihood of biased algorithms going live or help the business detect bias more quickly.

Before we proceed we need to apply a few caveats:

  • There is no easy automation. Data scientists will need to work closely with the line-of-business to apply domain specific knowledge and definitions around bias and risk. Zip codes might be good for assessing flood risk in homeowners insurance but not credit risk for loans.
  • The techniques we outline below are good starting points for flagging results that need further causal analysis, but on their own they will likely not give definitive answers about whether those results stem from biased models trained on biased datasets.
  • Confounding factors can mislead comparisons between groups. For example, in the 1970s, statisticians analyzed why female applicants to UC Berkeley graduate programs were admitted at a lower rate than males and determined that the discrepancy was primarily the result of which departments women applied to, a pattern hidden by the overall admission rate (Simpson’s Paradox); the sketch after this list illustrates the effect.
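
To make the paradox concrete, here is a minimal sketch with made-up admission counts in the spirit of that case (the departments and numbers are purely illustrative): within each department women are admitted at an equal or higher rate, yet their overall admission rate is lower because they apply disproportionately to the more selective department.

```python
import pandas as pd

# Hypothetical admissions counts illustrating Simpson's Paradox: women are
# admitted at a higher rate within every department, but at a lower rate
# overall, because most women apply to the more selective department.
df = pd.DataFrame([
    # department, gender, applicants, admitted
    ("Engineering", "men",   800, 480),   # 60% admit rate
    ("Engineering", "women", 100,  65),   # 65% admit rate
    ("Humanities",  "men",   200,  40),   # 20% admit rate
    ("Humanities",  "women", 900, 225),   # 25% admit rate
], columns=["dept", "gender", "applicants", "admitted"])

overall = df.groupby("gender")[["applicants", "admitted"]].sum()
overall["rate"] = overall["admitted"] / overall["applicants"]
print(overall["rate"])   # men: 0.52, women: 0.29 -- looks biased against women

by_dept = df.assign(rate=df["admitted"] / df["applicants"])
print(by_dept[["dept", "gender", "rate"]])   # women's rate is higher in every department
```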

With all that said, there are still standard processes that all data teams in all industries can implement to reduce the likelihood and impact of biased models:

1) Understanding the Variable(s) Driving Results

Data scientists can (and should) use techniques like SHAP analysis on their models before deployment to explain the relationship between features and predictions. In theory, these techniques can highlight that, say, a lending model is weighing zip code more heavily than income and credit score. But keep in mind that while SHAP can tell you that the model is relying on feature X, it cannot tell you whether feature X is directly problematic, or whether it is a proxy for or influenced by something that is.
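
As a rough sketch of what this looks like in practice (the dataset, feature names, and model choice below are illustrative assumptions, not a prescription), SHAP’s explainers can surface both the global weight of each feature and the drivers behind an individual prediction:

```python
# Sketch: inspect which features drive a lending model's predictions with SHAP.
# The CSV, feature names, and model are hypothetical placeholders.
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

X = pd.read_csv("loan_applications.csv")   # e.g., income, credit_score, zip_code, ...
y = X.pop("approved")

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer handles tree ensembles; shap.Explainer can pick a suitable
# explainer automatically for other model types.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global view: which features carry the most weight across all applicants?
shap.summary_plot(shap_values, X)

# Local view: which features pushed a single applicant toward denial?
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0])
```

If zip code dominates the summary plot while income and credit score barely register, that is exactly the kind of result to flag for deeper causal analysis.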

2) Shadow Testing Before Going Into Production

Training data may not always completely capture what’s going on in the real world. So instead of building an algorithm based on historical data and then taking it right to production, we recommend testing frameworks like shadow deployment to allow data teams to see what the outcomes of the algorithm would look like in the real world but without impacting the business. They can then compare how that algorithm approves or denies individuals across different populations and flag significant disparities for further causal analysis, using something like SHAP to figure out which factors to study further for hidden bias.
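
The comparison step might look something like the rough sketch below, assuming the shadow deployment logs each live request along with the candidate model’s would-be decision (the file, column names, and 80% screening threshold are assumptions, with the threshold borrowed from the common “four-fifths rule” heuristic):

```python
# Sketch: compare a shadow-deployed model's would-be approval rates across
# population segments. Decisions here were only logged, never acted on.
import pandas as pd

shadow_log = pd.read_csv("shadow_decisions.csv")   # one row per live request

# Would-be approval rate for each population segment
rates = shadow_log.groupby("group")["approved_shadow"].mean()
print(rates)

# Four-fifths-style screen: flag any group whose approval rate falls below
# 80% of the best-treated group's rate for deeper causal analysis.
flagged = rates[rates < 0.8 * rates.max()]
if not flagged.empty:
    print("Disparity flagged for further review:", flagged.to_dict())
```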

3) Quickly Detecting Divergent Outcomes

Data scientists can set up benchmarks and monitoring tasks (what we call “assays”) to automatically alert data teams to anomalies, data drift, or results breaking out of the expected distribution. These model observability capabilities depend on how you set up the models in production (e.g., are the models regionalized enough that you can see whether certain neighborhoods are harmed, or is the deployment less granular?), so this is a collaboration between data scientists and line-of-business leaders to identify areas of concern ahead of time.
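
As a simplified illustration of the underlying idea (the file names and threshold are assumptions, and a production assay would also automate the scheduling and alerting), a drift check can be as simple as comparing recent production scores against a baseline distribution captured at deployment time:

```python
# Sketch: alert when the production score distribution drifts away from the
# baseline captured when the model was validated. Inputs are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

baseline_scores = np.load("baseline_scores.npy")   # scores at validation time
recent_scores = np.load("last_24h_scores.npy")     # scores from production

stat, p_value = ks_2samp(baseline_scores, recent_scores)
if p_value < 0.01:
    print(f"Score distribution drift detected (KS={stat:.3f}, p={p_value:.4f})")
```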

***

By its nature, machine learning is about having programs sift through a volume and variety of data that a human being can’t and find patterns within it. Through the techniques above, we can bring transparency to formerly black-box algorithms and better understand their impact before it gets too broad.

You can check out Wallaroo Labs’ Responsible ML and AI Policy here.
