Apple Card Gender Bias? It Didn’t Have to Be That Way.
If you think smart, world-class companies don’t face challenges when using machine learning to automate credit decisions, just ask Apple and Goldman Sachs. Based on a flurry of angry tweets and high-profile accusations, the New York Department of Financial Services launched an investigation into potential gender discrimination by algorithms that evaluate Apple Card applicants.
Whether real or perceived, gender bias can damage the reputation of tech darlings like Apple, even if their credit decisioning process is wholly managed by someone else — in this case, Goldman. It doesn’t help when one of the accusers is an Apple co-founder.
Enova Decisions understands how to build accurate credit models — without bias. Enova Decisions is part of Enova International, a publicly traded financial services company (NYSE:ENVA) that has extended over 22 billion dollars in credit online to over 5 million customers worldwide. Enova Decisions’ team of decision scientists is well-versed in analyzing data for consistency, availability, and predictability in the context of protected classes.
Unfortunately, whether bias is intentional doesn’t matter. The law demands that all consumers, regardless of gender, be treated equally in lending decisions: “any algorithm that intentionally or not results in discriminatory treatment” violates the law.
Despite the hype, ML models aren’t to blame.
Even tech giants don’t always “get it right” when it comes to machine learning (ML) algorithms. And with books like Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy becoming New York Times best sellers, it’s no wonder that companies are questioning whether to embrace ML. However, what many naysayers fail to realize is that bias — and the discrimination that results from it — isn’t inherent in machine learning but is an output of the model development process.
For example, bias can creep into a model if the training data set isn’t representative of the actual population. Or, taken a step further, the training data set itself may be influenced by prejudice within the population. As the computer science adage goes: garbage in, garbage out. In the best case, the model will produce inaccurate predictions. In the worst case, it will discriminate against protected classes.
Enova Decisions proactively mitigates bias throughout the lifecycle of a machine learning model. During the model design phase, Enova Decisions inspects the training data and tests model outcomes for disparate impact. Once in production, Enova Decisions routinely monitors model outputs for such impacts.
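A common first-pass check for disparate impact is the “four-fifths rule” of thumb borrowed from U.S. employment guidelines: if one group’s approval rate falls below 80% of the most-favored group’s rate, the outcomes merit closer scrutiny. The sketch below illustrates that idea in minimal form; the group labels and numbers are hypothetical, and this is not Enova Decisions’ actual monitoring code.

```python
def adverse_impact_ratios(outcomes):
    """Ratio of each group's approval rate to the highest group's rate.

    `outcomes` maps a group label to a list of decisions,
    1 = approved, 0 = denied. Ratios below 0.8 fail the
    "four-fifths" rule of thumb and warrant investigation.
    """
    rates = {group: sum(d) / len(d) for group, d in outcomes.items()}
    top_rate = max(rates.values())
    return {group: rate / top_rate for group, rate in rates.items()}

# Hypothetical monitoring data: 80% vs. 60% approval rates.
decisions = {
    "group_a": [1] * 80 + [0] * 20,
    "group_b": [1] * 60 + [0] * 40,
}
ratios = adverse_impact_ratios(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here `group_b`’s ratio is 0.60 / 0.80 = 0.75, below the 0.8 threshold, so it would be flagged for review. Running this check on both training data and live production outputs is one way to operationalize routine bias monitoring.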
Demystifying the black box.
Identifying bias is one challenge. Understanding why it is happening is another. While intent does not matter in the eyes of the law, it does matter for lenders and companies in highly regulated industries. However, ML models are often called “black boxes” for their lack of transparency.
Fortunately, new techniques like SHapley Additive exPlanations (SHAP) are being developed to help data scientists explain the predictions made by machine learning models. Enova Decisions’ decision scientists use these same techniques to capture explanations for every decision. Enova Decisions clients can access these explanations through Enova Decisions’ decision management software-as-a-service, Enova Decisions Cloud.
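The intuition behind SHAP is game-theoretic: each feature’s contribution to a prediction is its Shapley value, averaged over every order in which features could be “revealed” relative to a baseline. The sketch below computes exact Shapley values by brute force for a hypothetical linear scoring model (feature names, weights, and values are all made up for illustration; production tools like the `shap` library use far more efficient approximations).

```python
from itertools import combinations
from math import factorial

def predict(x, weights):
    # Hypothetical linear credit-score model: a weighted sum of features.
    return sum(w * v for w, v in zip(weights, x))

def shapley_values(x, baseline, weights):
    """Exact Shapley values for `predict` at point x vs. a baseline.

    For each feature i, average its marginal contribution
    predict(S + {i}) - predict(S) over all coalitions S of the
    other features, with the standard Shapley coalition weights.
    """
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for size in range(n):
            for s in combinations(others, size):
                # Features in the coalition take the applicant's value,
                # all others fall back to the baseline.
                with_i = [x[j] if (j in s or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in s else baseline[j] for j in range(n)]
                coeff = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi += coeff * (predict(with_i, weights) - predict(without_i, weights))
        values.append(phi)
    return values

# Illustrative applicant: three features against a zero baseline.
weights = [2.0, 1.0, 3.0]
applicant = [1.0, 2.0, 3.0]
baseline = [0.0, 0.0, 0.0]
phis = shapley_values(applicant, baseline, weights)
```

A useful sanity check is the efficiency property: the Shapley values sum exactly to the difference between the applicant’s score and the baseline score, so every point of the decision is attributed to some feature. That per-feature attribution is what makes an otherwise opaque decision explainable.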
Through proactive risk mitigation and transparency, Enova Decisions enables businesses to not only make more accurate credit decisions, but also make those decisions in compliance with fair lending standards.
Contact us now to learn how Enova Decisions can help your business make smarter, unbiased credit decisions.