Will AI Revolutionize Regtech? Part 4: If You Do the Right Thing

Next month, the NSCP National Conference will feature a session titled “Will AI Revolutionize the Financial Services Industry?” This is the last in a series of related blog posts on AI and its impact on the industry. Part 1 covered the current state of AI development in fintech, Part 2 highlighted financial services firms currently leveraging AI in their daily operations, and Part 3 offered practical steps your firm can take to start an AI project. Now let’s turn to the ethical implications of AI development and implementation.
One of the most common ethical issues arising from AI models is bias. Recent articles and research papers have documented AI bias in Facebook’s ad-serving algorithm, facial recognition software (including Amazon’s Rekognition), and other AI applications. These biases often manifest as racial discrimination, although gender and religious discrimination are also present. In 2019, for example, Apple Card was investigated for gender bias after granting men higher credit limits than women, even when the applicants were part of the same household.
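Bias of this kind is measurable. Below is a minimal sketch, using made-up data and the “four-fifths rule” borrowed from U.S. employment law as a yardstick, of how a compliance team might run a first-pass check for disparate outcomes. The column names, sample data, and threshold here are illustrative assumptions, not standards from any regulator’s AI guidance.

```python
import pandas as pd

# Hypothetical credit-decision data; columns and values are
# made up for illustration, not drawn from any real lender.
df = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F", "F", "F"],
    "approved": [1,    1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group: a first-pass check for disparate outcomes.
rates = df.groupby("gender")["approved"].mean()
print(rates)

# The "four-fifths rule" from U.S. employment law is one common
# yardstick: flag the model if the disadvantaged group's rate falls
# below 80% of the advantaged group's rate.
ratio = rates.min() / rates.max()
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential adverse impact -- investigate before deployment")
```

A check like this won’t prove a model is fair, but it is cheap to run and surfaces the kind of disparity that triggered the Apple Card investigation.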
Poor AI Strategy Dehumanizes People
The widespread adoption of AI across industries, including financial services, will continue to transfer tasks from humans to machines. Ideally, AI serves people by automating tasks, which in turn can make jobs more fulfilling. While the transfer of administrative and simple repetitive tasks doesn’t raise major ethical concerns, many firms are pushing AI development into areas that require subjective judgment and personal communication. Poor handling of these “social” tasks dehumanizes workers.
For example, Bloomberg recently reported on Amazon’s growing use of AI to manage human resources. The company now uses algorithms to hire, rate, and fire “millions of people with little or no human oversight.” This reflects Jeff Bezos’ personal belief that “machines make decisions more quickly and accurately than people, reducing costs and giving Amazon a competitive advantage.” But at what cost?
The cost is a disconnect from the real world, a complex world populated by humans and other animals that have spent thousands of years adapting to constantly evolving environments. To cope with this complexity, people have developed judgment. The Bloomberg article suggests that Amazon’s AI strategy fails to emphasize human value; instead, it reduces all living activity to precise mathematical formulas. But are the models behind the machines capable of evaluating, in real time, constant streams of changing inputs like road conditions, weather, accidents, and the actions of other living creatures? Given that fully self-driving cars are still years or decades away, the answer is no, not right now.
Where does this leave us? Michael Spencer, editor in chief of The Last Futurist blog, writes that “AI is not being regulated properly. Without fair rules, the use of AI at work could lead to widespread discrimination and unfair treatment—especially for those in insecure work and the gig economy. Blue collar and temp jobs are under threat by AI in ways we are just beginning to understand.” To push back against AI’s potential for dehumanization, lawmakers are starting to demand oversight of and transparency into how algorithms are created, maintained, and implemented. For example, Senator Chris Coons (D-DE) introduced the Algorithmic Fairness Act in December 2020 to examine the effects of algorithms on our quality of life and on equal access to opportunity.
Unintended Consequences
In some cases, even with the best intentions, algorithms designed to help humans can go wrong. For example, hospitals are using algorithms to screen somewhere between 100 and 200 million patients a year and assess risk across a variety of conditions. Readmissions within 30 days of a procedure are a major source of preventable cost in healthcare, so it naturally makes sense to try to predict who is at greatest risk of readmission.
One hospital built an algorithm for just this purpose, drawing on large volumes of data, including diagnoses, prescriptions, procedures, and long-term healthcare outcomes. However, black patients who were assigned the same risk score as white patients had worse outcomes. Put another way, healthier white patients were unintentionally prioritized over sicker black patients.
How did this happen? As always, the answer is complicated. People of color were statistically overrepresented in neighborhoods with less access to care. While race and demographics were not inputs to the algorithm, treatment patterns differed statistically for this subgroup, and those differences fed bias into the algorithm’s predictions.
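The mechanism is easier to see in a toy simulation. The sketch below uses entirely synthetic numbers (no real patient data, and the 40% access gap is an arbitrary assumption) to show how a score built on recorded treatment, a proxy for true health need, can rank two equally sick groups differently when one group receives less care.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Synthetic illustration: two groups with the same underlying health
# need, but group B has less access to care and therefore fewer
# recorded treatments. All numbers here are invented.
group = rng.integers(0, 2, n)             # 0 = group A, 1 = group B
need = rng.normal(5.0, 1.0, n)            # true health need, equal by design
access = np.where(group == 1, 0.6, 1.0)   # group B receives 40% less care
treatments = need * access + rng.normal(0, 0.3, n)

# A model trained to predict *recorded treatment*, a proxy for need,
# inherits the access gap even though "group" is never a feature.
# Here the risk score is the proxy itself, the limiting case.
risk_score = treatments

for g, name in [(0, "A"), (1, "B")]:
    m = group == g
    print(f"group {name}: mean need {need[m].mean():.2f}, "
          f"mean risk score {risk_score[m].mean():.2f}")
# Equal need, unequal scores: group B looks "healthier" on paper and
# would be deprioritized, mirroring the hospital example above.
```

The point of the toy example is that removing race from the feature list does nothing if the training label itself encodes unequal access.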
Given that data science ranks as one of the lowest-scoring STEM disciplines in terms of diversity, it’s no surprise that AI development teams have such a hard time spotting bias. This is especially true among startup companies focused on developing, scaling, and selling products. In their struggle to survive, these young companies often fail to consider the impact they’re having on their social environment. Worse, some AI companies are willing to partner with authoritarian governments to implement products that suppress liberty, equality, freedom, and well-being, all in the name of law and order. But who controls this new world order: humans or machines?
Humanizing AI
While poor AI strategy has the potential to dehumanize people, efforts are underway to humanize AI. The Artificial Intelligence Report recently posted an article claiming that “artificial intelligence will be the most prolific inventor of the 21st century.” The article noted that while the U.S. Patent and Trademark Office has rejected attempts to grant patents to an artificial intelligence system, Australia has granted such patents. As the article put it, “[a]llowing patents for inventions churned out by tireless machines with virtually unlimited capacity, without the further exercise of any human ingenuity, judgment, or intellectual effort, may simply incentivize large corporations to build ‘patent thicket generators’ that could only serve to stifle, rather than encourage, innovation overall.” The concern is that big data companies will get bigger and more dominant, stretching the very definition of a monopoly.
Principles for Developing Trusted AI
Given AI’s potential for abuse and dehumanization, some commentators have called for core principles to guide all AI development and related regulation. These principles hold that AI technology should
- promote human well-being and safety and the public interest;
- ensure transparency, explainability, and intelligibility;
- foster responsibility and accountability;
- ensure inclusiveness and equity; and
- be responsive and sustainable.
The bottom line is that safety, accuracy, and efficacy should be assessed for each use case. Further, independent quality controls should be in place to monitor the ongoing impact of AI systems.
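What might such a quality control look like in practice? Here is a minimal sketch, assuming a hypothetical monitoring job that compares each period’s production metrics against a validation baseline; the metric names and alert thresholds are illustrative assumptions, not regulatory standards.

```python
from dataclasses import dataclass

@dataclass
class PeriodStats:
    period: str
    accuracy: float       # share of correct predictions this period
    flagged_rate: float   # share of customers the model flags

# Baseline measured during validation; numbers are hypothetical.
BASELINE = PeriodStats("validation", accuracy=0.91, flagged_rate=0.12)

def audit(current: PeriodStats, baseline: PeriodStats = BASELINE,
          max_accuracy_drop: float = 0.05,
          max_rate_shift: float = 0.04) -> list[str]:
    """Return human-readable alerts; an empty list means 'pass'."""
    alerts = []
    if baseline.accuracy - current.accuracy > max_accuracy_drop:
        alerts.append(f"{current.period}: accuracy fell to {current.accuracy:.2f}")
    if abs(current.flagged_rate - baseline.flagged_rate) > max_rate_shift:
        alerts.append(f"{current.period}: flag rate drifted to {current.flagged_rate:.2f}")
    return alerts

# Example: a quarter where the model has quietly degraded.
print(audit(PeriodStats("2021-Q3", accuracy=0.84, flagged_rate=0.19)))
```

The key design choice is independence: the team running this kind of audit should not be the team whose model is being audited.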
A recent New York Times article noted that the Federal Trade Commission is warning companies “against the sale of A.I. systems that were racially biased or could prevent individuals from receiving employment, housing, insurance or other benefits.” The National Institute of Standards and Technology has also published a proposal for reducing bias in AI. Developing trusted AI starts with a basic understanding of some core principles.
Trusted AI
With trusted AI, companies don’t have to choose between avoiding AI altogether and accepting its risks wholesale. When AI and humans act in harmony, you not only reduce the barriers to adopting game-changing technology, you also amplify your economic value. Whether you are designing your own AI-powered solution or partnering with vendors, you can instill accountability by asking questions about fairness, transparency, and privacy.

This concludes our blog post series on AI and its impact on the financial services industry. But let’s keep the conversation going! Please share your feedback and ideas for future blog posts. We’d love to hear from you.