Artificial Intelligence and Financial Regulation: The Challenge of Balancing Promise and Peril
By Shayna Olesiuk, Director of Banking Policy
“The rise of AI threatens to worsen wealth inequality, weaken worker power, and fortify a tech oligarchy. What does progress even mean, if it doesn’t include better lives for people who work? What is the point of greater efficiency, if the money being saved isn’t going anywhere except into shareholders’ bank accounts?” - Ted Chiang, Writer
“The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.” - Bill Gates, Technologist and Philanthropist
The optimists, the pessimists, and most of those in between are right about Artificial Intelligence (“AI”) to some degree: their views reflect both the promise and the peril of AI. The challenge we all face is balancing those views, ideally in a way that lets Main Street Americans and the country reap the benefits of AI while avoiding potentially catastrophic applications and implications. Innovation, in AI or anything else, is the fuel that drives our economy, wealth creation, and rising living standards. However, bad actors, short-term thinking, or simply development gone awry can set all of that back, as the trust and confidence of the public, investors, markets, and governments are undermined, if not destroyed. Striking a balance that maximizes AI’s benefits while mitigating its dangers is a delicate act that requires thoughtful regulation and proactive oversight.
The Dual Nature of AI
AI’s potential to drive efficiency and innovation is immense. In the financial sector, AI can streamline operations, reduce costs, and enhance compliance and risk management. These benefits are already being realized, with AI improving trading strategies, customer interactions, and back-office functions. However, AI also introduces substantial risks that include the potential for fraud, bias, discrimination, and systemic threats to financial stability.
A 2023 Executive Order highlighted these dual aspects: AI holds extraordinary promise but poses serious risks if used irresponsibly. As AI technology evolves, it becomes increasingly important to address these risks with robust and forward-looking regulatory measures. The goal is to avoid the pitfalls of over-regulation, which could stifle innovation, while preventing under-regulation that might lead to significant societal harms.
The Case for Proactive Regulation
In the face of these challenges, a laissez-faire approach to AI development is neither appropriate nor sustainable. The Silicon Valley mantra of “move fast and break things” is ill-suited for AI, given the stakes involved. The consequences of unchecked AI could include severe financial instability and consumer harm. Therefore, the AI community and regulators need to collaborate on creating a regulatory framework that supports innovation while ensuring public safety.
In a recent fact sheet, our team made several key recommendations for achieving this balance:
Coordination and Communication
A unified definition and taxonomy for AI within the financial industry are crucial to a comprehensive understanding of AI’s applications and risks. Regulators should clearly communicate the benefits and potential dangers of AI to the public and industry stakeholders. A shared vocabulary will also prove useful as the financial regulators collaborate with one another, facilitating informed decision-making and appropriate regulatory responses.
Funding and Expertise
Regulators need significantly increased funding to build the necessary expertise for overseeing AI. The financial sector is investing billions in AI, and regulators must keep pace by continuously developing their own capabilities. This includes hiring AI professionals, enhancing training programs, and establishing dedicated units within regulatory agencies that focus on AI oversight. For instance, the SEC’s creation and expansion of its Cyber Unit exemplify regulatory steps that add value but require adequate resources.
Data Challenges
Data is at the heart of AI, but current datasets often suffer from biases and privacy issues. Addressing these challenges is essential for the fair and ethical use of AI. Regulators should focus on improving data quality, ensuring transparency, and protecting consumer privacy. This includes tackling various forms of bias—such as selection bias and recency bias—that can skew AI’s decision-making processes and results.
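To make the selection-bias point concrete, here is a minimal, hypothetical sketch (the numbers, score range, and approval cutoff are illustrative assumptions, not drawn from any real dataset): a lender that trains a model only on past approved applicants sees a sample that misrepresents the population the model will actually face.

```python
import random

random.seed(0)

# Hypothetical population: 10,000 loan applicants with credit scores 300-850.
population = [random.randint(300, 850) for _ in range(10_000)]

# Selection bias: a training set built only from past *approved* applicants
# (say, scores above 650) excludes everyone the lender previously rejected.
biased_sample = [score for score in population if score > 650]

full_mean = sum(population) / len(population)
biased_mean = sum(biased_sample) / len(biased_sample)

print(f"Population mean score:    {full_mean:.0f}")
print(f"Biased sample mean score: {biased_mean:.0f}")

# The biased sample overstates typical creditworthiness, so a model trained
# on it can systematically mis-score applicants outside that narrow range.
```

The same structural problem appears with recency bias: a dataset drawn only from recent, benign market conditions will understate tail risk in the same way this sample overstates credit quality.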
Pre-Approval and Testing
Implementing a pre-approval process for AI systems, particularly those deployed in consumer-facing applications, is vital. This should include rigorous testing for efficacy, reliability, and security. A regulatory pre-approval system, similar to the SEC’s review of securities offerings, can ensure AI systems meet high standards before they are used in the financial sector.
Regulatory Standards and Enforcement
Strong enforcement mechanisms are needed to punish and deter violations of AI regulations. This includes increasing penalties for non-compliance and holding companies accountable for their AI systems' results and impacts. Regulatory bodies should also focus on executive and board-level accountability, rather than defaulting to entity-level actions or shifting blame to lower-level employees.
Public Transparency
Transparency around AI regulations and enforcement actions is crucial for building public trust. Regulators should enhance public access to information about AI-related compliance and enforcement, ensuring that consumers are informed about how their data is being used and protected. Creating a public registry of companies with AI-related violations could help in this regard, similar to the CFPB’s registry of recidivist companies.
The Bottom Line
In summary, the transformative potential of AI must be harnessed with care and foresight. Effective regulation is key to ensuring that AI’s benefits are realized while minimizing its risks. This requires a collaborative effort between the public and private sectors, with a focus on proactive and informed regulation. By taking these steps, financial regulators can help ensure that AI contributes positively to the economy and society, fostering innovation while safeguarding stability and fairness.
Learn more about our work on AI in our recent comment letter and fact sheet.