
IESE Insight
How artificial intelligence is transforming finance
AI is making financial systems more efficient, but it may also make them more fragile.
Artificial intelligence is already transforming the way financial institutions, central banks and capital markets assess credit risk, track economic activity and execute trades.
But the AI-enabled advances in efficiency, precision and inclusion may create new vulnerabilities in the financial system in terms of stability, equity and governance, which makes regulatory responses imperative. If policymakers rise to the challenge, AI can be harnessed to improve the financial system’s performance; if they do not, the same AI technologies may undermine the very foundations on which financial trust depends.
A new report, Artificial Intelligence in Finance, co-authored by Thierry Foucault, Leonardo Gambacorta, Wei Jiang and IESE’s Xavier Vives, and published by the London-based Centre for Economic Policy Research (CEPR), looks at how AI is altering global financial systems and what the implications are for financial stability, competition and policy design.
Grounded in the production and use of information, finance is especially ripe for disruption by AI. AI technologies are no longer ancillary — they are moving into the core of financial intermediation, asset management, payment systems and regulatory oversight. Banks are starting to deploy generative AI in various capacities, and most expect its use to intensify.
The incorporation of AI into finance is redefining roles, information structures and institutional dynamics. With AI, financial decisions that once relied on human judgment, such as creditworthiness assessments, order execution and even supervisory analysis, are increasingly being shaped or made by algorithms that continuously learn and update based on high-dimensional data.
This change is not only about speed or automation; it is about the qualitative transformation of decision-making, incentive alignments and risk transmission channels within the financial system. It also creates new forms of dependence — on software, data infrastructure and external service providers — that are reshaping the architecture of financial institutions and markets.
Crucially, the gains promised by AI — greater efficiency, broader access, better forecasting — are not evenly distributed and may come at the cost of new fragilities. The opacity of AI models raises challenges for accountability and governance; the ability of dominant firms to harness AI at scale threatens competition and inclusion; and the homogeneity of model design may amplify systemic shocks. These concerns are particularly acute in finance, where error propagation, behavioral correlation and expectation sensitivity are central features of market dynamics. As with past innovations, AI may solve some long-standing problems while simultaneously generating novel externalities and vulnerabilities.
AI changes in finance
Specifically, these are some of the fundamental transformations induced by AI, and the policy challenges they raise:
- A paradigm shift in financial intermediation. AI-driven models significantly outperform traditional credit-scoring methods, unlocking broader credit access — particularly for underserved borrowers — and enabling faster loan approvals. However, these benefits are tempered by persistent concerns over fairness and price discrimination, and by potentially diminished transparency in credit markets.
- Data-driven disruption in capital markets. The proliferation of alternative data and AI-powered trading models is reshaping market dynamics, enhancing liquidity and forecast accuracy. At the same time, the convergence of strategies raises the specter of synchronized behavior, flash crashes and systemic instability.
- Governance and accountability in the age of AI. AI challenges traditional corporate governance and legal frameworks, especially when decisions emerge from opaque, self-learning systems. Calls for model interpretability, traceability and new standards for hybrid human-machine oversight are gaining urgency.
- Emerging systemic risks. Increasing reliance on opaque AI systems threatens to concentrate market power, obscure systemic vulnerabilities and weaken traditional levers of monetary policy. Financial shocks may be amplified by model correlation, infrastructural dependencies and reduced discretion in smart contracts.
9 policy recommendations to regulate AI in finance without undermining innovation
A central theme of the report is the urgent need for agile and adaptive regulatory responses. Some policy recommendations:
- Revise legal definitions of accountability, incorporating the new reality that decisions are being made by systems that learn and evolve independently of direct instruction. Without robust interpretability requirements or embedded traceability mechanisms, financial institutions risk deploying systems whose behavior they cannot fully explain, let alone govern.
- Foster hybrid human-machine governance, for example in smart contracts: these automated agreements self-execute based on real-time data inputs and may require human override options and other safeguards.
- Recalibrate regulations to redefine equal access to information by standardizing corporate disclosures, promoting fair use of alternative data and requiring prompt transparency in trading.
- Develop safety standards and systematic tests for algorithms in securities markets to prevent distortions such as algorithmic collusion.
- Anticipate scenarios in which AI becomes a locus of strategic contest. The future of AI in finance will be shaped by broader geopolitical forces. The fragmentation of digital governance regimes across the U.S., European Union and China may impede global standard-setting, while the concentration of compute infrastructure and model expertise in a handful of firms and jurisdictions raises concerns about economic sovereignty and resilience.
- Foster mechanisms for international coordination. Cross-border data flows, foundational model access and platform interoperability will increasingly become matters of financial diplomacy.
- Explore new frameworks for evaluating AI’s systemic importance, analogous to those developed for global systemically important banks.
- Carry out scenario-based planning to anticipate emergent threats and evaluate institutional resilience in the face of AI-driven disruptions.
- Build technical capacity within regulatory agencies. Consider setting up AI safety agencies (like the AI Security Institute in the U.K.) in charge of setting standards, testing algorithms used in markets and conducting research on these algorithms.
The success of financial governance will depend in part on how well regulators balance fostering innovation with controlling potential market failures. Overregulation may stifle productive uses of AI, while underregulation risks creating systemic blind spots. This balancing act requires adaptive mechanisms for revisiting assumptions, updating rules and engaging with a broader ecosystem of stakeholders.