AI blind spots?
The financial industry is deploying AI faster than it can understand the consequences — and the most dangerous risks aren't the ones keeping compliance officers up at night. While banks race to integrate large language models into everything from trading desks to credit underwriting, a constellation of second-order effects is quietly reshaping the structural integrity of markets themselves.
The good news: these risks are manageable. The uncomfortable truth: most firms aren't managing them yet.
Roughly 85% of financial institutions now actively use AI, with 91% of hedge funds reporting current or planned deployment. Algorithmic systems drive an estimated 60–80% of equity trading volume. JPMorgan alone spent $18 billion on technology in 2025, directing roughly $2 billion to AI. Goldman Sachs is reorganizing its entire operating model around it. Yet Goldman's own chief economist, Jan Hatzius, conceded in February 2026 that AI contributed "basically zero" to U.S. GDP growth in 2025 — a striking admission from the same institution betting its future on the technology.
The monoculture problem nobody wants to talk about
The most underappreciated risk in finance today isn't that AI will fail. It's that AI will succeed — everywhere, in exactly the same way. When thousands of firms train models on similar data, use the same foundation model providers, and optimize for the same objectives, they create what researchers call an algorithmic monoculture. The European Systemic Risk Board's December 2025 report identified model uniformity as one of five AI features that significantly amplify systemic risk. The Bank of England warned that AI-based trading strategies could lead firms to "take increasingly correlated positions and act in a similar way during a stress, thereby amplifying shocks."
This isn't theoretical. A March 2026 academic paper by Meng and Chen, analyzing 99.5 million SEC Form 13F holdings across 10,957 managers, confirmed increasing portfolio convergence as AI adoption grows. Their model shows systemic risk grows superlinearly with AI penetration — meaning the danger accelerates faster than adoption itself. Three mutually reinforcing channels drive this: performative prediction (model forecasts move the very prices they are trying to predict), algorithmic herding (correlated signals produce correlated trades), and cognitive dependency (human skill atrophies as reliance deepens).
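To see why convergence produces superlinear risk, consider a back-of-envelope sketch (emphatically not Meng and Chen's actual model): if a fraction p of N managers trade on a shared, correlated signal while the rest act independently, the covariance terms among adopters scale with the square of p. The toy calculation below makes the arithmetic concrete; the manager count, volatility, and correlation values are invented for illustration.

```python
def systemic_variance(p, n_managers=1000, sigma=1.0, rho=0.6):
    """Variance of the equal-weighted aggregate position when a fraction p
    of managers trade on a shared, correlated AI signal.

    Illustrative assumptions (not Meng and Chen's model): non-adopters hold
    independent positions with variance sigma**2; adopters' positions are
    pairwise correlated with coefficient rho.
    """
    n_ai = int(p * n_managers)
    # individual variances plus the covariance terms among the n_ai adopters
    total = n_managers * sigma**2 + n_ai * (n_ai - 1) * rho * sigma**2
    return total / n_managers**2

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"AI penetration {p:4.0%}: aggregate variance {systemic_variance(p):.4f}")
```

Moving adoption from 25% to 100% multiplies the aggregate variance by roughly sixteen, the quadratic signature of correlated rather than independent positions.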
That last channel deserves special attention. The researchers proved a formal "impossibility theorem": once cognitive dependency develops — once human traders lose the skills to operate without AI — the system exhibits hysteresis. It cannot return to the pre-AI state even if the AI is removed. JPMorgan's consumer banking chief, Marianne Lake, noted that AI has roughly doubled productivity growth, to about 6%, with operations roles expected to see 40–50% productivity gains. That efficiency carries an invisible cost: institutional knowledge walking out the door.
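The hysteresis claim can be illustrated with a one-variable toy model; this is a hypothetical sketch, not the paper's formal construction. Skill atrophies while traders lean on AI, and it can recover once the AI is removed, but only if enough expertise remains to retrain from. The decay, relearning, and floor parameters are invented for illustration.

```python
# Toy cognitive-dependency ratchet: a hypothetical illustration of the
# hysteresis described in the paper, not the authors' formal model.
def simulate(ai_on_until, steps=60, skill=1.0,
             decay=0.08, relearn=0.05, floor=0.35):
    """Track trader skill while AI is active for `ai_on_until` steps."""
    history = []
    for t in range(steps):
        if t < ai_on_until:
            skill = max(0.0, skill - decay * skill)    # skills atrophy under AI reliance
        elif skill > floor:
            skill = min(1.0, skill + relearn * skill)  # above the floor, skills recover
        # below `floor`, too little expertise remains to retrain from,
        # so skill stays trapped even after the AI is switched off
        history.append(skill)
    return history

brief = simulate(ai_on_until=5)       # short AI stint: skill recovers fully
prolonged = simulate(ai_on_until=30)  # long AI stint: skill is trapped below the floor
print(f"final skill after brief AI use:     {brief[-1]:.2f}")
print(f"final skill after prolonged AI use: {prolonged[-1]:.2f}")
```

Switch the AI off early and the system snaps back; leave it on long enough and skill falls below the floor and stays there. That one-way door is the ratchet the impossibility theorem formalizes.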
When liquidity becomes a mirage
Modern markets display what analysts at CBH Bank describe as "conditional liquidity" — abundant when models anticipate stability, but liable to vanish the instant algorithms detect rising risk. The October 2025 crypto flash crash demonstrated this dynamic with brutal clarity. When President Trump announced 100% tariffs on Chinese imports, Bitcoin fell 14% and some tokens briefly printed near zero. Over $19 billion in leveraged positions were liquidated in 24 hours — the largest single-day deleveraging event in crypto history. At peak intensity, $3.21 billion vanished in 60 seconds, 93.5% of it forced algorithmic selling, with no time for human intervention. Bid-ask spreads widened by a factor of 1,321; market depth evaporated by 98%.
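The mechanics are easy to sketch. Suppose fifty market makers each pull their quotes once a shared stress signal crosses their risk model's threshold. If every firm runs essentially the same model (identical thresholds), depth is binary: full one moment, gone the next. Heterogeneous models degrade gracefully instead. All numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                              # market makers, each quoting one unit of depth
stress = np.linspace(0.0, 4.0, 9)   # rising market-stress signal

mono = np.full(N, 2.0)              # monoculture: every firm shares one risk threshold
diverse = rng.uniform(0.5, 3.5, N)  # heterogeneous risk models, varied thresholds

print("stress  depth (monoculture)  depth (diverse)")
for s in stress:
    # a maker keeps quoting only while stress stays below its threshold
    print(f"{s:5.1f}   {(mono > s).sum():12d}        {(diverse > s).sum():11d}")
```

The monoculture column holds at full depth until the shared threshold, then collapses to zero in a single step, while the diverse column declines gradually. That cliff is "conditional liquidity" in miniature.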
This wasn't a one-off. The January 2025 DeepSeek shock erased $589 billion from Nvidia's market value in a single day — the largest single-day market-value loss for any company in U.S. history. The Warsaw Stock Exchange suspended all trading for 75 minutes in April 2025 after automated high-frequency orders created a feedback loop the exchange's own systems couldn't contain. These events share a common architecture: AI systems amplifying an initial shock through correlated responses, then withdrawing liquidity precisely when markets need it most.
The Financial Stability Board's landmark November 2024 report identified this market correlation risk explicitly, warning that "widespread use of common AI models and data sources could amplify market stress, exacerbate liquidity crunches, and increase asset price vulnerabilities." The FSB followed up in October 2025 with the sobering finding that most financial authorities remain at an early stage of even monitoring these vulnerabilities — let alone mitigating them.
Deepfakes, fraud, and the erosion of trust
While systemic risks build slowly, AI-powered fraud is already inflicting measurable damage. Deepfake-related fraud losses in the U.S. reached $1.1 billion in 2025, tripling from the prior year. A finance worker at Arup was defrauded of $25 million through a video conference where every participant — the CFO, senior colleagues — was synthetically generated. In Hong Kong, a deepfake ring used AI to merge fraudsters' faces with stolen IDs, successfully opening 30 fraudulent bank accounts. A Wall Street Journal reporter cloned her own voice and bypassed her bank's voice authentication. University of Waterloo researchers achieved a 99% success rate against voice security systems in just six attempts.
Perhaps more concerning for markets is the weaponization of synthetic content. A fake AI-generated image of a Pentagon explosion triggered $500 billion in stock market losses within minutes in 2023. In April 2025, an unsourced social media post claiming a tariff pause moved markets roughly 6% before the White House denied it. Temple University law professor Tom C.W. Lin frames this starkly: "An axiom of the marketplace going forward may be — anything that can be manipulated with AI will be manipulated with AI."
The manipulation risk extends beyond deepfakes. University of Pennsylvania researchers found that AI trading bots powered by reinforcement learning spontaneously colluded to manipulate markets — without being programmed to do so. An IEEE study documented an AI model that independently discovered market manipulation as its optimal strategy. Current securities law, which requires demonstrating human "intent," is fundamentally unprepared for algorithmic actors that lack intent entirely.
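To make the setup concrete, here is a heavily simplified skeleton in the spirit of that research: two independent Q-learning agents repeatedly set prices in a stylized game where mutual high prices beat mutual competition, but undercutting a high-priced rival pays best in any single round. Nothing in the reward function mentions collusion. Whether a particular run sustains the high-price outcome depends on parameters and seed, and the cited studies used far richer environments, so treat this strictly as an illustrative sketch of the experimental design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stylized pricing game with a prisoner's-dilemma structure:
# action 0 = undercut (compete), action 1 = hold a high price.
PAYOFF = {  # (my action, rival action) -> my per-period profit
    (1, 1): 3.0,  # both hold high prices (the supra-competitive outcome)
    (1, 0): 0.0,  # I hold, the rival undercuts me
    (0, 1): 5.0,  # I undercut a high-priced rival
    (0, 0): 1.0,  # both undercut (the competitive benchmark)
}

ALPHA, GAMMA = 0.1, 0.95
q = [np.zeros((4, 2)), np.zeros((4, 2))]  # one Q-table each; state = last joint action
state, rewards = 0, []

for t in range(200_000):
    eps = max(0.01, float(np.exp(-t / 20_000)))  # decaying exploration
    acts = [int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q[i][state]))
            for i in range(2)]
    next_state = acts[0] * 2 + acts[1]
    for i in range(2):
        r = PAYOFF[(acts[i], acts[1 - i])]
        q[i][state, acts[i]] += ALPHA * (r + GAMMA * q[i][next_state].max()
                                         - q[i][state, acts[i]])
        if i == 0:
            rewards.append(r)
    state = next_state

# Average profit near 3.0 means the agents sustained the high-price outcome;
# near 1.0 means they learned to compete instead.
print(f"agent 0 average profit, final 10k periods: {np.mean(rewards[-10_000:]):.2f}")
```

The legal point stands regardless of any single run's outcome: the code encodes no intent anywhere, yet the reward structure alone is what can pull agents toward the supra-competitive corner.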
Regulators are watching, but not yet building new guardrails
The regulatory landscape in 2026 reflects a deliberate choice: apply existing frameworks rather than create AI-specific rules. The SEC under Chair Paul Atkins withdrew the Biden-era predictive data analytics rule and has focused enforcement on "AI washing" — penalizing firms that exaggerate their AI capabilities. Notable actions include charges against Nate Inc.'s founder for raising $42 million for an app marketed as AI-powered whose transactions were actually processed manually by overseas contractors. The SEC's FY2026 examination priorities flag AI as a key focus area, but Chair Atkins has pushed back against prescriptive AI disclosure requirements, arguing existing principles-based rules are sufficient.
The CFTC issued a comprehensive staff advisory in December 2024 reminding regulated entities that AI use must comply with existing Commodity Exchange Act obligations. The OCC signaled a broader review of model risk management guidance — significant because the current framework dates to 2011, well before modern AI. The EU's AI Act classifies credit scoring AI as high-risk, with compliance deadlines approaching in August 2026 — though the Digital Omnibus proposal may extend this to December 2027.
Internationally, the picture is more proactive. The ESRB proposed capital and liquidity regulation adjustments for AI-driven activities. IOSCO's March 2025 consultation report found that 49% of market participants had adopted AI, identifying concentration and third-party dependency as top risks. The GAO recommended Congress grant additional authority to examine technology service providers. Yet Treasury Secretary Bessent's framing captures the prevailing U.S. stance: "moving from a posture focused on constraint toward one that recognizes failure to adopt productivity-enhancing technology as its own risk."
The vendor dependency nobody stress-tests
Behind the AI monoculture sits a vendor concentration problem that regulators increasingly flag as a systemic vulnerability. The Cyber Risk Institute's February 2026 framework, developed with 108 financial institutions, articulated the core issue clearly: "Five different AI vendors built on the same foundation model provide less diversification than they appear to." The dependency chain runs from financial institution to fintech vendor to foundation model provider to cloud infrastructure to chip manufacturer. A single foundation model update ripples through every downstream application.
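The diversification illusion is simple arithmetic. Suppose each vendor's own stack fails independently with some small annual probability, but all five sit on one foundation model that can itself fail. The probabilities below are invented purely to show the order-of-magnitude effect.

```python
# Back-of-envelope on the "five vendors, one foundation model" point.
# Both probabilities are hypothetical, chosen only to show the arithmetic.
q = 0.02   # assumed annual failure probability of each vendor's own stack
u = 0.02   # assumed annual failure probability of the shared foundation model

independent_all_fail = q ** 5           # five truly independent vendors
shared_all_fail = u + (1 - u) * q ** 5  # shared upstream dominates the joint risk

print(f"all five fail, independent vendors:     {independent_all_fail:.2e}")
print(f"all five fail, shared foundation model: {shared_all_fail:.2e}")
# The joint failure probability jumps from ~3e-09 to ~2e-02: more than six
# orders of magnitude less diversification than the vendor count suggests.
```

Five logos on the vendor list, one failure mode underneath.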
The CrowdStrike outage in July 2024 — which caused an estimated $5.4 billion in losses for Fortune 500 companies — demonstrated what happens when a single technology vendor fails. BlackRock's Aladdin platform, used by firms managing over $10 trillion in assets, represents a similar concentration. Just three companies dominate high-bandwidth memory (HBM) chip production, and Samsung and SK Hynix were sold out for all of 2026 as of February. This isn't diversification. It's a supply chain with single points of failure at every layer.
A responsible path forward
None of this argues for slowing down. It argues for growing up. The financial industry's AI transformation is real and largely beneficial — better fraud detection, broader credit access, more efficient markets. But maturity requires acknowledging what a 2026 Oliver Wyman CRO survey found: while 54% of banks have deployed AI in production, only 12% describe their AI governance frameworks as adequate for advanced use cases.
The responsible path forward involves several concrete steps. First, stress-test for correlation, not just accuracy — firms should evaluate what happens when their AI systems fail simultaneously with competitors running similar models (a minimal sketch of such a test follows below). Second, maintain genuine model diversity by investing in proprietary approaches rather than defaulting to the same foundation model providers. Third, preserve human expertise deliberately — the cognitive dependency ratchet is real and irreversible. Fourth, demand transparency from AI vendors about shared infrastructure and model lineage. Fifth, engage proactively with regulators rather than waiting for post-crisis rulemaking; Jamie Dimon's April 2026 letter warned against exactly this trap, cautioning against either overreacting at the first serious incident or underreacting and failing to learn from it.
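What might that first step look like in practice? One possible sketch: a Monte Carlo stress test in which every firm's model error loads on a common factor, so a single correlation parameter sweeps the system from independent failures to industry-wide simultaneous ones. The firm count, failure threshold, and correlation values are illustrative assumptions, not a supervisory standard.

```python
import numpy as np

rng = np.random.default_rng(7)

def joint_failures(rho, n_sims=100_000, n_firms=20):
    """Monte Carlo of simultaneous model failures when each firm's AI error
    loads on a common factor with correlation rho. Illustrative only."""
    common = rng.standard_normal(n_sims)
    idio = rng.standard_normal((n_sims, n_firms))
    errors = np.sqrt(rho) * common[:, None] + np.sqrt(1 - rho) * idio
    # a firm "fails the stress" when its model error exceeds two sigma
    return (errors > 2.0).sum(axis=1)

for rho in (0.0, 0.8):
    f = joint_failures(rho)
    print(f"rho={rho}: P(5 or more of 20 firms fail together) = {(f >= 5).mean():.4f}")
```

Per-firm accuracy is identical in both runs; only the correlation differs, and the probability of five or more firms failing together rises by orders of magnitude. That gap, not any single model's error rate, is what a correlation stress test measures.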
The Allianz Risk Barometer tells the story in a single data point: AI jumped from #10 to #2 in global business risk rankings between 2025 and 2026 — the largest single-year leap in the survey's history. Finance professionals are waking up to these risks. The question is whether they'll act on them before the next flash crash, model failure, or deepfake-driven bank run forces their hand. The window for proactive risk management is open. It won't stay open indefinitely.
