Mythos reveals banks are vaulting... bugs
The new frontier of AI is about to unzip everyone’s software vulnerabilities. How should banks, fintechs, and protocols prepare?
Financial institutions have always been in the business of warehousing risk: credit, duration, liquidity, operational risk. But guess what? It turns out the biggest risk on the balance sheet is the bank’s own software vulnerabilities; the platforms and protocols aren’t just revamping the exchange of value, they’re revamping the exchange of attack vectors. The vaults and smart contracts aren’t just full of money and credit. They’re overflowing with bugs. You’re banking bugs.
This may not be news, but this week’s leap in AI means that inventory of software holes and glitches has just moved from an IT/CISO to-do checklist to a balance-sheet event.
On Monday, April 7, Anthropic published its “Claude Mythos Preview”, unveiling details of its next-frontier large-language model, which will not be made public in its current form yet. No matter: this is the current state of AI, and other labs – not just American ones – will soon have similar capabilities.
The Mythos report (a ‘system card’ in Silicon Valley-speak) describes a model that can autonomously identify and develop exploits for previously unknown zero‑day vulnerabilities across every major operating system and browser Anthropic tested. And Mythos spots these in hours rather than the months it can take elite human red teams (in-house ‘white hats’ who act like real attackers). These include long‑standing bugs in hardened stacks that underpin critical infrastructure.
Finance’s fragility
For the financial sector, that matters because the same code is everywhere. Core banking systems, card processors, custodians, high‑frequency trading platforms, payments rails, cloud‑hosted fintech apps, DeFi nodes – they all sit atop this shared substrate of operating systems, compilers, crypto libraries, and virtualization layers. The sector has spent decades layering controls, processes, and capital buffers on top of that substrate without ever being able to – using a financial term – mark to market its fragility.
Mythos‑class models change that. They remove the human‑time bottleneck from vulnerability discovery. This is great for companies that can act quickly to patch things. It’s a disaster if you can’t. Anthropic says it’s quietly disclosed thousands of vulnerabilities to vendors. But that’s just the start.
Bank technologists have long understood that their institutions house ‘technical debt’: the accumulated rework and risk that builds up in software when quicker, easier, and cheaper solutions win out over robust ones. Although the debt is high, the infrastructure has been sufficiently walled, gated, and moated to remain defensible. Now the economics of machine speed render the castle suddenly vulnerable, even as they let defenders modernize those same fortifications.
But vulnerabilities that go unaddressed are now a liability: financial, legal, and reputational.
Repricing risk
Sporadic sampling of the risk book must give way to full-book, real-time marking – the book, in this case, being the entire software and infrastructure stack. Periodic penetration tests and compliance checklists are about to be repriced.
That repricing is going to happen first in DeFi. Smart contracts, bridges, and wallets are now honey pots. Immutable contracts with large TVL (total value locked) were built without accounting for this kind of threat. And the entire ecosystem reuses key building blocks – smart-contract libraries, standard automated-market-making templates, oracle interfaces, layer-2 bridge designs – so a single flaw could pose systemic risk to many protocols at once.
Relying on Big 4 audits won’t cut it now. Serious teams will move toward continuous, AI-augmented auditing pipelines. If crypto trades 24/7, audits will also be non-stop and real-time. Market players will demand disclosure of whether Mythos-class or equivalent models have been run against a protocol’s stack.
The emphasis on shipping front-end product and UX (typical across TradFi and enterprise generally) will have to make room for a security narrative, which implies a cultural shift. New products embedded with persistent, Mythos-level protections are going to be key to revenue growth.
Digital assets have become institutionalized these days, and the bitcoin bros have learned to recite the letters ‘K-Y-C’, so this represents an acceleration rather than a deviation. But there remain plenty of slapdash teams, yield-farm casinos, and meme protocols that will now be found out.
Quantum qualified
Crypto loves to talk about how AI is perfectly suited for its mission of agentic finance. Still true! But the industry is already a seeping cesspit of hacks and fraud (see: Drift). The next year is going to be wild. But crypto is just the thin edge of the financial wedge.
This will also transform the fashionable new worry about quantum computing and its ability to smash cryptography. The reality of quantum computing is unclear, but it’s still a story told in years, if not decades. AI has shrunk the timeline for practical attacks on cryptographic systems to months.
Mythos doesn’t attack the math of cryptography, but it punches holes in the software envelope around it. Anthropic points to vulnerabilities in widely used cryptographic components that govern how keys are generated, stored, and used in the real world – which is where the practical risk to financial institutions lives. Quantum risk remains, but mitigation strategies – designing quantum-resistant algorithms to safeguard data – are going to be folded into the immediate challenges of AI.
Red team in a box
The good news is that Mythos is also a senior red team in a box. It can run massive enterprise-network simulations, chain exploits, escape sandbox environments, and figure out how bad guys infiltrate or exfiltrate systems. It’s pretty awesome.
But it means banks and cybersecurity experts can no longer work to episodes in the calendar. Tests, exercises, and assessments were never precise, but they were built against human hackers, not AI. And the resources required to attack an institution or protocol are now available to almost everyone.
And security isn’t just about the firm, the team, or the software. It’s now an industry risk, a systemic vulnerability. Shared dependencies such as open-source libraries, foundational middleware, monitoring agents, circuit breakers, and common subscription-software components are beyond the control of a given user, or vendor. Anthropic has launched a project called Glasswing to serve as a community response, and no doubt vendors and cyber specialists will introduce their own.
Every enterprise has been hacked. Boards and execs have learned to assume a determined attacker can penetrate their systems, at least to some extent. The playbook has been to shorten the distance between discovery and mounting a defense. That’s still true, but the timeframes have shrunk. We don’t yet know by how much. We’re going to find out.
Expect to hear “resilience” in every CEO and conference-stage speech. Okay, sure. What else? AI has already been challenging what constitutes value to an enterprise or a fintech startup. Mythos amplifies this. Using AI to detect fraud is fine, but it’s now basic hygiene. Enterprise value means proprietary data sets, regulatory licenses, and a governance structure that enables safe and durable human-to-machine systems.
What to ask
Surviving and thriving in this next chapter of AI will require investment into governance and systems, and not every team or institution can or will make that investment. Some may find their legacy systems too cumbersome to patch at Mythos speed. Others won’t make the cultural adjustment. Again, nothing new, but many institutions survived static mindsets because the pace of change was tolerable.
Boards, founders, and C-suite executives should ask themselves what they should expect to find should a Mythos-class model analyze the entire stack tomorrow: code, infrastructure, dependencies, vendor interfaces. How quickly can those bugs be squashed?
They should ask where and when they depend on other companies to do the patching, as with open-source libraries. Perhaps the firm can, and should, organize assistance.
They should immediately review all key vendors, such as cloud providers, core banking platforms, card processors, custodians, and crypto services such as bridge providers and pricing oracles, with regard to their AI security posture and disclosures.
They should develop a PR strategy to control messaging around what vulnerabilities may exist, striking the balance between transparency and prudence. Communicating with regulators, counterparties, and customers – not to mention employees – will be important, and firms need a break-glass playbook for when there’s a fire.
And they should ensure they have governance around using their own AI tools. LLMs are, by their nature, black boxes. When a crisis hits, responses will come in machine time. What’s best practice now?


