In 2023, Silicon Valley Bank collapsed with $209 billion in assets. Interest rate risk was on their risk register. It had been there for years. The bank had an Asset Liability Management committee, complex interest rate models, and regular reporting to senior management.
None of it mattered. SVB failed because no one had identified the specific, compounding interaction between two concentrated vulnerabilities: a massive portfolio of long-duration fixed-rate securities and an overwhelmingly uninsured deposit base clustered within a single, networked industry.[1] The bank assessed interest rate risk in isolation. It assessed liquidity risk in isolation. It never identified the lethal transmission channel connecting the two.
This was not a failure of risk management. It was a failure of risk identification. And it is the single most common, most consequential, and most misunderstood weakness in the global banking system.
The Problem: Registers Without Discovery
Most banks do not have a risk identification process. They have a risk register maintenance process. These are fundamentally different activities, and the difference is not semantic.
In practice, the annual risk identification exercise at most institutions goes like this: the second line of defence distributes last year's risk register. Business leaders are asked to "refresh" the entries. They adjust a few probability and impact scores, archive a handful of obsolete operational risks, and return the document. The board reviews it. The regulator sees it. Everyone calls this "risk identification."
But nothing was actually identified. The entire exercise is an act of risk assessment applied to a static universe of known, historical variables. Identification — the act of discovering what risks exist, acknowledging the unknown, mapping emerging threats before they materialise — never happened.
ISO 31000:2018 draws a strict line between the two.[2] Risk Identification (Clause 6.4.2) requires divergent, investigative thinking: finding, understanding, and describing risks. Risk Analysis (Clause 6.4.3) uses convergent, analytical thinking: determining the likelihood and impact of those specific events. If you skip the first, the second operates on an incomplete data set. You are building sophisticated models to predict the trajectory of risks you already know about, while remaining blind to the ones that will actually hurt you.
This is the inherited risk register pathology. The register was built from scratch once — during a GRC system implementation or a regulatory remediation — and has never been fundamentally rebuilt since. It persists through bureaucratic inertia, cognitive anchoring, and the simple reality that no one wants to start from a blank page when a 200-row spreadsheet already exists.
The Evidence
What the regulators are finding
Global supervisors have been saying this for years, with increasing urgency and decreasing patience.
In its 2024 SREP aggregate results, the ECB issued qualitative measures across 97 European banks. The findings are damning. Credit risk deficiencies accounted for 29% of all measures, with the ECB calling out failures in early warning systems — fundamentally a risk identification problem, not a measurement one. Internal governance failures represented another 23%. And within the capital adequacy category, 15% of all new supervisory measures specifically targeted the methodologies banks use to discover risk.[3]
The 2025 SREP results confirmed that these weaknesses are systemic, not isolated. A full 20% of new qualitative measures in 2025 targeted Risk Data Aggregation and Risk Reporting (RDARR) deficiencies.[4] The ECB has explicitly stated its intention to escalate enforcement if these issues persist. You cannot identify a risk concentration if the concentration is scattered across incompatible data systems.
In the UK, the PRA has been equally direct. In its 2024-2025 supervisory priorities, the regulator observed that many firms incorrectly conclude that climate-related risk is not material — but this conclusion is based on never having properly identified, mapped, or sized the exposures in the first place.[5] The risk was not assessed as immaterial. It was never identified at all.
In the US, the OCC has moved beyond warnings to punitive consent orders. When Capital One suffered a massive data breach in 2019, the resulting $80 million civil money penalty was imposed not because the bank lacked firewalls, but because its internal audit function had failed to identify control weaknesses and gaps in the cloud operating environment.[6] The OCC's Heightened Standards (12 C.F.R. Part 30, Appendix D) now mandate that front-line business units take full accountability for identifying the risks they generate — specifically to prevent the structural failure where the first line assumes risk identification is solely the second line's job.
The data aggregation bottleneck
Underpinning all of this is a brute-force mechanical problem: most banks cannot aggregate their own risk data. BCBS 239, the Basel Committee's principles for effective risk data aggregation, was published in 2013 in direct response to the financial crisis — when global banks discovered they could not calculate their total exposure to individual counterparties because data was trapped in silos.[7]
More than a decade later, the 2023 BIS progress report found that out of 31 global systemically important banks, only two were fully compliant.[8] The rest are still running on fragmented IT landscapes, legacy systems, and manual processes. A bank that cannot aggregate its data cannot identify its concentrations. The inherited risk register becomes a psychological coping mechanism for institutions whose data architecture is too primitive to support genuine risk discovery.
Unidentified versus ignored: two different diseases
It matters a great deal whether a failure occurred because a risk was genuinely unidentified or because it was identified and ignored. The medicine for each is completely different.
Credit Suisse's $5.5 billion Archegos loss in 2021 is frequently cited as a risk identification failure. It was not. The Paul, Weiss independent report was categorical: the risks were identified and conspicuous.[9] The systems showed extreme single-issuer concentrations. The top five issuers represented three to seven days of total market trading volume. The identification phase worked. What failed was management's willingness to act on it — a culture of success bias and relationship override that allowed a profitable client to operate outside normal risk parameters.
SVB is the opposite case. The interaction between its duration-heavy securities portfolio and its concentrated, uninsured, digitally-connected deposit base was never conceptualised as a risk scenario. No one modelled what happens when mark-to-market losses trigger a Twitter-fuelled run by a homogeneous depositor base with the ability to move billions in hours. The risk was not on the register because the identification process never asked the right questions.
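The compounding mechanism is easy to sketch numerically. The figures below are purely illustrative, not SVB's actual balance sheet: a first-order duration approximation for mark-to-market losses, and a simple assumption that deposit outflows force security sales that crystallise a proportional share of the unrealised loss.

```python
def mtm_loss(securities, duration, rate_shock):
    # First-order bond price sensitivity: loss ~ duration * rate change * value.
    return securities * duration * rate_shock

def realised_loss(securities, duration, rate_shock, deposits, runoff):
    # Unrealised losses only crystallise when a deposit run forces sales.
    unrealised = mtm_loss(securities, duration, rate_shock)
    outflow = deposits * runoff
    sold_fraction = min(1.0, outflow / securities)
    return unrealised * sold_fraction

# Hypothetical inputs: $100bn securities, 6-year duration, $150bn deposits.
rate_shock_only = realised_loss(100.0, 6.0, 0.02, 150.0, 0.0)    # no run: 0.0
run_only        = realised_loss(100.0, 6.0, 0.0,  150.0, 0.25)   # no loss: 0.0
combined        = realised_loss(100.0, 6.0, 0.02, 150.0, 0.25)   # 4.5 ($bn)
```

Assessed in isolation, each input realises zero loss; only the interaction between the two produces a capital hit — which is exactly why a siloed identification process never surfaced it.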
Greensill Capital represents a third variant: risk hidden behind product complexity. Partner banks assessed Greensill's supply chain finance exposures as low-risk, short-term trade receivables. Their identification processes never pierced the structure to discover the underlying reality — unsecured, long-term working capital to concentrated, risky entities, masked by securitisation vehicles and trade credit insurance wrappers.[10] When the insurance was withdrawn, the true credit risk materialised instantly.
Three failures. Three different identification breakdowns: a missing transmission channel, a cultural override of known information, and a complexity veil that was never penetrated. A static, inherited risk register would have caught none of them.
What Good Looks Like
If identification is to function as a genuine discovery discipline rather than an administrative exercise, three structural changes are required.
First, decouple identification from assessment procedurally. This is the single most important change. When you sit down to facilitate a risk identification workshop, participants must be forbidden from discussing probability, impact, or velocity. The moment someone says "that's a low-likelihood event," discovery stops. The cognitive mode shifts from divergent exploration to convergent analysis. The sole output of the identification session is a comprehensive inventory of vulnerabilities — including the implausible, the uncomfortable, and the novel. Assessment comes later, in a separate session, with separate tools.
Second, mandate periodic blank-slate discovery. The Fed's SR 15-18 requires firms subject to Category I standards to evaluate their material risks at least quarterly — not merely re-assess existing scores, but actively re-identify.[3] At least annually, business leaders should be asked to articulate the risks inherent in their current operations from a blank page. No inherited register. No anchoring. Instead, prompt them with forward-looking questions: What new third parties are we dependent on that we were not a year ago? What macroeconomic assumption is our current profitability entirely dependent upon? What happens if it breaks? The EON methodology's dual-track identification process — top-down SWIFT workshops combined with bottom-up specialist sub-processes, followed by mandatory reconciliation — is built specifically for this purpose.
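The reconciliation step in a dual-track process reduces to a simple set comparison. A minimal sketch, with entirely hypothetical risk names (this is not the EON methodology's actual procedure, just the logic of reconciling two independently generated inventories):

```python
# Risks surfaced by each track of a dual-track identification exercise.
top_down  = {"rate/liquidity interaction", "climate transition", "vendor concentration"}
bottom_up = {"vendor concentration", "model drift", "cloud misconfiguration"}

# Risks seen from both directions are corroborated.
confirmed = top_down & bottom_up

# Risks surfaced by only one track are the reconciliation agenda:
# each must be investigated, not silently dropped.
unexplained = top_down ^ bottom_up
```

The value is in the symmetric difference: a risk the front line sees but the workshop missed (or vice versa) is precisely where anchoring or a blind spot is operating.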
Third, use reverse stress testing as an identification tool. Traditional scenario analysis tests known risks against hypothetical shocks. Reverse stress testing starts from the outcome — institutional failure or severe capital depletion — and works backward to discover what chain of events could cause it. This forces the institution to find hidden correlations, networked vulnerabilities, and undocumented transmission channels that a forward-looking exercise would never surface. SVB's failure mode would have been discoverable through a well-designed reverse stress test.
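The backward search can be sketched as a toy enumeration. The shock names and capital impacts below are hypothetical, and impacts are treated as simply additive; a real exercise would model interactions between shocks rather than sums:

```python
from itertools import combinations

# Hypothetical shocks and their standalone CET1 impacts ($bn).
shocks = {
    "300bp rate rise": 5.0,
    "25% uninsured deposit run": 4.0,
    "top-3 counterparty default": 3.5,
    "insurance wrapper withdrawn": 2.5,
}
target_depletion = 8.0  # the failure outcome we work backward from

# Enumerate shock combinations severe enough to reach the target.
failure_paths = [
    combo
    for r in range(1, len(shocks) + 1)
    for combo in combinations(shocks, r)
    if sum(shocks[s] for s in combo) >= target_depletion
]
# Any shock appearing in a failure path but absent from the risk
# register marks a blind spot in the identification process.
```

Note that in this toy example no single shock reaches the target: every failure path is a combination, which is the whole point — reverse stress testing surfaces the compound scenarios that single-risk registers structurally miss.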
What To Do Monday Morning
- Pull out your risk register and ask one question: when was the last time a genuinely new risk was added? Not a re-categorised version of an existing risk. Not a score change. A new risk that was not on the register a year ago. If you cannot point to one, you do not have an identification process. You have a maintenance process.
- Run your next risk workshop with assessment banned. No likelihood scores, no impact matrices, no heatmap colours. Spend the full session listing everything that could go wrong, including scenarios that feel implausible. Write every one down. Assess them in a separate session the following week.
- Commission a single reverse stress test. Pick one outcome — say, a 40% drawdown in CET1 capital within 90 days — and ask a small team to work backward from there. What combination of events could cause it? What are the transmission channels? Compare the risks they surface against your current register. The gaps will tell you where your identification process is blind.
- Check your data architecture against the register. For every material risk on your register, ask: could we aggregate our total exposure to this risk across all business lines within 24 hours? If the answer is no, you have a BCBS 239 problem that is masking an identification problem.
- Ask your front line who owns risk identification. If the answer is "the risk function," you have a structural gap. The OCC's Heightened Standards exist precisely because front-line units must own the identification of risks they generate. If your first line thinks this is someone else's job, the risks they create will never make it onto the register.
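The data-architecture check above has a concrete shape: for each material risk, roll up exposures across every business line's extract and flag any register entry with no aggregable data behind it. A minimal sketch with hypothetical business lines, risk names, and figures:

```python
from collections import defaultdict

# Illustrative per-business-line extracts: (risk name, exposure in $bn).
business_line_extracts = {
    "commercial": [("rate risk", 40.0), ("vendor concentration", 5.0)],
    "treasury":   [("rate risk", 55.0)],
    "retail":     [("vendor concentration", 3.0), ("rate risk", 12.0)],
}

def aggregate_exposures(extracts):
    # Roll up total exposure per risk across all business lines.
    totals = defaultdict(float)
    for records in extracts.values():
        for risk, exposure in records:
            totals[risk] += exposure
    return dict(totals)

totals = aggregate_exposures(business_line_extracts)

# A register entry with no aggregated figure is a BCBS 239 gap
# masking an identification gap.
register = {"rate risk", "vendor concentration", "climate transition"}
unaggregable = register - totals.keys()
```

If producing even this trivial roll-up takes days of manual extraction in practice, the 24-hour test has already failed.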