The Silence
In 2016, I was appointed Global Head of Risk Identification at a European G-SIB. A trillion-dollar balance sheet. Operations in more than fifty countries. Supervised by regulators on three continents. On my first day, I asked what I thought was a straightforward question: how does this institution currently identify its risks?
The answer was silence.
Not hostile silence. Confused silence. The kind that suggests the question itself doesn't make sense. After a long pause, someone mentioned the ICAAP. Someone else mentioned operational risk event reporting. A third person brought up model risk governance. All of these were real processes, and none of them answered the question.
The institution had systems for measuring risks, reporting risks, and managing risks. What it did not have was a process for identifying them. No documented methodology. No structured approach. No way to answer the question that every regulator eventually asks: how do you know this is the right list?
That silence told me everything I needed to know about the job ahead.
The First Thing That Broke
Before you can identify risks, you have to agree on what they're called. This sounds trivial. It is not.
In my first month, I convened a meeting to reconcile the risk registers across business units. What I expected was a straightforward alignment exercise. What I got was a two-hour argument about vocabulary. Treasury called it "funding liquidity risk." Corporate Banking called it "refinancing risk." The Markets division called it "rollover risk." They were describing the same exposure — the risk that the bank couldn't renew its short-term funding — but because they used different language, they managed it in different ways, reported it to different committees, and assessed it against different thresholds.
Building a common risk taxonomy was the first operational priority. Not because it was the most intellectually interesting problem — it wasn't — but because nothing else works without it. You cannot reconcile what you cannot compare. You cannot aggregate what you cannot name consistently.[1]
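To make the aggregation point concrete, here is a minimal sketch of what a shared taxonomy does mechanically. The labels and canonical keys below are invented for illustration; they are not the bank's actual taxonomy.

```python
# Illustrative only: map each division's local label for an exposure to a
# single canonical taxonomy key so it can be compared and aggregated.
CANONICAL = {
    "funding liquidity risk": "funding_liquidity",  # Treasury's name
    "refinancing risk": "funding_liquidity",        # Corporate Banking's name
    "rollover risk": "funding_liquidity",           # Markets' name
}

def canonicalise(local_label: str) -> str:
    """Return the canonical taxonomy key for a division's local label."""
    key = local_label.strip().lower()
    if key not in CANONICAL:
        # An unmapped label is a taxonomy gap, not a data-entry problem:
        # fail loudly rather than silently minting a new category.
        raise KeyError(f"no taxonomy mapping for {local_label!r}")
    return CANONICAL[key]

# Three divisions, three names, one exposure:
labels = ["Funding liquidity risk", "Refinancing risk", "Rollover risk"]
assert {canonicalise(label) for label in labels} == {"funding_liquidity"}
```

The design choice that matters is the loud failure on unmapped labels: an unknown name is a taxonomy gap to be investigated, not a new category to be created quietly.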
The Workshop That Failed
My first risk identification workshop, in late 2016, followed a pattern I'd later recognise as universal.
I had prepared carefully. I'd studied the Structured What-If Technique (SWIFT) and the Delphi method.[2] I'd designed a structured agenda, sent pre-reading materials two weeks in advance, and booked a room with whiteboards and flip charts. I arrived confident that the session would produce a rigorous, comprehensive list of material risks.
It did not.
What it produced was a list of risks the participants were already comfortable discussing. Credit risk. Market risk. Operational risk. Liquidity risk. The categories were familiar. The descriptions were generic. The conversation was polite, structured, and shallow.
No one mentioned business model risk. No one mentioned strategic risk. No one mentioned counterparty concentration in prime brokerage. No one mentioned the risk that the institution's funded pension schemes might become materially underfunded. These were not hypothetical risks. They were real, they were material, and within five years, several of them would crystallise with severe consequences.
But they weren't identified in that workshop. Because the format — despite being based on a recognised technique — did not overcome the political, cultural, and cognitive barriers that prevent uncomfortable risks from being named.
I walked out knowing the entire approach had to be redesigned. The methodology that eventually worked — structured techniques, independent pre-workshop assessments, straw man seeding, rotating facilitation — was built from that failure. Not from a textbook. From a room full of senior bankers who spent two hours telling me what I already knew.
Two Printouts and a Question No One Could Answer
In early 2017, I sat in a conference room with two printouts on the table. One was the top-down material risk inventory — approximately twenty risks identified through workshops with senior management. The other was the consolidated bottom-up register — nearly two hundred risks submitted by business units, functions, and legal entities across the group.
My task was to reconcile them.
What I found was not alignment. It was incoherence. Of the twenty top-down material risks, twelve had no corresponding entry in the bottom-up register. Of the nearly two hundred bottom-up risks, forty-seven weren't covered by any top-down category. And when I tried to map the remainder, I found different definitions, different rating scales, different ownership structures, and different control frameworks for what were, in substance, the same exposures.
I asked the question that reconciliation is designed to answer: who owns this risk?
No one.
This is where most banks' risk identification processes die. They do top-down or bottom-up. Almost none iterate between the two. And the gap between those two lists — the risks that exist in one view but not the other — is precisely where the failures hide.[3]
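Mechanically, the first pass of that reconciliation is simple set logic once both lists are expressed in the shared taxonomy. A minimal sketch follows, with invented risk keys; the real work is the investigation each gap triggers, not the set difference itself.

```python
# Illustrative only: both inventories already mapped to the shared taxonomy.
top_down = {"credit", "market", "funding_liquidity", "pension_underfunding"}
bottom_up = {"credit", "market", "funding_liquidity",
             "settlement_fails", "prime_brokerage_concentration"}

# Risks senior management sees that no business unit owns:
for risk in sorted(top_down - bottom_up):
    print(f"top-down only: {risk} -- who manages this day to day?")

# Risks the business units report that roll up to no material category:
for risk in sorted(bottom_up - top_down):
    print(f"bottom-up only: {risk} -- is this material in aggregate?")
```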
Four years later, in March 2021, the collapse of Archegos Capital Management would cost Credit Suisse $5.5 billion.[4] The risk that materialised was precisely the kind of cross-counterparty, cross-product concentration that reconciliation is designed to surface. A single client had built leveraged equity derivative positions across multiple prime brokerage desks. Each desk monitored its own exposure. No one was monitoring the aggregate. It was a bottom-up risk invisible from the top, and a top-down risk invisible from the bottom.
The Copy-Paste Register and the Colour-Coded Spreadsheet
Several months into the role, I received the first round of bottom-up risk assessments from the business units. The templates had been designed carefully. The deadlines had been set with input from the first line. The submissions came in on time.
I opened the first one — a major trading division — and compared it against the prior year's submission.
It was identical. Not similar. Identical. Same risks. Same descriptions. Same ratings. The only thing that had changed was the date.
Of the twelve submissions I reviewed that day, nine were either unchanged from the prior year or contained only cosmetic updates. This is what compliance theatre looks like in risk identification. The process runs. The templates get filled. The deadlines get met. And the output tells the institution nothing it didn't already know — because no one is actually asking what has changed.[5]
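Detecting that kind of copy-paste is mechanically easy, which makes the failure to challenge it a choice rather than a constraint. A minimal sketch, assuming each submission arrives as rows of risk, description, and rating alongside a date field; the field names and rows are invented:

```python
import hashlib
import json

def fingerprint(register):
    """Hash the substantive content of a submission, ignoring the date."""
    rows = sorted(
        json.dumps({k: v for k, v in row.items() if k != "date"},
                   sort_keys=True)
        for row in register
    )
    return hashlib.sha256("\n".join(rows).encode()).hexdigest()

prior = [{"risk": "settlement_fails", "rating": "amber",
          "description": "fails in custody chains", "date": "2016-09-30"}]
current = [{"risk": "settlement_fails", "rating": "amber",
            "description": "fails in custody chains", "date": "2017-09-30"}]

if fingerprint(current) == fingerprint(prior):
    # Same risks, same descriptions, same ratings: only the date moved.
    print("Unchanged from prior year: return with questions, not corrections.")
```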
Years later, when I arrived at a second institution — a UK-regulated international banking group — I found the inverse problem. The reconciliation documentation from the prior cycle was a merged spreadsheet with colour-coding but no analysis. Risks from the top-down list were highlighted in blue. Bottom-up in green. Both in yellow. There was no narrative explaining what the colours meant. No analysis of why twelve top-down risks had no bottom-up owner. No record of whether anyone had investigated the gaps.
The spreadsheet wasn't documentation. It was evidence that someone had completed a task.
What the Politics Actually Look Like
Let me be direct about something no textbook will tell you.
Risk management standards will not tell you how to run a workshop with twenty senior bankers who do not want to be there. They will not tell you what to do when a business unit head tells you, in front of the Chief Risk Officer, that the risk you've identified doesn't exist. They will not tell you how to explain to a board member why risk identification cannot be reduced to a dashboard with three traffic lights.
I learned this through practice. No course prepared me for the moment when a business unit head told me, in front of twelve colleagues, that a risk I'd identified was not real. The risk was documented in the operational loss database. It had materialised twice in the prior three years. The issue wasn't technical. The issue was that naming the risk implied controls had failed, which implied someone was accountable, which made the conversation uncomfortable.
I didn't argue. I asked a question: "If this risk doesn't exist, how do we explain the three events in the loss database that correspond exactly to the scenario I described?"
Pause. Someone else offered: "Those were operational errors, not risks."
I asked: "What's the difference?"
No one answered. The risk stayed in the inventory.
Risk identification is as much a political exercise as a technical one. The most important risks are often the ones the institution doesn't want to hear. If your process isn't designed to surface those risks — and your governance structures aren't designed to protect the people who name them — then you don't have a risk identification process. You have a confirmation exercise.
What I'd Do Differently — and What You Can Do Monday Morning
After building this process twice and studying 179 bank failures,[6] here's what I know:
- Start with the taxonomy. Before you identify a single risk, get every business unit in a room and agree on what the risk categories are called. This will be the most tedious and most important meeting you run all year. If you skip it, every downstream process — workshops, reconciliation, reporting — will produce noise instead of signal.
- Collect independent assessments before every workshop. Ask each participant to submit their top five risks before the session. Anonymise them. Present them as the starting point. This breaks groupthink and creates permission to name the uncomfortable risks. The straw man seeding technique alone transformed the quality of my workshops more than any other single change; a sketch of the mechanics follows this list.
- Reconcile top-down and bottom-up every single cycle. Put the two lists side by side. Find the gaps. Investigate every one. The risks that appear in only one list are the risks most likely to materialise unmanaged. If you're only doing top-down or only doing bottom-up, you have half a process.
- Return bad submissions. When a business unit sends you the same risk register as last year with a new date, send it back with questions. Not corrections — questions. "Has nothing changed in your risk environment in twelve months? If so, explain why." The quality of bottom-up identification improves dramatically once people understand that copy-paste will be challenged.
- Build for the regulator you'll meet in three years, not the one you met last year. Every regulator I've worked with — PRA, FINMA, the Fed — eventually asks the same question: walk me through your risk identification methodology. If you can answer that question with a documented, repeatable process, the rest of the conversation is manageable. If you can't, nothing else matters.
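On the second point, the pre-workshop mechanics are simple enough to sketch. Everything here is invented for illustration, including the participant keys and risk names; the essential step is that attribution is stripped before anyone sees the pooled list.

```python
import random
from collections import Counter

# Illustrative only: each participant's top five risks, submitted in advance.
submissions = {
    "participant_a": ["credit", "market", "funding_liquidity",
                      "pension_underfunding", "operational"],
    "participant_b": ["credit", "market", "operational",
                      "prime_brokerage_concentration", "model"],
    "participant_c": ["credit", "funding_liquidity", "operational",
                      "pension_underfunding", "strategic"],
}

# Pool and shuffle so no ordering can identify a submitter.
pooled = [risk for risks in submissions.values() for risk in risks]
random.shuffle(pooled)

# The seeded straw man: risks ranked by how many people named them,
# with no names attached to any of them.
for risk, count in Counter(pooled).most_common():
    print(f"{count}x {risk}")
```

Presenting that ranked, unattributed list as the starting point is what creates the permission: the uncomfortable risk is already on the table, and no one in the room put it there.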
The Hard Truth
Building the process is the easy part.
Sustaining it — protecting it from commercial pressure, from political interference, from the institutional tendency to hear only the risks it's comfortable hearing — that's the hard part. And that's the part no methodology can solve on its own. It requires governance structures that protect the process from the business. It requires a CRO who will defend uncomfortable findings. It requires a board that wants to hear bad news before the regulator delivers it.
The methodology I've spent twenty years building works. But it works only when the institution allows it to work. I've seen what happens when it doesn't. I was pricing mortgage-backed securities at a German mortgage bank in 2007 when the world discovered what poor risk identification looks like.[7] I've studied 179 cases of banks that failed for the same reason.
The risk was always there. Someone usually knew. The question is whether the institution had a process that forced them to say it out loud — and a governance structure that forced someone to listen.