Welcome to our short and sweet crash course on Stress Testing Bank Capital. In this course we cover the following topics:
Within each topic we plan to cover the following dimensions.
In the world we live in, operating conditions are generally categorized as normal, abnormal and extreme. Stress testing refers to a process through which we try to assess the impact of abnormal and extreme conditions on our processes, control systems and organizations.
Within financial services, stress testing takes on a second dimension: the focus shifts from assessing impact to identifying breaking points, i.e. the maximum amount of stress a financial institution can bear before it breaks down and fails. The level of interconnectivity between financial markets and institutions has made this threshold of failure even more important, since the failure of a single institution can trigger a deep and painful system-wide crisis that can easily turn into a regional or global contagion.
Regulators and shareholders are keenly interested in the margin of safety an organization should maintain. They are also aware that the right time to develop and test a strategy for managing a crisis is when operating conditions are normal and everyone can think clearly. Therefore, in addition to identifying the threshold of failure, stress testing also serves as a tool for testing reactions and responses to a crisis before it occurs.
Many of the processes and controls in a financial institution depend on models that are built on assumptions. Stress testing also provides a framework for testing those assumptions, as well as for identifying the conditions under which the assumptions no longer hold and the models break down.
Regulators use stress tests to identify weak institutions that can create a contagion effect in an economy and/or market.
To create a viable stress testing framework we need the following elements:
Let’s start with the first step – the metrics and measures we use, or are likely to focus on, in our stress test. Within banking and insurance institutions, the primary focus is on stress testing capital. Given the role bank and insurance failures have played in the two great depressions the world has seen in the last 100 years, both regulators and shareholders are interested in the right amount of capital. The right amount to do what?
a) Operate under normal conditions – aka operating capital
b) Operate under stressed conditions – aka risk capital
c) Operate under extreme conditions – aka signaling capital
There is a fine line between (b) and (c), and banking regulation and capital requirements are a running battle between shareholders’ interpretation of how much capital is needed to survive the occasional hiccup and the regulatory view of how much capital a bank should carry on its balance sheet before it is allowed to solicit deposits from customers. Over the last two decades each group has created a model for optimal capital. Unfortunately, what is optimal from a shareholder’s point of view is sub-optimal from a regulatory point of view.
Besides reviewing the basic, static capital adequacy framework, regulators are also keen on testing capital adequacy under a range of scenarios: some static, some replays of historical crises, and others simulations of the complex interactions between all the market factors that impact a bank’s balance sheet.
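The simplest of these scenarios, a static "historical replay" shock, can be sketched as follows. The positions, shock sizes and capital figure below are illustrative assumptions, not real data:

```python
# A minimal sketch of a static, scenario-based capital stress test.
# All numbers are illustrative assumptions for this example.

positions = {          # market value of exposures, in millions
    "bonds": 500.0,
    "equities": 200.0,
    "fx": 100.0,
}

# Hypothetical "historical replay" shock vector: percentage price moves
# observed during a past crisis window (values invented for illustration).
crisis_shock = {"bonds": -0.08, "equities": -0.35, "fx": -0.12}

def stressed_loss(positions, shocks):
    """Mark every position to its shocked value and sum the losses."""
    return -sum(positions[k] * shocks[k] for k in positions)

capital = 120.0  # available capital, millions (illustrative)
loss = stressed_loss(positions, crisis_shock)
print(f"Scenario loss: {loss:.1f}m, post-stress capital: {capital - loss:.1f}m")
```

With these made-up numbers the scenario loss exceeds available capital, which is exactly the kind of breach a static test is designed to surface.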
There is also a big debate about the actual category of capital on which this analysis should focus.
Should you build a separate stress testing process for each of these capital types, or build one that stress tests the variation on an incremental basis? When you report stress testing results to regulators and shareholders, do you report a specific type, or different types for different audiences?
How do you go about building such a test? There are two primary approaches.
The first approach focuses on identifying the ranges of values likely to be seen in the market under current and future conditions. The objective is to create an extreme set of prices and then test the business and the underlying models against them. We tend to use value at risk (VaR) measures calculated with the historical simulation approach to identify and create a set of extreme values for prices, rates and trailing correlations.
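The historical simulation idea can be sketched in a few lines: rank past returns and read the loss quantile directly off the empirical distribution. The return series below is synthetic, standing in for a real price history:

```python
# A minimal sketch of historical-simulation VaR: sort historical losses
# and pick the quantile matching the confidence level. The return series
# is synthetic (random), standing in for a real market history.
import random

random.seed(7)
returns = [random.gauss(0.0005, 0.012) for _ in range(500)]  # daily returns

def hs_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss not exceeded with `confidence`
    probability, taken from the empirical return distribution."""
    losses = sorted(-r for r in returns)         # losses, ascending
    idx = int(confidence * len(losses)) - 1      # quantile index
    return losses[idx]

var99 = hs_var(returns, 0.99)
print(f"1-day 99% VaR: {var99:.2%} of portfolio value")
```

The same sorted-loss vector also supplies the "extreme set of prices" mentioned above: the tail observations beyond the VaR cutoff are the candidate stress scenarios.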
The second approach focuses on capital and assesses the magnitude of shock it can comfortably bear. Variations of this approach link the size of the shock, or threshold, to probabilities, giving an indicative likelihood of failure. Flipped on its head, the approach lets us build probability-of-shortfall models that display capital levels and probabilities of failure side by side. Once again we use a VaR-based model to estimate the likelihood that a shock of a given magnitude will occur. A sophisticated board can then create a limit structure linked to the bank’s probability of failure. Within the insurance industry a variation of this approach helps us calculate the probability of ruin.
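A probability-of-shortfall table can be sketched directly from a simulated loss distribution: for each candidate capital level, count how often losses exceed it. The loss sample below is synthetic; a real model would use the bank's own simulated or historical losses:

```python
# A minimal sketch of a probability-of-shortfall view. The loss sample
# is synthetic (truncated normal), used only for illustration.
import random

random.seed(42)
losses = [max(0.0, random.gauss(50.0, 30.0)) for _ in range(10_000)]  # millions

def shortfall_probability(losses, capital):
    """Empirical P(loss > capital) -- an indicative likelihood of failure."""
    return sum(1 for x in losses if x > capital) / len(losses)

# Capital level vs. indicative probability of failure, side by side
for capital in (80, 100, 120, 140):
    p = shortfall_probability(losses, capital)
    print(f"capital {capital:>4}m -> P(failure) ~ {p:.2%}")
```

A board-level limit structure then amounts to picking a row of this table: choose a tolerable probability of failure and hold at least the corresponding capital.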
Given the myriad exposures of banks and insurance companies, this process needs to be reported for:
a) All investment positions across bonds, currencies, equities and commodities – market risk
b) All funding and financing choices – interest rate mismatch or gap
c) All liquidity structure choices – liquidity gap
d) All credit positions – credit risk
e) Operational elements – operational risk
f) Correlation effects within the first four
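One standard way to fold the correlation effects in (f) into a single number is variance-covariance aggregation of the standalone risk figures. The standalone VaRs and the correlation matrix below are illustrative assumptions:

```python
# A minimal sketch of aggregating standalone risk numbers across the four
# categories with an assumed correlation matrix (variance-covariance
# aggregation). All figures are illustrative, not calibrated.
import math

categories = ["market", "rate_gap", "liquidity", "credit"]
standalone = [40.0, 25.0, 15.0, 60.0]  # standalone VaR per category, millions

corr = [  # assumed pairwise correlations between the risk categories
    [1.0, 0.5, 0.3, 0.4],
    [0.5, 1.0, 0.6, 0.2],
    [0.3, 0.6, 1.0, 0.1],
    [0.4, 0.2, 0.1, 1.0],
]

def aggregate_var(v, c):
    """Total VaR = sqrt(v' C v); below the simple sum whenever corr < 1."""
    n = len(v)
    total = sum(v[i] * c[i][j] * v[j] for i in range(n) for j in range(n))
    return math.sqrt(total)

total = aggregate_var(standalone, corr)
print(f"Sum of standalone VaRs: {sum(standalone):.1f}m, diversified: {total:.1f}m")
```

The gap between the simple sum and the diversified figure is the correlation (diversification) effect; under stress, correlations tend toward one and that benefit shrinks.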
This brings up a number of challenges.
How do you maintain a dataset that cuts across the entire operations of a financial institution? Who is responsible for it? How do you measure and track risk correlations between your investment portfolio, your funding choices and your credit decisions? We know there is a relationship, but how do we measure it? And even if you could measure it, how do you translate all of this into a framework that allows you to calculate probabilities?
As we have seen, the process above is anything but simple. How do we report it in a simplified manner to shareholders and board members who may not have the inclination or the background to understand complex statistical and computational finance concepts? Finally, how do we create a robust and stable model that gives consistent results under normal as well as extreme conditions?
Within the banking sector a fair amount of work has been done on both simple and fairly complex stress tests, from using data sets drawn from extreme and turbulent events to sophisticated Monte Carlo simulators that dry-run the entire bank in an Excel spreadsheet. Twenty years ago we could only run static tests linked to static shocks. Today, however, our understanding of bank capital and the challenges it faces is much more sophisticated, in large part because of the events of the recent financial crisis.
Also see – The evolution of bank stress tests.