Finance-Calculating Value at Risk – Caveats, qualifications, issues

Qualifications and Limitations of the Value at Risk (VaR) Metric

  • VaR depends on historical data and therefore may never fully capture severe shocks that exceed recent or historical norms. If the time period used in the calculation is a relatively stable one, the computed VaR will tend to understate the risk; if the period covers more volatile data, VaR may overstate it.
  • In evaluating risk and setting thresholds there is value in using a lower VaR confidence level (95% instead of 99%) as an early warning tool. Because the lower threshold is crossed more frequently than the 99% VaR, breaches can prompt management to make queries that uncover anomalies and previously unrecognized risk.
  • Multiple VaR methods may be used to assess risk from all available information and hence obtain challenging and different views of risk. Volatility estimates weighted toward the recent past (EWMA) can act as early warning signals of changes in current market conditions, while estimates drawn from a longer historical interval (SMA or historical simulation) can be used to identify stressful episodes in the past.
  • Generally the variance covariance (VCV) approach does a reasonable job of assessing, over very short time periods, the VaR for portfolios that do not include options. The historical simulation approach works best for a risk source that is stable and for which there is a good, reliable and substantial historical database. For options or portfolios with nonlinear payoffs, or where the data is volatile and unstable and the normality assumption underlying VCV cannot apply, the Monte Carlo simulation method is the best approach.
  • VaR numbers should not be used in isolation. They must be back-tested against actual profit and loss numbers. Anomalies between P&L and VaR should alert management to query or probe them. This investigation could reveal areas for improvement in calculation/reporting; alternatively, it could reveal previously unrecognized risks or emerging stress conditions. P&L reporting must be of sufficient granularity to be useful in assessing risk for a given business line/portfolio.
  • Since the underlying return distributions for the variance covariance and Monte Carlo simulation approaches are assumptions about what the actual distributions could be, a violation of these assumptions would result in incorrect VaR estimates. Even the historical simulation approach, which is non-parametric, assumes that the historical distribution of returns based on past data will be representative of the distribution going forward. If this is not the case, the VaR calculation will produce inaccurate estimates.
  • There is a significant body of literature showing that returns of various risk factors are not normally distributed and that events considered outliers under the normal distribution occur more frequently in reality. This can leave companies/banks underprepared for events considered extremely unlikely under a normal distribution.
  • VaR measures across risk factors also rely, explicitly or implicitly, on correlations between the risk factors. Correlations tend to vary greatly over time and do not remain static. If future correlations between factors change relative to what was experienced in the past, VaR measures based on past data will misstate the true risk.
  • VaR should not be relied on as the sole measure of risk. By definition, Value at Risk looks at risk from only one angle: the maximum loss that could occur over a certain period with a certain degree of confidence. In doing so a great deal of valuable information in the risk distribution is ignored; for example, the extent of loss in the extreme tail remains unknown.
  • When using VaR, the user should not just stick to regulatory parameters like the 95% or 99% confidence levels. This has been shown to lead to suboptimal risk management decisions because the focus is not on intermediate losses, which fall below the probability threshold. Managers who ignore these losses tend to do worse than counterparts who also monitor them, because they do not rein in losses until the specified probability threshold or worst-case scenario is crossed.
  • VaR measures can be gamed by managers seeking to meet a VaR risk constraint through the choice of approach or look-back period. For example, using the historical simulation approach with a data set that ignores a current sudden upsurge in the volatility of commodity prices would produce a low VaR number that satisfies requirements but fails one of the objectives of risk management: alerting the company to a possibly deteriorating situation that could be unfolding.
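To make the confidence-level point above concrete, here is a minimal historical-simulation VaR sketch in Python. The simulated return series, seed, and parameters are illustrative assumptions, not data from this post; a real implementation would use actual portfolio returns.

```python
import random

random.seed(42)
# Simulated daily portfolio returns (stand-in for a real P&L history).
returns = [random.gauss(0.0005, 0.01) for _ in range(500)]

def historical_var(returns, confidence):
    """VaR as the loss at the (1 - confidence) quantile of historical returns."""
    ordered = sorted(returns)  # worst returns first
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]  # report the loss as a positive number

var_95 = historical_var(returns, 0.95)
var_99 = historical_var(returns, 0.99)
# The 95% VaR is smaller and breached more often, which is what makes it
# useful as an early-warning threshold alongside the 99% figure.
print(f"95% VaR: {var_95:.4f}  99% VaR: {var_99:.4f}")
```

Because the 99% quantile sits further into the loss tail, `var_99` is always at least as large as `var_95` on the same data set.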
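The contrast between recency-weighted (EWMA) and long-window (SMA) volatility estimates noted above can be sketched as follows. The return series is a contrived calm-then-spike sequence, and the decay factor of 0.94 is the commonly cited RiskMetrics value; both are illustrative assumptions.

```python
import math

def sma_vol(returns, window):
    """Simple moving-average volatility over the last `window` returns."""
    recent = returns[-window:]
    mean = sum(recent) / len(recent)
    return math.sqrt(sum((r - mean) ** 2 for r in recent) / len(recent))

def ewma_vol(returns, lam=0.94):
    """EWMA volatility: recent squared returns carry exponentially more weight."""
    var = returns[0] ** 2
    for r in returns[1:]:
        var = lam * var + (1 - lam) * r ** 2
    return math.sqrt(var)

# A long calm stretch followed by a sudden volatility spike.
calm = [0.001 * ((-1) ** i) for i in range(250)]
spike = [0.03, -0.04, 0.035, -0.03, 0.04]
history = calm + spike

# The EWMA estimate jumps on the spike; the 250-day SMA barely moves,
# which is why EWMA can serve as the earlier warning signal.
print(f"SMA(250): {sma_vol(history, 250):.4f}")
print(f"EWMA:     {ewma_vol(history):.4f}")
```

Run on this series, the EWMA estimate exceeds the long-window SMA estimate, reflecting its sensitivity to current conditions.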
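Back-testing VaR against realized P&L, as recommended above, can start with a simple exceedance count. The P&L series and the fixed one-day VaR of 2.33 (the 99% point of a unit normal) below are hypothetical stand-ins for a desk's actual numbers.

```python
import random

random.seed(7)

def backtest_exceedances(pnl, var_series):
    """Count days on which the realized loss exceeded that day's VaR estimate."""
    return sum(1 for p, v in zip(pnl, var_series) if -p > v)

# Hypothetical daily P&L and a constant 99% VaR estimate (illustrative units).
pnl = [random.gauss(0, 1) for _ in range(250)]
var_99 = [2.33] * 250

breaches = backtest_exceedances(pnl, var_99)
expected = 0.01 * 250  # about 2.5 breaches expected at 99% over 250 days
# A breach count far above the expected value would flag understated risk
# and should trigger the management queries described above.
print(f"Breaches: {breaches} (expected about {expected:.1f})")
```

In practice the comparison is done day by day against each day's VaR estimate, and materially more breaches than the confidence level implies is the signal to investigate.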
