Value at Risk: The New Benchmark for Managing Financial Risk (Philippe Jorion)


The Basel accord provides recommendations on banking regulation with regard to credit, market and operational risks. Its purpose is to ensure that financial institutions hold enough capital to meet their obligations and absorb unexpected losses. For a financial institution, measuring the risk it faces is an essential task. In the specific case of market risk, one method of measurement is to evaluate the losses likely to be incurred when the prices of the portfolio assets fall. This is what Value at Risk (VaR) does: the portfolio VaR represents the maximum amount an investor may lose over a given time period with a given probability. Since the Basel Committee on Banking Supervision (BCBS) at the Bank for International Settlements requires financial institutions to meet capital requirements on the basis of VaR estimates, while allowing them to use internal models for the VaR calculation, this measurement has become a basic market risk management tool for financial institutions.

Although the VaR concept is very simple, its calculation is not easy. The methodologies initially developed to calculate a portfolio VaR are (i) the variance-covariance approach, also called the parametric method, (ii) historical simulation, a non-parametric method, and (iii) Monte Carlo simulation, a semi-parametric method. As is well known, all these methodologies, usually called standard models, have numerous shortcomings, which have led to the development of new proposals (see Jorion). The major drawback of the parametric model is its assumption that financial returns follow a normal distribution; empirical evidence shows that they do not. A second problem relates to the model used to estimate the conditional volatility of financial returns, and a third to the assumption that returns are independent and identically distributed (i.i.d.).
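A minimal numerical sketch of the first two approaches is given below. The simulated fat-tailed returns, the $10m portfolio value and the 99% confidence level are illustrative assumptions, not figures from the text.

```python
import numpy as np
from scipy.stats import norm

def parametric_var(returns, value, alpha=0.99):
    """Variance-covariance (parametric) VaR: assumes normally distributed returns."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    # Loss at the (1 - alpha) return quantile, reported as a positive dollar amount.
    return -(mu + norm.ppf(1 - alpha) * sigma) * value

def historical_var(returns, value, alpha=0.99):
    """Historical-simulation (non-parametric) VaR: empirical quantile of past returns."""
    return -np.percentile(returns, 100 * (1 - alpha)) * value

# Illustrative inputs: 1,000 simulated daily returns for a $10m portfolio.
rng = np.random.default_rng(0)
returns = rng.standard_t(df=4, size=1000) * 0.01   # fat-tailed daily returns
print(parametric_var(returns, 10_000_000))          # one-day 99% VaR, normal assumption
print(historical_var(returns, 10_000_000))          # one-day 99% VaR, empirical quantile
```

On fat-tailed data such as this, the historical-simulation estimate will typically exceed the parametric one, which illustrates the normality drawback discussed above.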

Positions that are reported, modeled or priced incorrectly stand out, as do data feeds that are inaccurate or late and systems that are too frequently down. Anything that affects profit and loss but is left out of other reports will show up either in inflated VaR or in excessive VaR breaks. Inside the VaR limit, conventional statistical methods are reliable. Relatively short-term and specific data can be used for analysis. Probability estimates are meaningful, because there are enough data to test them.

In a sense, there is no true risk because you have a sum of many independent observations with a left bound on the outcome. A casino doesn't worry about whether red or black will come up on the next roulette spin. Risk managers encourage productive risk-taking in this regime, because there is little true cost. People tend to worry too much about these risks, because they happen frequently, and not enough about what might happen on the worst days.

Outside the VaR limit, by contrast, risk should be analyzed with stress testing based on long-term and broad market data. Probability estimates are no longer reliable there, so the risk manager should concentrate instead on making sure good plans are in place to limit the loss if possible, and to survive the loss if not. You expect periodic VaR breaks.

The loss distribution typically has fat tails, and you might get more than one break in a short period of time.
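For a sense of scale, the sketch below treats breaks as independent from day to day, which is a simplifying assumption; the clustering produced by fat tails makes multiple breaks even more likely than this.

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k VaR breaks in n days), treating breaks as i.i.d. with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One-day 99% VaR over a 21-trading-day month: ~1.9% chance of two or more breaks,
# and roughly a 19% chance of at least one.
print(prob_at_least(2, 21, 0.01))
print(prob_at_least(1, 21, 0.01))
```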

Moreover, markets may be abnormal and trading may exacerbate losses, and you may take losses not measured in daily marks, such as lawsuits, loss of employee morale and market confidence, and impairment of brand names. So an institution that can't deal with three times VaR losses as routine events probably won't survive long enough to put a VaR system in place. Three to ten times VaR is the range for stress testing. Institutions should be confident they have examined all the foreseeable events that will cause losses in this range, and are prepared to survive them.

Foreseeable events should not cause losses beyond ten times VaR. If they do, they should be hedged or insured, or the business plan should be changed to avoid them, or VaR should be increased. It's hard to run a business if foreseeable losses are orders of magnitude larger than very large everyday losses, and it's hard to plan for these events because they are out of scale with daily experience. Of course there will be unforeseeable losses more than ten times VaR, but it's pointless to anticipate them: you can't know much about them, and it results in needless worrying.

Better to hope that the discipline of preparing for all foreseeable three-to-ten times VaR losses will improve the chances of surviving the unforeseen and larger losses that inevitably occur. VaR is the border. Within any portfolio it is also possible to isolate specific positions that might better hedge the portfolio so as to reduce, or even minimise, the VaR.
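One way to find such positions is to decompose VaR by position. The sketch below uses the variance-covariance setting; the weights, covariance matrix and portfolio value are illustrative assumptions, not from the text. Marginal VaR measures how portfolio VaR responds to a small change in a position's weight, and component VaR allocates the diversified total across positions.

```python
import numpy as np

z = 2.326                       # one-tailed 99% quantile of the standard normal
value = 10_000_000              # portfolio value (illustrative)
w = np.array([0.5, 0.3, 0.2])   # position weights (illustrative)
cov = np.array([[0.0004, 0.0001, 0.0000],    # daily return covariances (illustrative)
                [0.0001, 0.0009, 0.0002],
                [0.0000, 0.0002, 0.0016]])

sigma_p = np.sqrt(w @ cov @ w)                   # portfolio return volatility
portfolio_var = z * sigma_p * value              # diversified portfolio VaR

marginal_var = z * (cov @ w) / sigma_p * value   # dVaR/dw_i: sensitivity of VaR to each weight
component_var = w * marginal_var                 # VaR allocated to each position

print(portfolio_var)
print(component_var, component_var.sum())        # the components sum to the portfolio VaR
```

Positions with small or negative component VaR are acting as hedges; trimming the positions with the largest components is the most direct way to reduce the portfolio's VaR.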

One example that is cited involves strategies employed by market makers for trading linear interest rate derivatives and interest rate swap portfolios.

Backtesting

A key advantage of VaR over most other measures of risk, such as expected shortfall, is the availability of several backtesting procedures for validating a set of VaR forecasts. Early examples of backtests can be found in Christoffersen,[30] later generalized by Pajhede,[31] which model a "hit sequence" of losses greater than the VaR and test whether these "hits" are independent of one another and occur with the correct probability.

A number of other backtests are available which model the time between hits in the hit sequence; see Christoffersen,[32] Haas,[33] and Tokpavi et al. Backtest toolboxes are available in Matlab[1] and R, though only the first implements the parametric bootstrap method.
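As a bare-bones sketch of the idea, the function below forms a hit sequence from realized losses and forecast VaRs and applies a generic unconditional-coverage likelihood-ratio check in the spirit of Kupiec's proportion-of-failures test; it is not the implementation from any of the cited papers.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_test(losses, var_forecasts, p=0.01):
    """Unconditional-coverage (Kupiec POF) likelihood-ratio test on a VaR hit sequence."""
    hits = (np.asarray(losses) > np.asarray(var_forecasts)).astype(int)  # 1 = VaR break
    n, x = len(hits), int(hits.sum())
    phat = x / n
    if phat in (0.0, 1.0):
        # Degenerate cases: the unconstrained likelihood is 1, so avoid log(0) terms.
        lr = -2 * (x * np.log(p) + (n - x) * np.log(1 - p))
    else:
        lr = -2 * ((n - x) * np.log((1 - p) / (1 - phat)) + x * np.log(p / phat))
    p_value = 1 - chi2.cdf(lr, df=1)   # small p-value: break frequency inconsistent with p
    return x, lr, p_value
```

The conditional-coverage tests mentioned above go further and also test whether the hits are independent over time rather than clustered.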

History

The problem of risk measurement is an old one in statistics, economics and finance, and financial risk management has long been a concern of regulators and financial executives as well. Retrospective analysis has found some VaR-like concepts in this history, but VaR did not emerge as a distinct concept until the late 1980s. The triggering event was the stock market crash of 1987. This was the first major financial crisis in which a lot of academically trained quants were in high enough positions to worry about firm-wide survival.

A reconsideration of history led some quants to decide there were recurring crises, about one or two per decade, that overwhelmed the statistical assumptions embedded in models used for trading, investment management and derivative pricing. These affected many markets at once, including ones that were usually not correlated, and seldom had discernible economic cause or warning, although after-the-fact explanations were plentiful.

If these events were excluded, the profits made in between "Black Swans" could be much smaller than the losses suffered in the crisis. Institutions could fail as a result. It was hoped that "Black Swans" would be preceded by increases in estimated VaR or increased frequency of VaR breaks, in at least some markets. The extent to which this has proven to be true is controversial.

VaR was well established in quantitative trading groups at several financial institutions, notably Bankers Trust, before 1990, although neither the name nor the definition had been standardized. There was no effort to aggregate VaRs across trading desks. Since many trading desks already computed risk management VaR, and it was the only common risk measure that could be both defined for all businesses and aggregated without strong assumptions, it was the natural choice for reporting firmwide risk.

J. P. Morgan CEO Dennis Weatherstone famously called for a "4:15 report" that combined all firm risk on one page, available within 15 minutes of the market close.

Development was most extensive at J. P. Morgan, which published the methodology and gave free access to estimates of the necessary underlying parameters in 1994. This was the first time VaR had been exposed beyond a relatively small group of quants. In 1997, the U.S. Securities and Exchange Commission ruled that public corporations must disclose quantitative information about their derivatives activity. Major banks and dealers chose to implement the rule by including VaR information in the notes to their financial statements.

Under the Basel II accord, VaR is the preferred measure of market risk, and concepts similar to VaR are used in other parts of the accord. A famous debate between Nassim Taleb and Philippe Jorion set out some of the major points of contention. Taleb claimed that VaR ignored 2,500 years of experience in favor of untested models built by non-traders, was charlatanism because it claimed to estimate the risks of rare events (which is impossible), gave false confidence, and would be exploited by traders.[37] David Einhorn and Aaron Brown debated VaR in the Global Association of Risk Professionals Review.[20][3] Einhorn compared VaR to "an airbag that works all the time, except when you have a car accident".

He further charged that VaR led to excessive risk-taking and leverage at financial institutions, focused on the manageable risks near the center of the distribution while ignoring the tails, created an incentive to take "excessive but remote risks", and was "potentially catastrophic when its use creates a false sense of security among senior executives and watchdogs." After interviewing risk managers, including several of those cited above, a 2009 New York Times article on VaR and the financial crisis suggested that VaR was very useful to risk experts, but nevertheless exacerbated the crisis by giving false security to bank executives and regulators.

A powerful tool for professional risk managers, VaR is portrayed as both easy to misunderstand and dangerous when misunderstood. In 2009, Taleb testified in Congress asking for VaR to be banned, for a number of reasons.

One was that tail risks are non-measurable.

Another was that, for anchoring reasons, VaR leads to higher risk-taking.

Whether a given risk belongs inside or outside VaR depends on the scale of the institution. For example, the average bank branch in the United States is robbed about once every ten years.

A single-branch bank therefore has only about a 0.03% chance of being robbed on any given day, which is not even within an order of magnitude of the probabilities relevant to daily VaR, so the risk is in the range where the institution should not worry about it; it should insure against it and take advice from insurers on precautions. The whole point of insurance is to aggregate risks that are beyond individual VaR limits and bring them into a large enough portfolio to get statistical predictability.

It does not pay for a one-branch bank to have a security expert on staff. As institutions get more branches, the risk of a robbery on a specific day rises to within an order of magnitude of VaR. At that point it makes sense for the institution to run internal stress tests and analyze the risk itself. It will spend less on insurance and more on in-house expertise. For a very large banking institution, robberies are a routine daily occurrence.
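A back-of-the-envelope illustration of that progression: the once-every-ten-years rate comes from the text, the branch counts are arbitrary, and robberies at different branches are assumed independent for simplicity.

```python
# Daily probability that at least one branch is robbed, for various branch counts,
# assuming each branch is robbed about once every ten years (~1/3650 per day)
# and robberies at different branches are independent.
p_branch = 1 / 3650

for branches in (1, 100, 1000, 10000):
    p_any = 1 - (1 - p_branch) ** branches
    print(f"{branches:>6} branches: {p_any:.2%} chance of a robbery today")
```

With a few hundred branches, the chance of a robbery on any given day is already in the same range as daily VaR probabilities; with thousands of branches, robberies become near-daily events and can simply be folded into the statistics.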

Losses are part of the daily VaR calculation and are tracked statistically rather than case by case. A sizable in-house security department is in charge of prevention and control; the general risk manager just tracks the loss like any other cost of doing business.

That means such losses move from far outside VaR (to be insured), to near outside VaR (to be analyzed case by case), to inside VaR (to be treated statistically).