U.S. patent application number 13/011,411 was filed with the patent office on 2011-01-21 and published on 2011-05-26 as publication number 2011/0125673, for systems and methods for compound risk factor sampling with integrated market and credit risk.
This patent application is currently assigned to ALGORITHMICS SOFTWARE LLC. The invention is credited to Ben De Prisco, Ian Iscoe, Yijun Jiang, and Helmut Mausser.
United States Patent Application
Publication Number | 20110125673 |
Kind Code | A1 |
Application Number | 13/011,411 |
Family ID | 40932611 |
Publication Date | May 26, 2011 |
Inventors | De Prisco; Ben; et al. |
SYSTEMS AND METHODS FOR COMPOUND RISK FACTOR SAMPLING WITH
INTEGRATED MARKET AND CREDIT RISK
Abstract
Systems and methods for generating an integrated market and
credit loss distribution for the purpose of calculating one or more
risk measures associated with a portfolio of instruments are
disclosed. In at least one embodiment, compound risk factor
sampling is performed that comprises conditionally generating
multiple systemic credit driver samples for each market risk factor
sample generated per time step of a simulation. There are also
disclosed systems and methods for determining an optimal number of
sample values for each of the market risk factors, systemic credit
drivers, and optionally, idiosyncratic risk factors that would be
required in order to obtain an acceptable amount of variability in
the calculated risk estimates and/or to satisfy an available
computational budget.
Inventors: | De Prisco; Ben (Woodbridge, CA); Iscoe; Ian (Mississauga, CA); Jiang; Yijun (Toronto, CA); Mausser; Helmut (Toronto, CA) |
Assignee: | ALGORITHMICS SOFTWARE LLC, Wilmington, DE |
Family ID: | 40932611 |
Appl. No.: | 13/011,411 |
Filed: | January 21, 2011 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
12/026,781 | Feb 6, 2008 | 7,908,197
13/011,411 | |
Current U.S. Class: | 705/36R |
Current CPC Class: | G06Q 40/06 (2013.01); G06Q 40/00 (2013.01); G06Q 40/025 (2013.01); G06Q 40/04 (2013.01) |
Class at Publication: | 705/36.R |
International Class: | G06Q 40/00 (2006.01) |
Claims
1. A computer-implemented method for generating an integrated
market and credit loss distribution for the purpose of calculating
one or more risk measures associated with a portfolio of
instruments by performing a simulation, wherein acts of said method
are performed by computer, said computer comprising at least one
computer processor and at least one memory, said method comprising:
identifying, by the at least one computer processor, at least a
first time horizon for said simulation; receiving, by the at least
one computer processor, data identifying X, wherein X is a vector
of scalar-valued market risk factor processes, each market risk
factor process defined by a start value, at least one function
representing a model, and zero or more parameters for the model;
receiving, by the at least one computer processor, data identifying
Y, wherein Y is a vector of scalar-valued credit driver processes,
each credit driver process defined by a start value, at least one
function representing a model, and zero or more parameters for the
model; receiving, by the at least one computer processor, data
comprising one or more co-variance matrices that define the joint
evolution of X and Y over said first time horizon; identifying, by
the at least one computer processor, a first parameter M, wherein
M>0, and a second parameter S, wherein S>1; wherein M defines
a desired number of market risk factor samples and S defines a
desired number of systemic credit driver samples for each of M
market risk factor samples; the at least one computer processor
generating MS conditional loss distributions for said first time
horizon to compute an unconditional loss distribution {circumflex
over (F)} for said first time horizon by performing acts
comprising: the at least one computer processor generating MS
scenarios, said MS scenarios defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S; wherein said act of the at least one computer processor
generating MS scenarios comprises: for each m from 1 to M,
generating a sample, having index m, of a vector of normal random
variables represented by .XI.; for each m from 1 to M and for each
s from 1 to S, generating a random sample, having index ms, of
.DELTA.Y from a conditional distribution of .DELTA.Y derived from
the sample of the vector .XI. having index m and from at least one
of the one or more co-variance matrices, .DELTA.Y being an
increment of Y; computing said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at the first time
horizon based on a previous value of X.sub.m, the at least one
function associated with X, and the sample having index m of the
vector .XI., and wherein Y.sub.ms is calculated as a value of Y at
the first time horizon based on a previous value of Y.sub.ms, the
at least one function associated with Y, and the random sample
having index ms of .DELTA.Y, and wherein if said first time horizon
comprises exactly one time step, said previous value of X.sub.m and
Y.sub.ms is the start value associated with X and Y respectively,
for all m from 1 to M, and for all s from 1 to S; for each of the
MS scenarios defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, analytically deriving a conditional loss distribution
F.sub.X.sub.m.sub.Y.sub.ms to generate said MS conditional loss
distributions for said first time horizon; and producing the
unconditional loss distribution {circumflex over (F)} for said
first time horizon as a mixture of the MS conditional loss
distributions for said first time horizon; and providing, by the at
least one computer processor, the unconditional loss distribution
{circumflex over (F)} for said first time horizon for calculating
one or more risk measures from said unconditional loss distribution
{circumflex over (F)}, said one or more risk measures for use in
evaluating risk associated with said portfolio.
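The compound sampling loop recited in claim 1 can be sketched in code. The sketch below is illustrative only: it assumes Brownian-motion models for both X and Y (so X at the horizon is its start value plus the sample of .XI., and Y is its start value plus .DELTA.Y), small hypothetical dimensions, and an arbitrary joint covariance matrix; none of these inputs come from the claims.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions and inputs (not from the claims): two market
# risk factors, two systemic credit drivers, one time step to the horizon.
n_x, n_y = 2, 2
M, S = 4, 3                        # M market samples, S credit samples each
x0 = np.zeros(n_x)                 # start values for X
y0 = np.zeros(n_y)                 # start values for Y
# Joint covariance of (Xi, dY); Xi drives X, dY is the increment of Y.
cov = np.array([[1.0, 0.3, 0.2, 0.1],
                [0.3, 1.0, 0.1, 0.2],
                [0.2, 0.1, 1.0, 0.4],
                [0.1, 0.2, 0.4, 1.0]])
c_xx = cov[:n_x, :n_x]
c_xy = cov[:n_x, n_x:]
c_yy = cov[n_x:, n_x:]

# Conditional distribution of dY given Xi (standard Gaussian conditioning).
beta = c_xy.T @ np.linalg.inv(c_xx)
cond_cov = c_yy - beta @ c_xy
chol_x = np.linalg.cholesky(c_xx)
chol_cond = np.linalg.cholesky(cond_cov)

scenarios = []
for m in range(M):
    xi = chol_x @ rng.standard_normal(n_x)       # one sample of Xi per m
    x_m = x0 + xi                                # X at the horizon (BM model)
    for s in range(S):
        # dY sampled conditionally on the m-th Xi sample.
        dy = beta @ xi + chol_cond @ rng.standard_normal(n_y)
        y_ms = y0 + dy
        scenarios.append((x_m, y_ms))

assert len(scenarios) == M * S    # MS scenarios, as in claim 1
```

With M market-risk-factor samples and S conditional credit-driver samples per market sample, the nested loop yields the MS scenarios (X.sub.m, Y.sub.ms); the conditional distribution of .DELTA.Y given .XI. follows from the standard Gaussian conditioning formulas applied to the joint covariance matrix.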
2. The method of claim 1, further comprising: calculating said one
or more risk measures from said unconditional loss distribution
{circumflex over (F)}; and at least one of storing said one or more
risk measures in said at least one memory or outputting said one or
more risk measures.
3. The method of claim 1, wherein said first time horizon comprises
k time steps, each of said k time steps ending at time t.sub.k,
where k>1; wherein at least one of said one or more co-variance
matrices is associated with a k-th time step; wherein said method
further comprises, for each time step j, for j from 1 to k-1,
performing the following acts prior to said act of the at least one
computer processor generating MS scenarios: for each m from 1 to M,
generating a sample, having index m, of a vector of normal random
variables represented by .XI.; for each m from 1 to M and for each
s from 1 to S, generating a random sample, having index ms, of
.DELTA.Y from a conditional distribution of .DELTA.Y derived from
the sample of the vector .XI. having index m and from at least one
of the one or more co-variance matrices, .DELTA.Y being an
increment of Y; computing said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at time t.sub.j
based on a value of X.sub.m at time t.sub.j-1, the at least one
function associated with X, and the sample having index m of the
vector .XI., wherein Y.sub.ms is calculated as a value of Y at time
t.sub.j based on a value of Y.sub.ms at time t.sub.j-1, the at
least one function associated with Y, and the random sample having
index ms of .DELTA.Y, and wherein said value of X.sub.m and
Y.sub.ms at time t.sub.0 is the start value associated with X and Y
respectively, for all m from 1 to M, and for all s from 1 to S; and
wherein said method further comprises setting said previous value
of X.sub.m and Y.sub.ms for use in calculating X.sub.m and Y.sub.ms
at the first time horizon to the value of X.sub.m and Y.sub.ms at
time t.sub.k-1 respectively, for all m from 1 to M, and for all s
from 1 to S.
4. The method of claim 1, wherein said one or more risk measures
comprise at least one risk measure selected from the group
consisting of: a mean of said unconditional loss distribution
{circumflex over (F)}, a variance of said unconditional loss
distribution {circumflex over (F)}, a value at risk equaling a
specified p-quantile of said unconditional loss distribution
{circumflex over (F)}, an unexpected loss comprising a value at
risk equaling a specified p-quantile less a mean of said
unconditional loss distribution {circumflex over (F)}, and an
expected shortfall comprising an expected value of losses that
exceed a specified p-quantile of said unconditional loss
distribution {circumflex over (F)}.
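The risk measures enumerated in claim 4 can all be read off an empirical draw from the unconditional loss distribution. A minimal sketch, using a hypothetical lognormal loss sample in place of a real draw from {circumflex over (F)}:

```python
import numpy as np

# Illustrative loss sample standing in for the unconditional loss
# distribution F-hat; the lognormal parameters are arbitrary.
rng = np.random.default_rng(2)
losses = rng.lognormal(mean=3.0, sigma=0.5, size=100_000)
p = 0.99

mean = losses.mean()                    # mean of the distribution
variance = losses.var(ddof=1)           # variance of the distribution
var_p = np.quantile(losses, p)          # value at risk: the p-quantile
unexpected_loss = var_p - mean          # VaR less the mean
tail = losses[losses > var_p]
expected_shortfall = tail.mean()        # mean of losses beyond the p-quantile
```

For a mixture distribution, the same quantities would be computed from the pooled MS conditional samples or directly from the mixture CDF.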
5. The method of claim 1, wherein X comprises at least one process
each selected from the group consisting of: a Brownian motion with
drift, a Brownian motion without drift, an Ornstein-Uhlenbeck
process, a Hull-White process, a Geometric Brownian motion process,
and a Black-Karasinski process.
6. The method of claim 1, wherein Y comprises a Brownian motion
process, such that each .DELTA.Y is normally distributed.
7. The method of claim 1, wherein the conditional distribution of
.DELTA.Y derived from the sample of the vector .XI. having index m
and from the at least one of the one or more co-variance matrices
is represented by a mean vector and at least one second co-variance
matrix.
8. The method of claim 1, wherein said analytically deriving the
conditional loss distribution F.sub.X.sub.m.sub.Y.sub.ms comprises
employing at least one technique selected from the group consisting
of: Law of Large Numbers, Central Limit Theorem, convolution and
Fast Fourier Transforms.
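As one example of the analytic derivation in claim 8, the Central Limit Theorem gives a normal approximation to the conditional loss distribution when, conditional on a scenario (X.sub.m, Y.sub.ms), obligor defaults are independent Bernoulli events. The default probabilities and exposures below are hypothetical:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical portfolio: conditional default probabilities p_i and
# exposures e_i for one scenario; these inputs are illustrative only.
p = np.array([0.02, 0.05, 0.01, 0.03])
e = np.array([100.0, 50.0, 200.0, 80.0])

# CLT approximation: conditional on the scenario, the loss is a sum of
# independent Bernoulli losses, hence approximately normal.
mean = float(np.sum(p * e))
var = float(np.sum(p * (1 - p) * e ** 2))

def cond_loss_cdf(x: float) -> float:
    """Normal-approximation CDF of the conditional loss distribution."""
    z = (x - mean) / sqrt(var)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))
```

The unconditional distribution of claim 1 is then the equally weighted mixture of the MS per-scenario conditional CDFs evaluated this way; convolution or FFT techniques would replace the normal approximation when the portfolio is small or heterogeneous.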
9. The method of claim 1, wherein said identifying the first
parameter M and the second parameter S comprises: identifying an
acceptable variance level for a selected one of said one or more
risk measures; computing a variance of estimates of said selected
one risk measure; determining M and S such that said variance is
within said acceptable variance level.
10. The method of claim 9, wherein said selected one risk measure
comprises a value at risk, wherein l.sub.p is a p-quantile, and
wherein said variance is computed according to the following
formula having coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0:
$$\mathrm{Var}(\hat{l}_p) = \frac{1}{f(l_p)^2}\left(\frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS}\right).$$
11. The method of claim 10, wherein:
.nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
12. The method of claim 10, further comprising performing an
initial pilot simulation to estimate coefficients .nu..sub.1.sup.0,
.nu..sub.2.sup.0, and density f(l.sub.p) with M and S chosen to be
large.
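A pilot run as in claim 12 can estimate the coefficients .nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and .nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)] directly from the sample variance decomposition of the per-scenario conditional-CDF values. The sketch below uses synthetic values in place of real conditional-CDF evaluations:

```python
import numpy as np

rng = np.random.default_rng(1)

# Pilot simulation with M and S chosen large: F[m, s] stands in for the
# conditional CDF F_{X,Y}(l_p) evaluated under the (m, s)-th scenario.
# The values here are synthetic, illustrative data only.
M_pilot, S_pilot = 500, 200
x_effect = rng.normal(0.95, 0.01, size=(M_pilot, 1))   # systematic X part
F = np.clip(x_effect + rng.normal(0.0, 0.02, size=(M_pilot, S_pilot)), 0, 1)

row_means = F.mean(axis=1)              # estimates of E[F | X_m]
nu1 = row_means.var(ddof=1)             # Var of conditional means  -> nu_1^0
nu2 = F.var(axis=1, ddof=1).mean()      # mean of conditional vars  -> nu_2^0
```

With a density estimate f(l.sub.p) (e.g., from a kernel estimator on the pilot losses), these coefficients plug into the variance formula of claim 10 to size M and S for the production run.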
13. The method of claim 1, wherein said identifying the first
parameter M and the second parameter S comprises: identifying a
time window available for said simulation of length T; and wherein
M and S are identified by solving an optimization problem.
14. The method of claim 13, wherein said one or more risk measures
comprise a value at risk, wherein c.sub.M is a processing time for
each market risk factor sample, wherein c.sub.S is a processing
time for each systemic credit driver sample, wherein
.nu..sub.1.sup.0 and .nu..sub.2.sup.0 are coefficients, and wherein
said optimization problem comprises:
$$\min_{M,S}\ \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} \quad \text{s.t.} \quad c_M M + c_S MS \le T, \quad M \ge 1, \quad S \ge 1.$$
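Because the objective in claim 14 decreases in both M and S, the budget constraint binds at the optimum, which admits a closed form: substituting M = T/(c.sub.M + c.sub.S S) reduces the problem to one variable in S, minimized at S* = sqrt(.nu..sub.2.sup.0 c.sub.M / (.nu..sub.1.sup.0 c.sub.S)). The sketch below works through this; all numbers are purely illustrative:

```python
from math import sqrt

# Illustrative coefficient and cost values (a pilot run as in claim 16
# would supply these); all numbers here are hypothetical.
nu1, nu2 = 4.0e-4, 9.0e-4   # variance coefficients nu_1^0, nu_2^0
c_m, c_s = 1.0, 0.05        # per-sample costs c_M, c_S (seconds)
T = 600.0                   # available time budget (seconds)

# At the binding budget c_M*M + c_S*M*S = T, the objective becomes
# (nu1 + nu2/S) * (c_M + c_S*S) / T, minimized at
# S* = sqrt(nu2 * c_M / (nu1 * c_S)).
s_star = max(1.0, sqrt(nu2 * c_m / (nu1 * c_s)))
m_star = max(1.0, T / (c_m + c_s * s_star))

S = max(1, round(s_star))
M = max(1, int(T / (c_m + c_s * S)))   # round down to respect the budget
assert c_m * M + c_s * M * S <= T
```

The rounding step keeps the integer solution feasible; for these inputs, sqrt(0.0009 * 1.0 / (0.0004 * 0.05)) = sqrt(45) rounds to S = 7, and M = 444 exhausts most of the 600-second budget.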
15. The method of claim 14, wherein:
.nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
16. The method of claim 14, further comprising performing an
initial pilot simulation to estimate coefficients .nu..sub.1.sup.0
and .nu..sub.2.sup.0 with M and S chosen to be large.
17. A system for generating an integrated market and credit loss
distribution for the purpose of calculating one or more risk
measures associated with a portfolio of instruments by performing a
simulation on a computer, wherein said computer comprises: at least
one processor configured to execute at least a compound risk factor
sampling module and a risk measure module; at least one memory
coupled to the at least one processor; and at least one database;
wherein the at least one processor is configured to: identify at
least a first time horizon for said simulation; receive data
identifying X, the data identifying X storable in the at least one
database, wherein X is a vector of scalar-valued market risk factor
processes, each market risk factor process defined by a start
value, at least one function representing a model, and zero or more
parameters for the model; receive data identifying Y, the data
identifying Y storable in the at least one database, wherein Y is a
vector of scalar-valued credit driver processes, each credit driver
process defined by a start value, at least one function
representing a model, and zero or more parameters for the model;
receive data comprising one or more co-variance matrices that
define the joint evolution of X and Y over said first time horizon,
the data comprising the one or more co-variance matrices storable
in the at least one database; identify a first parameter M wherein
M>0 and a second parameter S wherein S>1; wherein M defines a
desired number of market risk factor samples and S defines a
desired number of systemic credit driver samples for each of M
market risk factor samples; generate MS conditional loss
distributions for said first time horizon to compute an
unconditional loss distribution {circumflex over (F)} for said
first time horizon by performing acts comprising: generating MS
scenarios, said MS scenarios defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S; wherein said act of generating MS scenarios comprises: for each
m from 1 to M, generating a sample, having index m, of a vector of
normal random variables represented by .XI.; for each m from 1 to M
and for each s from 1 to S, generating a random sample, having
index ms, of .DELTA.Y from a conditional distribution of .DELTA.Y
derived from the sample of the vector .XI. having index m and from
at least one of the one or more co-variance matrices, .DELTA.Y
being an increment of Y; computing said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at the first time
horizon based on a previous value of X.sub.m, the at least one
function associated with X, and the sample having index m of the
vector .XI., and wherein Y.sub.ms is calculated as a value of Y at
the first time horizon based on a previous value of Y.sub.ms, the
at least one function associated with Y, and the random sample
having index ms of .DELTA.Y, and wherein if said first time horizon
comprises exactly one time step, said previous value of X.sub.m and
Y.sub.ms is the start value associated with X and Y respectively,
for all m from 1 to M, and for all s from 1 to S; for each of the
MS scenarios defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, analytically deriving a conditional loss distribution
F.sub.X.sub.m.sub.Y.sub.ms to generate said MS conditional loss
distributions for said first time horizon; and producing, by the at
least one processor, the unconditional loss distribution
{circumflex over (F)} for said first time horizon as a mixture of
the MS conditional loss distributions for said first time horizon;
and provide the unconditional loss distribution {circumflex over
(F)} for said first time horizon for calculating one or more risk
measures from said unconditional loss distribution {circumflex over
(F)}, said one or more risk measures for use in evaluating risk
associated with said portfolio.
18. The system of claim 17, wherein the at least one processor is
further configured to: calculate said one or more risk measures
from said unconditional loss distribution {circumflex over (F)};
and at least one of store said one or more risk measures in said at
least one memory or output said one or more risk measures.
19. The system of claim 17, wherein said first time horizon
comprises k time steps, each of said k time steps ending at time
t.sub.k, where k>1; wherein at least one of said one or more
co-variance matrices is associated with a k-th time step; wherein
the at least one processor is further configured to, for each time
step j, for j from 1 to k-1, prior to generating MS scenarios: for
each m from 1 to M, generate a sample, having index m, of a vector
of normal random variables represented by .XI.; for each m from 1
to M and for each s from 1 to S, generate a random sample, having
index ms, of .DELTA.Y from a conditional distribution of .DELTA.Y
derived from the sample of the vector .XI. having index m and from
at least one of the one or more co-variance matrices, .DELTA.Y
being an increment of Y; compute said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at time t.sub.j
based on a value of X.sub.m at time t.sub.j-1, the at least one
function associated with X, and the sample having index m of the
vector .XI., wherein Y.sub.ms is calculated as a value of Y at time
t.sub.j based on a value of Y.sub.ms at time t.sub.j-1, the at
least one function associated with Y, and the random sample having
index ms of .DELTA.Y, and wherein said value of X.sub.m and
Y.sub.ms at time t.sub.0 is the start value associated with X and Y
respectively, for all m from 1 to M, and for all s from 1 to S; and
wherein the at least one processor is further configured to set
said previous value of X.sub.m and Y.sub.ms for use in calculating
X.sub.m and Y.sub.ms at the first time horizon to the value of
X.sub.m and Y.sub.ms at time t.sub.k-1 respectively, for all m from
1 to M, and for all s from 1 to S.
20. The system of claim 17, wherein said one or more risk measures
comprise at least one risk measure selected from the group
consisting of: a mean of said unconditional loss distribution
{circumflex over (F)}, a variance of said unconditional loss
distribution {circumflex over (F)}, a value at risk equaling a
specified p-quantile of said unconditional loss distribution
{circumflex over (F)}, an unexpected loss comprising a value at
risk equaling a specified p-quantile less a mean of said
unconditional loss distribution {circumflex over (F)}, and an
expected shortfall comprising an expected value of losses that
exceed a specified p-quantile of said unconditional loss
distribution {circumflex over (F)}.
21. The system of claim 17, wherein X comprises at least one
process each selected from the group consisting of: a Brownian
motion with drift, a Brownian motion without drift, an
Ornstein-Uhlenbeck process, a Hull-White process, a Geometric
Brownian motion process, and a Black-Karasinski process.
22. The system of claim 17, wherein Y comprises a Brownian motion
process, such that each .DELTA.Y is normally distributed.
23. The system of claim 17, wherein the conditional distribution of
.DELTA.Y derived from the sample of the vector .XI. having index m
and from the at least one of the one or more co-variance matrices
is represented by a mean vector and at least one second co-variance
matrix.
24. The system of claim 17, wherein the at least one processor is
further configured to analytically derive the conditional loss
distribution F.sub.X.sub.m.sub.Y.sub.ms by employing at least one
technique selected from the group consisting of: Law of Large
Numbers, Central Limit Theorem, convolution and Fast Fourier
Transforms.
25. The system of claim 17, wherein the at least one processor is
configured to identify the first parameter M and the second
parameter S by: identifying an acceptable variance level for a
selected one of said one or more risk measures; computing a
variance of estimates of said selected one risk measure;
determining M and S such that said variance is within said
acceptable variance level.
26. The system of claim 25, wherein said selected one risk measure
comprises a value at risk, wherein l.sub.p is a p-quantile, and
wherein said variance is computed according to the following
formula having coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0:
$$\mathrm{Var}(\hat{l}_p) = \frac{1}{f(l_p)^2}\left(\frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS}\right).$$
27. The system of claim 26, wherein:
.nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
28. The system of claim 26, wherein the at least one processor is
further configured to perform an initial pilot simulation to
estimate coefficients .nu..sub.1.sup.0, .nu..sub.2.sup.0, and
density f(l.sub.p) with M and S chosen to be large.
29. The system of claim 17, wherein the at least one processor is
configured to identify the first parameter M and the second
parameter S by: identifying a time window available for said
simulation of length T; and wherein M and S are identified by
solving an optimization problem.
30. The system of claim 29, wherein said one or more risk measures
comprise a value at risk, wherein c.sub.M is a processing time for
each market risk factor sample, wherein c.sub.S is a processing
time for each systemic credit driver sample, wherein
.nu..sub.1.sup.0 and .nu..sub.2.sup.0 are coefficients, and wherein
said optimization problem comprises:
$$\min_{M,S}\ \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} \quad \text{s.t.} \quad c_M M + c_S MS \le T, \quad M \ge 1, \quad S \ge 1.$$
31. The system of claim 30, wherein:
.nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
32. The system of claim 30, wherein the at least one processor is
further configured to perform an initial pilot simulation to
estimate coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0 with M
and S chosen to be large.
33. A non-transitory computer-readable medium upon which a set of
instructions are stored for execution on a computer, said computer
comprising at least one processor and at least one memory, said
non-transitory computer-readable medium comprising at least one
module configured to perform a method for generating an integrated
market and credit loss distribution for the purpose of calculating
one or more risk measures associated with a portfolio of
instruments by performing a simulation, said method comprising:
identifying at least a first time horizon for said simulation;
receiving data identifying X, wherein X is a vector of
scalar-valued market risk factor processes, each market risk factor
process defined by a start value, at least one function
representing a model, and zero or more parameters for the model;
receiving data identifying Y, wherein Y is a vector of
scalar-valued credit driver processes, each credit driver process
defined by a start value, at least one function representing a
model, and zero or more parameters for the model; receiving data
comprising one or more co-variance matrices that define the joint
evolution of X and Y over said first time horizon; identifying a
first parameter M wherein M>0 and a second parameter S wherein
S>1; wherein M defines a desired number of market risk factor
samples and S defines a desired number of systemic credit driver
samples for each of M market risk factor samples; generating MS
conditional loss distributions for said first time horizon to
compute an unconditional loss distribution {circumflex over (F)}
for said first time horizon by performing acts comprising:
generating MS scenarios, said MS scenarios defined by MS sets of X
and Y values (X.sub.m,Y.sub.ms) for all m from 1 to M, and for all
s from 1 to S; wherein said act of generating MS scenarios
comprises: for each m from 1 to M, generating a sample, having
index m, of a vector of normal random variables represented by
.XI.; for each m from 1 to M and for each s from 1 to S, generating
a random sample, having index ms, of .DELTA.Y from a conditional
distribution of .DELTA.Y derived from the sample of the vector .XI.
having index m and from at least one of the one or more co-variance
matrices, .DELTA.Y being an increment of Y; computing said MS sets
of X and Y values (X.sub.m,Y.sub.ms) for all m from 1 to M, and for
all s from 1 to S, wherein X.sub.m is calculated as a value of X at
the first time horizon based on a previous value of X.sub.m, the at
least one function associated with X, and the sample having index m
of the vector .XI., and wherein Y.sub.ms is calculated as a value
of Y at the first time horizon based on a previous value of
Y.sub.ms, the at least one function associated with Y, and the
random sample having index ms of .DELTA.Y, and wherein if said
first time horizon comprises exactly one time step, said previous
value of X.sub.m and Y.sub.ms is the start value associated with X
and Y respectively, for all m from 1 to M, and for all s from 1 to
S; for each of the MS scenarios defined by MS sets of X and Y
values (X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from
1 to S, analytically deriving a conditional loss distribution
F.sub.X.sub.m.sub.Y.sub.ms to generate said MS conditional loss
distributions for said first time horizon; and producing the
unconditional loss distribution {circumflex over (F)} for said
first time horizon as a mixture of the MS conditional loss
distributions for said first time horizon; and providing the
unconditional loss distribution {circumflex over (F)} for said
first time horizon for calculating one or more risk measures from
said unconditional loss distribution {circumflex over (F)}, said
one or more risk measures for use in evaluating risk associated
with said portfolio.
34. The non-transitory computer-readable medium of claim 33, the
method further comprising: calculating said one or more risk
measures from said unconditional loss distribution {circumflex over
(F)}; and at least one of storing said one or more risk measures in
said at least one memory or outputting said one or more risk
measures.
35. The non-transitory computer-readable medium of claim 33,
wherein said first time horizon comprises k time steps, each of
said k time steps ending at time t.sub.k, where k>1; wherein at
least one of said one or more co-variance matrices is associated
with a k-th time step; wherein said method further comprises, for
each time step j, for j from 1 to k-1, performing the following
acts prior to said act of generating MS scenarios: for each m from 1
to M, generating a sample, having index m, of a vector of normal
random variables represented by .XI.; for each m from 1 to M and
for each s from 1 to S, generating a random sample, having index
ms, of .DELTA.Y from a conditional distribution of .DELTA.Y derived
from the sample of the vector .XI. having index m and from at least
one of the one or more co-variance matrices, .DELTA.Y being an
increment of Y; computing said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at time t.sub.j
based on a value of X.sub.m at time t.sub.j-1, the at least one
function associated with X, and the sample having index m of the
vector .XI., wherein Y.sub.ms is calculated as a value of Y at time
t.sub.j based on a value of Y.sub.ms at time t.sub.j-1, the at
least one function associated with Y, and the random sample having
index ms of .DELTA.Y, and wherein said value of X.sub.m and
Y.sub.ms at time t.sub.0 is the start value associated with X and Y
respectively, for all m from 1 to M, and for all s from 1 to S; and
wherein said method further comprises setting said previous value
of X.sub.m and Y.sub.ms for use in calculating X.sub.m and Y.sub.ms
at the first time horizon to the value of X.sub.m and Y.sub.ms at
time t.sub.k-1 respectively, for all m from 1 to M, and for all s
from 1 to S.
36. The non-transitory computer-readable medium of claim 33,
wherein said one or more risk measures comprise at least one risk
measure selected from the group consisting of: a mean of said
unconditional loss distribution {circumflex over (F)}, a variance
of said unconditional loss distribution {circumflex over (F)}, a
value at risk equaling a specified p-quantile of said unconditional
loss distribution {circumflex over (F)}, an unexpected loss
comprising a value at risk equaling a specified p-quantile less a
mean of said unconditional loss distribution {circumflex over (F)},
and an expected shortfall comprising an expected value of losses
that exceed a specified p-quantile of said unconditional loss
distribution {circumflex over (F)}.
37. The non-transitory computer-readable medium of claim 33,
wherein X comprises at least one process each selected from the
group consisting of: a Brownian motion with drift, a Brownian
motion without drift, an Ornstein-Uhlenbeck process, a Hull-White
process, a Geometric Brownian motion process, and a
Black-Karasinski process.
38. The non-transitory computer-readable medium of claim 33,
wherein Y comprises a Brownian motion process, such that each
.DELTA.Y is normally distributed.
39. The non-transitory computer-readable medium of claim 33,
wherein the conditional distribution of .DELTA.Y derived from the
sample of the vector .XI. having index m and from the at least one
of the one or more co-variance matrices is represented by a mean vector
and at least one second co-variance matrix.
40. The non-transitory computer-readable medium of claim 33,
wherein said analytically deriving the conditional loss
distribution F.sub.X.sub.m.sub.Y.sub.ms comprises employing at
least one technique selected from the group consisting of: Law of
Large Numbers, Central Limit Theorem, convolution and Fast Fourier
Transforms.
41. The non-transitory computer-readable medium of claim 33,
wherein said identifying the first parameter M and the second
parameter S comprises: identifying an acceptable variance level for
a selected one of said one or more risk measures; computing a
variance of estimates of said selected one risk measure;
determining M and S such that said variance is within said
acceptable variance level.
42. The non-transitory computer-readable medium of claim 41,
wherein said selected one risk measure comprises a value at risk,
wherein l.sub.p is a p-quantile, and wherein said variance is
computed according to the following formula having coefficients
.nu..sub.1.sup.0 and .nu..sub.2.sup.0:
$$\mathrm{Var}(\hat{l}_p) = \frac{1}{f(l_p)^2}\left(\frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS}\right).$$
43. The non-transitory computer-readable medium of claim 42,
wherein: .nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
44. The non-transitory computer-readable medium of claim 42, the
method further comprising performing an initial pilot simulation to
estimate coefficients .nu..sub.1.sup.0, .nu..sub.2.sup.0, and
density f(l.sub.p) with M and S chosen to be large.
45. The non-transitory computer-readable medium of claim 33,
wherein said identifying the first parameter M and the second
parameter S comprises: identifying a time window available for said
simulation of length T; and wherein M and S are identified by
solving an optimization problem.
46. The non-transitory computer-readable medium of claim 45,
wherein said one or more risk measures comprise a value at risk,
wherein c.sub.M is a processing time for each market risk factor
sample, wherein c.sub.S is a processing time for each systemic
credit driver sample, wherein .nu..sub.1.sup.0 and .nu..sub.2.sup.0
are coefficients, and wherein said optimization problem comprises:
$$\min_{M,S}\ \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} \quad \text{s.t.} \quad c_M M + c_S MS \le T, \quad M \ge 1, \quad S \ge 1.$$
47. The non-transitory computer-readable medium of claim 46,
wherein: .nu..sub.1.sup.0=Var(E[F.sub.X,Y(l.sub.p)|X]) and
.nu..sub.2.sup.0=E[Var(F.sub.X,Y(l.sub.p)|X)].
48. The non-transitory computer-readable medium of claim 46, the
method further comprising performing an initial pilot simulation to
estimate coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0 with M
and S chosen to be large.
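By way of a non-limiting illustration of the budget-constrained selection of M and S recited in claims 45-48, the following sketch performs a brute-force search over integer values of M, taking S as the largest value the remaining budget allows. The numerical values of the coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0 and of the costs c.sub.M and c.sub.S are hypothetical placeholders that, in practice, would come from a pilot simulation.

```python
def choose_m_s(v1, v2, c_m, c_s, t_budget, m_max=100_000):
    """Minimize v1/M + v2/(M*S) subject to c_m*M + c_s*M*S <= t_budget,
    M >= 1, S >= 1, by searching over integer M; for each M the best
    feasible S is the largest one the remaining budget allows."""
    best = None
    for m in range(1, m_max + 1):
        remaining = t_budget - c_m * m
        if remaining < c_s * m:          # cannot afford even S = 1
            break
        s = int(remaining // (c_s * m))  # largest feasible integer S
        var = v1 / m + v2 / (m * s)      # variance up to the 1/f(l_p)^2 factor
        if best is None or var < best[0]:
            best = (var, m, s)
    return best

# Hypothetical pilot-simulation estimates: market risk factor samples
# are far more costly to process than systemic credit driver samples.
variance, M, S = choose_m_s(v1=4.0, v2=1.0, c_m=10.0, c_s=0.1, t_budget=1000.0)
```

Because the market-sample cost dominates here, the search concentrates the budget in relatively few market samples, each paired with several conditionally generated systemic credit driver samples.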
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is a divisional of prior U.S. patent
application Ser. No. 12/026,781, filed on Feb. 6, 2008, the
entirety of which is hereby incorporated by reference.
TECHNICAL FIELD
[0002] Embodiments described herein relate generally to systems and
methods for measuring risk associated with a portfolio, and in
particular, to systems and methods for compound risk factor
sampling with integrated market and credit risk for use in
determining a portfolio loss distribution.
BACKGROUND
[0003] Financial institutions, resource-based corporations, trading
organizations, governments, and others may employ risk management
systems and methods to measure risk associated with portfolios
comprising credit-risky instruments, such as for example, the
trading book of a bank. Accurately evaluating the risk associated
with a portfolio of instruments may assist in the management of the
portfolio. For example, it may allow the identification of
opportunities to change the composition of the portfolio in order to
reduce the overall risk or to achieve an acceptable level of risk.
[0004] Evaluating the risk associated with a portfolio is a
non-trivial task, as instruments (e.g. securities, loans, corporate
bonds, credit derivatives, etc.) in the portfolio can be of varying
complexity, and may be subject to different types of risk. An
instrument may lose value due to adverse changes in market risk
factors, for example. An instrument may also lose value due to
changes in the credit state (e.g. a downgrade) of the counterparty
associated with the instrument, for example. Consider, by way of
illustration, that the price of a bond generally declines as
interest rates rise. Interest rates are examples of market risk
factors. Further examples of market risk factors may include equity
indices, foreign exchange rates, and commodity prices.
[0005] Also consider, by way of illustration, that an AA-rated
counterparty associated with an instrument of the portfolio may
transition to a credit state with a lower rating (e.g., B) or one
with a higher rating (e.g., AAA), resulting in an accompanying
decrease or increase, respectively, in the values of its financial
obligations. These changes may, in turn, affect the values of the
associated instrument. In an extreme case, a counterparty may
default, typically leaving creditors able to recover only some
fraction of the value of their instruments with the
counterparty.
[0006] Credit state migrations (e.g. transitions to different
credit states) may be determined by evaluating movements of a
creditworthiness index calculated for a specific counterparty. The
creditworthiness index may be based on values of a number of
systemic credit drivers that generally affect all counterparties
and of an idiosyncratic credit risk factor associated with the
specific counterparty.
[0007] The systemic credit drivers may comprise macroeconomic
variables or indices, such as for example, gross domestic product
(GDP), inflation rates, and country/industry indices. Accordingly,
these systemic credit drivers generally provide a credit
correlation between different counterparty names in a portfolio. In
contrast, each idiosyncratic credit risk factor is a latent
variable independently associated with a specific counterparty name
in the portfolio. Accordingly, these idiosyncratic credit risk
factors may also be referred to as counterparty-specific credit
risk factors herein.
[0008] In general, changes to market risk factors and systemic
credit drivers tend to be correlated (i.e. in statistical terms,
the market risk factors and systemic credit drivers are
co-dependent, not independent). Accordingly, many modern risk
management systems and methods may be expected to employ
methodologies that integrate market and credit risk (e.g. by
ensuring that such co-dependence is reflected in the computation of
risk measures associated with a portfolio) in order to more
accurately assess the financial risks associated with portfolios of
interest. Furthermore, approaches that integrate market and credit
risk have been validated by the advent of the Basel II international
banking regulations.
[0009] To evaluate risk associated with a portfolio, at least some
risk management systems and methods perform simulations in which a
portfolio of instruments evolves under a set of scenarios (e.g. a
set of possible future outcomes, each of which may have an
associated probability of occurrence) over some specified time
horizon. The losses (or gains) that a portfolio of interest may
incur over all possible scenarios might be represented by a loss
distribution. With knowledge of the loss distribution associated
with the portfolio, it is possible to compute a risk measure for
the portfolio of interest.
[0010] However, as it is not possible to determine the exact loss
distribution analytically, it may be approximated by an empirical
distribution. By way of simulation, under each scenario, an
individual loss sample may be generated. The scenario used to
generate a given loss sample may represent a certain specific set
of market and credit conditions, identified by particular sampled
values of market risk factors, systemic credit drivers and/or
idiosyncratic credit risk factors defined for the respective
scenario.
[0011] The loss samples generated under a plurality of scenarios
may be used to generate the empirical distribution that
approximates the actual loss distribution. Accordingly, it will be
understood that the larger the number of scenarios used in the
simulation and thus the larger the number of loss samples
generated, the more accurate the approximation of the actual loss
distribution will be.
[0012] Estimates of risk measures associated with the portfolio may
then be computed based on the empirical distribution that
approximates the actual loss distribution. In this regard, the
quality of the estimated measurement of risk will also depend on
the number of loss samples generated. It will be understood that
the individual loss samples may also be referred to collectively as
a "loss sample", and the number of individual loss samples may be
referred to as the size of the "loss sample".
[0013] Some known risk management systems generate loss samples
according to a methodology that may be classified as a "simple
sampling" approach. In accordance with a "simple sampling"
approach, to generate a given loss sample, a corresponding market
risk factor sample, systemic credit driver sample, and
idiosyncratic credit risk factor sample is generated. In order to
integrate market and credit risk, the market risk factors and
systemic credit drivers are assumed to evolve in accordance with a
pre-specified co-dependence structure. It will be understood that
in order to obtain N loss samples using this approach, N market
risk factor samples, N systemic credit driver samples, and N
idiosyncratic credit risk factor samples will be generated in the
simulation for a portfolio of interest. Accordingly, the "simple
sampling" approach may be considered to be an example of a "brute
force" approach to generating loss samples for the portfolio in the
simulation.
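A rough sketch of the "simple sampling" approach described above follows; the dimensions, the co-variance matrix, and the placeholder loss function are hypothetical stand-ins for the full pricing and credit-transition pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 3 market risk factors (X), 2 systemic credit
# drivers (Y), 2 idiosyncratic credit risk factors (Z), N loss samples.
n_x, n_y, n_z, N = 3, 2, 2, 1000

# Pre-specified co-variance matrix for the joint (X, Y) sample, carrying
# the market/credit co-dependence (identity used purely as a placeholder).
sigma = np.eye(n_x + n_y)

losses = np.empty(N)
for n in range(N):
    # One market/systemic joint sample and one idiosyncratic sample per
    # loss sample: N samples of every risk factor type are generated.
    xy = rng.multivariate_normal(np.zeros(n_x + n_y), sigma)
    x, y = xy[:n_x], xy[n_x:]
    z = rng.standard_normal(n_z)
    # Placeholder loss function standing in for the pricing,
    # credit-transition and aggregation modules.
    losses[n] = x.sum() + y.sum() + z.sum()
```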
[0014] Some other known risk management systems generate loss
samples according to a methodology that may be classified as a
"two-tier" approach. In accordance with a "two-tier" approach, a
joint sample of market risk factors and systemic credit drivers is
combined with multiple samples of idiosyncratic credit risk factor
values to obtain multiple loss samples. In order to integrate
market and credit risk, the market risk factors and systemic credit
drivers are assumed to evolve in accordance with a pre-specified
co-dependence structure. The "two-tier" approach attempts to reduce
the number of market risk factor and systemic credit driver samples
needed to obtain N loss samples. However, it will be understood
that if joint samples of market risk factors and systemic credit
drivers are employed, where there is a need to consider a larger
number of samples of one type of risk factor (e.g. systemic credit
drivers), then a larger number of samples of the other type of risk
factor (e.g. market risk factors) will be required.
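The "two-tier" approach can be sketched similarly; here each joint market/systemic sample is reused across multiple idiosyncratic samples (all names and dimensions are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, n_z = 3, 2, 2          # hypothetical risk factor dimensions
n_joint, K = 100, 10             # N = n_joint * K loss samples

sigma = np.eye(n_x + n_y)        # placeholder co-dependence structure
losses = np.empty(n_joint * K)
for j in range(n_joint):
    # One joint sample of market risk factors and systemic credit drivers...
    xy = rng.multivariate_normal(np.zeros(n_x + n_y), sigma)
    x, y = xy[:n_x], xy[n_x:]
    for k in range(K):
        # ...combined with K independent idiosyncratic samples, so each
        # costly joint sample yields K loss samples.
        z = rng.standard_normal(n_z)
        losses[j * K + k] = x.sum() + y.sum() + z.sum()
```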
[0015] Yet other known risk management systems do not attempt to
integrate market and credit risk when evaluating risk associated
with a portfolio. For example, some known risk management systems
may derive a loss distribution analytically, ignoring the
correlation between changes in market risk factors and systemic
credit drivers that exists in reality.
SUMMARY
[0016] In one broad aspect, there is provided a
computer-implemented method for generating an integrated market and
credit loss distribution for the purpose of calculating one or more
risk measures associated with a portfolio of instruments by
performing a simulation, the method comprising at least the acts
of: generating N scenarios, said N scenarios defined by N sets of
X, Y, and Z values (X.sub.m,Y.sub.ms,Z.sub.msi) for all m from 1 to
M, for all s from 1 to S, and for all i from 1 to I, wherein X, Y
and Z comprise a market risk factor process, a systemic credit
driver process, and an idiosyncratic credit risk factor process,
respectively; and computing N simulated loss samples by simulating
the portfolio over the N scenarios over a first time horizon to
produce the integrated market and credit loss distribution over the
first time horizon; wherein said act of generating N scenarios
comprises: for each m from 1 to M, generating a sample, having
index m, of a vector .XI. of normal random variables; for each m
from 1 to M and for each s from 1 to S, generating a random sample,
having index ms, of .DELTA.Y from a conditional distribution of
.DELTA.Y derived from the sample of the vector .XI. having index m
and from a co-variance matrix, .DELTA.Y being an increment of Y;
for each m from 1 to M and for each s from 1 to S and for each i
from 1 to I, independently generating a random sample, having index
msi, of .DELTA.Z, .DELTA.Z being an increment of Z; and computing
said N sets of X, Y, and Z values (X.sub.m,Y.sub.ms,Z.sub.msi) for
all m from 1 to M, for all s from 1 to S, and for all i from 1 to
I, wherein X.sub.m is calculated as a value of X at the first time
horizon based on a previous value of X.sub.m, at least one function
associated with X, and the sample having index m of the vector
.XI., wherein Y.sub.ms is calculated as a value of Y at the first
time horizon based on a previous value of Y.sub.ms, a function
associated with Y, and the random sample having index ms of
.DELTA.Y, and wherein Z.sub.msi is calculated as a value of Z at
the first time horizon based on a previous value of Z.sub.msi, a
function associated with Z, and the random sample having index msi
of .DELTA.Z.
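The nested sampling structure of this aspect may be sketched, under simplifying assumptions, as follows: a conditional Gaussian stands in for the conditional distribution of .DELTA.Y given the market sample of .XI., and all dimensions and co-variance entries are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y = 2, 2     # hypothetical dims of Xi (market) and Delta-Y (systemic)
M, S, I = 4, 3, 2   # market, systemic, and idiosyncratic sample counts

# Hypothetical joint co-variance of (Xi, Delta-Y), block layout
# [[S_xx, S_xy], [S_yx, S_yy]]; the off-diagonal block carries the
# market/credit co-dependence.
cov = np.array([[1.0, 0.0, 0.5, 0.2],
                [0.0, 1.0, 0.1, 0.3],
                [0.5, 0.1, 1.0, 0.0],
                [0.2, 0.3, 0.0, 1.0]])
s_xx = cov[:n_x, :n_x]
s_xy = cov[:n_x, n_x:]
s_yy = cov[n_x:, n_x:]

# Standard Gaussian conditioning: law of Delta-Y given Xi = xi.
gain = s_xy.T @ np.linalg.inv(s_xx)
cond_cov = s_yy - gain @ s_xy

scenarios = []
for m in range(M):
    xi = rng.multivariate_normal(np.zeros(n_x), s_xx)      # sample, index m
    for s in range(S):
        # Conditional systemic credit driver increment, index ms.
        dy = rng.multivariate_normal(gain @ xi, cond_cov)
        for i in range(I):
            dz = rng.standard_normal()                     # increment, index msi
            scenarios.append((xi, dy, dz))
# N = M * S * I scenarios in total.
```

Each market sample is thus shared by S systemic credit driver samples, and each of those by I idiosyncratic samples, so the costly market-side work is amortized over many scenarios.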
[0017] In another broad aspect, there is provided a
computer-implemented method for generating an integrated market and
credit loss distribution for the purpose of calculating one or more
risk measures associated with a portfolio of instruments by
performing a simulation, the method comprising at least the acts
of: generating MS scenarios defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X and Y comprise a market risk factor process and a
systemic credit driver process, respectively; for each of the MS
scenarios, analytically deriving a conditional loss distribution
F.sub.X.sub.m.sub.Y.sub.ms to generate MS conditional loss
distributions for a first time horizon; and computing the integrated
market and credit loss distribution from the conditional loss
distributions for the first time horizon; wherein said act of
generating MS scenarios comprises: for each m from 1 to M,
generating a sample, having index m, of a vector .XI. of normal
random variables; for each m from 1 to M and for each s from 1 to
S, generating a random sample, having index ms, of .DELTA.Y from a
conditional distribution of .DELTA.Y derived from the sample of the
vector .XI. having index m and from a co-variance matrix, .DELTA.Y
being an increment of Y; computing said MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to
S, wherein X.sub.m is calculated as a value of X at a first time
horizon based on a previous value of X.sub.m, at least one function
associated with X, and the sample having index m of the vector
.XI., and wherein Y.sub.ms is calculated as a value of Y at the
first time horizon based on a previous value of Y.sub.ms, a
function associated with Y, and the random sample having index ms
of .DELTA.Y.
[0018] Other aspects, embodiments, and features are also disclosed
herein.
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] For a better understanding of the various embodiments
described herein and to show more clearly how they may be carried
into effect, reference will now be made, by way of example only, to
the accompanying drawings in which:
[0020] FIG. 1 shows an example loss histogram of an empirical loss
distribution;
[0021] FIG. 2 shows two example loss histograms of empirical loss
distributions for different sample sizes;
[0022] FIG. 3 shows an example block diagram of a loss sample
computation module for computing a particular loss sample
L.sub.n;
[0023] FIG. 4 shows an example block diagram of a risk factor
sampling module for generating risk factor samples;
[0024] FIG. 5 shows an example block diagram of a risk factor model
for defining the generation of a risk factor sample;
[0025] FIG. 6 shows an example block diagram of a risk factor model
module for use in a risk factor sampling module implementing a
known "simple sampling" approach to generating risk factor
samples;
[0026] FIG. 7 shows an example block diagram illustrating example
output of a risk factor sampling module comprising the risk factor
model module of FIG. 6;
[0027] FIG. 8 shows an example block diagram of a risk factor model
module for use in a risk factor sampling module implementing a
known "two-tiered" approach to generating risk factor samples;
[0028] FIG. 9 shows an example block diagram illustrating example
output of a risk factor sampling module comprising the risk factor
model module of FIG. 8;
[0029] FIG. 10 shows an example graphical representation of the
risk factor scenario structure underlying a resulting set of risk
factor samples generated according to a known "two-tiered" approach
to generating risk factor samples;
[0030] FIG. 11 shows an example block diagram illustrating how
certain market factor models may be applied in a simulation
performed in accordance with a known "two-tiered" approach;
[0031] FIG. 12 shows an example block diagram of a risk factor
model module for use in a risk factor sampling module implementing
compound risk factor sampling in accordance with at least one
embodiment;
[0032] FIG. 13 shows an example block diagram of a risk factor
sampling module comprising the risk factor model module of FIG. 12
in accordance with at least one embodiment;
[0033] FIG. 14A shows a flowchart diagram illustrating acts in a
method of generating one or more risk measures associated with a
portfolio of instruments by performing a simulation in accordance
with at least one embodiment;
[0034] FIG. 14B shows a flowchart diagram illustrating acts in a
method of generating one or more risk measures associated with a
portfolio of instruments by performing a simulation in accordance
with at least one other embodiment;
[0035] FIG. 14C shows a flowchart diagram illustrating acts in a
method of generating one or more risk measures associated with a
portfolio of instruments by performing a simulation in accordance
with at least one other embodiment;
[0036] FIG. 15 shows an example graphical representation of the
risk factor scenario structure underlying a resulting set of risk
factors samples generated according to compound risk factor
sampling in accordance with at least one embodiment;
[0037] FIG. 16 shows an example block diagram of a risk factor
simulation system for implementing compound risk factor sampling in
accordance with at least one embodiment;
[0038] FIG. 17 shows another example block diagram illustrating
outputs of a risk factor simulation system for implementing
compound risk factor sampling in accordance with at least one
embodiment; and
[0039] FIG. 18 shows another example graphical representation of
the risk factor scenario structure underlying a resulting set of
risk factor samples generated according to compound risk factor
sampling in accordance with at least one embodiment.
DETAILED DESCRIPTION
[0040] Specific details are set forth herein, in order to
facilitate understanding of various embodiments. However, it will
be understood by those of ordinary skill in the art that some
embodiments may be practiced without these specific details. In
other instances, well-known methods, procedures and components have
not been described in detail so as not to obscure the embodiments
described herein. Furthermore, details of the embodiments described
herein, which are provided by way of example, are not to be
considered as limiting the scope of the appended claims.
[0041] Embodiments described herein relate generally to risk
management systems and methods for evaluating risk associated with
a portfolio of instruments. Generally, the system (and modules)
described herein may be implemented in computer hardware and/or
software. The acts described herein are performed on a computer,
which comprises at least one processor and at least one memory, as
well as other components as will be understood by persons skilled
in the art. Accordingly, one or more modules may be configured to
perform acts described herein when executed on the computer (e.g.
by the at least one processor). Modules and associated data (e.g.
instructions, input data, output data, intermediate results
generated which may be permanently or temporarily stored) may be
stored in the at least one memory, which may comprise one or more
known memory or storage devices. The acts performed in respect of a
method in accordance with an embodiment described herein may be
provided as instructions, executable on a computer, on a
computer-readable storage medium. In some embodiments, the
computer-readable storage medium may comprise transmission-type
media.
[0042] It will also be understood that although reference may be
made to a "computer" herein, the "computer" may comprise multiple
computing devices, which may be communicatively coupled by one or
more network connections. In particular, one or more modules may be
distributed across multiple computing devices. It will also be
understood that certain functions depicted in the example
embodiments described herein as being performed by a given module
may instead be performed by one or more different modules or
otherwise integrated in the functions performed by one or more
different modules.
[0043] Risk management systems and methods typically evaluate risk
associated with a portfolio of instruments by computing one or more
risk measures derived from characteristics of a loss distribution F
associated with the portfolio. For example, these characteristics
of F may comprise the mean of the loss distribution, the variance
of the loss distribution and/or a specified quantile value of the
loss distribution. Some regulations (e.g. Basel II) may require
that a bank hold sufficient capital to offset a maximum loss that
can occur with a given probability p, consistent with the bank's
desired credit rating. This loss, known as the Value-at-Risk (VaR),
equals the p-th quantile l.sub.p of the portfolio loss distribution
F, where l.sub.p=F.sup.-1(p).
[0044] Due to the complex relationships among, for example, asset
prices, exposures, and credit state migrations that affect the
instruments of a portfolio, the exact distribution F cannot
generally be derived analytically. Rather, it may be approximated
by an empirical loss distribution {circumflex over (F)}, which may
be obtained by simulating the portfolio under a set of possible
future outcomes, or scenarios, to obtain a set of N loss samples to
derive the empirical loss distribution. Risk measures may then be
computed based on the empirical loss distribution {circumflex over
(F)}, which approximates the actual distribution F.
[0045] Referring now to FIG. 1, there is shown an example loss
histogram 10 of an empirical loss distribution constructed from a
set of N loss samples 12 (i.e. L.sub.1 to L.sub.N) for use in
deriving one or more estimates of relevant characteristics such as,
for example, an estimated mean 14 ({circumflex over (.mu.)}), an
estimated variance 16 ({circumflex over (.sigma.)}.sup.2) and an
estimated p-th quantile 18 ({circumflex over (l)}.sub.p). Other
relevant characteristics may also be derived from the empirical
loss distribution, including an estimated unexpected loss (i.e.
difference between a p-th quantile and a mean), and an estimated
expected shortfall (i.e. an expected value of losses given that
they exceed a p-th quantile, a conditional expectation of losses in
the tail of the loss distribution). A circumflex ("^") over a
parameter is used herein to indicate a statistical estimate of the
parameter.
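As a minimal sketch of how such estimates may be derived from a set of loss samples (a synthetic normal loss sample is used purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.normal(0.0, 1.0, size=100_000)  # hypothetical loss sample
p = 0.99

mean_hat = losses.mean()                     # estimated mean
var_hat = losses.var(ddof=1)                 # estimated variance
l_p = np.quantile(losses, p)                 # estimated p-th quantile (VaR)
unexpected_loss = l_p - mean_hat             # p-th quantile minus mean
expected_shortfall = losses[losses > l_p].mean()  # mean loss beyond the VaR
```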
[0046] The degree to which {circumflex over (F)} approximates F,
and thus the quality of the associated risk estimates, typically
depends on the number of loss samples N (also referred to herein
generally as the "sample size").
[0047] Referring now to FIG. 2, there are shown two example loss
histograms 20, 22 of empirical loss distributions constructed from
loss samples of two different sizes N. In this example, loss
histogram 20 and loss histogram 22 represent empirical loss
distributions constructed from N=1,000 and N=10,000 loss samples,
respectively. It will be understood that as N increases, the loss
histogram, and accordingly the empirical loss distribution,
generally becomes smoother, which reflects a better degree of
approximation of F and may provide the basis for more accurate
estimates of risk measures that are calculated from the loss
distribution.
[0048] The effect of the sample size is especially pronounced when
estimating quantiles for p close to 1, which is typical for credit
portfolios. The quantiles for p close to 1 lie in the extreme right
tail of the loss histograms 20, 22 of FIG. 2. Ideally, if multiple
simulations of N samples each are performed, the resulting risk
estimates should remain more or less constant, i.e., the
variability of each of the risk estimates should be small. It will
be understood that the variability of a risk estimate will
generally decrease as N increases. In practice, accurately
estimating relevant quantiles for credit portfolios requires the
number of loss samples N to be extremely large (e.g., in the
millions).
[0049] Referring now to FIG. 3, there is shown an example block
diagram of a loss sample computation module 24 for computing a
particular loss sample L.sub.n 48 of a set of N loss samples, which
may be used to derive an empirical loss distribution for a
portfolio of interest. The loss sample computation module 24
receives as input a set of sampled values for one or more sets of
risk factors 30, 32, 34, which may be categorized into three groups
as follows: [0050] X.sub.n 30 denotes sampled values of one or more
market risk factors, e.g., interest rates, equity indices, foreign
exchange rates, and commodity prices; [0051] Y.sub.n 32 denotes
sampled values of one or more systemic credit drivers, e.g.,
macroeconomic factors such as GDP and inflation rates, and
country/industry sector indices; and [0052] Z.sub.n 34 denotes
sampled values of one or more counterparty-specific, or
idiosyncratic credit risk factors.
[0053] The loss sample computation module 24 may comprise a pricing
module 36, a credit transition module 40, and a portfolio
aggregation module 46.
[0054] Pricing module 36 may be configured to apply one or more
pricing functions to the sampled values of market risk factors
X.sub.n 30 received as input, and to compute the prices of the
financial instruments in the portfolio. The market risk factors
jointly determine the prices of all financial instruments in the
portfolio. Given the prices, the pricing module 36 may compute a
simulated exposure table 38 for each counterparty named in the
portfolio. Each simulated exposure table 38 indicates the amounts
that would be lost or gained if the respective counterparty
transitioned to any one of a number of possible credit states. The
pricing module 36 can determine the data for each simulated
exposure table 38 either stochastically and/or deterministically.
Data for each simulated exposure table 38 can be stored in one or
more computer memories or storage devices.
[0055] A credit transition module 40 may be configured to receive
as input sampled values of systemic credit drivers Y.sub.n 32 and
sampled values of idiosyncratic credit risk factors Z.sub.n 34, and
to apply a credit transition model to compute a simulated credit
state for each counterparty named in the portfolio. The eventual
credit state of a counterparty depends on (a) the values of a subset of
credit drivers that are common to all counterparties (e.g. sampled
values of systemic credit drivers Y.sub.n 32), and on (b) the value
of a single credit risk factor unique to that counterparty (e.g.
selected from the sampled values of idiosyncratic credit risk
factors Z.sub.n 34).
[0056] The credit transition module 40 may also be configured to
compute a numerical creditworthiness index for each counterparty as
a weighted sum of the sampled values of systemic credit drivers
Y.sub.n 32 and one of the sampled values of idiosyncratic credit
risk factors Z.sub.n 34. For example, a vector of creditworthiness
indices W=.beta.Y+.sigma.Z may be computed, where .beta. is a
matrix of factor loadings and .sigma. is a diagonal matrix of
residual specific risk volatilities, with Y being a vector
comprising sampled values of systemic credit drivers and Z being a
vector comprising sampled values of idiosyncratic credit risk
factors.
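A minimal sketch of the creditworthiness index computation W=.beta.Y+.sigma.Z follows, with hypothetical dimensions and randomly chosen factor loadings and volatilities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cpty, n_drivers = 4, 2   # hypothetical counterparty / driver counts

beta = rng.uniform(0.1, 0.5, size=(n_cpty, n_drivers))  # factor loadings
sigma = np.diag(rng.uniform(0.5, 1.0, size=n_cpty))     # specific-risk vols

y = rng.standard_normal(n_drivers)  # sampled systemic credit drivers
z = rng.standard_normal(n_cpty)     # one idiosyncratic factor per name

w = beta @ y + sigma @ z            # vector of creditworthiness indices
```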
[0057] Then each counterparty's simulated credit state may be
determined by comparing its associated creditworthiness index to a
set of threshold values as determined from a specified matrix of
credit transition probabilities 42. In particular, a default for a
given counterparty may be deemed to occur when its component value
of W falls below a certain pre-determined threshold value, as
determined from the matrix of credit transition probabilities 42.
Data used to populate the specified matrix of credit transition
probabilities 42 may be determined based on historical data.
Accordingly, the credit transition module 40 outputs a table of
simulated credit states 44 for each counterparty, from which a
credit state for each counterparty named in the portfolio can be
determined. Data for each table of simulated credit states 44, one
per YZ pair, can be stored in one or more computer memories or
storage devices.
[0058] For each counterparty in the portfolio, a portfolio
aggregation module 46 determines a sampled loss from instruments
with the specific counterparty. The portfolio aggregation module
obtains these counterparty losses using the associated table of
simulated credit states 44 (which provides the simulated credit
state for each counterparty) in conjunction with the associated
simulated exposure table 38 (which indicates the amount that would
be lost or gained if a specific counterparty transitioned to any
one of a number of possible credit states). In this example, given
the credit state of a counterparty, the sample loss from
instruments with the counterparty may be looked up in its
associated exposure table. The portfolio aggregation module 46 is
configured to then compute the aggregate portfolio loss sample
L.sub.n 48 as the sum of the losses from counterparties. Generated
loss samples can be stored in one or more computer memories or
storage devices.
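The credit-state lookup and portfolio aggregation described above can be sketched as follows; the threshold values and exposure table entries are hypothetical.

```python
import numpy as np

# Hypothetical thresholds partitioning the creditworthiness index into
# 5 credit states (0 = default, 4 = highest rating); a real system would
# derive these from the matrix of credit transition probabilities.
thresholds = np.array([-2.0, -1.0, 0.0, 1.0])

# Hypothetical simulated exposure table: one row per counterparty, one
# column per credit state, giving the loss if the name lands in that state.
exposures = np.array([[100.0, 40.0, 10.0, 0.0, -5.0],
                      [ 80.0, 30.0,  5.0, 0.0, -2.0],
                      [ 60.0, 25.0,  8.0, 0.0, -1.0]])

w = np.array([-2.5, 0.3, 1.4])            # creditworthiness indices
states = np.searchsorted(thresholds, w)   # simulated credit state per name
# Look up each counterparty's loss for its state and sum the portfolio.
loss_sample = exposures[np.arange(len(w)), states].sum()
```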
[0059] The inventors recognized that the computational resources
(e.g., time and/or memory) required to implement each of the modules
shown in FIG. 3, and to perform the various determinations they are
configured to make, can differ greatly amongst modules depending on
the specific input processed.
[0060] For example, consider that a particular counterparty's
credit state may depend on multiple systemic credit drivers, but on
only one idiosyncratic credit risk factor. When the credit
transition module 40 computes a creditworthiness index for a given
counterparty, processing the set of sampled values of systemic
credit drivers Y.sub.n 32 generally accounts for a greater portion of
the computational work than generating the sampled value of the one
idiosyncratic credit risk factor from the set of sampled values of
idiosyncratic credit risk factors Z.sub.n 34.
[0061] More significantly, computing simulated exposure tables 38
from the sampled values of market risk factors X.sub.n 30 requires
the pricing module 36 to price all financial instruments in the
portfolio. Since the number of instruments of a portfolio of
interest may be very large and will typically far exceed the number
of counterparties named in the portfolio, and given that pricing is
a mathematically intensive procedure (e.g. especially for
derivatives), the act of computing simulated exposure tables 38 by
pricing module 36 is generally far more computationally expensive
than the computing of simulated credit state tables 44 by the
credit transition module 40.
[0062] Referring now to FIG. 4, there is shown an example block
diagram of a risk factor sampling module 50 that applies risk
factor models 26 to generate samples of various risk factors 52,
54, 56 at a time step of a predetermined simulation time horizon.
For illustrative purposes, the risk factor sampling module 50 is
shown in the example of FIG. 4 as generating samples for three
market risk factors 52 (X.sup.1, X.sup.2, X.sup.3), two systemic
credit drivers 54 (Y.sup.1, Y.sup.2) and two idiosyncratic credit
risk factors 56 (Z.sup.1, Z.sup.2) at each iteration, or time step.
For example, at iteration n, risk factor sampling module 50
generates a sample X.sub.n.sup.1 for a modeled market risk factor
X.sup.1.
[0063] It will be understood that the evolution of each risk factor
is governed by an appropriate mathematical model. In this example,
specific risk factor models 26 govern the evolution of each risk
factor 52, 54, 56 over a predetermined time horizon (or in some
instances, multiple time horizons). That is, the risk factor models
26 govern how the risk factor sampling module 50 generates samples
of risk factor values 28 for each of the risk factors 52, 54, 56 at
each time step of the time horizon. By way of example, FIG. 4 shows
how an idiosyncratic credit risk factor Z.sup.2 is sampled (to
produce sample Z.sub.n.sup.2) according to a specific idiosyncratic
credit risk factor model 58.
[0064] Referring now to FIG. 5, there is shown an example diagram
of idiosyncratic credit risk factor model 58, which defines the
evolution of an idiosyncratic credit risk factor from time t to
t+.DELTA.t. That is, the idiosyncratic credit risk factor model 58
defines how the risk factor sampling module (e.g. 50 of FIG. 4)
generates a risk factor sample Z(t+.DELTA.t) 68 for the time step
ending at time t+.DELTA.t. For example, idiosyncratic credit risk
factor Z(t+.DELTA.t) 68 is modeled by idiosyncratic credit risk
factor model 58, which is a Brownian motion.
[0065] Applying the idiosyncratic credit risk factor model 58
results in the generation of an increment value .DELTA.Z(t) 62 from
a sample 60 having a normal distribution with mean zero and
variance .DELTA.t. Increment value .DELTA.Z(t) 62 is added to the
risk factor sample Z(t) 64 previously generated for the time step
ending at time t to obtain the newly simulated risk factor sample
Z(t+.DELTA.t) 68. This process is repeated until t+.DELTA.t equals
the time horizon of the simulation, yielding a "sample path" of
risk factor sampled values over the time horizon.
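The incremental Brownian evolution described above can be sketched in code. This is an illustrative sketch only, not part of the patented system; the function name `sample_idiosyncratic_path` and its parameters are assumptions introduced here for clarity.

```python
import numpy as np

def sample_idiosyncratic_path(z0, dt, n_steps, rng=None):
    """Evolve one idiosyncratic credit risk factor Z as a standard
    Brownian motion: Z(t + dt) = Z(t) + dZ(t), with dZ(t) ~ N(0, dt).
    Returns the sample path over n_steps time steps."""
    rng = rng or np.random.default_rng()
    # Each increment is drawn from a normal distribution with mean
    # zero and variance dt (standard deviation sqrt(dt)).
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    # Accumulating the increments from the starting value z0 yields
    # the "sample path" of risk factor values over the time horizon.
    return z0 + np.cumsum(increments)

path = sample_idiosyncratic_path(z0=0.0, dt=0.25, n_steps=4)
```

Each entry of `path` corresponds to one Z(t+.DELTA.t) sample at the end of a time step.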
[0066] Referring now to FIG. 6, there is shown an example block
diagram of a risk factor model module 92 for use in a risk factor
sampling module (e.g. 50 of FIG. 4) implementing a known "simple
sampling" approach to generating risk factor samples. The models
applied by risk factor model module 92 govern the evolution of
three different types of risk factors: market risk factors X(t),
systemic credit drivers Y(t), and idiosyncratic credit risk factors
Z(t). In one example implementation, risk factor model module 92
may apply three different risk factor models (e.g. risk factor
model 26), one risk factor model for each type of risk factor.
[0067] The known "simple sampling" approach generally involves
generating one sample for each risk factor at each time step (i.e.
an evolution from time t to t+.DELTA.t). The risk factor model
module 92 that implements the "simple sampling" approach attempts
to integrate market and credit risk. Market risk factors X(t) and
systemic credit drivers Y(t) evolve in a correlated manner as
specified by a pre-specified co-variance matrix .SIGMA. 70.
[0068] As shown at 72, a joint sample of an increment value
.DELTA.X(t) 74 and increment value .DELTA.Y(t) 76 is generated
according to the pre-specified co-variance matrix .SIGMA. 70 from the
joint distribution of increment value .DELTA.X(t) 74 and
.DELTA.Y(t) 76, where .DELTA.Y(t) 76 is represented by a centered,
normal distribution. Subsequently, these values are added to the
risk factor samples X(t) 78 and Y(t) 80 previously generated at the
time step ending at time t, to obtain newly simulated risk factor
samples, X(t+.DELTA.t) 88 and Y(t+.DELTA.t) 90 respectively.
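The joint increment draw of paragraph [0068] can be sketched as follows. This is a hedged illustration: the factor counts, the equicorrelation covariance matrix, and the function name `simple_sampling_step` are assumptions made here for the example and are not taken from the application.

```python
import numpy as np

def simple_sampling_step(x, y, sigma, rng):
    """One 'simple sampling' step: draw a single joint increment
    (dX, dY) ~ N(0, sigma) for the stacked vector of market risk
    factors X and systemic credit drivers Y, then advance both."""
    increment = rng.multivariate_normal(np.zeros(sigma.shape[0]), sigma)
    # The first len(x) components update the market risk factors,
    # the remainder update the systemic credit drivers.
    dx, dy = increment[:len(x)], increment[len(x):]
    return x + dx, y + dy

rng = np.random.default_rng(7)
# Illustrative 5x5 covariance for 3 market factors + 2 credit drivers:
# equicorrelation 0.3, variance 0.01 per component (assumed values).
corr = np.full((5, 5), 0.3) + 0.7 * np.eye(5)
sigma = 0.01 * corr
x_next, y_next = simple_sampling_step(np.zeros(3), np.zeros(2), sigma, rng)
```

Repeating this step N times produces the N independent joint samples that the "simple sampling" approach requires per time step.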
[0069] Idiosyncratic credit risk factors are, by definition,
independent and therefore they are unaffected by the co-dependence
structure .SIGMA. 70. The risk factor model module 92 may generate
samples of the idiosyncratic credit risk factors, as was described
with reference to the risk factor model 58 of FIG. 5, for example.
Accordingly, in one aspect, risk factor model module 92 is
configured to apply an idiosyncratic credit risk factor model 58 to
generate samples of the idiosyncratic risk factors.
[0070] The risk factor model module 92 repeats this process until
all required risk factor samples are generated for all (of one or
more) time steps, i.e. when t+.DELTA.t equals the time horizon for
the simulation.
[0071] Referring now to FIG. 7, there is shown an example block
diagram illustrating example output of a risk factor sampling
module 50 comprising the risk factor model module 92 of FIG. 6,
which implements the "simple sampling" approach. A resultant set of
risk factor samples 59 generated by the risk factor sampling module
50 is shown. This resulting set of risk factor samples 59 may then
be input to and processed by, for example, loss sample computation
module 24 (FIG. 3) to obtain N=12 loss samples 100 (e.g. L.sub.1 to
L.sub.12 in this example).
[0072] It will be understood that in order to produce N=12 loss
samples using a simple sampling approach, N=12 distinct risk factor
sampled values for each risk factor 52, 54, 56 are produced by risk
factor model module 92. For example, the market risk factor X.sup.1
is sampled N=12 times. Then, each of the sampled values for the
given market risk factor (i.e. each of the N=12 values for X.sup.1)
is used only once, along with the other corresponding sampled risk
factors (e.g. one of the N=12 values produced for each of X.sup.2,
X.sup.3, Y.sup.1, Y.sup.2, Z.sup.1, Z.sup.2 in the example of FIG.
7) to compute a corresponding one of the resultant N loss
samples.
[0073] Since each risk factor is sampled N=12 times, the N=12
sampled losses (L.sub.1 to L.sub.12) are independent. Generally, it
will also be understood that since N samples of each market risk
factor 52 are generated, the loss sample computation module 24
(FIG. 3) will need to calculate N sets of simulated exposure tables
38 (FIG. 3), one for each of the N portfolio loss samples, which is
a relatively computationally expensive task.
[0074] By way of example, referring back to FIG. 3, for n=1, a
generated set of market risk samples (X.sub.1.sup.1, X.sub.1.sup.2,
X.sub.1.sup.3) is provided as input X.sub.n 30 to the pricing
module 36 for use in calculating a first simulated exposure table
38, which in turn is used in obtaining a first loss sample L.sub.1
48. This step may be repeated to calculate multiple loss
samples.
[0075] This illustrates that under the "simple sampling" approach,
although joint samples of the market risk factors and systemic
credit drivers are taken in accordance with a pre-specified
co-dependence structure in an attempt to integrate credit and
market risk, N samples of each risk factor must be generated. This
may result in computational and resource inefficiencies,
particularly since N sets of simulated exposure tables will need to
be generated in the simulation under this approach, and in use, N
may be very large.
[0076] Typically, the number of loss samples N that can be
generated in practice is limited by the availability of computing
resources (e.g. time and/or memory). Thus resource and/or time
intensive processes act as constraints on the number of loss
samples N that may be simulated.
[0077] In the development of a second known approach to generating
risk factor samples, described below, it was recognized that since
the idiosyncratic credit risk factors Z.sub.n are independent of
the market risk factors X.sub.n and systemic credit drivers
Y.sub.n, any sample of an idiosyncratic credit risk factor Z.sub.k
can be combined with a given joint sample of market risk factors
and credit drivers (X.sub.n, Y.sub.n), while still preserving the
required co-dependence structure for the market risk factors
X.sub.n and systemic credit drivers Y.sub.n. It was also recognized
that processing the sampled idiosyncratic credit factor values
Z.sub.n to compute creditworthiness indices is generally
computationally inexpensive, relative to other processing acts
performed when computing loss samples.
[0078] Referring now to FIG. 8, there is shown an example block
diagram of a risk factor model module 98 for use in a risk factor
simulation module implementing a "two-tiered" approach to
generating risk factor samples. The illustrated "two-tiered"
approach generally involves generating multiple joint samples (less
than N) of the market risk factors and systemic credit drivers, and
combining each joint sample with multiple samples of the
idiosyncratic credit risk factors to obtain N loss samples. As a
result, under the "two-tiered" approach, a single joint market risk
factor and credit driver sample may be re-used to produce multiple
loss samples.
[0079] In the example of FIG. 8, the risk factor model module 98
generates joint samples of market risk factors and systemic credit
drivers as was described with reference to the risk factor module
92 of FIG. 6. However, the risk factor model module 98 is different
in that, at each time step of the time horizon for the simulation,
the risk factor model module 98 generates multiple samples of
increment values .DELTA.Z(t) 82 from a normal distribution having a
mean of zero and a variance of .DELTA.t, where .DELTA.t in the
context of specifying the normal distribution is understood to
multiply the K.times.K identity matrix when generating increments
for K idiosyncratic credit risk factors. Each increment value
.DELTA.Z(t) 82 is added to the corresponding idiosyncratic risk
factor sample Z(t) 84 previously generated at the time step ending
at time t, to obtain newly simulated idiosyncratic credit risk
factor samples Z(t+.DELTA.t) 93.
[0080] The multiple idiosyncratic credit risk factor samples
Z(t+.DELTA.t) 93 may be used with one joint market risk factor
sample X(t+.DELTA.t) 88 and systemic credit driver sample
Y(t+.DELTA.t) 90 to obtain multiple loss samples, one for each
Z(t+.DELTA.t) 93. Although the resultant loss samples are no longer
independent as multiple loss samples are generated from the same
market risk factor sample X(t+.DELTA.t) 88 and systemic credit
driver sample Y(t+.DELTA.t) 90, they do nevertheless satisfy the
weaker technical condition known as m-dependence.
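The "two-tiered" re-use of each joint (X, Y) sample can be sketched as follows. This is an illustrative sketch under stated assumptions: the function name `two_tiered_losses`, the toy `loss_fn`, and the unit time step are invented here, not drawn from the application.

```python
import numpy as np

def two_tiered_losses(joint_samples, idio_per_joint, loss_fn, rng):
    """'Two-tiered' sampling: each of B joint (X, Y) samples is
    re-used with I idiosyncratic samples Z, yielding N = B * I
    loss samples from only B costly joint samples."""
    losses = []
    for x, y in joint_samples:              # B expensive joint samples
        for _ in range(idio_per_joint):     # I cheap idiosyncratic draws
            # dZ ~ N(0, dt) with dt = 1 assumed for this sketch.
            z = rng.standard_normal(2)
            losses.append(loss_fn(x, y, z))
    return losses

rng = np.random.default_rng(0)
# B = 3 joint samples of (3 market factors, 2 credit drivers).
joints = [(rng.standard_normal(3), rng.standard_normal(2)) for _ in range(3)]
# A placeholder loss function, purely for illustration.
losses = two_tiered_losses(joints, idio_per_joint=4,
                           loss_fn=lambda x, y, z: x.sum() + y.sum() + z.sum(),
                           rng=rng)
# N = B * I = 3 * 4 = 12 loss samples
```

The outer loop corresponds to the B simulated exposure tables, the inner loop to the I idiosyncratic samples re-using each table.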
[0081] Referring now to FIG. 9, there is shown an example block
diagram illustrating example output of a variant risk factor
sampling module 50 comprising the risk factor model module 98 of
FIG. 8, which implements the "two-tiered" approach. A resultant set
of risk factor samples 96 generated by the risk factor sampling
module 50 is shown. This resultant set of risk factor samples 96
may then be processed by, for example, loss sample computation
module 24 (FIG. 3) to obtain N=12 loss samples 100 (e.g. L.sub.1 to
L.sub.12 in this example).
[0082] It can be observed that, using this "two-tiered" approach,
N=12 portfolio loss samples can be obtained by combining I=4 (118)
idiosyncratic credit risk factor samples with each of B=3 (116)
joint samples of market risk factors and systemic credit drivers.
In this example, four idiosyncratic risk values Z(t+.DELTA.t) 93
are used with each given market risk factor sample X(t+.DELTA.t) 88
and each systemic credit driver value Y(t+.DELTA.t) 90 (FIG.
8).
[0083] In this example, three groups 102, 104, 106 of sets of risk
factor samples are generated. For each group, only one sample of a
given market risk factor (e.g. X.sup.1 94) and of each systemic
credit driver is generated, and that sample is re-used in
combination with each of four samples of the idiosyncratic credit
risk factors to generate four different sets of risk factor samples
per group. Each set of risk factor samples can be used to calculate
a loss sample, and accordingly, N=BI=12 loss samples can be
generated by this approach in the example as shown.
[0084] More specifically, four loss samples (L.sub.1 to L.sub.4)
are generated by, for example, loss sample computation module 24
(FIG. 3), from four risk factor scenarios defined by the first group
102, which comprises one sample for each market risk factor 108
(X.sub.1.sup.1, X.sub.1.sup.2, X.sub.1.sup.3), one sample for each
systemic credit driver 110 (Y.sub.1.sup.1, Y.sub.1.sup.2), and four
samples for each idiosyncratic credit risk factor, generated by
risk factor sampling module 50. Similarly, four loss samples can be
computed from risk factor scenarios defined by the risk factor
samples of the second group 104, and also of the third group
106.
[0085] Referring back to FIG. 3, in accordance with the "two-tiered"
approach, the loss sample computation module 24 may re-use the same
simulated exposure table 38 generated from the same market risk
factor sample in the first group (e.g. group 102), for calculating
different loss samples, each based on a different one of I=4
idiosyncratic credit risk factor samples. It will be observed from
the example that the number of simulated exposure tables 38
required to be generated is reduced by a factor of I relative to
the "simple sampling" approach (i.e. only B=3 tables, one per
distinct market risk factor sample, instead of N=12), while still
obtaining N=BI=12 loss samples 100.
[0086] Accordingly, use of the "two-tiered" approach typically
results in a reduction in the number of distinct samples of market
risk factors and systemic credit drivers required relative to the
"simple sampling" approach (e.g. see FIG. 7) to provide the same
number N of loss samples.
[0087] Referring now to FIG. 10, there is shown an example
graphical representation of the risk factor scenario structure
underlying the resulting set of risk factor samples 96 generated
according to the "two-tiered" approach to generating risk factor
samples. Scenarios represented by sets of risk factor samples can
be viewed as a two-level tree, with the joint market risk factor
and systemic credit driver samples (X,Y) 112 emanating from the
root level, and the multiple idiosyncratic credit risk factor
samples Z 114 branching out from each joint market risk factor and
systemic credit driver sample (X,Y) 112 at the second branch level.
This graphical representation of the risk factor scenarios further
illustrates that only B distinct market risk factor samples (and
systemic credit driver samples) must be generated. Specifically,
each of B nodes in the first branch represents a distinct market
risk factor sample (of the joint sample) which can be re-used for I
different idiosyncratic credit risk factor samples. It will be
observed that since only B distinct market risk factor samples (or
B sets if multiple market risk factors are modeled), B
corresponding simulated exposure tables need be generated by loss
sample computation module 24.
[0088] The inventors realized, however, that although the
"two-tiered" approach provides certain advantages over the "simple
sampling" approach, a number of practical limitations may arise
with the former approach in certain applications. For example,
there may be a limit on the number of idiosyncratic risk factor
samples that may be employed, i.e. the size of I. It may be
observed that beyond a certain point, simply generating more
idiosyncratic credit risk factor samples for each joint sample in
the "two-tiered" approach as described above is no longer effective
for improving the approximation of the loss distribution F for a
given portfolio. In particular, if certain counterparties incur
significant systemic credit risk (i.e., their eventual credit
states depend largely on the systemic credit drivers), then a large
number of samples of systemic credit drivers Y would be required in
order to accurately approximate the right tail of the computed loss
distribution (e.g. see FIG. 2). One may consider choosing a higher
B so that a greater number of samples of Y may be obtained to
improve the approximation of the loss distribution. However, recall
that in the "two-tiered" approach, market risk factor samples and
systemic credit driver samples are jointly simulated. Therefore,
increasing the number of desired samples of systemic credit driver
Y would, in turn, necessitate an equal increase in the number of
generated samples of market risk factor X. In computing loss
samples, the number of simulated exposure tables will also increase
accordingly. As previously noted, the act of generating simulated
exposure tables is very computationally expensive.
[0089] The inventors also observed that the known "two-tiered"
approach does not provide guidance on how a given selection of B
and I might impact the quality of risk estimates calculated from a
generated loss distribution. In practice, implementations of the
"two-tiered" approach typically require B and I to be determined
through trial and error.
Compound Risk Factor Sampling and Optimized Sampling Scheme
[0090] In accordance with at least one embodiment, a compound risk
factor sampling approach is employed in systems and methods
described herein. In one broad aspect, compound risk factor
sampling is performed that generally comprises conditionally
generating multiple samples of systemic credit driver Y for each
sample of market risk factor X generated, at each time step of a
time horizon for a simulation.
[0091] This approach may reduce the number of costly simulated
exposure calculations (e.g. generated simulated exposure tables 38
of FIG. 3) required to obtain a desired number N of loss samples,
compared to the known approaches described above. In another broad
aspect, there is provided systems and methods configured to
determine an optimal number of sample values for each of the market
risk factors X, systemic credit drivers Y and idiosyncratic credit
risk factors Z to be generated at each time step of a time horizon
for a simulation, in order to obtain an acceptable amount of
variability in one or more computed risk estimates, and/or to
satisfy an available computational budget, such as a time
constraint. This may generally eliminate the need to determine the
optimal or otherwise desired number of risk factor samples by trial
and error.
Compound Risk Factor Sampling
[0092] In at least one embodiment described herein, a compound risk
factor sampling approach as described herein is used to generate an
integrated market and credit loss distribution for the purpose of
calculating one or more risk measures associated with a portfolio
of instruments by performing a simulation.
[0093] A market risk factor process is denoted as X(t), a systemic
credit driver process as Y(t), and an idiosyncratic risk factor
process as Z(t). In at least one embodiment, each of the processes
are vector-valued, with X(t) and Y(t) indexed by the individual
scalar risk factors and Z(t) indexed by the counterparty names in
the portfolio. The simulation is performed for at least one time
horizon, wherein the time horizon comprises at least one time step.
Let t and t+.DELTA.t be two consecutive simulation times.
[0094] For a compound risk factor sampling approach, the following
assumptions are made: [0095] Market risk factor X(t) can be
partitioned into at least one group of components with each group
assigned a particular model. [0096] Market risk factor X(t) can be
transformed via a bijective function, which will be referred to as
G(X(t)). The function G(X(t)) may be allowed to depend on t and
.DELTA.t but such dependence is suppressed in the following
notation. The increment value
.DELTA.G(X(t)).ident.G(X(t+.DELTA.t))-G(X(t)) is a (possibly
time-dependent) bijective function of a centred Normal random
vector .XI.(t), and will be referred to as
H.sub.t,.DELTA.t(.XI.(t)). H.sub.t,.DELTA.t may depend on X(t) as
well, and such a dependency will be expressed as
H.sub.t,.DELTA.t(.XI.(t);X(t)); [0097] Models for each group within
the market risk factor X(t) that satisfy these assumptions include,
for example: Brownian motions (with or without drift);
Ornstein-Uhlenbeck processes; Hull-White processes; Geometric
Brownian motions; and Black-Karasinski processes. [0098] The
corresponding functions G and H for a group can be represented as
follows: [0099] 1.1. Brownian motion, possibly correlated
[0099] X(t+.DELTA.t)=X(t)+.XI.(t). [0100] The covariances for
.XI.(t) are set in the corresponding rows and columns of
.SIGMA..sub.11.
[0100] G(X)=X, H(.XI.(t))=.XI.(t). [0101] 1.2. Brownian motion with
drift, possibly correlated
[0101] X(t+.DELTA.t)=X(t)+b(t).DELTA.t+.XI.(t). [0102] where b(t)
is the instantaneous drift vector which is constant over the time
step increment. The covariances for .XI.(t) are set in the
corresponding rows and columns of .SIGMA..sub.11.
[0102] G(X)=X, H(.XI.(t))=b(t).DELTA.t+.XI.(t). [0103] 1.3.
Geometric Brownian motion, possibly correlated, with or without
drift, with or without Ito correction
[0103]
X(t+.DELTA.t)=X(t)exp(u(t).DELTA.t-(.delta./2).sigma..sup.2(t)+.XI.(t)) [0104] where u(t) is the instantaneous drift vector which is
constant over the time step increment, and .delta.=1 if the Ito
correction is included and 0 otherwise. The covariances for .XI.(t)
are set in the corresponding rows and columns of .SIGMA..sub.11;
.sigma..sup.2(t) is the vector of variances taken from the
corresponding diagonal entries of .SIGMA..sub.11.
[0104] G(X)=log X,
H(.XI.(t))=u(t).DELTA.t-(.delta./2).sigma..sup.2(t)+.XI.(t). [0105]
1.4. Ornstein-Uhlenbeck process, with possible nonzero mean
reverting level
[0105] X(t+.DELTA.t)= x+e.sup.-a.DELTA.t[X(t)- x]+.XI.(t) [0106]
where x is the vector of mean reverting levels and a is the vector
of mean reverting rates, both constant over the entire final
horizon. The covariances for .XI.(t) are set in the corresponding
rows and columns of .SIGMA..sub.11 and are of the form
[0106]
cov.sub.jk[1-exp{-(a.sub.j+a.sub.k).DELTA.t}]/[a.sub.j+a.sub.k]
[0107] for the (j,k)-th pair of components of .XI.(t). Here a.sub.j
and a.sub.k are the j-th and k-th components of a respectively and
cov.sub.jk is the instantaneous covariance of the underlying
driving Brownian motions for the (j,k)-th pair of components of
.XI.(t), their instantaneous covariance being constant over the
entire final horizon.
[0107] G(X)=X, H(.XI.(t))=[1-e.sup.-a.DELTA.t][ x-X(t)]+.XI.(t)
[0108] where 1 denotes the vector with all components equal to 1.
[0109] 1.5. Black-Karasinski process, with possible nonzero
exponential mean reverting level
[0109] log X(t+.DELTA.t)= x+e.sup.-a.DELTA.t[log X(t)- x]+.XI.(t)
[0110] where x is the vector of mean reverting levels and a is the
vector of mean reverting rates, both for the log process and both
constant over the entire final horizon. The covariances for .XI.(t)
are set in the corresponding rows and columns of .SIGMA..sub.11 and
are of the form
[0110]
cov.sub.jk[1-exp{-(a.sub.j+a.sub.k).DELTA.t}]/[a.sub.j+a.sub.k]
[0111] for the (j,k)-th pair of components of .XI.(t). Here a.sub.j
and a.sub.k are the j-th and k-th components of a respectively and
cov.sub.jk is the instantaneous covariance of the underlying
driving Brownian motions for the (j,k)-th pair of components of
.XI.(t), their instantaneous covariance being constant over the
entire final horizon.
[0111] G(X)=log X, H(.XI.(t))=[1-e.sup.-a.DELTA.t][ x-log
X(t)]+.XI.(t) [0112] where 1 denotes the vector with all components
equal to 1. [0113] Systemic credit driver Y(t) is a correlated
Brownian motion (CBM). The increment value
.DELTA.Y(t).ident.Y(t+.DELTA.t)-Y(t) is normally distributed with
mean zero. [0114] The random vector (.XI.(t), .DELTA.Y(t)),
conditional on (X(t), Y(t)), is jointly normally distributed,
having a covariance matrix .SIGMA., where
[0114] $$\Sigma \equiv \begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}.$$ [0115] Note that
.SIGMA. will depend generally on t, .DELTA.t, X(t), Y(t) even
though this dependence is suppressed in the notation. [0116]
Idiosyncratic credit risk factor Z(t) is a standard Brownian
motion, which is independent of (X(t), Y(t)). The increment value
.DELTA.Z(t).ident.Z(t+.DELTA.t)-Z(t) is normally distributed, N (0,
.DELTA.t), and is independent of the random vector (.XI.(t),
.DELTA.Y(t)).
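As an illustration of the G/H mechanism for one of the listed models, the geometric Brownian motion case (1.3) can be sketched in code: X(t+.DELTA.t)=G.sup.-1(G(X(t))+H(.XI.(t))) with G(X)=log X. This is a hedged sketch; the function name `step_gbm` and the numeric parameter values are assumptions introduced for illustration.

```python
import numpy as np

def step_gbm(x_t, xi, u, sigma2, dt, ito=True):
    """Advance a geometric Brownian motion one time step via the
    G/H pair of case 1.3:
        G(X) = log X,
        H(xi) = u*dt - (delta/2)*sigma2 + xi,
    so that X(t+dt) = G^{-1}(G(X(t)) + H(xi)) = exp(log X(t) + H(xi)).
    delta = 1 applies the Ito correction, 0 omits it."""
    delta = 1.0 if ito else 0.0
    h = u * dt - (delta / 2.0) * sigma2 + xi
    return np.exp(np.log(x_t) + h)

# Illustrative values: drift u = 5%, variance sigma^2 = 0.04,
# quarterly step dt = 0.25, and a sampled increment xi = 0.02.
x_next = step_gbm(x_t=100.0, xi=0.02, u=0.05, sigma2=0.04, dt=0.25)
```

In a simulation, `xi` would be one component of the sampled normal vector .XI.(t) rather than a fixed number as here.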
[0117] Referring now to FIG. 11, there is shown, for comparative
purposes, an example block diagram of a risk factor model module
142 illustrating how certain market factor models may be applied in
a simulation performed in accordance with a known "two-tiered"
approach.
[0118] In this simplified example, it may be observed that the risk
factor model module 142 implements a "two-tiered" approach, since
at the end of each time step t+.DELTA.t, a single market risk
factor sample X(t+.DELTA.t) 88 and a single systemic credit driver
sample Y(t+.DELTA.t) 90 are generated, along with multiple
idiosyncratic credit risk samples Z(t+.DELTA.t) 93.
[0119] Referring now to FIG. 12, there is shown an example block
diagram of a risk factor model module 144 for generating risk
factor samples for a time step ending at time t+.DELTA.t, for use
in a risk factor simulation module implementing compound risk
factor sampling in accordance with at least one embodiment.
[0120] As shown in FIG. 12, for a time step ending at time
t+.DELTA.t, the risk factor model module 144 not only generates
multiple idiosyncratic credit risk factors samples Z(t+.DELTA.t)
93, but also multiple systemic credit drivers samples Y(t+.DELTA.t)
136, while only generating a single market risk factor sample
X(t+.DELTA.t) 88. The risk factor model module 144 generates market
risk factor samples and systemic credit driver samples in a manner
that preserves their co-dependence. Specifically, the credit driver
samples Y(t+.DELTA.t)s 136 are generated conditionally on the
market risk sample X(t+.DELTA.t) 88.
[0121] Generally, the risk factor model module 144 conditionally
generates risk factor samples for the time step ending at time
t+.DELTA.t, (i.e. samples X(t+.DELTA.t) 88, Y(t+.DELTA.t)s 136, and
Z(t+.DELTA.t)s 93) by generating the increment values .XI.(t) 120,
.DELTA.Y(t)s 132, and .DELTA.Z(t)s 82 respectively, using the
relations derived from the above assumptions:
X(t+.DELTA.t)=G.sup.-1(G(X(t))+H.sub.t,.DELTA.t(.XI.(t)))
Y(t+.DELTA.t)=Y(t)+.DELTA.Y(t)(for each Y(t))
Z(t+.DELTA.t)=Z(t)+.DELTA.Z(t)(for each Z(t)).
[0122] The risk factor model module 144 is provided with the
predetermined co-variance matrix .SIGMA. 124 that defines the joint
evolution of market risk factors and systemic credit drivers over
time. In at least one embodiment, .SIGMA. 124 is a covariance
matrix of a random vector (.XI.(t), .DELTA.Y(t)) that is
conditional on X(t) and Y(t) and is jointly normally
distributed.
[0123] The risk factor model module 144 generates a sample of a
vector .XI.(t) 120 (as defined above) of normal random variables
with a distribution N(0,.SIGMA..sub.11). This vector .XI.(t) 120 is
used to obtain the market risk sample X(t+.DELTA.t) 88 and
conditionally generate the systemic credit driver samples
Y(t+.DELTA.t)s 136.
[0124] Specifically, the risk factor model module 144 obtains a
market risk factor sample X(t+.DELTA.t) 88 by transforming the
random vector .XI.(t) 120 via the above-defined bijective function
H.sub.t,.DELTA.t conditional on the previously obtained (i.e. at
the end of time step t) market risk factor sample X(t) 78. This
results in the increment value .DELTA.G(X(t)) 122 (i.e.
H(.XI.(t);X(t))), where
.DELTA.G(X(t)).ident.G(X(t+.DELTA.t))-G(X(t)). A transformation
module 140 may be configured to use the increment value
.DELTA.G(X(t)) 122 to obtain X(t+.DELTA.t) 88, since
X(t+.DELTA.t)=G.sup.-1(G(X(t))+H(.XI.(t);X(t))). The specific
functions used for G and H.sub.t,.DELTA.t may depend on how the
market risk factor process X is modeled. The market risk factor
sample X(t+.DELTA.t) 88 is generated based on the sample of the
vector .XI.(t) 120 of normal random variables, the model for the
market risk factor process X, and a previous market risk factor
sample X(t) 78 generated at the end of time step t.
[0125] The risk factor model module 144 generates credit driver
samples Y(t+.DELTA.t)s 136 conditionally on X(t), X(t+.DELTA.t),
and Y(t), (or equivalently on X(t), .XI.(t), and Y(t)), by
implementing a conditional parameters module 126 and a CBM model
138.
[0126] Given the random vector .XI.(t) 120 and the co-variance
matrix .SIGMA. 124, a conditional parameters module 126 computes a
conditional mean .mu.(.XI.(t)) and conditional co-variance matrix
{tilde over (.SIGMA.)} 128, where:
.mu.(.XI.(t))=.SIGMA..sub.21.SIGMA..sub.11.sup.-1.XI.(t)
{tilde over (.SIGMA.)}=.SIGMA..sub.22-.SIGMA..sub.21.SIGMA..sub.11.sup.-1.SIGMA..sub.12.
[0127] In a case where .SIGMA..sub.11 is not invertible, then
alternatively the conditional parameters module 126 may use, for
example, a Moore-Penrose generalized inverse, .SIGMA..sub.11.sup.+
in place of .SIGMA..sub.11.sup.-1.
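The conditional parameters of paragraphs [0126]-[0127], including the Moore-Penrose fallback, can be sketched as follows. This is an illustrative sketch: the function name `conditional_params`, the block partitioning convention, and the numeric example are assumptions made here.

```python
import numpy as np

def conditional_params(sigma, n_market, xi):
    """Conditional mean and covariance of dY given Xi = xi:
       mu(xi)      = Sigma21 @ inv(Sigma11) @ xi
       Sigma_tilde = Sigma22 - Sigma21 @ inv(Sigma11) @ Sigma12.
    np.linalg.pinv (Moore-Penrose pseudo-inverse) is used so the
    computation also covers a singular Sigma11."""
    s11 = sigma[:n_market, :n_market]
    s12 = sigma[:n_market, n_market:]
    s21 = sigma[n_market:, :n_market]
    s22 = sigma[n_market:, n_market:]
    s11_inv = np.linalg.pinv(s11)
    mu = s21 @ s11_inv @ xi
    sigma_tilde = s22 - s21 @ s11_inv @ s12
    return mu, sigma_tilde

# Toy example: 2 market factors, 1 credit driver (assumed values).
sigma = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, 0.0],
                  [0.5, 0.0, 1.0]])
mu, st = conditional_params(sigma, n_market=2, xi=np.array([1.0, 0.0]))
# mu = [0.5], Sigma_tilde = [[0.75]]
```

With Sigma11 equal to the identity, the conditional mean is simply Sigma21 @ xi and the conditional variance shrinks by the explained part Sigma21 @ Sigma12.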
[0128] The conditional parameters 128 (.mu.(.XI.(t)) and {tilde
over (.SIGMA.)}) are provided to the CBM model 138 for defining the
multi-sample conditional distribution 130 for generating the
multiple increment values .DELTA.Y(t)s 132. Specifically, the
increment values .DELTA.Y(t)s 132 are generated from a multi-sample
with the conditional normal distribution N(.mu.(.XI.(t)),{tilde over (.SIGMA.)}).
[0129] These increment values .DELTA.Y(t)s 132 are combined with
multiple systemic credit driver samples Y(t)s 134 previously
generated at the time step ending at time t. This results in
multiple systemic credit driver samples Y(t+.DELTA.t)s 136 being
conditionally generated on the market risk factor sample
X(t+.DELTA.t) 88.
[0130] In addition, the risk factor model module 144 may
independently generate multiple idiosyncratic credit risk samples
Z(t+.DELTA.t)s 93, as generally described in relation to FIG. 8.
However, note that multiple idiosyncratic credit risk samples
Z(t+.DELTA.t)s 93 are generated for each market risk factor
sample-systemic credit driver sample pair. Specifically, a set of I
idiosyncratic credit risk samples Z(t+.DELTA.t)s 93 are generated
for each of the S conditional systemic credit driver samples
Y(t+.DELTA.t)s 136, per market risk factor sample X(t+.DELTA.t) 88.
This is graphically illustrated in FIG. 15, and will be explained
in further detail herein.
[0131] The risk factor model module 144 will repeat this process
until the steps are performed for a given time step t+.DELTA.t that
is the last time step of the time horizon. Although only one market
risk factor sample is shown to be generated in this example,
multiple market risk factor samples (M) may be generated at the end
of each time step t+.DELTA.t, with the systemic credit driver
samples generated conditionally on each of the market risk factor
samples, as will be explained herein.
[0132] FIG. 12 illustrates how the discrete-time credit driver
process Y is generated incrementally, conditionally on the
discrete-time market risk factor process X using .XI.(t) 120 and
.DELTA.Y(t)s 132. By repeatedly sampling the N(.mu.(.XI.(t)),{tilde over (.SIGMA.)}) distribution to generate .DELTA.Y(t)s 132,
multiple conditional systemic credit driver samples Y(t+.DELTA.t)s
136 are generated at the end of each time step t+.DELTA.t.
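One compound-sampling time step, as described in paragraphs [0121]-[0129], can be sketched end to end: draw a single .XI., derive the conditional parameters, then draw S credit-driver increments from the conditional normal. This is a hedged sketch, not the patented implementation; the function name `compound_step`, the toy 2x2 covariance, and S=5 are assumptions for illustration.

```python
import numpy as np

def compound_step(y_prev_samples, sigma, n_market, s_drivers, rng):
    """One compound-sampling step: draw one Xi ~ N(0, Sigma11) for
    the market factors, then draw S credit-driver increments dY
    conditionally from N(mu(Xi), Sigma_tilde), and advance the S
    previous credit-driver samples."""
    s11 = sigma[:n_market, :n_market]
    s12 = sigma[:n_market, n_market:]
    s21 = sigma[n_market:, :n_market]
    s22 = sigma[n_market:, n_market:]
    # Single market-factor innovation Xi(t) ~ N(0, Sigma11).
    xi = rng.multivariate_normal(np.zeros(n_market), s11)
    # Conditional parameters (pseudo-inverse covers singular Sigma11).
    s11_inv = np.linalg.pinv(s11)
    mu = s21 @ s11_inv @ xi
    sigma_tilde = s22 - s21 @ s11_inv @ s12
    # S conditional increments dY ~ N(mu, Sigma_tilde), one per
    # systemic credit driver sample being carried forward.
    dys = rng.multivariate_normal(mu, sigma_tilde, size=s_drivers)
    return xi, y_prev_samples + dys

rng = np.random.default_rng(1)
sigma = np.array([[1.0, 0.5],
                  [0.5, 1.0]])       # 1 market factor, 1 credit driver
y_prev = np.zeros((5, 1))            # S = 5 previous driver samples
xi, y_next = compound_step(y_prev, sigma, n_market=1, s_drivers=5, rng=rng)
```

The single `xi` would feed one (costly) exposure-table computation, while the five `y_next` samples each feed a (cheap) credit-state computation.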
Illustrative Example
[0133] The risk factor model module 144 will be further illustrated
with a simple example consisting of three risk factors: two market
factors--an equity value X.sub.e following a Geometric Brownian
motion, and a mean reverting interest rate X.sub.r--and a single
credit driver Y, following a Brownian motion:
dX.sub.e=vX.sub.edt+.sigma..sub.1X.sub.edB.sub.1
dX.sub.r=a[ x-X.sub.r]dt+.sigma..sub.2dB.sub.2
dY=dB.sub.3
where .nu. is a constant growth rate, x is the constant mean
reverting level, a is the rate of mean reversion,
.sigma..sub.1,.sigma..sub.2 are instantaneous volatilities, and
(B.sub.1,B.sub.2,B.sub.3) is a Brownian motion with instantaneous
correlation matrix, (.rho..sub.ij).sub.1.ltoreq.i,j.ltoreq.3.
[0134] The solutions to these stochastic differential equations are
given as
$$\log X_e(t)=\log X_e(0)+\Big[\nu-\frac{\sigma_1^2}{2}\Big]t+\sigma_1 B_1(t)$$
$$X_r(t)=e^{-at}\big[X_r(0)-\bar{x}\big]+\bar{x}+\sigma_2 e^{-at}\int_0^t e^{as}\,dB_2(s)$$
$$Y(t)=Y(0)+B_3(t)$$
[0135] Moreover, the increments are given by
$$\Delta\log X_e(t)=\Big[\nu-\frac{\sigma_1^2}{2}\Big]\Delta t+\sigma_1\,\Delta B_1(t)$$
$$\Delta X_r(t)=\big[1-e^{-a\Delta t}\big]\big[\bar{x}-X_r(t)\big]+\sigma_2 e^{-a[t+\Delta t]}\int_t^{t+\Delta t}e^{as}\,dB_2(s)$$
$$\Delta Y(t)=\Delta B_3(t).$$
[0136] Thus we can set (transposition of matrices is denoted by a
superscript prime, "'"; vectors are represented as columns):
$$G(X_e,X_r)=(\log X_e,\;X_r)'$$
$$\Xi(t)=(\Xi_1(t),\Xi_2(t))'\equiv\Big(\Delta B_1(t),\;e^{-a[t+\Delta t]}\int_t^{t+\Delta t}e^{as}\,dB_2(s)\Big)'$$
$$H_{t,\Delta t}(\Xi)=\Big(\Big[\nu-\frac{\sigma_1^2}{2}\Big]\Delta t+\sigma_1\Xi_1,\;\big[1-e^{-a\Delta t}\big]\big[\bar{x}-X_r(t)\big]+\sigma_2\Xi_2\Big)'.$$
[0137] Indeed, (.XI..sub.1(t),.XI..sub.2(t),.DELTA.Y(t))' is
normally distributed with mean (0,0,0)' because we can write it in
the form
.intg..sub.t.sup.t+.DELTA.tA(s)(dB.sub.1(s),dB.sub.2(s),dB.sub.3(s))'
for a deterministic matrix function A:
A(s)=diag(1, exp(-a[t+.DELTA.t-s]), 1).
[0138] Using the well-known result
$$E\left[\int_t^{t+\Delta t}\phi(s)\,dB_i(s)\int_t^{t+\Delta t}\psi(s)\,dB_j(s)\right]=\rho_{ij}\int_t^{t+\Delta t}\phi(s)\psi(s)\,ds$$
for deterministic integrands, .phi. and .psi., the covariance
matrix .SIGMA. 124 of (.XI..sub.1(t),.XI..sub.2(t),.DELTA.Y(t))' is
found to be
$$\Sigma\equiv\begin{bmatrix}\Sigma_{11}&\Sigma_{12}\\ \Sigma_{21}&\Sigma_{22}\end{bmatrix}=\begin{bmatrix}\Delta t & \dfrac{\rho_{12}}{a}\big[1-e^{-a\Delta t}\big] & \rho_{13}\Delta t\\[2pt] \dfrac{\rho_{12}}{a}\big[1-e^{-a\Delta t}\big] & \dfrac{1}{2a}\big[1-e^{-2a\Delta t}\big] & \dfrac{\rho_{23}}{a}\big[1-e^{-a\Delta t}\big]\\[2pt] \rho_{13}\Delta t & \dfrac{\rho_{23}}{a}\big[1-e^{-a\Delta t}\big] & \Delta t\end{bmatrix}.$$
[0139] Using these illustrative example results, the generation of
risk factor samples at each given time step t+.DELTA.t (by e.g.
risk factor model module 144) is reduced incrementally to that for
.XI.,.DELTA.Y,.DELTA.Z, as described above in relation to FIG.
12.
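The incremental simulation just derived can be sketched in code: draw a joint normal sample of (\Xi_1, \Xi_2, \Delta Y) with the covariance matrix \Sigma above (via a Cholesky factorization), then apply the increment formulas. All numeric parameter values below are illustrative assumptions, not values from the application.

```python
import math
import random

# Hypothetical model parameters (illustrative only).
nu, sigma1 = 0.05, 0.20              # drift and volatility of the equity level X_e
a, x_bar, sigma2 = 0.5, 0.03, 0.01   # mean-reversion speed, level, and vol of the rate X_r
rho12, rho13, rho23 = 0.3, 0.2, 0.1  # instantaneous correlations of (B1, B2, B3)
dt = 1.0 / 12                        # one monthly time step

# Covariance matrix of (Xi1, Xi2, dY), as derived above.
e = math.exp
cov = [
    [dt, rho12 / a * (1 - e(-a * dt)), rho13 * dt],
    [rho12 / a * (1 - e(-a * dt)), 1 / (2 * a) * (1 - e(-2 * a * dt)), rho23 / a * (1 - e(-a * dt))],
    [rho13 * dt, rho23 / a * (1 - e(-a * dt)), dt],
]

def cholesky(m):
    """Lower-triangular Cholesky factor of a small SPD matrix."""
    n = len(m)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(m[i][i] - s) if i == j else (m[i][j] - s) / L[j][j]
    return L

L = cholesky(cov)

def sample_increments(rng):
    """Draw one joint sample of (Xi1, Xi2, dY) ~ N(0, cov)."""
    z = [rng.gauss(0.0, 1.0) for _ in range(3)]
    return [sum(L[i][k] * z[k] for k in range(i + 1)) for i in range(3)]

def step(xe, xr, y, rng):
    """Advance (X_e, X_r, Y) by one time step using the increment formulas."""
    xi1, xi2, dy = sample_increments(rng)
    xe_new = xe * math.exp((nu - sigma1 ** 2 / 2) * dt + sigma1 * xi1)
    xr_new = xr + (1 - math.exp(-a * dt)) * (x_bar - xr) + sigma2 * xi2
    return xe_new, xr_new, y + dy
```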
[0140] Referring now to FIG. 13, there is shown an example block
diagram of a compound risk factor sampling module 200 comprising
the risk factor model module 144 of FIG. 12.
[0141] For illustrative purposes, the example compound risk factor
sampling module 200 receives as input three market risk factor
processes 202 (X.sup.1, X.sup.2, X.sup.3), two credit driver
processes 204 (Y.sup.1, Y.sup.2), and two idiosyncratic credit risk
factor processes 206 (Z.sup.1, Z.sup.2). The compound risk factor
sampling module 200 also receives covariance matrix .SIGMA. 124
(FIG. 12) as input, which is provided to the risk factor model
module 144. Generally, the risk factor model module 144
conditionally generates multiple systemic credit driver samples on
a given market risk factor sample, as generally described above in
relation to FIG. 12.
[0142] The risk factor model module 144 implements at least one
market risk factor model 208 for generating samples for at least
one specified market risk factor. In this example, the risk factor
model module 144 implements a risk factor model for each of the
three market risk factors 202. The models for the market risk
factor process may be any of the models described above, such as
for example, Brownian motions (with or without drift);
Ornstein-Uhlenbeck processes; Hull-White processes; Geometric
Brownian motions; Black-Karasinski processes.
[0143] An example market risk factor is X.sup.3, and a market risk
factor sample X.sub.n.sup.3 is generated by the risk factor model
module 144.
[0144] The risk factor model module 144 further implements CBM
models (e.g. CBM model 212) for generating systemic credit driver
samples of the systemic credit driver processes 204, as is
described in relation to CBM model 138 of FIG. 12. For example,
samples for systemic credit driver Y.sup.1 are generated according
to CBM model 212, which functions similarly to CBM model 138.
Specifically, CBM model 212 generates samples of credit driver
Y.sup.1 conditionally on the market risk factor samples generated
for the particular time step. This is achieved as described in
relation to FIG. 12, and illustratively shown by the risk factor
model module 144 passing the conditional parameters 128
(.mu.(.XI.(t)) and {tilde over (.SIGMA.)}) from the market risk
model 208 to the CBM model 212 for compound risk factor
sampling.
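The conditional parameters 128 passed from the market risk model 208 to the CBM model 212 follow the standard conditional multivariate normal formulas. A minimal sketch, with the covariance of (\Xi_1, \Xi_2, \Delta Y) partitioned into a 2x2 market block Sxx, cross block Syx, and credit-driver block Syy; all numeric entries are hypothetical:

```python
# Partitioned covariance for a 2-market-factor, 1-credit-driver example.
# Numeric values are illustrative assumptions only.
Sxx = [[0.0833, 0.0245],
       [0.0245, 0.0800]]   # covariance of the market variables (Xi1, Xi2)
Syx = [0.0167, 0.0082]     # cross-covariance of dY with (Xi1, Xi2)
Syy = 0.0833               # variance of dY

def inv2(m):
    """Inverse of a 2x2 matrix."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def conditional_params(xi):
    """Conditional mean mu(Xi) and variance Sigma~ of dY given a market
    sample Xi:  mu(Xi) = Syx Sxx^-1 Xi,  Sigma~ = Syy - Syx Sxx^-1 Sxy."""
    SxxInv = inv2(Sxx)
    # w = Sxx^-1 Sxy (Sxx is symmetric, so Sxy is Syx read as a column)
    w = [sum(SxxInv[i][j] * Syx[j] for j in range(2)) for i in range(2)]
    mu = w[0] * xi[0] + w[1] * xi[1]
    var = Syy - (Syx[0] * w[0] + Syx[1] * w[1])
    return mu, var
```

A conditional credit driver sample is then drawn as dY = mu + sqrt(var) * z for a standard normal z, which is the compound-sampling step performed per market scenario.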
[0145] Idiosyncratic credit risk factors 206 are modeled as
Brownian motions. For example, samples for idiosyncratic credit
risk factor Z.sup.1 evolve as is described in relation to FIG. 8 by
the risk factor model module 144 implementing idiosyncratic credit
risk factors model 214.
[0146] The compound risk factor sampling module 200 further
receives a sampling scheme, or a set of parameter values for M 216,
S 218 and (optionally) I 220. These parameter values indicate the
number M of market risk factor samples, the number S of systemic
credit driver samples for each of the M market risk factor samples,
and the number I of idiosyncratic credit risk factor samples for
each of the S of systemic credit driver samples, that are to be
generated at each time step of the simulation. Details of how these
sample size values M, S, I may be optimally determined will be
described herein in accordance with at least one embodiment.
[0147] The compound risk factor sampling module 200 uses the
resulting set of risk factor samples in defining risk factor
scenarios.
[0148] Referring now to FIG. 14A, there is shown a flowchart
diagram illustrating a computer-implemented method 300 for
generating an integrated market and credit loss distribution for
the purpose of calculating one or more risk measures associated
with a portfolio of instruments by performing a simulation, in
accordance with at least one embodiment described herein. The acts
of the method 300 are performed by a computer comprising at least
one processor and at least one memory.
[0149] At 305, at least a first time horizon for performing the
simulation is identified. The time horizon comprises at least one
time step, and may comprise a plurality of time steps. Furthermore,
a simulation may be performed for multiple time horizons by
repeatedly performing 320 to 365 in order to generate risk measures
for each time horizon.
[0150] At 310, data identifying a market risk factor process X, a
systemic credit driver process Y, and an idiosyncratic credit risk
factor process Z is received as input. The market risk factor
process X is a vector-valued process indexed by individual scalar
risk factors, the systemic credit driver process Y is a
vector-valued process indexed by individual scalar risk factors,
and the idiosyncratic credit risk factor process Z is a
vector-valued process indexed by counterparty names in the
portfolio of instruments.
[0151] The data identifying processes X, Y, and Z comprises, for
each process X, Y and Z, a start value or initial value, at least
one function representing a model (e.g. Brownian Motions (with or
without drift); Ornstein-Uhlenbeck processes; Hull-White processes;
Geometric Brownian Motions; Black-Karasinski processes), and zero
or more parameters for the model associated with the respective
process.
[0152] In addition, at 310, data comprising one or more co-variance
matrices (e.g. .SIGMA. 124) is received. As described above, the one
or more co-variance matrices defines the joint evolution of X and Y
over the first time horizon. If the time horizon comprises multiple
time steps, one of the one or more co-variance matrices is
associated with each of the time steps, and accordingly, defines
the joint evolution of X and Y over the respective time step.
[0153] At 315, a first parameter M, a second parameter S, and a
third parameter I are identified. These parameter values define a
compound risk factor sampling scheme. Specifically, M defines a
desired number of market risk factor samples, S defines a desired
number of systemic credit driver samples that are to be generated
for each of M market risk factor samples, and I defines a desired
number of idiosyncratic credit risk factor samples to be generated
for each of S systemic credit driver samples. Accordingly, the
sampling scheme will define the desired number of risk factor
samples for the time horizon. More particularly, M is a value
greater than 0, S is a value greater than 1, and I is a value
greater than 0, in at least one embodiment. As shown in FIG. 13,
for example, the parameter values M 216, S 218, and I 220 are
provided to the compound risk factor sampling module 200 to
generate MSI risk factor scenarios.
[0154] Generally, acts 320 to 350 relate to the generation of N=MSI
risk factor scenarios for the time horizon. However, if the time
horizon contains multiple time steps, then acts 320 to 345 are
repeated until the end of the given time step is also the end of
the time horizon identified at 305. In one example embodiment, the
time horizon has two time steps, such that acts 320 to 345 will be
repeated twice, in generating the N scenarios for the time
horizon.
[0155] For ease of reference, the following indexing scheme will be
used to refer to particular risk factor samples: [0156] X.sub.m is
the m-th of the M market risk factor samples; [0157] Y.sub.ms is
the s-th of the S systemic credit driver samples occurring with
market risk factor sample X.sub.m; and [0158] Z.sub.msi is the i-th
of the I idiosyncratic credit factor samples occurring with market
risk factor sample X.sub.m and systemic credit driver sample
Y.sub.ms.
[0159] The N=MSI scenarios are defined by N sets of X, Y, and Z
values (X.sub.m,Y.sub.ms,Z.sub.msi) for all m from 1 to M, for all
s from 1 to S, and for all i from 1 to I. In one example
embodiment, these N scenarios for the time horizon will be
generated after performing acts 320 to 345 twice, once for each
time step. Acts 320 to 345 will be described generally with
reference to a given time step.
[0160] At 320, for each m from 1 to M, a sample, having index m, of
a vector .XI.(t) (e.g. .XI.(t) 120 of FIG. 12) of centred normal
random variables is generated.
[0161] At 325, for each m from 1 to M and for each s from 1 to S, a
random sample, having index ms, of .DELTA.Y(t) from a conditional
distribution N(.mu.(.XI.(t)), {tilde over (.SIGMA.)}) is generated.
The conditional distribution is derived from the sample of the
vector .XI.(t) having index m, and from the one or more co-variance
matrices received at 310. Again if the time horizon contains
multiple time steps, then the co-variance matrix used is the one
associated with the given time step. As shown in FIG. 12, the
co-variance matrix is used to derive the conditional covariance
matrix {tilde over (.SIGMA.)} used for the above defined
distribution of Y. This results in MS samples for the increment
.DELTA.Y(t).
[0162] At 330, for each m from 1 to M and for each s from 1 to S
and for each i from 1 to I, a random sample, having index msi, of
an increment of Z (.DELTA.Z) is independently generated. The generation
of the samples for .DELTA.Z is generally as is described in
relation to FIGS. 8 and 12 above. This results in MSI samples for
the increment .DELTA.Z.
[0163] At 335, for each of the M samples of the vector .XI.(t), a
market risk factor sample X.sub.m, m.epsilon.{1, 2, . . . , M}, is
calculated for a given time step using the sample having the index
m for the vector .XI.(t). The market risk factor sample X.sub.m is
calculated as is generally described in relation to FIG. 12. That
is, X.sub.m=X(t+.DELTA.t)=G.sup.-1(G(X(t))+H(.XI.(t);X(t))), for
the mth sample of the vector .XI.(t), where the end of the given
time step is t+.DELTA.t. The market risk factor samples are
generated based on the at least one function associated with X
(i.e. the given model which is used to define G and H) and the
market risk factor sample obtained at the previous time step (i.e.
X(t)). If the given time step is the first time step of the time
horizon, then the previous market risk factor sample is the start
value received at 310. This results in the generation of M market
risk samples X.sub.m for m.epsilon.{1, 2, . . . , M} for the given time
step.
[0164] At 340, for each of the MS samples of .DELTA.Y(t), a
systemic credit driver sample Y.sub.ms, m.epsilon.{1, 2, . . . ,
M}, and s.epsilon.{1, 2, . . . , S}, is calculated for a given time
step using the ms-th sample of .DELTA.Y(t). The systemic credit
driver sample Y.sub.ms is calculated as is generally described in
relation to FIG. 12. Systemic credit driver samples are based on
the function associated with Y and the systemic credit driver
samples obtained at the previous time step (i.e. Y(t)). If the
given time step is the first time step of the time horizon, then
each of the previous systemic credit driver samples is the start
value received at 310. This results in the generation of MS systemic
credit driver samples Y.sub.ms, for m.epsilon.{1, 2, . . . , M},
and s.epsilon.{1, 2, . . . , S}, for the given time step. That is,
S systemic credit driver samples Y.sub.ms are generated
conditionally on each of the M market risk samples X.sub.m.
[0165] At 345, for each of the MSI samples for .DELTA.Z, an
idiosyncratic credit risk factor sample Z.sub.msi, m.epsilon.{1, 2,
. . . , M}, s.epsilon.{1, 2, . . . , S}, and i.epsilon.{1, 2, . . .
, I}, is calculated for a given time step using the msi-th sample
of .DELTA.Z. The idiosyncratic credit risk factor sample Z.sub.msi
is calculated as is generally described in relation to FIGS. 8 and
12. That is, idiosyncratic credit risk factor samples are based on
the function associated with Z and the idiosyncratic credit risk
factor samples obtained at the previous time step (i.e. Z(t)). If
the given time step is the first time step of the time horizon,
then each of the previous idiosyncratic credit risk factor samples
is the start value received at 310. This results in MSI
idiosyncratic credit risk factor samples, for m.epsilon.{1, 2, . .
. , M}, s.epsilon.{1, 2, . . . , S}, and i.epsilon.{1, 2, . . . ,
I}. That is, I idiosyncratic credit risk factor samples Z.sub.msi
are generated for each of the generated S systemic credit driver
samples Y.sub.ms.
[0166] If the end of the given time step is not the end of the time
horizon, then steps 320 to 345 are repeated for the next time step.
This may result in the generation of intermediary market risk
factor samples, systemic credit driver samples, and idiosyncratic
credit risk samples, which may be stored in at least one memory
and/or at least one storage device.
[0167] At 350, N=MSI risk factor scenarios are generated for the
time horizon. The N scenarios are defined by N sets of X, Y, and Z
values (X.sub.m,Y.sub.ms,Z.sub.msi) for all m from 1 to M, for all
s from 1 to S, and for all i from 1 to I. Note that the values
(X.sub.m,Y.sub.ms,Z.sub.msi) are the samples for a given time step,
with the end of the given time step equal to the end of the time
horizon. Put another way, the scenarios generated at 350 in at
least one embodiment are a result of a simulation performed over
the time horizon.
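The three-level sampling of acts 320 to 350 can be sketched as a nested loop. The sampler callables below are hypothetical placeholders for the model-specific generators described above; the sketch shows only the compound structure, in which each market sample is shared by S credit samples and each of those by I idiosyncratic samples.

```python
import random

def compound_scenarios(M, S, I, sample_xi, sample_dy_given_xi, sample_dz, rng):
    """Generate the N = M*S*I compound risk factor scenarios for one time step.

    sample_xi, sample_dy_given_xi, and sample_dz are placeholder callables
    standing in for the samplers of acts 320, 325, and 330 respectively.
    """
    scenarios = []
    for m in range(M):
        xi = sample_xi(rng)                   # act 320: market sample m
        for s in range(S):
            dy = sample_dy_given_xi(xi, rng)  # act 325: conditional credit sample ms
            for i in range(I):
                dz = sample_dz(rng)           # act 330: idiosyncratic sample msi
                scenarios.append((xi, dy, dz))
    return scenarios
```

The key property is visible in the output: M*S*I scenarios contain only M distinct market samples and M*S distinct (market, credit) pairs, which is what allows only M exposure tables to be priced.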
[0168] Referring now to FIG. 15, there is shown a graphical
representation of the resulting set of risk factor scenarios.
Specifically, the resulting set of risk factor scenarios may be
illustrated as a three-level regular unrooted tree. Each node (e.g.
node 408) on the tree indicates a risk factor sample. A set of
samples of market risk factor X (such as e.g. set 402), the set
being of size M, is shown as the first level of the tree.
[0169] Then, for each market risk factor sample X.sub.m, where
m.epsilon.{1, 2, . . . , M} (e.g. node 410) there are S conditional
samples of systemic credit driver Y generated (such as e.g. set
404). This results in a total set of systemic credit driver samples
of size MS, or (Y.sub.m1, . . . , Y.sub.mS) for each m.epsilon.{1,
2, . . . , M} (i.e. S samples of Y per sample of X) and is shown as
the second level of the tree.
[0170] For each of the market risk factor samples m.epsilon.{1, 2,
. . . , M} and a corresponding systemic credit driver sample from
the generated systemic credit driver samples s.epsilon.{1, 2, . .
. , S}, there are I idiosyncratic credit risk factor samples
generated (such as e.g. set 406). This results in a total set of
idiosyncratic risk factor samples of size MSI (i.e. I samples per
MS market risk factor--systemic credit driver sample) and is shown
as the third level of the tree.
[0171] Referring back to FIG. 14A, at 355, N=MSI simulated loss
samples are computed by simulating the portfolio over the N risk
factor scenarios over the time horizon. The simulated loss samples
may generally be computed as described in relation to FIG. 3, using
the N sets of X, Y, and Z values for all m from 1 to M, for all s
from 1 to S, and for all i from 1 to I that define the N risk
factor scenarios. For compound risk factor sampling, only M
separate simulated exposure tables (e.g. table 38) are generated
by, for example, the pricing module 36 (i.e. a simulated exposure
table for each distinct market risk factor sample) in order to
provide N=MSI loss samples. In contrast, following the "two-tiered"
approach, MS exposure tables would have been calculated, and in the
"simple sampling" or "brute force" approach, MSI exposure tables
would have been calculated.
[0172] Each of the N=MSI loss samples may be denoted as
L(X.sub.m,Y.sub.ms,Z.sub.msi), in respect of a given m, s and i.
Using the N=MSI loss samples, the empirical unconditional loss
distribution function {circumflex over (F)} may be obtained. The
distribution may also be stored. For any loss value l,
{circumflex over (F)}(l) is the proportion of the simulated loss
samples which are less than or equal to l:
\hat{F}(l) = \frac{1}{MSI} \sum_{m=1}^{M} \sum_{s=1}^{S} \sum_{i=1}^{I} 1\{L(X_m, Y_{ms}, Z_{msi}) \le l\}
where 1{ . . . } is the indicator of the event in braces, taking
the value 1 if the event occurs, or 0 if the event does not
occur.
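The empirical unconditional distribution can be computed directly from the loss samples; a minimal sketch of the formula above:

```python
def empirical_cdf(losses):
    """Return F-hat: the fraction of simulated loss samples <= l."""
    n = len(losses)
    def F(l):
        # Indicator sum over all N = M*S*I loss samples.
        return sum(1 for loss in losses if loss <= l) / n
    return F
```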
[0173] The empirical unconditional loss distribution function
{circumflex over (F)} may then be used to calculate one or more
risk measures, which may be used for evaluating risk associated
with the portfolio.
[0174] Accordingly, at 360, at least one risk measure for the
portfolio is calculated from one or more characteristics of the
empirical unconditional loss distribution {circumflex over (F)}.
For example, a risk measure may be one of: a mean, a variance, a
value at risk equaling a specified p-quantile, an unexpected loss
equaling a specified p-quantile, and an expected shortfall equaling
a specified p-quantile as previously defined.
[0175] At 365, the at least one risk measure calculated at 360 is
stored and/or output for use in evaluating the risk associated with
the portfolio.
[0176] In the "two-tiered" approach, joint samples of market risk
factors and systemic credit driver samples are taken in a manner
that accounts for the correlation between changes in market risk
factors and systemic credit drivers. For a desired number of
distinct systemic credit driver samples (e.g. an increased number
relative to other risk factors may be desired to accurately
approximate the loss distribution for certain portfolios),
generation of joint samples will require that a corresponding
market risk factor sample be generated for each systemic credit
driver sample. This also holds for a "simple sampling"
approach.
[0177] Accordingly, when it is considered necessary to generate a
large number of distinct systemic credit driver samples, a
correspondingly large size M of distinct market risk factor samples
is also generated when computing sample losses. Computing sample
losses for an increased number of distinct market risk factor
samples may increase cost (e.g. is computationally expensive) much
more significantly relative to the increase in cost when the number
of distinct systemic credit driver samples and/or the number of
distinct idiosyncratic credit risk factor samples is increased.
This may be due in part to, for example, the increase in the number
of derivative positions of a portfolio, which must be valued for
each of the distinct market risk factor samples generated.
[0178] In contrast, with a compound risk factor sampling approach,
it becomes possible to sample market risk factors and systemic
credit drivers in a manner that allows the number of distinct
market risk factor samples (i.e. M) and the number of distinct
systemic credit driver samples (i.e. MS) in generated scenarios to
be different. Accordingly, an increase in the number of distinct
systemic credit driver samples does not require a corresponding
increase in the number of distinct market risk factors samples
required.
[0179] At least one embodiment described herein, as described with
reference to FIG. 14A for example, relates to a specific
implementation of a system and method that not only allows the
number of distinct market risk factor samples (i.e. M) and the
number of distinct systemic credit driver samples (i.e. MS) in
generated scenarios to be different, but also further ensures that
risk factor samples are generated consistent with the correlation
between changes in market risk factors and systemic credit
drivers.
Further Variant Embodiments
[0180] Embodiments of the method 300 described with reference to
FIG. 14A may be generally regarded as describing a pure Monte Carlo
(MC) approach, in that random sampling is carried out in all three
"tiers" in the performance of the sequence of method acts. In
particular, at 330 of FIG. 14A, the increment .DELTA.Z was randomly
sampled, for use at 345, in generating idiosyncratic credit risk
factor samples. However, sampling of the idiosyncratic credit risk
factor Z is not essential, and the empirical unconditional loss
distribution {circumflex over (F)} may be determined in alternative
ways. For example, in variant embodiments, an analytic valuation or
approximation may be employed to determine each of a number of
conditional loss distributions F.sub.X.sub.m.sub.,Y.sub.ms, which
may in turn be used to compute the empirical unconditional loss
distribution {circumflex over (F)}.
[0181] The above formula for the empirical unconditional loss
distribution {circumflex over (F)} may be rearranged to:

\hat{F}(l) = \frac{1}{MSI} \sum_{m=1}^{M} \sum_{s=1}^{S} \sum_{i=1}^{I} 1\{L(X_m, Y_{ms}, Z_{msi}) \le l\} \equiv \frac{1}{MS} \sum_{m=1}^{M} \sum_{s=1}^{S} F_{X_m, Y_{ms}}(l)
[0182] F.sub.X.sub.m.sub.,Y.sub.ms denotes an empirical conditional
loss distribution function, conditional on the market risk
factor--systemic credit driver scenario X.sub.m,Y.sub.ms. In a pure
MC approach (as described with reference to FIG. 14A), the
conditional loss distribution function F.sub.X.sub.m.sub.,Y.sub.ms
is:
F_{X_m, Y_{ms}}(l) = \frac{1}{I} \sum_{i=1}^{I} 1\{L(X_m, Y_{ms}, Z_{msi}) \le l\}
where 1{ . . . } is the indicator of the event in braces, taking
the value 1 if the event occurs, or 0 if the event doesn't
occur.
[0183] In a variant embodiment, an analytic valuation or
approximation for F.sub.X.sub.m.sub.,Y.sub.ms might be available
and may be used, as described further with reference to FIG.
14B.
[0184] Referring now to FIG. 14B, there is shown a flowchart
diagram illustrating a computer-implemented method 300 for
generating an integrated market and credit loss distribution for
the purpose of calculating one or more risk measures associated
with a portfolio of instruments by performing a simulation, in
accordance with at least one variant embodiment.
[0185] Act 305 is generally as is described in relation to FIG.
14A. Further, act 309 is similar to the act performed at 310 of
FIG. 14A, except that only data identifying a market risk factor
process X, and a systemic credit driver process Y, is received as
input. More particularly, data identifying an idiosyncratic credit
risk factor process Z is not required. Accordingly, at 316, only
parameter values for M and S are identified; act 316 is otherwise
similar to the act performed at 315 of FIG. 14A.
[0186] Acts 320 to 351 are similar to acts 320 to 350 of FIG. 14A,
except that only MS risk factor scenarios are generated for the
time horizon, and accordingly acts 330 and 345 described with
reference to FIG. 14A are essentially eliminated in this at least
one variant embodiment. In respect of embodiments described with
reference to FIG. 14B, the MS scenarios are defined by MS sets of X
and Y values (X.sub.m,Y.sub.ms) for all m from 1 to M, and for all
s from 1 to S.
[0187] At act 352, for each of the MS scenarios defined by MS sets
of X and Y values (X.sub.m,Y.sub.ms) for all m from 1 to M, and for
all s from 1 to S, a conditional loss distribution
F.sub.X.sub.m.sub.,Y.sub.ms is analytically derived. This results
in the generation of MS conditional loss distributions
F.sub.X.sub.m.sub.,Y.sub.ms for the first time horizon.
[0188] As previously noted, in at least one variant embodiment, an
analytic valuation or approximation for F.sub.X.sub.m.sub.,Y.sub.ms
is used. For example, each empirical conditional loss distribution
F.sub.X.sub.m.sub.,Y.sub.ms may be approximated according to one of
a number of analytic techniques, such as the Law of Large Numbers
(LLN) or Central Limit Theorem (CLT), if the portfolio is
sufficiently large and fine grained. The notion of fine granularity
in finance is that no counterparty (or small number of
counterparties) contributes an overwhelming amount to the loss
distribution. In more general contexts of mathematical statistics,
this is known as uniform infinitesimality. This is justified by the
conditional independence of the counterparties of the portfolio,
conditional on a given market risk factor sample and systemic
credit driver sample pair.
[0189] Alternatively, by the same independence property, the
conditional loss distributions F.sub.X.sub.m.sub.,Y.sub.ms can be
calculated as the convolution of all the individual counterparty's
loss distributions, for example, using the Fast Fourier Transform
(FFT) after discretizing the loss values onto a common lattice.
[0190] Accordingly, by way of example, the following methods may be
employed to calculate the conditional loss distributions,
F.sub.X.sub.m.sub.,Y.sub.ms, at 352:
[0191] LLN
[0192] CLT
[0193] convolution via FFT
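As a sketch of the convolution approach, assuming each counterparty's conditional loss distribution has been discretized onto a common unit lattice: the portfolio's conditional loss PMF is the convolution of the (conditionally independent) counterparty PMFs. The convolution is written as a direct sum for clarity; at realistic lattice sizes this is the step one would accelerate with an FFT.

```python
def convolve(p, q):
    """Discrete convolution of two loss PMFs on a common unit lattice."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def portfolio_loss_pmf(counterparty_pmfs):
    """Conditional portfolio loss PMF: convolution of all counterparty
    loss PMFs, valid because losses are conditionally independent given
    the (market, credit driver) scenario."""
    dist = [1.0]  # point mass at zero loss
    for pmf in counterparty_pmfs:
        dist = convolve(dist, pmf)
    return dist
```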
[0194] At 353, the unconditional loss distribution {circumflex over
(F)} is calculated as a mixture (e.g. the mean) of the MS
conditional loss distributions, such that:
\hat{F} = \frac{1}{MS} \sum_{m=1}^{M} \sum_{s=1}^{S} F_{X_m, Y_{ms}}
[0195] Finally, acts 360 and 365 are generally as described with
reference to FIG. 14A.
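The mixture step at 353 can be sketched directly: the unconditional distribution is the equally weighted mean of the MS conditional distributions, each of which may have been obtained analytically (e.g. via LLN, CLT, or FFT convolution).

```python
def mixture_cdf(conditional_cdfs):
    """Unconditional F-hat(l) as the equally weighted mean of the MS
    conditional loss distributions F_{Xm,Yms}(l)."""
    n = len(conditional_cdfs)
    def F(l):
        return sum(cdf(l) for cdf in conditional_cdfs) / n
    return F
```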
[0196] Referring now to FIG. 14C, there is shown a flowchart
diagram illustrating a computer-implemented method 300 for
generating an integrated market and credit loss distribution for
the purpose of calculating one or more risk measures associated
with a portfolio of instruments by performing a simulation, in
accordance with at least one variant embodiment described
herein.
[0197] Embodiments of method 300 as described in FIG. 14C relate
to a "hybrid" of the embodiments generally described with reference
to FIGS. 14A and 14B. Generally, in the hybrid case, which may be
regarded as a combination of an MC technique employed in
embodiments described with reference to FIG. 14A and an analytic
technique employed in embodiments described with reference to FIG.
14B, the portfolio is partitioned into several non-overlapping
sub-portfolios, in each of which, a distinct method is used to
calculate the conditional loss distributions
F.sub.X.sub.m.sub.,Y.sub.ms. In particular, an MC technique is used
to determine the conditional loss distributions for at least a
first sub-portfolio, and an analytic approach is used to determine
the conditional loss distributions for at least a second
sub-portfolio. The resulting conditional loss distributions are
convoluted together, for example, using FFT.
[0198] In FIG. 14C, acts 305, 310 and 311 are as generally
described with reference to acts 305, 310, and 315 of FIG. 14A.
[0199] At 312, the portfolio of interest is partitioned into a
first sub-portfolio and a second sub-portfolio. Only two
sub-portfolios are shown for ease of explanation; however, it will
be understood that the portfolio may be partitioned into more than
two non-overlapping groups in variant embodiments. Generally for
each of the sub-portfolios, MS empirical conditional loss
distributions are calculated using any of the previously identified
methods, for example. These may include, for example, MC, LLN, CLT
and convolution via FFT. By way of illustration, FIG. 14C will be
described with respect to an embodiment wherein the MS empirical
conditional loss distributions for the first sub-portfolio are
calculated via MC, and the MS empirical conditional loss
distributions for the second sub-portfolio are calculated via an
analytic technique (e.g. one of LLN, CLT and convolution via
FFT).
[0200] At 313, for the first sub-portfolio, MSI risk factor
scenarios for the time horizon are generated. The MSI risk factor
scenarios may be generated by, for example, performing the acts 315
to 350 as described with reference to FIG. 14A, for the first
sub-portfolio. The MSI scenarios for the first sub-portfolio are
defined by N sets of X, Y, and Z values
(X.sub.m,Y.sub.ms,Z.sub.msi) for all m from 1 to M, for all s from
1 to S, and for all i from 1 to I. X.sub.m, Y.sub.ms, and Z.sub.msi
for all m from 1 to M, for all s from 1 to S, and for all i from 1
to I comprise the risk factor samples generated for the first
sub-portfolio, generated at 335, 340, and 345 of FIG. 14A, for
example.
[0201] At 314, MSI simulated loss samples for the first
sub-portfolio are computed by simulating the first sub-portfolio
over the MSI risk factor scenarios. The simulated loss samples may
be generally computed as described with reference to FIG. 3, using
the MSI sets of X, Y, and Z values (X.sub.m,Y.sub.ms,Z.sub.msi) for
all m from 1 to M, for all s from 1 to S, and for all i from 1 to I
that define the N=MSI risk factor scenarios. The N=MSI loss samples
may be denoted as L(X.sub.m,Y.sub.ms,Z.sub.msi).
[0202] At 317, for each m.epsilon.{1, 2, . . . , M} and
s.epsilon.{1, 2, . . . , S}, an empirical conditional loss
distribution function, F.sub.X.sub.m.sub.,Y.sub.ms, is calculated
based on the simulated loss samples, L(X.sub.m, Y.sub.ms,
Z.sub.msi). For any loss value, l, F.sub.X.sub.m.sub.,Y.sub.ms(l)
is the proportion of the simulated loss values which are less than
or equal to a given value, l; viz.
F_{X_m, Y_{ms}}(l) = \frac{1}{I} \sum_{i=1}^{I} 1\{L(X_m, Y_{ms}, Z_{msi}) \le l\}
where 1{ . . . } is the indicator of the event in braces, taking
the value 1 if the event occurs, or 0 if the event doesn't
occur.
[0203] This results in MS conditional loss distribution functions
F.sub.X.sub.m.sub.,Y.sub.ms.sup.P1, for each m from 1 to M and each
s from 1 to S, for the first sub-portfolio.
[0204] At 319, the risk factor samples obtained in relation to the
first sub-portfolio are re-used in the processing of the second
sub-portfolio to produce MS risk factor scenarios for the second
sub-portfolio. Specifically, MS risk factor scenarios for the
second sub-portfolio are defined by MS sets of X and Y values
(X.sub.m,Y.sub.ms) for all m from 1 to M, and for all s from 1 to S
obtained for the first sub-portfolio.
[0205] At 321, the act performed at 352 as generally described with
reference to FIG. 14B is performed to analytically derive a
conditional loss distribution for each of the MS risk factor scenarios
for the second sub-portfolio. This results in the generation of MS
conditional loss distributions F.sub.X.sub.m.sub.,Y.sub.ms.sup.P2,
for each m from 1 to M and each s from 1 to S, for the second
sub-portfolio.
[0206] At 323, the MS conditional loss distributions
F.sub.X.sub.m.sub.,Y.sub.ms.sup.P1 generated at 317 for the first
sub-portfolio are convoluted via FFT with the MS conditional loss
distributions F.sub.X.sub.m.sub.,Y.sub.ms.sup.P2 generated at 321
for the second sub-portfolio. More specifically, for each m from 1
to M and each s from 1 to S, MS empirical conditional loss
distributions F.sub.X.sub.m.sub.,Y.sub.ms are calculated for the
portfolio by convoluting, for example via FFT, the ms-th
conditional loss distribution F.sub.X.sub.m.sub.,Y.sub.ms.sup.P1
for said first sub-portfolio with the ms-th conditional loss
distribution F.sub.X.sub.m.sub.,Y.sub.ms.sup.P2 for said second
sub-portfolio.
[0207] At act 354, the unconditional loss distribution {circumflex
over (F)} for the portfolio is calculated as a mixture (e.g. a
mean) of the MS conditional loss distributions, such that:
\hat{F} = \frac{1}{MS} \sum_{m=1}^{M} \sum_{s=1}^{S} F_{X_m, Y_{ms}}
[0208] Acts 360 and 365 are performed as generally described with
reference to FIGS. 14A and 14B.
Sample Size Determination
[0209] In another broad aspect, systems and methods to facilitate
the selection of appropriate risk factor sample size values (e.g.
M, S and optionally I) are provided. In at least one embodiment,
appropriate values can be automatically selected given a set of
performance requirements.
[0210] For example, in the context of embodiments described herein
with reference to FIGS. 14A through 14C, optimal values for M, S
and I can be computed to be provided as the parameters identified
at act 315 of FIG. 14A and act 311 of FIG. 14C. Similarly, optimal
values computed for M and S may also be provided as the parameters
identified at 316 of FIG. 14B.
[0211] The primary performance criterion is the variability of the
resulting estimates of the one or more risk measures obtained from
the empirical loss distribution {circumflex over (F)}. Examples of
risk measures may include, without limitation: a mean, a variance,
a value at risk equaling a specified p-quantile, an unexpected loss
comprising a value at risk equaling a specified p-quantile less a
mean, and an expected shortfall comprising an expected value of
losses that exceed a specified p-quantile as previously
defined.
[0212] The VaR l.sub.p (the pth quantile) of the loss distribution
{circumflex over (F)} can be estimated from N loss samples by the
empirical p-quantile {circumflex over (l)}.sub.p, which is defined
as:
$$\hat{l}_p = L_{(\lfloor Np \rfloor + 1)}$$
where L.sub.(k) is the kth order statistic, i.e., the kth smallest
value of the N loss samples.
[0213] For example, if N=100 then the 97.5th percentile (p=0.975) is
estimated by the kth order statistic L.sub.(k), where
k=floor(Np)+1=floor(97.5)+1=98. In this example, the 97.5th
percentile is estimated by the third largest loss of the N loss
samples.
[0214] As the size N of loss samples becomes large, the sample
quantile {circumflex over (l)}.sub.p of an m-dependent sequence has
variance Var({circumflex over (l)}.sub.p) defined as follows:
$$\mathrm{Var}(\hat{l}_p) = \frac{\mathrm{Var}(\hat{F}(l_p))}{f(l_p)^2}$$
where f is the probability density of the loss distribution.
[0215] Using the Law of Total (Conditional) Variance, it can be
shown that:
$$\mathrm{Var}(\hat{F}(l_p)) = \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} + \frac{\nu_3^0}{MSI}$$
for appropriate coefficients .nu..sub.1.sup.0, .nu..sub.2.sup.0 and
.nu..sub.3.sup.0.
[0216] Defining Var({circumflex over (F)}({circumflex over
(l)}.sub.p)).ident..sigma..sup.2, the following variance
decomposition result is obtained.
[0217] Proposition 1. There exist nonnegative constants
.nu..sub.1.sup.0, .nu..sub.2.sup.0, .nu..sub.3.sup.0, which do not
depend on M, S or I, such that
$$\sigma^2 = \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} + \frac{\nu_3^0}{MSI}. \qquad (1)$$
[0218] It will be understood that the last term is absent for
embodiments applying a pure analytic technique (see e.g. FIG. 14B).
In all cases, i.e. pure MC (see e.g. FIG. 14A), pure analytic (see
e.g. FIG. 14B), or analytic-MC hybrid (see e.g. FIG. 14C), the
coefficients .nu..sub.1.sup.0 and .nu..sub.2.sup.0 are defined as:
$$\nu_1^0 = \mathrm{Var}(E[F_{X,Y}(l_p) \mid X]),$$
$$\nu_2^0 = E[\mathrm{Var}(F_{X,Y}(l_p) \mid X)],$$
with the expression for .nu..sub.3.sup.0 depending on the particular
technique.
[0219] For a pure MC method, the term .nu..sub.3.sup.0 is defined
as:
$$\nu_3^0 = p - E[\{F_{X,Y}(l_p)\}^2].$$
[0220] The term .nu..sub.3.sup.0 is not applicable for the
pure analytic method.
[0221] For an analytic-MC hybrid method, let F.sub.X,Y.sup.A denote
the conditional loss distribution for the part of the portfolio
using analytic methods and let F.sub.X,Y.sup.MC denote the
conditional loss distribution for the part of the portfolio using
the MC method. Thus F.sub.X,Y=F.sub.X,Y.sup.A*F.sub.X,Y.sup.MC,
where * is a convolution of cumulative distribution functions such
that
$$(F_{X,Y}^{A} * F_{X,Y}^{MC})(l) = \int F_{X,Y}^{A}(l - l')\, dF_{X,Y}^{MC}(l').$$
[0222] Then, in a hybrid case, the term .nu..sub.3.sup.0 is defined
as:
$$\nu_3^0 = E[((F_{X,Y}^{A})^2 * F_{X,Y}^{MC})(l_p)] - E[\{F_{X,Y}(l_p)\}^2]$$
where (F.sub.X,Y.sup.A).sup.2 is treated as a cumulative distribution
function and * again denotes the convolution of cumulative
distribution functions.
[0223] Formally, the analytic case is just the MC case with I set
to infinity.
[0224] Therefore, the variance of the estimated p-quantile (i.e.
the estimated VaR) is related to the risk factor sample sizes as
follows
$$\mathrm{Var}(\hat{l}_p) = \frac{1}{f(l_p)^2} \left( \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} + \frac{\nu_3^0}{MSI} \right) \qquad (2a)$$
[0225] In practice, the values of the coefficients
.nu..sub.1.sup.0, .nu..sub.2.sup.0, .nu..sub.3.sup.0 and the
density f(l.sub.p) are estimated from an initial pilot simulation
with M, S and I chosen to be large.
[0226] Once these values have been obtained (e.g. by a pilot
simulation module 545 of FIG. 16), Equation 2a can be used to
determine parameters M, S and I that will provide quantile
estimates with the predetermined level of precision (e.g. an
acceptable level for the given application) on a regular basis.
[0227] In summary, determining a desired sampling scheme generally
involves identifying an acceptable variance level for a risk
measure, computing the variance of estimates of that risk measure,
and then determining M, S and I such that the variance is within
the acceptable level.
[0228] For example, if the risk estimate is the VaR, then the
variance of that particular risk measure may be computed using
Equation 2a. Then M, S and I are determined such that the variance
of the estimated VaR is within an acceptable tolerance level.
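By way of a non-limiting illustration, such a determination may be sketched as a brute-force search over candidate schemes; the coefficient values, the density value, and the ranking of feasible schemes by total scenario count MSI in the sketch below are illustrative assumptions only, not the specification's procedure:

```python
def var_of_var_estimate(m, s, i, v1, v2, v3, f_lp):
    """Variance of the estimated p-quantile per Equation 2a."""
    return (v1 / m + v2 / (m * s) + v3 / (m * s * i)) / f_lp ** 2

def smallest_scheme_within_tolerance(v1, v2, v3, f_lp, tol, max_n=32):
    """Return the (M, S, I) with the fewest total scenarios M*S*I whose
    estimated VaR variance (Equation 2a) is within the tolerance."""
    best = None
    for m in range(1, max_n + 1):
        for s in range(1, max_n + 1):
            for i in range(1, max_n + 1):
                if var_of_var_estimate(m, s, i, v1, v2, v3, f_lp) <= tol:
                    if best is None or m * s * i < best[0]:
                        best = (m * s * i, (m, s, i))
    return best[1] if best else None
```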
[0229] As a further example, the mean of the loss distribution can
be estimated from N=MSI sampled losses by the sample mean
$$\hat{\mu} = \frac{1}{MSI} \sum_{m=1}^{M} \sum_{s=1}^{S} \sum_{i=1}^{I} L(X_m, Y_{ms}, Z_{msi})$$
[0230] Similar to the estimated p-quantile, the variance of the
sample mean can be expressed as:
$$\mathrm{Var}(\hat{\mu}) = \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} + \frac{\nu_3^0}{MSI} \qquad (2b)$$
for appropriate coefficients .nu..sub.1.sup.0, .nu..sub.2.sup.0 and
.nu..sub.3.sup.0. In this case, the coefficients are given by
$$\nu_1^0 = \mathrm{Var}(E[L(X,Y,Z) \mid X]),$$
$$\nu_2^0 = E[\mathrm{Var}(\Lambda(X,Y) \mid X)] \quad \text{where} \quad \Lambda(X,Y) = E[L(X,Y,Z) \mid X,Y],$$
and
$$\nu_3^0 = E[\mathrm{Var}(L(X,Y,Z) \mid X,Y)].$$
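The decomposition of Equation 2b can be checked numerically for a toy additive loss L(X,Y,Z)=X+Y+Z with X, Y and Z independent standard normals, for which the coefficients reduce to .nu..sub.1.sup.0=.nu..sub.2.sup.0=.nu..sub.3.sup.0=1 (this toy loss model is for illustration only and is not the loss model of the embodiments described herein):

```python
import random
import statistics

def compound_mean_estimate(m_count, s_count, i_count, rng):
    """One compound-sampling estimate of E[L] for the toy loss
    L = X + Y + Z (independent standard normals)."""
    total = 0.0
    for _ in range(m_count):
        x = rng.gauss(0, 1)                  # market sample X_m
        for _ in range(s_count):
            y = rng.gauss(0, 1)              # credit driver sample Y_ms
            for _ in range(i_count):
                z = rng.gauss(0, 1)          # idiosyncratic sample Z_msi
                total += x + y + z
    return total / (m_count * s_count * i_count)

rng = random.Random(1)
m_count, s_count, i_count = 4, 2, 2
estimates = [compound_mean_estimate(m_count, s_count, i_count, rng)
             for _ in range(20000)]
empirical = statistics.pvariance(estimates)
predicted = 1 / m_count + 1 / (m_count * s_count) + 1 / (m_count * s_count * i_count)
# empirical should be close to predicted = 1/4 + 1/8 + 1/16 = 0.4375
```

Note that the Y samples, each shared by I scenarios, contribute at rate 1/(MS) rather than 1/(MSI), which is precisely the effect the decomposition captures.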
[0231] If the MS conditional loss distributions
F.sub.X.sub.m.sub.,Y.sub.ms are obtained analytically, then the
mean loss is estimated as the average of their respective means,
i.e.,
$$\hat{\mu} = \frac{1}{MS} \sum_{m=1}^{M} \sum_{s=1}^{S} \hat{\mu}_{ms}$$
where {circumflex over (.mu.)}.sub.ms.ident..LAMBDA.(X,Y) using the
notation above. In this case, the values of .nu..sub.1.sup.0 and
.nu..sub.2.sup.0 are the same as for the sample mean while
.nu..sub.3.sup.0=0.
[0232] As noted previously, in practice the number of risk factor
samples that can be generated may be limited by computational
resource and/or time constraints. For example, since banks
typically assess risk on a daily basis, there may be an 8-hour
window for completing the simulation. It is possible to use an
expression for the variance of the desired estimator (e.g. Equation
2a or 2b, for risk measure VaR and mean respectively) in
conjunction with such constraints to obtain an optimal sampling
scheme (e.g. a set of sample sizes M, S and I) that minimizes the
variability of risk estimates while satisfying constraints on
resources and/or time.
[0233] Suppose that a time window of length T is available for the
simulation and that the processing times for the various types of
risk factor samples are:
[0234] c.sub.M for each market factor sample
[0235] c.sub.S for each credit driver sample
[0236] c.sub.I for each idiosyncratic credit factor sample
[0237] These processing times may be received as input (e.g. via
input module 540) and/or obtained or computed otherwise prior to
determining the sampling scheme.
[0238] The optimal sampling scheme may be obtained by solving the
following optimization problem:
$$\min_{M,S,I} \; \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} + \frac{\nu_3^0}{MSI} \quad \text{s.t.} \quad c_M M + c_S MS + c_I MSI \le T, \; M \ge 1, \; S \ge 1, \; I \ge 1. \qquad (3a)$$
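By way of a non-limiting illustration, this optimization problem may be solved by an exhaustive grid search over integer sample sizes; the coefficient values, per-sample processing times, and budget in the example call below are hypothetical stand-ins for pilot-estimated values:

```python
def optimal_sampling_scheme(v1, v2, v3, c_m, c_s, c_i, budget, max_n=50):
    """Grid search for Equation 3a: minimize v1/M + v2/(M*S) + v3/(M*S*I)
    subject to c_M*M + c_S*M*S + c_I*M*S*I <= T, with M, S, I >= 1."""
    best_var, best = float("inf"), None
    for m in range(1, max_n + 1):
        if c_m * m > budget:
            break                            # even S = I = 1 is infeasible
        for s in range(1, max_n + 1):
            if c_m * m + c_s * m * s > budget:
                break
            for i in range(1, max_n + 1):
                cost = c_m * m + c_s * m * s + c_i * m * s * i
                if cost > budget:
                    break
                var = v1 / m + v2 / (m * s) + v3 / (m * s * i)
                if var < best_var:
                    best_var, best = var, (m, s, i)
    return best, best_var

# Hypothetical pilot estimates and an 8-hour (28800 s) budget
scheme, variance = optimal_sampling_scheme(
    v1=100.0, v2=10.0, v3=1.0, c_m=60.0, c_s=5.0, c_i=0.5, budget=28800.0)
```

Dropping the innermost loop (fixing I at its analytic limit) yields the corresponding solver for the simplified problem in which Z is not sampled.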
[0239] If no sampling of Z is performed, as is the case with
analytic methods (see e.g. FIG. 14B), then the optimization problem
simplifies to
$$\min_{M,S} \; \frac{\nu_1^0}{M} + \frac{\nu_2^0}{MS} \quad \text{s.t.} \quad c_M M + c_S MS \le T, \; M \ge 1, \; S \ge 1. \qquad (3b)$$
[0240] Referring now to FIG. 16, there is shown an example diagram
of a risk factor simulation system 500 implementing a compound risk
factor sampling approach and configured to determine an optimized
sampling scheme in accordance with at least one embodiment. The
system 500 may be implemented as computer hardware and/or software
applications that comprise a set of integrated components in
modular form. Referring also to FIG. 17, there is shown another
example diagram of a system 500 including a set of generated risk
factor samples 504 that define risk factor scenarios.
[0241] Risk factor simulation system 500 generally comprises input
data modules 540 to support the loading and managing of large
amounts of information obtained from various data sources as input
(i.e. internal applications, internal data sources, external data
sources, market sources, instrument sources). Input data modules
may receive data identifying a market risk factor process X, a
systemic credit driver process Y, and an idiosyncratic credit risk
factor process Z, for example. Again, X may be a vector-valued
process indexed by individual scalar risk factors, Y may be a
vector-valued process indexed by individual scalar risk factors,
and Z may be a vector-valued process indexed by counterparty names
in the portfolio of instruments. The data identifying processes X,
Y, and Z comprises, for each process X, Y and Z, a start value, at
least one function representing a model, and zero or more
parameters for the model associated with the respective
process.
[0242] Input data modules 540 may also receive data comprising one
or more covariance matrices that define the joint evolution of X
and Y over the first time horizon, or over a given time step in the
event the time horizon comprises multiple time steps.
[0243] Input data modules 540 may also receive data indicating a
predetermined time period T over which to perform the risk factor
simulation (e.g. time T 516 of FIG. 17), which may provide for a
computational constraint (e.g. a time within which the simulation
is required to be performed). In addition, the input data modules
540 may receive data indicating the processing times (e.g.
processing times 514 of FIG. 17) required for generating a risk
factor sample for each of the various types of risk factor samples.
The input modules 540 may also receive data indicating a
performance constraint indicating an acceptable level of
variability for the obtained risk measure (e.g. VaR).
[0244] The data received by input data modules 540 may be stored
in, for example, a database 550 (internal or external), which may
be implemented using one or more memories and/or storage devices,
for access by other system 500 modules. In addition, other data
generated and/or utilized by the system 500 modules may be stored
in database 550 for subsequent retrieval and use.
[0245] The risk factor simulation system 500 further comprises an
initial pilot simulation module 545 for estimating values for
coefficients .nu..sub.1.sup.0, .nu..sub.2.sup.0,
.nu..sub.3.sup.0 510 and the probability density of the loss
distribution f(l.sub.p) 512. The initial pilot simulation module
545 selects large values for M, S and I and runs an initial pilot
simulation using the system 500 to obtain the pilot simulation loss
distribution {circumflex over (F)}. The coefficients
.nu..sub.1.sup.0, .nu..sub.2.sup.0, .nu..sub.3.sup.0
510 and the density f(l.sub.p) 512 are then estimated from the
pilot simulation loss distribution {circumflex over (F)}.
[0246] The main components of risk factor simulation system 500
(FIG. 16) comprise an optimized sampling scheme module 502 and a
compound risk factor sampling module 200 (FIG. 13).
[0247] The optimized sampling scheme module 502 receives the
initially estimated coefficients .nu..sub.1.sup.0,
.nu..sub.2.sup.0, .nu..sub.3.sup.0 510 and the density
f(l.sub.p) 512 from initial pilot simulation module 545. The
optimized sampling scheme module 502 may also receive additional
data, for example, from database 550 or input module 540, such as
the time T 516 available for performing the simulation and the
processing times c.sub.M, c.sub.S, c.sub.I 514 for generating each
of the risk factor samples.
[0248] The optimized sampling scheme module 502 is configured to
solve one or more predefined optimization problems, such as e.g.
Equation 3a, to compute parameters for the optimal sampling scheme
(M, S, I) 508. Other optimization problems relating M, S, and
(optionally) I to the variability of the selected risk measure(s)
may alternatively be implemented in variant embodiments.
[0249] For example, in the event that an analytic technique is used
to derive the unconditional loss distribution, as described with
reference to FIG. 14B, the optimization module 502 may instead
determine an optimal sampling scheme (M,S) by implementing e.g.
Equation 3b.
[0250] In addition, the optimization module 502 may receive other
performance related data, such as a performance level parameter
indicating a required maximum level of variability for one or more
risk measures. The optimization module 502 may use such data to
identify a maximum acceptable variance level for at least a
selected one risk measure.
[0251] The optimization module 502 is configured to compute a
variance of estimates of the selected one risk measure, as
described herein. Finally, the optimization module 502 determines
values for M, S and, optionally, I, such that the variance is
within the acceptable variance level (e.g. by evaluating Equation
2a and/or 2b).
[0252] Further, the optimization module 502 may be configured to
evaluate Equations 2a and/or 2b in conjunction with solving an
optimization problem (e.g. 3a and/or 3b) to obtain an optimal
sampling scheme 508 that provides an acceptable level of
variability as indicated by a specified performance level. For
example, for p=0.999, the variance Var({circumflex over
(l)}.sub.p) of the estimated p-quantile (i.e. the estimated VaR)
provided by Equations 2a and/or 2b may be required to be no
greater than the specified performance level considered
acceptable.
[0253] For illustration purposes, in this example, optimized
sampling scheme module 502 (FIG. 17) computes an optimal sampling
scheme 508, represented by M=2, S=2, and I=3.
[0254] The optimized sampling scheme module 502 provides data
identifying the optimal sampling scheme 508 to the compound risk
factor sampling module 200. The compound risk factor sampling
module 200 generally implements, for example, acts 320 to 350 of
FIG. 14A to generate MSI risk factor scenarios defined by the
resulting set of the risk factor samples, in the illustrated
example. The compound risk factor sampling module 200 comprises a
risk factor model module 144 (FIGS. 12 and 13), for generating the
compound risk factor samples 504 used by the compound risk factor
sampling module 200 to define the MSI risk factor scenarios. It
will be understood that the risk factor model module 144 may
implement any or all of the market risk factor models, the systemic
credit driver CBM models, and the idiosyncratic credit risk factor
models described herein for compound risk factor sampling.
[0255] Referring to FIG. 17, the resulting set of risk factor
samples 504 that define the risk factor scenarios is illustrated.
The resulting set of risk factor samples 504 comprises M=2 distinct
market risk factor samples for each of the 3 market risk factors X,
MS=4 distinct systemic credit driver samples (conditioned on the
market risk factors) for each of the 2 systemic credit drivers Y,
and MSI=12 idiosyncratic credit risk factor samples for each of the
2 idiosyncratic credit risk factors Z. The compound risk factor
sampling module 200 produces MSI risk factor scenarios using the
resulting set of risk factor samples 504. Each row of the resulting
set of risk factor samples 504 constitutes a risk factor scenario,
for MSI scenarios in total.
[0256] Comparing the risk factor samples 504 illustrated in detail
in FIG. 17 with the risk factor samples 96 of FIG. 9 (the
"two-tiered" approach) and the risk factor samples 59 of FIG. 7
(the "simple-sampling" approach), it can be seen that the compound
risk factor sampling module 200 reduces the number of distinct
market risk factor samples generated, while keeping the resultant
number N=12 of risk factor scenarios and simulated loss samples
100/506 the same. Furthermore, in this example of FIG. 17, the
compound risk factor sampling module 200 generates a larger number
of distinct systemic credit driver samples (e.g. 4 in FIG. 17
versus 3 in FIG. 9), while requiring a smaller number of distinct
market risk factor samples (e.g. 2 in FIG. 17 versus 3 in FIG. 9)
and hence a smaller number of simulated exposure tables that would
need to be computed to generate the same number of loss samples.
[0257] Referring to FIG. 18, there is shown a simplified graphical
representation 600 of three risk factors, i.e. a market risk factor
X.sup.1 518, a systemic credit driver Y.sup.1 520, and an
idiosyncratic credit risk factor Z.sup.1 522, making up a subset of
the resulting compound risk factor sample 504 that is generated by
the compound risk factor sampling module 200 according to the
optimal sampling scheme 508.
[0258] Specifically, the three risk factor subset of the compound
risk factor sample 504 is illustrated as a three level tree (as in
FIG. 15), with M=2 market risk factor samples 602, MS=4 total
systemic credit driver samples 604 (i.e. S=2 systemic credit driver
samples for each of the M=2 market risk factor samples), and MSI=12
total idiosyncratic credit risk factor samples 606 (i.e. I=3
idiosyncratic samples for each of the MS=4 pairs of market risk
factor and systemic credit driver samples).
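By way of a non-limiting illustration, the three-level tree of FIG. 18 corresponds to the following nested sampling structure, in which standard-normal draws stand in for the actual market, systemic credit driver, and idiosyncratic risk factor models described herein:

```python
import random

def compound_sample_tree(m_count, s_count, i_count, seed=0):
    """Generate M market samples, S credit driver samples per market
    sample, and I idiosyncratic samples per (market, credit) pair,
    yielding M*S*I scenarios in total."""
    rng = random.Random(seed)
    scenarios = []
    for _ in range(m_count):
        x = rng.gauss(0, 1)                  # market risk factor sample X_m
        for _ in range(s_count):
            y = rng.gauss(0, 1)              # conditional credit driver Y_ms
            for _ in range(i_count):
                z = rng.gauss(0, 1)          # idiosyncratic sample Z_msi
                scenarios.append((x, y, z))
    return scenarios

# The optimal scheme of FIG. 17 (M=2, S=2, I=3) gives MSI=12 scenarios
scen = compound_sample_tree(2, 2, 3)
```

Each market sample is shared by S credit driver samples, and each (market, credit) pair is shared by I idiosyncratic samples, mirroring the rows of the resulting set of risk factor samples 504.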
[0259] Referring back to FIGS. 16 and 17, the risk factor samples
504 that define the risk factor scenarios may be provided to a loss
sample module 555 for use in obtaining MSI=N=12 portfolio loss
samples 506. The loss sample module 555 may function generally as
is described with reference to the loss sample module 24 of FIG. 3.
Generally, the loss sample module 555 may be configured to use the
market risk factor samples for pricing the instruments of the
portfolio and calculating exposure tables (one per M market risk
factor samples). The systemic credit driver samples and
idiosyncratic risk samples are used to determine the simulated
credit states for the counterparties by computing a
creditworthiness index.
[0260] The MSI=N=12 simulated loss samples 506 may then be provided
to a loss distribution module 528. The loss distribution module 528
may be configured to determine an empirical unconditional loss
distribution {circumflex over (F)} based on the simulated loss
samples 506, as may be generally described with reference to act
355 of FIG. 14A.
[0261] Alternatively, the compound sampling module 200 may provide
MS risk factor scenarios (defined by the set of risk factor
samples) directly to the loss distribution module 528. The loss
distribution module 528 may be configured to perform acts 352 and
353 of FIG. 14B in order to generate the empirical unconditional
loss distribution {circumflex over (F)}.
[0262] Further, in the event the portfolio is partitioned into two
sub-portfolios for example (as is described in relation to FIG.
14C), the loss distribution module 528 may receive a hybrid of MSI
loss samples for a first sub-portfolio from the loss sample module
555, and MS risk factor scenarios for a second sub-portfolio. The
loss distribution module 528 may be configured to perform acts 317,
321, 323 and 354 of FIG. 14C to generate the empirical
unconditional loss distribution {circumflex over (F)}.
[0263] Finally, a risk measure module 530 is configured to
determine at least one risk measure using at least one
characteristic of the approximate loss distribution. Example risk
measures may include, without limitation: the mean, the variance,
the VaR (the p-quantile), unexpected loss, and expected shortfall.
The one or more computed risk measures may be used to evaluate risk
associated with the portfolio of interest, which integrates credit
and market risk. The risk measure may be stored (in e.g. database
550) and/or output by the risk factor simulation system 500, for
further use.
[0264] The compound risk factor sampling scheme described herein
may be extended to encompass other portfolio risk model variations,
in variant embodiments.
[0265] What has been described herein is merely illustrative of a
number of example embodiments. Other configurations, variations,
and arrangements to the systems and methods may be implemented by
those skilled in the art without departing from the spirit and
scope of the embodiments described herein as defined in the appended
claims.
* * * * *