28  Sensitivity Analysis

Author

Yun-Tien Lee

“Sensitivity analysis is the art of understanding how small changes in assumptions can lead to big changes in outcomes, helping us navigate uncertainty with clarity.” — Unknown

28.1 Chapter Overview

Sensitivity analysis tells us how fragile our valuation or risk metrics are once key drivers shift. We will move from local techniques (derivatives, finite differences, automatic differentiation) that explain the effect of tiny shocks to global techniques (regression-based screening, Sobol, Morris, FAST) that summarize how entire distributions of inputs ripple through a model. The chapter closes with practical scenario-analysis guidance so you can turn the math into board-ready stress narratives.

28.2 Setup

To keep the examples concrete we will work with a simplified block of participating whole life policies.

using Dates

@enum Sex Female = 1 Male = 2
@enum Risk Standard = 1 Preferred = 2

mutable struct Policy
    id::Int
    sex::Sex
    benefit_base::Float64
    pir::Float64 # the policy interest rate
    mode::Int
    prem::Float64
    pp::Int
    issue_date::Date
    issue_age::Int
    risk::Risk
end

The fields give us everything we need to project reserves: demographic attributes, benefit base, the policy interest rate (pir), premium pattern, and risk class. Real models would store many more underwriting features, but this toy structure is enough to showcase the techniques.

28.3 The Data

using MortalityTables
sample_csv_data =
    IOBuffer(
        raw"id,sex,benefit_base,pir,mode,prem,pp,issue_date,issue_age,risk
         1,M,100000.0,0.03,1,1000.0,10,1999-12-05,30,Std"
    )

mort = Dict(
    Male => MortalityTables.table(262).ultimate, # assuming no issue ages < 2 or > 120
    Female => MortalityTables.table(261).ultimate, # assuming no issue ages < 2 or > 120
)
Dict{Sex, OffsetArrays.OffsetVector{Float64, Vector{Float64}}} with 2 entries:
  Male   => [0.00051, 0.00038, 0.00035, 0.00032, 0.0003, 0.00027, 0.00025, 0.00…
  Female => [0.00045, 0.00031, 0.00025, 0.00022, 0.0002, 0.00019, 0.00019, 0.00…
using CSV, DataFrames

policies = let

    # read CSV directly into a dataframe
    # df = CSV.read("sample_inforce.csv",DataFrame) # use local string for notebook
    df = CSV.read(sample_csv_data, DataFrame)

    # map over each row and construct an array of Policy objects
    map(eachrow(df)) do row
        Policy(
            row.id,
            row.sex == "M" ? Male : Female,
            row.benefit_base,
            row.pir,
            row.mode,
            row.prem,
            row.pp,
            row.issue_date,
            row.issue_age,
            row.risk == "Std" ? Standard : Preferred,
        )
    end

end
1-element Vector{Policy}:
 Policy(1, Male, 100000.0, 0.03, 1, 1000.0, 10, Date("1999-12-05"), 30, Standard)

Once res is defined below, res(2, policies[1]) returns the reserve at the end of policy year 2 for our sample contract. We will refer to policies[1] whenever we need a concrete contract for the sensitivity examples.

Given a basic insurance product, a pure whole of life (WOL) policy with level benefits and level premiums payable during the first 10 years, the reserve in force at the end of the \(y^{th}\) (\(y \ge 0\)) policy year is defined by

\[ res(y) = \sum_{t=age+y}^{120} \left( (sur_{t-age} * mort_t * B_y / \sqrt{1 + pir}) - (P_y * sur_{t-age}) \right) \]

where

  • \(mort_t\) is the mortality rate at age \(t\)
  • \(sur_x\) is the survival probability discounted at the policy interest rate, with values of
    • \(sur_0 = 1\),
    • \(sur_x = sur_{x-1} * (1 - mort_{age+x-1}) / (1 + pir)\) for \(x \ge 1\), and
    • \(sur_x = 0\) for \(x < 0\) or \(age + x \ge 120\), the ultimate age of the current mortality table
  • \(B_y\) is the level benefit throughout the policy
  • \(P_y\) is the level premium, payable during the first 10 policy years and 0 for policy years after 10
  • \(pir\) is the level policy interest rate throughout the policy

# Can be made more efficient by storing values in a pre-calculated table. The recursive calculation here is for illustrative purposes.
function sur(y::Int, pol::Policy; Ω=120)
    if y == 0
        1.0
    elseif y < 0 || Ω - y <= pol.issue_age
        0.0
    else
        sur(y - 1, pol) * (1 - mort[pol.sex][pol.issue_age+y-1]) / (1 + pol.pir)
    end
end

function res(y::Int, pol::Policy; Ω=120)
    s = 0.0
    if y >= 1 && y <= Ω - pol.issue_age
        for t in (pol.issue_age+y):Ω
            prem = 0.0
            if y <= pol.pp
                prem = pol.prem
            end
            s += sur(t - pol.issue_age, pol) * mort[pol.sex][t] * pol.benefit_base / sqrt(1 + pol.pir) - prem * sur(t - pol.issue_age, pol)
        end
    end
    s
end
res (generic function with 1 method)

28.4 Common Sensitivity Analysis Methodologies

28.4.1 Finite Differences

Central finite differences approximate derivatives by shocking one input up and down while holding everything else fixed. The result can be interpreted as a “delta” (absolute change per unit of input) or as an elasticity if we scale by the base reserve.

function res_wrt_r_fd(y::Int, pol::Policy, r::Float64, h=1e-3)
    p₊, p₋ = deepcopy(pol), deepcopy(pol)
    p₊.pir, p₋.pir = r + h, r - h
    (res(y, p₋) - res(y, p₊)) / (2 * res(y, pol))
end

res_wrt_r_fd(2, policies[1], 0.03) # relative change in the year-2 reserve when the 3% interest rate is shocked by ±0.1%
0.18602131593930354

Use a smaller h when pir is quoted in decimals (e.g., 3%) and a larger h if you work in basis points. Always sanity check that the perturbation stays within practical limits (no negative credited rates).
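
If you prefer an absolute sensitivity rather than the scaled version above, a plain central difference is a one-line change. The helper below is a minimal sketch; res_delta_fd is an illustrative name, not part of the model code above.

# central-difference approximation of ∂res/∂pir, i.e. the absolute "delta" per unit of rate
function res_delta_fd(y::Int, pol::Policy; h=1e-4)
    p₊, p₋ = deepcopy(pol), deepcopy(pol)
    p₊.pir += h
    p₋.pir -= h
    (res(y, p₊) - res(y, p₋)) / (2h)
end

res_delta_fd(2, policies[1]) # change in the year-2 reserve per unit change in pir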

28.4.2 Regression Analyses

Regression-based sensitivity (also called standardized regression coefficients) fits a metamodel between sampled inputs and the resulting reserves, then reports correlations and partial correlations as quick importance scores.

using GlobalSensitivity

function r1_wrt_r(r)
    p = deepcopy(policies[1])
    p.pir = r[2]
    p.prem = r[3]
    res(Int(floor(r[1])), p)
end

# reserve @ year 1 or 2, interest rate @ 0.03 ± 0.001, prem @ 1000.0 ± 0.1
reg_anal = gsa(r1_wrt_r, RegressionGSA(), [[1, 2], [0.029, 0.031], [999.9, 1000.1]], samples=1000)
println(reg_anal.pearson)
[0.002256718958015217 -0.9998838789971553 -0.00455063251199558]

The pearson field reports the correlation matrix between each input and the output. Here the interest rate dominates with a correlation close to -1, while the policy year and premium barely register.
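
To see what the method does under the hood, the following is a minimal hand-rolled sketch of standardized regression coefficients; the sample size, seed, and uniform sampling are illustrative assumptions, and GlobalSensitivity automates all of this for you.

using Statistics, Random

Random.seed!(1234)                             # illustrative seed
n = 1_000
X = hcat(rand(1:2, n),                         # policy year: 1 or 2
    0.029 .+ 0.002 .* rand(n),                 # interest rate in [0.029, 0.031]
    999.9 .+ 0.2 .* rand(n))                   # premium in [999.9, 1000.1]
out = [r1_wrt_r(X[i, :]) for i in 1:n]         # run the model on each sampled row

Z = (X .- mean(X, dims=1)) ./ std(X, dims=1)   # standardize the inputs
z = (out .- mean(out)) ./ std(out)             # standardize the output
β = Z \ z                                      # standardized regression coefficients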

28.4.3 Sobol Indices

Sobol is a variance-based method: it decomposes the variance of the model output into fractions that can be attributed to individual inputs or to sets of inputs. This gives not only each parameter's own sensitivity but also a way to quantify the effect of interactions between parameters.

\[ Y = f_0 + \sum_{i=1}^{d}f_i(X_i) + \sum_{i<j}^{d}f_{ij}(X_i, X_j) + ... + f_{1,2,...,d}(X_1, X_2, ..., X_d) \]

\[ Var(Y) = \sum_{i=1}^{d}V_i + \sum_{i<j}^{d}V_{ij} + ... + V_{1,2,...,d} \]

The Sobol indices are “ordered”: the first-order indices are given by \(S_i = \dfrac{V_i}{Var(Y)}\), the contribution of the main effect of \(X_i\) to the output variance. \(S_i\) therefore measures the effect of varying \(X_i\) alone, averaged over variations in the other input parameters, and is standardized by the total variance to provide a fractional contribution. Higher-order interaction indices \(S_{ij}, S_{ijk}\) and so on can be formed by dividing the other terms in the variance decomposition by \(Var(Y)\).

using QuasiMonteCarlo, GlobalSensitivity

# reserve @ year 1/2, interest rate @ 0.03 ± 0.001, prem @ 1000.0 ± 0.1
L, U = QuasiMonteCarlo.generate_design_matrices(1000, [1, 0.029, 999.9], [2, 0.031, 1000.1], SobolSample())
s = gsa(r1_wrt_r, Sobol(), L, U)
println(s.S1)
println(s.ST)
Warning: The `generate_design_matrices(n, d, sampler, R = NoRand(), num_mats)` method does not produces true and independent QMC matrices, see [this doc warning](https://docs.sciml.ai/QuasiMonteCarlo/stable/design_matrix/) for more context. 
    Prefer using randomization methods such as `R = Shift()`, `R = MatousekScrambling()`, etc., see [documentation](https://docs.sciml.ai/QuasiMonteCarlo/stable/randomization/)
@ QuasiMonteCarlo ~/.julia/packages/QuasiMonteCarlo/KvLfb/src/RandomizedQuasiMonteCarlo/iterators.jl:255
[0.0, 1.0207707461946909, -2.1406196791444575e-5]
[0.0, 1.0014104679541778, 1.3708293198388621e-5]

S1 reports how much of the reserve variance comes from each input alone, while ST shows the total contribution including interaction terms. If rate and premium interact strongly, the gap ST - S1 will be large, signaling non-linear cross-effects.
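
Given the S1 and ST values printed above, the gap is small for every input here, so cross-effects between policy year, rate, and premium are negligible in this toy model; small negative differences are Monte Carlo noise, since in theory ST ≥ S1. A quick check, reusing the s result from the Sobol run:

println(s.ST .- s.S1) # near-zero differences indicate weak interactions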

28.4.4 Morris Method

The Morris method, also known as Morris’s one-at-a-time (OAT) method, is built around elementary effects of the form

\[ EE_i = \frac{f(x_1, x_2, ...x_i + \Delta, ...x_k) - y}{\Delta} \]

where \(y = f(x_1, x_2, \ldots, x_k)\) is the unperturbed output. These local sensitivity measures, the elementary effects, capture how much the model output moves when a single parameter is changed.

The elementary effects are evaluated at points chosen so that a wide “spread” of the parameter space is explored, which turns the local measures into an approximate global importance measure. The mean and variance of the elementary effects are then computed: a high mean implies that a parameter is important, while a high variance implies that its effect is non-linear or driven by interactions with other inputs. The method does not separate the interaction contribution from a parameter’s individual contribution; each reported effect reflects both.
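
As a concrete sketch of a single elementary effect, the snippet below perturbs only the interest rate at one arbitrary base point, reusing r1_wrt_r from the regression example; the base point and Δ are illustrative assumptions, and GlobalSensitivity repeats this over many points and aggregates.

Δ = 0.001                       # assumed step for the interest rate
x = [1.0, 0.03, 1000.0]         # arbitrary base point: year 1, 3% rate, 1000 premium
x_up = copy(x); x_up[2] += Δ    # perturb only the interest rate
EE_pir = (r1_wrt_r(x_up) - r1_wrt_r(x)) / Δ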

using GlobalSensitivity

# reserve @ year 1/2, interest rate @ 0.03 ± 0.001, prem @ 1000.0 ± 0.1
m = gsa(r1_wrt_r, Morris(), [[1, 2], [0.029, 0.031], [999.9, 1000.1]])
println(m.means)
println(m.variances)
[0.0 -655401.2640841826 -24.298613850484617]
[0.0 1.7497303192115158e8 0.05163706277733505]

The means indicate which variables matter most (here the interest rate dominates), while large variances point to non-linearity or interactions with other inputs.

28.4.5 Fourier Amplitude Sensitivity Tests

FAST offers a procedure, robust even at small sample sizes and computationally efficient, for estimating the first-order and total-order indices discussed under Sobol. It uses a one-dimensional Fourier decomposition along a space-filling curve through the parameter space. The curve is defined by a set of parametric equations,

\[ x_i(s) = G_i(\sin(\omega_i s)), \quad i = 1, 2, \ldots, k \]

where \(s\) is a scalar variable ranging over \(-\infty < s < +\infty\), \(G_i\) are transformation functions, and \(\omega_i\) is a set of distinct (angular) frequencies, to be properly selected, one associated with each of the \(k\) input factors.

using GlobalSensitivity

# reserve @ year 1/2, interest rate @ 0.03 ± 0.001, prem @ 1000.0 ± 0.1
fast = gsa(r1_wrt_r, eFAST(), [[1, 2], [0.029, 0.031], [999.9, 1000.1]], samples=1000)
println(fast.S1)
println(fast.ST)
[1.4375336849488495e-11 0.9976794599631812 1.3623825574750418e-5]
[7.257990255471469e-7 0.9999853591324693 0.0023141135503588206]

Interpretation mirrors Sobol: S1 is the main effect and ST the total effect. Differences between Sobol and FAST estimates mostly reflect Monte Carlo noise; agreement across both methods boosts confidence in your ranking.
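
One lightweight consistency check is to rank the inputs by their first-order indices under each method, reusing the s and fast results from above. Indices that are numerically close to zero may swap places because of sampling noise, but the dominant input should agree.

println(sortperm(vec(s.S1), rev=true))    # Sobol ranking, most important input first
println(sortperm(vec(fast.S1), rev=true)) # eFAST ranking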

28.4.6 Automatic Differentiation

Automatic differentiation applies the chain rule repeatedly to the elementary operations of a computation, producing exact derivatives of outputs with respect to inputs without having to choose a perturbation size. See Chapter 16 for more details on automatic differentiation.
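
As a minimal sketch (assuming ForwardDiff.jl is installed): because Policy stores pir as a Float64, dual numbers cannot flow through res directly, so the snippet below re-expresses the year-2 reserve for policies[1] as a function of the interest rate alone and differentiates that. The helper name res_of_pir is illustrative, not part of the model above.

using ForwardDiff

function res_of_pir(pir; y=2, pol=policies[1], Ω=120)
    age = pol.issue_age
    # sur_x for x = 0, 1, ..., Ω - age, mirroring the recursive sur() defined earlier
    surs = zeros(typeof(pir), Ω - age + 1)
    surs[1] = one(pir)
    for x in 1:(Ω-age)
        surs[x+1] = age + x >= Ω ? zero(pir) :
                    surs[x] * (1 - mort[pol.sex][age+x-1]) / (1 + pir)
    end
    s = zero(pir)
    for t in (age+y):Ω
        prem = y <= pol.pp ? pol.prem : 0.0
        s += surs[t-age+1] * (mort[pol.sex][t] * pol.benefit_base / sqrt(1 + pir) - prem)
    end
    s
end

ForwardDiff.derivative(res_of_pir, 0.03) # ∂res/∂pir at a 3% policy interest rate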

28.4.7 Scenario Analyses

Quantitative sensitivities are only half the story. Boards and regulators still expect named scenarios that link narrative shocks to financial impacts. Use the techniques from Chapter 25 to build the paths, then layer on the following practices.

28.4.7.1 Reverse stress testing

Reverse stress tests start from an unacceptable outcome (e.g., statutory RBC < 250%) and work backward to the combination of market, actuarial, and operational events that would trigger it.

  1. Define the failure point in business terms.
  2. Enumerate the drivers that could push the system there (rate shocks, mass lapses, cyber loss).
  3. Build a trajectory that connects today’s state to the failure and quantify the cumulative impact.

Benefits include clearer vulnerability maps, stronger contingency plans, and alignment with supervisory expectations.
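
To make this concrete on our toy block, the sketch below enumerates combinations of interest-rate and premium shocks (step 2) and keeps those that breach an assumed failure point (step 1); the surviving pairs are candidate trajectories to flesh out in step 3. The 50% reserve-increase threshold and the shock grid are purely hypothetical stand-ins for a business-level trigger such as an RBC floor.

base = res(2, policies[1])
failures = Tuple{Float64,Float64}[]
for Δr in -0.02:0.005:0.02, Δp in -100.0:25.0:100.0
    p = deepcopy(policies[1])
    p.pir += Δr                      # interest-rate shock
    p.prem += Δp                     # premium shock
    if res(2, p) > 1.5 * base        # assumed failure point
        push!(failures, (Δr, Δp))
    end
end
failures                             # shock combinations that breach the threshold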

28.4.7.2 Stylistic scenarios

Stylistic (narrative) scenarios translate technical shocks into stories that executives recognize (“Higher-for-longer inflation with elevated credit spreads”). Describe the macro setting, policy responses, and customer behavior, then map those story elements to the stochastic blocks in your engine. This makes it easier to compare sensitivities across business units that may not share the same quantitative models.

28.4.7.3 Backtesting scenarios

Before anchoring capital plans on a scenario set, test whether similar historical episodes would have produced the modeled impacts.

  1. Define the scenario family (base/up/down, climate pathways, liquidity crunch).
  2. Pull historical periods that resemble each narrative and compute the actual KPI movements.
  3. Run the scenario through your model for those historical start dates.
  4. Compare modeled vs. realized outcomes and reconcile gaps.
  5. Adjust assumptions or document why the future is expected to differ (structural breaks, new hedging programs).

Key reminders when using history: validate data quality, resist overfitting scenarios to the past, and note that structural changes (regulation, product mix, technology) can limit comparability.