Occasionally Binding Constraints

Occasionally binding constraints are a form of nonlinearity frequently used to model effects like the zero lower bound on interest rates or borrowing constraints. Perturbation methods cannot capture them because they are local approximations. Nonetheless, there are ways to combine the speed of perturbation solutions with the flexibility of occasionally binding constraints. MacroModelling.jl provides a convenient way to write down the constraints and automatically enforces the constraint equation with shocks. More specifically, the constraint equation is enforced for each period's unconditional forecast (default forecast horizon of 40 periods) by constraint-equation-specific anticipated shocks, while minimising the size of these shocks.

This guide will demonstrate how to write down models containing occasionally binding constraints (e.g. effective lower bound and borrowing constraint), show some potential problems the user may encounter and how to overcome them, and go through some use cases.

Common problems are that no perturbation solution is found, or that the algorithm cannot find a combination of shocks which enforces the constraint equation. The former arises because occasionally binding constraints can give rise to more than one steady state, but only one of them is a suitable approximation point for perturbation. The latter has to do with the dynamics of the model and the fact that a finite number of shocks is used to enforce the constraint equation.

Beyond the examples outlined in this guide there is a version of Smets and Wouters (2003) with the ELB in the models folder (filename: SW03_obc.jl).

Example: Effective lower bound on interest rates

Writing a model with occasionally binding constraints

Let us take the Galí (2015), Chapter 3 model containing a Taylor rule and implement an effective lower bound on interest rates. The Taylor rule in the model, R[0] = 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]), needs to be modified so that R[0] never goes below an effective lower bound R̄. We can do this using the max operator: R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))

The model definition after the change of the Taylor rule looks like this:

julia> using MacroModelling
julia> @model Gali_2015_chapter_3_obc begin
           W_real[0] = C[0] ^ σ * N[0] ^ φ

           Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]

           R[0] = 1 / Q[0]

           Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)

           R[0] = Pi[1] * realinterest[0]

           C[0] = Y[0]

           log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]

           log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]

           nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]

           MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])

           1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)

           S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]

           Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)

           x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]

           x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]

           log_y[0] = log(Y[0])

           log_W_real[0] = log(W_real[0])

           log_N[0] = log(N[0])

           pi_ann[0] = 4 * log(Pi[0])

           i_ann[0] = 4 * log(R[0])

           r_real_ann[0] = 4 * log(realinterest[0])

           M_real[0] = Y[0] / R[0] ^ η

           R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))

       end
Model:        Gali_2015_chapter_3_obc
Variables
 Total:       68
  Auxiliary:  41
 States:      44
  Auxiliary:  40
 Jumpers:     5
  Auxiliary:  0
Shocks:       44
Parameters:   18

In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). With the default enforcement horizon of 40 periods this adds 41 constraint-specific shocks (one for each anticipated period plus the contemporaneous one) on top of the three structural shocks, which explains the large number of auxiliary variables and shocks reported above.

Next we define the parameters, including the new parameter for the effective lower bound (which we set to 1, corresponding to a zero lower bound on the net rate):

julia> @parameters Gali_2015_chapter_3_obc begin
           R̄ = 1.0
       
           σ = 1
       
           φ = 5
       
           ϕᵖⁱ = 1.5
       
           ϕʸ = 0.125
       
           θ = 0.75
       
           ρ_ν = 0.5
       
           ρ_z = 0.5
       
           ρ_a = 0.9
       
           β = 0.99
       
           η = 3.77
       
           α = 0.25
       
           ϵ = 9
       
           τ = 0
       
           std_a = .01
       
           std_z = .05
       
           std_nu = .0025
       
       end
Remove redundant variables in non stochastic steady state problem:	1.749 seconds
Set up non stochastic steady state problem:				3.963 seconds
Take symbolic derivatives up to first order:				3.756 seconds
Find non stochastic steady state:					6.767 seconds
Model:        Gali_2015_chapter_3_obc
Variables
 Total:       68
  Auxiliary:  41
 States:      44
  Auxiliary:  40
 Jumpers:     5
  Auxiliary:  0
Shocks:       44
Parameters:   18

Verify the non stochastic steady state

Let's check out the non stochastic steady state (NSSS):

julia> SS(Gali_2015_chapter_3_obc)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables_and_calibrated_parameters ∈ 68-element Vector{Symbol}
And data, 68-element Vector{Float64}:
 (:A)               1.0
 (:C)               0.9505798249541406
 (:MC)              0.8888888888888892
 (:M_real)          0.9152363832868899
 (:N)               0.9346552651840673
 (:Pi)              1.0
 (:Pi_star)         1.0
 (:Q)               0.99
  ⋮
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁰⁾)  0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁵⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁶⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁷⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁸⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁹⁾)   0.0
julia> SS(Gali_2015_chapter_3_obc)(:R)
1.0101010101010102

There are a few things to note here. First, we get the NSSS values of the auxiliary variables related to the occasionally binding constraint. Second, the NSSS value of R is 1, so the effective lower bound is binding in the NSSS. While this is a viable NSSS, it is not a viable approximation point for perturbation. We can only find a perturbation solution if the effective lower bound is not binding in the NSSS. Calling get_solution reveals that there is no stable solution at this NSSS:

julia> get_solution(Gali_2015_chapter_3_obc)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Steady_state__States__Shocks ∈ 89-element Vector{Symbol}
→   Variables ∈ 68-element Vector{Symbol}
And data, 89×68 adjoint(::Matrix{Float64}) with eltype Float64:
                         (:A)              …  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁸⁾)  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁹⁾)
  (:Steady_state)         1.0              0.0               0.0
  (:A₍₋₁₎)                0.9              0.0               0.0
  (:S₍₋₁₎)               -3.88333e-16      0.0               0.0
  (:Z₍₋₁₎)               -3.97216e-16      0.0               0.0
  (:nu₍₋₁₎)               1.02181e-15  …   0.0               0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻²²⁾₍₋₁₎)   5.91285e-16      0.0               0.0
   ⋮                                   ⋱   ⋮
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁴⁰⁾₍ₓ₎)     -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁴⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁵⁾₍ₓ₎)      -0.0          …  -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁶⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁷⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁸⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁹⁾₍ₓ₎)      -0.0             -0.0              -0.0
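
If you are unsure whether the NSSS the solver returned is the binding one, a quick check is to compare the NSSS value of the constrained variable with the bound. A minimal sketch, reusing the SS call from above (the comparison via ≈ is illustrative):

R_ss  = SS(Gali_2015_chapter_3_obc)(:R)   # NSSS value of the policy rate
R̄_val = 1.0                               # the value of R̄ set in @parameters
R_ss ≈ R̄_val                              # true means the ELB binds at this NSSS and
                                          # perturbation around it will not work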

In order to get the other viable NSSS we have to restrict the value of R to be larger than the effective lower bound. We can do this by adding a constraint on the variable in the @parameters block. Let us redefine the model:

julia> @model Gali_2015_chapter_3_obc begin
           W_real[0] = C[0] ^ σ * N[0] ^ φ
       
           Q[0] = β * (C[1] / C[0]) ^ (-σ) * Z[1] / Z[0] / Pi[1]
       
           R[0] = 1 / Q[0]
       
           Y[0] = A[0] * (N[0] / S[0]) ^ (1 - α)
       
           R[0] = Pi[1] * realinterest[0]
       
           C[0] = Y[0]
       
           log(A[0]) = ρ_a * log(A[-1]) + std_a * eps_a[x]
       
           log(Z[0]) = ρ_z * log(Z[-1]) - std_z * eps_z[x]
       
           nu[0] = ρ_ν * nu[-1] + std_nu * eps_nu[x]
       
           MC[0] = W_real[0] / (S[0] * Y[0] * (1 - α) / N[0])
       
           1 = θ * Pi[0] ^ (ϵ - 1) + (1 - θ) * Pi_star[0] ^ (1 - ϵ)
       
           S[0] = (1 - θ) * Pi_star[0] ^ (( - ϵ) / (1 - α)) + θ * Pi[0] ^ (ϵ / (1 - α)) * S[-1]
       
           Pi_star[0] ^ (1 + ϵ * α / (1 - α)) = ϵ * x_aux_1[0] / x_aux_2[0] * (1 - τ) / (ϵ - 1)
       
           x_aux_1[0] = MC[0] * Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ + α * ϵ / (1 - α)) * x_aux_1[1]
       
           x_aux_2[0] = Y[0] * Z[0] * C[0] ^ (-σ) + β * θ * Pi[1] ^ (ϵ - 1) * x_aux_2[1]
       
           log_y[0] = log(Y[0])
       
           log_W_real[0] = log(W_real[0])
       
           log_N[0] = log(N[0])
       
           pi_ann[0] = 4 * log(Pi[0])
       
           i_ann[0] = 4 * log(R[0])
       
           r_real_ann[0] = 4 * log(realinterest[0])
       
           M_real[0] = Y[0] / R[0] ^ η
       
           R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))
       
       end
Model:        Gali_2015_chapter_3_obc
Variables
 Total:       68
  Auxiliary:  41
 States:      44
  Auxiliary:  40
 Jumpers:     5
  Auxiliary:  0
Shocks:       44
Parameters:   18
julia> @parameters Gali_2015_chapter_3_obc begin
           R̄ = 1.0

           σ = 1

           φ = 5

           ϕᵖⁱ = 1.5

           ϕʸ = 0.125

           θ = 0.75

           ρ_ν = 0.5

           ρ_z = 0.5

           ρ_a = 0.9

           β = 0.99

           η = 3.77

           α = 0.25

           ϵ = 9

           τ = 0

           std_a = .01

           std_z = .05

           std_nu = .0025

           R > 1.000001

       end
Remove redundant variables in non stochastic steady state problem:	1.251 seconds
Set up non stochastic steady state problem:				3.436 seconds
Take symbolic derivatives up to first order:				0.853 seconds
Find non stochastic steady state:					0.432 seconds
Model:        Gali_2015_chapter_3_obc
Variables
 Total:       68
  Auxiliary:  41
 States:      44
  Auxiliary:  40
 Jumpers:     5
  Auxiliary:  0
Shocks:       44
Parameters:   18

and check the NSSS once more:

julia> SS(Gali_2015_chapter_3_obc)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables_and_calibrated_parameters ∈ 68-element Vector{Symbol}
And data, 68-element Vector{Float64}:
 (:A)               1.0
 (:C)               0.9505798249541406
 (:MC)              0.8888888888888892
 (:M_real)          0.9152363832868899
 (:N)               0.9346552651840673
 (:Pi)              1.0
 (:Pi_star)         1.0
 (:Q)               0.99
  ⋮
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁰⁾)  0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁵⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁶⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁷⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁸⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁹⁾)   0.0
julia> SS(Gali_2015_chapter_3_obc)(:R)
1.0101010101010102

Now we get R > R̄, so that the constraint is not binding in the NSSS and we can work with a stable first order solution:

julia> get_solution(Gali_2015_chapter_3_obc)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Steady_state__States__Shocks ∈ 89-element Vector{Symbol}
→   Variables ∈ 68-element Vector{Symbol}
And data, 89×68 adjoint(::Matrix{Float64}) with eltype Float64:
                         (:A)              …  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁸⁾)  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁹⁾)
  (:Steady_state)         1.0              0.0               0.0
  (:A₍₋₁₎)                0.9              0.0               0.0
  (:S₍₋₁₎)               -3.88333e-16      0.0               0.0
  (:Z₍₋₁₎)               -3.97216e-16      0.0               0.0
  (:nu₍₋₁₎)               1.02181e-15  …   0.0               0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻²²⁾₍₋₁₎)   5.91285e-16      0.0               0.0
   ⋮                                   ⋱   ⋮
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁴⁰⁾₍ₓ₎)     -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁴⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁵⁾₍ₓ₎)      -0.0          …  -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁶⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁷⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁸⁾₍ₓ₎)      -0.0             -0.0              -0.0
  (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ⁽⁹⁾₍ₓ₎)      -0.0             -0.0              -0.0

Generate model output

Having defined the system with an occasionally binding constraint we can simply simulate the model by calling:

julia> import StatsPlots
julia> plot_simulations(Gali_2015_chapter_3_obc)
3-element Vector{Any}:
 Plot{Plots.GRBackend() n=36}
 Plot{Plots.GRBackend() n=26}
 Plot{Plots.GRBackend() n=14}

Simulation_elb

In the background an optimisation problem is set up to find the smallest shocks (in magnitude) which enforce the equation containing the occasionally binding constraint over the unconditional forecast horizon (default 40 periods) at each period of the simulation. The plots show multiple spells of a binding effective lower bound, and many other variables are skewed as a result of the nonlinearity. It can happen that no combination of shocks is found which enforces the occasionally binding constraint equation. In this case one solution is to lengthen the horizon over which the algorithm tries to enforce the equation. You can do this by setting the max_obc_horizon keyword at the beginning of the @model section, as shown in the sketch below: @model Gali_2015_chapter_3_obc max_obc_horizon = 60 begin ... end.
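
A sketch of the syntax (the begin ... end block contains the same equations as in the model definition above; only the max_obc_horizon keyword is added):

@model Gali_2015_chapter_3_obc max_obc_horizon = 60 begin
    # ... same equations as in the definition above, ending with the constrained Taylor rule ...
    R[0] = max(R̄ , 1 / β * Pi[0] ^ ϕᵖⁱ * (Y[0] / Y[ss]) ^ ϕʸ * exp(nu[0]))
end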

Next let us change the effective lower bound to 0.99 and plot once more:

julia> plot_simulations(Gali_2015_chapter_3_obc, parameters = :R̄ => 0.99)
3-element Vector{Any}:
 Plot{Plots.GRBackend() n=34}
 Plot{Plots.GRBackend() n=22}
 Plot{Plots.GRBackend() n=14}

Simulation_elb2

Now, the effect of the effective lower bound becomes less important as it binds less often.
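
One way to quantify how often the bound binds is to simulate in levels and count the periods in which R sits at R̄. A minimal sketch, assuming get_irf accepts the same parameters keyword as the plotting functions (the tolerance 1e-6 is an arbitrary choice):

sims_elb = get_irf(Gali_2015_chapter_3_obc,
                   periods = 1000,
                   shocks = :simulate,
                   levels = true,
                   parameters = :R̄ => 0.99)
share_at_bound = sum(sims_elb(:R, :, :) .< 0.99 + 1e-6) / 1000   # fraction of periods at the ELB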

If you want to ignore the occasionally binding constraint you can simply call:

julia> plot_simulations(Gali_2015_chapter_3_obc, ignore_obc = true)
3-element Vector{Any}:
 Plot{Plots.GRBackend() n=36}
 Plot{Plots.GRBackend() n=24}
 Plot{Plots.GRBackend() n=14}

Simulation_no_elb

and you get the simulation based on the first order solution approximated around the NSSS, which is the same as the one for the model without the modified Taylor rule.
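
The same comparison can also be made in numbers rather than plots. A sketch, assuming get_irf accepts the ignore_obc keyword in the same way as the plotting functions (the two simulations use independent random draws, so only qualitative features are comparable):

sim_obc    = get_irf(Gali_2015_chapter_3_obc, periods = 100, shocks = :simulate, levels = true)
sim_no_obc = get_irf(Gali_2015_chapter_3_obc, periods = 100, shocks = :simulate, levels = true, ignore_obc = true)
minimum(sim_obc(:R, :, :))      # stays at or above the effective lower bound
minimum(sim_no_obc(:R, :, :))   # can fall below the bound since the constraint is ignored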

We can plot the impulse response functions for the eps_z shock, while setting the parameter of the occasionally binding constraint back to 1, as follows:

julia> plot_irf(Gali_2015_chapter_3_obc, shocks = :eps_z, parameters = :R̄ => 1.0)
3-element Vector{Any}:
 Plot{Plots.GRBackend() n=36}
 Plot{Plots.GRBackend() n=28}
 Plot{Plots.GRBackend() n=8}

IRF_elb

As you can see R remains above the effective lower bound in the first period.
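
A quick numerical counterpart of this statement, assuming get_irf accepts the same parameters keyword as plot_irf (sketch):

irf_z = get_irf(Gali_2015_chapter_3_obc,
                shocks = :eps_z,
                levels = true,
                parameters = :R̄ => 1.0)
minimum(irf_z(:R, :, :eps_z))   # does not fall below the bound of 1.0 when the OBC is enforced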

Next, let us simulate the model using a series of shocks, for example three positive shocks to eps_z in periods 5, 10, and 15 with decreasing magnitude:

julia> shcks = zeros(1,15)
1×15 Matrix{Float64}:
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
julia> shcks[5] = 3.0
3.0
julia> shcks[10] = 2.0
2.0
julia> shcks[15] = 1.0
1.0
julia> sks = KeyedArray(shcks; Shocks = [:eps_z], Periods = 1:15)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Shocks ∈ 1-element Vector{Symbol}
→   Periods ∈ 15-element UnitRange{Int64}
And data, 1×15 Matrix{Float64}:
            (1)   (2)   (3)   (4)  …  (12)  (13)  (14)  (15)
  (:eps_z)  0.0   0.0   0.0   0.0      0.0   0.0   0.0   1.0
julia> plot_irf(Gali_2015_chapter_3_obc, shocks = sks, periods = 10)
3-element Vector{Any}:
 Plot{Plots.GRBackend() n=32}
 Plot{Plots.GRBackend() n=28}
 Plot{Plots.GRBackend() n=8}

Shock_series_elb

The effective lower bound is binding after all three shocks, but how long it remains binding varies with the shock size and is entirely endogenous.
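
To read off the binding spells numerically rather than from the plot, one can request the same IRFs with get_irf and count the periods in which R is at the bound (a sketch; the tolerance is arbitrary):

irf_series = get_irf(Gali_2015_chapter_3_obc, shocks = sks, periods = 10, levels = true)
periods_at_bound = sum(irf_series(:R, :, :) .< 1.0 + 1e-8)   # number of periods at the ELB (R̄ = 1)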

Last but not least, we can get the simulated moments of the model (theoretical moments are not available):

julia> sims = get_irf(Gali_2015_chapter_3_obc, periods = 1000, shocks = :simulate, levels = true)
┌ Warning: No solution in period: 167
└ @ MacroModelling ~/work/MacroModelling.jl/MacroModelling.jl/src/MacroModelling.jl:7069
3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 23-element Vector{Symbol}
→   Periods ∈ 1000-element UnitRange{Int64}
◪   Shocks ∈ 1-element Vector{Symbol}
And data, 23×1000×1 Array{Float64, 3}:
[:, :, 1] ~ (:, :, :simulate):
                   (1)            …       (999)           (1000)
  (:A)               1.00456             1.0             1.0
  (:C)               0.893386            0.95058         0.95058
  (:MC)              0.428594            0.888889        0.888889
  (:M_real)          0.894673            0.915236        0.915236
  (:N)               0.85399      …      0.934655        0.934655
  (:Pi)              0.980038            1.0             1.0
   ⋮                              ⋱      ⋮
  (:nu)             -0.000135444         0.0             0.0
  (:pi_ann)         -0.0798488    …      0.0             0.0
  (:r_real_ann)      0.0359605           0.0402013       0.0402013
  (:realinterest)    1.00903             1.0101          1.0101
  (:x_aux_1)         2.1964              3.452           3.452
  (:x_aux_2)         3.40123             3.8835          3.8835

Let's look at the mean and standard deviation of output:

julia> import Statistics
julia> Statistics.mean(sims(:Y,:,:))
0.9497189202046348

and

julia> Statistics.std(sims(:Y,:,:))
0.016377904080143692

Compare this to the theoretical mean of the model without the occasionally binding constraint:

julia> get_mean(Gali_2015_chapter_3_obc)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 23-element Vector{Symbol}
And data, 23-element Vector{Float64}:
 (:A)              1.0
 (:C)              0.9505798249541406
 (:MC)             0.8888888888888892
 (:M_real)         0.9152363832868899
 (:N)              0.9346552651840673
 (:Pi)             1.0
 (:Pi_star)        1.0
 (:Q)              0.99
  ⋮
 (:log_y)         -0.05068313851352032
 (:nu)             0.0
 (:pi_ann)         0.0
 (:r_real_ann)     0.040201343414006024
 (:realinterest)   1.0101010101010102
 (:x_aux_1)        3.4519956850053957
 (:x_aux_2)        3.8834951456310702
julia> get_mean(Gali_2015_chapter_3_obc)(:Y)
0.950579824954141

and the theoretical standard deviation:

julia> get_std(Gali_2015_chapter_3_obc)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 23-element Vector{Symbol}
And data, 23-element Vector{Float64}:
 (:A)             0.022941573387056085
 (:C)             0.033571696779991354
 (:MC)            0.21609085046272553
 (:M_real)        0.05926619832100485
 (:N)             0.03786945958932923
 (:Pi)            0.012358762176561934
 (:Pi_star)       0.0370762865296858
 (:Q)             0.020468532209359812
  ⋮
 (:log_y)         0.03531707269466925
 (:nu)            0.002886751345948128
 (:pi_ann)        0.04943504870624771
 (:r_real_ann)    0.056446450663229335
 (:realinterest)  0.014254154207886192
 (:x_aux_1)       0.9515263512145675
 (:x_aux_2)       0.5166586569328678
julia> get_std(Gali_2015_chapter_3_obc)(:Y)
0.033571696779991396

The mean of output is lower in the model with the effective lower bound than in the model without it, and the standard deviation is higher.
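
For reference, the comparison for output can be collected in a few lines, reusing the objects computed above:

mean_with_elb    = Statistics.mean(sims(:Y, :, :))          # simulated, ELB enforced
mean_without_elb = get_mean(Gali_2015_chapter_3_obc)(:Y)    # theoretical, first order, no ELB
std_with_elb     = Statistics.std(sims(:Y, :, :))
std_without_elb  = get_std(Gali_2015_chapter_3_obc)(:Y)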

Example: Borrowing constraint

Model definition

Let us start with a consumption-saving model containing a borrowing constraint (see Cuba-Borda et al. (2019) for details). Output is exogenously given, households decide between consumption and saving, and they can borrow only up to a fraction of output. The first order conditions of the model are:

\[\begin{align*} Y_t + B_t &= C_t + R B_{t-1}\\ \log(Y_t) &= \rho \log(Y_{t-1}) + \sigma \varepsilon_t\\ C_t^{-\gamma} &= \beta R \mathbb{E}_t (C_{t+1}^{-\gamma}) + \lambda_t\\ 0 &= \lambda_t (B_t - mY_t) \end{align*}\]

In order to write this model down we need to express the Karush-Kuhn-Tucker condition (the last equation) using a max (or min) operator, so that it becomes:

\[0 = \max(B_t - mY_t, -\lambda_t)\]

This is equivalent to the complementary slackness condition: whenever the borrowing constraint is slack (B_t < m Y_t), the max operator forces λ_t = 0, and whenever λ_t > 0 it forces the constraint to bind (B_t = m Y_t). We can write this model containing an occasionally binding constraint in a very convenient way:

julia> @model borrowing_constraint begin
           Y[0] + B[0] = C[0] + R * B[-1]
       
           log(Y[0]) = ρ * log(Y[-1]) + σ * ε[x]
       
           C[0]^(-γ) = β * R * C[1]^(-γ) + λ[0]
       
           0 = max(B[0] - m * Y[0], -λ[0])
       end
Model:        borrowing_constraint
Variables
 Total:       49
  Auxiliary:  41
 States:      42
  Auxiliary:  40
 Jumpers:     1
  Auxiliary:  0
Shocks:       42
Parameters:   7

In the background the system of equations is augmented by a series of anticipated shocks added to the equation containing the constraint (max/min operator). As before, the default 40-period enforcement horizon adds 41 constraint-specific shocks on top of the single structural shock, which explains the large number of auxiliary variables and shocks.

Next we define the parameters as usual:

julia> @parameters borrowing_constraint begin
           R = 1.05
           β = 0.945
           ρ = 0.9
           σ = 0.05
           m = 1
           γ = 1
       end
Remove redundant variables in non stochastic steady state problem:	0.495 seconds
Set up non stochastic steady state problem:				10.005 seconds
Take symbolic derivatives up to first order:				0.523 seconds
Find non stochastic steady state:					3.073 seconds
Model:        borrowing_constraint
Variables
 Total:       49
  Auxiliary:  41
 States:      42
  Auxiliary:  40
 Jumpers:     1
  Auxiliary:  0
Shocks:       42
Parameters:   7

Working with the model

For the non stochastic steady state (NSSS) to exist the constraint has to be binding (B[0] = m * Y[0]). This implies a wedge in the Euler equation (λ > 0).
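
A small hand calculation, using the parameter values from above, shows what this binding NSSS looks like:

# With the constraint binding, B = m * Y and Y = 1 in the NSSS, so the budget
# constraint gives C and the Euler equation pins down the multiplier λ.
R, β, m, γ = 1.05, 0.945, 1.0, 1.0
Y = 1.0
B = m * Y
C = Y + B - R * B          # = 0.95
λ = C^(-γ) * (1 - β * R)   # ≈ 0.00816, the wedge in the Euler equation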

We can check this by getting the NSSS:

julia> SS(borrowing_constraint)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables_and_calibrated_parameters ∈ 49-element Vector{Symbol}
And data, 49-element Vector{Float64}:
 (:B)                1.0
 (:C)                0.95
 (:Y)                1.0
 (:Χᵒᵇᶜ⁺ꜝ¹ꜝ)         0.0
 (:λ)                0.008157894736842098
 (:χᵒᵇᶜ⁺ꜝ¹ꜝʳ)       -0.008157894736842098
 (:χᵒᵇᶜ⁺ꜝ¹ꜝˡ)        8.773722820781672e-25
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝ)         0.0
  ⋮
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁰⁾)   0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁴⁾)    0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁵⁾)    0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁶⁾)    0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁷⁾)    0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁸⁾)    0.0
 (:ϵᵒᵇᶜ⁺ꜝ¹ꜝᴸ⁽⁻⁹⁾)    0.0

A common task is to plot impulse response functions for positive and negative shocks. This should allow us to understand the role of the constraint.

First, we need to import the StatsPlots package and then we can plot the positive shock.

julia> import StatsPlots
julia> plot_irf(borrowing_constraint)
1-element Vector{Any}:
 Plot{Plots.GRBackend() n=14}

Positive_shock

We can see that the constraint is no longer binding in the first five periods because Y and B do not increase by the same amount. They should move by the same amount in the case of a negative shock:

julia> import StatsPlots
julia> plot_irf(borrowing_constraint, negative_shock = true)
1-element Vector{Any}:
 Plot{Plots.GRBackend() n=16}

Negative_shock

and indeed in this case they move by the same amount. The difference between a positive and negative shock demonstrates the influence of the occasionally binding constraint.
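
The same observation can be made numerically by comparing the responses of B and Y, assuming get_irf accepts the negative_shock keyword like plot_irf does (a sketch; with m = 1 the deviations of B and Y from the NSSS are directly comparable):

irf_pos = get_irf(borrowing_constraint)                         # positive shock to ε
irf_neg = get_irf(borrowing_constraint, negative_shock = true)  # negative shock to ε
gap_pos = irf_pos(:B, :, :ε) .- irf_pos(:Y, :, :ε)   # nonzero at first: constraint slack
gap_neg = irf_neg(:B, :, :ε) .- irf_neg(:Y, :, :ε)   # ≈ 0 throughout: constraint binds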

Another common exercise is to plot the impulse response functions from a series of shocks. Let's assume there is a positive shock in period 10 and a negative one in period 30, and let's view the results for 50 more periods. We can do this as follows:

julia> shcks = zeros(1,30)
1×30 Matrix{Float64}:
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  …  0.0  0.0  0.0  0.0  0.0  0.0  0.0
julia> shcks[10] = .6
0.6
julia> shcks[30] = -.6
-0.6
julia> sks = KeyedArray(shcks; Shocks = [:ε], Periods = 1:30)
2-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Shocks ∈ 1-element Vector{Symbol}
→   Periods ∈ 30-element UnitRange{Int64}
And data, 1×30 Matrix{Float64}:
        (1)   (2)   (3)   (4)   (5)  …  (27)  (28)  (29)  (30)
  (:ε)  0.0   0.0   0.0   0.0   0.0      0.0   0.0   0.0  -0.6
julia> plot_irf(borrowing_constraint, shocks = sks, periods = 50)
1-element Vector{Any}:
 Plot{Plots.GRBackend() n=14}

Simulation

In this case the difference between the shocks and the impact of the constraint become quite obvious. Let's compare this with a version of the model that ignores the occasionally binding constraint. In order to plot the impulse response functions without dynamically enforcing the constraint we can simply write:

julia> plot_irf(borrowing_constraint, shocks = sks, periods = 50, ignore_obc = true)
1-element Vector{Any}:
 Plot{Plots.GRBackend() n=14}

Simulation

Other interesting statistics are the model moments. Since theoretical moments are not available, we have to rely on simulated data:

julia> sims = get_irf(borrowing_constraint, periods = 1000, shocks = :simulate, levels = true)
┌ Warning: No solution in period: 363
└ @ MacroModelling ~/work/MacroModelling.jl/MacroModelling.jl/src/MacroModelling.jl:7069
3-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 4-element Vector{Symbol}
→   Periods ∈ 1000-element UnitRange{Int64}
◪   Shocks ∈ 1-element Vector{Symbol}
And data, 4×1000×1 Array{Float64, 3}:
[:, :, 1] ~ (:, :, :simulate):
        (1)           (2)           …    (999)           (1000)
  (:B)    0.999025      0.978434          1.0              1.0
  (:C)    0.94805       0.907891          0.95             0.95
  (:Y)    0.999025      0.978434          1.0              1.0
  (:λ)    0.00951431    0.0370327         0.00815789       0.00815789
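
Before turning to the moments, one can also use this simulation to see how often the borrowing constraint is slack, i.e. how often λ is numerically zero (a sketch; the tolerance is arbitrary):

share_slack = sum(sims(:λ, :, :) .< 1e-8) / 1000   # fraction of periods with a slack constraint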

Let's look at the mean and standard deviation of borrowing:

julia> import Statistics
julia> Statistics.mean(sims(:B,:,:))
0.9597223949913573

and

julia> Statistics.std(sims(:B,:,:))
0.08986494457594138

Compare this to the theoretical mean of the model without the occasionally binding constraint:

julia> get_mean(borrowing_constraint)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 4-element Vector{Symbol}
And data, 4-element Vector{Float64}:
 (:B)  1.0
 (:C)  0.95
 (:Y)  1.0
 (:λ)  0.008157894736842098

and the theoretical standard deviation:

julia> get_std(borrowing_constraint)
1-dimensional KeyedArray(NamedDimsArray(...)) with keys:
↓   Variables ∈ 4-element Vector{Symbol}
And data, 4-element Vector{Float64}:
 (:B)  0.1147078669352811
 (:C)  0.13191404697557313
 (:Y)  0.11470786693528105
 (:λ)  0.07031743997991886

The mean of borrowing is lower in the model with the occasionally binding constraint than in the model without it, and the standard deviation is higher.