Advanced Usage
Package Directory Structure
The package directory structure follows Julia module conventions. Directories in square brackets indicate future additions.
DSGE.jl
- `doc/`: Code and model documentation.
- `save/`: Sample input files; default input/output directories.
- `src/`
  - `DSGE.jl`: The main module file.
  - `abstractdsgemodel.jl`: Defines the `AbstractModel` type.
  - `abstractvarmodel.jl`: Defines the `AbstractVARModel`, `AbstractDSGEVARModel`, and `AbstractDSGEVECMModel` types.
  - `defaults.jl`: Default settings for models.
  - `statespace_types.jl`: Defines types for computing the state-space representation of models.
  - `statespace_functions.jl`: Defines functions for computing the state-space representation of models.
  - `data/`: Manipulating and updating the input dataset.
  - `solve/`: Solving the model; includes `gensys.jl` code.
  - `estimate/`: Optimization, posterior sampling, and other functionality.
  - `forecast/`: Forecasts, smoothing, shock decompositions, and impulse response functions.
  - `decomp/`: Decomposing changes in forecasts into three sources: new data, data revisions, and changes in the calibration.
  - `analysis/`: Moment tables of estimated parameters; computation of forecast means and bands.
  - `altpolicy/`: Infrastructure for forecasting under alternative monetary policy rules.
  - `scenarios/`: Forecasting alternative scenarios.
  - `plot/`: Plotting estimation results, forecasts, etc.
  - `packet/`: Automatically generating documents with results from forecasts and estimations.
  - `models/`
    - `representative/`: Representative agent models.
      - `m990/`: Code to define and initialize version 990 of the New York Fed DSGE model.
      - [`m991/`]: Code for new models should be kept in directories at this level in the directory tree.
    - `heterogeneous/`: Heterogeneous agent models.
    - `poolmodel/`: The `PoolModel` type for model averaging.
    - `var/`: DSGE-VAR and DSGE-VECM models.
- `test/`: The module's test suite.
Working with Settings
There are many computational settings that affect how the code runs without affecting the mathematical definition of the model. While the default settings are intended to be comprehensive rather than minimal, users will generally want to check that the following settings are properly chosen:
- `saveroot::String`: The root directory for model output.
- `dataroot::String`: The root directory for model input data.
- `data_vintage::String`: Data vintage, formatted `yymmdd`. By default, `data_vintage` is set to today's date. It is (currently) the only setting printed to output filenames by default.
- `cond_vintage::String`: Conditional data vintage, formatted `yymmdd`. By default, `cond_vintage` is set to today's date.
- `data_id::Int64`: ID number to append to a created dataset's name and to identify which saved dataset to load.
- `cond_id::Int64`: ID number to identify which conditional dataset should be loaded.
Many functions in DSGE.jl will either require input data or create output data, so it is important to check that the saveroot and dataroot are set as the user intends. Setting the data vintage is also useful for reproducibility. Economic data like GDP are frequently revised, which can pose issues for reproducing results. Setting the data vintage allows users to guarantee the correct vintage of data is used when generating results. By default, the data vintage is set to the current date, so a user will need to manually set the data vintage to the desired date.
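For example, the following sketch points model I/O at custom directories and pins the data vintage; the directory paths and the vintage below are placeholders:

m = Model1002("ss10")
m <= Setting(:saveroot, "/path/to/save")        # output is written under this directory
m <= Setting(:dataroot, "/path/to/input_data")  # input data are read from this directory
m <= Setting(:data_vintage, "200331")           # use the March 31, 2020 vintage (yymmdd)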
Below, we describe several important settings for package usage.
For more details on implementation and usage of settings, see ModelConstructors.jl.
See defaults.jl for the complete description of default settings.
General
- `saveroot::String`: The root directory for model output.
- `use_parallel_workers::Bool`: Use available parallel workers in computations.
- `nominal_rate_observable`: Name (as a `Symbol`) of the observable used to measure the nominal interest rate used to implement monetary policy.
- `monetary_policy_shock`: Name (as a `Symbol`) of the exogenous monetary policy shock in a concrete subtype of `AbstractDSGEModel`.
- `n_mon_anticipated_shocks::Int`: Number of anticipated policy shocks.
- `antshocks::Dict{Symbol, Int}`: A dictionary mapping the name of an anticipated shock to the number of periods of anticipation, e.g. `:b => 2` adds anticipated `b` shocks up to two periods ahead.
- `ant_eq_mapping::Dict{Symbol, Symbol}`: A dictionary mapping the name of an anticipated shock to the name of the state variable in the equation defining the shock's exogenous process, e.g. `:b => :b` maps an anticipated `b` shock to the equation `eq_b`.
- `ant_eq_E_mapping::Dict{Symbol, Symbol}`: A dictionary mapping the name of an anticipated shock to the name of the state variable in the equation defining the shock's one-period-ahead expectation, e.g. `:b => :Eb` maps an anticipated `b` shock to the equation `eq_Eb`, where `Eb` is $E_t[b_{t + 1}]$.
- `proportional_antshocks::Vector{Symbol}`: A vector of the names of one-period-ahead anticipated shocks which are specified as directly proportional to the realizations of the current period's unanticipated shocks. For a shock `b`, the automatically generated parameter `σ_b_prop` defines the proportionality to the current period shock, e.g. a value of 1 indicates an anticipated shock in the next period of the same size as the current period's unanticipated shock.
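For instance, a sketch of constructing a model with six periods of anticipated monetary policy shocks; the count is illustrative, and settings of this kind generally must be passed into the constructor rather than set afterwards (see Overwriting Default Settings):

custom_settings = Dict{Symbol, Setting}(
    :n_mon_anticipated_shocks => Setting(:n_mon_anticipated_shocks, 6))
m = Model1002("ss10"; custom_settings = custom_settings)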
Data and I/O
- `dataroot::String`: The root directory for model input data.
- `data_vintage::String`: Data vintage, formatted `yymmdd`. By default, `data_vintage` is set to today's date. It is (currently) the only setting printed to output filenames by default.
- `dataset_id::Int`: Dataset identifier. There should be a unique dataset ID for each set of observables.
- `cond_vintage::String`: Conditional data vintage, formatted `yymmdd`.
- `cond_id::Int`: Conditional dataset identifier. There should be a unique conditional dataset ID for each set of input, raw data mnemonics (not observables!).
- `cond_semi_names::Vector{Symbol}` and `cond_full_names::Vector{Symbol}`: Names of observables for which we want to use semi- and full conditional data. All other observables are `NaN`ed out in the conditional data periods.
- `population_mnemonic::Nullable{Symbol}`: Population series mnemonic in the form `Nullable(:<mnemonic>__<source>)` (for example, `Nullable(:CNP16OV__FRED)`), or `Nullable{Symbol}()` if the model doesn't use population data.
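As an illustrative sketch, these data-related settings can be set the same way; the ID numbers and observable names below are placeholders:

m <= Setting(:dataset_id, 1)
m <= Setting(:cond_id, 1)
m <= Setting(:cond_full_names, [:obs_gdp, :obs_nominalrate])   # observables with full conditional data
m <= Setting(:population_mnemonic, Nullable(:CNP16OV__FRED))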
Dates
- `date_presample_start::Date`: Start date of pre-sample.
- `date_mainsample_start::Date`: Start date of main sample.
- `date_zlb_start::Date`: Start date of zero lower bound regime.
- `date_zlb_end::Date`: End date of zero lower bound regime.
- `date_forecast_start::Date`: Start date of forecast period (i.e. the period after the last period for which we have GDP data).
- `date_forecast_end::Date`: End date of forecast, i.e. how far into the future to forecast.
- `date_conditional_end::Date`: Last date for which we have conditional data. This is typically the same as `date_forecast_start` when we condition on nowcasts and current-quarter financial data.
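For example, to start the forecast in 2020:Q1 and condition on data through that quarter (the dates are placeholders and follow the quarter-end convention used above):

using Dates
m <= Setting(:date_forecast_start, Date(2020, 3, 31))
m <= Setting(:date_conditional_end, Date(2020, 3, 31))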
Estimation
Metropolis-Hastings Settings
- `reoptimize::Bool`: Whether to reoptimize the posterior mode. If `true` (the default), `estimate` begins reoptimizing from the model object's parameter vector. See Optimizing or Reoptimizing for more details.
- `calculate_hessian::Bool`: Whether to compute the Hessian. If `true` (the default), `estimate` calculates the Hessian at the posterior mode.
- `n_mh_simulations::Int`: Number of draws from the posterior distribution per block.
- `n_mh_blocks::Int`: Number of blocks to run Metropolis-Hastings.
- `n_mh_burn::Int`: Number of blocks to discard as burn-in for Metropolis-Hastings.
- `mh_thin::Int`: Metropolis-Hastings thinning step.
- `parallel::Bool`: Flag for running the algorithm in parallel.
- `mh_adaptive_accept::Bool`: If true, then the proposal distribution is adapted to achieve a target acceptance rate.
- `mh_target_accept::S`: Target acceptance rate when adaptively adjusting the acceptance probability.
- `mh_c::S = 0.5`: Initial scaling factor for the covariance of the particles when using an adaptive proposal distribution. Controls the size of steps in the mutation step.
- `mh_α::S = 1.0`: The mixture proportion for the mutation step's proposal distribution when using an adaptive proposal distribution. See `?mvnormal_mixture_draw` for details. Note that a value of 0.9 has commonly been used in applications to DSGE models.
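As a sketch, the following shortens a Metropolis-Hastings run for a quick test, assuming a posterior mode and Hessian have already been computed and saved:

m <= Setting(:reoptimize, false)        # start from the previously found mode
m <= Setting(:calculate_hessian, false) # reuse the previously computed Hessian
m <= Setting(:n_mh_blocks, 2)
m <= Setting(:n_mh_simulations, 1000)
estimate(m)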
Sequential Monte Carlo Settings
- `n_particles::Int`: Number of particles.
- `n_smc_blocks::Int`: Number of parameter blocks in the mutation step.
- `n_mh_steps_smc::Int`: Number of Metropolis-Hastings steps to attempt during the mutation step.
- `λ::S`: The "bending coefficient" λ in Φ(n) = (n/N(Φ))^λ.
- `n_Φ::Int`: Number of stages in the tempering schedule.
- `resampling_method::Symbol`: Which resampling method to use.
  - `:systematic`: Will use systematic resampling.
  - `:multinomial`: Will use multinomial resampling.
  - `:polyalgo`: Samples using a polyalgorithm.
- `threshold_ratio::S`: Threshold such that particles will be resampled when the population drops below threshold * N.
- `step_size_smc::S`: Scaling factor for the covariance of the particles. Controls the size of steps in the mutation step.
- `mixture_proportion::S`: The mixture proportion for the mutation step's proposal distribution.
- `target_accept::S`: The initial target acceptance rate for new particles during mutation.
- `use_fixed_schedule::Bool`: Flag for whether or not to use a fixed tempering (ϕ) schedule.
- `adaptive_tempering_target_smc::S`: Coefficient of the sample size metric to be targeted when solving for an endogenous ϕ, or 0.0 if using a fixed schedule.
- `tempered_update_prior_weight::S`: When bridging from an old estimation (i.e. a tempered update), the user can create a bridge distribution as a convex combination of the prior and a previously run estimation. This setting is the relative weight on the prior in the convex combination.
- `smc_iteration::Int`: The iteration index for the number of times SMC has been run on the same data vintage. Primarily for numerical accuracy/testing purposes.
- `previous_data_vintage::String`: The old data vintage from which to start SMC when using a tempered update.
- `debug_assertion::Bool`: Print output (if applicable) when encountering an assertion error during SMC to help with debugging.
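A sketch of adjusting a few SMC-related settings (the values are illustrative; whether SMC is the sampler in use depends on the model's estimation setup):

m <= Setting(:n_particles, 3000)
m <= Setting(:n_Φ, 100)
m <= Setting(:use_fixed_schedule, false)            # use an adaptive tempering schedule
m <= Setting(:adaptive_tempering_target_smc, 0.97)  # target for the sample size metric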
Miscellaneous
- `use_chand_recursion::Bool`: Flag for using Chandrasekhar recursions in the Kalman filter.
Forecasting
- `forecast_jstep::Int`: Forecast thinning step.
- `forecast_block_size::Int`: Number of draws in each forecast block before thinning by `forecast_jstep`.
- `forecast_input_file_overrides::Dict{Symbol, String}`: Maps `input_type`(s) to the file name containing input draws for that type of forecast. See Forecasting.
- `forecast_horizons::Int`: Number of periods to forecast.
- `impulse_response_horizons::Int`: Number of periods for which to calculate IRFs.
- `n_periods_no_shocks::Int`: Number of periods for which no shocks are drawn (e.g. a full-distribution forecast draws shocks, but if `n_periods_no_shocks = 3`, then for 3 periods in the forecast horizon, no shocks will be drawn).
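For instance, a sketch of a five-year forecast horizon that reads draws from a user-specified file (the `:full` input type and the file path are illustrative):

m <= Setting(:forecast_horizons, 20)           # 20 quarters
m <= Setting(:impulse_response_horizons, 40)
m <= Setting(:forecast_input_file_overrides,
             Dict{Symbol, String}(:full => "/path/to/mhsave_vint=200331.h5"))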
Alternative Policy
- `alternative_policy::AltPolicy`: See Alternative Policies.
Accessing Settings
The function `get_setting(m::AbstractModel, s::Symbol)` returns the value of the setting `s` in `m.settings`. Some settings also have explicit getter methods that take only the model object `m` as an argument. Note that not all of these are exported.
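For example, the data vintage can be read either way:

vint = get_setting(m, :data_vintage)   # look up the setting by key
vint == data_vintage(m)                # data_vintage is one of the exported getter methods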
Overwriting Default Settings
To overwrite default settings added during model construction, a user must create a `Dict{Symbol, Setting}` and pass it into the model constructor as the keyword argument `custom_settings`. If the `print`, `code`, and `description` fields of the new `Setting` object are not provided, the fields of the existing setting will be maintained. If new values for `print`, `code`, and `description` are specified, and if these new values are distinct from the defaults for those fields, the fields of the existing setting will be updated.
For example, overwriting `use_parallel_workers` should look like this:
custom_settings = Dict{Symbol, Setting}(
:use_parallel_workers => Setting(:use_parallel_workers, true))
m = Model990(custom_settings = custom_settings)
Or like this:
m = Model990()
m <= Setting(:use_parallel_workers, true)
Note that using this second method will not work for all settings, e.g. `n_anticipated_shocks` is a setting that must be passed into the model during construction, as in the first example.
By default, passing in `custom_settings` overwrites the entries in the model object's `settings` field. However, with the additional keyword argument `testing = true`, it will overwrite the entries in `test_settings`:
m = Model990(custom_settings = custom_settings, testing = true)
Accelerating Computation of Regime-Switching System
Regime-switching state space systems take more time to compute, which can severely slow down estimation and forecasting time. The interface for computing regime-switching systems is written to be easy to use and generic, but, as a result, its default behavior ignores information that could be used to accelerate computation time. We provide some settings that allow the user to specify such information about the state space system.
- `perfect_credibility_identical_transitions::Dict{Int, Int}`: Different regimes may have the same transition equations (`TTT`, `RRR`, and `CCC` matrices; see also Solving the Model). This setting tells the code to use another regime's transition equation rather than recalculate the equation. The keys of this `Dict` are regime numbers, and the values specify the regime to which the keys' regimes are identical. For example, if the setting was `Dict(2 => 1, 3 => 1)`, then we are saying that regimes 2 and 3 have the same transition equations as regime 1. Note that if you are using `gensys2`, then you cannot say that `gensys` regimes have the same transition equations as any regime on which `gensys2` is called, including the terminal period of `gensys2`, even though the transition equations may indeed be the same. The reason is that the `gensys` regimes are computed before the `gensys2` regimes, and to avoid extra calculations, the terminal period is computed during the `gensys2` step. Therefore, trying to copy a `gensys2` regime for a `gensys` regime will cause an error (an attempt to access an undefined reference).
- `identical_eqcond_regimes::Dict{Int, Int}`: Different regimes may have the same equilibrium conditions (see Solving the Model). This setting tells the code to copy another regime's equilibrium conditions rather than recompute the gensys matrices. The keys of this `Dict` are regime numbers, and the values specify the regime to which the keys' regimes are identical. For example, if the setting was `Dict(2 => 1, 3 => 1)`, then we are saying that regimes 2 and 3 have the same equilibrium conditions as regime 1.
- `empty_measurement_equation::Vector{Bool}`: When using time-varying information sets and forward-looking observables, you may need to calculate the transition equations beyond the last period of available data. By default, `compute_system` will also compute the measurement equation for these regimes in the future, which is unnecessary if you are trying to estimate the model. This setting specifies which regimes can have an empty measurement equation (set to be a `Measurement` type with undefined matrices for its fields). A `false` element in the vector means that the measurement equation is nonempty, while a `true` element means an empty measurement equation. The length of the vector should equal the number of regimes, with the indices of the vector corresponding to regime numbers.
- `empty_pseudo_measurement_equation::Vector{Bool}`: Same as `empty_measurement_equation` but for the pseudo-measurement equation. For estimations, you can omit all the pseudo-measurement equations since they are unnecessary.
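Combining these settings, the sketch below tells the code that regimes 2 and 3 reuse regime 1's transition equations and equilibrium conditions and that, in a four-regime system, only the first three regimes need a measurement equation (the regime counts are placeholders):

m <= Setting(:perfect_credibility_identical_transitions, Dict(2 => 1, 3 => 1))
m <= Setting(:identical_eqcond_regimes, Dict(2 => 1, 3 => 1))
m <= Setting(:empty_measurement_equation, [false, false, false, true])
m <= Setting(:empty_pseudo_measurement_equation, fill(true, 4))   # skip all pseudo-measurement equations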
Regime-Switching Forecasts
Forecasts can involve state-space systems with exogenous and unanticipated regime-switching in the history periods and forecast horizon. Anticipated temporary alternative policies can also occur in both the history and the forecast horizon. Historical regime switching may occur to reflect structural breaks or to allow a DSGE to handle special circumstances, such as the COVID-19 pandemic. Regime switches in the forecast horizon may occur because agents expect a ZLB until some date in the future. In a rational expectations equilibrium, agents will behave differently if they know a forecasted policy is temporary rather than permanent. Using exogenous regime-switching along with a modified gensys
solution algorithm is one way of implementing this expectation. See Regime-Switching for more details on the solution algorithm.
In this section, we will go over the interface for running regime-switching forecasts and discuss some details of the implementation. It is useful to also look at the posted example script for regime-switching. To understand how to implement your own regime-switching model, we recommend examining the implementation of regime-switching equilibrium conditions for Model1002
and how it is integrated with our solvers. For a guide to running permanent and/or temporary alternative policies, please see Alternative Policies.
Preparing a Model's Settings for Regime-Switching
Suppose we wanted to run a regime-switching forecast, where the regimes are 1959:Q3-1989:Q4, 1990:Q1-2019:Q3, and 2019:Q4 to the end of the forecast horizon. The following lines are required:
m <= Setting(:regime_switching, true)
m <= Setting(:regime_dates, Dict{Int, Date}(1 => Date(1959, 9, 30), 2 => Date(1990, 3, 31),
                                            3 => Date(2019, 12, 31)))
The first setting turns on regime switching. Internally, functions like forecast_one
will decide whether to use regime switching or not depending on whether get_setting(m, :regime_switching)
is true and whether there are actually multiple regimes specified by :regime_dates
. The second setting is a Dict
mapping the regime number to the first date (inclusive) of that regime.
Before running a forecast, we must also run
setup_regime_switching_inds!(m; cond_type = cond_type)
which will automatically compute the (required) settings:

- `:reg_forecast_start`: Regime in which the forecast starts.
- `:reg_post_conditional_end`: Regime of the period after the last conditional forecast period.
- `:n_regimes`: Number of total regimes. If this is 1, then regime switching will not occur.
- `:n_hist_regimes`: Number of regimes in the history.
- `:n_fcast_regimes`: Number of regimes in the forecast horizon (including the conditional forecast).
- `:n_cond_regimes`: Number of regimes in the conditional forecast.
These settings will generally depend on whether the forecast is conditional or not, so the user needs to pass in cond_type = :full
or cond_type = :semi
to setup_regime_switching_inds!
if the user wants a forecast with correct regime-switching.
Finally, to run a full-distribution forecast with regime-switching using forecast_one
or usual_model_forecast
, it is necessary to manually construct the matrix of parameter draws and pass it as an input with the keyword `params`. Currently, we have not fully implemented loading parameters from a saved estimation file. For an example of how to do this, see this example script; a rough sketch is also given below.
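A rough sketch (not the package's own loading routine): repeat the model's current parameter vector to form a small draws matrix and pass it via `params`; the output variables and conditional type are illustrative.

n_draws = 10
modal_params = map(x -> x.value, m.parameters)   # current (e.g. modal) parameter values
params = repeat(modal_params', n_draws, 1)       # n_draws x n_parameters matrix of draws
forecast_one(m, :full, :none, [:forecastobs]; params = params)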
Time-Varying Information Sets
In many applications with regime-switching, changes in information sets may occur. For example, say that in 1959:Q1 - 2007:Q3, people expected the Federal Reserve to always use a Taylor-style monetary policy rule, but in 2007:Q4, people realize that the Federal Reserve will implement a zero lower bound for `N` periods before switching back to the Taylor-style rule. The measurement equation used in 2007:Q4 and subsequent periods needs to account for this change in the information set. In particular, quantities like anticipated nominal rates and 10-year inflation rates involve forecasting the expected state of the economy. If the transition matrices (e.g. `TTT`) are time-varying, and agents' information set includes knowledge that the matrices are time-varying, then the measurement equation should account for it. Explicitly, assume the state space evolves according to

$$s_{t + 1} = T_{t + 1} s_t + R_{t + 1} \epsilon_{t + 1} + C_{t + 1}.$$
If, in period `t`, the measurement equation includes the anticipated nominal rate in $k$ periods, and agents know that the transition equations are time-varying over some horizon $H$, then we need to calculate that expectation taking into account that agents know about the time variation in the matrices, e.g. $T_{t + 1}, \dots, T_{t + H}$. This approach allows for varying degrees of myopia, e.g. $H = 0$ implies that agents do not know about any time variation while $H < k$ captures the case that agents only know about the time variation up to a certain horizon forward.
To help the user write the correct measurement equation with time-varying transition equations, we have implemented the following two functions:
DSGE.k_periods_ahead_expectations — Function

k_periods_ahead_expectations(TTT, CCC, TTTs, CCCs, t, k; permanent_t = length(TTTs),
                             integ_series = false, memo = nothing)

calculates the matrices associated with the expected state `k` periods ahead from period `t`. This function should NOT be used with linear state space system matrices with any unit roots.
The `TTT` and `CCC` inputs are the transition matrix and constant vector associated with the current period `t`, while `TTTs` and `CCCs` are vectors containing the time-varying transition matrices and constant vectors, such that `TTTs[t]` retrieves the time-varying transition matrix associated with period `t` and `TTTs[t + k]` retrieves the time-varying transition matrix associated with period `t + k`. The optional argument `permanent_t` indicates the period for which the matrices/vectors are no longer time-varying, i.e. if `t >= permanent_t`, then `TTTs[permanent_t]` is the transition matrix.
The formula implemented by this function is
𝔼ₜ[sₜ₊ₖ] = (∏ⱼ₌₁ᵏ Tₜ₊ⱼ) sₜ + (∑ₘ₌₁ᵏ⁻¹ (∏ⱼ₌ₘ₊₁ᵏ Tₜ₊ⱼ) Cₜ₊ₘ) + Cₜ₊ₖ.
Additional simplifications are made if it is known that t + k > permanent_t
since this implies some matrices are the same. This recognition reduces unnecessary computations.
Keyword Arguments
- `integ_series::Bool`: set to true if there are some transition matrices in `TTTs` that result in integrated series, in which case we cannot speed up computations by using left-divides.
- `memo::Union{ForwardExpectationsMemo, Nothing}`: pass a properly formed `ForwardExpectationsMemo` to avoid calculating unnecessary products and powers of the matrices in `TTTs`. Typically, the memo you want to compute is
# min_t is the minimum t you will use, max_t is the maximum t you will use, and
# max_k is the maximum window for forward expectations.
memo = ForwardExpectationsMemo(TTTs, min_t, length(TTTs), length(TTTs), min_t + max_k - length(TTTs),
max_t + max_k + 1 - length(TTTs))
DSGE.k_periods_ahead_expected_sums — Function

k_periods_ahead_expected_sums(TTT, CCC, TTTs, CCCs, t, k; permanent_t = length(TTTs),
                              integ_series = false, memo = nothing)

calculates the matrices associated with the sum of the expected states between periods `t + 1` and `t + k`. This function should NOT be used with linear state space system matrices with any unit roots.
The `TTT` and `CCC` inputs are the transition matrix and constant vector associated with the current period `t`, while `TTTs` and `CCCs` are vectors containing the time-varying transition matrices and constant vectors, such that `TTTs[t]` retrieves the time-varying transition matrix associated with period `t` and `TTTs[t + k]` retrieves the time-varying transition matrix associated with period `t + k`. The optional argument `permanent_t` indicates the period for which the matrices/vectors are no longer time-varying, i.e. if `t >= permanent_t`, then `TTTs[permanent_t]` is the transition matrix.
The formula implemented by this function is
∑ⱼ₌₁ᵏ 𝔼ₜ[sₜ₊ⱼ] = ∑ⱼ₌₁ᵏ (∏ₘ₌₁ʲ Tₜ₊ₘ) sₜ + ∑ᵣ₌₁ᵏ⁻¹ (I + ∑ⱼ₌ᵣ₊₁ᵏ (∏ₘ₌ᵣ₊₁ʲ Tₜ₊ₘ)) Cₜ₊ᵣ + Cₜ₊ₖ.
Additional simplifications are made if it is known that t + k > permanent_t
since this implies some matrices are the same. This recognition reduces unnecessary computations.
Keyword Arguments
- `integ_series::Bool`: set to true if there are some transition matrices in `TTTs` that result in integrated series, in which case we cannot speed up computations by using left-divides.
- `memo::Union{ForwardMultipleExpectationsMemo, Nothing}`: pass a properly formed `ForwardMultipleExpectationsMemo` to avoid calculating unnecessary products and powers of the matrices in `TTTs`. Typically, the memo you want to compute is
# min_t is the minimum t you will use, max_t is the maximum t you will use, and
# max_k is the maximum window for forward expectations.
memo = ForwardMultipleExpectationsMemo(TTTs, min_t, length(TTTs), length(TTTs), min_t + max_k - length(TTTs),
max_t + max_k + 1 - length(TTTs))
See the measurement equation for Model 1002 for an example of how these functions are used.
To accelerate the computation time for these functions, we have also implemented types that create memos of the needed products and powers of the $\{T_{t + j}\}_{j = 1}^{k}$ matrices. See
DSGE.ForwardExpectationsMemo — Type

ForwardExpectationsMemo(TTTs::Vector{<: AbstractMatrix{S}},
                        current_regime::Int64, last_tv_period::Int64,
                        first_perm_period::Int64, min_perm_power::Int64 = 0,
                        max_perm_power::Int64 = 0) where {S <: Real}
computes the memo dictionaries of the necessary products/powers of TTTs for computing forward expectations of states and sums of states.
Inputs
- `TTTs`: complete sequence of time-varying transition matrices from regime 1 to the final regime
- `current_regime`: the current period's regime
- `last_tv_period`: the last period in which the transition matrix is believed to have changed relative to the previous period.
- `first_perm_period`: the first period in which the transition matrix in `TTTs` is permanently imposed. This period should be from the perspective of an omniscient econometrician rather than the agents' perspective.
- `min_perm_power`: minimum power of the permanent matrix to be calculated, i.e. we calculate at least `TTTs[first_perm_period] ^ min_perm_power`
- `max_perm_power`: maximum power of the permanent matrix to be calculated, i.e. we calculate at most `TTTs[first_perm_period] ^ max_perm_power`
Notes
- To clarify what `last_tv_period` should be, note that if every matrix in `TTTs` is time-varying, AND agents believe every matrix is time-varying, then `last_tv_period = length(TTTs)`. However, if agents are myopic and believe that the last time-varying matrix is the second-to-last one, then `last_tv_period = length(TTTs) - 1`. This flexibility allows the user to specify different degrees of awareness about `TTTs`.
- Note that when `last_tv_period == first_perm_period`, we do not calculate

  time_varying_memo[last_tv_period] = TTTs[first_perm_period] * TTTs[first_perm_period - 1] * ... * TTTs[t + 1]

  Instead, we calculate

  time_varying_memo[last_tv_period] = TTTs[first_perm_period - 1] * ... * TTTs[t + 1]

  The reason is that it is more efficient to include that first `TTTs[first_perm_period]` in the powers of `TTTs[first_perm_period]` computed for `permanent_memo`. So if `length(TTTs) == 7`, to get the correct `k`-periods-ahead forward expectations from some period `t < 7`, you need to run
memo = ForwardExpectationsMemo(TTTs, t, 7, 7, 1, t + k + 1 - 7)
# or equivalently . . .
# memo = ForwardExpectationsMemo(TTTs, t, length(TTTs), length(TTTs), 1, t + k + 1 - length(TTTs))
DSGE.ForwardMultipleExpectationsMemo — Type

ForwardMultipleExpectationsMemo(TTTs::Vector{<: AbstractMatrix{S}},
                                current_regime::Int64, min_last_tv_period::Int64, max_last_tv_period::Int64,
                                first_perm_period::Int64, min_perm_power::Int64 = 0,
                                max_perm_power::Int64 = 0) where {S <: Real}
computes the memo dictionaries of the necessary products/powers of TTTs for computing forward expectations of states and sums of states for multiple different horizon lengths for the "forward-looking" window, i.e. variation in how many periods ahead for which expectations are taken. This approach avoids repeating computations that would be incurred if the user instead repeatedly created ForwardExpectationsMemo
for each horizon length.
Inputs
- `TTTs`: complete sequence of time-varying transition matrices from regime 1 to the final regime
- `current_regime`: the current period's regime
- `min_last_tv_period`: the minimum last period in which the transition matrix is believed to have changed relative to the previous period.
- `max_last_tv_period`: the maximum last period in which the transition matrix is believed to have changed relative to the previous period.
- `first_perm_period`: the first period in which the transition matrix in `TTTs` is permanently imposed. This period should be from the perspective of an omniscient econometrician rather than the agents' perspective.
- `min_perm_power`: minimum power of the permanent matrix to be calculated, i.e. we calculate at least `TTTs[first_perm_period] ^ min_perm_power`
- `max_perm_power`: maximum power of the permanent matrix to be calculated, i.e. we calculate at most `TTTs[first_perm_period] ^ max_perm_power`
Notes
- To clarify what `min_last_tv_period` and `max_last_tv_period` should be, first consider the meaning of the input argument `last_tv_period` for `ForwardExpectationsMemo`. If every matrix in `TTTs` is time-varying, AND agents believe every matrix is time-varying, then `last_tv_period = length(TTTs)`. However, if agents are myopic and believe that the last time-varying matrix is the second-to-last one, then `last_tv_period = length(TTTs) - 1`. This flexibility allows the user to specify different degrees of awareness about `TTTs`.
The arguments `min_last_tv_period` and `max_last_tv_period` allow the user to construct one memo object for computing varying lengths of forward expectations. For example, suppose we want expectations of the nominal rate from 1 to 6 periods ahead in the measurement equation in regime `t`. Then you should run

ForwardMultipleExpectationsMemo(TTTs, t, t + 1, t + 6, ...) # ellipsis omits the remaining input args
- Note that when `max_last_tv_period == first_perm_period`, we do not calculate

  time_varying_memo[max_last_tv_period] = TTTs[first_perm_period] * TTTs[first_perm_period - 1] * ... * TTTs[t + 1]

  Instead, we calculate

  time_varying_memo[max_last_tv_period] = TTTs[first_perm_period - 1] * ... * TTTs[t + 1]

  The reason is that it is more efficient to include that first `TTTs[first_perm_period]` in the powers of `TTTs[first_perm_period]` computed for `permanent_memo`.
The first type is mainly an "under the hood" type for `k_periods_ahead_expectations`. The second type is a wrapper type that constructs all the memos needed to implement forward expectations of levels and sums in an efficient manner. We have automated the construction of memos with the Boolean Settings `use_forward_expectations_memo` and `use_forward_expected_sum_memo`. The first Setting indicates that a memo type will be used for calls to `k_periods_ahead_expectations`, and the second indicates that a memo type will be used for calls to `k_periods_ahead_expected_sums`. It is assumed by default that the last matrix in the sequence `TTTs` in a `RegimeSwitchingSystem` is the first period in which a `TTT` matrix permanently applies (hence we may assume that in all future periods the `TTT` is the same), but if this is not the case, then the user needs to specify the correct regime with the Setting `memo_permanent_policy_regime::Int`.
For details on how we implement a state space system with time-varying information sets, see The `TimeVaryingInformationSetSystem` Type. For guidance on how to use this type, e.g. calculating forecasts, see this example script.
Available Types of Regime Switching
There are three cases involving regime switching that are implemented in DSGE.jl
- Exogenous and unanticipated regime switching (e.g. unanticipated regime-switching parameters)
- Alternative policies (temporary and permanent)
- Time-varying information sets
To implement regime-switching parameters or use temporary alternative policies, see this example script on regime-switching forecasts. This documentation on temporary alternative policies will also be helpful. For further details on regime-switching parameters, see the documentation for ModelConstructors.jl. To implement time-varying information sets, see this example script.
If the user wants to combine exogenous regime switching in both parameters and policies, then the user may find it useful to distinguish between model and parameter regimes. For example, when implementing a temporary alternative policy, we typically treat each period of the temporary policy as a distinct regime, but the parameters of the model may remain constant across these regimes of the temporary policy. To distinguish the two, we implement in ModelConstructors.jl a second interface for changing parameter regimes. Aside from
toggle_regime!(p::Parameter, regime::Int)
for example, we also have the syntax
toggle_regime!(p::Parameter, model_regime::Int, d::AbstractDict{Int, Int})
The latter syntax uses a dictionary to map a model regime to the correct parameter regime. As an example, suppose that across 2020:Q1-Q4, I implement a temporary ZLB, and I assume that some parameters also switch regimes during this period. Then I may want to write
d = Dict(1 => 1, # First model and parameter regime coincide (history until 2019:Q4).
2 => 2, # Regimes 2-5 represent 2020:Q1-Q4 and map to the second parameter regime
3 => 2,
4 => 2,
5 => 2,
6 => 1) # Starting in regime 6 (2021:Q1), the parameters switch back to the same values from before 2020.
For more details, see the regime toggling in Model 1002's eqcond
and the documentation for ModelConstructors.jl.
Once the regime-switching settings are properly created, the syntax for running a forecast is the same as when there is no regime-switching. See Forecasting.
Handling of the Zero Lower Bound (ZLB)
The New York Fed DSGE model can handle the ZLB in two ways.
In the first way, the New York Fed DSGE model treats the ZLB as a temporary alternative policy over a pre-specified horizon. In the second way, the New York Fed DSGE model treats the ZLB as a separate regime in which anticipated monetary policy shocks become "alive" and have positive standard deviations. However, this second form of the ZLB is not implemented as a separate regime. The reason is the only difference in the "pre-ZLB" and "post-ZLB" regimes is whether or not anticipated monetary policy shocks are non-zero. For an example, see the smoothing code as well as the auxiliary functions zlb_regime_matrices
and zlb_regime_indices
in this file.
This approach saves computational time. Rather than creating redundant matrices, we directly zero/un-zero the appropriate entries in the pre- and post-ZLB QQ
matrices. This approach also economizes on unnecessary switching, For instance, during the calculation of shock decompositions and trends, it is unnecessary to distinguish between the pre- and post-ZLB regimes.
Alternative Policy Uncertainty and Imperfect Awareness
The standard alternative policy code assumes that people completely believe the change in policy. However, in many cases, the more realistic modeling choice is assuming some uncertainty or imperfect awareness/credibility about the policy change. This approach can also partially address the concern that expectations have counterfactually strong effects in standard DSGEs (e.g. the forward guidance puzzle).
Theory
We model imperfect awareness by assuming there are $n$ possible alternative policies that may occur tomorrow and $n$ probability weights assigned to each policy. Further, it is believed that the alternative policy which occurs tomorrow will be permanent. One of the policies is the alternative policy which is actually implemented. The function gensys_uncertain_altpol
calculates the state space transition equation implied by these beliefs. A typical application is assuming that with probability $p$ some alternative policy occurs tomorrow and with probability $1-p$ the historical policy occurs tomorrow.
Imperfect awareness can occur in multiple periods and feature time-varying credibility by assuming myopia. For example, say agents in period $t$ believe the central bank will implement AIT in $t + 1$ and all subsequent periods with probability $p_t$ and the historical rule otherwise. After period $t + 1$ occurs and the central bank actually implements AIT, agents again believe that in period $t + 2$ and all subsequent periods, the central bank will implement AIT with probability $p_{t + 1}$ and the historical rule otherwise.
Imperfect awareness is robust to temporary alternative policies but requires the algorithm to account for time variation in the transition equation. In particular, we need to first compute the entire sequence of transition equations under the temporary alternative policy with perfect credibility. Once this sequence is available, we can then treat the temporary alternative policy as the alternative policy which is actually implemented and apply the same calculations described in the previous paragraph to each period of the temporary alternative policy.
Implementation
To apply imperfect awareness, the user needs to specify the possible alternative policies and the probability weights on these policies.
The alternative policy which is actually implemented should be added to the :regime_eqcond_info
dictionary as an EqcondEntry
, e.g.
get_setting(m, :regime_eqcond_info)[2] = DSGE.EqcondEntry(DSGE.ngdp())
The other alternative policies that agents believe may occur are then added as follows:
m <= Setting(:alternative_policies, [altpolicy1, altpolicy2]) # altpolicy1 and altpolicy2 are AltPolicy instances
The user specifies the probability weights when creating the EqcondEntry
instance for the :regime_eqcond_info
dictionary, e.g.
DSGE.EqcondEntry(DSGE.ngdp(), [p_t, 1 - p_t])
This approach permits time-variation in the probability weight because the user can use different p_t
for each regime, e.g.
get_setting(m, :regime_eqcond_info)[2] = DSGE.EqcondEntry(DSGE.ngdp(), [.5, .5])
get_setting(m, :regime_eqcond_info)[3] = DSGE.EqcondEntry(DSGE.ngdp(), [1., 0.])
Finally, before solving for the state space system or running forecasts, the user needs to add the line
m <= Setting(:uncertain_altpolicy, true)
To use imperfect awareness with a temporary altpolicy (e.g. ZLB), the user needs to also add the following lines to the model's setup:
m <= Setting(:uncertain_temporary_altpolicy, true)
m <= Setting(:temporary_altpolicy_names, keys_of_temp_altpols) # e.g. keys_of_temp_altpols = [:zlb_rule] or [:zero_rate]
m <= Setting(:temporary_altpolicy_length, n_zlb_regs)
The first line indicates that a temporary altpolicy with imperfect awareness applies. The second line specifies which alternative policies (based on their keys) should be recognized as temporary policies and is used to infer the regimes on which `gensys2` should be called. The third line indicates the number of regimes for which the temporary altpolicy occurs. If the third line is not specified, then it is assumed that all regimes in `get_setting(m, :regime_eqcond_info)` except the last one are temporary altpolicy regimes. This assumption can be wrong, for example, if credibility changes after the temporary altpolicy ends.
For further guidance on adding imperfect awareness, please see the script uncertainaltpolicyzlb.jl.
Forward-Looking Variables in the Measurement and Pseudo-Measurement Equations
The measurement and pseudo-measurement equations often include "forward-looking" observables, such as the anticipated nominal interest rate and the expected average inflation rate over the next ten years. The measurement equations for these observables are therefore affected when imperfect awareness is assumed. Say $ZZ_1$ and $ZZ_2$ are the measurement equation matrices mapping states to observables under two different monetary policy rules which may occur (with probability weights $p$ and $1 - p$, respectively) and that all the observables are forward-looking. For simplicity, additionally assume that the associated $DD_1$ and $DD_2$ are both zero. Because agents at the end of period $t$ believe that either policy 1 or policy 2 occurs permanently in period $t + 1$, the measurement equation agents use to map states to data is just the weighted average of the measurement matrices, i.e.

$$ZZ = p \, ZZ_1 + (1 - p) \, ZZ_2.$$

The reason is that, conditional on alternative policy 1 occurring, the observables in $t + 1$ should be

$$y_{t + 1} = ZZ_1 s_{t + 1}.$$

Similarly, if policy 2 occurs, then

$$y_{t + 1} = ZZ_2 s_{t + 1}.$$

The law of iterated expectations gives us the desired result.
The user does not need to worry about coding their measurement equations to account for this, as long as the measurement equation will properly compute $ZZ_i$, given the policies specified in the settings :regime_eqcond_info
and :alternative_policies
. DSGE.jl will handle the calculation of the convex combinations under the hood. The only setting which users are advised to add is one that indicates which rows of $ZZ$ are associated with forward-looking observables, e.g.
m <= Setting(:forward_looking_observables, [:obs_longinflation, :obs_nominalrate1])
m <= Setting(:forward_looking_pseudo_observables, [:Expected10YearNaturalRate])
If such a setting exists, then DSGE.jl will only calculate the weighted average for the rows associated with these observables/pseudo-observables. Otherwise, we compute the weighted average of the different measurement matrices. This latter approach will always work, but it comes at the cost of unnecessary operations.
Imperfect Awareness with Temporary Policies as Alternative Policies
The previous documentation generally assumes that the alternative policies which people believe may occur are one-regime and permanent policies. However, it is possible that agents are imperfectly aware over alternative policies that involve temporary policies and thus require the use of gensys2
. The only change the user needs to make is to use a `MultiPeriodAltPolicy` type rather than an `AltPolicy` type when populating the Setting `alternative_policies`. See Types for documentation on the fields of a `MultiPeriodAltPolicy`. As an example, the code snippet below implements a temporary ZLB as the alternative policy, assuming the existence of a regime-switching model instance `m`.
# Alternative Policy 1: default/historical rule
altpol1 = default_policy()
# Alternative Policy 2: ZLB starting in regime 3 and ending in regime 5, and flexible AIT starting in regime 6
new_reg_eqcond_info = Dict(3 => EqcondEntry(zlb_rule(), reg3_weights), # reg3_weights specifies whatever weights
4 => EqcondEntry(zlb_rule(), reg4_weights), # the user wants in regime 3, etc.
5 => EqcondEntry(zlb_rule(), reg5_weights),
6 => EqcondEntry(flexible_ait(), reg6_weights))
new_infoset = [1:1, 2:2, [i:6 for i in 3:6]..., [i:i for i in 7:get_setting(m, :n_regimes)]...]
delete!(m.settings, :alternative_policies) # if :alternative_policies already exists, then a type error may occur
altpol2 = MultiPeriodAltPolicy(:temporary_zlb, get_setting(m, :n_regimes),
new_reg_eqcond_info, gensys2 = true,
temporary_altpolicy_names = [:zlb_rule],
temporary_altpolicy_length = 3,
infoset = new_infoset)
# Both AltPolicy and MultiPeriodAltPolicy are subtypes of AbstractAltPolicy
m <= Setting(:alternative_policies, DSGE.AbstractAltPolicy[altpol1, altpol2])
Automatically Generating Anticipated Shocks
We have implemented some functionality for automatically adding anticipated shocks for Model1002
. To add these shocks, the user must pass custom settings into the constructor using the custom_settings
keyword. The available settings for defining these shocks are:
- `antshocks::Dict{Symbol, Int}`: A dictionary mapping the name of an anticipated shock to the number of periods of anticipation, e.g. `:b => 2` adds anticipated `b` shocks up to two periods ahead.
- `ant_eq_mapping::Dict{Symbol, Symbol}`: A dictionary mapping the name of an anticipated shock to the name of the state variable in the equation defining the shock's exogenous process, e.g. `:b => :b` maps an anticipated `b` shock to the equation `eq_b`.
- `ant_eq_E_mapping::Dict{Symbol, Symbol}`: A dictionary mapping the name of an anticipated shock to the name of the state variable in the equation defining the shock's one-period-ahead expectation, e.g. `:b => :Eb` maps an anticipated `b` shock to the equation `eq_Eb`, where `Eb` is $E_t[b_{t + 1}]$.
- `proportional_antshocks::Vector{Symbol}`: A vector of the names of one-period-ahead anticipated shocks which are specified as directly proportional to the realizations of the current period's unanticipated shocks. For a shock `b`, the automatically generated parameter `σ_b_prop` defines the proportionality to the current period shock, e.g. a value of 1 indicates an anticipated shock in the next period of the same size as the current period's unanticipated shock.
As an example, the following code creates an instance of Model1002
with anticipation of b
shocks up to two periods ahead.
custom_settings = Dict{Symbol, Setting}(:antshocks => Setting(:antshocks, Dict{Symbol, Int}(:b => 2)),
:ant_eq_mapping => Setting(:ant_eq_mapping, Dict{Symbol, Symbol}(:b => :b)))
m = Model1002("ss10"; custom_settings = custom_settings)
Automatic Endogenous ZLB Enforcement as Temporary Rule
The user can enforce the ZLB during the forecast horizon in two ways. The default approach uses unanticipated monetary policy shocks. Instead, the user can also use the temporary ZLB machinery to enforce the ZLB. This enforcement is endogenous in the sense that, conditional on a draw of shocks, we want to figure out the required length of a temporary ZLB that will deliver non-negative interest rates throughout the horizon.
The enforcement is automated by trading off two objectives. First, we want the length of the ZLB to be minimal so that the ZLB is not unnecessarily accommodative, unless it is specifically desired for the ZLB to extend to at least some date. Second, we want to maintain a reasonable computational time. Finding a minimal ZLB length when there are multiple disconnected periods of negative interest rates would be prohibitively expensive because expecting more periods of temporary ZLB in the future affects agents' expectations today, and changing the number of periods of temporary ZLB in the past affects the future evolution of states.
Instead, we endogenously enforce the ZLB only for the first connected sequence of periods with negative interest rates and use unanticipated monetary policy shocks for future sequences of periods with negative rates. Our algorithm proceeds as follows.
1. Forecast without any periods of temporary ZLB (unless a minimum length is specified) and find the first connected sequence of periods with negative interest rates.
2. Guess a sequence of temporary ZLB regimes that covers this first sequence of periods with negative interest rates.
3. If the forecast under the temporary policy from step 2 successfully enforces the ZLB over that first sequence and does not introduce negative rates after liftoff from the ZLB, then test whether shorter ZLBs will also enforce it. Otherwise, extend the sequence of temporary ZLB regimes using the same approach as step 2.
4. Once we have successfully found a minimum length that guarantees non-negative rates for the first sequence of periods, re-run the forecast using unanticipated monetary policy shocks to enforce any other sequences of periods with negative rates.
Note that sometimes extending the sequence of temporary ZLB regimes will cause two disjoint sequences of periods with negative rates to become contiguous, in which case we treat the two disjoint sequences as one connected sequence thereafter.
To use this method, the user runs a forecast as follows
# (optional) maximum permitted length for temporary ZLB regimes in the forecast
m <= Setting(:max_temporary_altpol_length, max_zlb_length)
# (optional) minimum permitted length for temporary ZLB regimes and the
# ZLB regimes are assumed to start in the first period of the forecast
m <= Setting(:min_temporary_altpol_length, min_zlb_length)
# (optional) length of the contiguous ZLB prior to the first period of the
# forecast, ending in the regime prior to the start of the forecast
m <= Setting(:historical_temporary_altpolicy_length, hist_zlb_length)
forecast_one(m, input_type, cond_type, output_vars; rerun_smoother = true,
zlb_method = :temporary_altpolicy,
set_regime_vals_altpolicy = my_set_regime_vals_altpolicy_fnct,
set_info_sets_altpolicy = my_set_info_sets_altpolicy_fnct,
update_regime_eqcond_info! = my_update_regime_eqcond_info_fnct!,
nan_endozlb_failures = false)
The keyword arguments are briefly described below. For more details, see the docstring for `forecast_one`.

- `rerun_smoother::Bool`: Needs to be true if the current sequence of temporary ZLB regimes starts during the history or conditional horizon, because changing the length of the temporary ZLB affects the smoothed estimate of the state at the start of the forecast.
- `zlb_method::Symbol`: Set to `:temporary_altpolicy` to enforce the ZLB as a temporary policy. Otherwise, unanticipated monetary policy shocks will be used.
- `set_regime_vals_altpolicy::Function`: If there are regime-switching parameters, this function is needed to figure out what parameter values should be assigned to the new model regimes added when extending the temporary ZLB length.
- `set_info_sets_altpolicy::Function`: If the Setting `tvis_information_set` is used, then we need to specify how to update `tvis_information_set` as new model regimes are added to extend the temporary ZLB length.
- `update_regime_eqcond_info!::Function`: Specifies how to update `regime_eqcond_info` to include more or fewer regimes of temporary ZLB.
- `nan_endozlb_failures::Bool`: Sometimes the ZLB cannot be enforced because rates are negative even when the ZLB extends throughout the entire forecast horizon, or because the max ZLB length is reached. By default, we enforce the remainder of the forecast horizon with unanticipated monetary policy shocks. If this kwarg is true, we return `NaN`s rather than use unanticipated shocks.
For further guidance on forecasting with an endogenously enforced ZLB, please see the script imperfectawarenesstempzlb_ait.jl with the keyword endozlb set to true.
Editing or Extending a Model
Users may want to extend or edit Model990
in a number of different ways. The most common changes are listed below, in decreasing order of complexity:
1. Add new parameters
2. Modify equilibrium conditions or measurement equations
3. Change the values of various parameter fields (i.e. initial `value`, `prior`, `transform`, etc.)
4. Change the values of various computational settings (i.e. `reoptimize`, `n_mh_blocks`)
Points 1 and 2 often go together (adding a new parameter guarantees a change in equilibrium conditions), and are such fundamental changes that they increment the model specification number and require the definition of a new subtype of AbstractModel
(for instance, Model991
). See Model specification for more details.
Any changes to the initialization of preexisting parameters are defined as a new model sub-specification, or subspec. While less significant than a change to the model's equilibrium conditions, changing the values of some parameter fields (especially priors) can have economic significance over and above settings we use for computational purposes. Parameter definitions should not be modified in the model object's constructor. First, incrementing the model's sub-specification number when parameters are changed improves model-level (as opposed to code-level) version control. Second, it avoids potential output filename collisions, preventing the user from overwriting output from previous estimations with the original parameters. The protocol for defining new sub-specifications is described in Model sub-specifications.
Model specification (`m.spec`)
A particular model, which corresponds to a subtype of AbstractModel
, is defined as a set of parameters, equilibrium conditions (defined by the eqcond
function) and measurement equations (defined by the measurement
function). Therefore, the addition of new parameters, states, or observables, or any changes to the equilibrium conditions or measurement equations necessitate the creation of a new subtype of AbstractModel.
To create a new model object, we recommend doing the following:
- Duplicate the `m990` directory within the models directory. Name the new directory `mXXX.jl`, where `XXX` is your chosen model specification number or string. Rename `m990.jl` in this directory to `mXXX.jl`.
- In the `mXXX/` directory, change all references to `Model990` to `ModelXXX`.
- Edit the `mXXX.jl`, `eqcond.jl`, and `measurement.jl` files as you see fit. If adding new states, equilibrium conditions, shocks, or observables, be sure to add them to the appropriate list in `init_model_indices`.
- Open the module file, `src/DSGE.jl`. Add `ModelXXX` to the list of functions to export, and include each of the files in `src/model/mXXX`.
It is very important that you include the default settings by adding the line default_settings!(m)
inside the function that creates a new instance of your model. Otherwise, many methods in DSGE.jl will fail because they assume many settings have default values that are set by default_settings!
.
Model sub-specifications (`m.subspec`)
Model990
sub-specifications are initialized by overwriting initial parameter definitions before the model object is fully constructed. This happens via a call to init_subspec
in the Model990
constructor. (Clearly, an identical protocol should be followed for new model types as well.)
To create a new sub-specification (e.g., subspec 1) of `Model990`, edit the file `src/models/subspecs.jl` as follows (note that this example is not actually sub-specification 1 of `Model990`; in the source code, our sub-specification 5 is provided as an additional example):
Step 1. Define a new function, ss1
, that takes an object of type Model990
(not AbstractModel
!) as an argument. In this function, construct new parameter objects and overwrite existing model parameters using the <=
syntax. For example,
function ss1(m::Model990)
m <= parameter(:ι_w, 0.000, (0.0, .9999), (0.0,0.9999), DSGE.Untransformed(), Normal(0.0,1.0), fixed=false,
description="ι_w: Some description.",
tex_label="\\iota_w")
m <= parameter(:ι_p, 0.0, fixed=true,
               description="ι_p: Some description",
               tex_label="\\iota_p")
end
Step 2. Add an elseif
condition to init_subspec
:
...
elseif subspec(m) == "ss1"
return ss1(m)
...
To construct an instance of Model990
, ss1
, call the constructor for Model990
with ss1
as an argument. For example,
m = Model990("ss1")
Additional Tips
- The file `abstractdsgemodel.jl` defines numerous auxiliary functions, which allow the user to more easily call standard settings or count the number of dimensions for important variables. For example, `data_vintage(m)` returns the vintage of the data specified by the model object `m`. Additionally, see `abstractmodel.jl` in ModelConstructors.jl for more functions like `n_observables(m)`, which returns the number of observables in `m`.