estimate,hiddenDiffusion-method {BaPreStoPro}    R Documentation
Description

Bayesian estimation of the model Z_i = Y_{t_i} + ε_i, dY_t = b(φ, t, Y_t) dt + γ \widetilde{s}(t, Y_t) dW_t, ε_i ~ N(0, σ^2), Y_{t_0} = y_0(φ, t_0), with a particle Gibbs sampler.
Usage

## S4 method for signature 'hiddenDiffusion'
estimate(model.class, t, data, nMCMC, propSd, adapt = TRUE,
  proposal = c("normal", "lognormal"), Npart = 100)
Arguments

model.class
    class of the hidden diffusion model including all required information; see set.to.class
t
    vector of time points

data
    vector of observation variables

nMCMC
    length of the Markov chain

propSd
    vector of proposal variances for φ

adapt
    if TRUE (default), the proposal variance is adapted

proposal
    proposal density: "normal" (default) or "lognormal" (for positive parameters)

Npart
    number of particles in the particle Gibbs sampler
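A minimal sketch of how the tuning arguments fit together, assuming a model object and data created with set.to.class and simulate as in the Examples below; the specific values for nMCMC, propSd and Npart are illustrative assumptions, not recommendations:

# illustrative values only; model, t and data as in the Examples below
est <- estimate(model, t, data$Z, nMCMC = 5000,
                propSd = 0.1,            # proposal variance for the scalar phi
                adapt = TRUE,            # adapt the proposal variance along the chain
                proposal = "lognormal",  # suited to positive parameters such as phi = 5
                Npart = 100)             # particles in the particle Gibbs sampler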
References

Andrieu, C., A. Doucet and R. Holenstein (2010). Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society: Series B 72, 269-342.
Examples

model <- set.to.class("hiddenDiffusion", y0.fun = function(phi, t) 0.5,
                      parameter = list(phi = 5, gamma2 = 1, sigma2 = 0.1))
t <- seq(0, 1, by = 0.01)
data <- simulate(model, t = t, plot.series = TRUE)
est <- estimate(model, t, data$Z, 100)  # nMCMC should be much larger!
plot(est)

## Not run: 
# Ornstein-Uhlenbeck process
b.fun <- function(phi, t, y) phi[1] - phi[2] * y
model <- set.to.class("hiddenDiffusion", y0.fun = function(phi, t) 0.5,
                      parameter = list(phi = c(10, 1), gamma2 = 1, sigma2 = 0.1),
                      b.fun = b.fun, sT.fun = function(t, x) 1)
t <- seq(0, 1, by = 0.01)
data <- simulate(model, t = t, plot.series = TRUE)
est <- estimate(model, t, data$Z, 1000)
plot(est)

## End(Not run)
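For intuition about the state-space model in the Description, the following self-contained sketch simulates the Ornstein-Uhlenbeck example above with an Euler-Maruyama scheme and adds the observation noise ε_i; it uses only base R and is an illustration of the model equations, not the package's simulate method.

# Euler-Maruyama sketch of the OU example (illustration only, assumptions as noted)
set.seed(1)
phi <- c(10, 1); gamma2 <- 1; sigma2 <- 0.1
b.fun <- function(phi, t, y) phi[1] - phi[2] * y   # drift b(phi, t, y)
sT.fun <- function(t, y) 1                         # \widetilde{s}(t, y)
t <- seq(0, 1, by = 0.01)
Y <- numeric(length(t))
Y[1] <- 0.5                                        # Y_{t_0} = y0(phi, t_0)
for (i in 2:length(t)) {
  dt <- t[i] - t[i - 1]
  Y[i] <- Y[i - 1] + b.fun(phi, t[i - 1], Y[i - 1]) * dt +
    sqrt(gamma2) * sT.fun(t[i - 1], Y[i - 1]) * sqrt(dt) * rnorm(1)
}
Z <- Y + rnorm(length(t), 0, sqrt(sigma2))         # Z_i = Y_{t_i} + eps_i
plot(t, Z)
lines(t, Y)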