Takes a stanfit object returned by a call to sampling and extracts the posterior samples, or functions thereof.
Usage
posterior(object, pars, ...)
# S3 method for class 'stanfit'
posterior(object, pars, dim.names, fun = "", melt = FALSE, ...)
Arguments
- object
output from a call to sampling.
- pars
character vector naming the parameters whose posterior samples are to be extracted.
- ...
additional arguments passed to methods.
- dim.names
optional list of named lists containing dimension names for each parameter. If only a single list(list()) entry is given, it is applied to all parameters. Not all parameters need to be provided with dimension names.
- fun
one of "mean", "median", "quantile" or "summary"; if supplied, it is calculated across iterations.
- melt
logical value indicating whether output arrays should be converted to long format using reshape2::melt.array().
Value
Returns a list of posterior samples, one element per parameter. If melt = TRUE these are returned as data.frames, otherwise as arrays. If fun is specified, the output is summarised across iterations.
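As a sketch of the return structure (assuming a fitted stanfit object fit containing a parameter mu; names here are illustrative):

```r
# List of arrays, one element per requested parameter
post <- posterior(fit, pars = "mu")
str(post$mu)   # draws across iterations for mu

# Summarised across iterations instead of raw draws
posterior(fit, pars = "mu", fun = "mean")

# Long-format data.frames instead of arrays
posterior(fit, pars = "mu", melt = TRUE)
```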
Examples
require(rstan)
mdl <- "data{ int n; vector[n] x; }
parameters{ real mu; }
model{ x ~ normal(mu, 1.0);}
generated quantities{ vector[n] x_sim; real x_sim_sum;
for (i in 1:n) x_sim[i] = normal_rng(mu, 1.0); x_sim_sum = sum(x_sim);}\n"
mdl <- stan_model(model_code = mdl)
n = 20
x = rnorm(n, 0, 2)
mdl.fit <- sampling(mdl, data = list(n = n, x = x),
init = function() list(mu = 0), chains = 1)
#>
#> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
#> Chain 1:
#> Chain 1: Gradient evaluation took 1.3e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.13 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1:
#> Chain 1:
#> Chain 1: Iteration: 1 / 2000 [ 0%] (Warmup)
#> Chain 1: Iteration: 200 / 2000 [ 10%] (Warmup)
#> Chain 1: Iteration: 400 / 2000 [ 20%] (Warmup)
#> Chain 1: Iteration: 600 / 2000 [ 30%] (Warmup)
#> Chain 1: Iteration: 800 / 2000 [ 40%] (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%] (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%] (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%] (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%] (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%] (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%] (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%] (Sampling)
#> Chain 1:
#> Chain 1: Elapsed Time: 0.01 seconds (Warm-up)
#> Chain 1: 0.011 seconds (Sampling)
#> Chain 1: 0.021 seconds (Total)
#> Chain 1:
posterior(mdl.fit, pars = c("mu", "x_sim", "x_sim_sum"), fun = "summary")
#> $mu
#> hat med low_ci upp_ci
#> -0.1000440 -0.1015303 -0.4890429 0.3159461
#>
#> $x_sim
#>
#> [,1] [,2] [,3] [,4] [,5]
#> hat -0.040870768 -0.07267507 -0.08056306 -0.1324895 -0.08844662
#> med 0.004099512 -0.11395648 -0.08379635 -0.1034256 -0.08566050
#> low_ci -2.139257438 -2.00818590 -2.05792114 -2.1225715 -2.10293677
#> upp_ci 2.075696303 1.93319524 1.93511500 1.7202727 1.80936471
#>
#> [,6] [,7] [,8] [,9] [,10] [,11]
#> hat -0.06516031 -0.08709333 -0.1027110 -0.1566360 -0.06367040 -0.1294932
#> med -0.02878040 -0.04817707 -0.1567527 -0.1582803 -0.08413235 -0.1226976
#> low_ci -1.96188720 -2.09176468 -1.9753424 -2.1527482 -2.11390318 -2.1130901
#> upp_ci 1.85577910 1.80516785 1.9547406 1.7753191 2.01880935 1.7738193
#>
#> [,12] [,13] [,14] [,15] [,16] [,17]
#> hat -0.1176812 -0.10351678 -0.10487195 -0.09330567 -0.1464948 -0.09118370
#> med -0.1168339 -0.09927069 -0.08783777 -0.08972270 -0.1519127 -0.05868084
#> low_ci -2.1217656 -2.04986272 -1.95815226 -1.99440008 -2.1202363 -2.08132506
#> upp_ci 1.9602327 1.82948635 1.80971402 1.95663810 1.7700777 1.90184681
#>
#> [,18] [,19] [,20]
#> hat -0.09467418 -0.11084516 -0.1394361
#> med -0.06840261 -0.07629506 -0.1818249
#> low_ci -2.14364027 -2.11625492 -2.0611275
#> upp_ci 1.78071107 1.70116932 1.8459813
#>
#> $x_sim_sum
#> hat med low_ci upp_ci
#> -2.021819 -2.066421 -13.713997 9.700981
#>
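The call above summarises across iterations; a hedged sketch of the remaining options, reusing the mdl.fit object from the example (the dimension names supplied here are illustrative, and melt = TRUE assumes reshape2 is installed):

```r
# Long format: each parameter comes back as a data.frame via
# reshape2::melt.array(), convenient for plotting with ggplot2.
post_long <- posterior(mdl.fit, pars = "x_sim", melt = TRUE)

# Dimension names: a single list(list()) entry is applied to all
# parameters; here the one dimension of x_sim is labelled by
# observation index.
post_named <- posterior(mdl.fit,
                        pars = "x_sim",
                        dim.names = list(list(obs = paste0("obs_", 1:20))))
```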