\documentclass{article}

\usepackage{natbib}
\usepackage{graphics}
\usepackage{amsmath,amssymb}
\usepackage{indentfirst}
\usepackage[utf8]{inputenc}
\usepackage[tableposition=top]{caption}
\usepackage{url}

\DeclareMathOperator{\var}{var}
\DeclareMathOperator{\cov}{cov}
\DeclareMathOperator{\E}{E}
\newcommand{\inner}[1]{\langle #1 \rangle}

% \VignetteIndexEntry{MCMC Morph Example}

\begin{document}

<<echo=FALSE>>=
options(keep.source = TRUE, width = 60)
foo <- packageDescription("mcmc")
@

\title{Morphometric MCMC (mcmc Package Ver.~\Sexpr{foo$Version})}
% $ (Just to make emacs syntax highlighting work properly)
\author{Leif T. Johnson \and Charles J. Geyer}
\maketitle

\section{Overview}

This is an example of how to use morphometric Markov chains as implemented
in the \verb@mcmc@ package in R.

Let $X$ be an $\mathbb{R}^k$ valued random variable with probability
density function $f_X$.  Let $g$ be a diffeomorphism, and $Y = g(X)$.
Then the probability density function $f_Y$ of $Y$ is given by
\begin{equation}\label{eq:def-fy}
f_Y(y) = f_X\bigl(g^{-1}(y)\bigr)
\bigl\lvert \det\bigl( \nabla g^{-1}(y) \bigr) \bigr\rvert.
\end{equation}
Since $g$ is a diffeomorphism, we can draw inference about $X$ from
information about $Y$ (and vice versa).

It is not unusual for $f_X$ to be known only up to a normalizing
constant, or to be analytically intractable in other ways --- such as
being high dimensional.  A common solution to this problem is to use
Markov chain Monte Carlo (MCMC) methods to learn about $f_X$.

When using MCMC, a primary concern of the practitioner should be the
question ``Does the Markov chain converge fast enough to be useful?''
One very useful form of convergence is \emph{geometric ergodicity}
\citep[Chapter~1]{johnson-thesis}.

The \texttt{mcmc} package implements the Metropolis random-walk algorithm
for arbitrary log unnormalized probability densities.  But the Metropolis
random-walk algorithm does not always perform well.  As is demonstrated
in \citet{johnson-geyer}, for $f_X$ and $f_Y$ related by a diffeomorphism
as in \eqref{eq:def-fy}, a Metropolis random-walk for $f_Y$ can be
geometrically ergodic even though a Metropolis random-walk for $f_X$ is
not.  Since the transformation is one-to-one, inference about $f_X$ can
be drawn from the Markov chain for $f_Y$.  The \texttt{morph.metrop} and
\texttt{morph} functions in the \texttt{mcmc} package provide this
functionality, and this vignette demonstrates how to use them.

\section{T Distribution}
\label{sec:toy}

We start with a univariate example, a Student $t$ distribution with
three degrees of freedom.  Of course, one does not need MCMC to simulate
this distribution (the R function \texttt{rt} does that), so this is
just a toy problem.  But it does illustrate some aspects of using
variable transformation.

A necessary condition for geometric ergodicity of a random-walk
Metropolis algorithm is that the target density $\pi$ have a moment
generating function \citep{jarner-tweedie}.  For a univariate target
density, which we have in this section, a sufficient condition for
geometric ergodicity of a random-walk Metropolis algorithm is that the
target density $\pi$ be exponentially light \citep{mengersen-tweedie}.
The $t$ distribution has neither property:
\citet[Example 4.2]{johnson-geyer} show that a $t$ distribution is
sub-exponentially light, and it has no moment generating function.
Thus if we do not use variable transformation, the Markov chain
simulated by the \texttt{metrop} function will not be geometrically
ergodic.
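To get a feel for how heavy the tails are, one can compare the $t(3)$ log
density with the normal log density far out in the tail; the $t(3)$ log
density decays like $-4 \log \lvert x \rvert$, much more slowly than the
quadratic decay of the normal.  A quick sketch (not run, and not part of
the original analysis):
<<eval=FALSE>>=
## not run: polynomial tail decay (t) versus quadratic decay (normal)
x <- c(10, 100, 1000)
dt(x, df = 3, log = TRUE)
dnorm(x, log = TRUE)
@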
Since the $t$ distribution is sub-exponentially light, using one of the
transformations described in Corollaries~1 and~2 of \citet{johnson-geyer}
will induce a target density $\pi_\gamma$ for which a Metropolis
random-walk will be geometrically ergodic; in particular, we use the
transformation described as $h_2$ in their Corollary~2.  Passing a
positive value for \texttt{b} to the \texttt{morph} function creates this
transformation, $h_2$.  It's as simple as
<<>>=
library(mcmc)
h2 <- morph(b=1)
@
We can now see the induced density.  Note that \texttt{morph} works with
log unnormalized densities, so we need to exponentiate the induced
density to plot it on the usual scale.
<<>>=
lud <- function(x) dt(x, df=3, log=TRUE)
lud.induced <- h2$lud(lud)
@
We can plot the two densities,
<<fig=TRUE>>=
curve(exp(Vectorize(lud.induced)(x)), from = -3, to = 3, lty = 2,
    xlab = "t", ylab = "density")
curve(exp(lud(x)), add = TRUE)
legend("topright", c("t density", "induced density"), lty=1:2)
@
The \texttt{Vectorize} in this example is necessary because the function
\texttt{lud.induced} is not vectorized.  Instead, it treats any vector
passed to it as a single state, which is rescaled (using the specified
diffeomorphism) and passed to \texttt{lud}.  Compare the behavior of
\texttt{lud} and \texttt{lud.induced} in the following example.
<<>>=
lud(1:4)
lud(1)
foo <- try(lud.induced(1:4))
class(foo)
cat(foo, "\n")
lud.induced(1)
@
Because the function \texttt{dt} is vectorized, the function \texttt{lud}
is also vectorized, mapping vectors to vectors, whereas the function
\texttt{lud.induced} is not vectorized, mapping vectors to scalars.

Before we start using random numbers, we set the seed of the random
number generator so this document always produces the same results.
<<>>=
set.seed(42)
@
To change the results, change the seed or delete the \texttt{set.seed}
statement.

Running a Markov chain for the induced density is done with
\texttt{morph.metrop}.
<<>>=
out <- morph.metrop(lud, 0, blen=100, nbatch=100, morph=morph(b=1))
@
The content of \texttt{out\$batch} is on the scale used by \texttt{lud};
the transformation is handled internally.  Once the transformation has
been set, no further adjustment of it is needed (unless you want to
change transformations).  We do, however, need to adjust the scale of the
proposal distribution.
<<>>=
# adjust scale to find a roughly 20% acceptance rate
out$accept
@
An acceptance rate of \Sexpr{round(100 * out$accept, 1)}\% %$ to fix emacs highlighting
is probably too high.  By increasing the scale of the proposal
distribution we can bring it down towards 20\%.
<<>>=
out <- morph.metrop(out, scale=4)
out$accept
@
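If we wanted to search over several proposal scales rather than adjust
them one at a time, a short loop would do.  The following sketch
(hypothetical, not run) continues the chain from the final state stored
in \texttt{out} at each candidate scale and reports the acceptance rate.
<<eval=FALSE>>=
## not run: try a few candidate scales and report acceptance rates
for (s in c(2, 4, 6, 8)) {
    out.try <- morph.metrop(out, scale = s)
    cat("scale", s, "acceptance rate", out.try$accept, "\n")
}
@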
We now use this Markov chain to estimate the expectation of the target
distribution.  But first we need to check whether our batch length is
good.  The following code
<<fig0>>=
acf(out$batch)
@
makes the autocorrelation plot (Figure~\ref{fig:fig0}).
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
<<fig0>>
@
\end{center}
\caption{Autocorrelation plot for the sequence of batch means.}
\label{fig:fig0}
\end{figure}
It looks like there is no significant autocorrelation among the batches,
so the following produces a valid confidence interval for the true
unknown mean of the target distribution (since this is a toy problem we
actually know the true ``unknown'' mean is zero, but we pretend we don't
know that for the purposes of the toy problem)
<<>>=
t.test(out$batch)
@
If we want a point estimate and a Monte Carlo standard error, those are
<<>>=
colMeans(out$batch)
apply(out$batch, 2, sd) / sqrt(out$nbatch)
@
If a shorter confidence interval is desired, the Markov chain can be run
longer (increase either the number of batches or the batch length, or
both).  Note that when calculating our estimate and the Monte Carlo
standard error we are not concerned with what was happening on the
transformed scale.  The \texttt{morph.metrop} function seamlessly does
this for us.

\subsection{Comparison of Morphed and Unmorphed}

To show the utility of the transformation, we will study the behavior of
the Markov chain with and without the transformation for the same
problem as in the preceding section.  We will consider two different
estimation methods.
\begin{enumerate}
\item \label{enum:rw} Estimate the mean of the target distribution using
  a random-walk Metropolis algorithm implemented by the \texttt{metrop}
  function.  \citet{jarner-roberts} demonstrate that a central limit
  theorem does not hold for these estimates.
\item \label{enum:rw-induced} Estimate the mean of the target
  distribution using a random-walk Metropolis algorithm implemented by
  the \texttt{morph.metrop} function with argument
  \texttt{morph = morph(b=1)}.  \citet{johnson-geyer} demonstrate that a
  central limit theorem does hold for these estimates.
\end{enumerate}
For the former, we need to adjust the scale.
<<>>=
out.unmorph <- metrop(lud, 0, blen=1000, nbatch=1)
out.unmorph$accept
out.unmorph <- metrop(out.unmorph, scale=4)
out.unmorph$accept
out.unmorph <- metrop(out.unmorph, scale=6)
out.unmorph$accept
@
A scale of 6 appears to be about right.  Now we do a long run for this
sampler.  Because this run takes longer than CRAN vignettes are supposed
to take, we save the results to a file and load the results from this
file if it already exists.
<<>>=
lout <- suppressWarnings(try(load("morph1.rda"), silent = TRUE))
if (inherits(lout, "try-error")) {
    out.unmorph <- metrop(out.unmorph, blen = 1e5, nbatch = 1e3)
    save(out.unmorph, file = "morph1.rda")
} else {
    .Random.seed <- out.unmorph$final.seed
}
out.unmorph$accept
@
Let's look at the distribution of batch means.  The following code
<<fig4>>=
foo <- as.vector(out.unmorph$batch)
qqnorm(foo)
qqline(foo)
@
makes a Q-Q plot of the batch means (Figure~\ref{fig:fig4}).
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
<<fig4>>
@
\end{center}
\caption{Q-Q plot of batch means (batch length \Sexpr{out.unmorph$blen})
  for the unmorphed chain.}
\label{fig:fig4}
\end{figure}
We see bad behavior of the unmorphed chain.  These batch means (or at
least some batch means for sufficiently long batch length) should look
normally distributed, and these don't.  Not even close.  We do a formal
test just to check our interpretation of the plot.
<<>>=
shapiro.test(foo)
@
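Because the batches all have the same length, batch means for a longer
batch length can be obtained simply by averaging consecutive batch
means.  The following sketch (not run, and not part of the original
analysis) combines each run of 10 consecutive batches of length $10^5$
into super-batches of length $10^6$ and re-examines normality; if even
these look non-normal, the case against a central limit theorem is that
much stronger.
<<eval=FALSE>>=
## not run: re-batch to length 1e6 by averaging 10 consecutive batch means
foo.long <- colMeans(matrix(foo, nrow = 10))
qqnorm(foo.long)
qqline(foo.long)
shapiro.test(foo.long)
@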
Now for comparison, we check the morphed chain.
<<>>=
lout <- suppressWarnings(try(load("morph2.rda"), silent = TRUE))
if (inherits(lout, "try-error")) {
    out.morph <- morph.metrop(out, blen = 1e5, nbatch = 1e3)
    save(out.morph, file = "morph2.rda")
} else {
    .Random.seed <- out.morph$final.seed
}
out.morph$accept
@
The following code
<<fig5>>=
foo <- as.vector(out.morph$batch)
qqnorm(foo)
qqline(foo)
@
makes a Q-Q plot of the batch means (Figure~\ref{fig:fig5}).
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
<<fig5>>
@
\end{center}
\caption{Q-Q plot of batch means (batch length \Sexpr{out.morph$blen})
  for the morphed chain.}
\label{fig:fig5}
\end{figure}
We see good behavior of the morphed chain.  These batch means do look
normally distributed.  We do a formal test just to check our
interpretation of the plot.
<<>>=
shapiro.test(foo)
@

\section{Binomial Distribution with a Conjugate Prior}

We demonstrate a morphometric Markov chain using the
\texttt{UCBAdmissions} data set included in \texttt{R}
(use \texttt{help(UCBAdmissions)} to see details of this data set).
We will model the probability of a student being admitted or rejected,
using the sex of the student and the department that the student applied
to as predictor variables.  For our prior, we naively assume that 30\%
of all students are admitted, independent of sex or department.  As this
is a naive prior, we will only add 5 students to each gender-department
combination.  This will not give the prior much weight; most of the
information in the posterior distribution will come from the data.

Suppose we have $L$ observations from a multinomial distribution and use
a multinomial logit-link model with model matrices $M^1,\dots,M^L$,
regression parameter $\beta$, observed counts $y^1,\dots,y^L$, observed
sample sizes $n^1,\dots,n^L$, prior probabilities $\xi^1, \dots, \xi^L$,
and prior ``sample sizes'' $\nu^1,\dots,\nu^L$.  Then the posterior
distribution of $\beta$ is given by \citep[Sec.~5.1.2]{johnson-thesis}
\begin{equation}\label{eq:mult-post-conj-complicated}
\pi(\beta|y,n,\xi,\nu) \propto
\exp\biggl\{ \sum_{l=1}^L \inner{y^l + \xi^l \nu^l, M^l \beta}
- (n^l + \nu^l) \log\bigl( \sum_j e^{M^l_{j\cdot} \beta} \bigr) \biggr\}
\end{equation}
where $\inner{a, b}$ denotes the usual inner product between vectors $a$
and $b$.

For our application, we can simplify this in two ways.  First, we absorb
the prior into the data by using the posterior counts, i.e.\ the sums of
the prior and data counts, $y^{*l} = y^l + \xi^l \nu^l$ and
$n^{*l} = n^l + \nu^l$.  Second, to avoid having a direction of recession
in $\pi(\beta|\cdot)$, we need to fix the elements of $\beta$ that
correspond to one of the response categories.  Since we are fitting a
binomial response, if we set these elements of $\beta$ to be $0$, we may
then replace the sequence of model matrices with a single model matrix,
$M$ instead of $M^1,\dots,M^L$, where the $l$-th row of $M$ corresponds
to $M^l$.  Label the two response categories $A$ and $B$.  Without loss
of generality, we fix the elements of $\beta$ corresponding to category
$B$ at 0.  Let $x_1,\dots,x_L$ represent the posterior counts of
category $A$, and $\beta^*$ the corresponding elements of $\beta$ ---
the elements of $\beta$ we did not fix at 0.  The meaning of
$n^{*1},\dots,n^{*L}$ is unchanged.  Then our simplified unnormalized
posterior density is
\begin{equation}\label{eq:simplified-posterior}
\pi(\beta|x,n^*) \propto
\exp\biggl\{ \inner{x, M \beta^*}
- \sum_{l=1}^L n^{*l} \log\bigl(1 + e^{(M \beta^*)_l}\bigr) \biggr\}.
\end{equation}
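To spell out the reduction: with the elements of $\beta$ for category $B$
fixed at zero, the linear predictor for category $B$ vanishes, so for the
$l$-th observation
\[
\inner{y^{*l}, M^l \beta} = x_l \, (M \beta^*)_l
\qquad \text{and} \qquad
\log\Bigl( \sum_j e^{M^l_{j\cdot} \beta} \Bigr)
= \log\bigl( 1 + e^{(M \beta^*)_l} \bigr),
\]
and substituting these into \eqref{eq:mult-post-conj-complicated} and
summing over $l$ gives \eqref{eq:simplified-posterior}.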
This can be computed with a very simple \texttt{R} function; we implement
it in log form.
<<>>=
lud.binom <- function(beta, M, x, n) {
    MB <- M %*% beta
    sum(x * MB) - sum(n * log(1 + exp(MB)))
}
@
Now that we have a function to calculate the log unnormalized posterior
density, we can run the Markov chain.  To do that, we need the model
matrix.  First we convert the \texttt{UCBAdmissions} data to a
\texttt{data.frame}.
<<>>=
dat <- as.data.frame(UCBAdmissions)
dat.split <- split(dat, dat$Admit)
dat.split <- lapply(dat.split,
                    function(d) {
                        val <- as.character(d$Admit[1])
                        d["Admit"] <- NULL
                        names(d)[names(d) == "Freq"] <- val
                        d
                    })
dat <- merge(dat.split[[1]], dat.split[[2]])
@
Next we build the model matrix.  Our model specification allows for an
interaction between gender and department, even though our prior assumes
that they are independent.
<<>>=
formula <- cbind(Admitted, Rejected) ~ (Gender + Dept)^2
mf <- model.frame(formula, dat)
M <- model.matrix(formula, mf)
@
As stated above, we take $\nu = 5$ and $\xi = 0.30$.  That is, we add 5
students to each gender-department combination, where each combination
has a 30\% acceptance rate.
<<>>=
xi <- 0.30
nu <- 5
@
<<>>=
lud.berkeley <- function(B)
    lud.binom(B, M, dat$Admitted + xi * nu, dat$Admitted + dat$Rejected + nu)
@
This function is suitable for passing to \texttt{metrop} or
\texttt{morph.metrop}.  We know that using \texttt{morph.metrop} with
\texttt{morph=morph(p=3)} will run a geometrically ergodic Markov chain
\citep{johnson-geyer}.
<<>>=
berkeley.out <- morph.metrop(lud.berkeley, rep(0, ncol(M)), blen=1000,
                             nbatch=1, scale=0.1, morph=morph(p=3))
berkeley.out$accept
berkeley.out <- morph.metrop(berkeley.out, scale=0.05)
berkeley.out$accept
berkeley.out <- morph.metrop(berkeley.out, scale=0.02)
berkeley.out$accept
berkeley.out <- morph.metrop(berkeley.out, blen=10000)
berkeley.out$accept
@
<<>>=
berkeley.out <- morph.metrop(berkeley.out, blen=1, nbatch=100000)
@
We now estimate the posterior mean admittance probability for each
gender-department combination.
<<>>=
beta <- setNames(colMeans(berkeley.out$batch), colnames(M))
MB <- M %*% beta
dat$p <- dat$Admitted / (dat$Admitted + dat$Rejected)
dat$p.post <- exp(MB) / (1 + exp(MB))
dat
@
The small difference between the data and posterior probabilities is
expected; our prior was given very little weight.  Using
\texttt{morph.metrop} with \texttt{morph=morph(p=3)} is an efficient way
of sampling from this posterior distribution.
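As an informal sanity check, not part of the original analysis, the
posterior means can be compared with the maximum likelihood estimates
from an ordinary logistic regression fit of the same model; because the
prior carries so little weight, the two should be close.  A sketch (not
run):
<<eval=FALSE>>=
## not run: maximum likelihood fit of the same model for comparison
glm.fit <- glm(formula, family = binomial, data = dat)
cbind(mle = coef(glm.fit), posterior.mean = beta)
@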
We can also compare the posterior distributions of the admittance
probabilities for the gender-department combinations.
Table~\ref{tab:post-quant} gives the 5\% and 95\% quantiles of the
posterior distribution of the admittance probability for each
gender-department combination.  Figure~\ref{fig:posterior-probs} gives
the same quantiles, plus the posterior mean probability, for each
gender-department combination.  From these we can see that for each
department, there is considerable overlap of the distributions of
probabilities for males and females.
<<>>=
posterior.probabilities <-
    t(apply(berkeley.out$batch, 1,
            function(r) {
                eMB <- exp(M %*% r)
                eMB / (1 + eMB)
            }))
quants <- apply(posterior.probabilities, 2, quantile, prob=c(0.05, 0.95))
quants.str <- matrix(apply(quants, 2,
                           function(r) sprintf("[%0.2f, %0.2f]", r[1], r[2])),
                     nrow=2, byrow=TRUE)
@

\begin{table}[ht]
\caption{5\% and 95\% posterior quantiles of the admittance probability
  for each gender-department combination.}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
Gender & Dept. A & Dept. B & Dept. C & Dept. D & Dept. E & Dept. F \\
\hline
Female & \Sexpr{paste(quants.str[1, 1:6], collapse=" & ")} \\
Male   & \Sexpr{paste(quants.str[2, 1:6], collapse=" & ")} \\
\hline
\end{tabular}
\label{tab:post-quant}
\end{center}
\end{table}

\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
x <- (0:5) * 2 + 1
plot(x[c(1, 6)] + 0.5 * c(-1, 1), 0:1, xlab="Department", ylab="Probability",
     xaxt="n", type="n")
axis(1, x, LETTERS[1:6])
for(i in 1:6) {
    lines((x[i] - 0.25) * c(1, 1), quants[1:2, i], lwd=2, col="gray")
    lines((x[i] + 0.25) * c(1, 1), quants[1:2, i + 6], lwd=2, col="gray")
    points(x[i] + 0.25 * c(-1, 1), dat$p.post[i + c(0, 6)], pch=c("F", "M"))
}
@
\end{center}
\caption{Posterior 5\% and 95\% quantiles and mean, by department and gender.}
\label{fig:posterior-probs}
\end{figure}

\section{Cauchy Location-Scale Model}

We now do an objective Bayesian analysis of a Cauchy location-scale
model.

\subsection{Data}

First we generate some data.
<<>>=
n <- 15
mu0 <- 50
sigma0 <- 10
x <- rcauchy(n, mu0, sigma0)
round(sort(x), 1)
@
Here \texttt{mu0} and \texttt{sigma0} are the true unknown parameter
values (since the data are simulated we actually know these ``unknown''
parameter values, but we must pretend we don't know them and estimate
them).

\subsection{Prior}

The standard objective prior distribution for this situation (insofar as
any prior distribution can be said to be an objective standard) is the
improper prior
$$ g(\mu, \sigma) = \frac{1}{\sigma} $$
which is right Haar measure for the location-scale group, and is the
standard prior that comes from the group invariance argument
\citep[Section~3.2]{kass-wasserman}.

\subsection{Log Unnormalized Posterior}

We need a function whose argument is a two-vector.
<<>>=
lup <- function(theta) {
    if (any(is.na(theta)))
        stop("NA or NaN in input to log unnormalized density function")
    mu <- theta[1]
    sigma <- theta[2]
    if (sigma <= 0) return(-Inf)
    if (any(! is.finite(theta))) return(-Inf)
    result <- sum(dcauchy(x, mu, sigma, log = TRUE)) - log(sigma)
    if (! is.finite(result)) {
        warning(paste("Oops! mu = ", mu, "and sigma =", sigma))
    }
    return(result)
}
@

\subsection{Laplace Approximation}

To have some idea what we are doing, we first maximize the log
unnormalized posterior.  To do that, it helps to have good starting
points for the optimization.  Robust estimators of location and scale
are
<<>>=
mu.twiddle <- median(x)
sigma.twiddle <- IQR(x)
c(mu.twiddle, sigma.twiddle)
@
The posterior mode is
<<>>=
oout <- optim(c(mu.twiddle, sigma.twiddle), lup,
              control = list(fnscale = -1), hessian = TRUE)
stopifnot(oout$convergence == 0)
mu.hat <- oout$par[1]
sigma.hat <- oout$par[2]
c(mu.hat, sigma.hat)
@
and the Hessian evaluated at the posterior mode (calculated by
\texttt{optim} using finite differences) is
<<>>=
oout$hessian
@
The Hessian is nearly diagonal, and one can check that theoretically it
is exactly diagonal.  Thus approximate (asymptotic) posterior standard
deviations are
<<>>=
sqrt(- 1 / diag(oout$hessian))
@
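Taking reciprocals of the diagonal elements is equivalent to inverting
the Hessian only because the Hessian is (essentially) diagonal here.
The general calculation, which does not rely on diagonality, inverts the
whole observed information matrix; a sketch (not run):
<<eval=FALSE>>=
## not run: general form, inverting the full observed information matrix
sqrt(diag(solve(- oout$hessian)))
@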
\subsection{Theory}

To use the theory in \citet{johnson-geyer} we must verify that the
target distribution (the unnormalized posterior) is everywhere positive,
and it isn't (it is zero for $\sigma \le 0$).  We tried making
$\log(\sigma)$ the parameter, but this didn't work either, because
$\log(\sigma)$ goes to infinity so slowly that it stretches out the
tails too much for the transformations introduced by
\citet{johnson-geyer} to pull them back in again.

We do know \citep[Example~3.4]{johnson-geyer} that if we fix $\sigma$
this is a sub-exponentially light target distribution.  Letting $\sigma$
vary can only make this worse.  Thus, if we don't do anything and just
use the \texttt{metrop} function, performance will be very bad.  So we
are going to use the transformations and the \texttt{morph.metrop}
function, even though the theory that motivates them does not hold.

\subsection{Morph}

We want to center the transformation at the posterior mode and use a
radius $r$ large enough that no transformation is done within several
approximate standard deviations of the mode.
<<>>=
moo <- morph(b = 0.5, r = 7, center = c(mu.hat, sigma.hat))
mout <- morph.metrop(lup, c(mu.hat, sigma.hat), 1e4, scale = 3, morph = moo)
mout$accept
mout <- morph.metrop(mout)
@
Good enough.  An attempt to increase the scale led to an error when the
transformation functions overflowed.  We can't take steps that are too
big with this transformation.

The following code
<<cfig1>>=
acf(mout$batch)
@
makes an autocorrelation plot (Figure~\ref{fig:cfig1}).
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
<<cfig1>>
@
\end{center}
\caption{Autocorrelation plot.  First component is $\mu$, second is $\sigma$.}
\label{fig:cfig1}
\end{figure}
It looks like lag 10 to 15 is enough to get near independence.

Now we want to make marginal density plots.  If we just feed our MCMC
output to the R function \texttt{density} it undersmooths, because it
expects independent and identically distributed data rather than
autocorrelated data.  Thus we feed it subsampled, nearly uncorrelated
data to select the bandwidth and then use that bandwidth on the full
data.  Here's how that works.  The following code
<<cfig2>>=
mu <- mout$batch[ , 1]
i <- seq(1, mout$nbatch, by = 15)
out.sub <- density(mu[i])
out <- density(mu, bw = out.sub$bw)
plot(out)
@
makes the density plot (Figure~\ref{fig:cfig2}).
\begin{figure}
\begin{center}
<<fig=TRUE,echo=FALSE>>=
<<cfig2>>
@
\end{center}
\caption{Density plot for the marginal posterior for $\mu$.}
\label{fig:cfig2}
\end{figure}
And a similar plot for $\sigma$ (Figure~\ref{fig:cfig3}).
\begin{figure}
\begin{center}
<<fig=TRUE>>=
sigma <- mout$batch[ , 2]
out.sub <- density(sigma[i])
out <- density(sigma, bw = out.sub$bw)
plot(out)
@
\end{center}
\caption{Density plot for the marginal posterior for $\sigma$.}
\label{fig:cfig3}
\end{figure}
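If numerical summaries are wanted in addition to the density plots, they
can be computed directly from the output.  For example, the following
sketch (not run, and not part of the original analysis) gives posterior
medians and 90\% equal-tailed credible intervals for $\mu$ and $\sigma$.
<<eval=FALSE>>=
## not run: posterior medians and 90% equal-tailed credible intervals
apply(mout$batch, 2, quantile, probs = c(0.05, 0.5, 0.95))
@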
\begin{thebibliography}{}

\bibitem[Jarner and Roberts(2007)]{jarner-roberts}
Jarner, S.~F., and Roberts, G.~O. (2007).
\newblock Convergence of heavy-tailed Monte Carlo Markov chain algorithms.
\newblock \emph{Scandinavian Journal of Statistics}, 34, 781--815.

\bibitem[Jarner and Tweedie(2003)]{jarner-tweedie}
Jarner, S.~F., and Tweedie, R.~L. (2003).
\newblock Necessary conditions for geometric and polynomial ergodicity of
random-walk-type Markov chains.
\newblock \emph{Bernoulli}, 9, 559--578.

\bibitem[Johnson(2011)]{johnson-thesis}
Johnson, L.~T. (2011).
\newblock Geometric Ergodicity of a Random-Walk Metropolis Algorithm via
Variable Transformation and Computer Aided Reasoning in Statistics.
\newblock Ph.~D. thesis.  University of Minnesota.
\url{http://purl.umn.edu/113140}

\bibitem[Johnson and Geyer(submitted)]{johnson-geyer}
Johnson, L.~T., and Geyer, C.~J. (submitted).
\newblock Variable Transformation to Obtain Geometric Ergodicity in the
Random-walk Metropolis Algorithm.
\newblock Revised and resubmitted to \emph{Annals of Statistics}.

\bibitem[Kass and Wasserman(1996)]{kass-wasserman}
Kass, R.~E., and Wasserman, L. (1996).
\newblock Formal rules for selecting prior distributions: A review and
annotated bibliography.
\newblock \emph{Journal of the American Statistical Association}, 91,
1343--1370.

\bibitem[Mengersen and Tweedie(1996)]{mengersen-tweedie}
Mengersen, K.~L., and Tweedie, R.~L. (1996).
\newblock Rates of convergence of the Hastings and Metropolis algorithms.
\newblock \emph{Annals of Statistics}, 24, 101--121.

\end{thebibliography}

\end{document}