% LaTeX source for review in J. Sci. Explor.
\documentclass{article}
\begin{document}
\noindent
\textbf{Bayesian Statistics: An Introduction} (3rd ed.) by Peter M.\ Lee.
London: Arnold Publishers, 2004. xv + 351 pp. \$17.95. ISBN
0340814055.

\bigskip

\noindent
Bayesian statistics has made great strides in recent years, owing
partly to a better understanding of priors (e.g., automatic or
reference priors that can be used in the absence of subjective prior
information), partly to the introduction of Markov chain Monte Carlo
(MCMC) techniques for drawing samples from the posterior distribution
even when the parameter space is huge, and partly to the power of
hierarchical Bayes models for describing very complex
problems in a natural way. Added to this, the result of a Bayesian
analysis naturally provides what scientists really want to know,
whereas the results of a standard frequentist analysis are often
unnatural and confusing to interpret, not only for working scientists
but even for many with statistical training. No longer is the
Bayesian approach regarded with suspicion or disdain by most
classically trained statisticians; rather, some of the most
cutting-edge work in statistics is being done by statisticians of all
stripes using the Bayesian point of view. This is a tribute to
the power of modern Bayesian methods. The present book, now in its
third edition, has become something of a standard among introductory
texts on the subject. It is not much different from the second
edition; the new material comprises a chapter on hierarchical Bayes
and an expansion of the chapter on MCMC (itself added in the second
edition). The basic approach and most of the topics remain virtually
unchanged.

I personally learned a great deal from the earlier editions of this book
when I was discovering the power of Bayesian methods in my own work.
The selection of topics is basic, including chapters on inference for
normally distributed data and for data having other distributions
(e.g., binomial and Poisson), hypothesis testing (an area
in which the numerical results can differ substantially from those
provided by standard hypothesis tests, particularly for point null
hypotheses), two-sample problems, and Bayesian results on correlation,
regression and analysis of variance. A chapter on ``other topics''
discusses some of the salient features of Bayesian analysis that
differ from standard statistical discussions, such as the important
Likelihood Principle (which standard methods often violate), the
Stopping Rule Principle (which under mild conditions insulates Bayesian
methods from the problems that arise in standard hypothesis testing
when the experimenter may optionally stop the experiment depending on
the results obtained so far) and the role of decision theory.
The chapter on hierarchical Bayes discusses the surprising Stein
``paradox,'' whereby the obvious estimator for a vector parameter (of
dimension three or more) under squared-error loss is inadmissible in
classical decision theory; better estimators can be found naturally
using a hierarchical Bayes model.
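To make the paradox concrete (in my notation, which is not
necessarily the book's): if $X \sim N_p(\theta, I)$ with $p \ge 3$,
the James--Stein estimator
\[
\hat{\theta}_{\mathrm{JS}} = \left( 1 - \frac{p-2}{\|X\|^2} \right) X
\]
has uniformly smaller expected squared error than the obvious
estimator $\hat{\theta} = X$.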
Finally, a chapter on MCMC rounds out the book; MCMC has
revolutionized Bayesian statistics over the past fifteen years by
providing a practical method for obtaining results, especially
integrals, by posterior simulation. (In spirit the interpretation is
different: standard statistics regards parameters as
fixed and data as random variables, and averages or simulates over
the data, whereas Bayesian statistics regards parameters as random
variables and the observed data as fixed, and averages or simulates
over the parameters.) The chapter on MCMC introduces most of the
standard techniques (with slice sampling as a notable exception)
using examples coded in the free computer language R.
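As a taste of what posterior simulation looks like in practice, here
is a minimal random-walk Metropolis sampler in R (my own illustrative
sketch, not an example taken from the book), targeting a posterior
density known only up to a normalizing constant:
\begin{verbatim}
# Log of the (unnormalized) posterior density; a standard normal
# stands in here for a real model's posterior.
log.post <- function(theta) -0.5 * theta^2

metropolis <- function(n.iter, theta0, prop.sd) {
  draws <- numeric(n.iter)
  theta <- theta0
  for (i in 1:n.iter) {
    cand <- theta + rnorm(1, 0, prop.sd)  # propose a random-walk step
    # Accept with probability min(1, post(cand) / post(theta)),
    # computed on the log scale for numerical stability.
    if (log(runif(1)) < log.post(cand) - log.post(theta)) theta <- cand
    draws[i] <- theta
  }
  draws
}

draws <- metropolis(10000, 0, 1)
mean(draws)  # a posterior expectation becomes an average over draws
\end{verbatim}
The last line makes the point noted above: once one can draw from the
posterior, integrals such as posterior means reduce to simple averages
over the simulated draws.
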
The book has much to recommend it, but I do have some problems with it.
For one thing, much of the book is devoted to exact or asymptotic
results, often using so-called conjugate priors, which combine with
standard sampling distributions to produce posterior distributions
that remain in the same family and are therefore analytically
tractable. For example, a normal prior on the location parameter of a
normal model with known variance yields a normal posterior, so
results are easily calculated, as the standard update below shows.
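In the usual notation (mine, not necessarily the book's): with prior
$\theta \sim N(\mu_0, \tau_0^2)$ and observations
$x_1, \ldots, x_n \sim N(\theta, \sigma^2)$, $\sigma^2$ known, the
posterior is $N(\mu_1, \tau_1^2)$, where
\[
\frac{1}{\tau_1^2} = \frac{1}{\tau_0^2} + \frac{n}{\sigma^2},
\qquad
\mu_1 = \tau_1^2 \left( \frac{\mu_0}{\tau_0^2}
  + \frac{n \bar{x}}{\sigma^2} \right),
\]
a precision-weighted average of the prior mean and the sample mean.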
But, for all the importance of such analytical techniques, they are
too restrictive for the vast majority of real-world problems, which
generally require posterior simulation using MCMC to get practical
results. A student
reading this book might get an exaggerated idea of the role that these
analytical techniques play in practical problems and might regard
MCMC techniques as an afterthought, whereas the truth is quite the
opposite. It is for this reason that I no longer use it as a textbook in
my Bayesian course, which I teach using simulation as the main
calculational technique, introducing exact analytical results after
discussing problems from a simulation point of view. My aim has
been to prepare the students to tackle real-world problems in their
chosen field after this one-semester graduate course. The post-course
experience of my students attests to the success of this approach.

Thus, I am of two minds when recommending this book. Certainly I
learned much from the first edition, so it can be useful for
self-study by a mature scientist who is aware of its limitations; but
I would be careful about using it as a textbook in a course, at least
without balancing it with other material (e.g., the highly regarded
but more advanced book by Gelman, Carlin, Stern and Rubin) or with
lectures that place more emphasis on simulation methods.
\begin{flushright}
\textit{WILLIAM H.\ JEFFERYS \\
Harlan J.\ Smith Centennial Professor in Astronomy, Emeritus \\
Department of Astronomy \\
1 University Station, C1400 \\
University of Texas at Austin \\
Austin, TX 78712-0259}
\end{flushright}
\noindent
\textit{Journal of Scientific Exploration}
\textbf{19} (1) (2005), 131--133.
\end{document}
%