Bayesian Linear Regression with Gibbs Sampling
An in-depth guide on implementing Bayesian linear regression and Gibbs sampling for parameter estimation.
TL;DR
Bayesian linear regression assumes data follows a normal distribution given parameters.
Prior distributions for regression coefficients and variance are normally and inverse-gamma distributed respectively.
Gibbs sampling is used to sample from the posterior distribution when direct computation is infeasible.
The full conditional distributions for coefficients and variance are derived to facilitate sampling.
The Gibbs sampling procedure iterates between sampling regression coefficients and variance to approximate the posterior distribution.
Bayesian linear regression is a statistical method in which the analysis is undertaken within the framework of Bayesian inference. When performing Bayesian linear regression, we assume that the observed data $y = (y_1, \dots, y_n)$, given the parameters $\beta$ and $\sigma^2$, follow a normal distribution:

$$y_i \mid \beta, \sigma^2 \sim \mathcal{N}(x_i^\top \beta, \sigma^2), \qquad i = 1, \dots, n,$$

where $x_i$ is the covariate vector for observation $i$ and $X$ denotes the $n \times p$ design matrix with rows $x_i^\top$.
The prior distributions for the parameters are set as follows.

For $\beta$, we use a multivariate normal prior:

$$\beta \sim \mathcal{N}(\mu_0, \Sigma_0),$$

with the prior probability density function (pdf) given by:

$$p(\beta) \propto \exp\!\left(-\tfrac{1}{2}(\beta - \mu_0)^\top \Sigma_0^{-1} (\beta - \mu_0)\right).$$

For $\sigma^2$, we use an inverse-gamma prior:

$$\sigma^2 \sim \mathrm{IG}(a_0, b_0),$$

with the prior pdf given by:

$$p(\sigma^2) \propto (\sigma^2)^{-a_0 - 1} \exp\!\left(-\frac{b_0}{\sigma^2}\right).$$

Assuming $\beta$ and $\sigma^2$ are independent a priori, the joint prior factorizes as $p(\beta, \sigma^2) = p(\beta)\,p(\sigma^2)$.
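To make the setup concrete, here is a minimal NumPy sketch that generates synthetic regression data and fixes the prior hyperparameters. All numeric values (sample size, true coefficients, prior settings) are illustrative choices, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y = X @ beta_true + Gaussian noise (illustrative values)
n, p = 200, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
sigma2_true = 0.25
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2_true), size=n)

# Prior hyperparameters: beta ~ N(mu0, Sigma0), sigma^2 ~ Inverse-Gamma(a0, b0)
mu0 = np.zeros(p)           # prior mean for beta
Sigma0 = 10.0 * np.eye(p)   # weakly informative prior covariance for beta
a0, b0 = 2.0, 1.0           # shape and scale of the inverse-gamma prior
```

A vague prior (large diagonal entries in `Sigma0`, small `a0`, `b0`) lets the data dominate the posterior; tighter values encode stronger prior beliefs.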
Directly computing the posterior $p(\beta, \sigma^2 \mid y)$ can be challenging, so we use Gibbs sampling, drawing in turn from the full conditional distributions of $\beta$ and $\sigma^2$.

The full conditional for $\beta$ is a normal distribution. If we denote

$$\Sigma_n = \left(\Sigma_0^{-1} + \frac{X^\top X}{\sigma^2}\right)^{-1}, \qquad \mu_n = \Sigma_n \left(\Sigma_0^{-1}\mu_0 + \frac{X^\top y}{\sigma^2}\right),$$

then:

$$\beta \mid \sigma^2, y \sim \mathcal{N}(\mu_n, \Sigma_n).$$

The full conditional for $\sigma^2$ is again inverse-gamma:

$$\sigma^2 \mid \beta, y \sim \mathrm{IG}\!\left(a_0 + \frac{n}{2},\; b_0 + \frac{1}{2}(y - X\beta)^\top(y - X\beta)\right).$$
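The two full conditionals can be sketched as NumPy functions. This is a hedged sketch assuming the conjugate normal / inverse-gamma setup above; the hyperparameter names `mu0`, `Sigma0`, `a0`, `b0` are illustrative:

```python
import numpy as np

def sample_beta(X, y, sigma2, mu0, Sigma0, rng):
    """Draw beta from its full conditional N(mu_n, Sigma_n)."""
    Sigma0_inv = np.linalg.inv(Sigma0)
    Sigma_n = np.linalg.inv(Sigma0_inv + X.T @ X / sigma2)
    mu_n = Sigma_n @ (Sigma0_inv @ mu0 + X.T @ y / sigma2)
    return rng.multivariate_normal(mu_n, Sigma_n)

def sample_sigma2(X, y, beta, a0, b0, rng):
    """Draw sigma^2 from its full conditional Inverse-Gamma(a_n, b_n)."""
    n = len(y)
    resid = y - X @ beta
    a_n = a0 + n / 2.0
    b_n = b0 + resid @ resid / 2.0
    # If Z ~ Gamma(shape=a_n, scale=1/b_n), then 1/Z ~ Inverse-Gamma(a_n, b_n)
    return 1.0 / rng.gamma(a_n, 1.0 / b_n)
```

NumPy has no inverse-gamma sampler, so `sample_sigma2` draws a gamma variate and inverts it, a standard reparameterization.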
The Gibbs sampling procedure for obtaining samples from $p(\beta, \sigma^2 \mid y)$ is as follows:

1. Initialize $\sigma^{2(0)}$ (for example, $\sigma^{2(0)} = 1$).
2. Sample $\beta^{(1)}$ from $p(\beta \mid \sigma^{2(0)}, y)$.
3. Sample $\sigma^{2(1)}$ from $p(\sigma^2 \mid \beta^{(1)}, y)$.
4. Sample $\beta^{(2)}$ from $p(\beta \mid \sigma^{2(1)}, y)$.
5. Sample $\sigma^{2(2)}$ from $p(\sigma^2 \mid \beta^{(2)}, y)$.
6. Continue this process for $T$ iterations.

After $T$ iterations (and discarding an initial burn-in), the draws $\{(\beta^{(t)}, \sigma^{2(t)})\}$ are approximately samples from the full posterior distribution.
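The full procedure above can be sketched as a single Gibbs loop. This is a minimal illustration under the conjugate priors described earlier; the function name `gibbs_blr` and all numeric settings are assumptions for the example, not from the text:

```python
import numpy as np

def gibbs_blr(X, y, mu0, Sigma0, a0, b0, n_iter=2000, burn_in=500, seed=0):
    """Gibbs sampler for Bayesian linear regression with a
    N(mu0, Sigma0) prior on beta and an IG(a0, b0) prior on sigma^2."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Sigma0_inv = np.linalg.inv(Sigma0)
    sigma2 = 1.0                      # initialize sigma^2(0)
    betas, sigma2s = [], []
    for t in range(n_iter):
        # beta | sigma^2, y  ~  N(mu_n, Sigma_n)
        Sigma_n = np.linalg.inv(Sigma0_inv + X.T @ X / sigma2)
        mu_n = Sigma_n @ (Sigma0_inv @ mu0 + X.T @ y / sigma2)
        beta = rng.multivariate_normal(mu_n, Sigma_n)
        # sigma^2 | beta, y  ~  IG(a0 + n/2, b0 + SSR/2), drawn via 1/Gamma
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2.0, 1.0 / (b0 + resid @ resid / 2.0))
        if t >= burn_in:              # discard burn-in draws
            betas.append(beta)
            sigma2s.append(sigma2)
    return np.array(betas), np.array(sigma2s)

# Quick check on synthetic data (illustrative values)
rng = np.random.default_rng(1)
n, p = 300, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.5, -0.7])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
betas, sigma2s = gibbs_blr(X, y, np.zeros(p), 10.0 * np.eye(p), 2.0, 1.0)
print(betas.mean(axis=0), sigma2s.mean())
```

With enough data, the posterior means of `beta` and `sigma^2` should land close to the values used to simulate `y`, which is a useful sanity check for any sampler implementation.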