# Posterior consistency in linear models under shrinkage priors

@article{Armagan2013PosteriorCI,
  title   = {Posterior consistency in linear models under shrinkage priors},
  author  = {Artin Armagan and David B. Dunson and Jaeyong Lee and Waheed Uz Zaman Bajwa and Nate Strawn},
  journal = {Biometrika},
  year    = {2013},
  volume  = {100},
  pages   = {1011--1018}
}

We investigate the asymptotic behaviour of posterior distributions of regression coefficients in high-dimensional linear models as the number of dimensions grows with the number of observations. We show that the posterior distribution concentrates in neighbourhoods of the true parameter under simple sufficient conditions. These conditions hold under popular shrinkage priors given some sparsity assumptions. Copyright 2013, Oxford University Press.
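The concentration phenomenon described in the abstract can be illustrated numerically. The sketch below is an illustrative assumption, not the paper's construction: it swaps the heavy-tailed shrinkage priors studied in the paper for a simple conjugate Gaussian prior, so the posterior is available in closed form, and shows that as the sample size n grows (with dimension p fixed) the posterior mean approaches the true sparse coefficient vector while the posterior spread shrinks.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior_concentration(n, p=10, tau2=1.0, sigma2=1.0):
    """Closed-form Gaussian posterior in a linear model with a N(0, tau2 I) prior.

    Returns the l2 error of the posterior mean and the total posterior
    spread sqrt(trace(posterior covariance)).
    """
    # True coefficient vector is sparse: only 3 of p entries are nonzero.
    beta_true = np.zeros(p)
    beta_true[:3] = [2.0, -1.5, 1.0]

    X = rng.normal(size=(n, p))
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

    # Conjugate update: posterior precision and mean for beta | y.
    precision = X.T @ X / sigma2 + np.eye(p) / tau2
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / sigma2

    return np.linalg.norm(mean - beta_true), np.sqrt(np.trace(cov))

# Posterior mean error and spread both shrink as n grows.
for n in (50, 500, 5000):
    err, spread = posterior_concentration(n)
    print(f"n={n:5d}  |mean - beta*| = {err:.3f}  spread = {spread:.3f}")
```

The paper's contribution is to establish this kind of concentration when p grows with n and the prior is a continuous shrinkage prior rather than the conjugate Gaussian used here for tractability.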

#### 23 Citations

High-dimensional multivariate posterior consistency under global-local shrinkage priors

- Mathematics, Computer Science
- J. Multivar. Anal.
- 2018

This paper derives sufficient conditions for posterior consistency under the Bayesian multivariate linear regression framework and proves that posterior consistency holds even when p > n and even when p grows at a nearly exponential rate with the sample size.

Ultra high-dimensional multivariate posterior contraction rate under shrinkage priors

- Mathematics
- Journal of Multivariate Analysis
- 2022

In recent years, shrinkage priors have received much attention in high-dimensional data analysis from a Bayesian perspective. Compared with widely used spike-and-slab priors, shrinkage priors have…

Contraction properties of shrinkage priors in logistic regression

- Mathematics
- 2020

Bayesian shrinkage priors have received a lot of attention recently because of their efficiency in computation and accuracy in estimation and variable selection. In this paper, we study the…

Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors

- Mathematics
- 2020

We consider a sparse linear regression model with unknown symmetric error under the high-dimensional setting. The true error distribution is assumed to belong to the locally $\beta$-Hölder class with…

Nearly optimal Bayesian Shrinkage for High Dimensional Regression

- Mathematics
- 2017

During the past decade, shrinkage priors have received much attention in Bayesian analysis of high-dimensional data. In this paper, we study the problem for high-dimensional linear regression models.…

High-dimensional variable selection via penalized credible regions with global-local shrinkage priors

- Mathematics
- 2016

The method of Bayesian variable selection via penalized credible regions separates model fitting and variable selection. The idea is to search for the sparsest solution within the joint posterior…

Bayes Variable Selection in Semiparametric Linear Models

- Mathematics, Medicine
- Journal of the American Statistical Association
- 2014

This work proposes a semiparametric g-prior which incorporates an unknown matrix of cluster allocation indicators. Bayes factor and variable selection consistency is shown to result under a class of proper priors on g, even when the number of candidate predictors p is allowed to increase much faster than the sample size n, under sparsity assumptions on the true model size.

High-Dimensional Posterior Consistency in Bayesian Vector Autoregressive Models

- Computer Science, Medicine
- Journal of the American Statistical Association
- 2019

A VAR model is considered with two prior choices for the autoregressive coefficient matrix: a nonhierarchical matrix-normal prior and a hierarchical prior corresponding to an arbitrary scale mixture of normals. Posterior consistency is established for both priors under standard regularity assumptions.

Fully Bayesian Penalized Regression with a Generalized Bridge Prior

- Computer Science, Mathematics
- 2017

This work proposes a fully Bayesian approach that incorporates both sparse and dense settings and shows how to use a type of model averaging approach to eliminate the nuisance penalty parameters and perform inference through the marginal posterior distribution of the regression coefficients.

Data augmentation for non-Gaussian regression models using variance-mean mixtures

- Mathematics
- 2011

We use the theory of normal variance-mean mixtures to derive a data-augmentation scheme for a class of common regularization problems. This generalizes existing theory on normal variance mixtures for…

#### References

Showing 1–10 of 34 references.

Asymptotic normality of posterior distributions in high-dimensional linear models

- Mathematics
- 1999

We study consistency and asymptotic normality of posterior distributions of the regression coefficient in a linear model when the dimension of the parameter grows with increasing sample size. Under…

Inference with normal-gamma prior distributions in regression problems

- Mathematics
- 2010

This paper considers the effects of placing an absolutely continuous prior distribution on the regression coefficients of a linear model. We show that the posterior expectation is a matrix-shrunken…

Asymptotics for lasso-type estimators

- Mathematics
- 2000

We consider the asymptotic behavior of regression estimators that minimize the residual sum of squares plus a penalty proportional to $\sum_j |\beta_j|^\gamma$ for some $\gamma > 0$. These estimators include the Lasso as a…

Generalized double Pareto shrinkage

- Computer Science, Mathematics
- Statistica Sinica
- 2013

The properties of the maximum a posteriori estimator are investigated, as sparse estimation plays an important role in many problems, connections with some well-established regularization procedures are revealed, and some asymptotic results are shown.

Bernstein–von Mises theorems for Gaussian regression with increasing number of regressors

- Mathematics
- 2010

This paper brings a contribution to the Bayesian theory of nonparametric and semiparametric estimation. We are interested in the asymptotic normality of the posterior distribution in Gaussian linear…

Generalized Beta Mixtures of Gaussians

- Computer Science, Mathematics
- NIPS
- 2011

A new class of normal scale mixtures is proposed through a novel generalized beta distribution that encompasses many interesting priors as special cases and develops a class of variational Bayes approximations that will scale more efficiently to the types of truly massive data sets that are now encountered routinely.

Bayesian lasso regression

- Mathematics
- 2009

The lasso estimate for linear regression corresponds to a posterior mode when independent, double-exponential prior distributions are placed on the regression coefficients. This paper introduces new…

Mixtures of g Priors for Bayesian Variable Selection

- Mathematics
- 2008

Zellner's g prior remains a popular conventional prior for use in Bayesian variable selection, despite several undesirable consistency issues. In this article we study mixtures of g priors as an…

The Bayesian Lasso

- Mathematics
- 2008

The Lasso estimate for linear regression parameters can be interpreted as a Bayesian posterior mode estimate when the regression parameters have independent Laplace (i.e., double-exponential) priors.…

Variational Bridge Regression

- Mathematics, Computer Science
- AISTATS
- 2009

Results suggest that the proposed method yields an estimator that performs significantly better in sparse underlying setups than the existing state-of-the-art procedures in both n > p and p > n scenarios.