Open Access • December 2017
Deep Learning: A Bayesian Perspective
Nicholas G. Polson, Vadim Sokolov
Bayesian Anal. 12(4): 1275-1304 (December 2017). DOI: 10.1214/17-BA1082

Abstract

Deep learning is a form of machine learning for nonlinear, high-dimensional pattern matching and prediction. By taking a Bayesian probabilistic perspective, we provide a number of insights into more efficient algorithms for optimization and hyper-parameter tuning. Traditional high-dimensional data reduction techniques, such as principal component analysis (PCA), partial least squares (PLS), reduced rank regression (RRR), and projection pursuit regression (PPR), are all shown to be shallow learners. Their deep learning counterparts exploit multiple layers of data reduction, which provide predictive performance gains. Stochastic gradient descent (SGD) training optimization and Dropout (DO) regularization provide estimation and variable selection. Bayesian regularization is central to finding weights and connections in networks to optimize the predictive bias-variance trade-off. To illustrate our methodology, we provide an analysis of international bookings on Airbnb. Finally, we conclude with directions for future research.
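To make the abstract's terminology concrete, the following is a minimal sketch (not the authors' Airbnb model or code) of a deep learner as a stack of data-reduction layers, trained with stochastic gradient descent (SGD) and regularized with Dropout, using the TensorFlow/Keras API mentioned in the keywords. The layer sizes, activations, and synthetic data are illustrative assumptions only.

```python
# Minimal illustrative sketch: a deep learner trained with SGD and Dropout.
# All dimensions, activations, and the synthetic data are assumptions for
# illustration, not taken from the paper.
import numpy as np
import tensorflow as tf

# Synthetic high-dimensional inputs with a nonlinear binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50)).astype("float32")
y = (X[:, 0] * X[:, 1] > 0).astype("float32")

# A shallow learner would use a single data-reduction layer; a deep learner
# composes several, each followed here by Dropout for regularization.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(50,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Stochastic gradient descent performs the training optimization.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```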

Citation


Nicholas G. Polson, Vadim Sokolov. "Deep Learning: A Bayesian Perspective." Bayesian Anal. 12(4): 1275-1304, December 2017. https://doi.org/10.1214/17-BA1082

Information

Published: December 2017
First available in Project Euclid: 16 November 2017

zbMATH: 06843075
MathSciNet: MR3724986
Digital Object Identifier: 10.1214/17-BA1082

Keywords: Artificial intelligence, Bayesian hierarchical models, deep learning, LSTM models, machine learning, pattern matching, prediction, TensorFlow
