Electronic Journal of Statistics

Strong consistency of the least squares estimator in regression models with adaptive learning

Norbert Christopeit and Michael Massmann

Full-text: Open access


This paper studies the strong consistency of the ordinary least squares (OLS) estimator in linear regression models with adaptive learning. It is a companion to Christopeit & Massmann (2018), which considers the estimator’s convergence in distribution and its weak consistency in the same setting. Under constant gain learning, the model is closely related to stationary, (alternating) unit root or explosive autoregressive processes. Under decreasing gain learning, the regressors in the model are asymptotically collinear. The paper examines, first, the issue of strong convergence of the learning recursion: it is argued that, under constant gain learning, the recursion does not converge in any probabilistic sense, while, under decreasing gain learning, rates are derived at which the recursion converges almost surely to the rational expectations equilibrium. Secondly, the paper establishes the strong consistency of the OLS estimators, under both constant and decreasing gain learning, as well as the rates at which the estimators converge almost surely. In the constant gain model, separate estimators for the intercept and slope parameters are juxtaposed with the joint estimator, drawing on the recent literature on explosive autoregressive models. Thirdly, it is emphasised that strong consistency obtains in all models even though the near-optimal condition of Lai & Wei (1982a) for the strong consistency of OLS in linear regression models with stochastic regressors is not always met.
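The decreasing gain setting described in the abstract can be illustrated with a short simulation. The sketch below is not the paper's exact specification: it assumes a prototypical adaptive-learning model with forecast feedback, in which agents hold a belief a_{t-1} about the outcome y_t, the outcome is y_t = α + β a_{t-1} + ε_t, and beliefs are updated with the decreasing gain sequence γ_t = 1/t. All parameter values (α = 1, β = 0.5, σ = 0.1) and the sample size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter values (not taken from the paper).
alpha, beta, sigma = 1.0, 0.5, 0.1
n = 50_000

a = np.empty(n + 1)   # agents' beliefs a_0, ..., a_n
a[0] = 0.0            # arbitrary initial belief
y = np.empty(n)       # realised outcomes y_1, ..., y_n

for t in range(1, n + 1):
    # Outcome depends on the previous-period belief (forecast feedback).
    y[t - 1] = alpha + beta * a[t - 1] + sigma * rng.standard_normal()
    # Decreasing gain learning: gain sequence gamma_t = 1/t.
    a[t] = a[t - 1] + (y[t - 1] - a[t - 1]) / t

# Rational expectations equilibrium, to which the belief converges a.s.
a_star = alpha / (1.0 - beta)

# OLS of y_t on an intercept and the belief a_{t-1}. Because a_{t-1}
# converges to the constant a_star, the two regressors become
# asymptotically collinear, as noted in the abstract.
X = np.column_stack([np.ones(n), a[:-1]])
(alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
```

With a decreasing gain the belief path supplies ever less variation in the regressor a_{t-1}, which is why the excitation condition of Lai & Wei (1982a) can fail here even though, as the paper shows, the OLS estimators remain strongly consistent.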

Article information

Electron. J. Statist., Volume 13, Number 1 (2019), 1646-1693.

Received: August 2018
First available in Project Euclid: 17 April 2019


Primary: 62F10: Point estimation 62H12: Estimation 62J05: Linear regression 62M10: Time series, auto-correlation, regression, etc. [See also 91B84]
Secondary: 91A26: Rationality, learning 91B64: Macro-economic models (monetary models, models of taxation) 91B84: Economic time series analysis [See also 62M10]

Keywords: adaptive learning; almost sure convergence; non-stationary regression; ordinary least squares

Creative Commons Attribution 4.0 International License.


Christopeit, Norbert; Massmann, Michael. Strong consistency of the least squares estimator in regression models with adaptive learning. Electron. J. Statist. 13 (2019), no. 1, 1646--1693. doi:10.1214/19-EJS1558. https://projecteuclid.org/euclid.ejs/1555466480



  • Adam, K., Marcet, A. and Nicolini, J. P. (2016). Stock market volatility and learning. Journal of Finance 71 33–82.
  • Apostol, T. M. (1974). Mathematical Analysis, 2nd ed. Addison-Wesley, Reading, MA.
  • Benveniste, A., Métivier, M. and Priouret, P. (1990). Adaptive Algorithms and Stochastic Approximation. Springer, Berlin. Originally published in French in 1987.
  • Chan, N. H. and Wei, C. Z. (1988). Limiting distributions of least squares estimates of unstable autoregressive processes. Annals of Statistics 16 367–401.
  • Chevillon, G., Massmann, M. and Mavroeidis, S. (2010). Inference in models with adaptive learning. Journal of Monetary Economics 57 341–351.
  • Chow, Y. S. (1965). Local convergence of martingales and the law of large numbers. Annals of Mathematical Statistics 36 552–558.
  • Chow, Y. S. and Teicher, H. (1973). Iterated logarithm laws for weighted averages. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 26 87–94.
  • Christopeit, N. (1986). Quasi-least-squares estimation in semimartingale regression models. Stochastics 16 255–278.
  • Christopeit, N. and Massmann, M. (2012). Strong consistency of the least-squares estimator in simple regression models with stochastic regressors. TI Discussion Paper 12-109/III.
  • Christopeit, N. and Massmann, M. (2018). Estimating structural parameters in regression models with adaptive learning. Econometric Theory 34 68–111.
  • Donsker, M. D. and Varadhan, S. R. S. (1977). On the laws of the iterated logarithm for local times. Communications on Pure and Applied Mathematics 30 707–753.
  • Evans, G. W. and Honkapohja, S. (2001). Learning and Expectations in Macroeconomics. Princeton University Press, Princeton.
  • Evans, G. W., Honkapohja, S., Sargent, T. J. and Williams, N. (2013). Bayesian model averaging, learning and model selection. In Macroeconomics at the Service of Public Policy (T. J. Sargent and J. Vilmunen, eds.) 6 99–119. Oxford University Press, Oxford, UK.
  • Kottmann, T. (1990). Learning Procedures and Rational Expectations in Linear Models with Forecast Feedback. PhD thesis, University of Bonn.
  • Lai, T. L. (2003). Stochastic approximation. Annals of Statistics 31 391–406.
  • Lai, T. L. and Wei, C. Z. (1982a). Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. Annals of Statistics 10 154–166.
  • Lai, T. L. and Wei, C. Z. (1982b). Asymptotic properties of projections with applications to stochastic regression problems. Journal of Multivariate Analysis 12 346–370.
  • Lai, T. L. and Wei, C. Z. (1983a). Asymptotic properties of general autoregressive models and strong consistency of least-squares estimates of their parameters. Journal of Multivariate Analysis 13 1–23.
  • Lai, T. L. and Wei, C. Z. (1983b). A note on martingale difference sequences satisfying the local Marcinkiewicz-Zygmund condition. Bulletin of the Institute of Mathematics, Academia Sinica 11 1–13.
  • Lai, T. L. and Wei, C. Z. (1985). Asymptotic properties of multivariate weighted sums with applications to stochastic linear regression in linear dynamic systems. In Multivariate Analysis VI (P. R. Krishnaiah, ed.) 375–393. North-Holland, Amsterdam.
  • Ljung, L. (1977). Analysis of recursive stochastic algorithms. IEEE Transactions on Automatic Control AC-22 551–575.
  • Malmendier, U. and Nagel, S. (2016). Learning from inflation experiences. Quarterly Journal of Economics 131 53–87.
  • Marcet, A. and Sargent, T. J. (1995). Speed of convergence of recursive least squares: learning with autoregressive moving-average perceptions. In Learning and Rationality in Economics (A. Kirman and M. Salmon, eds.) 6 179–215. Blackwell.
  • Milani, F. (2007). Expectations, learning and macroeconomic persistence. Journal of Monetary Economics 54 2065–2082.
  • Nielsen, B. (2005). Strong consistency results for least squares estimators in general vector autoregressions with deterministic terms. Econometric Theory 21 534–561.
  • Phillips, P. C. B. (1987). Towards a unified asymptotic theory for autoregression. Biometrika 74 535–547.
  • Phillips, P. C. B. (2007). Regression with slowly varying regressors and nonlinear trends. Econometric Theory 23 557–614.
  • Phillips, P. C. B. and Magdalinos, T. (2007). Limit theory for moderate deviations from a unit root. Journal of Econometrics 136 115–130.
  • Phillips, P. C. B. and Magdalinos, T. (2008). Limit theory for explosively cointegrated systems. Econometric Theory 24 865–887.
  • Sargent, T. J. (1993). Bounded Rationality in Macroeconomics. Clarendon Press, Oxford.
  • Sargent, T. J. (1999). The Conquest of American Inflation. Princeton University Press, Princeton.
  • Shiryaev, A. N. (1996). Probability, 2nd ed. Springer, New York. 1st edition 1984.
  • Solo, V. and Kong, X. (1995). Signal Processing Algorithms. Prentice Hall, Upper Saddle River.
  • Stout, W. F. (1970). The Hartman-Wintner law of the iterated logarithm for martingales. Annals of Mathematical Statistics 41 2158–2160.
  • Wang, X. and Yu, J. (2015). Limit theory for an explosive autoregressive process. Economics Letters 126 176–180.
  • Wei, C. Z. (1985). Asymptotic properties of least-squares estimates in stochastic regression models. Annals of Statistics 13 1498–1508.