Open Access
Stochastic Approximation Algorithms for Constrained Optimization Problems
Harold J. Kushner
Ann. Statist. 2(4): 713-723 (July, 1974). DOI: 10.1214/aos/1176342759

Abstract

The paper gives convergence theorems for several sequential Monte Carlo or stochastic approximation algorithms for finding a local minimum of a function $f(\bullet)$ on a set $C$ defined by $C = \{x: q^i(x) \leqq 0, i = 1, 2, \cdots, s\}$. The function $f(\bullet)$ is unknown, but "noise perturbed" values can be observed at any desired parameter $x \in C$. The algorithms generate a sequence of random variables $\{X_n\}$ such that (for a.a. $\omega$) any convergent subsequence of $\{X_n(\omega)\}$ converges to a point where a certain necessary condition for constrained optimality holds. The techniques are drawn from both stochastic approximation and nonlinear programming.
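The setting described above can be illustrated with a simple sketch: a Kiefer-Wolfowitz-style iteration that estimates the gradient of the unknown $f(\bullet)$ from noisy observations and projects each iterate back onto the constraint set $C$. This is not the paper's algorithm, only a minimal illustration of the general idea; the objective, noise model, and single linear constraint below are hypothetical choices made for the example.

```python
import random

def noisy_f(x):
    # Noise-perturbed observation of an unknown objective.
    # Hypothetical choice for illustration: f(x) = (x - 2)^2 plus Gaussian noise.
    return (x - 2) ** 2 + random.gauss(0, 0.1)

def project(x):
    # Projection onto C = {x : q(x) <= 0} with the single constraint
    # q(x) = x - 1, i.e. clip the iterate to x <= 1.
    return min(x, 1.0)

def constrained_sa(x0, n_iter=2000):
    """Projected finite-difference stochastic approximation (sketch)."""
    x = x0
    for n in range(1, n_iter + 1):
        a_n = 1.0 / n            # step size: sum a_n diverges, terms shrink
        c_n = 1.0 / n ** 0.25    # shrinking finite-difference width
        # Two-sided finite-difference gradient estimate from noisy values.
        grad = (noisy_f(x + c_n) - noisy_f(x - c_n)) / (2 * c_n)
        x = project(x - a_n * grad)
    return x
```

With the unconstrained minimizer at $x = 2$ lying outside $C = \{x \leqq 1\}$, the iterates drift toward the boundary and the projection keeps them feasible, so the sequence settles near the constrained minimizer $x = 1$, a point where the necessary condition for constrained optimality holds.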

Citation


Harold J. Kushner. "Stochastic Approximation Algorithms for Constrained Optimization Problems." Ann. Statist. 2 (4) 713 - 723, July, 1974. https://doi.org/10.1214/aos/1176342759

Information

Published: July, 1974
First available in Project Euclid: 12 April 2007

zbMATH: 0296.62077
MathSciNet: MR365955
Digital Object Identifier: 10.1214/aos/1176342759

Keywords: 62-45 , 90-58 , 93-60 , 93-70 , constrained optimization , constrained stochastic approximation , sequential Monte Carlo

Rights: Copyright © 1974 Institute of Mathematical Statistics
