Open Access
December 2000 Adaptive optimization and $D$-optimum experimental design
Luc Pronzato
Ann. Statist. 28(6): 1743-1761 (December 2000). DOI: 10.1214/aos/1015957479


We consider the situation where one has to maximize a function $\eta(\theta, \mathbf{x})$ with respect to $\mathbf{x} \in \mathbb{R}^q$, when $\theta$ is unknown and estimated by least squares through observations $y_k = \mathbf{f}^{\top}(\mathbf{x}_k)\theta + \varepsilon_k$, with $\varepsilon_k$ some random error. Classical applications are regulation and extremum control problems. The approach we adopt corresponds to maximizing the sum of the current estimated objective and a penalization for poor estimation: $\mathbf{x}_{k+1}$ maximizes $\eta(\hat{\theta}^k, \mathbf{x}) + (\alpha_k/k)\, d_k(\mathbf{x})$, with $\hat{\theta}^k$ the estimated value of $\theta$ at step $k$ and $d_k$ the penalization function. Sufficient conditions for strong consistency of $\hat{\theta}^k$ and for almost sure convergence of $(1/k) \sum_{i=1}^k \eta(\theta, \mathbf{x}_i)$ to the maximum value of $\eta(\theta, \mathbf{x})$ are derived in the case where $d_k(\cdot)$ is the variance function used in the sequential construction of $D$-optimum designs. A classical sequential scheme from adaptive control is shown not to satisfy these conditions, and numerical simulations confirm that it indeed has convergence problems.
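The scheme in the abstract can be sketched numerically. The toy problem below (a quadratic response with regressor $\mathbf{f}(x) = (1, x, x^2)^{\top}$, the grid, the noise level, and the weight $\alpha_k = \sqrt{k}$) is an illustrative assumption, not taken from the paper; $d_k$ is the usual variance function $\mathbf{f}(x)^{\top} M_k^{-1} \mathbf{f}(x)$ of sequential $D$-optimum design, with $M_k$ the normalized information matrix.

```python
# Sketch of the penalized adaptive scheme: at each step, maximize the
# estimated objective plus (alpha_k/k) times the D-optimal variance function.
# All numerical choices (model, grid, noise, alpha_k) are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.0, 2.0, -1.0])   # eta(theta, x) = 2x - x^2, max at x = 1


def f(x):
    return np.array([1.0, x, x * x])      # regressor vector f(x)


grid = np.linspace(-2.0, 2.0, 201)        # candidate design points
F = np.vstack([f(x) for x in grid])       # regressors over the whole grid

# Initial design: 3 points so the information matrix is nonsingular
xs = [-2.0, 0.0, 2.0]
ys = [f(x) @ theta_true + 0.1 * rng.standard_normal() for x in xs]

for _ in range(100):
    k = len(xs)
    X = np.vstack([f(x) for x in xs])
    theta_hat, *_ = np.linalg.lstsq(X, np.array(ys), rcond=None)
    M = X.T @ X / k                        # normalized information matrix M_k
    # variance function d_k(x) = f(x)^T M_k^{-1} f(x), evaluated on the grid
    d = np.einsum('ij,jk,ik->i', F, np.linalg.inv(M), F)
    alpha_k = np.sqrt(k)                   # illustrative penalization weight
    crit = F @ theta_hat + (alpha_k / k) * d
    x_next = grid[np.argmax(crit)]         # maximize the penalized objective
    xs.append(x_next)
    ys.append(f(x_next) @ theta_true + 0.1 * rng.standard_normal())

# Final estimate of theta and of the maximizer of eta(theta, x)
X = np.vstack([f(x) for x in xs])
theta_hat, *_ = np.linalg.lstsq(X, np.array(ys), rcond=None)
x_star_hat = grid[np.argmax(F @ theta_hat)]
print(round(float(x_star_hat), 2))         # the true maximizer is x = 1
```

The penalty term dominates early on (forcing exploration where the predictor variance is high) and vanishes at rate $\alpha_k/k$, so the scheme eventually concentrates near the estimated maximizer while keeping the design informative enough for consistent estimation.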


Download Citation

Luc Pronzato. "Adaptive optimization and $D$-optimum experimental design." Ann. Statist. 28(6): 1743-1761, December 2000.


Published: December 2000
First available in Project Euclid: 12 March 2002

zbMATH: 1103.93320
MathSciNet: MR1835050
Digital Object Identifier: 10.1214/aos/1015957479

Primary: 62F12 , 62L05
Secondary: 93C40

Keywords: $D$-optimum design , adaptive control , least-squares estimation , sequential design , strong consistency

Rights: Copyright © 2000 Institute of Mathematical Statistics


Vol. 28 • No. 6 • December 2000