This is the first of two papers on boundary optimal control problems with a linear state equation and convex cost, arising from the boundary control of PDEs, and on the associated Hamilton--Jacobi--Bellman equation. In this paper we establish necessary and sufficient conditions of optimality (Pontryagin maximum principle) and study the properties of a family of approximating problems that will be useful both here and in the sequel. In the second paper we apply dynamic programming to show that the value function of the problem is a solution of an integral version of the HJB equation and, moreover, that it is the pointwise limit of classical solutions of approximating equations.
"Boundary-control problems with convex cost and dynamic programming in infinite dimension. I. The maximum principle." Differential Integral Equations 17 (9-10) 1149 - 1174, 2004.