We consider a Markovian stochastic control problem with model uncertainty. The controller (the intelligent player) observes only the state and therefore uses feedback (closed-loop) strategies. The adverse player (nature), who has no direct interest in the payoff, chooses open-loop controls that parametrize Knightian uncertainty. This creates a two-step optimization problem (half of a game) over feedback strategies and open-loop controls. The main result shows that, under suitable assumptions, this problem has the same value as (half of) the symmetric zero-sum game in which the adverse player also plays feedback strategies and actively tries to minimize the payoff. In particular, the value function is independent of the filtration accessible to the adverse player. Beyond this modeling issue, the present note is a technical companion to a previous work.
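The value identity claimed above can be sketched schematically as follows; the notation here is illustrative (it is not taken from the paper), with $J$ a generic payoff, $\alpha$ a feedback strategy of the controller, $\nu$ an open-loop control of nature, and $\beta$ a feedback strategy of nature.

```latex
% Hedged sketch of the value identity, in illustrative notation:
% the controller maximizes over feedback strategies, while nature
% minimizes either over open-loop controls (Knightian uncertainty)
% or over feedback strategies (the symmetric half-game).
\[
  V \;=\; \sup_{\alpha \in \mathcal{A}_{\mathrm{fb}}}\,
          \inf_{\nu \in \mathcal{U}_{\mathrm{ol}}}
          \mathbb{E}\big[J(\alpha,\nu)\big]
    \;=\; \sup_{\alpha \in \mathcal{A}_{\mathrm{fb}}}\,
          \inf_{\beta \in \mathcal{B}_{\mathrm{fb}}}
          \mathbb{E}\big[J(\alpha,\beta)\big].
\]
```

The left-hand expression is the two-step optimization described above; the right-hand one is the lower value of the symmetric zero-sum game.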
"A note on the strong formulation of stochastic control problems with model uncertainty." Electron. Commun. Probab. 19, 1–10 (2014). https://doi.org/10.1214/ECP.v19-3436