The dynamic programming principle for a multidimensional singular stochastic control problem is established in this paper. When assuming Lipschitz continuity on the data, it is shown that the value function is continuous and is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation.
This paper surveys those aspects of controlled diffusion processes wherein the control problem is treated as an optimization problem on a set of probability measures on the path space. This includes: (i) existence results for optimal admissible or Markov controls (both in nondegenerate and degenerate cases), (ii) a probabilistic treatment of the dynamic programming principle, (iii) the corresponding results for control under partial observations, (iv) a probabilistic approach to the ergodic control problem. The paper is expository in nature and aims at giving a unified treatment of several old and new results that evolve around certain central ideas.
We prove that the optimal cost function of a deterministic impulse control problem is the unique viscosity solution of a first-order Hamilton–Jacobi quasi-variational inequality in $\mathbb{R}^N $.
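For orientation, a first-order Hamilton–Jacobi quasi-variational inequality of the kind described typically takes the following shape (generic notation, not drawn from the paper itself):

```latex
% A typical first-order quasi-variational inequality for deterministic
% impulse control (generic notation; the paper's exact data may differ):
\max\bigl( \lambda u(x) + H(x, Du(x)),\; u(x) - Mu(x) \bigr) = 0
  \quad \text{in } \mathbb{R}^N ,
\qquad
Mu(x) = \inf_{\xi} \bigl[\, u(x + \xi) + c(\xi) \,\bigr],
```

where $M$ is the impulse obstacle operator and $c(\xi) > 0$ is the cost of applying an impulse $\xi$; the value function satisfies the PDE where continuing is optimal and the obstacle constraint $u \le Mu$ everywhere.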
Author:
LIONS, P.-L., Ceremade, Université Paris IX-Dauphine, Place de Lattre de Tassigny, 75775 Paris Cedex 16, France
We consider general problems of optimal stochastic control and the associated Hamilton-Jacobi-Bellman equations. We recall first the usual derivation of the Hamilton-Jacobi-Bellman equations from the dynamic programming principle. We then show and explain various results, including (i) continuity results for the optimal cost function, (ii) characterizations of the optimal cost function as the maximum subsolution, (iii) regularity results, and (iv) uniqueness results. We also develop the recent notion of viscosity solutions of Hamilton-Jacobi-Bellman equations.
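For reference, the Hamilton-Jacobi-Bellman equation associated with a discounted stochastic control problem is usually written as follows (generic notation, not taken from the paper; $b$, $\sigma$, $f$, $\lambda$ are assumed drift, diffusion, running cost, and discount rate):

```latex
% Standard second-order HJB equation for the optimal cost function u
% (generic notation; signs follow the convention of a minimization problem):
\sup_{a \in A} \Bigl[\, \lambda u(x)
  - \tfrac{1}{2}\,\mathrm{tr}\bigl(\sigma(x,a)\sigma(x,a)^{\top} D^2 u(x)\bigr)
  - b(x,a)\cdot Du(x) - f(x,a) \,\Bigr] = 0
  \quad \text{in } \mathbb{R}^N ,
```

where the supremum runs over the control set $A$. When $\sigma$ may degenerate, classical solutions need not exist, which is the setting in which the notion of viscosity solution mentioned in the abstract becomes essential.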