In this paper, we study a stochastic optimal control problem under degenerate G-expectation. By using the implied partition method, we show that the approximation result for admissible controls still holds. Based on this result, we obtain the dynamic programming principle and prove that the value function is the unique viscosity solution to the related HJB equation in the degenerate case.
We consider a two-player zero-sum game in a bounded open domain Ω described as follows: at a point x ∈ Ω, Players I and II play an ε-step tug-of-war game with probability α, and with probability β (α + β = 1), a random point in the ball of radius ε centered at x is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function F. We give a detailed proof of the fact that the value functions of this game satisfy the dynamic programming principle
$$u(x) = \frac{\alpha}{2}\Big\{\sup_{y \in \overline{B}_\varepsilon(x)} u(y) + \inf_{y \in \overline{B}_\varepsilon(x)} u(y)\Big\} + \beta \fint_{B_\varepsilon(x)} u(y)\,dy,$$
for x ∈ Ω, with u(y) = F(y) when y ∉ Ω. This principle implies the existence of quasioptimal Markovian strategies.
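The DPP above lends itself to a direct numerical check: one can iterate the formula on a grid until the values stabilize. The sketch below does this on a one-dimensional domain; the interval, the payoff F, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Value iteration for the tug-of-war-with-noise DPP on Omega = (0, 1):
#   u(x) = (alpha/2) * (sup over closed ball + inf over closed ball)
#          + beta * (average over the ball),
# with u = F outside Omega. All parameters below are hypothetical.

alpha, beta = 0.6, 0.4       # alpha + beta = 1
eps = 0.05                   # step size of the game
h = 0.01                     # grid spacing
pad = int(round(eps / h))    # grid points within radius eps

# Grid covering Omega = (0, 1) plus an eps-wide boundary strip.
x = np.arange(-eps, 1 + eps + h / 2, h)
interior = (x > 0) & (x < 1)

F = x**2                     # hypothetical payoff, applied outside Omega
u = F.copy()                 # initialize with the payoff everywhere

for _ in range(2000):
    u_new = u.copy()
    for i in np.where(interior)[0]:
        ball = u[i - pad:i + pad + 1]   # values on the closed eps-ball
        u_new[i] = (alpha / 2) * (ball.max() + ball.min()) + beta * ball.mean()
    if np.max(np.abs(u_new - u)) < 1e-9:
        break
    u = u_new

# u now approximates the value of the game on the grid.
```

Since each update is a convex combination of nearby values (the weights α/2, α/2, β sum to 1), the iterates stay between the extremes of F, which gives a quick sanity check on the output.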
In this work we study the stochastic recursive control problem, in which the aggregator (or generator) of the backward stochastic differential equation describing the running cost is continuous but not necessarily Lipschitz with respect to the first unknown variable and the control, and monotonic with respect to the first unknown variable. The dynamic programming principle and the connection between the value function and the viscosity solution of the associated Hamilton-Jacobi-Bellman equation are established in this setting by the generalized comparison theorem for backward stochastic differential equations and the stability of viscosity solutions. Finally, we take the control problem of continuous-time Epstein-Zin utility with a non-Lipschitz aggregator as an example to demonstrate the application of our study.
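To make the recursive (BSDE-type) cost concrete, one can discretize a scalar BSDE with a monotone, non-Lipschitz generator by an explicit backward Euler scheme on a binomial tree. The generator f(y) = -y³ and the terminal condition below are hypothetical choices for the sketch, not the paper's setting.

```python
import numpy as np

# Backward-Euler sketch for a scalar BSDE
#   Y_t = g(W_T) + int_t^T f(Y_s) ds - int_t^T Z_s dW_s,
# with the monotone, non-Lipschitz generator f(y) = -y**3 (illustrative;
# the paper treats a general class). W is approximated by a recombining
# binomial tree with up/down increments of +-sqrt(h).

T, n = 1.0, 200
h = T / n
dw = np.sqrt(h)

def f(y):
    return -y**3                       # monotone in y, not globally Lipschitz

def g(w):
    return 1.0 / (1.0 + np.exp(-w))    # hypothetical terminal condition in (0, 1)

# Terminal values: W_T takes the values (2k - n) * dw, k = 0..n.
y = g((2 * np.arange(n + 1) - n) * dw)

for i in range(n - 1, -1, -1):
    # Conditional expectation: up and down moves each have probability 1/2.
    cond = 0.5 * (y[1:i + 2] + y[:i + 1])
    y = cond + h * f(cond)             # explicit Euler step for the generator

y0 = y[0]                              # approximation of Y_0
```

Because f is negative on (0, 1) and the Euler step is a small perturbation of a convex combination, the iterates remain in (0, 1) and Y_0 lies strictly below the plain expectation of the terminal payoff.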
In this paper, we study a stochastic recursive optimal control problem in which the cost functional is described by the solution of a backward stochastic differential equation driven by G-Brownian motion. Under standard assumptions, we establish the dynamic programming principle and the related fully nonlinear HJB equation in the framework of G-expectation. Finally, we show that the value function is the viscosity solution of the obtained HJB equation. (C) 2016 Elsevier B.V. All rights reserved.
In this paper, we study one kind of stochastic recursive optimal control problem for systems described by stochastic differential equations with delay (SDDEs). In our framework, not only the dynamics of the systems but also the recursive utility depend on the past path segment of the state process in a general form. We give the dynamic programming principle for this kind of optimal control problem and show that the value function is the viscosity solution of the corresponding infinite-dimensional Hamilton-Jacobi-Bellman partial differential equation.
In this paper, we study one kind of stochastic recursive optimal control problem with the obstacle constraint for the cost functional described by the solution of a reflected backward stochastic differential equation. We give the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique viscosity solution of the obstacle problem for the corresponding Hamilton-Jacobi-Bellman equation.
We consider a Bolza-type optimal control problem for a dynamical system described by a fractional differential equation with the Caputo derivative of order α ∈ (0, 1). The value of this problem is introduced as a functional in a suitable space of histories of motions. We prove that this functional satisfies the dynamic programming principle. Based on a new notion of coinvariant derivatives of order α, we associate the considered optimal control problem with a Hamilton-Jacobi-Bellman equation. Under certain smoothness assumptions, we establish a connection between the value functional and a solution to this equation. Moreover, we propose a way of constructing optimal feedback controls. The paper concludes with an example.
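The Caputo derivative entering such dynamics can be approximated numerically. The standard L1 finite-difference scheme below is a generic illustration (not the paper's construction); conveniently, it reproduces the known Caputo derivative of f(t) = t, namely t^(1-α)/Γ(2-α), exactly on the grid.

```python
import math
import numpy as np

# L1 finite-difference approximation of the Caputo derivative of
# order alpha in (0, 1) on the uniform grid t_n = n * h.

def caputo_l1(f_vals, h, alpha):
    """Approximate (C D^alpha f)(t_n) at every grid point t_n = n*h."""
    n = len(f_vals)
    coef = 1.0 / (math.gamma(2 - alpha) * h**alpha)
    # Weights b_k = (k+1)^(1-alpha) - k^(1-alpha); they telescope.
    b = [(k + 1)**(1 - alpha) - k**(1 - alpha) for k in range(n)]
    out = np.zeros(n)
    for i in range(1, n):
        s = sum(b[k] * (f_vals[i - k] - f_vals[i - k - 1]) for k in range(i))
        out[i] = coef * s
    return out

# Sanity check against the closed form for f(t) = t.
alpha, h = 0.5, 0.01
t = np.arange(0, 1 + h / 2, h)
approx = caputo_l1(t, h, alpha)
exact = t**(1 - alpha) / math.gamma(2 - alpha)
```

For linear f, the increments f(t_{i-k}) - f(t_{i-k-1}) all equal h, so the weights telescope and the scheme is exact up to rounding; for general smooth f, it carries a discretization error of order h^(2-α).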
We study the dynamic programming principle (DPP for short) on manifolds, obtain the Hamilton-Jacobi-Bellman (HJB for short) equation, and prove that the value function is the only viscosity solution to the HJB equation. Then, we investigate the relation between DPP and Pontryagin's maximum principle (PMP for short), from which we obtain PMP on manifolds. (C) 2015 Elsevier Inc. All rights reserved.
We construct an abstract framework in which the dynamic programming principle (DPP) can be readily proven. It encompasses a broad range of common stochastic control problems in the weak formulation, and deals with problems in the "martingale formulation" with particular ease. We give two illustrations: first, we establish the DPP for general controlled diffusions and show that their value functions are viscosity solutions of the associated Hamilton-Jacobi-Bellman equations under minimal conditions. After that, we show how to treat singular control using the example of the classical monotone-follower problem.
Author: Li, Xiaojuan (Zhongtai Secur Inst Financial Studies, Shandong Univ, Jinan 250100, Peoples R China)
In this article, we study the relationship between the maximum principle (MP) and the dynamic programming principle (DPP) for a stochastic recursive optimal control problem driven by G-Brownian motion. Under a smoothness assumption on the value function, we obtain the connection between MP and DPP under a reference probability $P^*_{t,x}$. Within the framework of viscosity solutions, we establish the relation between the first-order super-jet and sub-jet of the value function and the solution to the adjoint equation, respectively.