In this paper, we study a stochastic optimal control problem under degenerate G-expectation. By using the implied partition method, we show that the approximation result for admissible controls still holds. Based on this result, we obtain the dynamic programming principle and prove that the value function is the unique viscosity solution to the related HJB equation in the degenerate case.
Authors: Li, Xiaojuan (Shandong Univ, Zhongtai Secur Inst Financial Studies, Jinan 250100, Peoples R China)
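As background for this entry, the following is a generic statement of the dynamic programming principle and the associated HJB equation for classical controlled diffusions under a linear expectation; the symbols V, b, sigma, f, g and the control set U are illustrative and not taken from the paper. Under (possibly degenerate) G-expectation, the expectation below is replaced by the sublinear expectation and the resulting equation is fully nonlinear.

\[
V(t,x) = \sup_{u \in \mathcal{U}[t,t+\delta]} \mathbb{E}\Big[ \int_t^{t+\delta} f\big(s, X_s^{t,x;u}, u_s\big)\,ds + V\big(t+\delta, X_{t+\delta}^{t,x;u}\big) \Big],
\]
\[
\partial_t V(t,x) + \sup_{u \in U} \Big\{ b(t,x,u)\cdot D_x V(t,x) + \tfrac{1}{2}\,\mathrm{tr}\big(\sigma\sigma^{\top}(t,x,u)\, D_x^2 V(t,x)\big) + f(t,x,u) \Big\} = 0, \qquad V(T,x) = g(x).
\]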
In this article, we study the relationship between the maximum principle (MP) and the dynamic programming principle (DPP) for a stochastic recursive optimal control problem driven by G-Brownian motion. Under the smoothness assumption on the value function, we obtain the connection between the MP and the DPP under a reference probability $P_{t,x}^{*}$. Within the framework of viscosity solutions, we establish the relation between the first-order super-jet and sub-jet of the value function and the solution to the adjoint equation, respectively.
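As a reminder of the objects mentioned above, the first-order super-jet and sub-jet of a function w at a point x are usually defined as follows; this standard definition is included only for orientation, and the paper's precise time-dependent variants may differ in detail.

\[
D^{+} w(x) = \big\{\, p : w(y) \le w(x) + p\cdot(y-x) + o(|y-x|) \ \text{as } y \to x \,\big\}, \qquad D^{-} w(x) = -\,D^{+}(-w)(x).
\]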
We prove the dynamic programming principle (DPP) in a class of problems where an agent controls a d-dimensional diffusive dynamics via both classical and singular controls and, moreover, is able to terminate the optimisation at a time of her choosing, prior to a given maturity. The time horizon of the problem is random: it is the smaller of a fixed terminal time and the first exit time of the state dynamics from a Borel set. We consider both the case in which the total fuel available for the singular control is bounded and the case in which it is unbounded. We build upon existing proofs of the DPP and extend results available in the traditional literature on singular control (Haussmann and Suo in SIAM J Control Optim 33(3):916-936, 1995; SIAM J Control Optim 33(3):937-959, 1995) by relaxing some key assumptions and including the discretionary-stopping feature. We also connect with more general versions of the DPP (e.g., Bouchard and Touzi in SIAM J Control Optim 49(3):948-962, 2011) by showing in detail how our class of problems meets the abstract requirements therein.
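Schematically, and with purely illustrative notation (f, c, g are generic running, control and terminal costs, not the paper's data), the value function of a problem with classical control u, singular control nu and discretionary stopping time tau has the shape

\[
v(t,x) = \sup_{(u,\nu,\tau)} \mathbb{E}\Big[ \int_t^{\tau\wedge\theta} f(s, X_s, u_s)\,ds + \int_{[t,\tau\wedge\theta]} c(s, X_s)\,d\nu_s + g\big(\tau\wedge\theta, X_{\tau\wedge\theta}\big) \Big],
\]

where theta is the minimum of the fixed maturity and the first exit time of the state from the given Borel set, and tau ranges over stopping times no larger than theta; sign conventions and the fuel constraint on nu vary with the formulation.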
How can one compute (super-)hedging costs in rather general discrete-time financial market models with transaction costs? Despite the huge literature on this topic, most results are characterizations of the super-hedging prices, and it remains difficult to deduce numerical procedures to estimate them. We establish here a dynamic programming principle and prove that it can be implemented, under some conditions on the conditional supports of the price and volume processes, for a large class of market models including convex costs (such as order books) but also non-convex costs, e.g. fixed-cost models.
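For orientation only, in the frictionless benchmark (no transaction costs) the minimal super-hedging price of a European claim with payoff g(S_T) satisfies the backward recursion below, where M denotes the set of equivalent martingale measures; the paper's dynamic programming principle refines this kind of recursion to account for the cost structure (order books, fixed costs) and is not reproduced here.

\[
\pi_T = g(S_T), \qquad \pi_t = \operatorname*{ess\,sup}_{Q \in \mathcal{M}} \mathbb{E}_Q\big[\pi_{t+1} \mid \mathcal{F}_t\big], \quad t = T-1, \dots, 0.
\]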
In this paper, we study a stochastic recursive optimal control problem in which the value functional is defined by the solution of a backward stochastic differential equation (BSDE) under G-expectation. Under standard assumptions, we establish the comparison theorem for this kind of BSDE and give a novel and simple method to obtain the dynamic programming principle. Finally, we prove that the value function is the unique viscosity solution to a type of fully nonlinear HJB equation.
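In the classical (linear-expectation) analogue, the recursive value functional is given by a BSDE of the generic form below, with illustrative notation Phi, f, Z; under G-expectation the equation additionally involves an integral against the quadratic variation of the G-Brownian motion and a non-increasing process K, which are omitted here.

\[
Y_s^{t,x;u} = \Phi\big(X_T^{t,x;u}\big) + \int_s^T f\big(r, X_r^{t,x;u}, Y_r^{t,x;u}, Z_r^{t,x;u}, u_r\big)\,dr - \int_s^T Z_r^{t,x;u}\,dB_r, \qquad V(t,x) = \sup_{u} Y_t^{t,x;u}.
\]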
In this paper, we investigate a backward doubly stochastic recursive optimal control problem wherein the cost function is expressed as the solution to a backward doubly stochastic differential equation. We present the...
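For reference, a backward doubly stochastic differential equation in the sense of Pardoux and Peng has the generic form below, with W and B independent Brownian motions and the dB-integral understood as a backward Ito integral; the notation xi, f, g is illustrative, and the controlled version studied in the paper is more involved.

\[
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds + \int_t^T g(s, Y_s, Z_s)\,d\overleftarrow{B}_s - \int_t^T Z_s\,dW_s, \qquad t \in [0,T].
\]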
We study the McKean-Vlasov optimal control problem with common noise, which allows the law of the control process to appear in the state dynamics, under various formulations: strong and weak, Markovian or non-Markovian. By interpreting the controls as probability measures on an appropriate canonical space with two filtrations, we develop the classical measurable selection, conditioning and concatenation arguments in this new context, and establish the dynamic programming principle under general conditions.
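Schematically, the controlled state dynamics in such problems have the shape below, where B is the common noise, W the idiosyncratic noise, and the conditional law given the common-noise filtration may involve the control alpha as well as the state; the coefficients b, sigma, sigma^0 are illustrative, and the precise formulations (strong, weak, Markovian or not) are those discussed in the paper.

\[
dX_t = b\big(t, X_t, \mathcal{L}(X_t, \alpha_t \mid \mathcal{F}^{B}_t), \alpha_t\big)\,dt + \sigma\big(t, X_t, \mathcal{L}(X_t, \alpha_t \mid \mathcal{F}^{B}_t), \alpha_t\big)\,dW_t + \sigma^{0}\big(t, X_t, \mathcal{L}(X_t, \alpha_t \mid \mathcal{F}^{B}_t), \alpha_t\big)\,dB_t.
\]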
We consider a Bolza-type optimal control problem for a dynamical system described by a fractional differential equation with the Caputo derivative of order $\alpha \in (0, 1)$. The value of this problem is introduced as a functional on a suitable space of histories of motions. We prove that this functional satisfies the dynamic programming principle. Based on a new notion of coinvariant derivatives of order $\alpha$, we associate the considered optimal control problem with a Hamilton-Jacobi-Bellman equation. Under certain smoothness assumptions, we establish a connection between the value functional and a solution to this equation. Moreover, we propose a way of constructing optimal feedback controls. The paper concludes with an example.
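For reference, the Caputo fractional derivative of order $\alpha \in (0,1)$ of an absolutely continuous function x on [0, T] is

\[
\big({}^{C}D^{\alpha} x\big)(t) = \frac{1}{\Gamma(1-\alpha)} \int_0^t \frac{\dot{x}(s)}{(t-s)^{\alpha}}\,ds,
\]

which is the derivative underlying the dynamics considered in the paper.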
Authors: Dong, Yuchao; Meng, Qingxin; Zhang, Qi
Affiliations: Tongji Univ, Sch Math Sci, Key Lab Intelligent Comp & Applicat, Minist Educ, Shanghai 200092, Peoples R China; Huzhou Univ, Dept Math Sci, Huzhou 313000, Zhejiang, Peoples R China; Fudan Univ, Sch Math Sci, Shanghai 200433, Peoples R China; Fudan Univ, Lab Math Nonlinear Sci, Shanghai 200433, Peoples R China
This paper aims to explore the relationship between the maximum principle and the dynamic programming principle for a stochastic recursive control problem with random coefficients. Under certain regularity conditions on the coefficients, the relationship between the Hamiltonian system with random coefficients and the stochastic Hamilton-Jacobi-Bellman equation is obtained. This is very different from the case of deterministic coefficients, since the stochastic Hamilton-Jacobi-Bellman equation is a backward stochastic partial differential equation whose solution is a pair of random fields rather than a deterministic function. A linear-quadratic recursive optimization problem is given as an explicit illustrative example of this relationship.
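Schematically, a backward stochastic partial differential equation of the kind referred to above has a solution pair of random fields (u, psi) and takes the form below; the driver F here is generic and illustrative, whereas in the paper it is built from the stochastic Hamiltonian of the recursive problem.

\[
-\,du(t,x) = F\big(t, x, u(t,x), D_x u(t,x), D_x^2 u(t,x), \psi(t,x), D_x \psi(t,x)\big)\,dt - \psi(t,x)\,dW_t, \qquad u(T,x) = g(x).
\]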
We construct an abstract framework in which the dynamic programming principle (DPP) can be readily proven. It encompasses a broad range of common stochastic control problems in the weak formulation, and deals with problems in the "martingale formulation" with particular ease. We give two illustrations: first, we establish the DPP for general controlled diffusions and show that their value functions are viscosity solutions of the associated Hamilton-Jacobi-Bellman equations under minimal conditions. After that, we show how to treat singular control via the example of the classical monotone-follower problem.
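As background, one common textbook formulation of the monotone-follower problem mentioned above is the following, with a discount rate rho, a convex running cost f and unit proportional cost of control; the notation is illustrative and not the paper's.

\[
v(x) = \inf_{\xi} \mathbb{E}\Big[ \int_0^{\infty} e^{-\rho t} f\big(X_t^{\xi}\big)\,dt + \int_{[0,\infty)} e^{-\rho t}\,d\xi_t \Big], \qquad X_t^{\xi} = x + W_t + \xi_t,
\]

where the infimum is taken over nondecreasing, right-continuous, adapted processes xi with $\xi_{0-} = 0$.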