A new type of controlled fully coupled forward-backward stochastic differential equations is discussed, namely those involving the value function. With a new iteration method, we prove an existence and uniqueness theorem for this type of equation. Using the notion of an extended "backward semigroup", we prove that the value function satisfies the dynamic programming principle and is a viscosity solution of the associated nonlocal Hamilton-Jacobi-Bellman equation.
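As a schematic illustration of this type of system (the notation below is assumed for the sketch and is not taken from the paper), a controlled fully coupled FBSDE involving the value function W can be written as
\[
\begin{cases}
dX_s = b\big(s, X_s, Y_s, Z_s, W(s, X_s), u_s\big)\,ds + \sigma\big(s, X_s, Y_s, Z_s, W(s, X_s), u_s\big)\,dB_s,\\[2pt]
dY_s = -f\big(s, X_s, Y_s, Z_s, W(s, X_s), u_s\big)\,ds + Z_s\,dB_s,\qquad t \le s \le T,\\[2pt]
X_t = x,\qquad Y_T = \Phi(X_T),
\end{cases}
\]
where the value function W(t, x) = \operatorname*{ess\,inf}_{u} Y_t^{t,x;u} is itself defined through this system; it is this self-referential coupling that makes the associated Hamilton-Jacobi-Bellman equation nonlocal.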
We establish a new type of backward stochastic differential equations (BSDEs) connected with stochastic differential games (SDGs), namely, BSDEs strongly coupled with the lower and the upper value functions of SDGs, where the lower and the upper value functions are defined through this BSDE. An existence and uniqueness theorem and a comparison theorem are proved for such equations with the help of an iteration method. We also show that the lower and the upper value functions satisfy the dynamic programming principle. Moreover, we study the associated Hamilton-Jacobi-Bellman-Isaacs (HJB-Isaacs) equations, which are nonlocal and strongly coupled with the lower and the upper value functions. Using a new method, we characterize the pair (W, U) consisting of the lower and the upper value functions as the unique viscosity solution of our nonlocal HJB-Isaacs equation. Furthermore, the game has a value under Isaacs' condition.
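For orientation, Isaacs' condition referred to here is the standard minimax condition on the Hamiltonians: in generic notation (not taken from the paper), for all arguments (t, x, p, A),
\[
\inf_{u \in U}\,\sup_{v \in V} H(t, x, p, A, u, v) \;=\; \sup_{v \in V}\,\inf_{u \in U} H(t, x, p, A, u, v),
\]
under which the lower and the upper value functions coincide and the game has a value.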
In this paper we study zero-sum two-player stochastic differential games with the help of the theory of backward stochastic differential equations (BSDEs). More precisely, we generalize the results of the pioneering work of Fleming and Souganidis [Indiana Univ. Math. J., 38 (1989), pp. 293-314] by considering cost functionals defined by controlled BSDEs and by allowing the admissible control processes to depend on events occurring before the beginning of the game. This extension of the class of admissible control processes has the consequence that the cost functionals become random variables. However, by making use of a Girsanov transformation argument, which is new in this context, we prove that the upper and the lower value functions of the game remain deterministic. Apart from the fact that this extension of the class of admissible control processes is quite natural and reflects the behavior of the players, who always use the maximum of available information, its combination with BSDE methods, in particular with the notion of stochastic "backward semigroups" introduced by Peng [BSDE and stochastic optimizations, in Topics in Stochastic Analysis, Science Press, Beijing, 1997], then allows us to prove a dynamic programming principle for both the upper and the lower value functions of the game in a straightforward way. The upper and the lower value functions are then shown to be the unique viscosity solutions of the upper and the lower Hamilton-Jacobi-Bellman-Isaacs equations, respectively. For this, Peng's BSDE method is extended from the framework of stochastic control theory to that of stochastic differential games.
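In the notation of stochastic backward semigroups (a generic sketch with assumed symbols, not the paper's exact notation), the dynamic programming principle for, say, the lower value function W takes the form
\[
W(t, x) \;=\; \operatorname*{ess\,inf}_{\beta}\;\operatorname*{ess\,sup}_{u}\; G^{t,x;u,\beta(u)}_{t,\,t+\delta}\Big[\, W\big(t+\delta,\; X^{t,x;u,\beta(u)}_{t+\delta}\big) \Big],\qquad 0 \le \delta \le T - t,
\]
where G^{t,x;u,v}_{t,t+\delta}[\,\cdot\,] denotes the backward semigroup defined through the solution of the controlled BSDE over the interval [t, t+\delta], u ranges over admissible controls of the first player, and \beta over admissible strategies of the second player.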
In this paper we study the integral-partial differential equations of Isaacs' type via zero-sum two-player stochastic differential games (SDGs) with jump diffusion. The results of Fleming and Souganidis (1989) [9] and those of Biswas (2009) [3] are extended: we investigate a controlled stochastic system driven by a Brownian motion and a Poisson random measure, with nonlinear cost functionals defined by controlled backward stochastic differential equations (BSDEs). Furthermore, unlike in the two papers cited above, the admissible control processes of the two players are allowed to depend on all events from the past. This quite natural generalization permits the players to take earlier information into account and makes it more convenient to obtain the dynamic programming principle (DPP). However, the cost functionals are no longer deterministic, and hence the upper and the lower value functions are a priori random fields. We use a new method to prove that, indeed, the upper and the lower value functions are deterministic. On the other hand, thanks to BSDE methods (Peng, 1997) [18], we can directly prove a DPP for the upper and the lower value functions, and also that both these functions are the unique viscosity solutions of the upper and the lower integral-partial differential equations of Hamilton-Jacobi-Bellman-Isaacs type, respectively. Moreover, the existence of the value of the game is obtained in this more general setting under Isaacs' condition. (C) 2011 Elsevier B.V. All rights reserved.
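A schematic form of the integral-partial differential equations of Isaacs' type appearing here (with assumed generic notation) is
\[
\begin{cases}
\partial_t W(t,x) + H^{\pm}\big(t, x, W, DW, D^2 W, \mathcal{B}_{t,x}W\big) = 0, & (t,x) \in [0,T)\times\mathbb{R}^n,\\[2pt]
W(T, x) = \Phi(x),
\end{cases}
\]
where the nonlocal operator \mathcal{B}_{t,x}W involves integrals of increments of W against the compensator of the Poisson random measure, for instance terms of the form \int_E \big(W(t, x + \beta(x, u, v, e)) - W(t, x)\big)\,\nu(de), and H^{-} (resp. H^{+}) is the lower (resp. upper) Hamiltonian obtained by combining an infimum over one control set with a supremum over the other.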
The authors prove a sufficient stochastic maximum principle for the optimal control of a forward-backward Markov regime-switching jump-diffusion system and show its connection to the dynamic programming principle. The result is applied to a cash flow valuation problem with a terminal wealth constraint in a financial market. An explicit optimal strategy is obtained in this example.
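As a rough sketch of what a sufficient maximum principle asserts (in generic notation, not the paper's), an admissible control u^* is optimal if it maximizes the Hamiltonian along the corresponding state and adjoint trajectories,
\[
H\big(t, X^*_t, u^*_t, p_t, q_t\big) \;=\; \max_{u \in U} H\big(t, X^*_t, u, p_t, q_t\big) \qquad \text{for a.e. } t,\ \text{a.s.},
\]
together with suitable concavity conditions on the Hamiltonian and on the terminal cost, where (p, q) solves the associated adjoint backward equation; in the regime-switching jump-diffusion setting, the Hamiltonian and the adjoint equation also carry jump and regime components.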
This paper is concerned with stochastic differential games (SDGs) defined through fully coupled forward-backward stochastic differential equations (FBSDEs) which are governed by a Brownian motion and a Poisson random measure. The upper and the lower value functions are defined by doubly controlled fully coupled FBSDEs with jumps. Using a new transformation introduced in Buckdahn (Stoch. Process. Appl. 121:2715-2750, 2011), we prove that the upper and the lower value functions are deterministic. Then, after establishing the dynamic programming principle for the upper and the lower value functions of these SDGs, we prove that the upper and the lower value functions are the viscosity solutions to the associated upper and the lower second-order integral-partial differential equations of Isaacs' type combined with an algebraic equation, respectively. Furthermore, for a special case, under Isaacs' condition, we obtain the existence of the value of the game.
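As a schematic illustration (generic notation, not the paper's), a doubly controlled fully coupled FBSDE with jumps has the form
\[
\begin{cases}
dX_s = b(s, \Theta_s, u_s, v_s)\,ds + \sigma(s, \Theta_s, u_s, v_s)\,dB_s + \displaystyle\int_E \beta(s, \Theta_{s-}, u_s, v_s, e)\,\tilde N(ds, de),\\[2pt]
dY_s = -f(s, \Theta_s, u_s, v_s)\,ds + Z_s\,dB_s + \displaystyle\int_E K_s(e)\,\tilde N(ds, de),\qquad \Theta_s := (X_s, Y_s, Z_s, K_s),\\[2pt]
X_t = x,\qquad Y_T = \Phi(X_T),
\end{cases}
\]
where B is a Brownian motion, \tilde N a compensated Poisson random measure, and u, v are the controls of the two players; "fully coupled" means that the forward coefficients also depend on the backward components (Y, Z, K).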
We study sufficient conditions for the existence of a saddle point of a time-dependent discrete Markov zero-sum game up to a given stopping time. The stopping time is allowed to be either a finite or an infinite non-negative random variable, provided the associated objective function is well-defined. The result enables us to show the existence of saddle points of discrete games constructed by Markov chain approximation of a class of stochastic differential games. (C) 2012 Elsevier Ltd. All rights reserved.
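For reference, a pair of admissible strategies (u^*, v^*) is a saddle point for a payoff functional J (generic notation) when
\[
J(u^*, v) \;\le\; J(u^*, v^*) \;\le\; J(u, v^*) \qquad \text{for all admissible } u,\ v,
\]
in which case the game has a value equal to J(u^*, v^*); here J would be the objective function evaluated up to the given stopping time.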
In this paper, a stochastic optimal control problem is investigated in which the system is governed by a stochastic functional differential equation. In the framework of functional Itô calculus, we establish the dynamic programming principle and the related path-dependent Hamilton-Jacobi-Bellman equation. We prove that the value function is the viscosity solution of the path-dependent Hamilton-Jacobi-Bellman equation. Copyright (c) 2013 John Wiley & Sons, Ltd.
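In terms of the time derivative and the vertical derivatives of functional Itô calculus (a schematic form with assumed notation), a path-dependent Hamilton-Jacobi-Bellman equation reads
\[
\partial_t W(\gamma_t) + \sup_{u \in U}\Big\{ \tfrac{1}{2}\operatorname{tr}\big(\sigma\sigma^{\top}(\gamma_t, u)\,\partial_{xx} W(\gamma_t)\big) + \big\langle b(\gamma_t, u),\, \partial_x W(\gamma_t)\big\rangle + f(\gamma_t, u) \Big\} = 0,
\]
with terminal condition W(\gamma_T) = \Phi(\gamma_T), where \gamma_t denotes a path on [0, t] and the supremum becomes an infimum for a minimization problem.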
The finite time horizon singular linear quadratic (LQ) optimal control problem is investigated for singular stochastic discrete-time systems. The problem is transformed into a positive LQ problem for standard stochastic systems via two equivalent transformations. It is proved that the singular LQ optimal control problem is solvable under two reasonable rank conditions. Via the dynamic programming principle, the desired optimal controller is presented in a matrix iterative form. A simulation is provided to show the effectiveness of the proposed approaches. Copyright (c) 2012 John Wiley & Sons, Ltd.
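For orientation, a standard (non-singular) finite horizon stochastic discrete-time LQ problem, of the kind the singular problem is reduced to, can be sketched in generic notation as
\[
\min_{u}\; \mathbb{E}\Big[\sum_{k=0}^{N-1}\big(x_k^{\top} Q\, x_k + u_k^{\top} R\, u_k\big) + x_N^{\top} G\, x_N\Big]
\quad\text{subject to}\quad x_{k+1} = A x_k + B u_k + \big(C x_k + D u_k\big) w_k,
\]
where \{w_k\} is a sequence of independent, zero-mean random variables; the dynamic programming principle then yields the optimal feedback through a backward matrix iteration of Riccati type. Singular (descriptor) systems typically carry a singular coefficient matrix on x_{k+1}, which the two equivalent transformations remove.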
We provide a deterministic-control-based interpretation for a broad class of fully nonlinear parabolic and elliptic PDEs with continuous Neumann boundary conditions in a smooth domain. We construct families of two-person games depending on a small parameter ε which extend those proposed by Kohn and Serfaty [21]. These new games treat a Neumann boundary condition by introducing specific rules near the boundary. We show that the value function converges, in the viscosity sense, to the solution of the PDE as ε tends to zero. Moreover, our construction allows us to treat both the oblique and the mixed-type Dirichlet-Neumann boundary conditions.
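Schematically (in generic notation), the parabolic case of the problems considered takes the form
\[
\begin{cases}
\partial_t u - F\big(t, x, u, Du, D^2 u\big) = 0 & \text{in } (0,T)\times\Omega,\\[2pt]
\dfrac{\partial u}{\partial n} = g & \text{on } (0,T)\times\partial\Omega,
\end{cases}
\]
with \Omega a smooth domain and n the outward normal; the value functions u^{\varepsilon} of the ε-step games are then shown to converge, in the viscosity sense, to the solution of this boundary value problem as ε → 0, with the oblique and mixed Dirichlet-Neumann cases replacing the normal derivative condition accordingly.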