We study a robust optimal stopping problem with respect to a set $\mathcal{P}$ of mutually singular probabilities. This can be interpreted as a zero-sum controller-stopper game in which the stopper tries to maximize its payoff while an adverse player wants to minimize this payoff by choosing an evaluation criterion from $\mathcal{P}$. We show that the upper Snell envelope $\overline{Z}$ of the reward process $Y$ is a supermartingale with respect to an appropriately defined nonlinear expectation $\underline{\mathcal{E}}$, and that $\overline{Z}$ is further an $\underline{\mathcal{E}}$-martingale up to the first time $\tau^*$ when $\overline{Z}$ meets $Y$. Consequently, $\tau^*$ is the optimal stopping time for the robust optimal stopping problem and the corresponding zero-sum game has a value. Although the result seems similar to the one obtained in classical optimal stopping theory, the mutual singularity of the probabilities and the game aspect of the problem give rise to major technical hurdles, which we circumvent using some new methods.
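In notation reconstructed from the abstract (the paper's precise definitions, in particular how conditioning is handled under mutually singular measures, are more delicate), the objects involved can be sketched as

$$\underline{\mathcal{E}}\big[\xi \mid \mathcal{F}_t\big] = \operatorname*{ess\,inf}_{P \in \mathcal{P}} E_P\big[\xi \mid \mathcal{F}_t\big], \qquad \overline{Z}_t = \operatorname*{ess\,sup}_{\tau \ge t} \underline{\mathcal{E}}\big[Y_\tau \mid \mathcal{F}_t\big], \qquad \tau^* = \inf\{t \ge 0 : \overline{Z}_t = Y_t\},$$

so that the $\underline{\mathcal{E}}$-supermartingale property of $\overline{Z}$, together with the $\underline{\mathcal{E}}$-martingale property up to $\tau^*$, plays the role of the classical Snell-envelope argument.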
This article is a survey of the early development of selected areas in nonlinear continuous-time stochastic control. Key developments in optimal control and the dynamic programming principle, existence of optimal controls under complete and partial observations, nonlinear filtering, stochastic stability, the stochastic maximum principle and ergodic control are discussed. Issues concerning wide-bandwidth noise for stability, modeling, filtering and ergodic control are dealt with. The focus is on the earlier work, but many important topics are omitted for lack of space.
In this paper we study the optimal stochastic control problem for stochastic differential equations on Riemannian manifolds. The cost functional is specified by controlled backward stochastic differential equations in Euclidean space. Under some suitable assumptions, we conclude that the value function is the unique viscosity solution to the associated Hamilton-Jacobi-Bellman equation, which is a fully nonlinear parabolic partial differential equation on Riemannian manifolds.
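Schematically, and in a Euclidean-style notation that is only an assumption here (on a manifold the gradient $\nabla V$ and Hessian $\nabla^2 V$ are taken with respect to the Levi-Civita connection, and the precise form and assumptions are those of the paper), such an equation reads

$$\partial_t V(t,x) + \inf_{u \in U}\Big\{ \tfrac12 \operatorname{tr}\!\big(\sigma\sigma^{\top}(t,x,u)\,\nabla^2 V(t,x)\big) + \big\langle b(t,x,u), \nabla V(t,x)\big\rangle + f\big(t,x,V(t,x),(\nabla V(t,x))^{\top}\sigma(t,x,u),u\big)\Big\} = 0,$$

with terminal condition $V(T,x) = \Phi(x)$; the infimum becomes a supremum for a maximization problem, and the driver $f$ of the backward equation enters the Hamiltonian through the value and its gradient.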
We consider the optimal asset allocation problem in a continuous-time regime-switching market. The problem is to maximize the expected utility of the terminal wealth of a portfolio that contains an option, an underlying stock and a risk-free bond. The difficulty that arises in our setting is finding a way to represent the return of the option by the returns of the stock and the risk-free bond in an incomplete regime-switching market. To overcome this difficulty, we introduce a functional operator to generate a sequence of value functions, and then show that the optimal value function is the limit of this sequence. The explicit form of each function in the sequence can be obtained by solving an auxiliary portfolio optimization problem in a single-regime market, and the original optimal value function can then be approximated by taking the limit. Additionally, we show that the optimal value function is a solution to a dynamic programming equation, which leads to explicit forms for the optimal value function and the optimal portfolio process. Furthermore, we demonstrate that, as long as the current state of the Markov chain is given, it is still optimal for an investor in a multiple-regime market to simply allocate his/her wealth in the same way as in a single-regime market.
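The construction above, generating a sequence of value functions by repeatedly applying an operator that solves a single-regime auxiliary problem and then passing to the limit, follows the usual successive-approximation pattern. A minimal Python sketch of that generic pattern, in which apply_operator and the grid representation of the value function are placeholders rather than the paper's construction:

import numpy as np

def iterate_value_functions(apply_operator, v0, tol=1e-8, max_iter=500):
    """Successive approximation v_{n+1} = T(v_n), stopping when the sup-norm
    change falls below tol. Here apply_operator stands in for the paper's
    functional operator, which maps a candidate value function (sampled on
    a grid) to the value of an auxiliary single-regime problem."""
    v = np.asarray(v0, dtype=float)
    for n in range(max_iter):
        v_next = apply_operator(v)
        if np.max(np.abs(v_next - v)) < tol:
            return v_next, n + 1
        v = v_next
    return v, max_iter

# Toy usage with a contraction standing in for the true operator (fixed point 2):
v_star, n_iter = iterate_value_functions(lambda v: 0.5 * v + 1.0, np.zeros(4))
print(v_star, n_iter)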
In the present work, we consider two-person zero-sum stochastic differential games with a nonlinear pay-off functional which is defined through a backward stochastic differential equation. Our main objective is to study for such a game the problem of the existence of a value without the Isaacs condition. Not surprisingly, this requires a suitable concept of mixed strategies which, to the authors' best knowledge, was not known in the context of stochastic differential games. For this, we consider nonanticipative strategies with a delay defined through a partition $\pi$ of the time interval $[0, T]$. The underlying stochastic controls for both players are randomized along $\pi$ by a hazard which is independent of the governing Brownian motion, and, knowing the information available at the left time point $t_{j-1}$ of the subintervals generated by $\pi$, the controls of Players 1 and 2 are conditionally independent over $[t_{j-1}, t_j)$. It is shown that the associated lower and upper value functions $W_\pi$ and $U_\pi$ converge uniformly on compacts to a function $V$, the so-called value in mixed strategies, as the mesh of $\pi$ tends to zero. This function $V$ is characterized as the unique viscosity solution of the associated Hamilton-Jacobi-Bellman-Isaacs equation.
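A heuristic way to see why randomization helps, sketched here under standard compactness assumptions and not as the paper's argument: mixing restores a minimax identity at the level of the Hamiltonian even when Isaacs' condition fails for pure controls. Writing $h(t,x,p,u,v)$ for the pure-control Hamiltonian integrand, the expression is bilinear in the mixing measures $(\mu,\nu)$, so a minimax theorem gives

$$\sup_{\mu \in \mathcal{P}(U)}\ \inf_{\nu \in \mathcal{P}(V)} \int_U\!\!\int_V h(t,x,p,u,v)\, \nu(dv)\, \mu(du) \;=\; \inf_{\nu \in \mathcal{P}(V)}\ \sup_{\mu \in \mathcal{P}(U)} \int_U\!\!\int_V h(t,x,p,u,v)\, \nu(dv)\, \mu(du),$$

which is the identity that a value in mixed strategies ultimately rests on.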
In this paper, we study theoretical and computational aspects of risk minimization in financial market models operating in discrete time. To define the risk, we consider a class of convex risk measures defined on $L^p(P)$ in terms of shortfall risk. Under mild assumptions, namely the absence of arbitrage opportunities and the nondegeneracy of the price process, we prove the existence of an optimal strategy by performing a dynamic programming argument in a non-Markovian framework. In a Markovian framework, the shortfall risk and optimal dynamic strategies are estimated using three main tools: a Newton-Raphson Monte Carlo-based procedure, a stochastic approximation algorithm, and a Markovian quantization scheme. Finally, we illustrate our approach by considering several shortfall risk measures and portfolios inspired by energy and financial markets.
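To illustrate the stochastic approximation idea in isolation (this is a generic Robbins-Monro sketch for a shortfall-type level, not the paper's estimators; the loss function, threshold and step sizes below are placeholder choices):

import numpy as np

def shortfall_level_sa(sample_pnl, loss_fn=lambda z: np.maximum(z, 0.0) ** 2,
                       threshold=1.0, n_steps=200_000, m0=0.0, seed=0):
    """Robbins-Monro iteration for the level m solving E[loss_fn(-X - m)] = threshold,
    where X is a simulated portfolio P&L. Step sizes are chosen so that they sum to
    infinity while their squares are summable."""
    rng = np.random.default_rng(seed)
    m = m0
    for n in range(1, n_steps + 1):
        x = sample_pnl(rng)                 # one simulated P&L draw
        gamma = 1.0 / (100.0 + n)
        m += gamma * (loss_fn(-x - m) - threshold)
    return m

# Toy usage: standard normal P&L; m acts like a capital offset.
print(shortfall_level_sa(lambda rng: rng.normal(0.0, 1.0)))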
This paper investigates the relationship between the stochastic maximum principle and the dynamic programming principle for singular stochastic control problems. The state of the system under consideration is governed by a stochastic differential equation with nonlinear coefficients, allowing both classical control and singular control. We show that the necessary conditions for optimality, obtained earlier, are in fact sufficient provided some concavity conditions are fulfilled. In a second step, we prove a verification theorem and show that the solution of the adjoint equation coincides with the derivative of the value function. Finally, using these results, we solve an example explicitly.
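When the value function is smooth, the connection between the two approaches takes the familiar schematic form (stated here only as the standard heuristic, with sign conventions depending on whether the problem is posed as minimization or maximization, and not as the paper's precise statement)

$$p(t) = V_x\big(t, \hat{X}(t)\big), \qquad q(t) = V_{xx}\big(t, \hat{X}(t)\big)\,\sigma\big(t, \hat{X}(t), \hat{u}(t)\big),$$

along an optimal pair $(\hat{X}, \hat{u})$, where $(p, q)$ solves the adjoint backward equation appearing in the maximum principle.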
In this paper, we study a stochastic recursive optimal control problem with an obstacle constraint, in which the cost functional is described by the solution of a reflected backward stochastic differential equation. We establish the dynamic programming principle for this kind of optimal control problem and show that the value function is the unique viscosity solution of the obstacle problem for the corresponding Hamilton-Jacobi-Bellman equation.
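In a Markovian setting, such an obstacle problem is typically a variational inequality; a schematic form, written here as an assumption about its general shape rather than the paper's exact equation (with $h$ the obstacle, $\mathcal{L}^u$ the controlled second-order generator, and $f$ the driver of the reflected backward equation), is

$$\min\Big\{ V(t,x) - h(t,x),\; -\partial_t V(t,x) - \sup_{u \in U}\big[\mathcal{L}^u V(t,x) + f\big(t,x,V(t,x),(\nabla V(t,x))^{\top}\sigma(t,x,u),u\big)\big] \Big\} = 0, \qquad V(T,x) = \Phi(x).$$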
The aim of the paper is to provide a linearization approach to $L^\infty$-control problems. We begin by proving a semigroup-type behaviour of the set of constraints appearing in the linearized formulation of (standard) control problems. As a byproduct we obtain a linear formulation of the dynamic programming principle. Then, we use the $L^p$ approach and the associated linear formulations. This seems to be the most appropriate tool for treating $L^\infty$ problems in continuous and lower semicontinuous settings.
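The elementary fact behind the $L^p$ route (recalled here for orientation; it is not the paper's full linearized formulation) is that for a finite measure $\mu$ and a bounded measurable $g$,

$$\lim_{p \to \infty} \Big( \int |g|^p \, d\mu \Big)^{1/p} = \|g\|_{L^\infty(\mu)},$$

so linear, occupation-measure formulations developed for integral ($L^p$-type) costs can be passed to the limit to handle essential-supremum costs.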
In this paper, we investigate Nash equilibrium payoffs for nonzero-sum stochastic differential games with reflection. We obtain an existence theorem and a characterization theorem of Nash equilibrium payoffs for nonzero-sum stochastic differential games with nonlinear cost functionals defined by doubly controlled reflected backward stochastic differential equations.