This paper is concerned with the stochastic optimal control problem of jump diffusions. The relationship between the stochastic maximum principle and the dynamic programming principle is discussed. Without involving any derivatives of the value function, relations among the adjoint processes, the generalized Hamiltonian, and the value function are investigated by employing the notion of semijets invoked in defining viscosity solutions. A stochastic verification theorem is also given to verify whether a given admissible control is optimal.
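For orientation, the HJB equation tied to such a jump-diffusion control problem by the dynamic programming principle has, under standard assumptions, the following schematic form (generic one-dimensional notation, not the paper's exact statement):

```latex
% Schematic HJB equation for a controlled jump diffusion (generic notation)
\begin{aligned}
&-\partial_t V(t,x) - \sup_{u \in U} \mathcal{G}(t,x,u,V) = 0, \qquad V(T,x) = h(x),\\
&\mathcal{G}(t,x,u,V) = b(t,x,u)\,\partial_x V + \tfrac{1}{2}\sigma^2(t,x,u)\,\partial_{xx} V\\
&\qquad + \int_{\mathbb{R}} \big[\, V\big(t,\, x + c(t,x,u,z)\big) - V(t,x) - c(t,x,u,z)\,\partial_x V \,\big]\,\nu(\mathrm{d}z) + f(t,x,u).
\end{aligned}
```

Formally, the adjoint processes of the maximum principle correspond to derivatives of $V$; the point of the paper is to relate them without assuming those derivatives exist, via semijets.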
In the present paper, we study a two-player, zero-sum, deterministic differential game in which both players adopt impulse controls over an infinite-time horizon, under rather weak assumptions on the cost functions. We prove by means of the dynamic programming principle that the lower and upper value functions are continuous and are viscosity solutions of the corresponding Hamilton-Jacobi-Bellman-Isaacs (HJBI) quasi-variational inequality (QVI). We define a new HJBI-QVI for which, under a proportional-property assumption on the maximizing player's cost, the value functions are the unique viscosity solution. We then prove that the lower and upper value functions coincide.
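Schematically, an HJBI-QVI for a zero-sum impulse game couples a stationary equation with obstacle constraints built from the two players' intervention operators (generic notation with discount rate $\lambda$; the paper's exact QVI differs in detail):

```latex
% Schematic HJBI quasi-variational inequality for a zero-sum impulse game
\begin{aligned}
&\max\Big\{ \min\big\{\, \lambda V(x) - b(x)\cdot\nabla V(x) - f(x),\; V(x) - \mathcal{M}V(x) \,\big\},\; V(x) - \mathcal{N}V(x) \Big\} = 0,\\
&\mathcal{M}V(x) = \inf_{\xi}\,\big\{ V(x+\xi) + c(\xi) \big\}, \qquad
 \mathcal{N}V(x) = \sup_{\eta}\,\big\{ V(x+\eta) - \chi(\eta) \big\},
\end{aligned}
```

where $c$ and $\chi$ denote the minimizing and maximizing players' impulse costs, respectively.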
This paper is concerned with the Sobolev weak solutions of the Hamilton-Jacobi-Bellman (HJB) equations. These equations are derived from the dynamic programming principle in the study of stochastic optimal control problems. Adopting the Doob-Meyer decomposition theorem as one of the main tools, we prove that the optimal value function is the unique Sobolev weak solution of the corresponding HJB equation. In the recursive optimal control problem, the cost function is described by the solution of a backward stochastic differential equation (BSDE). This problem has a practical background in economics and finance. We prove that the value function is the unique Sobolev weak solution of the related HJB equation by virtue of the nonlinear Doob-Meyer decomposition theorem introduced in the study of BSDEs.
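In the recursive case described above, the cost functional is generated by a BSDE; schematically (generic notation, not the paper's exact formulation):

```latex
% Recursive cost functional defined through a BSDE (generic notation)
\begin{aligned}
&Y_s^{t,x} = h\big(X_T^{t,x}\big) + \int_s^T g\big(r, X_r^{t,x}, Y_r^{t,x}, Z_r^{t,x}\big)\,\mathrm{d}r
 - \int_s^T Z_r^{t,x}\,\mathrm{d}W_r,\\
&J(t,x;u) = Y_t^{t,x}, \qquad V(t,x) = \inf_{u}\, J(t,x;u),
\end{aligned}
```

where the generator $g$ encodes the recursive (e.g. stochastic differential utility) structure, and the nonlinear Doob-Meyer decomposition applies to solutions of such BSDEs.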
This paper is concerned with the singular linear quadratic (SLQ) optimal control problem for stochastic nonregular descriptor systems with time-delay. By means of some reasonable assumptions and a series of equivalent transformations, the problem is transformed into a positive linear quadratic (LQ) problem for standard stochastic systems. The dynamic programming principle is then used to establish the solvability of the original problem, and an explicit representation of the optimal controller is given in matrix iterative form. The results due to Feng et al. are generalized and improved. As an application, a numerical example is presented to demonstrate the efficiency of the proposed approach.
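The matrix iterative form mentioned above is in the spirit of a Riccati-type recursion; a generic discrete-time stochastic LQ analogue, for dynamics $x_{k+1} = A x_k + B u_k + C x_k\, w_k$ with i.i.d. noise $w_k$, reads (illustrative notation only, not the paper's formulas):

```latex
% Generic backward Riccati recursion for a discrete-time stochastic LQ problem
\begin{aligned}
P_k &= Q + A^{\top} P_{k+1} A + C^{\top} P_{k+1} C
      - \big(A^{\top} P_{k+1} B\big)\big(R + B^{\top} P_{k+1} B\big)^{-1}\big(B^{\top} P_{k+1} A\big),\\
u_k^{*} &= -\big(R + B^{\top} P_{k+1} B\big)^{-1} B^{\top} P_{k+1} A\, x_k,
\end{aligned}
```

with the cost-to-go $x_k^{\top} P_k x_k$ obtained by backward iteration from the terminal weight.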
This paper considers a framework in which an insurer determines the optimal investment strategies that maximize the expected utility of terminal wealth. We obtain the optimal investment strategies assuming that both the capital market and the insurance market are partially observable. By employing Bayesian methods and filtering theory, we first transform the optimization problem with partial information into one with complete information. We then derive an explicit expression for the optimal investment strategy by using the dynamic programming principle. In addition, we derive the optimal investment strategies with complete information in both markets, as well as with partial information in either market. Finally, we compare the optimal strategies across the different models and study the value functions numerically to illustrate our results.
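The partial-information reduction follows a standard separation pattern: the unobservable market parameter is replaced by its filter estimate and the noise by the innovation process (generic notation for a risky asset with unobservable drift $\mu$, not the paper's model):

```latex
% Separation via filtering, schematic
\begin{aligned}
&\mathrm{d}S_t = S_t\big(\mu\,\mathrm{d}t + \sigma\,\mathrm{d}W_t\big), \qquad
 \hat{\mu}_t := \mathbb{E}\big[\mu \,\big|\, \mathcal{F}_t^{S}\big],\\
&\mathrm{d}\widehat{W}_t := \frac{1}{\sigma}\Big(\frac{\mathrm{d}S_t}{S_t} - \hat{\mu}_t\,\mathrm{d}t\Big)
 \quad\Longrightarrow\quad
 \mathrm{d}S_t = S_t\big(\hat{\mu}_t\,\mathrm{d}t + \sigma\,\mathrm{d}\widehat{W}_t\big),
\end{aligned}
```

after which the dynamic programming principle can be applied to the fully observed pair (wealth, $\hat{\mu}$).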
We analyze an optimal stopping problem $\sup_{\gamma \in \mathcal{T}} \bar{\mathcal{E}}_0[Y_{\gamma \wedge \tau_0}]$ with random maturity $\tau_0$ under a nonlinear expectation $\bar{\mathcal{E}}_0[\cdot] := \sup_{P \in \mathcal{P}} E_P[\cdot]$, where $\mathcal{P}$ is a weakly compact set of mutually singular probabilities. The maturity $\tau_0$ is specified as the hitting time to level $0$ of some continuous index process $X$, at which the payoff process $Y$ is even allowed to have a positive jump. When $\mathcal{P}$ collects a variety of semimartingale measures, the optimal stopping problem can be viewed as a discretionary stopping problem for a player who can influence both drift and volatility of the dynamics of the underlying stochastic flow. We utilize a martingale approach to construct an optimal pair $(P_*, \gamma_*)$ for $\sup_{(P,\gamma) \in \mathcal{P} \times \mathcal{T}} E_P[Y_{\gamma \wedge \tau_0}]$, in which $\gamma_*$ is the first time $Y$ meets the limit $L$ of its approximating $\bar{\mathcal{E}}$-Snell envelopes. To overcome the technical subtleties caused by the mutual singularity of the probabilities in $\mathcal{P}$ and the discontinuity of the payoff process $Y$, we approximate $\tau_0$ by an increasing sequence of Lipschitz continuous stopping times and approximate $Y$ by a sequence of uniformly continuous processes. (C) 2016 Elsevier B.V. All rights reserved.
In this paper, we study a stochastic recursive optimal control problem in which the objective functional is described by the solution of a backward stochastic differential equation driven by G-Brownian motion. Under standard assumptions, we establish the dynamic programming principle and the related Hamilton-Jacobi-Bellman (HJB) equation in the framework of G-expectation. Finally, we show that the value function is the viscosity solution of the obtained HJB equation.
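Under G-expectation, the HJB equation becomes fully nonlinear through the sublinear function $G$ that characterizes the G-Brownian motion; schematically (generic notation, not the paper's exact statement):

```latex
% Schematic HJB equation in the G-expectation framework
\begin{aligned}
&\partial_t W(t,x) + \sup_{u \in U} G\big(H(t,x,u,W,\partial_x W,\partial_{xx} W)\big) = 0,
 \qquad W(T,x) = h(x),\\
&G(a) = \tfrac{1}{2}\sup_{\gamma \in \Gamma} \operatorname{tr}(\gamma\, a),
\end{aligned}
```

where $H$ collects the drift, diffusion, and BSDE-generator terms, and $\Gamma$ is the set of covariance matrices defining the sublinear expectation.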
As a functional device, a dielectric elastomer balloon with surface electrodes operates under impressed pressure and voltage. The unavoidable pressure disturbance will induce prominent vibration around the equilibrium position, which may deteriorate its operating performance and accelerate its failure. This manuscript concentrates on vibration suppression by slightly adjusting the voltage under a given constraint. The displacement perturbation is governed by a nonlinear stochastic differential equation, obtained by an expansion technique at the equilibrium points. The pressure disturbance, described by ideal Gaussian white noise, acts as a parametric excitation, and the control voltage enters multiplied by a function of the displacement perturbation. The problem thus reduces to an optimal bounded parametric control problem. The optimal control is first formally determined by the extremum condition in the dynamic programming equation. The nonlinear dynamic programming equation is then approximately solved by the pseudo-inverse algorithm. The good control effectiveness and high robustness to the intensity of the pressure disturbance are verified by numerical examples. In particular, the optimal control strategy with a not-too-small bound can effectively avoid interwell motion in cases with a bistable potential.
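The extremum-condition step described above can be sketched as follows: for a bounded control $|u| \le u_b$ entering the dynamics linearly, the dynamic programming equation yields a bang-bang law (generic notation for a stochastically averaged system, not the paper's exact equations):

```latex
% Schematic dynamic programming equation and bang-bang optimal bounded control
\begin{aligned}
&\inf_{|u| \le u_b}\Big\{ \frac{\partial V}{\partial t}
 + \big[\, m(H) + u\, g(H) \,\big]\frac{\partial V}{\partial H}
 + \tfrac{1}{2}\sigma^2(H)\,\frac{\partial^2 V}{\partial H^2} + L(H) \Big\} = 0,\\
&u^{*} = -\, u_b\, \operatorname{sgn}\!\Big( g(H)\,\frac{\partial V}{\partial H} \Big),
\end{aligned}
```

where $H$ denotes an averaged energy-like variable; the sign structure of $u^{*}$ follows because the control appears linearly inside the infimum.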
In this article, we consider a two-player zero-sum stochastic differential game with regime-switching. In contrast to existing results on stochastic differential games with regime-switching, we consider a game between a Markov chain and a state process, which are two fully coupled stochastic processes. The payoff function is given by an integral over a random terminal horizon. We first study the continuity of the lower and upper value functions under some additional conditions, based on which we establish the dynamic programming principle. We further prove that the lower and upper value functions are the unique viscosity solutions of the associated lower and upper Hamilton-Jacobi-Bellman-Isaacs equations with regime-switching, respectively. These two value functions coincide under the Isaacs condition, which implies that the game admits a value. We finally apply our results to an example.
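The Isaacs condition invoked above requires the lower and upper Hamiltonians to coincide; schematically (generic notation, with $i$ indexing the regime of the Markov chain):

```latex
% Isaacs condition: the sup-inf and inf-sup Hamiltonians agree
H^{-}(x,i,p,A) := \sup_{u}\,\inf_{v}\, H(x,i,u,v,p,A)
 \;=\; \inf_{v}\,\sup_{u}\, H(x,i,u,v,p,A) =: H^{+}(x,i,p,A).
```

Equality makes the lower and upper HJBI equations identical, so their unique viscosity solutions, i.e. the two value functions, coincide.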
We study a robust optimal stopping problem with respect to a set $\mathcal{P}$ of mutually singular probabilities. This can be interpreted as a zero-sum controller-stopper game in which the stopper tries to maximize its payoff while an adverse player wants to minimize this payoff by choosing an evaluation criterion from $\mathcal{P}$. We show that the upper Snell envelope $\bar{Z}$ of the reward process $Y$ is a supermartingale with respect to an appropriately defined nonlinear expectation $\underline{\mathcal{E}}$, and that $\bar{Z}$ is further an $\underline{\mathcal{E}}$-martingale up to the first time $\tau_*$ when $\bar{Z}$ meets $Y$. Consequently, $\tau_*$ is the optimal stopping time for the robust optimal stopping problem, and the corresponding zero-sum game has a value. Although the result seems similar to the one obtained in classical optimal stopping theory, the mutual singularity of the probabilities in $\mathcal{P}$ and the game aspect of the problem give rise to major technical hurdles, which we circumvent using some new methods.
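In classical notation, the Snell-envelope characterization recalled above reads (generic notation adapted to the nonlinear expectation; $\mathcal{T}_t$ denotes stopping times no smaller than $t$):

```latex
% Upper Snell envelope and the first hitting time of the reward process
\bar{Z}_t = \operatorname*{ess\,sup}_{\gamma \in \mathcal{T}_t} \underline{\mathcal{E}}_t\big[Y_\gamma\big],
\qquad
\tau_* = \inf\big\{ t \ge 0 : \bar{Z}_t = Y_t \big\},
```

and the supermartingale-before/martingale-up-to-$\tau_*$ dichotomy is what delivers the optimality of $\tau_*$.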