This paper develops a model for the bid and ask prices of a European-type asset by formulating a stochastic control problem. The state process is governed by a modified geometric Brownian motion whose drift and diffusion coefficients depend on a Markov chain. A Girsanov theorem for Markov chains is implemented for the change of coefficients, including the diffusion coefficient, which cannot be changed by the usual Girsanov theorem for Brownian motion. The price of a European-type asset is then determined using an Esscher transform and a system of partial differential equations. A dynamic programming principle and a maximum/minimum principle associated with the stochastic control problem are then derived to model bid and ask prices. These prices are not quotes of traders or market makers but represent estimates in our model at which reasonable quantities could be traded.
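The Markov-modulated state process in this abstract can be illustrated with a short simulation. The two-regime generator, drifts, and volatilities below are hypothetical placeholders, not parameters from the paper; this is a minimal Euler sketch of a geometric Brownian motion whose coefficients switch with a two-state continuous-time Markov chain.

```python
import numpy as np

def simulate_regime_switching_gbm(s0, mu, sigma, Q, T, n_steps, seed=0):
    """Euler simulation of dS_t = mu(X_t) S_t dt + sigma(X_t) S_t dW_t,
    where X_t is a two-state continuous-time Markov chain with generator Q."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    state = 0
    for k in range(n_steps):
        # first-order approximation of the chain's transition over dt
        if rng.random() < -Q[state, state] * dt:
            state = 1 - state
        dw = rng.normal(0.0, np.sqrt(dt))
        s[k + 1] = s[k] * (1.0 + mu[state] * dt + sigma[state] * dw)
    return s

# hypothetical two-regime parameters (bull/bear style)
path = simulate_regime_switching_gbm(
    s0=100.0, mu=[0.05, -0.02], sigma=[0.15, 0.35],
    Q=np.array([[-0.5, 0.5], [1.0, -1.0]]), T=1.0, n_steps=252)
```

Regime switching is what makes the usual Brownian-motion Girsanov theorem insufficient here: a measure change must also act on the chain driving the coefficients.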
In this paper, we study two kinds of singular optimal control (SOC for short) problems in which the systems are governed by forward-backward stochastic differential equations (FBSDEs for short) and the control has two components: a regular control and a singular one. Both the drift and diffusion terms may involve the regular control variable. The regular control domain is postulated to be convex. Under certain assumptions, in the framework of Malliavin calculus, we derive pointwise second-order necessary conditions for stochastic SOC in the classical sense. This condition is described by two adjoint processes and a maximum condition on the Hamiltonian, and is supported by an illustrative example. A new necessary condition for optimal singular control is obtained as well. Besides, as a by-product, a verification theorem for SOCs is derived via viscosity solutions without involving any derivatives of the value functions. It is worth pointing out that this theorem has wider applicability than the restrictive classical verification theorems. Finally, we focus on the connection between the maximum principle and the dynamic programming principle for such SOC problems without assuming that the value function is smooth enough. (C) 2020 Elsevier Inc. All rights reserved.
We prove a new asymptotic mean value formula for the p-Laplace operator, Δ_p u = div(|∇u|^{p−2}∇u), 1 < p < ∞, valid in the viscosity sense. In the plane, and for a certain range of p, the mean value formula holds in the pointwise sense. We also study the existence, uniqueness and convergence of the related dynamic programming principle.
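For context, the well-known asymptotic mean value characterization in this literature, due to Manfredi, Parviainen and Rossi, states that a continuous function u on a domain of R^n is p-harmonic (a viscosity solution of Δ_p u = 0) if and only if, in the viscosity sense,

```latex
u(x) \;=\; \frac{\alpha}{2}\Big( \max_{\overline{B}_\varepsilon(x)} u \;+\; \min_{\overline{B}_\varepsilon(x)} u \Big)
\;+\; \frac{\beta}{|B_\varepsilon(x)|} \int_{B_\varepsilon(x)} u(y)\,dy \;+\; o(\varepsilon^2)
\qquad \text{as } \varepsilon \to 0,
```

with the weights α = (p − 2)/(p + n) and β = (n + 2)/(p + n), so that α + β = 1. The new formula announced in the abstract is presumably a variant of this expansion; the weights above are those of the classical result, not necessarily of the paper's formula.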
Recently, Hao and Li [Fully coupled forward-backward SDEs involving the value function. Nonlocal Hamilton-Jacobi-Bellman equations, ESAIM Control Optim. Calc. Var. 22 (2016) 519-538] studied a new kind of forward-backward stochastic differential equations (FBSDEs), namely fully coupled FBSDEs involving the value function, in the case where the diffusion coefficient σ of the forward stochastic differential equation depends on the control but not on z. In our paper, we generalize their work to the case where σ depends on both the control and z, which we call general fully coupled FBSDEs involving the value function. The existence and uniqueness theorem for this kind of equation is proved under suitable assumptions. After obtaining the dynamic programming principle for the value function W, we prove that W is the minimum viscosity solution of the related nonlocal Hamilton-Jacobi-Bellman equation combined with an algebraic equation.
We consider the problem of optimally stopping a continuous-time process with a stopping time satisfying a given expectation cost constraint. We show, by introducing a new state variable, that one can transform the problem into an unconstrained control problem and hence obtain a dynamic programming principle. We characterize the value function in terms of the dynamic programming equation, which turns out to be an elliptic, fully non-linear partial differential equation of second order. We prove a classical verification theorem and illustrate its applicability with several examples.
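The dynamic programming principle behind such stopping problems is easiest to see in a discrete approximation: at each node one compares the immediate payoff with the discounted expected continuation value. The sketch below solves an unconstrained optimal stopping problem (an American put on a binomial tree); the payoff and parameters are a hypothetical illustration, not the paper's constrained model.

```python
import numpy as np

def optimal_stopping_binomial(s0, K, r, u, d, n):
    """Value of sup_tau E[e^{-r*tau} (K - S_tau)^+] on an n-step
    binomial tree, computed by backward dynamic programming."""
    q = (np.exp(r) - d) / (u - d)          # risk-neutral up-probability
    disc = np.exp(-r)                      # one-step discount factor
    # terminal stock prices and payoffs
    s = s0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    v = np.maximum(K - s, 0.0)
    for step in range(n - 1, -1, -1):
        s = s0 * u ** np.arange(step, -1, -1) * d ** np.arange(0, step + 1)
        cont = disc * (q * v[:-1] + (1 - q) * v[1:])
        v = np.maximum(K - s, cont)        # DPP: stop vs. continue
    return v[0]

price = optimal_stopping_binomial(s0=100.0, K=100.0, r=0.01, u=1.1, d=1/1.1, n=50)
```

The paper's contribution is that adding the expectation constraint on the stopping time destroys this simple recursion, and an auxiliary state variable must be introduced to restore it.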
In this paper, we study the existence and uniqueness of viscosity solutions to a kind of Hamilton-Jacobi-Bellman (HJB) equation combined with algebraic equations. This HJB equation is related to a stochastic optimal control problem for which the state equation is described by a fully coupled forward-backward stochastic differential equation (FBSDE). By extending Peng's backward semigroup approach to this problem, we obtain the dynamic programming principle and show that the value function is a viscosity solution to this HJB equation. As for the uniqueness of the viscosity solution, the analysis method of Barles, Buckdahn, and Pardoux [Stochastics, 60 (1997), pp. 57-83] does not generally work in this fully coupled case. With the help of the uniqueness of the solution to FBSDEs, we propose a novel probabilistic approach to study the uniqueness of the solution to this HJB equation. We show that the value function is the minimum viscosity solution to this HJB equation. In particular, when the coefficients are independent of the control variable or the solution is smooth, the value function is the unique viscosity solution.
Author: Khlopin, D. V.
Russian Acad Sci, Krasovskii Inst Math & Mech, 16 S Kovalevskaja St, Ekaterinburg 620990, Russia
Ural Fed Univ, Inst Math & Comp Sci, Chair Appl Math & Mech, 4 Turgeneva St, Ekaterinburg 620083, Russia
We consider general n-player nonzero-sum dynamic games, a class broader than differential games that accommodates both discrete and continuous time. Assuming common dynamics, we study the long run average family and the discounting average family of the running costs. For each of these game families, we investigate asymptotic properties of its Nash equilibria. We analyze asymptotic Nash equilibria: strategy profiles that are approximately optimal as the planning horizon tends to infinity in long run average games and as the discount tends to zero in discounting games. Moreover, we also assume that this strategy profile is stationary. Under a mild assumption on players' strategy sets, we prove a uniform Tauberian theorem for stationary asymptotic Nash equilibria. If a stationary strategy profile is an asymptotic Nash equilibrium and the corresponding Nash value functions converge uniformly for one of the families (as the discount goes to zero for discounting games, or as the planning horizon goes to infinity in long run average games), then for the other family this strategy profile is also an asymptotic Nash equilibrium, and its Nash value functions converge uniformly to the same limit. As an example of application of this theorem, we consider Sorger's model of competition between two firms.
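The Abel-type relation between the two value families can be checked numerically in the simplest one-player, finite-state case: for a Markov reward chain, the normalized discounted value (1 − β)v_β approaches the long-run average reward as the discount factor β tends to 1. The two-state chain below is a hypothetical example, not taken from the paper.

```python
import numpy as np

# hypothetical two-state Markov reward chain
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])     # transition matrix
r = np.array([1.0, 0.0])       # per-step rewards

def discounted_value(P, r, beta):
    """Solve v = r + beta * P v, i.e. v = (I - beta P)^{-1} r."""
    return np.linalg.solve(np.eye(len(r)) - beta * P, r)

def average_reward(P, r):
    """Long-run average reward: stationary distribution times rewards."""
    evals, evecs = np.linalg.eig(P.T)
    pi = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    pi = pi / pi.sum()
    return float(pi @ r)

g = average_reward(P, r)                 # long-run average value
v = discounted_value(P, r, beta=0.999)   # discounted value near beta = 1
# (1 - beta) * v is close to g in every state
```

The theorem in the abstract is the harder Tauberian direction for games: convergence of one family forces convergence of the other, uniformly and to the same limit.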
The paper develops a generalized model of coupled dynamics for addressing the choice-theoretic problems of economics and other behavioural sciences. The model extends the framework of coupled systems to include an external sector that generates richer dynamics. The model is applied to explain foreign capital inflow from developed to developing countries. Under certain regularity conditions, the existence of solutions to the dynamic choice problem is proved; the solutions are then obtained by numerical techniques because of the non-linearity of the related functions. Robustness results are achieved via additional simulations that perturb the baseline parameter values. (C) 2019 Elsevier B.V. All rights reserved.
Soft robots incorporate soft actuators and possess the capacity for large deformation and environmental compatibility. Weak environmental disturbances may deteriorate the operating performance of soft robots due to the low stiffness of soft actuators. This manuscript investigates multi-degree-of-freedom nonlinear mechanical systems with dielectric elastomer actuators and establishes bounded/unbounded optimal control strategies to suppress random vibration around the equilibrium position by adjusting the imposed voltage in real time. First, the constitutive relation of a planar dielectric elastomer actuator and the vibration equation of the multi-degree-of-freedom nonlinear system around the equilibrium position are derived successively. The bounded/unbounded optimal control problems are then established by adopting the corresponding performance indexes. The bounded/unbounded optimal control strategies are derived by combining the stochastic averaging technique with the stochastic dynamic programming principle. Numerical results for a two-degree-of-freedom nonlinear system illustrate the application and efficacy of the proposed optimal control strategies.
We study a stochastic optimal control problem for a partially observed diffusion. Using the control randomization method of Bandini et al. (2018), we prove a corresponding randomized dynamic programming principle (DPP) for the value function, which is obtained from a flow property of an associated filter process. This DPP is the key step towards our main result: a characterization of the value function of the partial observation control problem as the unique viscosity solution to the corresponding dynamic programming Hamilton-Jacobi-Bellman (HJB) equation. The latter is formulated as a new, fully nonlinear partial differential equation on the Wasserstein space of probability measures. An important feature of our approach is that it does not require any non-degeneracy condition on the diffusion coefficient, and no condition is imposed to guarantee existence of a density for the filter process solving the controlled Zakai equation. Finally, we give an explicit solution to our HJB equation in the case of a partially observed non-Gaussian linear-quadratic model. (C) 2018 Elsevier B.V. All rights reserved.