Pension schemes all over the world are under increasing pressure to efficiently hedge the longevity risk imposed by ageing populations. In this work, we study an optimal investment problem for a defined contribution pension scheme that decides to hedge longevity risk using a mortality-linked security, typically a longevity bond. The pension scheme promises a minimum guarantee which allows the members to purchase lifetime annuities upon retirement. The scheme manager invests in the risky and riskless assets available on the market, including the longevity bond. We transform the corresponding constrained optimal investment problem into a single investment portfolio optimization problem by replicating future contributions from members and the minimum guarantee provided by the scheme. We solve the resulting optimization problem using the dynamic programming principle. Through a series of numerical studies, we show that longevity risk has a significant impact on the performance of the investment strategy. Our results add to the growing evidence supporting the use of mortality-linked securities for efficient hedging of longevity risk.
We consider the optimal control problem for stochastic differential equations (SDEs) with random coefficients under the recursive-type objective functional captured by the backward SDE (BSDE). Due to the random coefficients, the associated Hamilton-Jacobi-Bellman (HJB) equation is a class of second-order stochastic PDEs (SPDEs) driven by Brownian motion, which we call the stochastic HJB (SHJB) equation. In addition, as we adopt the recursive-type objective functional, the drift term of the SHJB equation depends on the second component of its solution. These two generalizations cause several technical intricacies, which do not appear in the existing literature. We prove the dynamic programming principle (DPP) for the value function, for which, unlike the existing literature, we have to use the backward semigroup associated with the recursive-type objective functional. By the DPP, we are able to show the continuity of the value function. Using the Itô-Kunita formula, we prove the verification theorem, which constitutes a sufficient condition for optimality and characterizes the value function, provided that the smooth (classical) solution of the SHJB equation exists. In general, the smooth solution of the SHJB equation may not exist. Hence, we study the existence and uniqueness of the solution to the SHJB equation under two different weak solution concepts. First, we show, under appropriate assumptions, the existence and uniqueness of the weak solution via the Sobolev space technique, which requires converting the SHJB equation to a class of backward stochastic evolution equations. The second result is obtained under the notion of viscosity solutions, which is an extension of the classical one to the case of SPDEs. Using the DPP and the estimates of BSDEs, we prove that the value function is the viscosity solution to the SHJB equation. For applications, we consider the linear-quadratic problem, the utility maximization problem, and the European option pricing problem.
We consider a two-player zero-sum game in a bounded open domain $\Omega$ described as follows: at a point $x \in \Omega$, Players I and II play an $\varepsilon$-step tug-of-war game with probability $\alpha$, and with probability $\beta$ ($\alpha + \beta = 1$) a random point in the ball of radius $\varepsilon$ centered at $x$ is chosen. Once the game position reaches the boundary, Player II pays Player I the amount given by a fixed payoff function $F$. We give a detailed proof of the fact that the value functions of this game satisfy the dynamic programming principle
$$u(x) = \frac{\alpha}{2}\Big\{\sup_{y \in \bar{B}_\varepsilon(x)} u(y) + \inf_{y \in \bar{B}_\varepsilon(x)} u(y)\Big\} + \frac{\beta}{|B_\varepsilon(x)|}\int_{B_\varepsilon(x)} u(y)\,dy$$
for $x \in \Omega$, with $u(y) = F(y)$ when $y \notin \Omega$. This principle implies the existence of quasi-optimal Markovian strategies.
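As a quick illustration of how a principle of this form can be used computationally, the sketch below iterates the right-hand side as a fixed-point map on a one-dimensional interval. The grid, the payoff function, and the choice $\alpha = \beta = 1/2$ are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal 1D sketch of the tug-of-war-with-noise DPP on Omega = (0, 1):
# interior values are repeatedly replaced by
#   alpha/2 * (max + min over the eps-ball) + beta * (average over the eps-ball).
# Grid, payoff, and parameters are illustrative assumptions.
def dpp_fixed_point(alpha=0.5, eps=0.05, h=0.01, tol=1e-8, max_iter=50_000):
    beta = 1.0 - alpha
    x = np.arange(-eps, 1.0 + eps + h / 2, h)   # domain plus a boundary strip
    interior = (x > 0.0) & (x < 1.0)
    u = np.where(x >= 1.0, 1.0, 0.0)            # payoff F: Player II pays 1 on the right boundary
    r = int(round(eps / h))                     # eps-ball radius in grid points
    for _ in range(max_iter):
        u_new = u.copy()
        for i in np.flatnonzero(interior):
            ball = u[i - r:i + r + 1]           # values on the closed eps-ball
            u_new[i] = 0.5 * alpha * (ball.max() + ball.min()) + beta * ball.mean()
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return x, u

x, u = dpp_fixed_point()                        # u approximates the value of the game
```

The fixed point of this iteration is the discrete analogue of the value function, with the boundary strip playing the role of the stopping region where $u = F$.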
The assertions of Proposition 3.7 in our paper ``The robust superreplication problem: A dynamic approach'' [L. Carassus, J. Obłój, and J. Wiesel, SIAM J. Financial Math., 10 (2019), pp. 907--941] may fail to hold without an additional assumption, which we detail in this erratum.
In this paper, we consider an insurance company that is active in multiple dependent lines. We assume that the risk process in each line is a Cramér-Lundberg process. We use a common shock dependency structure to consider the possibility of simultaneous claims in different lines. According to a vector of reinsurance strategies, the insurer transfers some part of its risk to a reinsurance company. Our goal is to maximize our objective function (the expected discounted surplus level integrated over time) using a dynamic programming method. The optimal objective function (value function) is characterized as the unique solution of the corresponding Hamilton-Jacobi-Bellman equation with some boundary conditions. Moreover, an algorithm is proposed to numerically obtain the optimal value of the objective function, which corresponds to the optimal reinsurance strategies.
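To give a rough sense of what such a numerical scheme can look like, the sketch below runs a value iteration for a heavily simplified one-line version of the problem (no common shocks, proportional reinsurance, exponential claims). All parameters, the claim distribution, the time discretization, and the grids are illustrative assumptions; this is not the algorithm proposed in the paper.

```python
import numpy as np

# Schematic value iteration for a simplified one-line reinsurance problem:
# maximize E[ integral of e^{-delta t} * surplus dt until ruin ] over a
# proportional retention level b.  Everything below (dynamics, parameters,
# grids) is an illustrative assumption.
rng = np.random.default_rng(1)
dt, delta = 0.1, 0.05                      # time step, discount rate
lam, mean_claim = 1.0, 1.0                 # Poisson intensity, exponential claim mean
theta_i, theta_r = 0.3, 0.5                # insurer / reinsurer safety loadings
x_grid = np.linspace(0.0, 20.0, 101)       # surplus grid (ruin below 0)
b_grid = np.linspace(0.0, 1.0, 11)         # retention levels
claims = rng.exponential(mean_claim, 500)  # Monte Carlo claim sample
V = np.zeros_like(x_grid)

def premium(b):
    # premium income per unit time after ceding a fraction (1 - b) to the reinsurer
    return (1 + theta_i) * lam * mean_claim - (1 + theta_r) * lam * mean_claim * (1 - b)

for _ in range(200):                       # value iteration sweeps
    V_new = np.empty_like(V)
    for i, x in enumerate(x_grid):
        best = -np.inf
        for b in b_grid:
            drift = x + premium(b) * dt
            after_claim = drift - b * claims                   # surplus if one claim arrives
            cont_claim = np.where(after_claim >= 0.0,
                                  np.interp(after_claim, x_grid, V), 0.0).mean()
            cont_none = np.interp(drift, x_grid, V) if drift >= 0.0 else 0.0
            value = x * dt + np.exp(-delta * dt) * ((1 - lam * dt) * cont_none
                                                    + lam * dt * cont_claim)
            best = max(best, value)
        V_new[i] = best
    V = V_new
```

The maximizing retention level at each grid point then gives an approximate optimal reinsurance strategy as a function of the current surplus.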
This article is concerned with the stochastic recursive optimal control problem with mixed delay. The connection between Pontryagin's maximum principle and Bellman's dynamic programming principle is discussed. Relations among the adjoint processes and the value function, which do not involve any derivatives of the value function, are investigated by employing the notions of super- and sub-jets introduced in the definition of viscosity solutions. A stochastic verification theorem is also given to verify whether a given admissible control is really optimal.
We unify and establish equivalence between the pathwise and the quasi-sure approaches to robust modelling of financial markets in finite discrete time. In particular, we prove a fundamental theorem of asset pricing and a superhedging theorem which encompass the formulations of Bouchard and Nutz [12] and Burzoni et al. [13]. In bringing the two streams of literature together, we examine and compare their many different notions of arbitrage. We also clarify the relation between robust and classical P-specific results. Furthermore, we prove when a superhedging property with respect to the set of martingale measures supported on a set Omega of paths may be extended to a pathwise superhedging on Omega without changing the superhedging price.
We prove a new asymptotic mean value formula for the $p$-Laplace operator, $\Delta_p u = \operatorname{div}(|\nabla u|^{p-2}\nabla u)$, $1 < p < \infty$, valid in the viscosity sense. In the plane, and for a certain range of $p$, the mean value formula holds in the pointwise sense. We also study the existence, uniqueness and convergence of the related dynamic programming principle.
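For orientation (and not to be confused with the new formula proved in the paper), the classical asymptotic mean value characterization of $p$-harmonic functions states that, in the viscosity sense,
$$u(x) = \frac{\alpha}{2}\Big(\max_{\bar{B}_\varepsilon(x)} u + \min_{\bar{B}_\varepsilon(x)} u\Big) + \frac{\beta}{|B_\varepsilon(x)|}\int_{B_\varepsilon(x)} u(y)\,dy + o(\varepsilon^2) \quad \text{as } \varepsilon \to 0,$$
with $\alpha = \frac{p-2}{p+n}$ and $\beta = \frac{n+2}{p+n}$ in dimension $n$. Formulas of this type are what connect $p$-harmonic functions to the dynamic programming principle of tug-of-war games with noise.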
Multi-agent reinforcement learning (MARL), despite its popularity and empirical success, suffers from the curse of dimensionality. This paper builds the mathematical framework to approximate cooperative MARL by a mean-field control (MFC) approach and shows that the approximation error is of order $O(1/\sqrt{N})$. By establishing an appropriate form of the dynamic programming principle for both the value function and the Q function, it proposes a model-free kernel-based Q-learning algorithm (MFC-K-Q), which is shown to have a linear convergence rate for the MFC problem, the first of its kind in the MARL literature. It further establishes that the convergence rate and the sample complexity of MFC-K-Q are independent of the number of agents $N$, which provides an $O(1/\sqrt{N})$ approximation to the MARL problem with $N$ agents in the learning environment. Empirical studies for the network traffic congestion problem demonstrate that MFC-K-Q outperforms existing MARL algorithms when $N$ is large, for instance, when $N > 50$.
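The sketch below is not the kernel-based MFC-K-Q algorithm of the paper; it is only a tabular Q-learning loop on a toy one-dimensional mean-field control problem (a single congested node), meant to illustrate what learning on the lifted mean-field state looks like. The dynamics, reward, and all parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration (not MFC-K-Q itself): tabular Q-learning on the "lifted"
# mean-field control problem.  The state is the fraction of agents at a
# congested node, discretized into bins; the action is a common routing
# probability shared by all agents.
n_bins, n_actions = 21, 5
gamma, lr, eps_greedy = 0.95, 0.1, 0.1
Q = np.zeros((n_bins, n_actions))

def step(mu, a):
    """One transition of the mean-field state mu in [0, 1] under action a."""
    route_prob = a / (n_actions - 1)          # fraction routed to the congested node
    mu_next = 0.7 * mu + 0.3 * route_prob     # relaxation toward the routing choice
    reward = route_prob - 2.0 * mu_next ** 2  # throughput minus congestion cost
    return mu_next, reward

def to_bin(mu):
    return int(round(mu * (n_bins - 1)))

mu = 0.5
for t in range(200_000):
    s = to_bin(mu)
    a = rng.integers(n_actions) if rng.random() < eps_greedy else int(Q[s].argmax())
    mu_next, r = step(mu, a)
    Q[s, a] += lr * (r + gamma * Q[to_bin(mu_next)].max() - Q[s, a])  # Q-learning update
    mu = mu_next

print(Q.argmax(axis=1))   # greedy routing action per congestion level
```

In the actual MFC-K-Q algorithm the Q function is defined on the space of probability measures and approximated with a kernel regression rather than a lookup table, but the Bellman-update structure of the loop is the same.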
We study a new class of two-player, zero-sum, deterministic differential games where each player uses both continuous and impulse controls in an infinite horizon with discounted payoff. We assume that the form and cost of impulses depend on nonlinear functions and the state of the system, respectively. We use Bellman's dynamic programming principle (DPP) and the viscosity solutions approach to show, for this class of games, the existence and uniqueness of a solution to the associated Hamilton-Jacobi-Bellman-Isaacs (HJBI) partial differential equations (PDEs). Then, under Isaacs' condition, we deduce that the lower and upper value functions coincide, and we give a computational procedure with a numerical test for the game.