The finite time horizon singular linear quadratic (LQ) optimal control problem is investigated for singular stochastic discrete-time systems. The problem is transformed into a positive LQ problem for standard stochastic systems via two equivalent transformations. It is proved that the singular LQ optimal control problem is solvable under two reasonable rank conditions. Via the dynamic programming principle, the desired optimal controller is presented in matrix iterative form. A simulation is provided to show the effectiveness of the proposed approach. Copyright (c) 2012 John Wiley & Sons, Ltd.
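As a sketch of how the dynamic programming principle yields a controller in matrix iterative form, the following backward Riccati recursion solves a standard (deterministic, for illustration) finite-horizon discrete-time LQ problem; the matrices A, B, Q, R, Qf are placeholder assumptions, and the paper's singular/stochastic transformations are omitted.

```python
# A minimal sketch of the backward matrix (Riccati) iteration that dynamic
# programming yields for a finite-horizon discrete-time LQ problem
# x_{k+1} = A x_k + B u_k, cost sum(x'Qx + u'Ru) + x_N' Qf x_N.
# All matrices below are illustrative assumptions, not taken from the paper.
import numpy as np

def lq_riccati_recursion(A, B, Q, R, Qf, N):
    """Return feedback gains K_0..K_{N-1} so that u_k = -K_k x_k is optimal."""
    P = Qf.copy()
    gains = [None] * N
    for k in reversed(range(N)):
        # K_k = (R + B'PB)^{-1} B'PA
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        gains[k] = K
        # Riccati update: P = Q + A'P(A - BK)
        P = Q + A.T @ P @ (A - B @ K)
    return gains

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    Q = np.eye(2); R = np.array([[0.5]]); Qf = 10 * np.eye(2)
    K = lq_riccati_recursion(A, B, Q, R, Qf, N=20)
    print("first-stage gain:", K[0])
```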
We provide a deterministic-control-based interpretation for a broad class of fully nonlinear parabolic and elliptic PDEs with continuous Neumann boundary conditions in a smooth domain. We construct families of two-person games depending on a small parameter ε which extend those proposed by Kohn and Serfaty [21]. These new games treat a Neumann boundary condition by introducing specific rules near the boundary. We show that the value function converges, in the viscosity sense, to the solution of the PDE as ε tends to zero. Moreover, our construction allows us to treat both oblique and mixed Dirichlet-Neumann boundary conditions.
We give a proof of asymptotic Lipschitz continuity of p-harmonious functions, which are tug-of-war game analogues of ordinary p-harmonic functions. This result is used to obtain a new proof of Lipschitz continuity and Harnack's inequality for p-harmonic functions in the case p > 2. The proof avoids classical techniques such as Moser iteration and instead relies on suitable choices of strategies for the stochastic tug-of-war game.
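For illustration, a p-harmonious function satisfies a mean-value-type dynamic programming principle, u_ε(x) = (α/2)(sup over the ε-ball + inf over the ε-ball) + β · (average over the ε-ball), with α = (p−2)/(p+n) and β = (n+2)/(p+n). The sketch below iterates this principle on a one-dimensional grid; the grid, the discrete ε-ball, and the boundary data are assumptions for illustration, not the paper's setting.

```python
# A minimal sketch of the p-harmonious dynamic programming principle on a 1-D
# grid: u(x) = (alpha/2)(max over the eps-ball + min over the eps-ball)
#             + beta * (average over the eps-ball),
# with alpha = (p-2)/(p+n), beta = (n+2)/(p+n).  Grid, boundary values and the
# discrete eps-ball are illustrative assumptions, not the paper's setup.
import numpy as np

def p_harmonious_1d(p, n_pts=101, radius=3, n_iter=5000):
    n_dim = 1
    alpha = (p - 2) / (p + n_dim)
    beta = (n_dim + 2) / (p + n_dim)
    u = np.zeros(n_pts)
    u[0], u[-1] = 0.0, 1.0          # assumed Dirichlet boundary data
    for _ in range(n_iter):
        new = u.copy()
        for i in range(1, n_pts - 1):
            lo, hi = max(0, i - radius), min(n_pts, i + radius + 1)
            ball = u[lo:hi]
            new[i] = 0.5 * alpha * (ball.max() + ball.min()) + beta * ball.mean()
        u = new
    return u

if __name__ == "__main__":
    u = p_harmonious_1d(p=4.0)
    print(u[::20])   # approximately increasing profile from 0 to 1
```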
We develop a method for solving stochastic control problems under one-dimensional Lévy processes. The method is based on the dynamic programming principle and a Fourier cosine expansion method. Local errors in the vicinity of the domain boundaries may disrupt the algorithm. For efficient computation of matrix-vector products with Hankel and Toeplitz structures, we use a fast Fourier transform algorithm. An extensive error analysis provides new insights, based on which we develop an extrapolation method to deal with the propagation of local errors. Copyright (c) 2013 John Wiley & Sons, Ltd.
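A standard way to realize the fast matrix-vector products mentioned above is to embed the Toeplitz matrix in a circulant one and apply the FFT; the sketch below illustrates this O(n log n) multiplication on an arbitrary test matrix (the specific matrix is an assumption, not one arising from the cosine expansion).

```python
# A minimal sketch of the FFT-based Toeplitz matrix-vector product: the Toeplitz
# matrix is embedded in a circulant matrix of twice the size, whose action is a
# circular convolution computable with the FFT in O(n log n).
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply the Toeplitz matrix T (given by its first column/row) by x."""
    n = len(x)
    # First column of the circulant embedding of T (size 2n).
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    z = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(z))
    return y[:n].real

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 6
    col, row = rng.standard_normal(n), rng.standard_normal(n)
    row[0] = col[0]
    x = rng.standard_normal(n)
    T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                  for i in range(n)])
    print(np.allclose(T @ x, toeplitz_matvec(col, row, x)))   # True
```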
We consider the optimal dividend distribution problem of a financial corporation whose surplus is modeled by a general diffusion process, with both the drift and diffusion coefficients depending on the external economic regime as well as on the surplus itself through general functions. The aim is to find a dividend payout scheme that maximizes the present value of the total dividends paid until ruin. We show that, depending on the configuration of the model parameters, there are two mutually exclusive scenarios: (i) the optimal strategy exists uniquely and corresponds to paying out all surplus in excess of a critical level (barrier) that depends on the economic regime and paying nothing when the surplus is below that level; (ii) no optimal strategy exists. (C) 2013 Elsevier B.V. All rights reserved.
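To illustrate scenario (i), the sketch below simulates a surplus diffusion under a barrier strategy that immediately pays out any excess above a fixed level b and discounts the dividends until ruin; the drift, volatility, barrier level, and discount rate are placeholder assumptions (in the paper the barrier depends on the economic regime).

```python
# A minimal sketch of a barrier-type dividend strategy: whenever the surplus
# exceeds the barrier b, the excess is paid out immediately, and dividends are
# discounted until ruin.  All parameters are illustrative assumptions.
import numpy as np

def simulate_barrier_dividends(x0, b, mu, sigma, r=0.05, dt=1e-3, T=50.0, seed=0):
    rng = np.random.default_rng(seed)
    x, t, pv_dividends = x0, 0.0, 0.0
    while t < T:
        # Euler-Maruyama step for the surplus diffusion dX = mu(X)dt + sigma(X)dW
        x += mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x <= 0.0:            # ruin: no further dividends
            break
        if x > b:               # pay out the excess above the barrier
            pv_dividends += np.exp(-r * t) * (x - b)
            x = b
    return pv_dividends

if __name__ == "__main__":
    vals = [simulate_barrier_dividends(1.0, b=1.5,
                                       mu=lambda x: 0.3,
                                       sigma=lambda x: 0.5,
                                       seed=s) for s in range(200)]
    print("estimated value of the barrier strategy:", np.mean(vals))
```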
In the present paper we investigate the existence of a value for differential games without the Isaacs condition. For this we introduce a suitable concept of mixed strategies along a partition of the time interval, which are associated with classical nonanticipative strategies (with delay). Imposing a conditional independence property on the underlying controls of both players, we obtain the existence of the value in mixed strategies as the limit of both the lower and the upper value functions along a sequence of partitions whose mesh tends to zero. Moreover, we characterize this value in mixed strategies as the unique viscosity solution of the corresponding Hamilton-Jacobi-Isaacs equation.
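For orientation, the Isaacs condition referred to above is the minimax equality between the two candidate Hamiltonians built from the dynamics f and running cost ℓ (notation assumed here for illustration); when it fails, the upper and lower values in pure strategies need not coincide, which is what motivates the mixed strategies.

```latex
% The Isaacs condition (notation assumed for illustration): the two candidate
% Hamiltonians obtained by interchanging inf and sup coincide.  When this
% minimax equality fails, the pure-strategy upper and lower values of the game
% need not agree.
\[
  \inf_{u \in U}\,\sup_{v \in V}\,
    \bigl\{\,\langle f(t,x,u,v),\,p\rangle + \ell(t,x,u,v)\,\bigr\}
  \;=\;
  \sup_{v \in V}\,\inf_{u \in U}\,
    \bigl\{\,\langle f(t,x,u,v),\,p\rangle + \ell(t,x,u,v)\,\bigr\}
  \quad\text{for all } (t,x,p).
\]
```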
In this paper we concentrate on testing for multiple changes in the mean of a series of independent random variables. The suggested method applies a maximum-type test statistic. Our primary focus is on an effective calculation of critical values for very large sample sizes comprising (tens of) thousands of observations and a moderate to large number of segments. To that end, Monte Carlo simulations and a modified Bellman's principle of optimality are used. It is shown that, indisputably, computer memory becomes a critical bottleneck in solving a problem of such a size; thus, minimizing the memory requirements and ordering the calculations appropriately are the keys to success. In addition, a formula is presented that yields approximate asymptotic critical values based on the theory of the exceedance probability of Gaussian fields over a high level.
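The Bellman-type recursion at the core of such computations can be sketched as follows: it finds the minimal within-segment sum of squares over all partitions of the series into K segments, keeping only one O(n) row of the dynamic programming table in memory at a time. The cost function and the toy data are illustrative assumptions, not the paper's exact statistic.

```python
# A minimal sketch of the Bellman recursion for segmenting a series into K
# homogeneous segments.  Memory stays at O(n) per segment count by retaining
# only the previous DP row; prefix sums give O(1) segment costs.
import numpy as np

def best_segmentation_cost(x, K):
    """Minimal within-segment sum of squares over all partitions into K segments."""
    n = len(x)
    s1 = np.concatenate([[0.0], np.cumsum(x)])        # prefix sums
    s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def seg_cost(i, j):                               # cost of segment x[i:j]
        m = j - i
        return s2[j] - s2[i] - (s1[j] - s1[i]) ** 2 / m

    prev = np.array([seg_cost(0, j) for j in range(1, n + 1)])   # one segment
    for k in range(2, K + 1):
        cur = np.full(n, np.inf)
        for j in range(k, n + 1):                     # Bellman recursion
            cur[j - 1] = min(prev[i - 1] + seg_cost(i, j) for i in range(k - 1, j))
        prev = cur
    return prev[-1]

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0, 1, 200), rng.normal(2, 1, 200)])
    print(best_segmentation_cost(x, K=1), best_segmentation_cost(x, K=2))
```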
We study the stochastic control problem of maximizing expected utility from terminal wealth under a non-bankruptcy constraint. The agent's problem is to derive the optimal insurance strategy that reduces his exposure to risk. This optimization problem is related to a suitable dual stochastic control problem in which the delicate boundary constraints disappear. We characterize the dual value function as the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman variational inequality (HJBVI for short). We characterize the optimal insurance strategy through the solution of the variational inequality, which we solve numerically using an algorithm based on policy iteration.
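As a sketch of the policy-iteration idea on a discrete analogue (Howard's algorithm for a discounted discrete-state control problem, rather than the paper's discretized variational inequality), one alternates policy evaluation, a linear solve, with pointwise policy improvement; the transition kernels, costs, and discount factor below are placeholder assumptions.

```python
# A minimal sketch of Howard's policy iteration on a discrete-state,
# discrete-control problem: alternate between solving the fixed-policy Bellman
# equation (a linear system) and a pointwise policy-improvement step.
import numpy as np

def policy_iteration(P, c, gamma=0.95, n_iter=100):
    """P: (n_controls, n_states, n_states) transition kernels; c: (n_controls, n_states) costs."""
    n_a, n_s, _ = P.shape
    policy = np.zeros(n_s, dtype=int)
    for _ in range(n_iter):
        # Policy evaluation: solve (I - gamma P_pi) v = c_pi
        P_pi = P[policy, np.arange(n_s), :]
        c_pi = c[policy, np.arange(n_s)]
        v = np.linalg.solve(np.eye(n_s) - gamma * P_pi, c_pi)
        # Policy improvement: pointwise minimisation over controls
        q = c + gamma * P @ v                   # shape (n_controls, n_states)
        new_policy = q.argmin(axis=0)
        if np.array_equal(new_policy, policy):
            break
        policy = new_policy
    return v, policy

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    P = rng.random((3, 5, 5)); P /= P.sum(axis=2, keepdims=True)
    c = rng.random((3, 5))
    v, pol = policy_iteration(P, c)
    print("value:", v, "\npolicy:", pol)
```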
This work is devoted to the study of a class of Hamilton-Jacobi-Bellman equations associated with an optimal control problem in which the state equation is a stochastic differential inclusion with a maximal monotone operator. We show that the value function minimizing a Bolza-type cost functional is a viscosity solution of the HJB equation. The proof is based on perturbing the initial problem by approximating the unbounded operator. Finally, by providing a comparison principle, we show that the solution of the equation is unique. (C) 2012 Elsevier Ltd. All rights reserved.
We study sufficient conditions for the existence of a saddle point of a time-dependent discrete Markov zero-sum game up to a given stopping time. The stopping time is allowed to be either a finite or an infinite non-negative random variable, provided the associated objective function is well-defined. The result enables us to show the existence of saddle points for discrete games constructed by Markov chain approximation of a class of stochastic differential games. (C) 2012 Elsevier Ltd. All rights reserved.
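A discrete counterpart of the saddle-point construction is Shapley's value iteration: at each state, the stage game formed by the immediate payoff plus discounted continuation values is solved as a matrix game by linear programming. The payoffs, transition kernel, and discount factor below are placeholder assumptions, and the discounted setting stands in for the stopping-time formulation of the paper.

```python
# A minimal sketch of Shapley's value iteration for a discrete zero-sum Markov
# game: each stage game is solved as a matrix game via linear programming.
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game with payoff A (row player maximises)."""
    m, n = A.shape
    # Variables: mixed strategy p (m entries) and game value v; maximise v.
    c = np.concatenate([np.zeros(m), [-1.0]])
    A_ub = np.hstack([-A.T, np.ones((n, 1))])          # v <= p' A e_j for all j
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * m + [(None, None)])
    return -res.fun

def shapley_iteration(payoff, P, gamma=0.9, n_iter=200):
    """payoff: (S, m, n) stage payoffs; P: (S, m, n, S) transition kernel."""
    S = payoff.shape[0]
    V = np.zeros(S)
    for _ in range(n_iter):
        V = np.array([matrix_game_value(payoff[s] + gamma * P[s] @ V)
                      for s in range(S)])
    return V

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    S, m, n = 3, 2, 2
    payoff = rng.standard_normal((S, m, n))
    P = rng.random((S, m, n, S)); P /= P.sum(axis=3, keepdims=True)
    print("state values:", shapley_iteration(payoff, P))
```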