This paper proposes a multi-quantile approach for solving open-loop, continuous-variable, discrete-time stochastic dynamic programming problems in systems with non-standard probability distributions. Instead of building the optimization criterion on the expected value of the objective function, the decision maker selects the decision variables based on quantiles of the objective function value. The proposed procedure relies on Monte Carlo simulation of the unknown process inputs, combined with an open-loop multiobjective optimization. The optimal control then follows from a trade-off analysis that weighs, for instance, the risk associated with each policy against its yield.
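To make the procedure concrete, the following Python sketch applies the same idea to a toy problem; the decision variable, the payoff function, and the lognormal input noise are hypothetical placeholders rather than anything taken from the paper. Each candidate open-loop decision is scored by Monte Carlo quantiles of its objective value, which is the raw material for the risk-versus-yield trade-off analysis.

import numpy as np

def quantile_profile(objective, decisions, sample_noise, n_scenarios=10000,
                     quantiles=(0.05, 0.5, 0.95), seed=0):
    """For each candidate open-loop decision, simulate the uncertain inputs and
    summarize the objective by quantiles instead of its expected value."""
    rng = np.random.default_rng(seed)
    profiles = {}
    for u in decisions:
        w = sample_noise(rng, n_scenarios)                 # Monte Carlo input scenarios
        values = np.array([objective(u, wi) for wi in w])  # objective value per scenario
        profiles[float(u)] = np.quantile(values, quantiles)
    return profiles

# Toy usage: concave payoff with multiplicative, non-Gaussian (lognormal) noise.
profiles = quantile_profile(
    objective=lambda u, w: u * w - 0.5 * u ** 2,           # hypothetical payoff
    decisions=np.linspace(0.0, 2.0, 5),
    sample_noise=lambda rng, n: rng.lognormal(mean=0.0, sigma=0.6, size=n),
)
for u, q in profiles.items():
    print(f"u={u:.2f}  q05={q[0]:+.3f}  median={q[1]:+.3f}  q95={q[2]:+.3f}")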
A production system consisting of two parallel machines with production-dependent failure rates is investigated in this paper. The machines produce one type of final product and unmet demand is backlogged. The objective of the system is to find a productivity policy for both machines that will minimize the inventory and shortage costs over an infinite horizon. The failure rate of the main machine depends on its productivity, while the failure rate of the second machine is constant. In the proposed model, the main machine is characterized by a higher productivity. This paper proposes a stochastic dynamic programming formulation of the problem and derives the optimal policies numerically. A numerical example is included and sensitivity analyses with respect to the system parameters are examined to illustrate the importance and effectiveness of the proposed methodology. (C) 2014 Elsevier B.V. All rights reserved.
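The paper derives its optimal productivity policies numerically; the Python sketch below only illustrates the flavor of such a computation, for a single machine with made-up demand, costs, discount factor, and a failure probability that increases with the chosen production rate. It is a discretized, discounted stand-in, not the authors' two-machine formulation.

import numpy as np

X = np.arange(-20, 21)                      # inventory (positive) / backlog (negative) levels
rates = [0.0, 1.0, 2.0]                     # candidate production rates
fail_p = {0.0: 0.0, 1.0: 0.05, 2.0: 0.15}   # failure probability grows with the chosen rate
demand, repair_p, beta = 1.0, 0.3, 0.95     # per-period demand, repair probability, discount
cost = lambda x: 1.0 * max(x, 0) + 5.0 * max(-x, 0)   # holding vs. shortage cost
idx = lambda x: int(np.clip(x, X[0], X[-1])) - X[0]   # clamp a level and map it to an index

V = np.zeros((len(X), 2))                   # V[inventory index, machine state: 0=down, 1=up]
for _ in range(500):                        # value iteration until (approximately) converged
    V_new = np.empty_like(V)
    for i, x in enumerate(X):
        nx = idx(x - demand)                # machine down: no production, possible repair
        V_new[i, 0] = cost(x) + beta * (repair_p * V[nx, 1] + (1 - repair_p) * V[nx, 0])
        V_new[i, 1] = min(                  # machine up: trade shortage cost against failure risk
            cost(x) + beta * ((1 - fail_p[u]) * V[idx(x + u - demand), 1]
                              + fail_p[u] * V[idx(x + u - demand), 0])
            for u in rates)
    V = V_new
print("cost-to-go at zero inventory with the machine up:", round(float(V[idx(0), 1]), 2))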
We consider a single-product make-to-stock manufacturing-remanufacturing system. Returned products require remanufacturing before they can be sold. The manufacturing and remanufacturing operations are executed by the same single server, where switching from one activity to another does not involve time or cost and can be done at an arbitrary moment in time. Customer demand can be fulfilled by either newly manufactured or remanufactured products. The times for manufacturing and remanufacturing a product are exponentially distributed. Demand and used products arrive via mutually independent Poisson processes. Disposal of products is not allowed and all used products that are returned have to be accepted. Using Markov decision processes, we investigate the optimal manufacture-remanufacture policy that minimizes holding, backorder, manufacturing and remanufacturing costs per unit of time over an infinite horizon. For a subset of system parameter values we are able to completely characterize the optimal continuous-review dynamic preemptive policy. We provide an efficient algorithm based on quasi-birth-death processes to compute the optimal policy parameter values. For other sets of system parameter values, we present some structural properties and insights related to the optimal policy and the performance of some simple threshold policies. (C) 2013 Elsevier B.V. All rights reserved.
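Since the abstract compares simple threshold policies against the optimal one, here is a simulation sketch of one such hypothetical preemptive rule (remanufacture whenever returns are on hand and serviceable stock is below a switch level R, otherwise manufacture up to a base-stock level S); all rates, costs, and thresholds are invented for illustration and are not the paper's values.

import numpy as np

lam_d, lam_r = 1.0, 0.6           # Poisson demand and return arrival rates
mu_m, mu_r = 1.5, 2.0             # exponential manufacturing / remanufacturing rates
h, b = 1.0, 10.0                  # holding and backorder cost per unit per unit time
c_m, c_r = 4.0, 1.0               # cost per manufactured / remanufactured item
S, R = 5, 3                       # hypothetical base-stock level and remanufacture switch level

def simulate(T=20000.0, seed=1):
    """Estimate the long-run average cost of the threshold rule by event-driven simulation."""
    rng = np.random.default_rng(seed)
    t, inv, ret, cost = 0.0, 0, 0, 0.0   # time, serviceable stock (may go negative), returns on hand
    while t < T:
        remfg = ret > 0 and inv < R      # preemptive single server: remanufacture first ...
        mfg = (not remfg) and inv < S    # ... else manufacture up to S, else idle
        rate = [lam_d, lam_r, mu_r if remfg else 0.0, mu_m if mfg else 0.0]
        total = sum(rate)
        dt = rng.exponential(1.0 / total)   # next event time (memoryless, so resampling is valid)
        cost += dt * (h * max(inv, 0) + b * max(-inv, 0))
        t += dt
        event = rng.choice(4, p=np.array(rate) / total)
        if event == 0:   inv -= 1                         # demand (backordered if inv < 0)
        elif event == 1: ret += 1                         # return arrives (must be accepted)
        elif event == 2: ret -= 1; inv += 1; cost += c_r  # remanufacturing completion
        else:            inv += 1; cost += c_m            # manufacturing completion
    return cost / t

print("estimated average cost per unit time:", round(simulate(), 3))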
Traditionally, inventory management models have focused on risk-neutral decision making with the objective of maximizing expected rewards or minimizing costs over a specified time horizon. However, for items marked by high demand volatility such as fashion goods and technology products, this objective needs to be balanced against the risk associated with the decision. Depending on how the product performs relative to the seller's original forecast, the seller could end up with losses due to either short or surplus supply. Unfortunately, traditional models do not address this issue. Stochastic dynamic programming models have been used extensively for sequential decision making in multi-period inventory management, but in the traditional way where one either minimizes costs or maximizes profits; risk is considered only implicitly by accounting for stock-out costs. Considering risk and reward simultaneously and explicitly in a stochastic dynamic setting is cumbersome and often difficult to implement in practice, since dynamic programming is designed to optimize over one criterion, not two. In this paper we develop an algorithm, Variance-Retentive stochastic dynamic programming, that tracks variance as well as expected reward in a stochastic dynamic programming model for inventory control. We use the mean-variance solutions in a heuristic, RiskTrackr, to construct efficient frontiers, which could be an ideal decision support tool for risk-reward analysis. (C) 2013 Elsevier B.V. All rights reserved.
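The core idea, carrying the variance of reward through the recursion alongside its expectation, can be illustrated on a toy finite-horizon inventory problem. The Python sketch below evaluates a fixed order-up-to policy (not the paper's algorithm) and propagates the first and second moments of the reward-to-go, from which the variance follows; all demand and cost figures are hypothetical.

import numpy as np

T, S = 4, 3                                           # periods and order-up-to level
demand, prob = np.array([0, 1, 2, 3]), np.array([0.2, 0.4, 0.3, 0.1])
price, c_order, c_hold = 10.0, 4.0, 1.0               # hypothetical revenue and cost figures

m1 = np.zeros(S + 1)                                  # E[reward-to-go | on-hand x]
m2 = np.zeros(S + 1)                                  # E[(reward-to-go)^2 | on-hand x]
for _ in range(T):                                    # backward recursion over the horizon
    m1_new, m2_new = np.zeros(S + 1), np.zeros(S + 1)
    for x in range(S + 1):
        q = S - x                                     # fixed order-up-to-S policy
        for d, p in zip(demand, prob):
            sales = min(S, d)                         # lost sales beyond available stock
            r = price * sales - c_order * q - c_hold * (S - sales)
            nxt = S - sales                           # leftover inventory carried forward
            m1_new[x] += p * (r + m1[nxt])
            m2_new[x] += p * (r * r + 2 * r * m1[nxt] + m2[nxt])   # E[(r + G)^2]
    m1, m2 = m1_new, m2_new

mean, var = m1[0], m2[0] - m1[0] ** 2                 # start with zero on-hand inventory
print(f"expected total reward {mean:.2f}, standard deviation {np.sqrt(var):.2f}")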
In the future, renewable energy (RE) generation systems will be constructed and interconnected to the power system by nonutility entities such as generation companies and residential customers. Therefore, generation expansion planning (GEP) should consider the effects of RE generation. The purpose of this paper is to determine the best GEP in terms of economics, supply reliability, and environmental impact while accounting for the penetration of RE generation. To this end, the paper proposes a model of a utility's GEP process that incorporates RE generation. The proposed method is based on stochastic dynamic programming (SDP), in which long-term uncertainties are modeled by geometric Brownian motion (GBM) and a binomial lattice process. The variation of RE generation output due to weather conditions (short-term uncertainty) is also captured by means of the net load curve.
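A minimal Python sketch of the lattice ingredient, with purely illustrative parameters: a GBM-driven long-term uncertainty is discretized on a recombining binomial lattice, and a backward pass computes the expectation of a hypothetical terminal payoff, which is the role the lattice plays inside the SDP recursion of a full GEP model.

import numpy as np

mu, sigma, dt, T, x0 = 0.03, 0.15, 1.0, 10, 100.0   # drift, volatility, yearly steps, horizon, start
u = np.exp(sigma * np.sqrt(dt))                     # up factor of the recombining lattice
d = 1.0 / u                                         # down factor
p = (np.exp(mu * dt) - d) / (u - d)                 # up probability matching the GBM drift

# lattice[t][k] = level of the uncertain quantity after k up-moves out of t steps
lattice = [x0 * u ** np.arange(-t, t + 1, 2) for t in range(T + 1)]

# Backward pass through the lattice; in a full GEP model this expectation would be the
# stage-wise cost-to-go of candidate expansion decisions rather than a simple payoff.
g = lambda x: np.maximum(x - 110.0, 0.0)            # hypothetical terminal payoff
V = g(lattice[T])
for t in range(T - 1, -1, -1):
    V = p * V[1:] + (1 - p) * V[:-1]                # node k at t moves to k+1 (up) or k (down)
print("lattice expectation of the terminal payoff:", round(float(V[0]), 3))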
The problem of state tracking with active observation control is considered for a system modeled by a discrete-time, finite-state Markov chain observed through conditionally Gaussian measurement vectors. The measurement model statistics are shaped by the underlying state and an exogenous control input, which influence the observations' quality. Exploiting an innovations approach, an approximate minimum mean-squared error (MMSE) filter is derived to estimate the Markov chain system state. To optimize the control strategy, the associated mean-squared error is used as an optimization criterion in a partially observable Markov decision process formulation. A stochastic dynamic programming algorithm is proposed to solve for the optimal solution. To enhance the quality of system state estimates, approximate MMSE smoothing estimators are also derived. Finally, the performance of the proposed framework is illustrated on the problem of physical activity detection in wireless body sensing networks. The power of the proposed framework lies within its ability to accommodate a broad spectrum of active classification applications, including sensor management for object classification and tracking, estimation of sparse signals, and radar scheduling.
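The filtering ingredient can be sketched compactly: a forward Bayes update for a finite-state chain observed through a Gaussian measurement whose statistics depend on the state and on the sensing control, with the posterior mean serving as an approximate MMSE estimate. All matrices and numbers below are hypothetical, and the smoothing and POMDP control layers described in the abstract are omitted.

import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                      # Markov chain transition matrix (hypothetical)
state_values = np.array([0.0, 1.0])             # numeric state labels, e.g. rest vs. activity
obs_mean = {0: np.array([0.0, 1.0]),            # control 0: cheap, noisy sensor
            1: np.array([0.0, 1.0])}            # control 1: costly, accurate sensor
obs_std = {0: np.array([1.0, 1.0]),
           1: np.array([0.3, 0.3])}

def filter_step(belief, y, u):
    """One prediction step plus a Bayes measurement update of the belief over the states."""
    pred = belief @ P                           # time update
    # Gaussian likelihood per state; the 1/sqrt(2*pi) constant cancels in the normalization.
    lik = np.exp(-0.5 * ((y - obs_mean[u]) / obs_std[u]) ** 2) / obs_std[u]
    post = pred * lik
    post /= post.sum()                          # measurement update
    return post, float(post @ state_values)     # belief and MMSE-style estimate

belief = np.array([0.5, 0.5])
for y, u in [(0.9, 1), (0.8, 1), (0.1, 0)]:     # made-up measurements and sensing controls
    belief, estimate = filter_step(belief, y, u)
    print(f"control {u}, measurement {y:+.2f} -> belief {belief.round(3)}, estimate {estimate:.3f}")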
The combination of electric vehicles and renewable energy is taking shape as a potential driver for a future free of fossil fuels. However, the efficient management of the electric vehicle fleet is not exempt from challenges. It calls for the involvement of all actors directly or indirectly related to the energy and transportation sectors, ranging from governments, automakers and transmission system operators, to the ultimate beneficiary of the change: the end-user. An electric vehicle is primarily to be used to satisfy driving needs, and accordingly charging policies must be designed primarily for this purpose. The charging models presented in the technical literature, however, overlook the stochastic nature of driving patterns. Here we introduce an efficient stochastic dynamic programming model to optimally charge an electric vehicle while accounting for the uncertainty inherent to its use. With this aim in mind, driving patterns are described by an inhomogeneous Markov model that is fitted using data collected from the utilization of an electric vehicle. We show that the randomness intrinsic to driving needs has a substantial impact on the charging strategy to be implemented. (C) 2014 Elsevier Ltd. All rights reserved.
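A compact backward-SDP sketch in the same spirit, with invented prices, consumption figures, and an hour-dependent (inhomogeneous) two-state parked/driving chain standing in for the fitted driving-pattern model; it is meant only to show how the stochastic driving state enters the charging recursion, not to reproduce the paper's model.

import numpy as np

H, CAP = 24, 10                                      # hourly stages, battery capacity in kWh steps
hours = np.arange(H)
price = 0.10 + 0.15 * (np.abs(hours - 18) < 3)       # hypothetical evening price peak (per kWh)
p_drive = 0.05 + 0.45 * (np.abs(hours - 8) < 2)      # P(driving next hour | parked), morning peak
p_park = 0.5                                         # P(parked next hour | driving)
drive_kwh, penalty = 2, 50.0                         # consumption per driven hour, empty-battery penalty

V = np.zeros((CAP + 1, 2))                           # terminal cost-to-go over (charge, 0=parked/1=driving)
for h in range(H - 1, -1, -1):                       # backward recursion over the day
    V_new = np.empty_like(V)
    for soc in range(CAP + 1):
        V_new[soc, 0] = min(                         # parked: choose 0..3 kWh of charging
            price[h] * a
            + (1 - p_drive[h]) * V[min(soc + a, CAP), 0]
            + p_drive[h] * V[min(soc + a, CAP), 1]
            for a in range(4))
        nxt = max(soc - drive_kwh, 0)                # driving: consume energy
        V_new[soc, 1] = (penalty if soc < drive_kwh else 0.0) \
            + p_park * V[nxt, 0] + (1 - p_park) * V[nxt, 1]
    V = V_new
print("expected cost starting parked with a full battery:", round(float(V[CAP, 0]), 2))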
This paper deals with the coordination of manufacturing, remanufacturing and returns acceptance control in a hybrid production-inventory system. We use a queuing control framework, where manufacturing and remanufacturing are modelled by single servers with exponentially distributed processing times. Customer demand and returned products arrive in the system according to independent Poisson processes. A returned product can be either accepted or rejected. When accepted, a return is placed in a remanufacturable product inventory. Customer demand can be satisfied by both new and remanufactured products. The following costs are included: stock keeping, backorder, manufacturing, remanufacturing, acceptance and rejection costs. We show that the optimal policy is characterized by two state-dependent base-stock thresholds for manufacturing and remanufacturing and one state-dependent return acceptance threshold. We also derive monotonicity results for these thresholds. Based on these theoretical results, we introduce several relevant heuristic control rules for manufacturing, remanufacturing and returns acceptance. In an extensive numerical study we compare these policies with the optimal policy and provide several insights. (C) 2013 Elsevier B.V. All rights reserved.
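The structure of such heuristics is easy to state in code. The sketch below fixes three hypothetical, state-independent threshold values, whereas the optimal thresholds characterized in the paper are state-dependent; it only shows how the two inventory levels map to the three control decisions.

S_MANUF, S_REMAN, R_ACCEPT = 5, 5, 4       # hypothetical base-stock levels and return-acceptance cap

def decide(serviceables: int, remanufacturables: int) -> dict:
    """Map the two inventory levels to the three control decisions of the heuristic."""
    remanufacture = remanufacturables > 0 and serviceables < S_REMAN
    return {
        "remanufacture": remanufacture,                               # use returns first when stock is low
        "manufacture": not remanufacture and serviceables < S_MANUF,  # otherwise build up to base stock
        "accept_return": remanufacturables < R_ACCEPT,                # cap the remanufacturable buffer
    }

print(decide(serviceables=-2, remanufacturables=1))   # backlog: remanufacture and keep accepting returns
print(decide(serviceables=6, remanufacturables=4))    # ample stock: idle and reject further returns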
The fundamental goal of conservation planning is biodiversity persistence, yet most reserve selection methods prioritize sites using occurrence data. Numerous empirical studies support the notion that defining and measuring objectives in terms of species richness (where the value of a site is equal to the number of species it contains, or contributes to an existing reserve network) can be inadequate for maintaining biodiversity in the long term. An existing site-assessment framework that implicitly maximizes the persistence probability of multiple species was integrated with a dynamic optimization model: the problem of sequential reserve selection was formulated as a Markov decision process and solved with stochastic dynamic programming. The approach represents a compromise between representation-based approaches (maximizing occurrences) and more complex tools such as spatially explicit population models. The method, its inherent problems, and the resulting conclusions are illustrated with a land acquisition case study on the central Platte River.
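A toy dynamic program in the same spirit (entirely invented sites, values, and loss probabilities): one site can be acquired per period, unprotected sites may be lost to development before the next period, and additive site "values" stand in for the multi-species persistence scores of the actual framework.

from functools import lru_cache

values = {"A": 4.0, "B": 3.0, "C": 2.5, "D": 1.0}   # invented per-site conservation values
p_loss, T = 0.2, 3                                  # chance an unprotected site is developed each period

@lru_cache(maxsize=None)
def V(t, protected, available):
    """Maximum expected value of the final reserve network from this state onward."""
    if t == T or not available:
        return sum(values[s] for s in protected)
    best = 0.0
    for site in available + (None,):                # acquire one available site, or acquire nothing
        prot = protected + ((site,) if site else ())
        rest = tuple(s for s in available if s != site)
        exp_val = 0.0                               # expectation over which unprotected sites survive
        for mask in range(2 ** len(rest)):
            surv = tuple(s for i, s in enumerate(rest) if mask >> i & 1)
            pr = (1 - p_loss) ** len(surv) * p_loss ** (len(rest) - len(surv))
            exp_val += pr * V(t + 1, prot, surv)
        best = max(best, exp_val)
    return best

print("optimal expected reserve value:", round(V(0, (), tuple(values)), 3))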
We present a framework for obtaining fully polynomial time approximation schemes (FPTASs) for stochastic univariate dynamic programs with either convex or monotone single-period cost functions. This framework is developed through the establishment of two sets of computational rules, namely, the calculus of K-approximation functions and the calculus of K-approximation sets. Using our framework, we provide the first FPTASs for several NP-hard problems in various fields of research such as knapsack models, logistics, operations management, economics, and mathematical finance. Extensions of our framework via the use of the newly established computational rules are also discussed.
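A hedged sketch of the central building block, the K-approximation set, for the simplest case of a nondecreasing positive function on a finite integer domain: keep a grid point whenever the function has grown by more than a factor of K, so the kept set has logarithmic size while every query is answered within a factor K. The function used is an arbitrary convex example, not one from the paper, and the construction is a simplified illustration rather than the authors' calculus.

import bisect

def k_approximation_set(phi, U, K):
    """Greedy construction of a K-approximation set for a nondecreasing positive function phi
    on {0, ..., U}: keep a grid point whenever phi has grown by more than a factor of K since
    the last kept point, so only about log_K(phi(U)/phi(0)) points are stored."""
    W = [0]
    for x in range(1, U + 1):
        if phi(x) > K * phi(W[-1]):
            W.append(x)
    if W[-1] != U:
        W.append(U)
    return W

def approx(phi, W, x):
    """The induced approximation: phi at the largest kept point <= x, which for a
    nondecreasing positive phi lies within a factor K below the true value."""
    return phi(W[bisect.bisect_right(W, x) - 1])

phi = lambda x: 1 + x * x                 # an arbitrary convex single-period cost for illustration
W = k_approximation_set(phi, U=10000, K=1.1)
print(f"kept {len(W)} of 10001 grid points")
print("true value vs. 1.1-approximation at x=777:", phi(777), approx(phi, W, 777))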