This paper introduces disjunctive decomposition for two-stage mixed 0-1 stochastic integer programs (SIPs) with random recourse. Disjunctive decomposition allows cutting planes based on disjunctive programming to be generated for each scenario subproblem under a temporal decomposition setting of the SIP problem. A new class of valid inequalities for mixed 0-1 SIP with random recourse is presented. In particular, we derive valid inequalities that allow scenario subproblems for SIP with random recourse but deterministic technology matrix and right-hand-side vector to share cut coefficients. The valid inequalities are used to derive a disjunctive decomposition method motivated by real-life stochastic server location problems with random recourse, which have many applications in operations research. Computational results with large-scale instances are reported to demonstrate the potential of the method.
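For orientation, a generic two-stage mixed 0-1 SIP with random recourse of the type considered can be written in the following standard form (textbook notation, not necessarily the notation used in the paper):

\[
\min_{x}\; c^{\top}x + \mathbb{E}_{\tilde{\omega}}\big[f(x,\tilde{\omega})\big] \quad \text{s.t. } x \in X \cap \{0,1\}^{n_1},
\]
where, for each scenario \(\omega\),
\[
f(x,\omega) = \min_{y}\big\{\, q(\omega)^{\top}y \;:\; W(\omega)\,y \ge r(\omega) - T(\omega)\,x,\;\; y \ge 0,\;\; y_j \in \{0,1\} \text{ for } j \in J \,\big\}.
\]

The shared-cut inequalities address the special case in which the recourse matrix \(W(\omega)\) is random while the technology matrix \(T\) and right-hand side \(r\) are deterministic.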
We consider the steady-state simulation output analysis problem for a process that satisfies a functional central limit theorem. We construct an estimator for the time-average variance constant that is based on excursions of a process above the minimum. The resulting estimator does not require a fixed run length, and the memory requirement can be dynamically bounded. Standardized time series methods based on excursions are also described.
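For reference, the time-average variance constant being estimated is the usual FCLT scaling constant (standard definition, not the paper's notation): for a stationary process \(\{X_i\}\) with mean \(\mu\) satisfying a functional central limit theorem,

\[
\sigma^2 \;=\; \lim_{n\to\infty} n\,\operatorname{Var}(\bar{X}_n) \;=\; \sum_{k=-\infty}^{\infty} \operatorname{Cov}(X_0, X_k),
\]

so that \(\sqrt{n}\,(\bar{X}_n - \mu)/\sigma\) converges in distribution to a standard normal random variable.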
A growing interest in security and occupant exposure to contaminants has revealed a need for fast and reliable identification of contaminant sources during incidental situations. To determine potential contaminant source positions in outdoor environments, current state-of-the-art modeling methods use computational fluid dynamics simulations on parallel processors. In indoor environments, current tools match accidental contaminant distributions with cases from precomputed databases of possible concentration distributions. These methods require intensive computations in pre- and postprocessing. On the other hand, neural networks have emerged as a tool for rapid concentration forecasting of outdoor environmental contaminants such as nitrogen oxides or sulfur dioxide. All of these modeling methods depend on the type of sensors used for real-time measurements of contaminant concentrations. A review of the existing sensor technologies revealed that no perfect sensor exists, but the intensity of work in this area promises improved sensors in the near future. The main goal of the presented research study was to extend neural network modeling from outdoor to indoor identification of source positions, making this technology applicable to building indoor environments. The developed neural network Locator of Contaminant Sources was also used to optimize the number and allocation of contaminant concentration sensors for real-time prediction of indoor contaminant source positions. Such prediction should take place within seconds of receiving real-time contaminant concentration sensor data. For the purpose of neural network training, a multizone program provided distributions of contaminant concentrations for known source positions throughout a test building. The trained networks indicated contaminant source positions based on measured concentrations in different building zones. A validation case based on a real building layout and experimental data demonstrated the ability of the developed neural network to identify indoor contaminant source positions.
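A minimal sketch of the kind of supervised learning involved, assuming hypothetical training data in which each row holds sensor concentrations for the building zones and the label is the zone containing the source (illustrative only; not the authors' network architecture or multizone-generated data):

```python
# Illustrative sketch: map zone concentration readings to a source-position class.
# Data shapes and the MLP architecture are assumptions, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_zones, n_samples = 8, 2000
# Hypothetical training set: concentrations measured in each zone (features)
# and the index of the zone containing the source (label).
X = rng.random((n_samples, n_zones))
y = rng.integers(0, n_zones, size=n_samples)

locator = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
locator.fit(X, y)

# Real-time use: feed the latest sensor readings and read off the predicted source zone.
reading = rng.random((1, n_zones))
print("predicted source zone:", locator.predict(reading)[0])
```

In practice the training data would come from a multizone contaminant transport simulation of the actual building, as described in the abstract.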
We present a risk-group oriented chronic disease progression model embedded within a metaheuristic-based optimization of the policy variables. Policy-makers are provided with Pareto-optimal screening schedules for risk groups by considering cost and effectiveness outcomes as well as budget constraints. The quality of the screening technology depends on risk group, disease stage, and time. As the metaheuristic solution technique, we use the Pareto ant colony optimization (P-ACO) algorithm for multiobjective combinatorial optimization problems, which is based on the ant colony optimization paradigm. Our approach is illustrated by a numerical example for breast cancer. For a 10-year time horizon, we provide cost-effective screening schedules for selected annual and total budgets. We then discuss policy implications of 16 mammography screening scenarios varying the screening schedule (annual, biennial, triennial, quadrennial) and the rate of women tested (25%, 50%, 75%, 100%). Due to the model's flexible structure, interventions for multiple chronic diseases can be considered simultaneously.
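The multiobjective aspect hinges on Pareto dominance between (cost, effectiveness) outcomes of candidate screening schedules. Below is a minimal sketch of the non-dominated filtering step that any P-ACO implementation needs; the objective values are placeholders, not results from the paper's disease model:

```python
# Keep the non-dominated (Pareto-optimal) schedules: minimize cost, maximize effectiveness.
def dominates(a, b):
    # a, b are (cost, effectiveness) tuples; a dominates b if it is no worse
    # in both objectives and strictly better in at least one.
    return a[0] <= b[0] and a[1] >= b[1] and (a[0] < b[0] or a[1] > b[1])

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# Hypothetical (cost, effectiveness) evaluations of four screening schedules.
schedules = [(10.0, 0.61), (14.0, 0.70), (12.0, 0.58), (18.0, 0.72)]
print(pareto_front(schedules))   # -> [(10.0, 0.61), (14.0, 0.70), (18.0, 0.72)]
```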
Time domain reflectometry (TDR) is commonly used to determine the soil bulk electrical conductivity. To obtain accurate measurements, the three parameters of a series resistor model (probe constant, K_p; cable resistance, R_c; and remaining resistance, R_0) are typically calibrated using liquids with known electrical conductivity. Several studies have reported discrepancies between calibrated and directly measured parameters of the series resistor model. In this study, we examined the possibility that a technical issue with the TDR100 cable tester contributed to part of these inconsistencies. Our results showed that with an increasing level of waveform averaging, the reflection coefficient, as well as K_p, R_c, and R_0, approached a maximum value. A comparison with independently determined values indicated that a high level of waveform averaging provided the physically most plausible results. Based on our results, we propose averaging at least 16 waveforms, each consisting of at least 250 points. An oscilloscope-based signal analysis showed that the increase in the reflection coefficient with increasing waveform averaging in saline media is related to a capacitance associated with electrode polarization, in combination with a change in the pulse period of the pulse train when the TDR100 starts collecting data points. This capacitance resulted in a slow change of the average voltage in the TDR pulse train until a stable average voltage was reached. Higher levels of waveform averaging cancel out the impact of the first erroneous voltage measurements. In practical applications, the errors in the determination of the bulk electrical conductivity can be as high as 5% in the low-conductivity range (<0.1 S m^-1) and up to 370% in saline media (1.4 S m^-1) when the level of waveform averaging is changed after calibration.
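A sketch of how bulk electrical conductivity is commonly recovered from the long-time reflection coefficient under a series resistor model of this kind; the exact functional form and the calibration values below are assumptions for illustration, not the instrument settings or calibration from this study:

```python
# Illustrative series resistor model: the load resistance seen by the cable tester
# is split into a sample contribution K_p / sigma plus cable and remaining resistances.
def bulk_ec(rho_inf, K_p, R_c, R_0, Z_0=50.0):
    """Bulk electrical conductivity (S/m) from the long-time reflection coefficient.

    rho_inf : reflection coefficient at long times (-1 < rho_inf < 1)
    K_p     : probe constant (1/m)
    R_c     : cable resistance (ohm)
    R_0     : remaining (contact/interface) resistance (ohm)
    Z_0     : output impedance of the cable tester (ohm), commonly 50 ohm
    """
    R_load = Z_0 * (1.0 + rho_inf) / (1.0 - rho_inf)   # total measured resistance
    R_sample = R_load - R_c - R_0                      # series model: subtract parasitics
    return K_p / R_sample

# Hypothetical calibration values, for illustration only.
print(bulk_ec(rho_inf=-0.6, K_p=5.0, R_c=1.0, R_0=0.5))
```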
ISBN (print): 1424402360
In order to obtain accurate and reliable network planning and optimisation results, the characteristics of WCDMA networks, such as power control, soft handover (SHO), and the strong coupling between coverage and capacity, have to be modelled accurately. These characteristics lead to an unprecedented complexity of WCDMA radio network planning and optimisation that has not been seen in previous cellular networks. In this paper, we present mathematical models that consider the characteristics of WCDMA radio networks. We also present and compare the performance of four optimisation algorithms based on meta-heuristics that can be used to find solutions for practical WCDMA radio network planning and optimisation.
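The abstract does not name the four meta-heuristics. As a generic illustration of the kind of search loop involved, here is a minimal simulated-annealing sketch over binary site-activation decisions, with a placeholder cost function standing in for the coverage/capacity model; everything below is an assumption, not the authors' formulation:

```python
import math, random

def plan_cost(active):
    # Placeholder for a WCDMA planning objective combining site cost and a crude
    # coverage penalty; a real model would account for power control and SHO.
    return sum(active) + 10.0 * (active.count(1) < 3)

def simulated_annealing(n_sites=10, steps=5000, t0=5.0, alpha=0.999, seed=0):
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n_sites)]
    best, best_cost, t = x[:], plan_cost(x), t0
    for _ in range(steps):
        y = x[:]
        y[rng.randrange(n_sites)] ^= 1           # flip one site on/off
        delta = plan_cost(y) - plan_cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            if plan_cost(x) < best_cost:
                best, best_cost = x[:], plan_cost(x)
        t *= alpha                               # cool the temperature
    return best, best_cost

print(simulated_annealing())
```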
Flux is a key measure of the metabolic phenotype. Recently, complete (genome-scale) metabolic network models have been established for Arabidopsis (Arabidopsis thaliana), and flux distributions have been predicted using constraints-based modeling and optimization algorithms such as linear programming. While these models are useful for investigating possible flux states under different metabolic scenarios, it is not clear how close the predicted flux distributions are to those occurring in vivo. To address this, fluxes were predicted for heterotrophic Arabidopsis cells and compared with fluxes estimated in parallel by 13C-metabolic flux analysis (MFA). Reactions of the central carbon metabolic network (glycolysis, the oxidative pentose phosphate pathway, and the tricarboxylic acid [TCA] cycle) were independently analyzed by the two approaches. Net fluxes in glycolysis and the TCA cycle were predicted accurately from the genome-scale model, whereas the oxidative pentose phosphate pathway was poorly predicted. MFA showed that increased temperature and hyperosmotic stress, which altered cell growth, also affected the intracellular flux distribution. Under both conditions, the genome-scale model was able to predict both the direction and magnitude of the changes in flux: namely, increased TCA cycle and decreased phosphoenolpyruvate carboxylase flux at high temperature and a general decrease in fluxes under hyperosmotic stress. MFA also revealed a 3-fold reduction in carbon-use efficiency at the higher temperature. It is concluded that constraints-based genome-scale modeling can be used to predict flux changes in central carbon metabolism under stress conditions.
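A minimal sketch of the constraints-based (flux balance analysis) prediction step on a toy three-reaction network, assuming steady state S·v = 0 and a linear objective; the stoichiometry and bounds are invented for illustration and are unrelated to the Arabidopsis genome-scale model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: one metabolite, produced by v1 and v2, consumed by v3 (the "objective" flux).
S = np.array([[1.0, 1.0, -1.0]])          # stoichiometric matrix (metabolites x reactions)
bounds = [(0, 10), (0, 5), (0, None)]     # flux bounds for v1, v2, v3

# linprog minimizes, so maximize v3 by minimizing -v3 subject to S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=[0.0], bounds=bounds, method="highs")
print("predicted flux distribution:", res.x)   # -> [10. 5. 15.]
```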
As more processing cores are added to embedded systems processors, the relationships between cores and memories have more influence on the energy consumption of the processor. In this paper, we conduct fundamental research to explore the effects of memory sharing on energy in a multicore processor. We study the Memory Arrangement (MA) Problem. We prove that the general case of MA is NP-complete. We present an optimal algorithm for solving linear MA and optimal and heuristic algorithms for solving rectangular MA. On average, we can produce arrangements that consume 49% less energy than an all-shared memory arrangement and 14% less energy than an all-private memory arrangement for randomly generated instances. For DSP benchmarks, we can produce arrangements that, on average, consume 20% less energy than an all-shared memory arrangement and 27% less energy than an all-private memory arrangement.
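As a toy illustration of the search space behind the Memory Arrangement problem, the sketch below exhaustively assigns each data block to either a shared memory or per-core private memories and keeps the lowest-energy assignment. The energy model is a placeholder with an invented shared-versus-private trade-off, not the model analyzed in the paper, and exhaustive search is used only because the example is tiny:

```python
from itertools import product

def energy(assignment, users, shared_access=3.0, private_access=1.0, replication=4.0):
    # Placeholder model: a shared block pays a higher per-core access energy;
    # a private block is cheap to access but must be replicated for every extra core.
    total = 0.0
    for block, shared in enumerate(assignment):
        cores = users[block]                       # cores that use this block
        if shared:
            total += shared_access * len(cores)
        else:
            total += private_access * len(cores) + replication * (len(cores) - 1)
    return total

def best_arrangement(users):
    n = len(users)
    return min(product([0, 1], repeat=n), key=lambda a: energy(a, users))

# Hypothetical access pattern: block index -> set of cores that use it.
accesses = [{0}, {0, 1}, {1, 2, 3}, {2}]
print(best_arrangement(accesses))   # 1 = shared, 0 = private, per block
```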
ISBN (print): 9781424477456
The paper analyzes the performance of a new estimation method for vehicle suspensions, which incorporates three parallel Kalman filters and takes into account the nonlinear damper characteristic of the suspension. For the performance evaluation, an extended Kalman filter (a nonlinear estimator) is used as a benchmark. The estimator structures are tuned by means of a multiobjective genetic optimization algorithm in order to maximize their performance. The advantages of the parallel Kalman filter concept are its low computational effort and good estimation accuracy despite the presence of nonlinearities in the suspension setup. Both estimators are compared to a computationally simple concept that obtains the estimates directly from measurement signals by conventional filtering techniques. The performance of the estimators is analyzed in simulations and experiments using a quarter-vehicle test rig and excitation signals obtained from measurements of real road profiles.
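For readers unfamiliar with the building block, below is a generic discrete-time Kalman filter predict/update step in numpy. The paper's contribution is running three such filters in parallel to cope with the nonlinear damper characteristic, which this sketch does not reproduce; all matrices here are placeholders rather than the quarter-vehicle model:

```python
import numpy as np

def kalman_step(x, P, z, A, C, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : prior state estimate and covariance
    z    : new measurement
    A, C : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R                     # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new

# Placeholder 2-state model (not the paper's suspension model).
A = np.array([[1.0, 0.01], [0.0, 1.0]])
C = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[1e-2]])
x, P = np.zeros(2), np.eye(2)
x, P = kalman_step(x, P, np.array([0.1]), A, C, Q, R)
print(x)
```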
This paper presents an algorithm for computing the weights of a loudspeaker array based on a given set of listening locations and their respective desired sound field distribution. We achieve this by introducing the E-norm condition number to control the desired sound field distribution for our application. We show that the conditioning of the problem is determined by this E-norm condition number and that part of the computation of this number is independent of frequency. Exploiting this intrinsic property and through the use of an optimization algorithm, we develop an efficient algorithm for the computation of array weights to achieve the desired sound field distribution in the spatial domain. The proposed algorithm can also be used to search for a set of alternative listening locations that give rise to a well-conditioned solution for the loudspeaker weights.
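A minimal sketch of the underlying least-squares step: given a matrix of acoustic transfer functions from loudspeakers to listening locations and a desired pressure vector, the weights solve a (possibly regularized) linear system whose conditioning governs robustness. The ordinary 2-norm condition number is used below as a stand-in for the paper's E-norm condition number, and the transfer matrix is random placeholder data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_speakers = 12, 8
# Hypothetical complex transfer matrix (listening points x loudspeakers) and target field.
G = rng.normal(size=(n_points, n_speakers)) + 1j * rng.normal(size=(n_points, n_speakers))
p_desired = rng.normal(size=n_points) + 1j * rng.normal(size=n_points)

# Conditioning check (2-norm condition number as a stand-in for the E-norm version).
print("cond(G) =", np.linalg.cond(G))

# Regularized least-squares solution for the loudspeaker weights.
lam = 1e-3
w = np.linalg.solve(G.conj().T @ G + lam * np.eye(n_speakers), G.conj().T @ p_desired)
print("residual:", np.linalg.norm(G @ w - p_desired))
```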