This paper presents a new approach for resource optimization by combining a flow-chart based simulation tool with a powerful genetic optimization procedure. The proposed approach determines the least costly and most productive amount of resources that achieves the highest benefit/cost ratio in individual construction operations. To further incorporate resource optimization into construction planning, various genetic algorithm (GA)-optimized simulation models are integrated with commonly used project management software. Accordingly, these models are activated from within the scheduling software to optimize the plan. The result is a hierarchical work-breakdown structure tied to GA-optimized simulation models. Optimization experiments with a prototype system on two case studies demonstrated its ability to optimize resources within the real-life constraints set in the simulation models. The prototype is easy to use and scales to large projects. Based on this research, computer simulation and genetic algorithms can be an effective combination with great potential for improving productivity and saving construction time and cost.
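As a rough illustration of the benefit/cost optimization described above, the sketch below runs a minimal genetic algorithm over crew size for a single operation. The saturating productivity model, linear cost model, and all parameter values are hypothetical, not taken from the paper:

```python
import random

# Hypothetical single-resource model: productivity saturates with crew
# size while cost grows linearly, so the benefit/cost ratio peaks.
def fitness(crew):
    productivity = 100 * (1 - 0.8 ** crew)   # units/day, saturating
    cost = 50 + 20 * crew                    # $/day
    return productivity / cost               # ratio to maximize

def ga_optimize(generations=60, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(1, 15) for _ in range(pop_size)]
    for _ in range(generations):
        new = []
        for _ in range(pop_size):
            # binary tournament selection plus integer mutation
            a, b = rng.sample(pop, 2)
            parent = a if fitness(a) > fitness(b) else b
            child = max(1, min(15, parent + rng.choice([-1, 0, 1])))
            new.append(child)
        pop = new
    return max(pop, key=fitness)

best = ga_optimize()
```

The real system evaluates fitness by running a discrete-event simulation of the operation rather than a closed-form productivity model.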
ISBN (Print): 9781424442959
In this paper, we develop algorithms for robust linear regression by leveraging the connection between the problems of robust regression and sparse signal recovery. We explicitly model the measurement noise as a combination of two terms; the first term accounts for regular measurement noise modeled as zero mean Gaussian noise, and the second term captures the impact of outliers. The fact that the latter outlier component could indeed be a sparse vector provides the opportunity to leverage sparse signal reconstruction methods to solve the problem of robust regression. Maximum a posteriori (MAP) based and empirical Bayesian inference based algorithms are developed for this purpose. Experimental studies on simulated and real data sets are presented to demonstrate the effectiveness of the proposed algorithms.
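The noise model described here can be written y = Xβ + e + n with e sparse. Below is a minimal MAP-style alternating scheme for one-variable regression: least squares on outlier-corrected targets, then soft-thresholding of the residuals, which corresponds to a Laplace prior on e. The data and the λ value are illustrative, and this is a simplification of the paper's algorithms:

```python
def soft(x, t):
    # soft-thresholding operator: shrink x toward zero by t
    return max(x - t, 0.0) if x > 0 else min(x + t, 0.0)

def robust_fit(xs, ys, lam=1.0, iters=50):
    """Fit y ~ a*x + b under y = a*x + b + e + n with sparse outliers e,
    alternating least squares on (y - e) with soft-thresholding of the
    residuals (a MAP scheme under a Laplace prior on e)."""
    n = len(xs)
    e = [0.0] * n
    a = b = 0.0
    for _ in range(iters):
        # least squares on outlier-corrected targets
        t = [y - ei for y, ei in zip(ys, e)]
        mx, mt = sum(xs) / n, sum(t) / n
        a = sum((x - mx) * (ti - mt) for x, ti in zip(xs, t)) / \
            sum((x - mx) ** 2 for x in xs)
        b = mt - a * mx
        # sparse outlier update: threshold the current residuals
        e = [soft(y - (a * x + b), lam) for x, y in zip(xs, ys)]
    return a, b

# clean line y = 2x + 1 with one gross outlier injected
xs = list(range(10))
ys = [2 * x + 1 for x in xs]
ys[5] += 30.0
a, b = robust_fit(xs, ys)
```

Ordinary least squares on this data is pulled far off the true line by the single outlier; the alternating scheme recovers slope and intercept close to (2, 1).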
The K* algorithm provably approximates partition functions for a set of states (e.g., protein, ligand, and protein-ligand complex) to a user-specified accuracy epsilon. Often, reaching an epsilon-approximation for a particular set of partition functions takes a prohibitive amount of time and space. To alleviate some of this cost, we introduce two new algorithms into the osprey suite for protein design: FRIES, a Fast Removal of Inadequately Energied Sequences, and EWAK*, an Energy Window Approximation to K*. FRIES pre-processes the sequence space to limit a design to only the most stable, energetically favorable sequence possibilities. EWAK* then takes this pruned sequence space as input and, using a user-specified energy window, calculates K* scores using the lowest energy conformations. We expect FRIES/EWAK* to be most useful in cases where there are many unstable sequences in the design sequence space and when users are satisfied with enumerating the low-energy ensemble of conformations. In combination, these algorithms provably retain calculational accuracy while limiting the input sequence space and the conformations included in each partition function calculation to only the most energetically favorable, effectively reducing runtime while still enriching for desirable sequences. This combined approach led to significant speed-ups compared to the previous state-of-the-art multi-sequence algorithm, BBK*, while maintaining its efficiency and accuracy, which we show across 40 different protein systems and a total of 2,826 protein design problems. Additionally, as a proof of concept, we used these new algorithms to redesign the protein-protein interface (PPI) of the c-Raf-RBD:KRas complex. The Ras-binding domain of the protein kinase c-Raf (c-Raf-RBD) is the tightest known binder of KRas, a protein implicated in difficult-to-treat cancers. FRIES/EWAK* accurately retrospectively predicted the effect of 41 different sets of mutations in the PPI of the c-Raf-RBD:KRas complex.
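The windowed partition-function idea behind EWAK* can be illustrated with a toy Boltzmann sum in which only conformations within a user-specified energy window of the minimum contribute. The ensemble energies below are hypothetical, and this sketch omits everything osprey actually does (rotamer enumeration, energy bounds, provable epsilon-guarantees):

```python
import math

R, T = 0.00198720, 298.15   # gas constant in kcal/(mol*K), temperature in K

def partition_function(energies, window=None):
    """Boltzmann-weighted sum over conformation energies (kcal/mol),
    shifted by the minimum energy for numerical stability. With an
    energy window w, only conformations within w of the minimum count."""
    e_min = min(energies)
    kept = [e for e in energies if window is None or e - e_min <= window]
    return sum(math.exp(-(e - e_min) / (R * T)) for e in kept)

# hypothetical conformational ensemble (kcal/mol)
ensemble = [-10.0, -9.7, -9.1, -4.0, -2.5]
q_full = partition_function(ensemble)
q_win = partition_function(ensemble, window=2.0)
```

Because high-energy conformations contribute negligibly at room temperature, the windowed sum here differs from the full sum by far less than a typical epsilon, which is the intuition the abstract relies on.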
ISBN (Print): 9781424441211
Traditional analyses of in vivo 1D MR spectroscopy of brain metabolites have been limited to the inspection of one-dimensional free induction decay (FID) signals from which only a limited number of metabolites are clearly observable. In this article we introduce a novel set of algorithms to process and characterize two-dimensional in vivo MR correlation spectroscopy (2D COSY) signals. 2D COSY data was collected from phantom solutions of typical metabolites found in the brain, namely glutamine, glutamate, and creatine. A statistical peak-detection and object segmentation algorithm is adapted for 2D COSY signals and applied to phantom solutions containing varied concentrations of glutamine and glutamate. Additionally, quantitative features are derived from peak and object structures, and we show that these measures are correlated with known phantom metabolite concentrations. These results are encouraging for future studies focusing on neurological disorders that induce subtle changes in brain metabolite concentrations and for which accurate quantitation is important.
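A minimal stand-in for the statistical peak-detection and object-segmentation step might threshold grid points a few standard deviations above the grid mean and then group flagged neighbours into objects. The synthetic "spectrum" and the k = 3 threshold below are illustrative only, not the paper's algorithm:

```python
from statistics import mean, stdev

def detect_peaks(grid, k=3.0):
    """Flag points more than k standard deviations above the grid mean,
    then group flagged 4-connected neighbours into objects."""
    vals = [v for row in grid for v in row]
    thresh = mean(vals) + k * stdev(vals)
    flagged = {(r, c) for r, row in enumerate(grid)
               for c, v in enumerate(row) if v > thresh}
    objects, seen = [], set()
    for start in flagged:
        if start in seen:
            continue
        comp, stack = [], [start]
        seen.add(start)
        while stack:                     # flood fill one object
            r, c = stack.pop()
            comp.append((r, c))
            for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if nb in flagged and nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        objects.append(comp)
    return objects

# synthetic 2D spectrum: flat baseline with two separated peaks
grid = [[0.0] * 8 for _ in range(8)]
grid[2][2] = grid[2][3] = 5.0            # one two-point cross peak
grid[6][6] = 4.0                         # one isolated peak
peaks = detect_peaks(grid)
```

Per-object features such as area or integrated intensity, computed from `comp`, are the kind of quantitative measures the article correlates with metabolite concentration.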
Forecasts of future events are required in many activities associated with planning and operation of the components of a water resources system. For the hydrologic component, there is a need for both short-term and long-term forecasts of streamflow events in order to optimize the system or to plan for future expansion or reduction. This paper presents a comparison of different artificial neural network (ANN) algorithms for short-term daily streamflow forecasting. Four different ANN algorithms, namely, backpropagation, conjugate gradient, cascade correlation, and Levenberg-Marquardt, are applied to continuous streamflow data of the North Platte River in the United States. The models are verified with untrained data. The results from the different algorithms are compared with each other. Correlation analysis was used in the study and found to be useful for determining appropriate input vectors to the ANNs.
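A toy version of this forecasting setup, assuming lagged daily flows as the input vector, is a one-hidden-layer network trained with plain backpropagation (the first of the four algorithms compared). The synthetic series, network size, and learning rate are all hypothetical:

```python
import math
import random

def train_mlp(series, lags=2, hidden=4, epochs=2000, lr=0.05, seed=1):
    # build (lagged inputs, next value) training pairs
    X = [series[i:i + lags] for i in range(len(series) - lags)]
    y = series[lags:]
    rng = random.Random(seed)
    W1 = [[rng.uniform(-0.5, 0.5) for _ in range(lags)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rng.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):
            # forward pass: tanh hidden layer, linear output
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            out = sum(w * hi for w, hi in zip(W2, h)) + b2
            err = out - t
            # backpropagate squared-error gradients
            for j in range(hidden):
                gh = err * W2[j] * (1 - h[j] ** 2)
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * gh
                for i in range(lags):
                    W1[j][i] -= lr * gh * x[i]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * hi for w, hi in zip(W2, h)) + b2
    return predict

# synthetic "daily flow" series, scaled roughly to [0.1, 0.9]
flows = [0.5 + 0.4 * math.sin(0.3 * t) for t in range(60)]
predict = train_mlp(flows)
```

Real streamflow would be normalized the same way before training, and the correlation analysis mentioned in the abstract would guide the choice of `lags`.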
A number of processors working simultaneously make sequential random samples from a large basic space and test the sampled elements in order to discover at least one possessing a given investigated property. In the simplest case, all the random samples are statistically independent, but more sophisticated arrays with dependent samples are also considered. The sampled elements with the tested property, or pieces of information saying that particular processors have not discovered such an element, are processed by a hierarchy of higher-level processors; finally, an output processor yields a "yes" or "no" answer to the question whether there is at least one element possessing the tested property in the basic space. Due to the random nature of the sampling mechanism and due to the fact that communications among processors are supposed to be weighted by a positive probability of error, the final answer may be wrong with a positive probability. The aim is to minimize the time computational complexity of the statistical decision function defined by the systolic array in question under the condition that the probability of error be kept below an a priori given threshold value.
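One piece of this trade-off is easy to make quantitative: the smallest per-processor sample count that keeps the miss probability below a threshold, ignoring the noisy-communication layer. With independent uniform draws, m processors taking k samples each miss every target element with probability (1 - f)^(mk), where f is the fraction of the space with the property. The parameter values are illustrative:

```python
import math

def samples_needed(m, frac, delta):
    """Smallest per-processor sample count k such that m processors making
    independent uniform draws all miss the target elements (a fraction
    `frac` of the space) with probability at most delta:
    (1 - frac) ** (m * k) <= delta."""
    return math.ceil(math.log(delta) / (m * math.log(1 - frac)))

k = samples_needed(8, 0.01, 1e-3)
```

The full analysis in the paper then layers the per-link error probability of the processor hierarchy on top of this sampling bound.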
A new approach for reliability-based optimization of water distribution networks is presented. The approach links a genetic algorithm (GA) as the optimization tool with the first-order reliability method (FORM) for estimating network capacity reliability. Network capacity reliability in this case study refers to the probability of meeting minimum allowable pressure constraints across the network under uncertain nodal demands and uncertain pipe roughness conditions. The critical node capacity reliability approximation for network capacity reliability is closely examined and new methods for estimating the critical nodal and overall network capacity reliability using FORM are presented. FORM approximates Monte Carlo simulation reliabilities accurately and efficiently. In addition, FORM can be used to automatically determine the critical node location and corresponding capacity reliability. Network capacity reliability approximations using FORM are improved by considering two failure modes. This research demonstrates the novel combination of a GA with FORM as an effective approach for reliability-based optimization of water distribution networks. Correlations between random variables are shown to significantly increase optimal network costs.
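FORM's basic step can be illustrated for a linear limit state with Gaussian inputs: the reliability index beta is the mean safety margin divided by its standard deviation, and capacity reliability is Phi(beta). The pressure numbers below are hypothetical, and the real method also handles nonlinear limit states and correlated variables:

```python
import math

def phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def form_reliability(mean_margin, std_margin):
    """First-order reliability estimate for a linear limit state
    g = pressure - p_min with Gaussian inputs: beta = mu_g / sigma_g,
    reliability = Phi(beta). A minimal FORM illustration, not the
    paper's network model."""
    beta = mean_margin / std_margin
    return beta, phi(beta)

# hypothetical critical node: mean pressure 28 m against a 20 m minimum,
# with a 4 m margin standard deviation from demand/roughness uncertainty
beta, rel = form_reliability(28.0 - 20.0, 4.0)
```

In the GA loop, an estimate like `rel` for the critical node replaces a full Monte Carlo simulation when evaluating each candidate network design.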
The identification of network applications through observation of associated packet traffic flows is vital to the areas of network management and surveillance. Currently popular methods such as port number and payload-based identification exhibit a number of shortfalls. An alternative is to use machine learning (ML) techniques and identify network applications based on per-flow statistics, derived from payload-independent features such as packet length and inter-arrival time distributions. The performance impact of feature set reduction, using Consistency-based and Correlation-based feature selection, is demonstrated on Naive Bayes, C4.5, Bayesian Network and Naive Bayes Tree algorithms. We then show that it is useful to differentiate algorithms based on computational performance rather than classification accuracy alone, as although classification accuracy between the algorithms is similar, computational performance can differ significantly.
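Per-flow statistics of the kind described, derived only from packet lengths and inter-arrival times, might look like the following. The feature set and the sample flow are illustrative, not the paper's exact ones:

```python
from statistics import mean, stdev

def flow_features(pkt_lengths, arrival_times):
    """Payload-independent per-flow statistics usable as ML classifier
    inputs: packet-length and inter-arrival-time summaries."""
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return {
        "pkts": len(pkt_lengths),
        "len_mean": mean(pkt_lengths),
        "len_std": stdev(pkt_lengths),
        "iat_mean": mean(gaps),
        "iat_std": stdev(gaps),
    }

# a short hypothetical flow: small control packets around two full-size ones
feats = flow_features([60, 1500, 1500, 60], [0.00, 0.01, 0.02, 0.20])
```

Feature selection of the Consistency-based or Correlation-based kind would then prune this dictionary down to the most discriminative keys before training.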
ISBN (Print): 9781510825024
We study contextual bandits with budget and time constraints, referred to as constrained contextual bandits. The time and budget constraints significantly complicate the exploration and exploitation tradeoff because they introduce complex coupling among contexts over time. To gain insight, we first study unit-cost systems with known context distribution. When the expected rewards are known, we develop an approximation of the oracle, referred to as Adaptive-Linear-Programming (ALP), which achieves near-optimality and only requires the ordering of expected rewards. With these highly desirable features, we then combine ALP with the upper-confidence-bound (UCB) method in the general case where the expected rewards are unknown a priori. We show that the proposed UCB-ALP algorithm achieves logarithmic regret except for certain boundary cases. Further, we design algorithms and obtain similar regret bounds for more general systems with unknown context distribution and heterogeneous costs. To the best of our knowledge, this is the first work that shows how to achieve logarithmic regret in constrained contextual bandits. Moreover, this work also sheds light on the study of computationally efficient algorithms for general constrained contextual bandits.
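For the unit-cost, known-distribution case, the ALP oracle approximation admits a compact sketch: take contexts in decreasing-reward order with probability 1 until the average budget rho = B/T is used up, then take the boundary context with the leftover fractional probability. As the abstract notes, only the ordering of the rewards is needed. The context probabilities and rewards below are hypothetical:

```python
def alp_probabilities(context_probs, rewards, budget, horizon):
    """ALP relaxation for unit-cost constrained contextual bandits:
    greedily allocate the average budget rho = B/T to contexts in
    decreasing order of expected reward, with a fractional take
    probability for the boundary context."""
    rho = budget / horizon
    order = sorted(range(len(rewards)), key=lambda j: -rewards[j])
    take = [0.0] * len(rewards)
    remaining = rho
    for j in order:
        take[j] = min(1.0, remaining / context_probs[j])
        remaining -= take[j] * context_probs[j]
        if remaining <= 0:
            break
    return take

# hypothetical two-context system: context 0 (prob 0.4, reward 1.0),
# context 1 (prob 0.6, reward 0.5), budget 50 over horizon 100
take = alp_probabilities([0.4, 0.6], [1.0, 0.5], budget=50, horizon=100)
```

UCB-ALP runs this allocation each round with UCB indices standing in for the unknown expected rewards.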
We study the importance of interference for the performance of Shor’s factoring algorithm and Grover’s search algorithm using a recently proposed interference measure. To this aim we introduce systematic unitary errors, random unitary errors, and decoherence processes in these algorithms. We show that unitary errors which destroy the interference destroy the efficiency of the algorithm, too. However, unitary errors may also create useless additional interference. In such a case the total amount of interference can increase, while the efficiency of the quantum computation decreases. For decoherence due to phase flip errors, interference is destroyed for small error probabilities, and converted into destructive interference for error probabilities approaching 1, leading to success probabilities which can even drop below the classical value. Our results show that in general, interference is necessary in order for a quantum algorithm to outperform classical computation, but large amounts of interference are not sufficient and can even lead to destructive interference with worse than classical success rates.