This article addresses the shortcomings of previously reported approaches by reducing operating cost and improving network performance. A genetic algorithm-based tabu search methodology is proposed to solve the link capacity and traffic allocation (CFA) problem in a computer communication network. Using this efficient modern meta-heuristic search method, the influences of a link's fixed cost, delay cost, and variable cost on the total operating cost of the network are discussed. The article analyses a large number of computer simulation results to verify the effectiveness of the tabu search algorithm for CFA problems, showing that it significantly improves solution quality compared with traditional Lagrangian relaxation and subgradient optimization algorithms. The experimental results show that as the weighting coefficient of the variable cost increases, the proportion of variable cost in the total cost rises from 10% to 35%; the growth is relatively slow, and the fixed cost remains the main component. In addition, as the variable cost increases, the tabu search algorithm also chooses links with larger spare capacity to reduce the variable cost, which makes the fixed cost increase slightly while the network delay cost and average delay decrease slightly. Compared with the genetic algorithm, the proposed method offers greater advantages for large-scale or heavily loaded networks.
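The core tabu-search loop described above can be sketched as follows. The four-link instance, the cost weights, and the single-step neighborhood are illustrative assumptions, not the paper's genetic-algorithm hybrid or its actual network model:

```python
def tabu_search(init, neighbors, cost, n_iter=200, tabu_len=10):
    """Generic tabu search: a short-term memory of recent solutions
    forbids immediate backtracking, letting the search climb out of
    local optima by accepting non-improving moves."""
    current, best = init, init
    best_cost = cost(init)
    tabu = [init]
    for _ in range(n_iter):
        cands = [n for n in neighbors(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)  # best non-tabu neighbor
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)
        c = cost(current)
        if c < best_cost:
            best, best_cost = current, c
    return best, best_cost

# Toy CFA-like instance: pick a capacity level (0-9) for each of four
# links; total cost = fixed cost per open link + variable cost per unit
# of capacity + a delay penalty that blows up when the capacity does
# not exceed the offered load.
LOAD = (3, 5, 2, 7)  # hypothetical traffic on each link

def cost(x):
    fixed = sum(2 for c in x if c > 0)
    variable = sum(x)
    delay = sum(100 if c <= l else 1.0 / (c - l) for c, l in zip(x, LOAD))
    return fixed + variable + delay

def neighbors(x):
    out = []
    for i in range(len(x)):
        for d in (-1, 1):
            if 0 <= x[i] + d <= 9:
                y = list(x)
                y[i] += d
                out.append(tuple(y))
    return out

best, bc = tabu_search((9, 9, 9, 9), neighbors, cost)
print(best, round(bc, 2))
```

Because the tabu list admits non-improving moves, the search can leave the basin of a local optimum, which is the property the abstract credits for beating pure descent-style methods.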
Codon optimized genes have two major advantages: they simplify de novo gene synthesis and increase the expression level in target hosts. Often they achieve this by altering codon usage in a given gene. Codon optimization is complex because it usually needs to achieve multiple opposing goals. In practice, finding an optimal sequence from the massive number of possible combinations of synonymous codons that can code for the same amino acid sequence is a challenging task. In this article, we introduce COStar, a D-star Lite-based dynamic search algorithm for codon optimization. The algorithm first maps the codon optimization problem into a weighted directed acyclic graph using a sliding window approach. Then, the D-star Lite algorithm is used to compute the shortest path from the start site to the target site in the resulting graph. Optimizing a gene is thus converted to a real-time search for a shortest path in a generated graph. Using in silico experiments, the performance of the algorithm was demonstrated by optimizing various genes, including genes from the human genome. The results suggest that COStar is a promising codon optimization tool for de novo gene synthesis and heterologous gene expression. (C) 2013 Elsevier Ltd. All rights reserved.
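The mapping of codon optimization onto a shortest path in a layered graph can be sketched with plain dynamic programming, which suffices for a static graph (COStar itself uses D* Lite, whose advantage is incremental re-search when edge weights change). The tiny codon subset, the GC-content target, and the edge weights below are illustrative assumptions:

```python
CODON_TABLE = {  # tiny illustrative subset of the genetic code
    'M': ['ATG'],
    'F': ['TTT', 'TTC'],
    'L': ['TTA', 'TTG', 'CTT', 'CTC', 'CTA', 'CTG'],
    'K': ['AAA', 'AAG'],
}

def gc(codon):
    return sum(b in 'GC' for b in codon) / 3

def edge_cost(prev, cur, target_gc=0.5):
    # hypothetical weight: GC deviation plus a penalty for repeats
    return abs(gc(cur) - target_gc) + (0.5 if prev == cur else 0.0)

def optimize(protein):
    # one layer of synonymous codons per amino acid position;
    # best[c] = minimum path cost of any path ending in codon c
    layers = [CODON_TABLE[aa] for aa in protein]
    best = {c: abs(gc(c) - 0.5) for c in layers[0]}  # node cost only
    back = [{}]
    for layer in layers[1:]:
        nxt, ptr = {}, {}
        for c in layer:
            p = min(best, key=lambda q: best[q] + edge_cost(q, c))
            nxt[c] = best[p] + edge_cost(p, c)
            ptr[c] = p
        best, back = nxt, back + [ptr]
    # trace the shortest path back to the first layer
    c = min(best, key=best.get)
    path = [c]
    for ptr in reversed(back[1:]):
        c = ptr[c]
        path.append(c)
    return ''.join(reversed(path))

seq = optimize('MFLK')
print(seq)
```

The returned nucleotide string codes for the input protein while minimizing the toy penalty along the path, mirroring how a shortest path in the DAG corresponds to an optimized gene.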
This paper presents an algorithm, called progressive dimension growth (PDG), for the construction of linear codes with a pre-specified length and minimum distance. A number of new linear codes over GF(5) discovered via this algorithm are also presented.
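For intuition about the objects being constructed, the minimum distance of a small linear code over GF(5) can be checked by brute-force enumeration of its codewords. The generator matrix below is a hypothetical example, not one of the paper's new codes, and enumeration is only feasible for tiny dimensions (PDG exists precisely to avoid such exhaustive search):

```python
from itertools import product

Q = 5
G = [[1, 0, 1, 2],   # hypothetical generator matrix of a [4, 2] code
     [0, 1, 3, 1]]

def min_distance(G):
    """Minimum Hamming weight over all non-zero codewords, which for a
    linear code equals the minimum distance."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product(range(Q), repeat=k):
        if not any(msg):
            continue  # skip the all-zero message
        cw = [sum(m * g for m, g in zip(msg, col)) % Q
              for col in zip(*G)]
        best = min(best, sum(c != 0 for c in cw))
    return best

print(min_distance(G))
```

Here the codeword (1, 3, 0, 0), produced by the message (1, 3), has weight 2, so this toy code has minimum distance 2.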
Predicting shallow landslide size and location across landscapes is important for understanding landscape form and evolution and for hazard identification. We test a recently developed model that couples a search algorithm with 3-D slope stability analysis that predicts these two key attributes in an intensively studied landscape with a 10 year landslide inventory. We use process-based submodels to estimate soil depth, root strength, and pore pressure for a sequence of landslide-triggering rainstorms. We parameterize submodels with field measurements independently of the slope stability model, without calibrating predictions to observations. The model generally reproduces observed landslide size and location distributions, overlaps 65% of observed landslides, and of these predicts size to within factors of 2 and 1.5 in 55% and 28% of cases, respectively. Five percent of the landscape is predicted unstable, compared to 2% recorded landslide area. Missed landslides are not due to the search algorithm but to the formulation and parameterization of the slope stability model and inaccuracy of observed landslide maps. Our model does not improve location prediction relative to infinite-slope methods but predicts landslide size, improves process representation, and reduces reliance on effective parameters. Increasing rainfall intensity or root cohesion generally increases landslide size and shifts locations down hollow axes, while increasing cohesion restricts unstable locations to areas with deepest soils. Our findings suggest that shallow landslide abundance, location, and size are ultimately controlled by covarying topographic, material, and hydrologic properties. Estimating the spatiotemporal patterns of root strength, pore pressure, and soil depth across a landscape may be the greatest remaining challenge.
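The infinite-slope method that the model is compared against reduces to a one-line stability calculation. The sketch below uses the standard infinite-slope factor-of-safety formula with hypothetical parameter values (cohesion, friction angle, unit weights); FS < 1 indicates predicted instability:

```python
import math

def factor_of_safety(slope_deg, z, h, c=2000.0, phi_deg=33.0,
                     gamma=18000.0, gamma_w=9810.0):
    """Infinite-slope factor of safety (FS < 1 -> predicted failure).
    slope_deg: slope angle (deg); z: soil depth (m); h: water table
    height above the failure plane (m); c: total cohesion, soil plus
    roots (Pa); phi_deg: friction angle (deg); gamma, gamma_w: unit
    weights of soil and water (N/m^3). All values here are hypothetical."""
    t, phi = math.radians(slope_deg), math.radians(phi_deg)
    resisting = c + (gamma * z - gamma_w * h) * math.cos(t) ** 2 * math.tan(phi)
    driving = gamma * z * math.sin(t) * math.cos(t)
    return resisting / driving

print(factor_of_safety(40.0, 1.0, 0.0))  # dry slope
print(factor_of_safety(40.0, 1.0, 1.0))  # fully saturated slope
```

Raising the pore pressure (larger h) lowers the effective normal stress and hence FS, which is the mechanism behind the rainfall-driven triggering the abstract describes; the 3-D model extends this by resolving lateral resistance and landslide shape.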
This paper presents a quick search algorithm for the fracture angle of the inter-fiber fracture (IFF) criterion and a progressive damage model considering the dynamic fracture toughness effect. Determining the fracture angle of the IFF criterion is necessary but time-consuming, which has limited its practicality. Therefore, to accurately calculate stress exposure factors under various combinations of stress states, a highly efficient algorithm is needed. In this paper, a novel algorithm using stress tractions on the fracture plane and the golden section search (GSS) is proposed. Through a series of simulation tests, this novel algorithm has been proven to be exact and efficient compared with two other common algorithms, saving an average of 63.2% and 28.1% of the time in 1 million calculations and 2 h 2 min 4 s and 39 min 2 s in simulations. Additionally, fracture toughness, which controls the material damage evolution process, also exhibits a strain-rate effect, like elastic modulus and strength. Hence, a new progressive damage model considering the dynamic fracture toughness effect is proposed. The results of dynamic compression and ballistic impact tests and simulations demonstrate the fidelity of the novel damage model.
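The golden section search at the heart of the proposed algorithm can be sketched as follows. The bracketing logic is standard GSS; the stress exposure function is a made-up unimodal stand-in for the IFF exposure factor, peaked at a hypothetical fracture angle of 53 degrees:

```python
import math

INVPHI = (math.sqrt(5) - 1) / 2  # 1/phi, the golden-ratio reduction factor

def golden_section_max(f, lo, hi, tol=1e-4):
    """Maximize a unimodal f on [lo, hi] by golden-ratio bracketing.
    (Re-evaluates f at both probes each pass for clarity; a production
    version would reuse one evaluation per iteration.)"""
    a, b = lo, hi
    while b - a > tol:
        c = b - INVPHI * (b - a)
        d = a + INVPHI * (b - a)
        if f(c) > f(d):
            b = d  # maximum lies in [a, d]
        else:
            a = c  # maximum lies in [c, b]
    return (a + b) / 2

# hypothetical stress exposure factor, unimodal with a peak at 53 degrees
f = lambda theta: math.cos(math.radians(theta - 53.0))
theta_star = golden_section_max(f, -90.0, 90.0)
print(round(theta_star, 2))
```

Each pass shrinks the search bracket by a constant factor of about 0.618, so the angle is located to any tolerance in logarithmically many evaluations instead of the fixed-step sweep used by brute-force fracture-angle searches.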
Negative electron-transfer dissociation (NETD) has emerged as a premier tool for peptide anion analysis, offering access to acidic post-translational modifications and regions of the proteome that are intractable with traditional positive-mode approaches. Whole-proteome scale characterization is now possible with NETD, but proper informatic tools are needed to capitalize on advances in instrumentation. Currently only one database search algorithm (OMSSA) can process NETD data. Here we implement NETD search capabilities into the Byonic platform to improve the sensitivity of negative-mode data analyses, and we benchmark these improvements using 90 min LC-MS/MS analyses of tryptic peptides from human embryonic stem cells. With this new algorithm for searching NETD data, we improved the number of successfully identified spectra by as much as 80% and identified 8665 unique peptides, 24 639 peptide spectral matches, and 1338 proteins in activated-ion NETD analyses, more than doubling identifications from previous negative-mode characterizations of the human proteome. Furthermore, we reanalyzed our recently published large-scale, multienzyme negative-mode yeast proteome data, improving peptide and peptide spectral match identifications and considerably increasing protein sequence coverage. In all, we show that new informatics tools, in combination with recent advances in data acquisition, can significantly improve proteome characterization in negative-mode approaches.
An assembly-type just-in-time (JIT) supply chain system is composed of a main serial supply chain and several branching serial supply chains merging into the main supply chain. Under the assumption of a constant demand rate, the replenishment problem in assembly-type JIT supply chain systems can be formulated as a mixed-integer non-linear programming problem. We conduct a thorough theoretical analysis of the properties of the objective function value curve in this study. Following our theoretical analysis, we propose a search algorithm that efficiently finds an optimal solution. Our numerical experiments show that the average run time of the proposed algorithm grows linearly with the problem size, and the proposed algorithm may serve as an efficient solution approach for the decision-maker.
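The general idea of exploiting a well-behaved objective curve can be illustrated with a search over a single integer decision variable. The EOQ-style replenishment cost and its parameters below are toy assumptions, not the paper's mixed-integer model; the point is that unimodality lets a forward walk stop at the first cost increase:

```python
def total_cost(n, K=100.0, h=2.0, d=500.0):
    """Toy replenishment cost for n deliveries per cycle: setup cost
    K*n rises with n while average holding cost h*d/(2n) falls, giving
    a unimodal (convex) curve over the positive integers."""
    return K * n + h * d / (2 * n)

def search_min(cost, n=1):
    # walk forward while the unimodal cost keeps strictly decreasing
    while cost(n + 1) < cost(n):
        n += 1
    return n, cost(n)

n_star, c_star = search_min(total_cost)
print(n_star, c_star)  # 2 450.0
```

With these parameters the cost at n = 1, 2, 3 is 600, 450, and about 466.7, so the search stops at n = 2; a theoretical analysis of the curve, as in the paper, is what justifies stopping at the first local minimum.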
We show that by adding a workspace qubit to the algorithm of Younes et al. (AIP Conf. Proc. 734:171, 2004, 2008) and applying newly defined partial diffusion operators on subsystems, the algorithm's performance is improved. We consider an unstructured list of N items with M matches, where 1 ≤ M ≤ N.
Clustering is an important unsupervised analysis technique for big data mining. It finds application in several domains, including the biomedical documents of the MEDLINE database. Document clustering based on metaheuristics is an active research area. However, these algorithms suffer from several problems: they can become trapped in local optima, they require many parameters to be tuned, and the documents must be indexed by a high-dimensionality matrix under the traditional vector space model. In order to overcome these limitations, in this paper a new parameter-free document clustering algorithm (ASOS-LSI) is proposed. It is based on the recent symbiotic organisms search (SOS) metaheuristic and enhanced by an acceleration technique. Furthermore, the documents are represented by semantic indexing based on the well-known latent semantic indexing (LSI). Experiments conducted on well-known biomedical document datasets show the significant superiority of ASOS-LSI over five well-known algorithms in terms of compactness, f-measure, purity, misclassified documents, entropy, and runtime.
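The SOS metaheuristic underlying ASOS-LSI can be sketched on a toy continuous minimization problem. Document representation, LSI, and the paper's acceleration technique are omitted; the sphere objective, population size, and iteration budget are illustrative assumptions. The three interaction phases (mutualism, commensalism, parasitism) follow the standard SOS scheme:

```python
import random

random.seed(0)

def sphere(x):
    # toy objective standing in for a clustering fitness function
    return sum(v * v for v in x)

def sos(f, dim=3, pop=10, iters=100, lo=-5.0, hi=5.0):
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    F = [f(x) for x in X]
    for _ in range(iters):
        best = X[F.index(min(F))]
        for i in range(pop):
            # mutualism: organisms i and j both move toward the best
            j = random.choice([k for k in range(pop) if k != i])
            mv = [(a + b) / 2 for a, b in zip(X[i], X[j])]
            bf1, bf2 = random.choice((1, 2)), random.choice((1, 2))
            xi = [a + random.random() * (b - m * bf1)
                  for a, m, b in zip(X[i], mv, best)]
            xj = [a + random.random() * (b - m * bf2)
                  for a, m, b in zip(X[j], mv, best)]
            fi, fj = f(xi), f(xj)
            if fi < F[i]: X[i], F[i] = xi, fi
            if fj < F[j]: X[j], F[j] = xj, fj
            # commensalism: i benefits from a random j, j is unaffected
            j = random.choice([k for k in range(pop) if k != i])
            xi = [a + random.uniform(-1, 1) * (b - c)
                  for a, b, c in zip(X[i], best, X[j])]
            fi = f(xi)
            if fi < F[i]: X[i], F[i] = xi, fi
            # parasitism: a mutated copy of i tries to displace a random j
            j = random.choice([k for k in range(pop) if k != i])
            par = [random.uniform(lo, hi) if random.random() < 0.5 else v
                   for v in X[i]]
            fp = f(par)
            if fp < F[j]: X[j], F[j] = par, fp
    b = F.index(min(F))
    return X[b], F[b]

x_star, f_star = sos(sphere)
print(round(f_star, 8))
```

Greedy acceptance in every phase means no parameters beyond population size and iteration count need tuning, which is the "parameter-free" property the abstract emphasizes.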
ISBN (print): 9780769535579
In unstructured P2P file sharing systems, the non-uniform distribution of file popularity harms the usability of the system: searches for rare files are more likely to fail. A flooding algorithm with indices exchanged randomly via a Gossip protocol provides a probabilistic success guarantee. However, the problems of low success rates and large numbers of messages still exist for queries of rare files. In this paper, we propose an index scheme and IBFS, a search algorithm based on nodes' information capacity. Compared with the former approach, our method reduces bandwidth consumption by 50% on average and performs efficient object discovery, making the system more stable and available.