An improved cultural algorithm (CA) based on particle swarm optimization (PSO) and the whale optimization algorithm (WOA), termed CA-PSOWOA, is proposed to overcome the shortcomings of WOA and PSO, such as poor global exploration ability and a tendency to fall into local optima. Firstly, a nonlinear inertia weight strategy is introduced to improve PSO and WOA; then CA is introduced to balance the global exploration and local exploitation abilities of PSO and WOA. Tests on benchmark functions show that CA-PSOWOA improves global exploration ability and solution accuracy, and that its performance is better than that of traditional PSO, WOA and other comparison algorithms.
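A minimal sketch of the nonlinear inertia weight idea mentioned above, assuming an exponential-decay schedule (the exact schedule used by CA-PSOWOA is not given in the abstract); the velocity update is the standard PSO rule with this weight plugged in:

```python
import numpy as np

def nonlinear_inertia_weight(t, t_max, w_max=0.9, w_min=0.4):
    """Hypothetical nonlinear (exponential-decay) inertia weight schedule."""
    return w_min + (w_max - w_min) * np.exp(-4.0 * t / t_max)

def pso_velocity_update(v, x, p_best, g_best, t, t_max, c1=2.0, c2=2.0, rng=None):
    """Standard PSO velocity update using the nonlinear inertia weight above."""
    rng = rng or np.random.default_rng()
    w = nonlinear_inertia_weight(t, t_max)
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    return w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
```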
Machine learning technology is indispensable for big data. In machine learning, large-scale data can improve the accuracy of the model; however, complex machine learning algorithms depend on distributed in-memory computing to meet time and performance requirements. Big data in-memory computing enables parallel execution of the algorithms, which helps machine learning process large data sets. Hence, this paper proposes nonlinear machine learning algorithms implemented in a big data in-memory environment, where the implementation is optimized through data compression and biased sampling or loading. To fully allocate resources for scripts run in batches, we also implemented a machine learning framework to schedule the optimized algorithms mentioned above. The experimental results show that, after optimization, the mean error of the three algorithms was reduced by 40% and the mean running time was reduced by 90%.
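As a rough illustration of the biased-sampling step, the sketch below draws an importance-weighted sample of rows from an in-memory array; the weighting rule and function names are hypothetical, since the abstract does not detail the paper's actual compression/sampling strategy:

```python
import numpy as np

def biased_sample(X, weights, n_samples, rng=None):
    """Draw a biased (importance-weighted) sample of rows from X.
    Illustrative only; the paper's sampling rule is not specified."""
    rng = rng or np.random.default_rng()
    X = np.asarray(X)
    p = np.asarray(weights, dtype=float)
    p = p / p.sum()                       # normalize weights to probabilities
    idx = rng.choice(len(X), size=n_samples, replace=False, p=p)
    return X[idx], idx
```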
Parallel component applications are often deployed on heterogeneous clusters, and load balancing is very important for their performance. Existing load balancing methods have a high balancing cost and a poor balancing effect. Based on an analysis of the structure of parallel component applications, we establish a mathematical model of load balancing for parallel components on heterogeneous clusters. We use the quantum particle swarm optimization algorithm to search for the optimal solution of the proposed model and determine the best load balancing scheme. Compared with methods based on real-time detection and with other swarm intelligence optimization algorithms, our method has a lower balancing cost, fewer iterations and better performance.
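A minimal sketch of one quantum-behaved PSO (QPSO) position update, the standard formulation that the abstract's quantum particle swarm optimizer presumably builds on; the contraction-expansion coefficient value is an assumption:

```python
import numpy as np

def qpso_step(X, p_best, g_best, alpha=0.75, rng=None):
    """One quantum-behaved PSO position update.

    X      : (n_particles, dim) current positions
    p_best : (n_particles, dim) personal best positions
    g_best : (dim,) global best position
    alpha  : contraction-expansion coefficient (assumed value)
    """
    rng = rng or np.random.default_rng()
    mbest = p_best.mean(axis=0)                       # mean of personal bests
    phi = rng.random(X.shape)
    attractor = phi * p_best + (1.0 - phi) * g_best   # local attractor per particle
    u = rng.random(X.shape)
    sign = np.where(rng.random(X.shape) < 0.5, -1.0, 1.0)
    return attractor + sign * alpha * np.abs(mbest - X) * np.log(1.0 / u)
```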
In this paper, an efficient hydraulic optimization procedure is presented and applied to the design of hydraulic turbines. Computationally expensive industrial design optimization problems require an advanced optimization tool (the EASY software) and a fast CFD evaluation tool. The EASY optimization software is a Hierarchical Metamodel-Assisted Evolutionary Algorithm (HMAEA) that can be used in both single-objective (SOO) and multi-objective optimization (MOO) problems. In order to minimize the number of CFD solver calls during the design optimization, the MAEA relies on local metamodels, trained on the fly, that are used to identify the most promising members of each population; only these are then re-evaluated by the CPU-costly CFD solver. For additional savings in CPU cost, a hierarchical (two-level) optimization scheme is used in this paper, where a different evaluation tool, i.e. low- or high-fidelity software, can be linked at each level. The low level uses a low-CPU-cost, low-accuracy tool to explore the design space with minimal impact on the wall-clock time, while the high level uses the high-fidelity, high-CPU-cost tool to exploit the information from the low level. For the applications presented in this paper, the high-fidelity model is an incompressible Navier-Stokes equation solver and the low-fidelity model is based on the solution of the incompressible Euler equations. In order to optimize the geometry of hydraulic machines, an in-house automatic geometry and mesh generation tool has been integrated into the optimization tool chain. In what follows, two three-objective design optimization problems of 3D Francis hydraulic turbines are presented. The optimization objective functions concern the 'quality' of the runner outlet velocity profile, the cavitation behavior and the efficiency of the runner. The optimization results for the hydraulic turbine components, along with the performance of the presented optimization procedure, are also shown.
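A minimal sketch of the metamodel-assisted screening idea described above: a cheap surrogate trained on previously evaluated designs ranks the offspring, and only the most promising ones are re-evaluated with the expensive solver. The RBF interpolator is an assumed stand-in for EASY's on-the-fly local metamodels, and `expensive_eval` is a hypothetical placeholder for the CFD call:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def screen_offspring(archive_x, archive_f, offspring, expensive_eval, keep_fraction=0.3):
    """Rank offspring with a surrogate trained on the evaluation archive,
    then re-evaluate only the top fraction with the costly solver."""
    surrogate = RBFInterpolator(archive_x, archive_f)   # cheap model of past evaluations
    predicted = surrogate(offspring)                    # surrogate predictions
    n_keep = max(1, int(keep_fraction * len(offspring)))
    best = np.argsort(predicted)[:n_keep]               # minimization assumed
    exact = np.array([expensive_eval(x) for x in offspring[best]])
    return offspring[best], exact
```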
Multiple biological sequence alignment (MSA) is one of the fundamental tasks in computational biology. The problem is NP-hard and has been the subject of intense research during the last 10 years. MSA algorithms such as ClustalW have considerable serial sections (for building the guide tree), which limit the efficiency of code parallelization and optimization. In the era of big data, the genomic data ecosystem has accumulated huge amounts of data, creating a demand for innovative massively parallel algorithmic paradigms targeted at the efficient exploitation of the abundant parallel hardware resources within high-performance computing infrastructure. The goal of our investigation is to design and implement a software tool for massively parallel multiple sequence alignment, based on our randomized method, targeted at GPU-accelerated computing infrastructures. Parallel performance evaluation shows efficient scalability with respect to data size and machine size.
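For context, the sketch below shows the classic per-pair dynamic-programming kernel (Needleman-Wunsch scoring) that massively parallel MSA pipelines typically offload in bulk to the GPU; it is a plain illustration, not the randomized method proposed in the paper, and the scoring parameters are assumptions:

```python
import numpy as np

def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global pairwise alignment score via dynamic programming."""
    n, m = len(a), len(b)
    H = np.zeros((n + 1, m + 1))
    H[:, 0] = gap * np.arange(n + 1)   # gap penalties along the first column
    H[0, :] = gap * np.arange(m + 1)   # and the first row
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i, j] = max(H[i - 1, j - 1] + s, H[i - 1, j] + gap, H[i, j - 1] + gap)
    return H[n, m]
```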
Particle swarm optimization (PSO) is a swarm optimization algorithm with fast search speed and high efficiency that is suitable for practical optimization problems; since it was put forward, it has attracted extensive attention from scholars in various fields. A chaos-based quorum sensing particle swarm optimization (CPSOQS) algorithm is proposed to address the shortcomings of traditional PSO, such as poor handling of discrete optimization problems and the loss of search ability caused by the rapid decline of particle velocity in the later stage. Firstly, the search ability of the algorithm is improved by generating chaotic sequences and mapping them into the solution space of the problem domain; secondly, the quorum sensing mechanism of bacteria is introduced into PSO. Chaos search is used twice in the algorithm, which improves the global search ability. The CEC 2005 benchmark suite is used to test the performance of the algorithm. The experimental results show that, on 11 of the 14 selected test functions, CPSOQS achieves better results than the comparison algorithms.
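A minimal sketch of mapping a chaotic sequence into the search space, assuming a logistic map (the abstract does not say which chaotic map CPSOQS uses); each iterate in [0, 1] is scaled to the variable bounds to seed the swarm:

```python
import numpy as np

def chaotic_init(n_particles, dim, lower, upper, mu=4.0, seed=0):
    """Initialize a swarm from a logistic-map chaotic sequence mapped
    onto the search-space bounds. Illustrative choice of map and seeds."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    rng = np.random.default_rng(seed)
    z = rng.uniform(0.01, 0.99, dim)           # per-dimension chaotic seeds
    swarm = np.empty((n_particles, dim))
    for i in range(n_particles):
        z = mu * z * (1.0 - z)                 # logistic map iteration
        swarm[i] = lower + z * (upper - lower) # map [0, 1] chaos to the bounds
    return swarm
```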
Data Acquisition (DAQ) and Data Quality Monitoring (DQM) are key stages in the HEP data chain, where the data are processed and analyzed to obtain accurate monitoring quality indicators. These stages are complex, involving an intense processing workflow and requiring a high degree of interoperability between software and hardware facilities. Data recorded by DAQ sensors and devices are sampled to perform live (and offline) DQM of the detector status during data collection, giving the system and the scientists the ability to identify problems with extremely low latency and minimizing the amount of data that would otherwise be unsuitable for physics analysis. The DQM stage performs a large set of operations (Fast Fourier Transform (FFT), clustering, classification algorithms, region-of-interest selection, particle tracking, etc.) whose computing resources and time depend on the number of events in the experiment, the sampled data, the complexity of the tasks and the required quality performance. The objective of our work is to present a proposal aimed at a general optimization of the DQM stage that takes all these elements into account. Computational intelligence techniques such as evolutionary algorithms (EAs) can help improve performance and thus optimize task scheduling in DQM.
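As a deliberately simplified sketch of how an EA could schedule DQM tasks, the toy optimizer below assigns tasks (FFT, clustering, tracking, ...) to workers and minimizes the makespan; the representation, operators and rates are all assumptions, not the authors' actual scheme:

```python
import numpy as np

def evolve_schedule(task_costs, n_workers, pop_size=40, generations=200, rng=None):
    """Toy EA: chromosomes map each task to a worker; fitness is the makespan."""
    rng = rng or np.random.default_rng()
    task_costs = np.asarray(task_costs, dtype=float)
    n_tasks = len(task_costs)
    pop = rng.integers(0, n_workers, size=(pop_size, n_tasks))

    def makespan(assign):
        return max(task_costs[assign == w].sum() for w in range(n_workers))

    for _ in range(generations):
        fitness = np.array([makespan(ind) for ind in pop])
        parents = pop[np.argsort(fitness)[: pop_size // 2]]   # truncation selection
        children = parents.copy()
        mutate = rng.random(children.shape) < 0.1             # 10% mutation rate
        children[mutate] = rng.integers(0, n_workers, mutate.sum())
        pop = np.vstack([parents, children])
    fitness = np.array([makespan(ind) for ind in pop])
    return pop[np.argmin(fitness)], fitness.min()
```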
At present, China's power system is developing in the direction of parallel AC-DC operation, and AC-DC hybrid transmission has become the inevitable pattern of China's future power grid. In the operation of an AC/DC hybrid system, the optimization of multiple indexes is often considered. One economic problem of system operation is how to minimize the operating cost of the system, which requires the optimal solution of a system of nonlinear equations with constraints. Among existing algorithms, the interior point method has excellent convergence performance and has been widely used in power systems. In this paper, the basic principle of the interior point method is introduced, and the method is used to optimize the AC and DC power flow, with the network operating cost as the objective optimization index. Matlab is used for programming and simulation. After verifying the effectiveness of the algorithm on a small system, it is applied to a large network for calculation and analysis, and the adaptability of the algorithm is tested.
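A minimal sketch of the underlying problem class (constrained nonlinear cost minimization), using a made-up two-generator economic dispatch and SciPy's `trust-constr` solver as an interior-point-style stand-in for the paper's interior point OPF formulation; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint, Bounds

# Illustrative quadratic operating cost for two generators (assumed coefficients).
cost = lambda p: 0.004 * p[0] ** 2 + 2.0 * p[0] + 0.006 * p[1] ** 2 + 1.5 * p[1]

demand = 300.0                                    # MW, assumed system load
balance = NonlinearConstraint(lambda p: p.sum(), demand, demand)  # power balance
limits = Bounds([50.0, 50.0], [250.0, 250.0])     # MW generator limits

res = minimize(cost, x0=np.array([150.0, 150.0]),
               method="trust-constr", constraints=[balance], bounds=limits)
print(res.x, res.fun)                             # optimal dispatch and cost
```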
ISBN (print): 9781509035977
Scintillating media are used in a number of fields, from high-energy physics to biomedical devices, such as those for cancer detection and characterization, as well as in laser technology and industrial process control. Their performance must be enhanced in order to meet the increasingly sophisticated needs of these applications. Residual stress conditions are crucial for the correct performance of these materials. Moreover, an accurate study of the internal stress leads to a proper setting of the production process and prevents unwanted fractures of the brittle media. Non-invasive photoelastic methods are particularly suitable for this purpose, producing fringe-pattern images that are a signature of the crystal state. These methods have to be supported by algorithms dedicated to image analysis, so as to extract the geometrical parameters of the fringe pattern. Here, an optimized algorithm to analyze the fringe pattern acquired by a laser-based polariscope is discussed. The algorithm has been optimized and automated so as to reduce the measurement uncertainty while keeping the procedure time-efficient. The implemented procedure enables the software to recognize the region of interest (ROI) in the acquired image and reduces the uncertainty by a factor of 3.5.
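A simplified stand-in for the automated ROI recognition step (the real procedure is not detailed in the abstract): the sketch locates the bright fringe region as the bounding box of pixels above a fraction of the maximum intensity, with the threshold fraction chosen arbitrarily:

```python
import numpy as np

def find_roi(image, threshold_fraction=0.2):
    """Return (top, bottom, left, right) of the bright region in a
    fringe-pattern image, found by simple intensity thresholding."""
    img = np.asarray(image, dtype=float)
    mask = img > threshold_fraction * img.max()   # keep the bright fringe region
    rows, cols = np.where(mask)
    if rows.size == 0:
        raise ValueError("no pixels above threshold")
    return rows.min(), rows.max(), cols.min(), cols.max()
```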
As the core of semiconductor equipment, the vibration of a precision positioning stage has a great influence on its overall performance. The research object of this paper is a high-precision positioning stage for chip detection. The stage is driven by linear motors and supported by rolling linear guides. During its movement, vibration is generated under the influence of the cable disturbance force and motor thrust harmonics. Based on Newtonian mechanics, the dynamic equations of the motion stage are established and the causes of vibration in each degree of freedom are analyzed. Based on geometric nonlinear theory, the cable model is established with the help of finite element software, and the influence of installation dimensions on the dynamic behavior of the cable assembly is analyzed. The motor thrust formula is established based on the equivalent magnetizing current method, the characteristics and influencing factors of the thrust harmonics are analyzed, and a method to suppress the thrust harmonics at the source is sought. In this paper, the vibration source characteristics of the stage are obtained through theory and simulation, which lays a foundation for subsequent compensation by control strategies.
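A minimal sketch of a position-dependent thrust-ripple model of the kind discussed above: nominal thrust plus a fundamental and a second harmonic of the pole pitch. The amplitudes, pitch and nominal force are illustrative values, not parameters from the paper:

```python
import numpy as np

def thrust_with_harmonics(x, f_nominal=100.0, tau=0.032, ripple=(0.03, 0.015)):
    """Linear-motor thrust along stroke x: nominal force plus harmonic
    ripple terms at multiples of the (assumed) pole pitch tau."""
    x = np.asarray(x, dtype=float)
    force = np.full_like(x, f_nominal)
    for k, a in enumerate(ripple, start=1):
        force += a * f_nominal * np.sin(2.0 * np.pi * k * x / tau)  # k-th harmonic
    return force
```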