Reservoir history matching represents a crucial stage in the reservoir development process and aims to match model predictions with various observed field data, including production, seismic, and electromagnetic data. In contrast to the traditional manual approach, automatic history matching (AHM) significantly reduces the workload of reservoir engineers by automatically tuning the reservoir model parameters. AHM can be viewed as an automated solution to an inverse problem, and the selection of optimization algorithms is crucial for achieving effective model matching. However, the optimization process requires running numerous simulations. Surrogate models, obtained through simplification or approximation of the realistic model, offer a significant reduction in computational costs during the simulation process. In this paper, we provide an overview of the optimization algorithms and surrogate models commonly used in the AHM process, presenting the latest advancements in these methods. We analyze the strengths and limitations of these approaches and discuss the future challenges and directions of AHM, aiming to provide valuable references for further research and applications in this field.
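As a rough illustration of the loop this abstract describes, the sketch below matches a toy one-parameter model to a single observed datum by random search. The simulator, the observed rate, and the parameter range are all invented for the example, and the surrogate-modelling step is omitted; plain random search stands in for the optimizers the survey reviews.

```python
import random

random.seed(0)

# Hypothetical one-parameter "simulator": maps a permeability multiplier
# to a predicted production rate (stands in for a full reservoir run).
def simulator(perm_mult):
    return 100.0 * perm_mult / (1.0 + perm_mult)

observed_rate = 60.0  # invented observed field datum

# History-matching objective: squared mismatch between model and data.
def mismatch(perm_mult):
    return (simulator(perm_mult) - observed_rate) ** 2

# Plain random search as the optimizer; real AHM workflows would use
# gradient-based, evolutionary, or Bayesian optimizers instead.
best_x, best_f = None, float("inf")
for _ in range(2000):
    x = random.uniform(0.1, 10.0)
    f = mismatch(x)
    if f < best_f:
        best_x, best_f = x, f
```

The history-matched multiplier converges toward the value whose predicted rate reproduces the observation; a surrogate would replace `simulator` inside the loop to cut the cost of the many evaluations.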
The Bayesian optimization (BO) algorithm, proposed in the 1980s, is known for its advantages in optimizing parameters and conserving system resources. However, the specific application of this algorithm across various deep-learning models has not been thoroughly evaluated. This study proposes an experimental model to assess the dependency on reducing hyperparameters in Dense layers and dropout layers, as well as the performance and accuracy across 13 deep-learning models, implemented on two datasets with different characteristics and sample sizes (a cucumber disease recognition dataset and a tomato physiological states dataset). Additionally, the research utilizes Grad-CAM to clarify the positive impact of parameter reduction when using the BO algorithm. Through evaluations based on hyperparameter data, performance per iteration, accuracy, and model explainability, the study demonstrates that the BO algorithm optimizes hyperparameters through selective processes; however, it can significantly affect model accuracy. This is particularly evident in the InceptionResNetV2 and NASNetMobile models. The results contribute to a clearer understanding of the impact of optimization algorithms on deep-learning models, opening up new research directions in optimizing and elucidating models through the use of explainable artificial intelligence.
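The BO loop this abstract evaluates can be caricatured in a few lines. The sketch below is not a real Gaussian-process BO implementation: the validation-loss surface, the nearest-neighbour "surrogate", and the exploration weight are all invented stand-ins chosen for brevity, meant only to show the evaluate/model/acquire cycle.

```python
import math
import random

random.seed(1)

# Invented validation-loss surface over two scaled hyperparameters
# (x0 ~ dense-layer width, x1 ~ dropout rate); minimum at (0.3, 0.5).
def val_loss(x):
    return (x[0] - 0.3) ** 2 + (x[1] - 0.5) ** 2

def nearest(x, evaluated):
    return min(evaluated, key=lambda p: math.dist(x, p[0]))

# A few random initial evaluations, as BO implementations typically do.
xs = [(random.random(), random.random()) for _ in range(3)]
history = [(x, val_loss(x)) for x in xs]

kappa = 0.5  # exploration weight of the acquisition function
for _ in range(40):
    candidates = [(random.random(), random.random()) for _ in range(200)]
    # acquisition: predicted loss at the nearest evaluated point, minus
    # an uncertainty bonus that grows with distance from evaluated points
    def acquisition(c):
        px, pf = nearest(c, history)
        return pf - kappa * math.dist(c, px)
    x = min(candidates, key=acquisition)
    history.append((x, val_loss(x)))

best_x, best_f = min(history, key=lambda h: h[1])
```

A production setup would replace the nearest-neighbour rule with a Gaussian-process posterior and an acquisition function such as expected improvement.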
This study used a convolutional neural network (CNN) and optimization algorithms to develop an automatic gripping system for a robotic arm, overcoming the limitations of traditional learning methods. Unlike existing approaches that require retraining and parameter adjustments for various scenarios, this research demonstrates more flexible and accurate performance. The study built an experimental setting in a physical simulator so that multiple optimization algorithms could be constructed and object detection algorithms could be trained to extract the required data. Since the real-world environment inevitably differed slightly from the simulation environment, a small amount of real-world data was extracted to perform transfer learning. Finally, the gripping system was applied to a robotic arm to verify its performance. The experimental results revealed that the deep-learning model developed in this study achieved a 96% success rate for gripping objects, with a mean absolute error of 4.06, a root mean squared error of 5.03, and an R2 of 0.99. Optimization algorithms were used in the simulation environment to collect training data, which in turn enabled the system to automatically find the optimal posture for gripping objects. When the object to be gripped changed, the automatic gripping system retuned itself accordingly; thus, it did not require manual tuning. The automatic gripping system was able to control a robotic arm for the automatic gripping of objects, making it suitable for industrial applications that enhance the performance of production lines and render their arrangement more flexible.
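The error metrics quoted above (MAE, RMSE, R2) are standard; the snippet below shows how they are computed on made-up predicted-versus-actual values, not the paper's data.

```python
import math

# Illustrative predicted vs. actual values (invented, not the paper's data).
actual = [10.0, 20.0, 30.0, 40.0, 50.0]
predicted = [12.0, 19.0, 33.0, 38.0, 51.0]

n = len(actual)
errors = [p - a for p, a in zip(predicted, actual)]
mae = sum(abs(e) for e in errors) / n                 # mean absolute error
rmse = math.sqrt(sum(e * e for e in errors) / n)      # root mean squared error
mean_a = sum(actual) / n
ss_res = sum(e * e for e in errors)                   # residual sum of squares
ss_tot = sum((a - mean_a) ** 2 for a in actual)       # total sum of squares
r2 = 1 - ss_res / ss_tot                              # coefficient of determination
```

An R2 near 1 (here 0.981) indicates the predictions explain almost all of the variance in the actual values, which is what the paper's 0.99 figure conveys.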
Multinode upsets induced by radiation in integrated circuits have caused many circuit-reliability issues. This article proposes a single-event quadruple-node-upset (QNU) recovery latch (NEST) based on four circular feedback loops formed by 25 C-elements to realize high robustness. NEST achieves a 29.02% reduction in power consumption compared to the latch design and algorithm-based verification protected against multiple-node upset (LDAVPM) latch and a 51.44% reduction in setup time compared to the quadruple-node-upset recoverable and high-impedance-state insensitive latch (QRHIL). NEST also achieves a 99.29% QNU recovery rate. Furthermore, a high-speed, high-precision optimization algorithm for multinode-upset recovery is proposed and implemented. This algorithm achieves a 99.84% reduction in simulation time for exhaustive fault injections while maintaining accuracy equivalent to that of the high-performance Simulation Program with Integrated Circuit Emphasis (HSPICE).
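The C-elements from which NEST is built can be modelled behaviourally in a few lines. The sketch below is a generic two-input Muller C-element, not the NEST circuit itself: the output follows the inputs only when they agree, which is the property that lets C-element feedback loops filter out upset glitches.

```python
# Minimal behavioural model of a two-input Muller C-element.
class CElement:
    def __init__(self, initial=0):
        self.out = initial

    def update(self, a, b):
        # output follows the inputs only when they agree; otherwise it
        # holds its previous value, masking a transient flip on one input
        if a == b:
            self.out = a
        return self.out

c = CElement()
seq = [
    c.update(1, 1),  # inputs agree: output follows them
    c.update(1, 0),  # one input upset: output holds its old value
    c.update(0, 0),  # inputs agree again: output follows them
]
```

Chaining such elements into feedback loops, as NEST does with 25 of them, is what allows the stored value to be restored even after several internal nodes flip simultaneously.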
Wireless networks have freed users from the constraints of wired connections and improved the quality of human life. However, several aspects of wireless networking still require attention, such as network congestion, low communication reliability, and security. With the growing scale and expanding applications of the Internet of Things (IoT), demand for reliability, stability, and security within the network continues to rise. Wireless network monitoring is an effective mechanism in which distributed sniffers capture the data transmitted by wireless users, facilitating status analysis, fault diagnosis, and resource management of the network system. Because the number of sniffers is limited, optimizing their hardware configuration and channel assignment is paramount and can significantly increase the amount of captured data and the monitoring quality of the wireless network. First, the concept, classification, and characteristics of wireless network monitoring are introduced. Second, the application of optimization algorithms in wireless network monitoring is summarized, particularly channel selection algorithms used during data collection and channel and time-slot scheduling algorithms used during data aggregation. Finally, the challenges faced when building a wireless network monitoring system are discussed, and prospects for the development of this research field are put forward.
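The channel-assignment problem described above can be illustrated with a toy instance: invented per-sniffer, per-channel traffic volumes, a capture model in which a channel's traffic counts once if any sniffer monitors it, and exhaustive search standing in for the surveyed channel-selection algorithms.

```python
from itertools import product

# Invented traffic volume each sniffer can hear on each Wi-Fi channel.
traffic = {
    ("s1", 1): 30, ("s1", 6): 10, ("s1", 11): 5,
    ("s2", 1): 25, ("s2", 6): 20, ("s2", 11): 15,
    ("s3", 1): 5,  ("s3", 6): 30, ("s3", 11): 25,
}
sniffers = ["s1", "s2", "s3"]
channels = [1, 6, 11]

def captured(assign):
    # a channel's traffic is captured once, by the best-placed sniffer on it
    total = 0
    for c in channels:
        heard = [traffic[(s, c)] for s in sniffers if assign[s] == c]
        total += max(heard) if heard else 0
    return total

# Exhaustive search over all 3^3 assignments; real deployments need
# smarter algorithms because the search space grows exponentially.
best = max(
    (dict(zip(sniffers, combo)) for combo in product(channels, repeat=3)),
    key=captured,
)
```

With these numbers the optimum spreads the three sniffers over the three channels, capturing 75 traffic units, which is why coordinated channel assignment beats letting every sniffer listen to the busiest channel.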
In mechanical optimization design problems, the objective functions are often non-continuous or non-differentiable. For such objectives, existing optimal design algorithms often struggle to find the desired optimal solutions. In this paper, we incorporate the idea of gradient descent into cellular automata and propose a Cellular Gradient (CG) method. First, we give the basic rules and algorithmic framework of CG and design three kinds of growth and extinction rules. Then, the three evolutionary rules applied to cells within a single cycle are analyzed separately with respect to form and ordering. The best expressions for the cellular jealous-neighbor rule and the solitary-regeneration rule are given, and the most appropriate order in which to run the rules is selected. Finally, the solutions produced by the cellular gradient algorithm and other classical optimization design algorithms are compared using a multi-objective, multi-parameter mechanical optimization design problem as an example. The computational results show that the cellular gradient algorithm has an advantage over other algorithms in solving global and dynamic mechanical optimal design problems. The novelty of CG is that it provides a new way of thinking about optimization problems with global discontinuities.
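The paper's actual growth and extinction rules are not reproduced here; the sketch below only illustrates the general idea of combining per-cell gradient-style moves with extinction of the worst cell and regeneration near the best, on an invented non-smooth objective with both a kink and a jump discontinuity.

```python
import random

random.seed(2)

# Invented non-smooth objective: kink at x = 2, jump discontinuity at x = 0.
def f(x):
    return abs(x - 2) + (1.0 if x < 0 else 0.0)

def grad_sign(x, h=1e-3):
    # sign of a finite-difference slope; usable even where f has jumps
    g = f(x + h) - f(x - h)
    return (g > 0) - (g < 0)   # -1, 0, or +1

cells = [random.uniform(-10, 10) for _ in range(10)]  # cellular population
step = 0.5
for _ in range(100):
    # every living cell takes a fixed-size signed-gradient step
    cells = [x - step * grad_sign(x) for x in cells]
    cells.sort(key=f)
    # extinction of the worst cell; growth of a mutant near the best cell
    cells[-1] = cells[0] + random.gauss(0, 0.5)

best = min(cells, key=f)
```

Because only the sign of the local slope is used, the cells descend through the discontinuity without blowing up, and the extinction/regeneration step concentrates the population near the global minimum at x = 2.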
In recent years, several surrogate-assisted evolutionary algorithms (SAEAs) have been proposed to solve expensive optimization problems. These problems lack explicit expressions and are characterized by high evaluation costs. SAEAs leverage surrogate models to accelerate convergence towards the optimal region and reduce the number of function evaluations. While Gaussian processes (GPs) are widely used due to their robustness and capability of providing uncertainty estimates, their applicability becomes limited in scenarios involving a large number of samples or high-dimensional spaces, because their cubic time complexity in the number of samples results in prohibitive computational demands for large-scale problems. To address this challenge, this work presents an efficient surrogate-model-assisted estimation of distribution algorithm (ESAEDA). The method employs a random forest as the surrogate model and combines it with a GP-hedge acquisition strategy to ensure the efficiency and accuracy of model-assisted selection. An improved EDA model, a variable-width histogram model incorporating some unevaluated solutions, is used to generate new solutions. To demonstrate the benefits of the proposed method, we compared ESAEDA with several state-of-the-art surrogate-assisted evolutionary algorithms and the Bayesian optimization method. Experimental results demonstrate the superiority of the proposed algorithm over these comparison algorithms on two well-known test suites.
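A stripped-down EDA with a histogram model can be sketched as follows. It uses fixed-width bins rather than the paper's variable-width model, and omits the unevaluated-solution mechanism and the random-forest surrogate, so it only illustrates the sample/select/model/resample cycle on an invented one-dimensional objective.

```python
import random

random.seed(3)

def f(x):
    # stand-in for an expensive black-box objective (minimum at x = 0.7)
    return (x - 0.7) ** 2

lo, hi, bins, pop_size = 0.0, 1.0, 10, 30
width = (hi - lo) / bins
pop = [random.uniform(lo, hi) for _ in range(pop_size)]
for _ in range(30):
    pop.sort(key=f)
    elite = pop[:10]                        # truncation selection
    counts = [0] * bins                     # histogram model of the elite
    for x in elite:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    # sample the next population from the histogram distribution
    idx = random.choices(range(bins), weights=counts, k=pop_size)
    pop = [lo + i * width + random.uniform(0, width) for i in idx]

best = min(pop, key=f)
```

In a surrogate-assisted variant such as ESAEDA, most of the `f` calls inside this loop would be answered by the surrogate, with the true expensive function invoked only for the most promising candidates.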
Quantum computing has become a pivotal innovation in computational science, offering novel avenues for tackling the increasingly complex and high-dimensional optimization challenges inherent in engineering design. This paradigm shift is particularly pertinent in the domain of structural optimization, where the intricate interplay of design variables and constraints necessitates advanced computational strategies. In this vein, the gate-based variational quantum algorithm utilizes quantum superposition and entanglement to improve search efficiency in large solution spaces. This paper delves into the gate-based variational quantum algorithm for the discrete variable truss structure size optimization problem. By reformulating this optimization challenge into a quadratic, unconstrained binary optimization framework, we bridge the gap between the discrete nature of engineering optimization tasks and the quantum computational paradigm. A detailed algorithm is outlined, encompassing the translation of the truss optimization problem into the quantum problem, the initialization and iterative evolution of a quantum circuit tailored to this problem, and the integration of classical optimization techniques for parameter tuning. The proposed approach demonstrates the feasibility and potential of quantum computing to transform engineering design and optimization, with numerical experiments validating the effectiveness of the method and paving the way for future explorations in quantum-assisted engineering optimizations.
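The one-hot QUBO reformulation can be illustrated on a toy sizing problem: two members, three candidate cross-sections each, with invented area and feasibility data, and classical brute force standing in for the variational quantum circuit that would minimize the same energy.

```python
from itertools import product

areas = [1.0, 2.0, 3.0]    # candidate cross-section areas per member
demand = [1.5, 2.5]        # invented minimum feasible area per member
P = 10.0                   # penalty weight for constraint violations

def qubo_energy(bits):
    # bits: six binaries; bits[3m:3m+3] one-hot encode member m's choice
    e = 0.0
    for m in range(len(demand)):
        group = bits[3 * m: 3 * m + 3]
        e += P * (sum(group) - 1) ** 2       # one-hot: exactly one size chosen
        for k, b in enumerate(group):
            e += b * areas[k]                 # structural-weight objective
            if areas[k] < demand[m]:
                e += b * P                    # penalize infeasible sizes
    return e

# Classical brute force over the 2^6 bit strings; a gate-based variational
# quantum algorithm would instead search this energy landscape with a
# parameterized circuit tuned by a classical optimizer.
best = min(product([0, 1], repeat=6), key=qubo_energy)
```

The minimum-energy bit string selects the lightest feasible area for each member (2.0 and 3.0 here), showing how discrete engineering choices become unconstrained binary optimization once constraints are folded into penalties.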
The ultra-thin vapor chamber (UTVC) is extensively utilized across various fields due to its excellent heat-dissipation performance and good temperature uniformity. A data-driven modeling approach is well suited to predicting the thermal resistance of the UTVC because of its great flexibility and accuracy. In this paper, a novel approach that combines a radial basis function neural network (RBFNN) model with an improved adaptive differential fish swarm evolution algorithm (ADFEA) is proposed to optimize the UTVC thermal resistance. The data are obtained from experiments on a mesh-wick UTVC with dimensions of 124 × 14 × 1 mm. The mean square error of the RBFNN model was 0.00016 on the training set and 0.00027 on the test set, indicating that the model can accurately predict the thermal resistance of the UTVC. The ADFEA optimizer was designed to enhance optimization capability and convergence accuracy; it combines the differential evolution algorithm and the artificial fish swarm algorithm and incorporates a parameter-adaptation mechanism. The optimal operating parameters of the UTVC are obtained by ADFEA optimization, and the accuracy of the optimized results is verified by experiment. The proposed optimization method provides new insights for the design and optimization of UTVCs.
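The differential-evolution ingredient of ADFEA can be sketched in its classic DE/rand/1/bin form. The objective below is an invented stand-in for the RBFNN thermal-resistance prediction, and the fish-swarm and parameter-adaptation components are omitted; F and CR are fixed illustrative values.

```python
import random

random.seed(4)

def f(x):
    # invented stand-in for the RBFNN thermal-resistance surrogate
    return sum((xi - 0.5) ** 2 for xi in x)

dim, pop_size, F, CR = 3, 15, 0.6, 0.9
pop = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(pop_size)]
for _ in range(60):
    for i in range(pop_size):
        # mutation: rand/1 -- base vector plus scaled difference of two others
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        mutant = [a[d] + F * (b[d] - c[d]) for d in range(dim)]
        # binomial crossover: mix mutant and target, forcing one mutant gene
        jrand = random.randrange(dim)
        trial = [mutant[d] if (random.random() < CR or d == jrand) else pop[i][d]
                 for d in range(dim)]
        # greedy selection: the trial replaces the target only if no worse
        if f(trial) <= f(pop[i]):
            pop[i] = trial

best = min(pop, key=f)
```

ADFEA layers fish-swarm behaviors and adaptive F/CR on top of this core loop, but the mutation/crossover/selection cycle above is the differential-evolution backbone.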
In this study, a novel artificial meerkat optimization algorithm (AMA) is proposed to simulate the cooperative behaviors of meerkat populations. The AMA algorithm is designed with two sub-populations, multiple search strategies, a multi-stage elimination mechanism, and a combination of information sharing and greedy selection strategies. Drawing inspiration from the intra-population learning behavior, the algorithm introduces two search mechanisms: single-source learning and multi-source learning. Additionally, inspired by the sentinel behavior of meerkat populations, a search strategy is proposed that combines Gaussian and Lévy variations. Furthermore, inspired by the inter-population aggression behavior of meerkat populations, the AMA algorithm iteratively applies these four search strategies, retaining the most suitable strategy while eliminating others to enhance its applicability across complex optimization problems. Experimental results comparing the AMA algorithm with seven state-of-the-art algorithms on 53 test functions demonstrate that the AMA algorithm outperforms others on 71.7% of the test functions. Moreover, experiments on challenging engineering optimization problems confirm the superior performance of the AMA algorithm over alternative algorithms.
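The Gaussian-versus-Lévy mutation contrast mentioned above can be made concrete with Mantegna's algorithm for Lévy-stable step lengths, a construction commonly paired with Gaussian moves in swarm optimizers; the beta value and sample count below are illustrative, not the paper's.

```python
import math
import random

random.seed(5)

# Mantegna's algorithm for Levy-stable step lengths, beta in (1, 2).
def levy_step(beta=1.5):
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

gauss_moves = [random.gauss(0, 1) for _ in range(10000)]
levy_moves = [levy_step() for _ in range(10000)]

# Levy flights produce occasional very long jumps that Gaussian mutation
# almost never does -- useful for escaping local optima, while Gaussian
# moves refine solutions locally.
max_g = max(abs(m) for m in gauss_moves)
max_l = max(abs(m) for m in levy_moves)
```

Combining the two, as the sentinel-inspired strategy does, balances fine-grained local search (Gaussian) with rare long-range exploration (Lévy).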