ROP (Rate of Penetration) is a comprehensive indicator of the rock drilling process, and predicting drilling rates efficiently is important for optimizing resource allocation, reducing drilling costs, and managing drilling hazards. However, traditional models struggle to account for the many interacting factors, so their prediction accuracy rarely meets real drilling requirements. To provide efficient, accurate, and comprehensive information for drilling-operation decision-making, this study evaluated the applicability of four typical machine-learning regression algorithms for predicting ROP in the Troll West field: SVR (Support Vector Regression), linear regression, regression trees, and gradient boosting regression. These methods accept a larger number of input parameters. Comparing their predictions using R² (R-squared), explained variance, mean absolute error, mean squared error, median absolute error, and other performance indicators showed that the methods produce different results; gradient boosting regression performed best, with high prediction accuracy and very low error. The prediction accuracy of these methods is positively correlated with the proportion of data used for training, and accuracy improves gradually as more logging features are added. The methods also achieve a useful level of accuracy when predicting ROP for adjacent wells, which shows that this approach is suitable for ROP prediction in the Troll West field.
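The metric-based comparison the abstract describes can be sketched in a few lines of plain Python. The "ROP" values and the two predictors below are synthetic stand-ins, not the Troll West well logs or the paper's trained models:

```python
# Minimal sketch of comparing predictors with R^2, MAE, and MSE.
# All data here is synthetic and purely illustrative.

def r2(y, yhat):
    ybar = sum(y) / len(y)
    ss_res = sum((a - b) ** 2 for a, b in zip(y, yhat))
    ss_tot = sum((a - ybar) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mae(y, yhat):
    return sum(abs(a - b) for a, b in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - b) ** 2 for a, b in zip(y, yhat)) / len(y)

y_true = [10.0, 12.0, 15.0, 11.0, 14.0]        # measured "ROP" (synthetic)
pred_mean = [12.4] * 5                          # naive mean baseline
pred_model = [10.5, 11.8, 14.6, 11.2, 13.9]     # hypothetical trained model

for name, pred in [("mean baseline", pred_mean), ("model", pred_model)]:
    print(name, round(r2(y_true, pred), 3), round(mae(y_true, pred), 3))
```

The mean baseline scores R² = 0 by construction, so any model beating it has genuinely explained variance; this is the same logic behind the abstract's multi-metric comparison.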
The Capacitated Vehicle Routing Problem (CVRP) is an optimization problem that involves arranging vehicle routes while respecting vehicle capacity. This research compares the effectiveness of several heuristic (Path Cheapest Arc, Path Most Constrained Arc, Savings, Christofides) and metaheuristic (Greedy Descent, Guided Local Search, Simulated Annealing, Tabu Search) algorithms for determining routing scenarios and vehicle types for faculty transportation between the male campus in Ponorogo and the female campus in Mantingan, Ngawi, at Universitas Darussalam Gontor. The decision variables determine the vehicle routes, and the objective is to minimize the distance traveled. The capacity constraint covers two options: one vehicle with a capacity of 60 passengers, or four vehicles. The research uses Google OR-Tools with the Python programming language on Google Colab to facilitate the calculations. The results indicate that metaheuristic algorithms outperform heuristics on the more complex case study (four vehicles). This study recommends metaheuristic methods, specifically Christofides with Guided Local Search and Christofides with Simulated Annealing, for determining the best routes with the shortest distance and time. Further research could explore algorithms such as hyper-heuristics or matheuristics.
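The Savings heuristic listed above (Clarke-Wright) can be sketched without OR-Tools: start with one route per customer and repeatedly merge the pair of route ends with the largest saving while the vehicle capacity holds. The coordinates, demands, and 60-passenger capacity below form a toy instance, not the campus data:

```python
import math

# Toy CVRP instance (coordinates and demands are illustrative).
depot = (0, 0)
customers = {1: (2, 3), 2: (5, 1), 3: (6, 4), 4: (1, 6)}
demand = {1: 20, 2: 25, 3: 30, 4: 15}
capacity = 60

def d(i, j):
    pi = depot if i == 0 else customers[i]
    pj = depot if j == 0 else customers[j]
    return math.hypot(pi[0] - pj[0], pi[1] - pj[1])

# Clarke-Wright savings: s(i,j) = d(0,i) + d(0,j) - d(i,j) is the distance
# saved by serving i and j on one route instead of two depot round trips.
routes = {c: [c] for c in customers}      # route id -> customer sequence
load = {c: demand[c] for c in customers}

savings = sorted(((d(0, i) + d(0, j) - d(i, j), i, j)
                  for i in customers for j in customers if i < j),
                 reverse=True)

def route_of(c):
    for rid, seq in routes.items():
        if c in seq:
            return rid

for s, i, j in savings:
    ri, rj = route_of(i), route_of(j)
    if ri == rj:
        continue
    # merge only if i ends one route and j starts the other (interior
    # customers keep their positions), and the combined load fits.
    if routes[ri][-1] == i and routes[rj][0] == j and load[ri] + load[rj] <= capacity:
        routes[ri] += routes[rj]
        load[ri] += load[rj]
        del routes[rj], load[rj]

print(list(routes.values()))
```

Because total demand (90) exceeds one vehicle's capacity (60), the merge step necessarily leaves at least two routes, which is the kind of capacity-driven split the four-vehicle scenario exercises.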
As the disassembly of end-of-life products is affected by several dynamic and uncertain issues, many mathematical models and solution approaches have been established. However, with more extended objectives and constraints and different methods of disassembly, inconsistent models of product representations and disassembly-line types have become the main barrier to transferring research into practice. In this paper, a systematic overview of recent models is presented, summarising the input data, parameters, decision variables, constraints and objectives of disassembly line balancing. After discussing the adaptability and extensibility of these models for different environments, a unified encoding scheme is designed to apply typical multi-objective evolutionary algorithms to this problem with extensive decision variables and seven significant objectives. An algorithm comparison on four typical cases is then carried out, based on seven commonly used products, to verify the optimisation process for the integrated version of existing models and demonstrate the overall performance of the typical multi-objective evolutionary algorithms on this problem. The experimental results can serve as a baseline for further algorithm design and practical algorithm selection in these disassembly line balancing scenarios.
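At the core of any such multi-objective comparison is non-dominated (Pareto) filtering of candidate solutions. The sketch below shows only that generic building block; the objective vectors (e.g., cycle time and number of workstations, both minimized) are made up for illustration, not taken from the paper's seven products:

```python
# Pareto filtering over illustrative bi-objective solutions (both minimized).
solutions = [(10, 4), (8, 5), (9, 6), (12, 3), (8, 4)]

def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

pareto = [s for s in solutions
          if not any(dominates(o, s) for o in solutions if o != s)]
print(pareto)
```

Multi-objective evolutionary algorithms such as NSGA-II repeat this dominance check each generation to rank the population; the unified encoding the paper proposes is what lets the same check be applied across otherwise inconsistent line-balancing models.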
In recent years, meta-heuristic (MH) algorithms have emerged as powerful optimization tools, enabling efficient solutions to complex truss optimization tasks. In this study, a performance assessment of eight newly developed MH algorithms is presented for the optimal design of large-scale truss structures. The algorithms selected for testing include the Manta-ray Foraging Optimization (MRFO), Artificial Gorilla Troops Optimizer (GTO), Equilibrium Optimizer (EO), Henry Gas Solubility Optimizer (HGSO), Aquila Optimizer (AO), Heap-based Optimizer (HBO), Snake Optimizer (SO), and Artificial Hummingbird algorithm (AHA). Collectively, the eight techniques cover recent advances in nature-inspired MH approaches and use diverse search mechanisms for their optimization procedure. To effectively compare the performance of the eight techniques, five large-scale truss benchmarks (including the 4666-bar truss tower) were employed as test beds. For statistical significance, the Friedman ranking test was used to quantitatively compare the performance of the eight techniques. The results of the comparison show HBO as the best-performing method by consistently providing the lightest truss designs with the least computational effort. Quantitatively, HBO produced structures that were (on average) 21% lighter than the other seven techniques. In contrast, both AO and HGSO suffered from poor results and slow convergence speeds. HGSO in particular emerged as the worst-performing method and was prone to falling into local optima. In light of this, recommendations for improving the optimization performance for the eight techniques were made within the article.
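The Friedman ranking used above can be sketched directly: rank the algorithms within each benchmark, average the ranks, and compute the test statistic. The weight table below is invented for illustration (rows are benchmarks, columns are algorithms; lower weight is better), and ties are not handled in this sketch:

```python
# Friedman ranking sketch over an illustrative results table.
results = [
    [410.2, 415.9, 431.0],    # benchmark 1: e.g. HBO, EO, AO (hypothetical values)
    [122.5, 123.1, 130.4],    # benchmark 2
    [980.0, 1002.3, 1050.7],  # benchmark 3
]
k = len(results[0])           # number of algorithms
n = len(results)              # number of benchmarks

def ranks(row):
    # rank 1 = best (smallest) value; ties are not handled here
    order = sorted(range(len(row)), key=lambda i: row[i])
    r = [0.0] * len(row)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

rank_rows = [ranks(row) for row in results]
avg_rank = [sum(r[j] for r in rank_rows) / n for j in range(k)]

# Friedman statistic: chi2 = 12n/(k(k+1)) * (sum_j R_j^2 - k(k+1)^2 / 4)
chi2 = 12 * n / (k * (k + 1)) * (sum(R * R for R in avg_rank) - k * (k + 1) ** 2 / 4)
print(avg_rank, round(chi2, 3))
```

A large chi-square value relative to the chi-square distribution with k - 1 degrees of freedom indicates that the ranking differences (e.g., HBO consistently first) are statistically significant rather than noise.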
Recently, blockchain has been widely considered a promising technology for coping with the security issues in the Internet of Vehicles (IoV). However, given the high energy consumption, large data storage and heavy transmission load of blockchain and the limited resources of IoV devices, resource management urgently needs to be studied. In this paper, we propose a blockchain-based trust trading platform for the IoV scenario and formulate the Task Scheduling (TS) problem, which selects the transactions to assemble into a block and affects both the utilization of the wireless resources and the performance of the blockchain system. The optimization objective is designed by jointly considering the characteristics of wireless communications, the Quality of Service (QoS) and the implementation process of the blockchain. DRL algorithms are utilized as solutions, and the MCQ-TS, PG-TS, TDQ-TS and TDAC-TS algorithms are proposed based on several typical DRL methods. The computational complexity of the proposed algorithms is analyzed mathematically. Additionally, a fair and comprehensive comparison of the proposed DRL methods is conducted through the complexity analysis and simulation results, and the features and appropriate application scenarios of the proposed algorithms are summarized. PG-TS has the best optimization performance, while MCQ-TS, TDQ-TS and TDAC-TS perform similarly. MCQ-TS has the smallest complexity, and TDQ-TS and TDAC-TS have the best training efficiency.
Mixture choice experiments investigate people's preferences for products composed of different ingredients. To ensure the quality of the experimental design, many researchers use Bayesian optimal design methods. Efficient search algorithms are essential for obtaining such designs. Yet, research in the field of mixture choice experiments is not extensive. Our paper pioneers the use of a simulated annealing (SA) algorithm to construct Bayesian optimal designs for mixture choice experiments. Our SA algorithm not only accepts better solutions, but also has a certain probability of accepting inferior solutions. This approach effectively prevents rapid convergence, enabling broader exploration of the solution space. Although our SA algorithm may start more slowly than the widely used mixture coordinate-exchange method, it generally produces higher-quality mixture choice designs after a reasonable runtime. We demonstrate the superior performance of our SA algorithm through extensive computational experiments and a real-life example.
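The acceptance rule at the heart of the SA algorithm described above (always accept improvements; accept worse moves with probability exp(-delta/T), where T cools over time) can be sketched on a toy problem. The objective here is an arbitrary quadratic, not a Bayesian design criterion, and the cooling schedule and step size are illustrative choices:

```python
import math
import random

# Simulated-annealing sketch: minimize a toy 1-D objective.
random.seed(0)

def objective(x):
    return (x - 2.0) ** 2 + 1.0   # minimum at x = 2, value 1

x = 10.0          # deliberately poor starting point
best = x
T = 5.0           # initial temperature
for step in range(2000):
    cand = x + random.uniform(-0.5, 0.5)
    delta = objective(cand) - objective(x)
    # accept improvements always; accept worse moves with prob exp(-delta/T)
    if delta < 0 or random.random() < math.exp(-delta / T):
        x = cand
        if objective(x) < objective(best):
            best = x
    T *= 0.995    # geometric cooling

print(round(best, 2))
```

The occasional acceptance of worse moves is exactly what the abstract credits with preventing premature convergence: at high temperature the walk explores broadly, and as T shrinks the search settles into the best basin found.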
A lattice-based target design is presented for expanding research capabilities in subpixel target detection. The targets generate large numbers of subpixel samples with a priori knowledge of the exact subpixel fractions. This contrasts with traditional targets, where subpixel fractions are either unknown or estimated with significant uncertainty, with limited samples available in historical datasets. The subpixel targets diminish these drawbacks and generate constant subpixel samples invariant to effects of the system (e.g., image distortions, scan pattern) which would typically induce uncertainty. Simulations were performed to assess the accuracy of the proposed method of achieving samples with constant fractions. To validate and demonstrate the functionality of the design, four targets were fabricated with constant subpixel fractions (0.2, 0.4, 0.6, 0.8) and were deployed into a hyperspectral data collection. Spectral unmixing validated the retrieval of samples with constant fractions, and a general target detection scenario was demonstrated using 300-400 samples of each constant fraction. The impacts of a limited number of target samples (e.g., n = 5, 10) on receiver operating characteristic (ROC) curves were empirically assessed, with a significant reduction of variability observed when n > 100, illustrating the advantages when large sample sizes are available. Design limitations are discussed, along with applications (e.g., algorithm comparison) for the community.
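The empirical ROC curves mentioned above are built by sweeping a detection threshold over per-pixel detector scores and recording (false-positive rate, true-positive rate) pairs. The scores and labels below are synthetic, not drawn from the hyperspectral collection:

```python
# Empirical ROC construction from synthetic detector scores.
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2]
labels = [1,   1,   0,    1,   0,    0,   1,   0  ]   # 1 = target pixel

P = sum(labels)               # number of target samples
N = len(labels) - P           # number of background samples

# sweep each observed score as a threshold (score >= t counts as a detection)
roc = []
for t in sorted(set(scores), reverse=True):
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    roc.append((fp / N, tp / P))   # (FPR, TPR)

print(roc)
```

With only P = 4 target samples, each true positive moves the TPR by a coarse 0.25 step; this granularity is precisely why small-n ROC curves are highly variable and why the lattice targets' 300-400 samples per fraction are valuable.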
This study investigates the application of machine learning (ML) algorithms to enhance the precision of wine quality assessment, focusing specifically on Portuguese red wine. Amidst the growing interest in leveraging artificial intelligence (AI) for sensory analysis, our research distinguishes itself by employing a rigorous methodological framework. Our approach, named the 'Incremental Analysis of Baseline Accuracy,' identifies the chemical variables most predictive of wine quality. This framework aims to streamline the predictive process by pinpointing key variables that significantly influence quality assessments. In this paper, we demonstrate the feasibility of a methodology that precisely determines the criticality of chemical inputs, both their exact values and their correct order, to identify which inputs significantly contribute to the quality assessment of a sensory perception, such as taste. The centerpiece of our paper is a vibrant 3D pie chart that illustrates the percentage criticality of different input variables for perceiving the quality of red wine. This chart symbolizes the essence of our paper: a 'pie' representing the empirical conclusion, not mere conjecture. Through this paper, we have shown that it is possible to quantify a qualitative, perceptual aspect like taste perception, which is often believed to be assessable only through subjective conjecture. Moreover, our findings, facilitated by the Incremental Analysis of the Baseline Accuracy method, demonstrate that this perception can be systematically quantified, challenging traditional assumptions about sensory analysis.
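The generic idea behind an incremental baseline analysis (add one variable at a time and record how a simple classifier's accuracy changes) can be sketched as follows. The data and the nearest-centroid classifier are illustrative assumptions; the authors' actual 'Incremental Analysis of Baseline Accuracy' procedure may differ in its classifier, ordering rule, and validation scheme:

```python
# Incremental feature analysis sketch with a toy nearest-centroid classifier.
# Rows: (acidity, sugar, alcohol) -> quality class, all values synthetic.
data = [
    ((0.5, 0.9, 0.1), 0), ((0.2, 0.8, 0.2), 0), ((0.9, 0.7, 0.3), 0),
    ((0.4, 0.2, 0.9), 1), ((0.6, 0.1, 0.8), 1), ((0.8, 0.3, 0.7), 1),
]

def accuracy(feature_idx):
    # nearest-centroid classifier restricted to the chosen features
    cents = {}
    for cls in (0, 1):
        rows = [x for x, y in data if y == cls]
        cents[cls] = [sum(r[i] for r in rows) / len(rows) for i in feature_idx]
    correct = 0
    for x, y in data:
        proj = [x[i] for i in feature_idx]
        pred = min(cents, key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(proj, cents[c])))
        correct += pred == y
    return correct / len(data)

features = []
for f in (0, 1, 2):            # add variables one at a time, in a fixed order
    features.append(f)
    print(f, accuracy(features))
```

In this toy set the first feature (acidity) is nearly uninformative while the second (sugar) separates the classes, so the accuracy jump on adding it marks it as "critical", the same signal the paper's method uses to rank chemical inputs.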
We propose a framework for descriptively analyzing sets of partial orders based on the concept of depth functions. Despite intensive study in linear and metric spaces, there is very little discussion of depth functions for non-standard data types such as partial orders. We introduce an adaptation of the well-known simplicial depth to the set of all partial orders, the union-free generic (ufg) depth. Moreover, we utilize our ufg depth for a comparison of machine learning algorithms based on multidimensional performance measures. Concretely, we provide two examples of classifier comparisons on samples of standard benchmark data sets. Our results promisingly demonstrate the wide variety of analysis approaches that ufg methods enable. Furthermore, the examples show that our approach differs substantially from existing benchmarking approaches, and thus adds a new perspective to the vivid debate on classifier comparison.
The genetic algorithm (GA), particle swarm optimization (PSO) algorithm, and BOX algorithm have all been used in natural gas liquefaction process optimization. The three algorithms can each find a solution, but they adopt different strategies and computational efforts, so it is necessary to compare their performance. This article presents a performance comparison of the GA, PSO, and BOX algorithms for the optimization of four natural gas mixed refrigerant liquefaction processes. The results show that PSO has the best optimization performance, reducing the specific energy consumption (SEC) to 0.3233 kWh/kg, 0.2351 kWh/kg, 0.2489 kWh/kg, and 0.2427 kWh/kg for the single mixed refrigerant (SMR), dual mixed refrigerant (DMR), propane pre-cooling mixed refrigerant (C3MR), and mixed fluid cascade (MFC) processes, respectively. Furthermore, PSO also improved the exergy efficiency of the four processes to 35.34%, 48.59%, 45.90%, and 47.07%. The composite curve analysis shows that the heat exchanger optimized by PSO achieves more efficient heat transfer. The study also found that the overall optimization performance of PSO and GA is better than that of the BOX algorithm, with GA second only to PSO. This research would greatly assist process engineers in making the right decisions on process optimization to overcome energy efficiency challenges. (c) 2022 The Authors. Published by Elsevier Ltd.
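A minimal PSO of the kind compared above can be sketched in plain Python. The objective is a toy quadratic surrogate standing in for specific energy consumption, not a liquefaction-process simulation, and the swarm parameters (inertia w, cognitive/social weights c1, c2) are common textbook defaults:

```python
import random

# Minimal particle swarm optimization on a toy "SEC" surrogate.
random.seed(1)

def sec(x):
    # toy stand-in for specific energy consumption; minimum 0.2 at (0.3, 0.7)
    return (x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2 + 0.2

dim, n, iters = 2, 15, 100
w, c1, c2 = 0.7, 1.5, 1.5

pos = [[random.uniform(0, 1) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]                 # each particle's best position
gbest = min(pbest, key=sec)[:]              # swarm-wide best position

for _ in range(iters):
    for i in range(n):
        for k in range(dim):
            r1, r2 = random.random(), random.random()
            # velocity update: inertia + pull toward personal and global bests
            vel[i][k] = (w * vel[i][k]
                         + c1 * r1 * (pbest[i][k] - pos[i][k])
                         + c2 * r2 * (gbest[k] - pos[i][k]))
            pos[i][k] += vel[i][k]
        if sec(pos[i]) < sec(pbest[i]):
            pbest[i] = pos[i][:]
            if sec(pos[i]) < sec(gbest):
                gbest = pos[i][:]

print(round(sec(gbest), 4))
```

The personal-best and global-best pulls are the "strategy" that distinguishes PSO from GA's crossover/mutation and from the BOX algorithm's simplex-style moves; on smooth objectives this attraction typically gives PSO its fast convergence.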