In the big data era, mode division multiplexing, as a technology for extended channel capacity, demonstrates potential in enhancing parallel data processing capability. Consequently, developing a compact, high-performance mode converter through efficient design methods is an urgent requirement. However, traditional design methodologies for these converters face significant computational complexities and inefficiencies. Addressing this challenge, this paper introduces a novel topology optimization design method for mode converters employing a Dynamic Adjustment of Update Rate (DAUR). This approach markedly reduces computational overhead, accelerating the design process while ensuring high performance and compactness. As a proof-of-concept, an ultra-compact dual-mode converter was designed. The DAUR method demonstrated an 80% reduction in computational time compared to traditional methods, while maintaining a compact design (only 1.4 μm × 1.4 μm) and an insertion loss under 0.68 dB across a wavelength range of 1525 nm to 1575 nm. Meanwhile, simulated inter-mode crosstalk remained below -24 dB across a 40 nm bandwidth. A comprehensive comparison with traditional inverse design algorithms is presented, demonstrating our method's superior efficiency and effectiveness. Our findings suggest that DAUR not only streamlines the design process but also facilitates exploration into more complex micro-nano photonic structures with reduced resource investment.
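The abstract does not spell out the DAUR update rule, so the following is only a minimal sketch of the general idea: a gradient-based topology optimization loop whose update rate grows while the figure of merit (FOM) keeps improving and shrinks otherwise. The toy FOM, the simulate_fom_and_gradient stand-in for the electromagnetic adjoint solver, and the growth/shrink factors are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

# Toy stand-in for an electromagnetic solver: returns a figure of merit (FOM)
# and its gradient with respect to the design densities. In the paper this
# role would be played by forward/adjoint simulations of the mode converter.
rng = np.random.default_rng(0)
target = rng.random((20, 20))           # hypothetical "ideal" density pattern

def simulate_fom_and_gradient(rho):
    fom = -np.sum((rho - target) ** 2)  # higher is better
    grad = -2.0 * (rho - target)        # d(FOM)/d(rho)
    return fom, grad

def daur_topology_optimization(n_iter=200, lr=0.05, grow=1.2, shrink=0.5):
    """Gradient-ascent topology optimization with a dynamically adjusted
    update rate: the step grows while the FOM keeps improving and shrinks
    when an update makes it worse (illustrative rule only)."""
    rho = np.full((20, 20), 0.5)        # start from a uniform grey design
    fom, grad = simulate_fom_and_gradient(rho)
    for _ in range(n_iter):
        trial = np.clip(rho + lr * grad, 0.0, 1.0)      # keep densities in [0, 1]
        trial_fom, trial_grad = simulate_fom_and_gradient(trial)
        if trial_fom > fom:             # accept the step and speed up
            rho, fom, grad = trial, trial_fom, trial_grad
            lr *= grow
        else:                           # reject the step and slow down
            lr *= shrink
    return rho, fom

if __name__ == "__main__":
    design, final_fom = daur_topology_optimization()
    print(f"final FOM: {final_fom:.4f}")
```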
Network planning techniques such as the Critical Path Method (CPM) and the Program Evaluation and Review Technique (PERT) can be used to represent project plan management. To address the problem of finding the hypo-critical path in a network plan, the properties of total float, free float and safety float are first analyzed, and a total float theorem is deduced on the basis of this analysis. Secondly, a simple algorithm for finding the hypo-critical path is designed using these float properties and the total float theorem, and the correctness of the algorithm is analyzed. The proof shows that the algorithm achieves whole-network optimization through local optimization. Finally, an example is given to illustrate the algorithm.
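As an illustration of the quantities involved, the sketch below computes total floats with a standard CPM forward/backward pass on a small hypothetical network and then ranks complete paths by duration, so the longest path is the critical path and the next one is the hypo-critical path. The paper's algorithm derives the hypo-critical path from the float theorem rather than enumerating paths; the network data and the brute-force ranking here are purely illustrative.

```python
# Activity-on-node network: activity -> (duration, list of predecessors).
# Hypothetical example data, listed in topological order.
activities = {
    "A": (3, []),
    "B": (2, ["A"]),
    "C": (4, ["A"]),
    "D": (2, ["B"]),
    "E": (3, ["C"]),
    "F": (1, ["D", "E"]),
}

def cpm_floats(acts):
    # Forward pass: earliest start/finish times.
    es, ef = {}, {}
    for a in acts:
        dur, preds = acts[a]
        es[a] = max((ef[p] for p in preds), default=0)
        ef[a] = es[a] + dur
    horizon = max(ef.values())
    # Backward pass: latest start/finish times.
    ls, lf = {}, {}
    for a in reversed(list(acts)):
        succs = [s for s in acts if a in acts[s][1]]
        lf[a] = min((ls[s] for s in succs), default=horizon)
        ls[a] = lf[a] - acts[a][0]
    return {a: ls[a] - es[a] for a in acts}     # total float per activity

def paths(acts):
    """Enumerate all start-to-finish paths (fine for small networks)."""
    starts = [a for a in acts if not acts[a][1]]
    result = []
    def walk(path):
        succs = [s for s in acts if path[-1] in acts[s][1]]
        if not succs:
            result.append(path)
        for s in succs:
            walk(path + [s])
    for s in starts:
        walk([s])
    return result

if __name__ == "__main__":
    print("total floats:", cpm_floats(activities))
    # Critical path = longest path; hypo-critical path = next-longest one.
    ranked = sorted(paths(activities),
                    key=lambda p: sum(activities[a][0] for a in p), reverse=True)
    print("critical path:     ", " -> ".join(ranked[0]))
    print("hypo-critical path:", " -> ".join(ranked[1]))
```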
In gas condensate reservoirs, gas flow at high velocities enhances the gas permeability due to gas-liquid positive coupling, which results in a near-miscible flow condition. On the other hand, the augmented pressure drop due to non-Darcy flow reduces the gas permeability. Models for positive coupling and non-Darcy flow include several parameters that are rarely known from reliable laboratory special core analysis. We offer an alternative for tuning these parameters, in which the observed production history data are reproduced by the readjusted simulation model. In this study, history matching of observed production data was carried out using evolutionary optimization algorithms, including genetic algorithms, the neighborhood algorithm, the differential evolution algorithm and particle swarm optimization, with the genetic algorithm yielding faster convergence and a lower misfit value. Then, the Neighborhood Algorithm-Bayes was used to perform Bayesian posterior inference on the history-matched models and create posterior cumulative probability distributions for all uncertain parameters. Finally, Bayesian credible intervals for production rate and wellhead pressure were computed for the long-range forecast. Our new approach not only calibrates the gas effective permeability parameters to dynamic reservoir data, but also captures the uncertainty in parameter estimation and production forecasting.
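A minimal sketch of the history-matching idea, assuming a simple real-coded genetic algorithm that minimizes the misfit between observed and simulated production rates. The forward_model here is a toy stand-in for the reservoir simulator, and the parameter names, bounds and GA settings are assumptions; the neighborhood, differential evolution, PSO and NA-Bayes steps of the study are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tuning parameters (e.g. positive-coupling and non-Darcy
# coefficients) with assumed bounds; purely illustrative.
bounds = np.array([[0.1, 10.0],     # coupling coefficient
                   [0.0, 1.0],      # capillary-number exponent
                   [1e-9, 1e-6]])   # non-Darcy (Forchheimer) beta factor

def forward_model(theta, t):
    """Toy stand-in for the reservoir simulator: a synthetic rate curve."""
    a, b, beta = theta
    return a * np.exp(-b * t) / (1.0 + beta * 1e6 * t)

t_obs = np.linspace(0.0, 10.0, 50)
theta_true = np.array([5.0, 0.3, 2e-7])
q_obs = forward_model(theta_true, t_obs) + rng.normal(0, 0.05, t_obs.size)

def misfit(theta):
    return np.sum((forward_model(theta, t_obs) - q_obs) ** 2)

def genetic_history_match(pop_size=40, n_gen=100, pm=0.2):
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, 3)) * (hi - lo)
    for _ in range(n_gen):
        fit = np.array([misfit(ind) for ind in pop])
        new_pop = [pop[fit.argmin()].copy()]               # elitism
        while len(new_pop) < pop_size:
            # Tournament selection of two parents.
            i, j = rng.integers(pop_size, size=2), rng.integers(pop_size, size=2)
            p1, p2 = pop[i[fit[i].argmin()]], pop[j[fit[j].argmin()]]
            alpha = rng.random(3)                          # blend crossover
            child = alpha * p1 + (1 - alpha) * p2
            if rng.random() < pm:                          # Gaussian mutation
                child += rng.normal(0, 0.05, 3) * (hi - lo)
            new_pop.append(np.clip(child, lo, hi))
        pop = np.array(new_pop)
    fit = np.array([misfit(ind) for ind in pop])
    return pop[fit.argmin()], fit.min()

if __name__ == "__main__":
    best, best_misfit = genetic_history_match()
    print("best parameters:", best, "misfit:", round(best_misfit, 4))
```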
The Bat algorithm is an optimization technique that mimics the behaviour of bats and is powerful in finding optimal feature subsets. Classification is a data mining task that is useful in knowledge representation, but high-dimensional data degrade classification accuracy. According to the literature, feature selection and discretization are able to overcome this problem. Therefore, this study aims to show that the Bat algorithm has potential both as a discretization approach and as a feature selection method to improve classification accuracy. In this paper, a new hybrid Bat-K-Means algorithm, referred to as hBA, is proposed to convert continuous data into discrete data, producing an optimized discrete dataset. The Bat algorithm is then used for feature selection to select the optimal features from the optimized discrete dataset, thereby reducing the dimensionality of the data. The experiments use the k-Nearest Neighbor classifier to evaluate the effectiveness of discretization and feature selection in classification, comparing against the continuous dataset without feature selection, the discrete dataset without feature selection, and the continuous dataset without either discretization or feature selection. The experiments were carried out using a number of benchmark datasets from the UCI machine learning repository. The results show that classification accuracy is improved with the Bat-K-Means optimized discretization and the Bat-optimized feature selection.
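A rough sketch of the feature-selection stage only, assuming a binary Bat algorithm with k-NN cross-validation accuracy as the fitness; the Bat-K-Means discretization step (hBA), the loudness and pulse-rate updates of the full Bat algorithm, and the paper's parameter settings are not reproduced, and the scikit-learn breast-cancer data stands in for the UCI benchmarks.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
X, y = load_breast_cancer(return_X_y=True)     # stand-in for a UCI benchmark
n_features = X.shape[1]

def fitness(mask):
    """k-NN cross-validation accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(n_neighbors=5),
                           X[:, mask], y, cv=3).mean()

def binary_bat_feature_selection(n_bats=10, n_iter=20, fmin=0.0, fmax=2.0):
    # Each bat carries a binary feature mask, a velocity and a random frequency.
    masks = rng.random((n_bats, n_features)) > 0.5
    vel = np.zeros((n_bats, n_features))
    fit = np.array([fitness(m) for m in masks])
    best, best_fit = masks[fit.argmax()].copy(), fit.max()
    for _ in range(n_iter):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()
            vel[i] += (masks[i].astype(float) - best.astype(float)) * freq
            prob = 1.0 / (1.0 + np.exp(-vel[i]))           # sigmoid transfer function
            candidate = rng.random(n_features) < prob
            cand_fit = fitness(candidate)
            if cand_fit >= fit[i]:                         # greedy acceptance
                masks[i], fit[i] = candidate, cand_fit
                if cand_fit > best_fit:
                    best, best_fit = candidate.copy(), cand_fit
    return best, best_fit

if __name__ == "__main__":
    subset, acc = binary_bat_feature_selection()
    print(f"selected {subset.sum()} of {n_features} features, CV accuracy {acc:.3f}")
```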
INTRODUCTION: Research on multi-mode fusion methods for cultivating college students' independent learning ability helps college students change their learning modes and learning thinking, improves the utilization of educational resources, and supports the development of the academic environment and the reform of educational concepts. OBJECTIVES: To address the problems of college students' current independent learning modes, such as insufficiently in-depth research and reliance on a single means of study. METHODS: A method for cultivating college students' autonomous learning ability through the integration of intelligent optimization algorithms and multiple modes is proposed. Firstly, the current autonomous learning mode of college students and multiple learning modes are analyzed; then, using the butterfly optimization algorithm, a weight optimization method for cultivating college students' independent learning ability based on the fusion of multiple modes is proposed; finally, the validity and robustness of the proposed method are verified through experimental analysis. RESULTS: The results show that the proposed method achieves a high cultivation effect. CONCLUSION: The proposed method solves the problem of fusing the modes for cultivating college students' independent learning ability.
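For illustration only, the sketch below runs a basic butterfly optimization loop over a weight vector for fusing several learning modes; the objective function, the number of modes and the target scores are invented placeholders, since the paper's real evaluation data are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical objective: how well a weight vector over several learning modes
# matches assumed "target" effectiveness scores.
n_modes = 5
target = np.array([0.30, 0.25, 0.20, 0.15, 0.10])

def cultivation_score(w):
    w = np.abs(w) / np.abs(w).sum()            # normalise to a weight vector
    return -np.sum((w - target) ** 2)          # higher is better

def butterfly_optimize(n_butterflies=20, n_iter=200, c=0.01, a=0.1, p=0.8):
    pop = rng.random((n_butterflies, n_modes))
    fit = np.array([cultivation_score(x) for x in pop])
    best = pop[fit.argmax()].copy()
    for _ in range(n_iter):
        fragrance = c * np.abs(fit) ** a       # stimulus intensity -> fragrance
        for i in range(n_butterflies):
            r = rng.random()
            if rng.random() < p:               # global search toward the best
                step = (r ** 2) * best - pop[i]
            else:                              # local search between two peers
                j, k = rng.integers(n_butterflies, size=2)
                step = (r ** 2) * pop[j] - pop[k]
            pop[i] = pop[i] + fragrance[i] * step
            fit[i] = cultivation_score(pop[i])
        best = pop[fit.argmax()].copy()
    return np.abs(best) / np.abs(best).sum(), fit.max()

if __name__ == "__main__":
    weights, score = butterfly_optimize()
    print("fused mode weights:", np.round(weights, 3))
```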
The global spread of COVID-19 has profoundly affected health and economies, highlighting the need for precise epidemic trend predictions for effective interventions. In this study, we used infectious disease models to simulate and predict the trajectory of COVID-19. An SEIR (susceptible, exposed, infected, removed) model was established using Wuhan data to reflect the pandemic. We then trained a genetic algorithm-based SEIR (GA-SEIR) model using data from a specific U.S. region and focused on individual susceptibility and infection dynamics. By integrating socio-psychological factors, we achieved a significant enhancement to the GA-SEIR model, leading to the development of an optimized version. This refined GA-SEIR model significantly improved our ability to simulate the spread and control of the epidemic and to effectively track trends. Remarkably, it successfully predicted the resurgence of COVID-19 in mainland China in April 2023, demonstrating its robustness and reliability. The refined GA-SEIR model provides crucial insights for public health authorities, enabling them to design and implement proactive strategies for outbreak containment and mitigation. Its substantial contributions to epidemic modelling and public health planning are invaluable, particularly in managing and controlling respiratory infectious diseases such as COVID-19.
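A compact sketch of the GA-SEIR idea under stated assumptions: a daily-step SEIR simulation whose transmission, incubation and removal rates are fitted to noisy synthetic case counts by a small genetic algorithm. The socio-psychological refinements of the optimized model and the real Wuhan and U.S. data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

def seir(beta, sigma, gamma, n_days=120, N=1e7, E0=100, I0=50):
    """Daily-step SEIR simulation; returns the infectious count I per day."""
    S, E, I, R = N - E0 - I0, E0, I0, 0.0
    out = []
    for _ in range(n_days):
        new_E = beta * S * I / N        # newly exposed
        new_I = sigma * E               # exposed becoming infectious
        new_R = gamma * I               # infectious being removed
        S, E, I, R = S - new_E, E + new_E - new_I, I + new_I - new_R, R + new_R
        out.append(I)
    return np.array(out)

# Noisy synthetic "observed" curve generated from assumed true parameters.
true_params = (0.45, 1 / 5.2, 1 / 10)
observed = seir(*true_params) * rng.normal(1.0, 0.05, 120)

bounds = np.array([[0.05, 1.0],    # beta  (transmission rate)
                   [0.05, 0.5],    # sigma (1 / incubation period)
                   [0.02, 0.5]])   # gamma (1 / infectious period)

def misfit(p):
    return np.mean((seir(*p) - observed) ** 2)

def ga_fit(pop_size=30, n_gen=80):
    lo, hi = bounds[:, 0], bounds[:, 1]
    pop = lo + rng.random((pop_size, 3)) * (hi - lo)
    for _ in range(n_gen):
        err = np.array([misfit(p) for p in pop])
        parents = pop[err.argsort()[:pop_size // 2]]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(3) < 0.5, p1, p2)        # uniform crossover
            child += rng.normal(0, 0.02, 3) * (hi - lo)          # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        pop = np.vstack([parents, children])
    err = np.array([misfit(p) for p in pop])
    return pop[err.argmin()]

if __name__ == "__main__":
    beta, sigma, gamma = ga_fit()
    print(f"fitted beta={beta:.3f}, sigma={sigma:.3f}, gamma={gamma:.3f}, "
          f"R0={beta / gamma:.2f}")
```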
Industrial advancement, the use of large amounts of fossil fuels, vehicle pollution, and other factors drastically increase the Air Quality Index (AQI) of major cities. AQI analysis of major cities is essential so that governments can take proper preventive and proactive measures to reduce air pollution. This research incorporates artificial intelligence into AQI prediction based on air pollution data: an optimized machine learning model is proposed that combines Grey Wolf Optimization (GWO) with the Decision Tree (DT) algorithm for accurate prediction of AQI in major cities of India. Air quality data available in the Kaggle repository are used for experimentation, and major cities such as Delhi, Hyderabad, Kolkata, Bangalore, Visakhapatnam, and Chennai are considered for analysis. The performance of the proposed model is experimentally verified through metrics such as R-Square, RMSE, MSE, MAE, and accuracy. Existing machine learning models, such as the k-Nearest Neighbor, Random Forest regressor, and Support Vector regressor, are compared with the proposed model. The proposed model attains better prediction performance than traditional machine learning algorithms, with maximum accuracies of 88.98% for New Delhi, 91.49% for Bangalore, 94.48% for Kolkata, 97.66% for Hyderabad, 95.22% for Chennai, and 97.68% for Visakhapatnam.
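A minimal sketch of coupling GWO with a decision tree, assuming GWO tunes the tree's hyperparameters (max_depth, min_samples_split, min_samples_leaf) against cross-validated R²; the synthetic regression data stand in for the Kaggle air-quality dataset, and the hyperparameter encoding is an assumption rather than the paper's formulation.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)

# Synthetic stand-in for air-quality features (PM2.5, NO2, ... -> AQI).
X, y = make_regression(n_samples=600, n_features=10, noise=15.0, random_state=0)

# Search space: [max_depth, min_samples_split, min_samples_leaf], rounded to
# integers before training (an assumed encoding).
lo = np.array([2.0, 2.0, 1.0])
hi = np.array([20.0, 20.0, 10.0])

def fitness(pos):
    depth, split, leaf = np.clip(np.round(pos), lo, hi).astype(int)
    model = DecisionTreeRegressor(max_depth=depth, min_samples_split=split,
                                  min_samples_leaf=leaf, random_state=0)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

def grey_wolf_optimize(n_wolves=8, n_iter=20):
    wolves = lo + rng.random((n_wolves, 3)) * (hi - lo)
    for t in range(n_iter):
        scores = np.array([fitness(w) for w in wolves])
        alpha, beta, delta = wolves[scores.argsort()[::-1][:3]]   # three leaders
        a = 2.0 - 2.0 * t / n_iter                                # decreases 2 -> 0
        for i in range(n_wolves):
            new_pos = np.zeros(3)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(3), rng.random(3)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0                 # average of X1, X2, X3
            wolves[i] = np.clip(new_pos, lo, hi)
    scores = np.array([fitness(w) for w in wolves])
    best = np.clip(np.round(wolves[scores.argmax()]), lo, hi).astype(int)
    return best, scores.max()

if __name__ == "__main__":
    (depth, split, leaf), r2 = grey_wolf_optimize()
    print(f"best tree: max_depth={depth}, min_samples_split={split}, "
          f"min_samples_leaf={leaf}, CV R^2={r2:.3f}")
```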
This study presents the K-means clustering-based grey wolf optimizer, a new algorithm intended to improve the optimization capabilities of the conventional grey wolf optimizer for data clustering, the process of grouping similar items within a dataset into non-overlapping groups. The grey wolf optimizer is modelled on grey wolf hunting behaviour; however, it frequently lacks the exploration and exploitation capabilities that are essential for effective data clustering. This work focuses on enhancing the grey wolf optimizer with a new weight factor and with concepts from the K-means algorithm in order to increase diversity and avoid premature convergence. Using a partitional clustering-inspired fitness function, the K-means clustering-based grey wolf optimizer was extensively evaluated on ten numerical functions and multiple real-world datasets with varying levels of complexity and dimensionality. The methodology is based on incorporating the K-means algorithm concept to refine initial solutions and adding a weight factor to increase the diversity of solutions during the optimization phase. The results show that the K-means clustering-based grey wolf optimizer performs much better than the standard grey wolf optimizer in discovering optimal clustering solutions, indicating a higher capacity for effective exploration and exploitation of the solution space. The study found that the K-means clustering-based grey wolf optimizer was able to produce high-quality cluster centres in fewer iterations, demonstrating its efficacy and efficiency on various datasets. Finally, the study demonstrates the robustness and dependability of the K-means clustering-based grey wolf optimizer in resolving data clustering issues, which represents a significant advancement over conventional techniques. Beyond addressing the shortcomings of the original algorithm, the approach incorporates K-means and the innovative weight factor into the grey wolf optimizer.
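A sketch of how K-means seeding and a weight factor might be folded into GWO for clustering, under stated assumptions: a few wolves are initialized from K-means centres, the fitness is the within-cluster sum of squared errors, and a linearly decreasing weight blends each wolf's old position with the leader-guided move. The paper's exact weight factor is not given in the abstract, so this form is illustrative only.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

rng = np.random.default_rng(11)

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=1.2, random_state=0)
k, dim = 4, X.shape[1]

def sse(centres):
    """Partitional clustering fitness: sum of squared distances to nearest centre."""
    d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
    return (d.min(axis=1) ** 2).sum()

def kmeans_gwo(n_wolves=15, n_iter=60):
    lo, hi = X.min(axis=0), X.max(axis=0)
    wolves = lo + rng.random((n_wolves, k, dim)) * (hi - lo)
    # Refine part of the initial population with K-means centres.
    for i in range(3):
        wolves[i] = KMeans(n_clusters=k, n_init=1, random_state=i).fit(X).cluster_centers_
    for t in range(n_iter):
        scores = np.array([sse(w) for w in wolves])
        alpha, beta, delta = wolves[scores.argsort()[:3]]     # three best wolves
        a = 2.0 - 2.0 * t / n_iter
        w_t = 0.9 - 0.5 * t / n_iter        # assumed diversity weight factor
        for i in range(n_wolves):
            new_pos = np.zeros((k, dim))
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random((k, dim)), rng.random((k, dim))
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0
            # Weighted blend of the current position and the leader-guided move.
            wolves[i] = np.clip(w_t * wolves[i] + (1 - w_t) * new_pos, lo, hi)
    scores = np.array([sse(w) for w in wolves])
    return wolves[scores.argmin()], scores.min()

if __name__ == "__main__":
    centres, best_sse = kmeans_gwo()
    print("best SSE:", round(best_sse, 2))
    print(np.round(centres, 2))
```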
Image compression is one of the essential requirements for the efficient use of storage space and bandwidth. A technique based on fractal theory, known as fractal image compression, has been proposed for encoding images. In the encoding procedure, the search mechanism is considered one of the main problems of this technique. In this work, an attempt to speed up the encoding process with minimal loss of compressed image quality is made using the Scatter Search algorithm, a sibling of Tabu Search based on similar origins. The experimental results show a significant reduction in computation time, while the mean square error between blocks is decreased compared with full-search methods. Consequently, the decoding process showed that the reconstructed images were of high quality.
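A simplified, single-range-block illustration of replacing the exhaustive domain search with a scatter-search-style metaheuristic: candidate domain-block positions form a small reference set that is repeatedly combined (integer midpoints) and locally improved, with a least-squares contrast/brightness fit giving the matching error. The full encoder (block classification, isometries, quadtree partitioning) and the paper's actual scatter search design are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(13)

# Synthetic greyscale test image (stand-in for a real benchmark image).
img = rng.integers(0, 256, size=(64, 64)).astype(float)
R = 4                                   # range-block size; domain blocks are 2R x 2R

def downsample(block):
    """Average 2x2 pixels so a 2R x 2R domain block matches an R x R range block."""
    return block.reshape(R, 2, R, 2).mean(axis=(1, 3))

def match_error(range_blk, dx, dy):
    """Least-squares contrast/brightness fit of the domain block at (dx, dy)
    to the range block; returns the resulting mean squared error."""
    dom = downsample(img[dy:dy + 2 * R, dx:dx + 2 * R])
    d, r = dom.ravel(), range_blk.ravel()
    var = d.var()
    s = ((d - d.mean()) @ (r - r.mean())) / (var * d.size) if var > 0 else 0.0
    o = r.mean() - s * d.mean()
    return np.mean((s * d + o - r) ** 2)

def scatter_search_domain(range_blk, n_div=20, ref_size=6, n_iter=15):
    """Scatter-search-style search over domain-block positions (dx, dy)."""
    max_pos = img.shape[0] - 2 * R
    def err(p):
        return match_error(range_blk, int(p[0]), int(p[1]))
    cand = rng.integers(0, max_pos + 1, size=(n_div, 2))      # diverse starting set
    ref = sorted({tuple(p) for p in cand}, key=err)[:ref_size]
    for _ in range(n_iter):
        new = set(ref)
        for i in range(len(ref)):
            for j in range(i + 1, len(ref)):
                mid = ((ref[i][0] + ref[j][0]) // 2, (ref[i][1] + ref[j][1]) // 2)
                # Local improvement: best of the midpoint and its 4-neighbours.
                nbrs = [mid] + [(mid[0] + a, mid[1] + b)
                                for a, b in ((1, 0), (-1, 0), (0, 1), (0, -1))]
                nbrs = [p for p in nbrs if 0 <= p[0] <= max_pos and 0 <= p[1] <= max_pos]
                new.add(min(nbrs, key=err))
        ref = sorted(new, key=err)[:ref_size]                 # refreshed reference set
    return ref[0], err(ref[0])

if __name__ == "__main__":
    range_blk = img[0:R, 0:R]            # first range block as an example
    (dx, dy), mse = scatter_search_domain(range_blk)
    print(f"best domain block at ({dx}, {dy}), MSE = {mse:.2f}")
```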