There has been an exponential increase in discussions about bias in Artificial Intelligence (AI) systems. Bias in AI has typically been defined as a divergence from standard statistical patterns in the output of an AI...
The study presents the Half Max Insertion Heuristic (HMIH) as a novel approach to solving the Travelling Salesman Problem (TSP). The goal is to outperform existing techniques such as the Farthest Insertion Heuristic (FIH) and Nearest Neighbour Heuristic (NNH). The paper discusses the limitations of current construction tour heuristics, focusing particularly on the significant margin of error in FIH. It then proposes HMIH as an alternative that minimizes the increase in tour distance and includes more nodes. HMIH improves tour quality by starting with an initial tour consisting of a 'minimum' polygon and iteratively adding nodes using the novel Half Max criterion. The paper thoroughly examines and compares HMIH with FIH and NNH via rigorous testing on standard TSP benchmarks. The results indicate that HMIH consistently delivers superior performance, particularly with respect to tour cost and computational efficiency. HMIH's tours were sometimes 16% shorter than those generated by FIH and NNH, showcasing its potential and value as a novel benchmark for TSP solutions. The study used statistical methods, including Friedman's non-parametric test, to validate the performance of HMIH over FIH and NNH, guaranteeing that the identified advantages are statistically significant and consistent across various situations. This comprehensive analysis emphasizes the reliability and efficiency of the heuristic, making a compelling case for its use in solving TSP problems. The research shows that, in general, HMIH fared better than FIH in all cases studied, except for a few instances (pr439, eil51, and eil101) where FIH either performed equally or slightly better than HMIH. HMIH's efficiency is shown by its improvements in error percentage (δ) and goodness values (g) compared to FIH and NNH. In the att48 instance, HMIH had an error rate of 6.3%, whereas FIH had 14.6% and NNH had 20.9%, indicating that HMIH was closer to the optimal solution. HMIH consistently showed superior performance across many benchmarks.
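The construction idea this abstract describes, growing a small initial polygon by repeatedly inserting the node whose insertion costs the least, can be sketched generically. The sketch below is a plain cheapest-insertion heuristic for intuition only; it does not reproduce the paper's actual Half Max selection rule, and the point set and starting triangle are illustrative assumptions.

```python
import math

def tour_length(tour, pts):
    # Total length of a closed tour over 2-D points.
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def insertion_heuristic(pts):
    # Start from a triangle (a 'minimum' polygon) of the first three points,
    # then repeatedly insert the remaining node at the position that
    # minimises the increase in tour length (generic cheapest insertion,
    # NOT the paper's Half Max criterion).
    tour = [0, 1, 2]
    remaining = set(range(3, len(pts)))
    while remaining:
        best = None  # (cost increase, node, insertion position)
        for k in remaining:
            for i in range(len(tour)):
                a, b = tour[i], tour[(i + 1) % len(tour)]
                delta = (math.dist(pts[a], pts[k]) + math.dist(pts[k], pts[b])
                         - math.dist(pts[a], pts[b]))
                if best is None or delta < best[0]:
                    best = (delta, k, i + 1)
        _, k, pos = best
        tour.insert(pos, k)
        remaining.remove(k)
    return tour
```

On the four corners of a unit square this yields the optimal perimeter tour of length 4; construction heuristics like FIH, NNH, and HMIH differ mainly in which node they pick next, not in this insertion skeleton.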
Edge computing has gained widespread attention in cloud computing due to the increasing demands of AIoT applications and the evolution of edge architectures. One prevalent application in this domain is neural network ...
Data compression plays a vital role in data management and information theory by reducing data size. However, it lacks built-in security features such as secret keys or password-based access control, leaving sensitive data vulnerable to unauthorized access and misuse. With the exponential growth of digital data, robust security measures are essential. Encryption, a widely used approach, ensures data confidentiality by making it unreadable and unalterable through secret key mechanisms. Despite their individual benefits, both compression and encryption require significant computational resources. Moreover, performing them separately for the same data increases complexity and processing time. Addressing the need for integrated approaches that balance compression ratios and security levels, this research proposes an integrated data compression and encryption algorithm, named IDCE, for enhanced security and efficiency. It operates on 128-bit block sizes and a 256-bit secret key, and combines Huffman coding for compression with a Tent map for encryption. Additionally, an iterative Arnold cat map further enhances cryptographic confusion properties. Performance analysis validates the effectiveness of the proposed algorithm, showcasing competitive performance in terms of compression ratio, security, and overall efficiency when compared to prior algorithms in the field.
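As a toy illustration of the chaos-based encryption idea this abstract relies on, the sketch below derives a keystream from a tent map and XORs it with the data. It is not the IDCE algorithm: the 128-bit blocks, 256-bit key schedule, Huffman stage, and Arnold cat map confusion step are all omitted, and the parameters x0 and mu are arbitrary illustrative choices.

```python
def tent_map_keystream(x0, mu, nbytes):
    # Iterate the tent map x -> mu*x (x < 0.5) else mu*(1 - x), quantising
    # each chaotic state to one keystream byte. (x0, mu) act as the key.
    x = x0
    out = bytearray()
    for _ in range(nbytes):
        x = mu * x if x < 0.5 else mu * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def xor_cipher(data, x0=0.37, mu=1.99):
    # XOR stream cipher: applying it twice with the same key recovers
    # the plaintext, since (d ^ k) ^ k == d.
    ks = tent_map_keystream(x0, mu, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))
```

In an integrated scheme like the one described, the compressed Huffman bitstream rather than raw data would feed this stage, so the ciphertext inherits the reduced size.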
Recent progress made in the prediction, characterisation, and mitigation of multipactor discharge is reviewed for single- and two-surface geometries. First, an overview of basic concepts including secondary electron emission, electron kinetics under the force law, multipactor susceptibility, and saturation mechanisms is provided, followed by a discussion on multipactor mitigation techniques. These strategies are categorised into two broad areas: mitigation by engineered devices and by engineered radio frequency (rf) signals; each approach is useful in different scenarios. Key advances in multipactor physics and engineering during the past decade, such as novel multipactor prediction methods, understanding of space charge effects, schemes for controlling multipacting particle trajectories, frequency domain analysis, high frequency effects, and impact on rf signal quality, are discussed. In addition to vacuum electron multipaction, multipactor-induced ionization breakdown is also reviewed, and the recent advances are summarised.
Knowledge selection is a challenging task that often deals with semantic drift issues when knowledge is retrieved based on semantic similarity between a fact and a question. In addition, weak correlations embedded in pairs of facts and questions and the gigantic knowledge bases available for knowledge search are also unavoidable issues. This paper presents a scalable approach to address these issues. A sparse encoder and a dense encoder are coupled iteratively to retrieve fact candidates from a large-scale knowledge base. A pre-trained language model with two rounds of fine-tuning using results of the sparse and dense encoders is then used to re-rank fact candidates. Top-k facts are selected by a specific re-ranker. The scalable approach is applied on two textual inference datasets and one knowledge-grounded question answering dataset. Experimental results demonstrate that (1) the proposed approach can improve the performance of knowledge selection by reducing semantic drift; (2) the proposed approach produces outstanding results on the benchmark datasets. The code is available at https://***/hhhhzs666/KSIHER.
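The retrieve-then-re-rank pattern this abstract describes can be illustrated with a deliberately tiny sketch: a term-overlap score stands in for the sparse encoder, cosine similarity over hand-made vectors stands in for the dense encoder, and the fine-tuned language-model re-ranker is omitted. All names, vectors, and facts here are illustrative assumptions, not the paper's models or data.

```python
import math
from collections import Counter

def sparse_score(query, fact):
    # Term-overlap count: a crude stand-in for a BM25-style sparse encoder.
    q, f = Counter(query.lower().split()), Counter(fact.lower().split())
    return sum((q & f).values())

def dense_score(qv, fv):
    # Cosine similarity between (toy) dense embedding vectors.
    dot = sum(a * b for a, b in zip(qv, fv))
    nq = math.sqrt(sum(a * a for a in qv))
    nf = math.sqrt(sum(b * b for b in fv))
    return dot / (nq * nf) if nq and nf else 0.0

def select_topk(query, facts, qv, fvs, k=2, n_candidates=3):
    # Stage 1: sparse retrieval narrows the knowledge base to candidates.
    cands = sorted(range(len(facts)),
                   key=lambda i: -sparse_score(query, facts[i]))[:n_candidates]
    # Stage 2: dense re-ranking orders the candidates; keep the top-k.
    return sorted(cands, key=lambda i: -dense_score(qv, fvs[i]))[:k]
```

The two stages play the complementary roles the abstract assigns them: the sparse pass keeps search scalable over a gigantic knowledge base, while the dense pass counteracts the semantic drift of purely lexical matching.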
The U.S. National Science Foundation (NSF) celebrated the 20th anniversary of its research funding programs in cybersecurity, and more generally, secure and trustworthy computing, with a panel session at its conferenc...
Perovskite solar cells represent a revolutionary class of photovoltaic devices that have gained substantial attention for their exceptional performance and potential to provide an affordable and efficient solution for harnessing solar energy. These cells utilize perovskite-structured materials, typically hybrid organic-inorganic lead halide compounds, as the light-absorbing layer.
In this study, we employ advanced data-driven techniques to investigate the complex relationships between the yields of five major crops and various geographical and spatiotemporal features in ***. We analyze how these features influence crop yields by utilizing remotely sensed data. Our methodology incorporates clustering algorithms and correlation matrix analysis to identify significant patterns and dependencies, offering a comprehensive understanding of the factors affecting agricultural productivity in ***. To optimize the model's performance and identify the optimal hyperparameters, we implemented a comprehensive grid search across four distinct machine learning regressors: Random Forest, Extreme Gradient Boosting (XGBoost), Categorical Boosting (CatBoost), and Light Gradient-Boosting Machine (LightGBM). Each regressor offers unique functionalities, enhancing our exploration of potential model configurations. The top-performing models were selected based on evaluating multiple performance metrics, ensuring robust and accurate predictive capability. The results demonstrated that XGBoost and CatBoost perform better than the other regressors. We introduce synthetic crop data generated using a Variational Auto Encoder to address the challenges posed by limited agricultural data. By achieving high similarity scores with real-world data, our synthetic samples enhance model robustness, mitigate overfitting, and provide a viable solution for small dataset issues. Our approach distinguishes itself by creating a flexible model applicable to various crops. By integrating five crop datasets and generating high-quality synthetic data, we improve model performance, reduce overfitting, and enhance generalization. Our findings provide crucial insights into productivity drivers in key cropping systems, enabling robust recommendations and strengthening the decision-making capabilities of policymakers and farmers in data-scarce regions.
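The grid-search model-selection loop this abstract describes can be shown in miniature. The sketch below searches a one-dimensional hyperparameter grid for a hand-rolled k-NN regressor and keeps the setting with the lowest validation MSE; the study's actual regressors (XGBoost, CatBoost, LightGBM, Random Forest), grids, and metrics are not reproduced here, and the data are made up for illustration.

```python
def knn_predict(xs_train, ys_train, x, k):
    # Predict as the mean target of the k nearest training points (1-D k-NN).
    order = sorted(range(len(xs_train)), key=lambda i: abs(xs_train[i] - x))[:k]
    return sum(ys_train[i] for i in order) / k

def grid_search(xs_tr, ys_tr, xs_val, ys_val, grid):
    # Exhaustive grid search: evaluate every hyperparameter setting on a
    # held-out validation split and keep the one with the lowest MSE,
    # mirroring the model-selection loop described in the abstract.
    best_k, best_mse = None, float("inf")
    for k in grid:
        mse = sum((knn_predict(xs_tr, ys_tr, x, k) - y) ** 2
                  for x, y in zip(xs_val, ys_val)) / len(xs_val)
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k, best_mse
```

Real grid searches additionally cross-validate each setting and search several hyperparameters jointly, but the select-by-validation-score skeleton is the same.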
Automotive cyber-physical systems (ACPS) are typical cyber-physical systems because of the joint interaction between the cyber part and the physical part. Functional safety requirements (including response time and reliability requirements) for an ACPS function must be assured for safe driving. The auto industry is cost-sensitive, power-sensitive, and environmentally conscious, and energy consumption affects both the development efficiency of the ACPS and people's living environment. This paper solves the problem of optimizing the energy consumption of an ACPS function while assuring its functional safety requirement (i.e., energy-efficient functional safety for ACPS). However, simultaneously achieving minimum response time, maximum reliability, and minimum energy consumption is a conflicting problem, so solving it is a challenge. In this paper, we propose a three-stage design process toward energy-efficient functional safety for ACPS. The topic problem is divided into three sub-problems, namely, response time requirement verification (first stage), functional safety requirement verification (second stage), and functional safety-critical energy consumption optimization (third stage). The proposed energy-efficient functional safety design methodology is implemented by using automotive safety integrity level decomposition, which is defined in the ACPS functional safety standard ISO 26262. Experiments with real-life and synthetic ACPS functions reveal the advantages of the proposed design methodology toward energy-efficient functional safety for ACPS compared with state-of-the-art algorithms. IEEE
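The staged flow this abstract outlines, verify the response-time requirement, verify the reliability requirement, then minimize energy among the configurations that pass, can be sketched with toy models. The candidate frequency levels, the E ≈ f²·t dynamic-energy model, and the reliability function below are illustrative assumptions, not the paper's formulations or its ISO 26262 decomposition.

```python
def pick_frequency(cycles, deadline, rel_req, freqs, rel_of):
    # Stage 1: response time t = cycles / f must meet the deadline.
    # Stage 2: reliability at frequency f must meet the requirement.
    # Stage 3: among the configurations that pass both checks, keep the
    # one with the lowest dynamic energy (toy model: E = f^2 * t).
    best = None  # (frequency, energy)
    for f in freqs:
        t = cycles / f
        energy = f ** 2 * t
        if t <= deadline and rel_of(f) >= rel_req:
            if best is None or energy < best[1]:
                best = (f, energy)
    return best
```

The sketch makes the conflict the abstract names concrete: the slowest feasible frequency minimizes energy but can violate the deadline or reliability requirement, so the three checks cannot be optimized independently.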