Diabetic retinopathy (DR) is a severe complication of diabetes affecting the retina, potentially leading to vision impairment or blindness. Deep learning for diabetic retinopathy identification leverages intricate neu...
This paper introduces a novel local fine-grained visual tracking task, aiming to precisely locate arbitrary local parts of objects. This task is motivated by our observation that in many realistic scenarios, the user ...
As device complexity keeps increasing, blockchain networks have been celebrated as the cornerstone of numerous prominent platforms owing to their ability to provide distributed and immutable ledgers and data-driven autonomous services. The distributed consensus algorithm is the core component that directly dictates the performance and properties of blockchain networks. However, the inherent characteristics of the shared wireless medium, such as fading, interference, and openness, pose significant challenges to achieving consensus within these networks, especially in the presence of malicious jamming attacks. To cope with this severe consensus problem, in this paper we present a distributed jamming-resilient consensus algorithm for blockchain networks in wireless environments, where the adversary can jam the communication channel by injecting jamming signals. Based on a non-binary slight jamming model, we propose a distributed four-stage algorithm to achieve consensus in the wireless blockchain network, comprising leader election, leader broadcast, leader aggregation, and leader announcement stages. With high probability, we prove that our jamming-resilient algorithm ensures the validity, agreement, termination, and total order properties of consensus with a time complexity of O(n). Both theoretical analyses and empirical simulations are conducted to verify the consistency and efficiency of our algorithm.
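The abstract names four stages but not their internal rules, so the following Python sketch only illustrates how such a round could be structured under a per-slot jamming probability. The jamming rate, retry bound, majority threshold, and random-priority election rule are all assumptions for illustration, not the paper's algorithm.

```python
import random

# Hypothetical parameters; the paper's actual jamming model and thresholds are not
# reproduced here. P_JAM models a "slight" (non-binary, per-slot) jamming probability.
P_JAM = 0.2          # probability that a given broadcast slot is jammed (assumption)
N_NODES = 8          # network size (assumption)
MAX_RETRIES = 5      # bounded retransmissions per stage (assumption)

def send(msg, rng):
    """Deliver msg unless the slot is jammed; returns None on a jammed slot."""
    return None if rng.random() < P_JAM else msg

def broadcast_with_retries(msg, rng):
    """Retry a broadcast for a bounded number of slots to tolerate slight jamming."""
    for _ in range(MAX_RETRIES):
        delivered = send(msg, rng)
        if delivered is not None:
            return delivered
    return None  # stage fails for this round; a real protocol would re-run the round

def consensus_round(proposals, rng):
    """One round with the four stages named in the abstract:
    leader election -> leader broadcast -> leader aggregation -> leader announcement."""
    # 1) Leader election: each node draws a random priority; highest wins (toy rule).
    priorities = {node: rng.random() for node in proposals}
    leader = max(priorities, key=priorities.get)

    # 2) Leader broadcast: the leader disseminates its proposal despite jamming.
    proposal = broadcast_with_retries(proposals[leader], rng)
    if proposal is None:
        return None

    # 3) Leader aggregation: the leader collects acknowledgements from the other nodes.
    acks = sum(1 for node in proposals if node != leader
               and broadcast_with_retries(("ack", node), rng) is not None)
    if acks < (len(proposals) - 1) // 2:   # simple majority threshold (assumption)
        return None

    # 4) Leader announcement: the agreed value is announced and committed.
    return broadcast_with_retries(("commit", proposal), rng)

rng = random.Random(7)
proposals = {f"node{i}": f"block_{i}" for i in range(N_NODES)}
print(consensus_round(proposals, rng))
```

In this toy setup a failed stage simply aborts the round; the actual protocol is proved to terminate with high probability and O(n) time complexity under the non-binary slight jamming model.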
This paper investigates the input-to-state stabilization of discrete-time Markov jump systems. A quantized control scheme that includes coding and decoding procedures is proposed. The relationship between the error in...
Pulsed current cathodic protection (PCCP) could be more effective than direct current cathodic protection (DCCP) for mitigating corrosion in buried structures in the oil and gas industries if appropriate pulse parameters are chosen. The purpose of this research is to present the corrosion prevention mechanism of the PCCP technique by taking into account the effects of duty cycle as well as frequency, modeling the relationships between pulse parameters (frequency and duty cycle) and system outputs (corrosion rate, protective current, and pipe-to-soil potential), and finally identifying the most effective protection conditions over a wide range of frequency (2–10 kHz) and duty cycle (25%–75%). For this, pipe-to-soil potential, pH, current and power consumption, corrosion rate, surface deposits, and pitting corrosion were examined. To model the input-output relationship in the PCCP method, a data-driven machine learning approach was used by training an artificial neural network (ANN). The results revealed that the PCCP system could yield the best protection conditions at 10 kHz frequency and 50% duty cycle, resulting in the longest protection length with the lowest corrosion rate at a consumption current 0.3 times that of the DCCP method. In the frequency range of 6–10 kHz and duty cycles of 50%–75%, SEM images indicated a uniform distribution of calcite deposits and no pits on the cathode surface.
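As a rough illustration of the data-driven modeling step, the sketch below fits a small feed-forward network mapping (frequency, duty cycle) to the three reported outputs. The data here are random placeholders and the architecture, scaling, and training settings are assumptions; only the input ranges (2–10 kHz, 25%–75%) follow the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Inputs: frequency in kHz (2-10) and duty cycle in % (25-75), matching the studied ranges.
X = np.column_stack([
    rng.uniform(2.0, 10.0, size=200),   # frequency [kHz]
    rng.uniform(25.0, 75.0, size=200),  # duty cycle [%]
])
# Placeholder targets with the same shape as the paper's three outputs:
# corrosion rate, protective current, pipe-to-soil potential.
y = rng.normal(size=(200, 3))

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)

# Query the fitted surrogate at the operating point the study found most effective.
print(model.predict([[10.0, 50.0]]))    # 10 kHz, 50% duty cycle
```

A fitted surrogate of this kind can then be queried across the frequency/duty-cycle grid to locate the most protective operating point, which the study reports as 10 kHz and 50% duty cycle.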
Processing big data poses a significant challenge when transitioning from sequential to distributed code, primarily due to the extensive scale at which data is handled. This complexity arises from both syntax and sema...
Preserving formal style in neural machine translation (NMT) is essential, yet it is often overlooked as an optimization objective of the training process. This oversight can lead to translations that, though accurate, lack formality. In this paper, we propose a method to improve NMT formality with large language models (LLMs), which combines the style transfer and evaluation capabilities of an LLM with the high-quality translation generation ability of NMT models. The proposed method (namely INMTF) encompasses two approaches. The first involves a revision approach that uses an LLM to revise the NMT-generated translation, ensuring a formal translation style. The second employs an LLM as a reward model for scoring translation formality and then uses reinforcement learning algorithms to fine-tune the NMT model to maximize the reward score, thereby enhancing the formality of the generated translations. Considering the substantial parameter size of LLMs, we also explore methods to reduce the computational cost of INMTF. Experimental results demonstrate that INMTF significantly outperforms baselines in terms of translation formality and translation quality, with an improvement of +9.19 style accuracy points in the German-to-English task and +2.16 COMET score in the Russian-to-English task. Furthermore, our work demonstrates the potential of integrating LLMs within NMT frameworks to bridge the gap between NMT outputs and the formality required in various real-world translation scenarios.
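As a rough sketch of the two approaches, the snippet below shows the revision prompt and the formality-reward interface with the LLM call stubbed out. The prompts, the `call_llm` placeholder, and the 1-5 rating scale are assumptions for illustration; INMTF's actual prompts, models, and reinforcement learning algorithm are not reproduced here.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM API call; returns canned answers for illustration."""
    return "5" if "Rate" in prompt else "Good day, how are you?"

# Approach 1: revision -- ask the LLM to rewrite the NMT output in a formal register.
def revise_for_formality(nmt_translation: str) -> str:
    prompt = (
        "Rewrite the following translation in a formal style, "
        f"preserving its meaning:\n{nmt_translation}"
    )
    return call_llm(prompt)

# Approach 2: reward -- ask the LLM to score formality; the normalized score would be
# the reward signal used to fine-tune the NMT model with reinforcement learning.
def formality_reward(nmt_translation: str) -> float:
    prompt = f"Rate the formality of this sentence from 1 to 5:\n{nmt_translation}"
    return float(call_llm(prompt)) / 5.0   # normalize to [0, 1]

draft = "hey, how r u?"
print(revise_for_formality(draft))
print(formality_reward(draft))
```

In the second approach, the normalized score would serve as the reward that a policy-gradient method maximizes while fine-tuning the NMT model.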
This paper presents a method for the optimized reconfiguration of radial distribution systems that explicitly considers the protection systems constraints. A fully automated method based on graph analysis is proposed ...
The methods of network attacks have become increasingly sophisticated, rendering traditional cybersecurity defense mechanisms insufficient to address novel and complex threats effectively. In recent years, artificial intelligence has achieved significant progress in the field of network security. However, many challenges and issues remain, particularly regarding the interpretability of deep learning and ensemble learning models. To address the challenge of enhancing the interpretability of network attack prediction models, this paper proposes a method that combines Light Gradient Boosting Machine (LGBM) and SHapley Additive exPlanations (SHAP). LGBM is employed to model anomalous fluctuations in various network indicators, enabling the rapid and accurate identification and prediction of potential network attack types and thereby facilitating the implementation of timely defense measures. The model achieved an accuracy of 0.977, precision of 0.985, recall of 0.975, and an F1 score of 0.979, demonstrating better performance than other models in the domain of network attack prediction. SHAP is utilized to analyze the black-box decision-making process of the model, providing interpretability by quantifying the contribution of each feature to the prediction results and elucidating the relationships between features. The experimental results demonstrate that the network attack prediction model based on LGBM exhibits superior accuracy and outstanding predictive performance. Furthermore, the SHAP-based interpretability analysis significantly improves the model's transparency and interpretability.
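A minimal sketch of such an LGBM-plus-SHAP pipeline is given below, using a synthetic placeholder dataset in place of the paper's network-traffic features and attack labels; the hyperparameters are assumptions, and the reported metrics (accuracy 0.977, etc.) are not reproduced by this toy example.

```python
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic placeholder data standing in for network indicators and attack-type labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted model for attack-type prediction.
model = LGBMClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# SHAP attributes each prediction to per-feature contributions, exposing the
# otherwise black-box decision process of the boosted trees.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # per-feature contributions for each test sample
```

The resulting SHAP values can be aggregated, for example in a summary plot, to rank which network indicators drive the predicted attack types, which is the kind of interpretability analysis the paper describes.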