State-space graphs and automata serve as fundamental tools for modeling and analyzing the behavior of computational systems. Recurrent neural networks (RNNs) and language models are deeply intertwined, as RNNs provide...
As modern communication technology advances apace, digital communication signals identification plays an important role in cognitive radio networks and in communication monitoring and management. AI has become a promising solution to this problem due to its powerful modeling capability, which has become a consensus in academia and industry. However, because of the data-dependence and inexplicability of AI models and the openness of electromagnetic space, the physical-layer digital communication signals identification model is threatened by adversarial attacks. Adversarial examples pose a common threat to AI models, where well-designed and slight perturbations added to input data can cause wrong results. Therefore, the security of AI models for digital communication signals identification is the premise of their efficient and credible application. In this paper, we first launch adversarial attacks on the end-to-end AI model for automatic modulation classification, and then we explain and present three defense mechanisms based on the adversarial principle. Next, we present more detailed adversarial indicators to evaluate attack and defense behavior. Finally, a demonstration verification system is developed to show that the adversarial attack is a real threat to the digital communication signals identification model, which should be paid more attention in future research.
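The attack itself is not detailed in this abstract; as a rough sketch of the adversarial-example idea it describes, the snippet below applies a generic FGSM-style perturbation to an I/Q signal, assuming some differentiable PyTorch modulation classifier (the model, input shape, and epsilon are illustrative, not the paper's).

```python
import torch

def fgsm_perturb(model, x, y, eps=0.01):
    """Generic FGSM-style attack: nudge the input in the direction of the
    loss gradient so that a small, hard-to-notice perturbation can flip the
    classifier's modulation prediction. `model` is any differentiable
    classifier over I/Q frames (hypothetical here, not the paper's model)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Usage sketch: x is a batch of I/Q frames, e.g. shape (batch, 2, 128).
# x_adv = fgsm_perturb(amc_model, x, labels, eps=0.01)
```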
This paper presents a high-security medical image encryption method that leverages a novel and robust sine-cosine map. The map demonstrates remarkable chaotic dynamics over a wide range of parameters. We employ nonlinear analytical tools to thoroughly investigate the dynamics of the chaotic map, which allows us to select optimal parameter configurations for the encryption scheme. Our findings indicate that the proposed sine-cosine map is capable of generating a rich variety of chaotic attractors, an essential characteristic for effective encryption. The encryption technique is based on bit-plane decomposition, wherein a plain image is divided into distinct bit planes. These planes are organized into two matrices: one containing the most significant bit planes and the other housing the least significant ones. The subsequent phases of chaotic confusion and diffusion utilize these matrices to enhance security. An auxiliary matrix is then generated, comprising the combined bit planes that yield the final encrypted image. Experimental results demonstrate that our proposed technique achieves a commendable level of security for safeguarding sensitive patient information in medical images. As a result, image quality is evaluated using the Structural Similarity Index (SSIM), yielding values close to zero for encrypted images and approaching one for decrypted images. Additionally, the entropy values of the encrypted images are near 8, with a Number of Pixel Change Rate (NPCR) and Unified Average Change Intensity (UACI) exceeding 99.50% and 33%, respectively. Finally, quantitative assessments of occlusion attacks, along with comparisons to leading algorithms, validate the integrity and efficacy of our medical image encryption approach.
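The confusion and diffusion stages are specific to the paper, but the bit-plane decomposition step it describes is standard; a minimal NumPy sketch of that step, splitting an 8-bit image into planes and grouping the most and least significant halves into two matrices, might look like this (grouping into 4+4 planes is an assumption).

```python
import numpy as np

def bit_planes(img):
    """Split an 8-bit grayscale image into its 8 bit planes.
    Plane k holds bit k of every pixel (plane 7 is the most significant)."""
    return [(img >> k) & 1 for k in range(8)]

img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # toy "plain image"
planes = bit_planes(img)

# Group the planes as the abstract describes: one matrix from the most
# significant planes, one from the least significant planes.
msb_matrix = np.stack(planes[4:])   # bits 4..7 (most significant half)
lsb_matrix = np.stack(planes[:4])   # bits 0..3 (least significant half)

# Recombining all planes reconstructs the original image exactly, so the
# decomposition itself is lossless and invertible for decryption.
recombined = sum(p << k for k, p in enumerate(planes)).astype(np.uint8)
assert np.array_equal(recombined, img)
```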
This paper considers link scheduling in a wireless network comprising two types of nodes: (i) hybrid access points (HAPs) that harvest solar energy, and (ii) devices that harvest radio frequency (RF) energy whenever HAPs transmit. The aim is to derive the shortest possible link schedule that determines the transmission time of inter-HAP links and uplinks from devices to HAPs. We first outline a mixed integer linear program (MILP), which can be run by a central node to determine the optimal schedule and transmit power of HAPs and devices. We then outline a game theory based protocol called Distributed Schedule Minimization Protocol (DSMP) that is run by HAPs and devices. Advantageously, it does not require causal knowledge of energy arrivals and channel gains. Our results show that DSMP produces schedule lengths that are at most 1.99x longer than the schedule computed by the MILP.
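The paper's MILP is not reproduced in this abstract; as a toy sketch of the schedule-length-minimization idea, the linear program below (using the PuLP package, an assumption) assigns one TDMA slot per link, requires each slot to carry the link's demand, and caps the transmitter's energy. The real formulation additionally handles power control, interference, and RF harvesting at devices.

```python
import pulp

# Made-up illustrative numbers: (demand in Mb, rate in Mb/s, tx power in W).
links = {"h1_h2": (6.0, 2.0, 1.5), "d1_h1": (1.0, 0.5, 0.1), "d2_h2": (2.0, 0.8, 0.1)}
tx_node = {"h1_h2": "hap1", "d1_h1": "dev1", "d2_h2": "dev2"}
energy_budget = {"hap1": 8.0, "dev1": 0.5, "dev2": 0.5}   # harvested energy (J), toy

prob = pulp.LpProblem("min_schedule_length", pulp.LpMinimize)
t = {l: pulp.LpVariable(f"t_{l}", lowBound=0) for l in links}   # slot lengths (s)

prob += pulp.lpSum(t.values())                       # objective: schedule length
for l, (demand, rate, power) in links.items():
    prob += rate * t[l] >= demand                    # each link delivers its demand
for node, budget in energy_budget.items():
    prob += pulp.lpSum(links[l][2] * t[l]            # transmit energy <= harvested energy
                       for l in links if tx_node[l] == node) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({l: t[l].value() for l in links}, "total =", pulp.value(prob.objective))
```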
One of the most critical challenges in optical fiber communication systems using non-orthogonal multiple access (NOMA) techniques is reducing the peak-to-average power ratio (PAPR). Optical NOMA often leads to high PA...
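PAPR itself has a standard definition; as a quick illustration (ours, not the paper's method), the snippet below measures the PAPR of a toy power-domain NOMA superposition of two QPSK-modulated OFDM symbols, with all signal parameters assumed for the example.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

rng = np.random.default_rng(0)
n = 256                                           # subcarriers (toy value)
qpsk = lambda: np.exp(1j * np.pi / 2 * rng.integers(0, 4, n))

# Toy power-domain NOMA: superimpose two users' symbols with 0.8/0.2
# power allocation, then form the time-domain OFDM signal via an IFFT.
x = np.fft.ifft(np.sqrt(0.8) * qpsk() + np.sqrt(0.2) * qpsk())
print(f"PAPR of the superimposed signal: {papr_db(x):.2f} dB")
```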
The research investigates the impact of birefringence loss in quartz-based mid-infrared photonic waveguides over various wavelength ranges. The analysis of three specified wavelength ranges from 1 to 12 µm a...
Software engineering workflows use version control systems to track changes and handle merges from multiple contributors. This has introduced challenges to testing, because it is impractical to test whole codebases to ensure each change is defect-free, and it is not enough to test changed files alone. Just-in-time software defect prediction (JIT-SDP) systems have been proposed to solve this by predicting the likelihood that a code change is defective. Numerous techniques have been studied to build such JIT software defect prediction models, but the power of pre-trained code transformer language models in this task has been underexplored. These models have achieved human-level performance in code understanding and software engineering tasks. Inspired by that, we modeled the problem of change defect prediction as a text classification task utilizing these pre-trained models. We investigated this idea on a recently published dataset, ApacheJIT, consisting of 44k commits. We concatenated the changed lines in each commit as one string and augmented it with the commit message and static code metrics. Parameter-efficient fine-tuning was performed for four chosen pre-trained models, JavaBERT, CodeBERT, CodeT5, and CodeReviewer, with either partially frozen layers or low-rank adaptation (LoRA). Additionally, experiments with the Local, Sparse, and Global (LSG) attention variants were conducted to handle long commits efficiently, which reduces memory consumption. As far as the authors are aware, this is the first investigation into the abilities of pre-trained code models to detect defective changes in the ApacheJIT dataset. Our results show that proper fine-tuning improves the defect prediction performance of the chosen models in terms of F1 score. CodeBERT and CodeReviewer achieved 10% and 12% increases in F1 score over the best baseline models, JITGNN and JITLine, when commit messages and code metrics were included. Our approach sheds more light on the abilities of l...
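The abstract names the models and the PEFT methods but not the code; as a minimal sketch, assuming the Hugging Face transformers and peft packages, LoRA fine-tuning of CodeBERT as a binary change-defect classifier could be set up roughly as follows (input formatting and hyperparameters are illustrative, not the paper's).

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

# CodeBERT as a binary classifier: defective change vs. clean change.
name = "microsoft/codebert-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

# Low-rank adapters on the attention projections; the base weights stay frozen.
lora = LoraConfig(task_type=TaskType.SEQ_CLS, r=8, lora_alpha=16,
                  lora_dropout=0.1, target_modules=["query", "value"])
model = get_peft_model(model, lora)
model.print_trainable_parameters()

# One toy example: changed lines concatenated with the commit message
# (static code metrics would be appended to the text similarly).
text = "fix: handle null buffer\n[SEP] if (buf == null) return;"
batch = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
logits = model(**batch).logits          # shape (1, 2): clean vs. defective
```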
App reviews are crucial in influencing user decisions and providing essential feedback for developers to improve their applications. Thus, the analysis of these reviews is vital for efficient review management. While traditional machine learning (ML) models rely on basic word-based feature extraction, deep learning (DL) methods, enhanced with advanced word embeddings, have shown superior performance. This research introduces a novel aspect-based sentiment analysis (ABSA) framework to classify app reviews based on key non-functional requirements, focusing on usability factors: effectiveness, efficiency, and satisfaction. We propose a hybrid DL model, combining BERT (Bidirectional Encoder Representations from Transformers) with BiLSTM (Bidirectional Long Short-Term Memory) and CNN (Convolutional Neural Network) layers, to enhance classification accuracy. Comparative analysis against state-of-the-art models demonstrates that our BERT-BiLSTM-CNN model achieves exceptional performance, with precision, recall, F1-score, and accuracy of 96%, 87%, 91%, and 94%, respectively. Key contributions of this work include a refined ABSA-based relabeling framework, the development of a high-performance classifier, and the comprehensive relabeling of the Instagram App Reviews dataset. These advancements provide valuable insights for software developers to enhance usability and drive user-centric application development.
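The paper's exact layer sizes are not given in this abstract; the sketch below shows one plausible way to stack BiLSTM and CNN layers on top of BERT token embeddings for three-class usability classification (all dimensions, the pooling choice, and the checkpoint name are assumptions).

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertBiLstmCnn(nn.Module):
    """BERT token embeddings -> BiLSTM -> 1D CNN -> pooled classifier.
    Layer sizes are illustrative; the paper's configuration may differ."""
    def __init__(self, n_classes=3, hidden=128, filters=100, kernel=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained("bert-base-uncased")
        self.lstm = nn.LSTM(768, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, filters, kernel_size=kernel, padding=1)
        self.fc = nn.Linear(filters, n_classes)

    def forward(self, input_ids, attention_mask):
        tokens = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.lstm(tokens)                           # (B, T, 2*hidden)
        feats = torch.relu(self.conv(seq.transpose(1, 2)))   # (B, filters, T)
        pooled = feats.max(dim=2).values                     # global max pooling over time
        return self.fc(pooled)                               # logits over the 3 usability classes
```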
AC optimal power flow (AC OPF) is a fundamental problem in power system operations. Accurately modeling the network physics via the AC power flow equations makes AC OPF a challenging nonconvex problem. To search for g...
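For reference, the nonconvexity mentioned above comes from the bus power-balance equations; in standard textbook polar form (not quoted from this paper) they read:

```latex
P_i = V_i \sum_{k} V_k \left( G_{ik} \cos\theta_{ik} + B_{ik} \sin\theta_{ik} \right), \qquad
Q_i = V_i \sum_{k} V_k \left( G_{ik} \sin\theta_{ik} - B_{ik} \cos\theta_{ik} \right),
```

where V_i is the voltage magnitude at bus i, theta_ik = theta_i - theta_k is the angle difference, and G + jB is the bus admittance matrix; the trigonometric and bilinear voltage terms are what make AC OPF nonconvex.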
This paper presents the architecture of a Convolutional Neural Network (CNN) accelerator based on a new processing element (PE) array called a diagonal cyclic array (DCA). As demonstrated, it can significantly reduce the burden of repeated memory accesses for feature data and weight parameters of CNN models, which maximizes the data reuse rate and improves the computation performance. In addition, an integrated computation architecture has been implemented for the activation function, max-pooling, and the activation function after convolution calculation, reducing the hardware cost. To evaluate the effectiveness of the proposed architecture, a CNN accelerator has been implemented for You Only Look Once version 2 (YOLOv2)-Tiny, consisting of 9 layers. Furthermore, the methodology to optimize the local buffer size with little sacrifice of inference speed is presented in this paper. We implemented the proposed CNN accelerator using a Xilinx Zynq ZCU102 UltraScale+ Field Programmable Gate Array (FPGA) and the ISE Design Suite. The FPGA implementation uses 34,336 Look-Up Tables (LUTs), 576 Digital Signal Processing (DSP) blocks, and an on-chip memory of only 58 KB. It achieves accuracies of 57.92% and 56.42% mean Average Precision at a 0.5 intersection-over-union threshold (mAP@0.5) using quantized 16-bit and 8-bit full-integer data manipulation, respectively, with only a 0.68% loss for the 8-bit version, and computation times of 137.9 ms and 69 ms per input image at a clock speed of 200 MHz. These speeds are expected to improve about five times using a clock speed of 1 GHz if implemented in a silicon System on Chip (SoC) using a sub-micron process.
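As a quick sanity check of the timing claims (our arithmetic, not the paper's), scaling the reported 200 MHz frame times linearly with clock frequency to 1 GHz gives roughly a 5x speedup:

```python
# Reported frame times at 200 MHz, taken from the abstract above.
frame_times = {"16-bit": 0.1379, "8-bit": 0.069}   # seconds per image
scale = 1.0e9 / 200.0e6                            # 1 GHz / 200 MHz = 5x clock speedup

for name, t in frame_times.items():
    t_fast = t / scale                             # assumes perfectly linear scaling
    print(f"{name}: {t * 1e3:.1f} ms -> ~{t_fast * 1e3:.1f} ms "
          f"(~{1 / t_fast:.0f} fps) at 1 GHz")
```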