The use of management by objectives (MBOs) methodologies, particularly the objectives and key results (OKRs) framework, has gained widespread attention in recent years as a means of improving organizational performanc...
Software security poses substantial risks to our society because software has become part of our life. Numerous techniques have been proposed to resolve or mitigate the impact of software security issues. Among them, software testing and analysis are two critical methods, which benefit significantly from advances in deep learning technologies. Because of the successful use of deep learning in software security, researchers have recently explored the potential of using large language models (LLMs) in this area. In this paper, we systematically review the results on LLMs in software security. We analyze the topics of fuzzing, unit test generation, program repair, bug reproduction, data-driven bug detection, and bug triage. We deconstruct these techniques into several stages and analyze how LLMs can be used in each stage. We also discuss future directions for using LLMs in software security, covering both the existing uses of LLMs and extensions from conventional deep learning research.
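As a rough illustration of one stage the survey covers (LLM-assisted unit test generation), the sketch below prompts a model to draft a pytest test for a target function. It is not taken from any of the reviewed papers; `ask_llm` is a placeholder for whatever chat-completion client is available.

```python
# Illustrative sketch only: prompting an LLM to draft a unit test for a target
# function. `ask_llm` is a hypothetical wrapper around any chat-completion API.
from typing import Callable

def generate_unit_test(ask_llm: Callable[[str], str], source_code: str, func_name: str) -> str:
    """Ask the model for a pytest-style unit test targeting `func_name`."""
    prompt = (
        "You are a test engineer. Given the following Python module, write a "
        f"pytest unit test that exercises `{func_name}`, covering normal inputs "
        "and at least one edge case. Return only code.\n\n"
        f"{source_code}"
    )
    return ask_llm(prompt)

# Usage (assuming some client wrapper `ask_llm` exists):
# test_code = generate_unit_test(ask_llm, open("calculator.py").read(), "divide")
# open("test_calculator.py", "w").write(test_code)
```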
The increasing use of cloud-based image storage and retrieval systems has made ensuring security and efficiency crucial. The security enhancement of image retrieval and image archival in cloud computing has received considerable attention for transmitting data and ensuring data confidentiality between cloud servers and users. Various traditional image retrieval techniques addressing security have been developed in recent years, but they do not scale to large environments. This paper introduces a new approach called Triple network-based adaptive grey wolf (TN-AGW) to address these challenges. The TN-AGW framework combines the adaptability of the Grey Wolf Optimization (GWO) algorithm with the resilience of a Triple Network (TN) to enhance image retrieval on cloud servers while maintaining robust security measures. By using adaptive mechanisms, TN-AGW dynamically adjusts its parameters to improve the efficiency of the retrieval process, reducing latency and resource utilization. The retrieval itself is performed by the triple network, whose parameters are optimized by Adaptive Grey Wolf (AGW) optimization. Imputation of missing values, Min–Max normalization, and Z-score standardization are used to preprocess the images, and feature extraction is carried out by a modified convolutional neural network (MCNN). Input images are taken from datasets such as the Landsat 8 dataset and the Moderate Resolution Imaging Spectroradiometer (MODIS) dataset. Performance is evaluated in terms of accuracy, precision, recall, specificity, F1-score, and false alarm rate (FAR), reaching an accuracy of 98.1%, precision of 97.2%, recall of 96.1%, and specificity of 917.2%, respectively. The convergence speed is also enhanced in this TN-AGW approach. Therefore, the proposed TN-AGW approach achieves greater efficiency in image retrieval than other existing
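The preprocessing steps named in the abstract (missing-value imputation, Min–Max normalization, Z-score standardization) can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions (per-image mean imputation, batch-level standardization); the paper's exact procedure may differ.

```python
# Minimal sketch of the preprocessing pipeline mentioned in the abstract:
# mean imputation of missing values, Min-Max normalization, Z-score
# standardization. Details (imputation strategy, per-image vs. per-batch
# statistics) are our assumptions, not TN-AGW's specification.
import numpy as np

def preprocess(images: np.ndarray) -> np.ndarray:
    x = images.astype(np.float64)
    # Impute missing pixels (NaNs) with the per-image mean.
    means = np.nanmean(x, axis=(1, 2), keepdims=True)
    x = np.where(np.isnan(x), means, x)
    # Min-Max normalization to [0, 1] per image.
    lo = x.min(axis=(1, 2), keepdims=True)
    hi = x.max(axis=(1, 2), keepdims=True)
    x = (x - lo) / np.maximum(hi - lo, 1e-12)
    # Z-score standardization over the whole batch.
    return (x - x.mean()) / (x.std() + 1e-12)

# Example: a batch of 4 single-band 64x64 tiles (e.g., Landsat 8 crops).
batch = np.random.rand(4, 64, 64)
batch[0, 10, 10] = np.nan
print(preprocess(batch).shape)  # (4, 64, 64)
```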
Anomaly detection (AD) has been extensively studied and applied across various scenarios in recent years. However, gaps remain between the current performance and the recognition accuracy required for practical applications. This paper analyzes two fundamental failure cases in the baseline AD model and identifies key reasons that limit the recognition accuracy of existing approaches. Specifically, from Case-1 we found that the main factor hurting current AD methods is that the inputs to the recovery model contain a large number of detailed features to be recovered, which leads to normal areas not being recovered to their original state while abnormal areas are. From Case-2 we found, surprisingly, that abnormal areas that cannot be recognized in image-level representations can be easily recognized in feature-level representations. Based on the above observations, we propose a novel recover-then-discriminate (ReDi) framework for AD. ReDi takes a self-generated feature map (e.g., a histogram of oriented gradients) and a selected prompted image as explicit input information to address the issue identified in Case-1. Additionally, a feature-level discriminative network is introduced to amplify abnormal differences between the recovered and input representations. Extensive experiments on two widely used yet challenging AD datasets demonstrate that ReDi achieves state-of-the-art recognition accuracy.
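To make the two ingredients concrete, the sketch below computes a HOG map as the kind of self-generated feature map the abstract mentions, and scores anomalies by a feature-level discrepancy between an image and its recovered version. The real ReDi uses a learned recovery model and a discriminative network; this is only a shape-level illustration built on scikit-image.

```python
# Illustrative sketch: a HOG feature map as the explicit auxiliary input, and a
# simple feature-level discrepancy score. Not the paper's architecture.
import numpy as np
from skimage.feature import hog

def hog_map(image: np.ndarray) -> np.ndarray:
    """Per-cell histogram of oriented gradients, returned as a feature image."""
    _, viz = hog(image, orientations=9, pixels_per_cell=(8, 8),
                 cells_per_block=(2, 2), visualize=True)
    return viz

def feature_level_score(original: np.ndarray, recovered: np.ndarray) -> float:
    """Anomaly score: mean discrepancy between the two HOG representations."""
    return float(np.abs(hog_map(original) - hog_map(recovered)).mean())

# Usage with dummy grayscale images in [0, 1]:
img = np.random.rand(128, 128)
rec = np.clip(img + 0.05 * np.random.randn(128, 128), 0, 1)
print(feature_level_score(img, rec))
```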
Matrix minimization techniques that employ the nuclear norm have gained recognition for their applicability in tasks like image inpainting, clustering, classification, and reconstruction. However, they come with inherent biases and computational burdens, especially when used to relax the rank function, making them less effective and efficient in real-world scenarios. To address these challenges, our research focuses on generalized nonconvex rank regularization problems in robust matrix completion, low-rank representation, and robust matrix regression. We introduce innovative approaches for effective and efficient low-rank matrix learning, grounded in generalized nonconvex rank relaxations inspired by various relaxed substitutes for the ℓ0-norm. These relaxations allow us to capture low-rank structures more accurately. Our optimization strategy employs a nonconvex, multi-variable alternating direction method of multipliers (ADMM), backed by rigorous theoretical analysis of complexity and convergence. The algorithm iteratively updates blocks of variables, ensuring efficient convergence. Additionally, we incorporate the randomized singular value decomposition technique and/or other acceleration strategies to enhance the computational efficiency of our approach, particularly for large-scale constrained minimization problems. Our experimental results across a variety of image- and vision-related application tasks demonstrate the superiority of the proposed methodologies in terms of both efficacy and efficiency compared with most other related learning methods.
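The randomized SVD acceleration mentioned in the abstract is the kind of routine typically embedded inside each ADMM iteration to cheapen singular-value thresholding on large matrices. The sketch below follows the standard range-finder recipe; the oversampling and power-iteration choices are ours, not the paper's.

```python
# Minimal randomized SVD sketch (standard range-finder + projection), the sort
# of acceleration used inside large-scale low-rank ADMM updates.
import numpy as np

def randomized_svd(A: np.ndarray, rank: int, oversample: int = 10, n_iter: int = 2):
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = np.random.randn(n, k)          # random test matrix
    Y = A @ Omega                          # sample the range of A
    for _ in range(n_iter):                # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for range(A)
    B = Q.T @ A                            # small (k x n) projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

# Usage: a 500x300 matrix of exact rank 30 is recovered almost perfectly.
A_lowrank = np.random.randn(500, 30) @ np.random.randn(30, 300)
U, s, Vt = randomized_svd(A_lowrank, rank=30)
print(np.linalg.norm(A_lowrank - U @ np.diag(s) @ Vt))  # close to zero
```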
Federated recommender systems (FedRecs) have garnered increasing attention recently, thanks to their privacy-preserving benefits. However, the decentralized and open characteristics of current FedRecs present at least two challenges. First, the performance of FedRecs is compromised due to the highly sparse on-device data of each client. Second, the system's robustness is undermined by its vulnerability to model poisoning attacks launched by malicious users. In this paper, we introduce a novel contrastive learning framework designed to fully leverage the client's sparse data through embedding augmentation, referred to as CL4FedRec. Unlike previous contrastive learning approaches in FedRecs that require clients to share their private parameters, CL4FedRec aligns with the basic FedRec learning protocol, ensuring compatibility with most existing FedRec implementations. We then evaluate the robustness of FedRecs equipped with CL4FedRec by subjecting them to several state-of-the-art model poisoning attacks. Surprisingly, our observations reveal that contrastive learning tends to exacerbate the vulnerability of FedRecs to these attacks. This is attributed to the enhanced embedding uniformity, which makes the poisoned target item embedding easily proximate to popular items. Based on this insight, we propose an enhanced and robust version of CL4FedRec (rCL4FedRec) by introducing a regularizer that maintains the distance among item embeddings with different popularity levels. Extensive experiments conducted on four commonly used recommendation datasets demonstrate that rCL4FedRec significantly enhances both the model's performance and the robustness of FedRecs.
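The sketch below illustrates the idea behind the popularity-aware regularizer: penalize pairs of popular and unpopular items whose embeddings drift closer than a margin, so a poisoned target item cannot trivially sit next to popular items. The margin, the popularity split, and the hinge form are our assumptions, not the paper's exact formulation.

```python
# Illustrative popularity-aware margin penalty (not rCL4FedRec's exact loss):
# sample (popular, unpopular) item pairs and penalize embeddings closer than a
# margin; this term would be added to each client's local training loss.
import numpy as np

def popularity_margin_penalty(item_emb: np.ndarray, popularity: np.ndarray,
                              margin: float = 1.0, n_pairs: int = 256) -> float:
    """Hinge penalty on sampled (popular, unpopular) embedding pairs."""
    order = np.argsort(popularity)
    unpop, pop = order[: len(order) // 2], order[len(order) // 2:]
    i = np.random.choice(pop, n_pairs)
    j = np.random.choice(unpop, n_pairs)
    dist = np.linalg.norm(item_emb[i] - item_emb[j], axis=1)
    return float(np.maximum(0.0, margin - dist).mean())

emb = np.random.randn(1000, 64) * 0.1        # toy item embeddings
pops = np.random.zipf(1.5, size=1000)        # long-tailed interaction counts
print(popularity_margin_penalty(emb, pops))  # regularization term value
```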
Temporal knowledge graph (TKG) reasoning has seen widespread use for modeling real-world events, particularly in extrapolation settings. Nevertheless, most previous studies are embedding-based models, which require both entity and relation embeddings to make predictions and ignore the semantic correlations among different entities and relations within the same timestamp. This can lead to random and nonsensical predictions when unseen entities or relations occur. Furthermore, many existing models exhibit limitations in handling highly correlated historical facts with extensive temporal depth: they either overlook such facts or overly accentuate the relationships between recurring past occurrences and their current counterparts. Because of the dynamic nature of TKGs, effectively capturing the evolving semantics between different timestamps can be challenging. To address these shortcomings, we propose the recurrent semantic evidence-aware graph neural network (RE-SEGNN), a novel graph neural network that can learn the semantics of entities and relations simultaneously. For the former challenge, our model can predict a possible answer to missing quadruples based on semantics when facing unseen entities or relations. For the latter problem, we build on the well-established observation that both the recency and frequency of semantic history tend to confer higher reference value for the present. We use the Hawkes process to compute the semantic trend, which allows the semantics of recent facts to gain more attention than those of distant facts. Experimental results show that RE-SEGNN outperforms all SOTA models on six widely used datasets for entity prediction and on five datasets for relation prediction. Furthermore, a case study shows how our model deals with unseen entities and relations.
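The Hawkes-style weighting described above can be sketched as follows: each historical occurrence of a semantic pattern contributes an exponentially decaying excitation, so recent and frequent facts receive more attention. The base intensity and decay rate here are illustrative constants, not RE-SEGNN's learned values.

```python
# Sketch of a Hawkes-like intensity for the "semantic trend": recency and
# frequency of past occurrences both increase the score. Parameters are ours.
import numpy as np

def semantic_trend(event_times: np.ndarray, now: float,
                   base: float = 0.1, alpha: float = 1.0, beta: float = 0.5) -> float:
    """base intensity + sum of exponentially decayed contributions of past events."""
    dt = now - event_times[event_times < now]
    return float(base + alpha * np.exp(-beta * dt).sum())

# A fact seen often and recently scores higher than one seen long ago.
recent_frequent = np.array([95.0, 97.0, 98.0, 99.0])
old_sparse = np.array([10.0, 40.0])
print(semantic_trend(recent_frequent, now=100.0))  # larger
print(semantic_trend(old_sparse, now=100.0))       # smaller
```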
Solar flares are among the strongest outbursts of solar activity, posing a serious threat to Earth's critical infrastructure, such as communications, navigation, and power. It is therefore essential to accurately predict solar flares to ensure the safety of human activities. Currently, research focuses on two directions: first, identifying predictors that carry more physical information and yield higher prediction accuracy, and second, building flare prediction models that can effectively handle complex observational data. In terms of flare observability and predictability, this paper analyses multiple dimensions of solar flare observability and evaluates the potential of observational parameters for prediction. For flare prediction models, the paper focuses on data-driven models and physical models, with an emphasis on the advantages of deep learning techniques in dealing with complex and high-dimensional data. By reviewing existing traditional machine learning, deep learning, and fusion methods, the key roles of these techniques in improving prediction accuracy and efficiency are highlighted. Regarding prevailing challenges, this study discusses the main challenges currently faced in solar flare prediction, such as the complexity of flare samples, the multimodality of observational data, and the interpretability of models. The conclusion summarizes these findings and proposes future research directions and potential technology advancements.
Managing modern data centre operations is increasingly complex due to rising workloads and numerous interdependent components. Organizations that still rely on outdated, manual data management methods face a heightened risk of human error and struggle to adapt quickly to shifting demands. This inefficiency leads to excessive energy consumption and higher CO2 emissions in cloud data centres. To address these challenges, integrating advanced automation within Infrastructure as a Service (IaaS) has become essential for IT industries, representing a significant step in the ongoing transformation of cloud computing. For data centres aiming to enhance efficiency and reduce their carbon footprint, intelligent automation provides tangible benefits, including optimized resource allocation, dynamic workload balancing, and lower operational costs. As computing resources remain energy-intensive, the growing demand for AI and ML workloads is expected to surge by 160% by 2030 (Goldman Sachs). This heightened focus on energy efficiency has driven the need for advanced scheduling systems that reduce both carbon emissions and operational expenses. This study introduces a deployable cloud-based framework that incorporates real-time carbon intensity data into energy-intensive task scheduling. By utilizing AWS services, the proposed algorithm dynamically adjusts high-energy workloads based on regional carbon intensity fluctuations, using both historical and real-time analytics. This approach enables cloud service providers and enterprises to minimize environmental impact without sacrificing performance. Designed for seamless integration with existing cloud infrastructures—including AWS, Google Cloud, and Azure—this scalable solution utilizes Kubernetes-based scheduling and containerized workloads for intelligent resource management. By combining automation, real-time analytics, and cloud-native technologies, the framework significantly enhances energy efficiency compared to traditional
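As a rough illustration of the scheduling decision described above, the sketch below places a high-energy job in the region with the lowest current carbon intensity, or defers it when every region is above a threshold. The region names, threshold, and intensity readings are assumptions for illustration; the paper's framework implements this on AWS services with Kubernetes-based scheduling and real-time analytics.

```python
# Illustrative carbon-aware scheduling decision (not the paper's implementation):
# run the job where carbon intensity is lowest, or defer it when all regions
# exceed a threshold. Regions, threshold, and readings are hypothetical.
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class Decision:
    action: str                 # "run" or "defer"
    region: Optional[str] = None

def schedule_job(carbon_gco2_per_kwh: Dict[str, float],
                 defer_threshold: float = 400.0) -> Decision:
    region, intensity = min(carbon_gco2_per_kwh.items(), key=lambda kv: kv[1])
    if intensity > defer_threshold:
        return Decision(action="defer")      # wait for a greener window
    return Decision(action="run", region=region)

# Usage with hypothetical real-time readings (gCO2/kWh):
readings = {"eu-north-1": 35.0, "us-east-1": 410.0, "ap-south-1": 650.0}
print(schedule_job(readings))   # Decision(action='run', region='eu-north-1')
```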
ChatGPT is a powerful artificial intelligence (AI) language model that has demonstrated significant improvements in various natural language processing (NLP) tasks. However, like any technology, it presents potential security risks that need to be carefully evaluated and addressed. In this survey, we provide an overview of the current state of research on the security of using ChatGPT, covering bias, disinformation, ethics, misuse, attacks, and privacy. We review and discuss the literature on these topics and highlight open research questions and future directions. Through this survey, we aim to contribute to the academic discourse on AI security, enriching the understanding of potential risks and mitigations. We anticipate that this survey will be valuable for various stakeholders involved in AI development and usage, including AI researchers, developers, policymakers, and end-users.