The virtual private cloud service currently lacks a real-time end-to-end consistency validation mechanism, which prevents tenants from receiving immediate feedback on their requests. Existing solutions consume excessive communication and computational resources in large-scale cloud environments and suffer from poor timeliness. To address these issues, we propose a lightweight consistency validation mechanism that combines real-time incremental validation with periodic full-scale validation. The former leverages message-layer aggregation to let tenants swiftly determine whether their requests succeeded on hosts, with minimal communication overhead. The latter uses lightweight validation checksums to compare the expected and actual states of hosts locally, while efficiently managing the checksums of the various host entries with an inverted index. This allows the complete local configuration to be validated within the limited memory of hosts. In summary, the proposed mechanism achieves closed-loop validation of new requests and ensures their long-term effectiveness.
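The periodic full-scale validation step can be sketched as follows. The entry layout, checksum width, and index shape here are illustrative assumptions, not the actual implementation:

```python
import hashlib
from collections import defaultdict

def entry_checksum(entry: dict) -> str:
    """Lightweight checksum over a canonicalized config entry."""
    canon = "|".join(f"{k}={entry[k]}" for k in sorted(entry))
    return hashlib.blake2s(canon.encode(), digest_size=8).hexdigest()

def build_index(entries: dict) -> dict:
    """Inverted index: checksum -> set of entry ids sharing that checksum."""
    index = defaultdict(set)
    for eid, entry in entries.items():
        index[entry_checksum(entry)].add(eid)
    return index

def full_validate(expected: dict, actual: dict) -> set:
    """Return ids of entries whose actual host state diverges from the
    expected state, comparing checksums rather than full entries."""
    actual_index = build_index(actual)
    mismatched = set()
    for eid, entry in expected.items():
        if eid not in actual_index.get(entry_checksum(entry), ()):
            mismatched.add(eid)
    # Entries present on the host but absent from the expectation.
    mismatched |= set(actual) - set(expected)
    return mismatched
```

Comparing 8-byte checksums instead of full entries keeps the memory footprint on each host small, which is the point of the lightweight design.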
Language-guided fashion image editing is challenging, as fashion image editing is local and requires high precision, while natural language cannot provide precise visual information. In this paper, we propose LucIE, a novel unsupervised language-guided local image editing method for fashion images. LucIE adopts and modifies a recent text-to-image synthesis network, DF-GAN, as its backbone. However, the synthesis backbone often changes the global structure of the input image, making local image editing infeasible. To increase structural consistency between input and edited images, we propose the Content-Preserving Fusion Module (CPFM). Different from existing fusion modules, CPFM prevents iterative refinement on visual feature maps and accumulates additive modifications on RGB images. LucIE achieves local image editing explicitly with language-guided image segmentation and mask-guided image blending while only using image and text pairs. Evaluation on the DeepFashion dataset shows that LucIE achieves state-of-the-art results. Compared with previous methods, images generated by LucIE also exhibit fewer artifacts. We provide visualizations and perform ablation studies to validate LucIE and the CPFM. We also demonstrate and analyze limitations of LucIE, to provide a better understanding of LucIE.
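The mask-guided image blending step can be illustrated with a minimal sketch. The nested-list image representation and the per-pixel alpha blend are simplifying assumptions; real implementations operate on per-channel tensors:

```python
def blend(input_img, edited_img, mask):
    """Mask-guided blending: keep the input image outside the mask and
    take the edited image inside it. Images are nested lists of floats
    in [0, 1]; mask values in [0, 1] act as per-pixel alpha."""
    return [
        [m * e + (1.0 - m) * i
         for i, e, m in zip(row_in, row_ed, row_mask)]
        for row_in, row_ed, row_mask in zip(input_img, edited_img, mask)
    ]
```

Because pixels outside the mask are copied verbatim from the input, the edit is local by construction, which is how explicit blending protects global structure.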
With the rapid development of network technology and the automation process for 5G, cyberattacks have become increasingly complex and threatening. In response to these threats, researchers have developed various network intrusion detection systems (NIDS) to monitor network traffic. However, the incessant emergence of new attack techniques and the lack of system interpretability pose challenges to improving the detection performance of NIDS. To address these issues, this paper proposes a hybrid explainable neural network-based framework that improves both the interpretability of our model and the performance in detecting new attacks through the innovative application of explainable artificial intelligence (XAI) methods. We introduce the Shapley additive explanations (SHAP) method to explain a light gradient boosting machine (LightGBM) model. Additionally, we propose an autoencoder long short-term memory (AE-LSTM) network to reconstruct the SHAP values generated previously. Furthermore, we define a threshold based on the reconstruction errors observed during the training phase: any network flow whose reconstruction error surpasses this threshold is classified as an attack flow. This approach enhances the framework's ability to accurately identify attacks. We achieve an accuracy of 92.65%, a recall of 95.26%, a precision of 92.57%, and an F1-score of 93.90% on the NSL-KDD dataset. Experimental results demonstrate that our approach delivers detection performance on par with state-of-the-art methods.
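The reconstruction-error threshold rule can be sketched as follows. The mean-plus-k-sigma rule and the error values are illustrative assumptions, since the abstract only states that the threshold is derived from training-phase reconstruction errors:

```python
import math

def fit_threshold(train_errors, k=3.0):
    """Threshold from training-phase reconstruction errors: mean + k*std.
    (The k-sigma form is an assumption; any quantile rule would fit.)"""
    mean = sum(train_errors) / len(train_errors)
    var = sum((e - mean) ** 2 for e in train_errors) / len(train_errors)
    return mean + k * math.sqrt(var)

def classify(errors, threshold):
    """Flows whose reconstruction error exceeds the threshold are attacks."""
    return ["attack" if e > threshold else "benign" for e in errors]
```

The appeal of this design is that novel attacks need no labeled examples: anything the autoencoder fails to reconstruct well is flagged.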
Software defect prediction plays a critical role in software development and quality assurance processes. Effective defect prediction enables testers to accurately prioritize testing efforts and enhance defect detection efficiency. Additionally, this technology provides developers with a means to quickly identify errors, thereby improving software robustness and overall quality. However, current research in software defect prediction often faces challenges, such as relying on a single data source or failing to adequately account for the characteristics of multiple coexisting data sources. This may overlook the differences and potential value of the various data sources, affecting the accuracy and generalization performance of prediction results. To address this issue, this study proposes a multivariate heterogeneous hybrid deep learning algorithm for defect prediction (DP-MHHDL). Initially, the Abstract Syntax Tree (AST), Code Dependency Network (CDN), and code static quality metrics are extracted from source code files and used as inputs to ensure data diversity. Subsequently, for the three types of heterogeneous data, the study employs a graph convolutional network optimization model based on adjacency and spatial topologies, a Convolutional Neural Network-Bidirectional Long Short-Term Memory (CNN-BiLSTM) hybrid neural network model, and a TabNet model to extract data features. These features are then concatenated and processed through a fully connected neural network for defect prediction. Finally, the proposed framework is evaluated on ten PROMISE defect repository projects, and performance is assessed with three metrics: F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). The experimental results demonstrate that the proposed algorithm outperforms existing methods, offering a novel solution for software defect prediction.
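The late-fusion step, concatenating the three branch feature vectors and feeding them to a fully connected head, can be sketched as below. The single sigmoid unit and toy dimensions are illustrative stand-ins for the full network:

```python
import math

def concat_features(ast_feat, cdn_feat, metric_feat):
    """Late fusion: concatenate the per-branch feature vectors
    (graph branch, CNN-BiLSTM branch, TabNet branch)."""
    return ast_feat + cdn_feat + metric_feat

def dense_sigmoid(x, weights, bias):
    """One fully connected unit with sigmoid, yielding a defect probability.
    A real head would stack several such layers with learned weights."""
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Concatenation preserves each branch's representation untouched, leaving it to the fully connected head to learn how the heterogeneous sources interact.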
Machine learning has been massively utilized in recent years to construct data-driven solutions for predicting the lifetime of rechargeable batteries, projecting the physical measurements obtained during the early charging/discharging cycles to the remaining useful lifetime. While most existing techniques train the prediction model by minimizing the prediction error only, the errors associated with the physical measurements can also negatively impact the prediction accuracy. Although total least squares (TLS) regression has been applied to address this issue, it relies on the unrealistic assumption that the distributions of measurement errors on all input variables are equivalent, and thus cannot appropriately capture the practical characteristics of battery degradation. To tackle this challenge, this work models the variations along different input dimensions, thereby improving the accuracy and robustness of battery lifetime prediction. Specifically, we propose an innovative EM-TLS framework that enhances TLS-based prediction to accommodate dimension-variate errors while simultaneously investigating their distributions using expectation-maximization (EM). Experiments on data from commercial lithium-ion batteries validate the proposed method, which reduces the prediction error by up to 29.9% compared with conventional TLS. This demonstrates the immense potential of the proposed method for advancing the R&D of rechargeable batteries.
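Classical TLS in one input dimension, the baseline being improved on, can be sketched with its closed-form slope. The EM-TLS extension would additionally rescale each input dimension by its estimated error variance inside an EM loop, which is only noted in a comment here:

```python
import math

def tls_slope(xs, ys):
    """Classical total least squares for a line through the data means:
    minimizes orthogonal (not vertical) distance, treating the errors on
    x and y as equally distributed — exactly the assumption EM-TLS
    relaxes by estimating per-dimension error variances with EM."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    # Slope from the smallest eigenvector of the 2x2 scatter matrix.
    return (syy - sxx + math.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)
```

Because the orthogonal distance weights x- and y-errors identically, any dimension whose measurements are noisier than the rest is implicitly over-trusted, which is the motivation for estimating dimension-variate error distributions.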
Multiarmed bandit (MAB) models are widely used for sequential decision-making in uncertain environments, such as resource allocation in computer communication systems. A critical challenge in interactive multiagent systems with bandit feedback is to explore and understand the equilibrium state to ensure stable and tractable system performance.
We consider the online convex optimization (OCO) problem with quadratic and linear switching cost, where at time t only gradient information for functions fτ, τ < t, is available. We show that the competitive ratio of the proposed online multiple gradient descent (OMGD) algorithm is at most 16(Lµ+5) for the quadratic switching cost, and also show the bound to be order-wise tight in terms of L, µ. In addition, we show that the competitive ratio of any online algorithm is at least max{Ω(L), Ω(√(Lµ))} when the switching cost is quadratic. For the linear switching cost, the competitive ratio of the OMGD algorithm is shown to depend on both the path length and the squared path length of the problem instance, in addition to L, µ, and is shown to be, order-wise, the best competitive ratio any online algorithm can achieve.
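The core idea of online multiple gradient descent — taking several inner descent steps per round using only a delayed gradient oracle — can be sketched as follows. The step size and inner-step count are arbitrary illustrations, not the tuned values from the analysis:

```python
def omgd_round(x, grad_prev, eta=0.1, k=50):
    """One OMGD round (sketch): at time t, only gradients of an earlier
    function are available, so take k descent steps on that stale oracle
    rather than a single step on the current (unknown) function."""
    for _ in range(k):
        x = x - eta * grad_prev(x)
    return x
```

Multiple inner steps let the iterate track the (stale) minimizer closely, which is what keeps both the hitting cost and the switching cost bounded.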
Self-supervised learning is attracting significant attention from researchers in the point cloud processing field. However, due to the natural sparsity and irregularity of point clouds, effectively extracting discriminative and transferable features for efficient training on downstream tasks remains an unsolved challenge. Consequently, we propose PointSmile, a reconstruction-free self-supervised learning paradigm that maximizes curriculum mutual information (CMI) across replicas of point cloud objects. From the perspective of how-and-what-to-learn, PointSmile is designed to imitate human curriculum learning, i.e., starting with easier topics in a curriculum and gradually progressing to more complex ones. To solve "how-to-learn", we introduce curriculum data augmentation (CDA) of point clouds. CDA encourages PointSmile to follow a learning path that starts from easy data samples and progresses to hard data samples, such that the latent space can be dynamically shaped to create better embeddings. To solve "what-to-learn", we propose maximizing both feature-wise and class-wise CMI to better extract discriminative features of point clouds. Unlike most existing methods, PointSmile does not require a pretext task or cross-modal data to yield rich latent representations; additionally, it can be easily transferred to various backbones. We demonstrate the effectiveness and robustness of PointSmile on downstream tasks such as object classification and segmentation. The results show that PointSmile outperforms existing self-supervised methods and compares favorably with popular fully supervised methods on various standard architectures. The code is available at https://***/theaalee/PointSmile.
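Curriculum data augmentation can be sketched as a noise schedule that ramps augmentation strength from easy to hard over training. The linear ramp and Gaussian point jitter are illustrative assumptions, not the actual augmentation suite:

```python
import random

def cda_sigma(epoch, total_epochs, max_sigma=0.05):
    """Curriculum schedule: augmentation strength grows linearly with epoch,
    so early epochs yield easy samples and later epochs hard ones."""
    return max_sigma * (epoch + 1) / total_epochs

def cda_jitter(points, epoch, total_epochs, max_sigma=0.05):
    """Jitter each xyz coordinate with epoch-dependent Gaussian noise.
    points is a list of [x, y, z] coordinates."""
    sigma = cda_sigma(epoch, total_epochs, max_sigma)
    return [[c + random.gauss(0.0, sigma) for c in p] for p in points]
```

Ramping the perturbation keeps the two augmented replicas of an object nearly identical at first, so the mutual-information objective starts from an easy matching problem and hardens gradually.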
Traditional dynamic random access memory (DRAM) cannot keep pace with the increasing memory demand of applications because of its limited capacity and high cost. Hybrid memory architectures, including new memory hardware and memory disaggregation, provide a feasible approach to expanding memory capacity.