Cloud storage auditing research is dedicated to solving the data integrity problem of outsourced storage on the cloud. In recent years, researchers have proposed various cloud storage auditing schemes using different techniques. While these studies are elegant in theory, they assume an ideal cloud storage model; that is, they assume that the cloud provides the storage and compute interfaces required by the proposed schemes. However, this does not hold for mainstream cloud storage systems, which provide only read and write interfaces and no compute interface. To bridge this gap, this work proposes a serverless computing-based cloud storage auditing system for existing mainstream cloud object storage. The proposed system leverages existing cloud storage auditing schemes as a basic building block and makes two adaptations. First, we use the read interface of cloud object storage to support the block data requests of a traditional cloud storage auditing scheme. Second, we employ the serverless computing paradigm to support the block data computation traditionally required. Leveraging the characteristics of serverless computing, the proposed system realizes economical, pay-as-you-go cloud storage auditing. The proposed system also supports mainstream cloud storage upper-layer applications (e.g., file preview) because it does not modify data formats when embedding authentication tags for later auditing. We prototyped and open-sourced the proposed system on a mainstream cloud service, i.e., Tencent Cloud. Experimental results show that the proposed system is efficient and promising for practical use. For 40 GB of data, auditing takes approximately 98 s using serverless computation. The economic cost is 120.48 CNY per year, of which serverless computing accounts for only 46%. In contrast, no existing studies have reported cloud storage auditing results for real-world cloud services.
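A minimal sketch of the audit path described above, assuming an HMAC-based tag check and a hypothetical read_block() range-read helper; the actual system builds on existing cloud storage auditing schemes rather than this simplified check:

```python
# Hypothetical sketch of a serverless audit function. read_block() and the
# HMAC-based tags are illustrative assumptions, not the scheme used in the
# paper (which reuses existing auditing schemes as its building block).
import hmac
import hashlib
import random
from typing import Callable

BLOCK_SIZE = 4 * 1024 * 1024  # assumed block size (4 MiB)

def audit(read_block: Callable[[int], bytes],
          tags: dict[int, bytes],
          key: bytes,
          num_blocks: int,
          sample_size: int = 460) -> bool:
    """Challenge a random sample of blocks through the object-storage read
    interface and recompute their authentication tags inside the function."""
    challenge = random.sample(range(num_blocks), min(sample_size, num_blocks))
    for i in challenge:
        # Range read: offset i * BLOCK_SIZE, length BLOCK_SIZE.
        data = read_block(i)
        expected = hmac.new(key, i.to_bytes(8, "big") + data, hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tags[i]):
            return False  # block i is missing or corrupted
    return True
```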
To mitigate the challenges posed by data uncertainty in Full Self-Driving (FSD) systems, this paper proposes a novel feature extraction learning model called Adaptive Region of Interest Optimized Pyramid Network (ARO)...
Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy systems. However, due to the model's inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modification. First, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states to generate probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action, and new action are encapsulated as monitors to guide the DRL system toward more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
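A simplified illustration of the monitor idea, assuming trajectories are given as (abstract state, action) sequences with a failure flag; the ranking by empirical failure rate below is a stand-in for the reverse-BFS impact analysis described in the abstract:

```python
# Illustrative sketch only: trajectories are assumed to be pairs
# (steps, failed), where steps is a list of (abstract_state, action)
# and failed is a boolean for the trajectory's final outcome.
from collections import defaultdict

def find_key_pairs(trajectories):
    counts = defaultdict(lambda: [0, 0])  # (state, action) -> [failures, total]
    for steps, failed in trajectories:
        for state, action in steps:
            stats = counts[(state, action)]
            stats[0] += int(failed)
            stats[1] += 1
    # Rank pairs by empirical probability of leading to an adverse outcome.
    return sorted(counts.items(),
                  key=lambda kv: kv[1][0] / kv[1][1],
                  reverse=True)

def make_monitor(key_pair, better_action):
    state, bad_action = key_pair
    def monitor(current_state, proposed_action):
        # Correct the policy online when the key PDMU and the undesirable
        # action co-occur, substituting the more favorable action.
        if current_state == state and proposed_action == bad_action:
            return better_action
        return proposed_action
    return monitor
```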
Let P be a set of points in the plane and let T be a maximum-weight spanning tree of P. For an edge (p, q), let Dpq be the diametral disk induced by (p, q), i.e., the disk having the segment pq as its diameter. Let DT...
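For reference, the diametral disk has a convenient explicit form: it is the disk centered at the midpoint of pq with radius half the edge length, and by the standard dot-product identity a point lies in it exactly when it sees p and q at an angle of at least 90°. This is a well-known fact, stated here only to make the definition concrete:

```latex
D_{pq} \;=\; \Bigl\{\, x \in \mathbb{R}^2 \;:\; \bigl\| x - \tfrac{p+q}{2} \bigr\| \le \tfrac{\| p - q \|}{2} \,\Bigr\}
       \;=\; \bigl\{\, x \in \mathbb{R}^2 \;:\; (x - p) \cdot (x - q) \le 0 \,\bigr\}.
```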
Recommender systems are effective in mitigating information overload, yet the centralized storage of user data raises significant privacy concerns. Cross-user federated recommendation (CUFR) provides a promising distributed paradigm to address these concerns by enabling privacy-preserving recommendations directly on user devices. In this survey, we review and categorize current progress in CUFR, focusing on four key aspects: privacy, security, accuracy, and efficiency. Firstly, we conduct an in-depth privacy analysis, discuss various cases of privacy leakage, and then review recent methods for privacy protection. Secondly, we analyze security concerns and review recent methods for untargeted and targeted attacks. For untargeted attack methods, we categorize them into data poisoning attack methods and parameter poisoning attack methods. For targeted attack methods, we categorize them into user-based methods and item-based methods. Thirdly, we provide an overview of the federated variants of some representative methods, and then review recent methods for improving accuracy in two categories: data heterogeneity and high-order information. Fourthly, we review recent methods for improving training efficiency in two categories: client sampling and model compression. Finally, we conclude this survey and explore some potential future research topics in CUFR.
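As a concrete, hypothetical illustration of the on-device paradigm, a FedAvg-style matrix-factorization round can keep user embeddings local and share only item-embedding updates; this is a generic sketch, not any particular method from the survey:

```python
# Generic federated matrix-factorization sketch: each client keeps its user
# embedding on the device and sends back only updated item embeddings,
# which the server averages (FedAvg-style aggregation).
import numpy as np

def client_update(items, user_vec, ratings, rated_items, lr=0.01, reg=0.1):
    items = items.copy()
    for i, r in zip(rated_items, ratings):
        err = r - user_vec @ items[i]
        user_vec += lr * (err * items[i] - reg * user_vec)  # stays on device
        items[i] += lr * (err * user_vec - reg * items[i])  # shared with server
    return items

def server_round(global_items, client_data):
    updates = []
    for user_vec, rated_items, ratings in client_data:
        updates.append(client_update(global_items, user_vec, ratings, rated_items))
    return np.mean(updates, axis=0)  # aggregate item embeddings only
```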
This research proposes a refined deep learning framework aimed at boosting the precision and efficacy of detecting surface imperfections in strip steel. This method integrates enhancement and simplification techniques...
The naive Bayesian classifier (NBC) is a supervised machine learning algorithm with a simple model structure and good theoretical interpretability. However, the generalization performance of NBC is limited to a large extent by the assumption of attribute independence. To address this issue, this paper proposes a novel attribute grouping-based NBC (AG-NBC), which is a variant of the classical NBC trained with different attribute groups. AG-NBC first applies a novel effective objective function to automatically identify optimal dependent attribute groups (DAGs). Condition attributes in the same DAG are strongly dependent on the class attribute, whereas attributes in different DAGs are independent of one another. Then, for each DAG, a random vector functional link network with a SoftMax layer is trained to output posterior probabilities in the form of joint probability density estimation. The NBC is trained using the grouping attributes that correspond to the original condition attributes. Extensive experiments were conducted to validate the rationality, feasibility, and effectiveness of AG-NBC. Our findings showed that the attribute groups chosen for NBC can accurately represent attribute dependencies and reduce overlaps between different posterior probability densities. In addition, comparative results with NBC, flexible NBC (FNBC), the tree-augmented Bayes network (TAN), gain ratio-based attribute weighted naive Bayes (GRAWNB), averaged one-dependence estimators (AODE), weighted AODE (WAODE), independent component analysis-based NBC (ICA-NBC), the hidden naive Bayesian (HNB) classifier, and the correlation-based feature weighting filter for naive Bayes (CFW) show that AG-NBC obtains statistically better testing accuracies, higher areas under the receiver operating characteristic curve (AUCs), and lower probability mean square errors (PMSEs) than the other Bayesian classifiers. The experimental results demonstrate that AG-NBC is a valid and efficient approach for alleviating the attribute independence assumption.
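A small sketch of how per-group posteriors can be combined under a grouped naive-Bayes assumption; the per-group models here are placeholders for the RVFL networks with SoftMax layers mentioned above:

```python
# Sketch of combining per-group posteriors under a naive-Bayes assumption
# across attribute groups. The per-group posterior arrays stand in for the
# outputs of the trained RVFL networks (an assumption for illustration).
import numpy as np

def combine_group_posteriors(group_posteriors, class_prior):
    """group_posteriors: list of arrays P(y | x_G), one per attribute group.
    Since P(x_G | y) is proportional to P(y | x_G) / P(y), the grouped
    naive Bayes posterior is P(y | x) ∝ P(y) · Π_G [ P(y | x_G) / P(y) ]."""
    log_post = np.log(class_prior)
    for p in group_posteriors:
        log_post += np.log(p) - np.log(class_prior)
    log_post -= log_post.max()  # numerical stability before exponentiation
    post = np.exp(log_post)
    return post / post.sum()
```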
Mobile edge computing (MEC) provides edge services to users in a distributed and on-demand manner. Due to the heterogeneity of edge applications, deploying latency- and resource-intensive applications on resource-constrained devices is a key challenge for service providers, especially when the underlying edge infrastructures are fault- and error-prone. In this paper, we propose a fault tolerance approach named DFGP for enforcing mobile service fault tolerance in MEC. It synthesizes a generative optimization network (GON) model for predicting resource failure and a deep deterministic policy gradient (DDPG) model for yielding preemptive migration decisions. We show through extensive simulation experiments that DFGP is more effective than existing methods in fault detection and in guaranteeing quality of service, in terms of fault detection accuracy, migration efficiency, task migration time, task scheduling time, and energy consumption.
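A schematic sketch of the preemptive-migration loop, with failure_predictor standing in for the GON model and actor for the trained DDPG policy; both interfaces are assumptions made for illustration, not the paper's actual API:

```python
# Schematic decision loop only: the models and thresholds are placeholders.
import numpy as np

def migration_step(node_state, service_state, failure_predictor, actor,
                   migrate, threshold=0.5):
    # Predict the probability that the current edge node fails soon (GON role).
    p_fail = failure_predictor(node_state)
    if p_fail > threshold:
        # The DDPG actor maps the observed state to a continuous action,
        # interpreted here as scores over candidate target nodes.
        scores = actor(np.concatenate([node_state, service_state]))
        target = int(np.argmax(scores))
        migrate(target)  # preemptive migration before the fault occurs
        return target
    return None
```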
Exemplar-based image translation involves converting semantic masks into photorealistic images that adopt the style of a given exemplar. However, most existing GAN-based translation methods fail to produce photorealistic results. In this study, we propose a new diffusion model-based approach for generating high-quality images that are semantically aligned with the input mask and resemble the exemplar in style. The proposed method trains a conditional denoising diffusion probabilistic model (DDPM) with a SPADE module to integrate the semantic mask. We then use a novel contextual loss and an auxiliary color loss to guide the optimization process, resulting in images that are visually pleasing and semantically consistent. We demonstrate that our method outperforms state-of-the-art approaches in terms of both visual quality and quantitative metrics.
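A hedged sketch of what such a combined training objective can look like, assuming the denoiser returns both the predicted noise and an estimate of the clean image; the loss weights and the per-channel color loss below are illustrative guesses, and contextual_loss is assumed to be supplied (e.g., a VGG-feature contextual loss):

```python
# Illustrative combined objective: DDPM noise prediction + contextual loss
# toward the exemplar's style + a simple auxiliary color-statistics loss.
import torch
import torch.nn.functional as F

def training_loss(denoiser, x0, mask, exemplar, t, noise,
                  contextual_loss, w_ctx=0.1, w_color=0.01):
    # Standard DDPM objective: predict the injected noise at timestep t,
    # conditioning on the semantic mask (SPADE-style modulation assumed
    # to happen inside the denoiser).
    pred_noise, x0_pred = denoiser(x0, mask, t, noise)
    l_ddpm = F.mse_loss(pred_noise, noise)
    # Contextual loss pulls the predicted image toward the exemplar's style.
    l_ctx = contextual_loss(x0_pred, exemplar)
    # Auxiliary color loss: match per-channel means of prediction and exemplar.
    l_color = F.l1_loss(x0_pred.mean(dim=(2, 3)), exemplar.mean(dim=(2, 3)))
    return l_ddpm + w_ctx * l_ctx + w_color * l_color
```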
This article presents LoRaDIP, a novel low-light image enhancement (LLIE) model based on deep image priors (DIPs). While DIP-based enhancement models are known for their zero-shot learning, their expensive computation...