Cloud storage auditing research is dedicated to solving the data integrity problem of outsourced storage on the cloud. In recent years, researchers have proposed various cloud storage auditing schemes using different techniques. While these studies are elegant in theory, they assume an ideal cloud storage model; that is, they assume that the cloud provides the storage and compute interfaces required by the proposed schemes. However, this does not hold for mainstream cloud storage systems, because these systems only provide read and write interfaces but not a compute interface. To bridge this gap, this work proposes a serverless computing-based cloud storage auditing system for existing mainstream cloud object storage. The proposed system leverages existing cloud storage auditing schemes as a basic building block and makes two adaptations. One is that we use the read interface of cloud object storage to support the block data requests of a traditional cloud storage auditing scheme. The other is that we employ the serverless computing paradigm to support the block data computation traditionally required. Leveraging the characteristics of serverless computing, the proposed system realizes economical, pay-as-you-go cloud storage auditing. The proposed system also supports mainstream cloud storage upper-layer applications (e.g., file preview) by not modifying data formats when embedding authentication tags for later auditing. We prototyped and open-sourced the proposed system on a mainstream cloud service, i.e., Tencent Cloud. Experimental results show that the proposed system is efficient and promising for practical use. For 40 GB of data, auditing takes approximately 98 s using serverless computation. The economic cost is 120.48 CNY per year, of which serverless computing accounts for only 46%. In contrast, no existing studies have reported cloud storage auditing results for real-world cloud services.
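A minimal sketch of the block-sampling audit flow described above, under assumptions not stated in the abstract: fixed-size blocks, simple HMAC tags instead of the scheme's actual tag construction, and a local file standing in for an object-storage ranged GET. Function names such as read_block and audit are illustrative only.

```python
# Minimal sketch of sampled block auditing, assuming fixed-size blocks and
# HMAC tags; the paper's actual auditing scheme and tag format may differ.
import hashlib
import hmac
import os
import random

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB blocks (assumed)
KEY = os.urandom(32)          # auditor's secret key (illustrative)

def tag_block(data: bytes, index: int) -> bytes:
    """Bind the tag to both block content and block position."""
    return hmac.new(KEY, index.to_bytes(8, "big") + data, hashlib.sha256).digest()

def read_block(path: str, index: int) -> bytes:
    """Stand-in for an object-storage ranged read (e.g., an HTTP Range GET)."""
    with open(path, "rb") as f:
        f.seek(index * BLOCK_SIZE)
        return f.read(BLOCK_SIZE)

def audit(path: str, tags: list[bytes], sample_size: int = 10) -> bool:
    """Challenge a random sample of blocks and verify their tags."""
    total_blocks = len(tags)
    challenge = random.sample(range(total_blocks), min(sample_size, total_blocks))
    return all(
        hmac.compare_digest(tag_block(read_block(path, i), i), tags[i])
        for i in challenge
    )
```

In the described system, the body of audit would run inside a serverless function so that computation is billed only when an audit is triggered.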
The importance of secure data sharing in fog computing is increasing due to the growing number of Internet of Things (IoT) devices. This article addresses the privacy and security issues brought up by data sharing in the context of IoT fog computing. The suggested framework, called "BlocFogSec", secures key management and data sharing through blockchain consensus and smart contracts. Unlike existing solutions, BlocFogSec utilizes two types of smart contracts for secure key exchange and data sharing, while employing a consensus protocol to validate transactions and maintain blockchain integrity. To process and store data effectively at the network edge, the framework makes use of fog computing, notably reducing latency and raising throughput. It successfully blocks unauthorized access and data breaches by restricting transactions to authorized users. In addition, the framework uses a consensus protocol to validate and add transactions to the blockchain, guaranteeing data accuracy and integrity. To compare BlocFogSec's performance to that of other models, a number of simulations are conducted. The simulation results indicate that BlocFogSec consistently outperforms existing models, such as Security Services for Fog Computing (SSFC) and Blockchain-based Key Management Scheme (BKMS), in terms of throughput (up to 5135 bytes per second), latency (as low as 7 ms), and resource utilization (70% to 92%). The evaluation also takes into account attack-defending accuracy (up to 100%), precision (up to 100%), and recall (up to 99.6%), demonstrating BlocFogSec's effectiveness in identifying and preventing potential attacks.
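A toy sketch of two ideas named above, restricting transactions to authorized participants and keeping a tamper-evident chain of blocks. It is not BlocFogSec's smart-contract or consensus logic; the node identities and transaction fields are assumptions.

```python
# Toy permissioned ledger: only authorized node IDs may submit transactions,
# and blocks are hash-chained so tampering is detectable.
import hashlib
import json
import time

AUTHORIZED_NODES = {"fog-node-1", "fog-node-2"}  # assumed identities

class Ledger:
    def __init__(self):
        self.chain = [{"index": 0, "prev": "0" * 64, "txs": [], "ts": time.time()}]

    @staticmethod
    def block_hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def add_block(self, sender, txs):
        if sender not in AUTHORIZED_NODES:
            raise PermissionError(f"{sender} is not authorized to submit transactions")
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "prev": self.block_hash(prev),
                 "txs": txs, "ts": time.time()}
        self.chain.append(block)
        return block

    def verify(self):
        return all(self.chain[i]["prev"] == self.block_hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = Ledger()
ledger.add_block("fog-node-1", [{"type": "key_exchange", "to": "fog-node-2"}])
assert ledger.verify()
```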
In the last decade, technical advancements and faster Internet speeds have also led to an increasing number of mobile devices and users. Thus, all contributors to society, whether young or old members, can use these mobile apps. The use of these apps eases our daily lives, and all customers who need any type of service can access it easily, comfortably, and efficiently through mobile apps. Particularly, Saudi Arabia greatly depends on digital services to assist people and visitors. Such mobile devices are used in organizing daily work schedules and services, particularly during two large occasions, Umrah and Hajj. However, pilgrims encounter mobile app issues such as slowness, conflict, unreliability, or user-unfriendliness. Pilgrims comment on these issues on mobile app platforms through reviews of their experiences with these digital services. Scholars have made several attempts to solve such mobile issues by reporting bugs or non-functional requirements by utilizing user reviews. However, solving such issues is a great challenge, and the issues still exist. Therefore, this study aims to propose a hybrid deep learning model to classify and predict mobile app software issues encountered by millions of pilgrims during the Hajj and Umrah periods from the user perspective. Firstly, a dataset was constructed using user-generated comments from relevant mobile apps using natural language processing methods, including information extraction, the annotation process, and pre-processing steps, considering a multi-class classification problem. Then, several experiments were conducted using common machine learning classifiers, Artificial Neural Networks (ANN), Long Short-Term Memory (LSTM), and Convolutional Neural Network Long Short-Term Memory (CNN-LSTM) architectures, to examine the performance of the proposed model. Results show 96% in F1-score and accuracy, and the proposed model outperformed the mentioned models.
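A minimal Keras sketch of the CNN-LSTM architecture mentioned above for multi-class review classification. The vocabulary size, sequence length, number of classes, and all hyperparameters are assumptions, and dummy tokenized arrays stand in for the pre-processed pilgrim app comments.

```python
# Sketch of a CNN-LSTM text classifier; not the paper's exact configuration.
import numpy as np
import tensorflow as tf

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 100, 4  # assumed values

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, 128),
    tf.keras.layers.Conv1D(64, 5, activation="relu"),  # local n-gram features
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.LSTM(64),                          # sequence modelling
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy integer-encoded reviews and labels for a quick smoke test.
x = np.random.randint(0, VOCAB_SIZE, size=(32, MAX_LEN))
y = np.random.randint(0, NUM_CLASSES, size=(32,))
model.fit(x, y, epochs=1, verbose=0)
```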
Anomaly detection (AD) has been extensively studied and applied across various scenarios in recent years. However, gaps remain between the current performance and the desired recognition accuracy required for practical applications. This paper analyzes two fundamental failure cases in the baseline AD model and identifies key reasons that limit the recognition accuracy of existing approaches. Specifically, from Case-1 we found that the main factor detrimental to current AD methods is that the inputs to the recovery model contain a large number of detailed features to be recovered, which leads to normal/abnormal areas not being/being recovered into their original state. From Case-2 we surprisingly found that abnormal areas that cannot be recognized in image-level representations can be easily recognized in feature-level representations. Based on the above observations, we propose a novel recover-then-discriminate (ReDi) framework for AD. ReDi takes a self-generated feature map (e.g., a histogram of oriented gradients) and a selected prompted image as explicit input information to address the issue identified in Case-1. Additionally, a feature-level discriminative network is introduced to amplify abnormal differences between the recovered and input representations. Extensive experiments on two widely used yet challenging AD datasets demonstrate that ReDi achieves state-of-the-art recognition accuracy.
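A sketch of the two ingredients named above: a self-generated HOG feature map and a feature-level (rather than pixel-level) discrepancy score between the input and its recovered version. The recovery network and the learned discriminative network are omitted; a plain cosine-distance score is used here purely for illustration.

```python
# Sketch only: HOG feature map + feature-space anomaly score, not ReDi itself.
import numpy as np
from skimage.feature import hog

def hog_map(image: np.ndarray) -> np.ndarray:
    """Histogram-of-oriented-gradients descriptor for a grayscale image."""
    return hog(image, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

def feature_level_score(input_img: np.ndarray, recovered_img: np.ndarray) -> float:
    """Anomaly score = 1 - cosine similarity of the two HOG descriptors."""
    f_in, f_rec = hog_map(input_img), hog_map(recovered_img)
    denom = np.linalg.norm(f_in) * np.linalg.norm(f_rec) + 1e-8
    return 1.0 - float(np.dot(f_in, f_rec) / denom)

img = np.random.rand(128, 128)
print(feature_level_score(img, img))                       # 0: perfectly recovered
print(feature_level_score(img, np.clip(img + 0.3, 0, 1)))  # > 0: recovery differs
```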
Acute Bilirubin Encephalopathy (ABE) is a significant threat to neonates, and it leads to disability and high mortality rates. Detecting and treating ABE promptly is important to prevent further complications and long-term damage. Several studies have explored ABE detection; however, they often face limitations in classification due to reliance on a single modality of Magnetic Resonance Imaging (MRI). To tackle this problem, the authors propose a Tri-M2MT model for precise ABE detection by using tri-modality MRI scans. The scans include T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), and apparent diffusion coefficient maps to get in-depth information. Initially, the tri-modality MRI scans are collected and preprocessed by using an Advanced Gaussian Filter for noise reduction and Z-score normalisation for data standardisation. An Advanced Capsule Network was utilised to extract relevant features, using the Snake Optimization Algorithm to select optimal features based on feature correlation with the aim of minimising complexity and enhancing detection accuracy. Furthermore, a multi-transformer approach was used for feature fusion and to identify feature correlations. Finally, accurate ABE diagnosis is achieved through the utilisation of a SoftMax layer. The performance of the proposed Tri-M2MT model is evaluated across various metrics, including accuracy, specificity, sensitivity, F1-score, and ROC curve analysis, and the proposed methodology provides better performance compared to existing methodologies.
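A small sketch of the preprocessing step described above, Gaussian smoothing followed by z-score normalisation of each modality before stacking. The volume shapes, the sigma value, and the dummy arrays are assumptions; the paper's "Advanced Gaussian Filter" is represented here by a standard Gaussian filter.

```python
# Sketch of tri-modality MRI preprocessing: denoise, normalise, stack.
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(volume: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    smoothed = gaussian_filter(volume.astype(np.float32), sigma=sigma)
    return (smoothed - smoothed.mean()) / (smoothed.std() + 1e-8)

# Dummy volumes standing in for T1WI, T2WI, and ADC scans of one neonate.
t1wi, t2wi, adc = (np.random.rand(64, 64, 32) for _ in range(3))
tri_modal = np.stack([preprocess(v) for v in (t1wi, t2wi, adc)], axis=0)
print(tri_modal.shape)  # (3, 64, 64, 32) -> input to the feature extractor
```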
Scalability and personal information privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on exclusive information by aggregating weights from various devices and taking advantage of the device-agnostic environment of web browsers. Nevertheless, relying on a main central server for internet browser-based federated systems can prohibit scalability and interfere with the training process as a result of growing client numbers. Additionally, information relating to the training dataset can possibly be extracted from the distributed weights, potentially reducing the privacy of the local data used for training. In this research paper, we aim to investigate the challenges of scalability and data privacy to increase the efficiency of distributed training systems. As a result, we propose a web-federated learning exchange (WebFLex) framework, which intends to improve the decentralization of the federated learning process. WebFLex is additionally developed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex utilizes peer-to-peer interactions and secure weight exchanges utilizing browser-to-browser web real-time communication (WebRTC), efficiently preventing the need for a main central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex can maintain a durable federated learning procedure even when faced with device disconnections and network instability. Finally, it improves data privacy by utilizing artificial noise, which accomplishes an appropriate balance between accuracy and privacy preservation.
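A numpy sketch of the two mechanisms described above: peer-to-peer weight averaging without a central server, and artificial Gaussian noise added to the weights each peer shares. The WebRTC transport, peer discovery, and noise scale are abstracted away or assumed.

```python
# Sketch of decentralised, noise-protected weight exchange; not WebFLex's code.
import numpy as np

def noisy_weights(weights: np.ndarray, noise_std: float = 0.01) -> np.ndarray:
    """What a browser peer would actually share over its data channels."""
    return weights + np.random.normal(0.0, noise_std, size=weights.shape)

def peer_average(received: list[np.ndarray]) -> np.ndarray:
    """Each peer averages the (noisy) weights it received from its neighbours."""
    return np.mean(np.stack(received, axis=0), axis=0)

# Three peers with locally trained weight vectors (dummy values).
local = [np.random.rand(10) for _ in range(3)]
shared = [noisy_weights(w) for w in local]
updated = [peer_average(shared) for _ in local]   # one fully decentralised round
print(np.allclose(updated[0], updated[1]))        # peers converge to the same model
```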
The integration of technologies like artificial intelligence, 6G, and vehicular ad-hoc networks holds great potential to meet the communication demands of the Internet of Vehicles and drive the advancement of vehicle technologies. However, these advancements also generate a surge in data processing requirements, necessitating the offloading of vehicular tasks to edge servers due to the limited computational capacity of vehicles. Despite recent advancements, the robustness and scalability of the existing approaches with respect to the number of vehicles and edge servers and their resources, as well as privacy, remain a challenge. In this paper, a lightweight offloading strategy that leverages ubiquitous connectivity through the Space Air Ground Integrated Vehicular Network architecture while ensuring privacy preservation is proposed. The Internet of Vehicles (IoV) environment is first modeled as a graph, with vehicles and base stations as nodes and their communication links as edges. Then, vehicular applications are offloaded to suitable servers based on latency using an attention-based heterogeneous graph neural network (HetGNN) model. Further, a differential privacy stochastic gradient descent training mechanism is employed for privacy preservation of vehicles and offloading decisions. Finally, the simulation results demonstrated that the proposed HetGNN method shows good performance with 0.321 s of inference time, which is 42.68%, 63.93%, 30.22%, and 76.04% less than baseline methods such as Deep Deterministic Policy Gradient, Deep Q Learning, Deep Neural Network, and Genetic Algorithm, respectively.
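A toy sketch of the latency-based offloading decision described above: a vehicle node attends over candidate servers and offloads to the one with the lowest predicted latency. The attention scoring, feature layout, and latency model are simplified assumptions; the actual HetGNN is a trained model.

```python
# Toy attention-weighted server selection; illustrative only, not the HetGNN.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def predict_latencies(vehicle_feat, server_feats, W):
    """Attention scores modulate per-server latency estimates (toy model)."""
    scores = softmax(server_feats @ W @ vehicle_feat)                  # attention weights
    base_latency = server_feats[:, 0] / (server_feats[:, 1] + 1e-8)   # load / capacity
    return base_latency * (1.0 - scores)                               # attended estimate

rng = np.random.default_rng(0)
vehicle = rng.random(4)        # e.g., task size, deadline, position features (assumed)
servers = rng.random((5, 4))   # 5 candidate edge/air/space servers
W = rng.random((4, 4))
latencies = predict_latencies(vehicle, servers, W)
print("offload to server", int(np.argmin(latencies)))
```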
The disappearance of Indigenous languages results in a decrease in cultural diversity, hence making the preservation of these languages extremely important. Conventional methods of documentation are lengthy, and the p...
Drug-target interaction (DTI) prediction plays an important role in the process of drug discovery. Existing computational methods treat it as a binary prediction problem, determining whether there are connections between drugs and targets while ignoring relational type information. Considering that the positive or negative effects of DTIs will facilitate the study of the comprehensive mechanisms of multiple drugs on a common target, in this work we model DTIs on signed heterogeneous networks, by categorizing the interaction patterns of DTIs and additionally extracting interactions within drug pairs and target protein pairs. We propose signed heterogeneous graph neural networks (SHGNNs) and further put forward an end-to-end framework for signed DTI prediction, called SHGNN-DTI, which not only adapts to signed bipartite networks, but also can naturally incorporate auxiliary information from drug-drug interactions (DDIs) and protein-protein interactions (PPIs). For the framework, we solve the message passing and aggregation problem on signed DTI networks, and consider different training modes on the whole network consisting of DTIs, DDIs and PPIs. Experiments are conducted on two datasets extracted from DrugBank and related databases, under different settings of initial inputs, embedding dimensions and training modes. The prediction results show excellent performance in terms of metric indicators, and the feasibility is further verified by a case study with two drugs on breast cancer.
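A numpy sketch of signed message passing on a bipartite DTI graph, the core problem the framework addresses: positive and negative interactions are aggregated separately before being combined. The combination rule, dimensions, and random data here are illustrative assumptions, not the SHGNN-DTI update.

```python
# Sketch of one signed message-passing step on a drug-target bipartite graph.
import numpy as np

def signed_message_pass(drug_emb, target_emb, signed_adj):
    """signed_adj[i, j] in {+1, -1, 0} for drug i and target j."""
    pos = (signed_adj > 0).astype(float)
    neg = (signed_adj < 0).astype(float)
    pos_msg = pos @ target_emb / np.maximum(pos.sum(1, keepdims=True), 1)
    neg_msg = neg @ target_emb / np.maximum(neg.sum(1, keepdims=True), 1)
    # Combine self, positive-neighbour, and negative-neighbour information.
    return np.tanh(drug_emb + pos_msg - neg_msg)

rng = np.random.default_rng(1)
drugs, targets, dim = 6, 4, 8
adj = rng.choice([-1, 0, 1], size=(drugs, targets), p=[0.2, 0.6, 0.2])
new_drug_emb = signed_message_pass(rng.random((drugs, dim)),
                                   rng.random((targets, dim)), adj)
print(new_drug_emb.shape)  # (6, 8)
```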
In a crowd density estimation dataset, the annotation of crowd locations is an extremely laborious task, and these annotations are not taken into account in the evaluation process. In this paper, we aim to reduce the annotation cost of crowd datasets and propose a crowd density estimation method based on weakly-supervised learning: in the absence of crowd position supervision information, it directly estimates the number of people by using the number of pedestrians in the image as the supervision information. For this purpose, we design a new training method, which exploits the correlation between global and local image features by incremental learning to train the network. Specifically, we design a parent-child network (PC-Net) focusing on the global and local image respectively, and propose a linear feature calibration structure to train the PC-Net simultaneously: the child network learns feature transfer factors and feature bias weights, and uses these transfer factors and bias weights to linearly calibrate the features extracted from the parent network, improving the convergence of the network by using local features hidden in the crowd images. In addition, we use the pyramid vision transformer as the backbone of the PC-Net to extract crowd features at different levels, and design a global-local feature loss function (L2). We combine it with a crowd counting loss (LC) to enhance the sensitivity of the network to crowd features during the training process, which effectively improves the accuracy of crowd density estimation. The experimental results show that the PC-Net significantly reduces the gap between fully-supervised and weakly-supervised crowd density estimation, and outperforms the comparison methods on five datasets: ShanghaiTech Part A, ShanghaiTech Part B, UCF_CC_50, UCF_QNRF and JHU-CROWD++.
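A minimal numpy sketch of the linear feature calibration and combined loss described above: the child branch supplies a transfer factor and bias that rescale the parent features, and the counting loss LC is combined with a feature loss L2. Feature sizes, the weighting factor lam, and the dummy values are assumptions.

```python
# Sketch of linear feature calibration and the combined LC + L2 objective.
import numpy as np

def calibrate(parent_feat, transfer, bias):
    """Element-wise linear calibration of parent features by the child branch."""
    return transfer * parent_feat + bias

def combined_loss(pred_count, true_count, parent_feat, child_feat, lam=0.1):
    lc = (pred_count - true_count) ** 2            # crowd counting loss (LC)
    l2 = np.mean((parent_feat - child_feat) ** 2)  # global-local feature loss (L2)
    return lc + lam * l2

rng = np.random.default_rng(2)
parent = rng.random(256)                            # global-image features
transfer, bias = rng.random(256), rng.random(256)   # learned by the child network
child = calibrate(parent, transfer, bias)
print(combined_loss(pred_count=118.0, true_count=120.0,
                    parent_feat=parent, child_feat=child))
```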