ISBN (digital): 9798331506452
ISBN (print): 9798331506469
Another significant research issue is data indexing and retrieval in the cloud computing context, given that the volume, velocity, and variety of data are escalating pervasively. Traditional indexing techniques have proved unable to meet the requirements of real-time cloud applications. This research focuses on the use of Random Forest algorithms as a new approach to smart cloud data indexing and search. The proposed method exploits the inherent parallelism and scaling properties of Random Forest algorithms to design a distributed and robust indexing scheme that can efficiently process large-scale cloud data. During the indexing phase, feature extraction takes place, followed by partitioning of the data and the construction of several decision trees to support search. In data retrieval experiments on benchmark cloud data sets, the results indicate that the proposed Random Forest-based indexing achieves an average accuracy of 92.4% and is 15-20% more efficient than traditional indexing methods. The method also remains robust under data skew and node failure, providing reliable performance amid frequent changes in cloud infrastructure. The introduction of Random Forest algorithms into cloud data indexing is a promising direction for scalable and efficient data discovery amidst the growing big data problem. The developments proposed in this research point toward future work on intelligent cloud data management and create new opportunities for this topic within the field of cloud computing.
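
The abstract does not include the algorithm itself; as a rough illustration of the routing idea, the hedged Python sketch below trains a Random Forest to map item features to data partitions and routes a query to the most probable partitions. All names (route_query, n_partitions) and the toy partition labels are assumptions, not the paper's pipeline.

    # Illustrative sketch only: a Random Forest learns a feature -> partition
    # mapping, then a query is searched only in the partitions the forest
    # considers most likely, instead of scanning every shard.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    n_partitions = 8  # assumed shard count

    # Toy "cloud data": 10k items with 16 extracted features each.
    X = rng.normal(size=(10_000, 16))
    # Partition labels would come from e.g. clustering or consistent hashing;
    # here a simple sign pattern of the first three features stands in.
    labels = ((X[:, 0] > 0).astype(int) * 4
              + (X[:, 1] > 0).astype(int) * 2
              + (X[:, 2] > 0).astype(int))

    # Indexing phase: fit the forest on (features, partition) pairs.
    index = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
    index.fit(X, labels)

    def route_query(q, top_k=2):
        """Return the top_k partitions most likely to contain matches for q."""
        proba = index.predict_proba(q.reshape(1, -1))[0]
        return index.classes_[np.argsort(proba)[::-1][:top_k]]

    print(route_query(rng.normal(size=16)))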
ISBN (digital): 9798331513054
ISBN (print): 9798331513061
Anomaly detection in Service Function Chains (SFCs) is essential for ensuring the security of edge-cloud networks. However, edge servers in the Industrial Internet of Things (IIoT) face challenges in meeting the real-time processing requirements of high-precision anomaly detection due to limited computational resources. To address this issue, we first propose an architectural framework for in-band measurement of cloud-native SFCs, which can efficiently detect the state information of virtual network functions (VNFs). Second, we propose LightSFC, a lightweight distributed anomaly detection model for edge-cloud network service function chains. LightSFC achieves comprehensive awareness of the SFC state by collecting multi-source information from both the data plane and the control plane, and uses a lightweight deep autoencoder model for proactive anomaly detection. Our experimental results show that LightSFC can rapidly detect anomalies with lower resource overhead. Compared with other methods, LightSFC exhibits superior performance in terms of accuracy, precision, recall, and F1-score, substantiating its efficacy in SFC anomaly detection for edge-cloud networks.
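
As a rough illustration of the detection mechanism described (not LightSFC itself), the following PyTorch sketch trains a small autoencoder on normal telemetry and flags samples whose reconstruction error exceeds a calibrated threshold; the layer widths and the mean-plus-three-sigma threshold are illustrative assumptions.

    import torch
    import torch.nn as nn

    class LightAE(nn.Module):
        # Small encoder/decoder; widths are illustrative, not LightSFC's.
        def __init__(self, dim=32):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(dim, 16), nn.ReLU(), nn.Linear(16, 8))
            self.dec = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, dim))
        def forward(self, x):
            return self.dec(self.enc(x))

    model = LightAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    normal = torch.randn(2048, 32)  # stand-in for multi-source VNF telemetry

    # Train to reconstruct normal traffic only.
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(normal), normal)
        loss.backward()
        opt.step()

    # Flag a sample as anomalous when its reconstruction error exceeds a
    # threshold calibrated on normal traffic (assumed: mean + 3 std).
    with torch.no_grad():
        err = ((model(normal) - normal) ** 2).mean(dim=1)
        thresh = err.mean() + 3 * err.std()
        print((((model(torch.randn(4, 32) * 5) - torch.randn(4, 32) * 5) ** 2)
               .mean(dim=1) > thresh))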
ISBN (print): 9781713845393
Finding diverse and representative Pareto solutions from the Pareto front is a key challenge in multi-objective optimization (MOO). In this work, we propose a novel gradient-based algorithm for profiling the Pareto front using Stein variational gradient descent (SVGD). We also provide a counterpart of our method based on Langevin dynamics. Our methods iteratively update a set of points in parallel, pushing them toward the Pareto front with multiple-gradient descent while encouraging diversity between the particles via the repulsive-force mechanism in SVGD or the diffusion noise in Langevin dynamics. Compared with existing gradient-based methods that require predefined preference functions, our method works efficiently in high-dimensional problems and obtains more diverse solutions evenly distributed along the Pareto front. Moreover, our methods are theoretically guaranteed to converge to the Pareto front. We demonstrate the effectiveness of our method, especially the SVGD algorithm, through extensive experiments, showing its superiority over existing gradient-based algorithms.
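
To make the particle update concrete, here is a hedged NumPy sketch on a toy two-objective problem: each particle takes a combined descent step (an equal-weight average standing in for the paper's multiple-gradient descent) plus the SVGD repulsion term. The fixed kernel bandwidth and step size are assumptions; SVGD implementations typically use a median heuristic instead.

    import numpy as np

    rng = np.random.default_rng(0)
    a, b = np.array([0.0, 0.0]), np.array([1.0, 1.0])

    def grads(x):
        # Gradients of the two toy objectives f1 = ||x-a||^2, f2 = ||x-b||^2;
        # the Pareto set is the segment between a and b.
        return 2 * (x - a), 2 * (x - b)

    def rbf(X, h=0.1):
        # RBF kernel matrix and its gradient w.r.t. the first argument.
        d = X[:, None, :] - X[None, :, :]
        K = np.exp(-(d ** 2).sum(-1) / h)
        gK = -2.0 / h * d * K[..., None]   # dK(x_j, x_i)/dx_j
        return K, gK

    X = rng.uniform(-0.5, 1.5, size=(50, 2))   # the particle set
    for _ in range(500):
        g1, g2 = grads(X)
        descent = -(g1 + g2) / 2       # equal-weight stand-in for MGDA
        K, gK = rbf(X)
        # SVGD update: kernel-smoothed descent plus repulsive kernel gradient.
        phi = (K @ descent + gK.sum(axis=0)) / len(X)
        X += 0.05 * phi

    print(X[:5])  # particles spread along the segment between a and b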
ISBN (digital): 9798350390155
ISBN (print): 9798350390162
Magnetic Resonance Imaging (MRI) plays a crucial role in diagnosing and treating various diseases. However, the long acquisition time of MRI scans often leads to patient discomfort and motion artifacts, so accelerating MRI is essential. Researchers have combined deep learning with compressed sensing and parallel imaging to advance MRI. However, many existing methods fail to effectively recover the fine details and structures in magnetic resonance images. To address these challenges, we propose a novel model for accelerated parallel MRI reconstruction. Our model incorporates a high-frequency fidelity method into the reconstruction process, explicitly emphasizing the recovery of high-frequency information. Additionally, we consider the joint prior distribution among the images reconstructed from each coil. Using a variable-splitting approach, the proposed model is unrolled into an end-to-end network termed HFF-Net. Experimental results demonstrate that our method outperforms state-of-the-art techniques, yielding high-quality MR images with enhanced detail and fine structure recovery.
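
The paper's exact high-frequency fidelity term is not given in the abstract; the PyTorch sketch below shows one plausible form, penalizing reconstruction error only on high spatial frequencies selected by an FFT-domain mask. The mask radius and the way the term is weighted into the total loss are illustrative assumptions.

    import torch

    def high_freq_loss(pred, target, radius=0.25):
        # Compare spectra and keep only frequencies outside a centered disk,
        # so the loss focuses on fine detail rather than smooth content.
        P = torch.fft.fftshift(torch.fft.fft2(pred), dim=(-2, -1))
        T = torch.fft.fftshift(torch.fft.fft2(target), dim=(-2, -1))
        h, w = pred.shape[-2:]
        yy, xx = torch.meshgrid(torch.linspace(-0.5, 0.5, h),
                                torch.linspace(-0.5, 0.5, w), indexing="ij")
        mask = ((xx ** 2 + yy ** 2).sqrt() > radius).to(pred.dtype)
        return ((P - T).abs() * mask).mean()

    # Hypothetical usage: total = data_fidelity + lambda_hf * high_freq_loss(...)
    x, y = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    print(high_freq_loss(x, y).item())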
ISBN (print): 9781665417396
In recent years, the big data processing platform Hadoop and the parallel computing model MapReduce have achieved good results in mass data processing. However, since Hadoop does not design input data streams for image files and MapReduce does not define an interface matching the data types of image files, the technology for running MapReduce programs on a Hadoop cluster to process massive numbers of image files in parallel is not mature. In addition, because image characteristic data (such as pixel intensity) is often skewed, parallel image processing on a heterogeneous Hadoop cluster triggers large data transfers and load imbalance among nodes, reducing the efficiency of MapReduce jobs. To tackle the aforementioned problems, this paper carries out the following work: (1) the Hadoop platform is optimized to efficiently process massive images; (2) the MapReduce model is extended and an interface is designed to support parallel image processing; (3) a new partitioner, namely PIHA, is proposed to mitigate data skew in heterogeneous Hadoop 3.2.0. Experiments show that the proposed method can effectively improve the efficiency of parallel image processing with MapReduce on Hadoop.
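
PIHA itself is a Hadoop partitioner (normally written against Hadoop's Java Partitioner API); purely to illustrate the skew-mitigation idea, the Python sketch below greedily assigns the hottest keys to the node with the lowest load relative to its capacity. The greedy policy and all names are assumptions, not PIHA's actual algorithm.

    def skew_aware_partition(key_counts, node_capacity):
        """Assign keys to nodes in proportion to capacity, hottest keys first
        (a longest-processing-time greedy heuristic). Illustrative only."""
        load = {n: 0.0 for n in node_capacity}
        assign = {}
        for key, count in sorted(key_counts.items(), key=lambda kv: -kv[1]):
            # Pick the node with the lowest load relative to its capacity,
            # so faster nodes in a heterogeneous cluster absorb more data.
            n = min(load, key=lambda n: load[n] / node_capacity[n])
            assign[key] = n
            load[n] += count
        return assign

    # Hypothetical skewed key histogram and two nodes of unequal capacity.
    print(skew_aware_partition({"sky": 900, "road": 300, "tree": 250, "car": 50},
                               {"node0": 1.0, "node1": 2.0}))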
ISBN (print): 9781728165820
Deciding privacy-type properties of deterministic cryptographic protocols, such as anonymity and strong secrecy, can be reduced to deciding the symbolic equivalence of processes, where each process is described by a set of possible symbolic traces. This equivalence is parameterized by a deduction system that describes which actions and observations an intruder can perform on a running system. In this paper we present a notion of finitary deduction systems. For this class of deduction systems, we first reduce the problem of the equivalence of processes with no disequations to solving a reachability problem on each symbolic trace of one process and then testing whether each solution found is a solution of a related trace in the other process. We then extend this reduction to the case of generic deterministic finite processes in which symbolic traces may contain disequalities.
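
The reduction can be read as a two-step procedure; the Python skeleton below mirrors that structure with the deduction-system machinery abstracted behind callbacks. All names and the toy usage are hypothetical; this shows only the shape of the check, not the paper's decision procedure.

    def included(traces_p, traces_q, solve, related, satisfies):
        # One direction of the equivalence check: every intruder solution of
        # every symbolic trace of P must be a solution of a related trace of Q.
        return all(
            any(satisfies(sol, t) for t in related(trace, traces_q))
            for trace in traces_p
            for sol in solve(trace))

    def equivalent(P, Q, solve, related, satisfies):
        # Symbolic equivalence requires inclusion in both directions.
        return (included(P, Q, solve, related, satisfies)
                and included(Q, P, solve, related, satisfies))

    # Toy usage: "solutions" are the traces themselves and every trace of the
    # other process counts as related; real callbacks would solve reachability.
    P = [("a", "b"), ("a", "c")]
    Q = [("a", "b"), ("a", "c")]
    print(equivalent(P, Q, solve=lambda tr: [tr],
                     related=lambda tr, other: other,
                     satisfies=lambda sol, tr: sol == tr))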
ISBN (digital): 9798350355413
ISBN (print): 9798350355420
Lung nodules, as an initial symptomatic indication of lung cancer, have been a significant concern for human health and well-being. To achieve more accurate segmentation of lung nodules in CT images, a 3D U-Net segmentation method is proposed that incorporates an attention mechanism and dense dilated (atrous) convolution to address the U-Net network's incomplete acquisition of spatial features of lung nodules. First, a 3D Convolutional Block Attention Module is added after the skip connection; by optimizing parallel channel and spatial attention modules, it avoids the loss of spatial information caused by sequence-dependent channel attention and improves the model's attention to key features, helping the network focus on important lung nodule characteristics. Second, by adding a Residual Atrous Spatial Pyramid Pooling block at the bottleneck of the network, the network can enlarge its receptive field while maintaining high resolution and capture more information between CT slices. In this paper, LUNA16 and a case dataset from a hospital in Southwest China are used; the experimental results show that the Dice, Recall, Precision, and Jaccard metrics improve by 3.14%, 2.06%, 1.67%, and 1.69%, respectively, and the segmentation results of the proposed model are closer to the gold standard than those of other mainstream methods.
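
As a hedged sketch of the attention component, the PyTorch block below implements a standard sequential 3D CBAM (channel then spatial attention); the paper's parallel arrangement of the two modules and its exact placement after the skip connections are not reproduced, and the layer sizes are assumptions.

    import torch
    import torch.nn as nn

    class CBAM3D(nn.Module):
        # Minimal 3D CBAM-style block; this sequential form is the common
        # baseline, not the paper's parallel variant.
        def __init__(self, ch, reduction=8):
            super().__init__()
            self.mlp = nn.Sequential(nn.Linear(ch, ch // reduction), nn.ReLU(),
                                     nn.Linear(ch // reduction, ch))
            self.spatial = nn.Conv3d(2, 1, kernel_size=7, padding=3)

        def forward(self, x):                      # x: (B, C, D, H, W)
            b, c = x.shape[:2]
            # Channel attention from global average- and max-pooled statistics.
            avg = x.mean(dim=(2, 3, 4))
            mx = x.amax(dim=(2, 3, 4))
            ca = torch.sigmoid(self.mlp(avg) + self.mlp(mx)).view(b, c, 1, 1, 1)
            x = x * ca
            # Spatial attention from channel-wise mean and max maps.
            sa = torch.sigmoid(self.spatial(
                torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
            return x * sa

    print(CBAM3D(16)(torch.randn(2, 16, 8, 32, 32)).shape)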
ISBN (digital): 9798350373820
ISBN (print): 9798350373837
Cloud removal is vital for the analysis of optical satellite images. To alleviate the impact of thick clouds, recent advances integrate deep learning with multimodal data. To relax the requirement of paired training samples, existing methods use cycle-consistent architectures to learn the relation between cloudy and cloud-free images from unpaired training samples. For thick cloud removal based on unpaired training data, the relevant studies are insufficient and face two challenges: 1) information in non-cloudy areas is not well preserved after cloud removal; 2) the recovered information lacks consistency with global textures and structures. Based on optical-SAR fusion and cycle-consistent training, this paper proposes the Multiscale Cycle-consistent Fusion (MCF) model. MCF designs a preservation loss to overcome the first challenge and proposes a global-local discriminator combined with a Parallel Dilated Channel-weighted Module (PDCM) for the second. MCF is evaluated on both a simulated dataset and a real dataset, demonstrating its effectiveness.
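
The abstract does not spell out the preservation loss; one plausible form, sketched below in PyTorch, is a masked L1 term that forces the output to match the input everywhere outside the cloud mask. The mask convention (1 = cloud, 0 = clear) and the normalization are assumptions.

    import torch

    def preservation_loss(cloudy, restored, cloud_mask):
        # Penalize changes only in clear regions, so the generator cannot
        # rewrite image content that was never occluded by cloud.
        clear = 1.0 - cloud_mask
        return (clear * (restored - cloudy).abs()).sum() / clear.sum().clamp(min=1.0)

    x = torch.rand(1, 3, 64, 64)
    mask = (torch.rand(1, 1, 64, 64) > 0.7).float()   # hypothetical cloud mask
    print(preservation_loss(x, x.clone(), mask).item())  # 0: clear areas untouched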
ISBN (digital): 9798350374407
ISBN (print): 9798350374414
Estimating human pose in complex multi-frame situations is a challenging task that has attracted intensive research. Although 3D human pose estimation methods have achieved remarkable results in scenes based on single images, their performance often degrades once these models are applied to video sequences. Common problems include an inability to cope with motion blur, out-of-focus video, and occlusion of human poses. To solve these problems, this paper proposes a feature extraction and representation model, MHMF, used in the feature extraction stage. The initial features extracted by the HRNet-w32 backbone are guided by heat maps to an attention layer, which improves the network's attention to areas important for predicting human-pose keypoints. At the same time, fusing the aggregated heat map with the backbone's heat map improves the spatiotemporal consistency of the keypoints. In addition, to improve the accuracy of mesh pose estimation under occlusion, this paper proposes a transformer-based model, NewDSTformer, which adjusts the structure of the Transformer encoder, deepens the encoder, and combines it with a dynamic progressive attention-masking method. The model can adapt to different inputs, handle the positional relationships of local keypoints, and detect accurately even under occlusion. Evaluated on the 3DPW dataset, it improves accuracy by 0.3%, indicating that this paper effectively improves the performance of 3D human mesh reconstruction.
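
To illustrate the heat-map guidance idea (not MHMF's actual layers), the PyTorch sketch below collapses keypoint heat maps into a spatial gate that re-weights backbone features, with a residual path so unattended signal is kept; the 1x1 gating convolution is an assumption.

    import torch
    import torch.nn as nn

    class HeatmapGuidedAttention(nn.Module):
        # Keypoint heat maps -> single-channel spatial gate -> feature re-weighting.
        def __init__(self, n_joints):
            super().__init__()
            self.gate = nn.Conv2d(n_joints, 1, kernel_size=1)

        def forward(self, feats, heatmaps):    # feats: (B,C,H,W), heatmaps: (B,J,H,W)
            attn = torch.sigmoid(self.gate(heatmaps))
            return feats * attn + feats        # residual path keeps the raw features

    f = torch.randn(2, 32, 64, 48)             # backbone features (e.g. HRNet-w32 stage)
    h = torch.rand(2, 17, 64, 48)              # 17 keypoint heat maps
    print(HeatmapGuidedAttention(17)(f, h).shape)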
ISBN (print): 9798350337488
With the exponential growth of biomedical knowledge in unstructured text repositories such as PubMed, it is imperative to establish a knowledge-graph-style, efficiently searchable, and targeted database that can support the information-retrieval needs of researchers and clinicians. To mine knowledge from graph databases, most previous methods view a triple in a graph as the basic processing unit and embed the triple's elements (i.e., drugs/chemicals, proteins/genes, and their interaction) as separate embedding matrices, which cannot capture the semantic correlation among triple elements. To remedy the loss of semantic correlation caused by disjoint embeddings, we propose a novel approach that learns triple embeddings by combining entities and interactions into a unified representation. Furthermore, traditional methods usually learn triple embeddings from scratch, which cannot take advantage of the rich domain knowledge embedded in pre-trained models; this is also a significant reason why they cannot distinguish the differences implied by the same entity across multi-interaction triples. In this paper, we propose a novel fine-tuning-based approach that learns better triple embeddings by creating weakly supervised signals from pre-trained knowledge graph embeddings. The method automatically samples triples from knowledge graphs and estimates their pairwise similarity from pre-trained embedding models. The triples are then fed pairwise into a Siamese-like neural architecture, where the triple representation is fine-tuned, bootstrapped by the triple similarity scores. Finally, we demonstrate that triple embeddings learned with our method can be readily applied to several downstream applications (e.g., triple classification and triple clustering). We evaluated the proposed method on two open-source drug-protein knowledge graphs constructed from PubMed abstracts, as provided by BioCreative. Our method achieves consistent improvement in both tasks.
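
As a hedged sketch of the Siamese fine-tuning loop, the PyTorch code below encodes two triples with a shared encoder and regresses their cosine similarity toward a teacher score standing in for the pre-trained-embedding similarity; the dimensions, encoder layers, and random teacher scores are illustrative assumptions.

    import torch
    import torch.nn as nn

    class TripleEncoder(nn.Module):
        # Shared encoder mapping a (head, interaction, tail) triple to one
        # unified vector, so the triple is embedded jointly, not element-wise.
        def __init__(self, dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))
        def forward(self, h, r, t):
            return self.net(torch.cat([h, r, t], dim=-1))

    enc = TripleEncoder()
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    cos = nn.CosineSimilarity(dim=-1)

    for _ in range(100):
        # Sampled triple pairs; teacher_sim stands in for similarity scores
        # estimated from a pre-trained knowledge graph embedding model.
        a = [torch.randn(32, 64) for _ in range(3)]
        b = [torch.randn(32, 64) for _ in range(3)]
        teacher_sim = torch.rand(32)
        loss = nn.functional.mse_loss(cos(enc(*a), enc(*b)), teacher_sim)
        opt.zero_grad(); loss.backward(); opt.step()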