To address the low accuracy of available diagnostic methods caused by insufficient fault samples, this article proposes a small-sample fault diagnosis method based on the Recurrence Plot (RP), Synchrosqueez...
Home computers are now commonplace. With the rise and prosperity of Internet-era media such as short video, people have ever higher requirements for high-definition status ...
Real-world noise removal is crucial in low-level computer vision. Due to the remarkable generation capabilities of diffusion models, recent attention has shifted towards leveraging diffusion priors for image restorati...
ISBN:
(Print) 9781665496728
Today, due to the development of technology and the advent of Web 2.0 applications, many users prefer to carry out personal tasks over the Internet. Because of the huge amount of information on the web, retrieving the appropriate information for each user has become a challenging task. Content-based image retrieval is one of the most important research fields in digital image processing; it searches for images similar to a target image by extracting visual content from the query image. In this regard, many studies have been conducted to increase the accuracy of image retrieval systems. However, due to the explosive growth of storage resources and the lack of an adequate image retrieval system, it is still considered one of the most attractive fields of research. In this paper, a method is proposed that extracts appropriate features using a hybrid method and then searches for images similar to the target image. In this way, a self-supervised learning approach is utilized to provide the most similar images. Experimental results on the Corel dataset show that the accuracy of the proposed method is higher than that of the other methods.
Edge computing responds to users' requests with low latency by storing the relevant files at the network edge. Various data deduplication technologies are currently employed at the edge to eliminate redundant data chunks and save space. However, looking up the huge global fingerprint indexes required for redundancy detection can significantly degrade data processing performance. Besides, we envision a novel file storage strategy that realizes the following rationales simultaneously: 1) space efficiency, 2) access efficiency, and 3) load balance, whereas existing methods fail to achieve all of them at once. To this end, we report LOFS, a Lightweight Online File Storage strategy, which aims at eliminating redundancies by maximizing the probability of successful data deduplication while realizing the three design rationales simultaneously. LOFS leverages a lightweight three-layer hash mapping scheme to solve this problem with constant-time complexity. To be specific, LOFS employs a Bloom filter to generate a sketch for each file, and thereafter feeds the sketches to a Locality-Sensitive Hash (LSH) such that similar files are likely to be projected nearby in the LSH table space. At last, LOFS assigns the files to real-world edge servers with joint consideration of the LSH load distribution and the edge server capacity. Trace-driven experiments show that LOFS closely tracks the global deduplication ratio and generates a relatively low load standard deviation compared with the baseline methods.
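The three-layer mapping described in the abstract can be sketched as follows. This is an illustrative toy, not the authors' code: the sketch size, the bit-sampling LSH, and the `assign_server` policy are all assumptions chosen to make the idea concrete.

```python
import hashlib

def bloom_sketch(chunks, m=64, k=3):
    """Layer 1: build an m-bit Bloom-filter sketch over a file's chunk
    fingerprints (order-independent, so equal chunk sets give equal sketches)."""
    bits = [0] * m
    for chunk in chunks:
        for i in range(k):
            h = int(hashlib.md5(f"{i}:{chunk}".encode()).hexdigest(), 16)
            bits[h % m] = 1
    return bits

def lsh_bucket(bits, planes=8):
    """Layer 2: a simple bit-sampling LSH -- take evenly spaced bits of the
    sketch as the bucket key, so similar sketches tend to collide."""
    step = len(bits) // planes
    return tuple(bits[i * step] for i in range(planes))

def assign_server(bucket, loads, capacity):
    """Layer 3: prefer the bucket's home server (so similar files co-locate
    and dedup succeeds), but fall back to less loaded servers for balance."""
    preferred = hash(bucket) % len(loads)
    order = sorted(range(len(loads)), key=lambda s: (s != preferred, loads[s]))
    for s in order:
        if loads[s] < capacity:
            return s
    return min(range(len(loads)), key=lambda s: loads[s])
```

The interplay between the preferred (similarity-preserving) server and the load/capacity fallback is where the paper's space-efficiency vs. load-balance trade-off would live.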
In a geo-distributed database, data shards and their respective replicas are deployed in distinct datacenters across multiple regions, enabling regional-level disaster recovery and the ability to serve global users loc...
ISBN:
(Print) 9798350368130
Evolutionary optimization plays an important role in the representation of information from data sets originating from various technical fields and natural sciences, as it helps to explore parameter spaces for meaningful representations. Quality-Diversity (QD) methods, notably MAP-Elites variants, have proven effective in diverse fields, emphasizing their ability to provide sets of high-performing solutions. This review discusses challenges in single- and multi-objective optimization, with applications in multiple directions such as image processing, visualization, and medical imaging. It also reviews QD algorithms and highlights advancements in algorithmic adaptations, user-driven optimization, and the potential to explore complex feature spaces. The presented works contribute to understanding and applying evolutionary optimization for solving visualization and feature space exploration problems in various domains.
Several parallel and distributed data mining algorithms have been proposed in the literature to perform large-scale data analysis, overcoming the bottleneck of traditional methods on a single machine. However, although the master-worker approach greatly simplifies the synchronization of all nodes, since only the master is in charge of it, it also presents several problems for large-scale data analysis tasks involving thousands or millions of nodes. This paper presents a hierarchical (or multi-level) master-worker framework for iterative parallel data analysis algorithms, designed to overcome the scalability issues affecting classic master-worker solutions. Specifically, the framework is composed of multiple merger and worker nodes organized in a k-tree structure, in which the workers are the leaves and the mergers are the root and the internal nodes of the tree.
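The k-tree aggregation described above can be sketched as a level-by-level reduction. This is a minimal illustration under assumptions (partial results modeled as numbers combined by summation, as in an iterative counting or centroid-update step), not the framework's actual implementation.

```python
def merge_up(partials, k=2):
    """Reduce per-worker partial results level by level: each merger
    combines the results of its k children, so no single master ever
    aggregates more than k messages, until the root holds the global result."""
    level = partials
    while len(level) > 1:
        # One tree level: group k siblings under one merger and combine them.
        level = [sum(level[i:i + k]) for i in range(0, len(level), k)]
    return level[0]
```

With n workers and fan-in k, the aggregation takes O(log_k n) levels instead of a single master handling all n results at once, which is exactly the scalability argument the abstract makes.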
ISBN:
(Print) 9798350307443
Federated and continual learning are training paradigms addressing data distribution shift in space and time. More specifically, federated learning tackles non-i.i.d. data in space, as information is distributed across multiple nodes, while continual learning addresses the temporal aspect of training, as it deals with continuous streams of data. Distribution shifts over both space and time are precisely what happens in real federated learning scenarios, which pose multiple challenges. First, the federated model needs to learn sequentially while retaining knowledge from past training rounds. Second, the model also has to deal with concept drift across the distributed data distributions. To address these complexities, we combine continual and federated learning strategies in a solution inspired by experience replay and generative adversarial concepts for supporting decentralized distributed training. In particular, our approach relies on limited memory buffers of synthetic privacy-preserving samples and on interleaving training on local data with training on buffer data. By translating the CL formulation into the task of integrating distributed knowledge with local knowledge, our method enables models to effectively integrate representations learned at local nodes, giving them the capability to generalize across multiple datasets. We test our integrated strategy on two realistic medical image analysis tasks - tuberculosis and melanoma classification - using multiple datasets in order to simulate realistic non-i.i.d. medical data scenarios. Results show that our approach achieves performance comparable to standard (non-federated) learning and significantly outperforms state-of-the-art federated methods in their centralized (and thus more favourable) formulation.
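The interleaving of local-data and buffer-data training mentioned in the abstract can be sketched as a toy loop. Everything here is an assumption for illustration (the batch size, mixing probability, and the `model_update` callable stand in for an actual optimizer step on a real model); it is not the authors' implementation.

```python
import random

def train_round(model_update, local_data, replay_buffer, batch=4, mix=0.5):
    """One federated round: train on local minibatches, and with probability
    `mix` interleave a minibatch of synthetic privacy-preserving samples
    from the replay buffer, so past-round knowledge is rehearsed."""
    random.shuffle(local_data)
    for i in range(0, len(local_data), batch):
        model_update(local_data[i:i + batch], source="local")
        if replay_buffer and random.random() < mix:
            # Rehearse a few buffer samples right after each local batch.
            model_update(random.sample(replay_buffer,
                                       min(batch, len(replay_buffer))),
                         source="replay")
```

The `mix` parameter would control the balance between adapting to the current local distribution and retaining earlier knowledge, which is the concept-drift trade-off the abstract describes.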
Content-based image retrieval (CBIR) is a homogeneous search technology based on visual features, and information has been leaked to potential users of pharmaceutical testing, education, and research. However, the CBI...