ISBN:
(print) 9781538691540
Ship oscillatory motions over a long coherence processing interval (CPI) are complex and hard to estimate accurately. Traditional ship imaging methods usually avoid long-CPI focusing by processing a short observation interval, which makes it difficult to obtain a high-resolution, high-SNR image. In this paper, the mechanisms and challenges of long-CPI imaging of oscillatory targets are investigated. The properties of the wavenumber domain support (WDS) and point spread function (PSF) of oscillatory targets are analyzed. It is found that the WDS spreads as a three-dimensional (3D) thin, curved swept surface with time-variant energy density, non-parallel boundaries and a complex structure. The PSF of an oscillatory target has 3D resolution but also multiple high-level side-lobes. We examine the relationships between the properties of the WDS and the non-ideal PSF. Moreover, it is found that scatterers distributed on a 3D oscillatory target cannot be focused uniformly on a predefined imaging plane (IP). The projection relationship between the target focusing positions on the slant-range plane (SRP) and the IP is also derived. Simulation results validate the analysis.
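As a rough illustration of why residual oscillatory motion raises PSF side-lobes, the toy sketch below (not the paper's signal model) compares the PSF of an ideal linear-phase point target with one carrying an extra sinusoidal phase term; the modulation amplitude (0.5 rad) and rate (30 cycles per CPI) are arbitrary assumed values:

```python
import numpy as np

N = 1024
t = np.linspace(-0.5, 0.5, N, endpoint=False)

# Ideal point target: pure linear phase over the CPI -> sinc-like PSF.
ideal = np.exp(2j * np.pi * 100 * t)

# Oscillatory target: an extra sinusoidal phase term (amplitude 0.5 rad,
# 30 cycles per CPI -- arbitrary toy values) models residual ship oscillation.
osc = ideal * np.exp(0.5j * np.sin(2 * np.pi * 30 * t))

def psf_db(sig):
    """Normalised PSF magnitude in dB: windowed FFT over the full CPI."""
    p = np.abs(np.fft.fftshift(np.fft.fft(sig * np.hanning(len(sig)))))
    p /= p.max()
    return 20 * np.log10(p + 1e-12)

def peak_sidelobe(p_db):
    """Highest lobe outside a small window around the main-lobe peak."""
    k = int(np.argmax(p_db))
    mask = np.ones_like(p_db, dtype=bool)
    mask[max(k - 8, 0):k + 8] = False
    return p_db[mask].max()

psl_ideal = peak_sidelobe(psf_db(ideal))
psl_osc = peak_sidelobe(psf_db(osc))
print(f"peak sidelobe, ideal target:       {psl_ideal:.1f} dB")
print(f"peak sidelobe, oscillatory target: {psl_osc:.1f} dB")
```

The sinusoidal phase modulation spills energy into paired side-lobes well away from the main lobe, which is the one-dimensional analogue of the elevated multi-lobe structure the abstract describes.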
Image encryption is an efficient technique to protect image content from unauthorized parties. In this paper, a parallel image encryption method based on bitplane decomposition is proposed. The original grayscale image is converted to a set of binary images by the local binary pattern (LBP) technique and bitplane decomposition (BPD) methods. Then, permutation and substitution steps are performed by a genetic algorithm (GA) using crossover and mutation operations. Finally, the scrambled bitplanes are combined to obtain the encrypted image. Instead of random population selection in the GA, a deterministic method with security keys is used to improve the security level. The proposed encryption method can encrypt multiple bitplanes in parallel. This distributed GA with multiple populations increases encryption speed and makes the method suitable for real-time applications. Simulations and security analysis demonstrate the efficiency of our algorithm.
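A minimal sketch of the bitplane-decomposition round trip, with a keyed pseudo-random permutation standing in for the paper's GA-driven permutation/substitution (the key value and permutation scheme are illustrative assumptions, not the proposed method):

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in grayscale image

# Bitplane decomposition: split the 8-bit image into 8 binary planes.
planes = [((img >> b) & 1).astype(np.uint8) for b in range(8)]

# Each plane can be scrambled independently (and hence in parallel); a keyed
# pseudo-random permutation stands in for the GA's crossover/mutation steps.
key = 42
def scramble(plane, seed):
    perm = np.random.default_rng(seed).permutation(plane.size)
    return plane.ravel()[perm].reshape(plane.shape), perm

scrambled = [scramble(p, key + b) for b, p in enumerate(planes)]

# Recombine the scrambled planes into the encrypted image.
enc = np.zeros_like(img)
for b, (p, _) in enumerate(scrambled):
    enc |= (p << b)

# Decryption: invert each keyed permutation, then recombine.
dec = np.zeros_like(img)
for b, (p, perm) in enumerate(scrambled):
    inv = np.empty_like(perm)
    inv[perm] = np.arange(perm.size)
    dec |= (p.ravel()[inv].reshape(p.shape) << b)

assert np.array_equal(dec, img)
print("round trip OK")
```

Because each plane's permutation depends only on the key and the plane index, the eight planes can be dispatched to independent workers, which is the parallelism the abstract exploits.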
Many image denoising methods achieve state-of-the-art performance with little consideration of computation efficiency. With the popularity of high-definition imaging devices, these denoising methods do not scale well ...
Driven by the domestic digital film industry's demand for higher quality and efficiency in special-effects film and television works, this paper studies the transmission quality of virtual-real fusion and the fusio...
Significant advances in spaceborne imaging payloads have resulted in new big data problems in the Earth Observation (EO) field. These challenges are compounded onboard satellites by a lack of equivalent advancement in onboard data processing and downlink technologies. We have previously proposed a new GPU-accelerated onboard data processing architecture and developed parallelised image processing software to demonstrate the achievable data processing throughput and compression performance. However, the environmental characteristics in orbit are distinctly different from those on Earth, such as the available power and the probability of adverse single-event radiation effects. In this paper, we analyse new performance results for a low-power embedded GPU platform, investigate the error resilience of our GPU image processing application and offer two new error-resilient versions of the application. We use software-based error injection testing to evaluate data corruption and functional interrupts. These results inform the new error-resilient methods, which also leverage GPU characteristics to minimise time and memory overheads. The key results show that our targeted redundancy techniques reduce the probability of data corruption from up to 46 percent to less than 2 percent for all test cases, with a typical execution time overhead of 130 percent.
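The abstract does not spell out the redundancy scheme; one common targeted-redundancy pattern consistent with the description is triple modular redundancy (TMR) with a bitwise majority vote, sketched here on a toy kernel (the kernel, fault model and injected bit-flip are assumptions for illustration):

```python
def majority_vote(a, b, c):
    """Bitwise 2-of-3 majority: corrects any single corrupted copy."""
    return (a & b) | (a & c) | (b & c)

def tmr_execute(kernel, data, corrupt=None):
    """Run a kernel three times; 'corrupt' optionally flips bits in one
    copy to emulate a single-event upset during one execution."""
    results = [kernel(data) for _ in range(3)]
    if corrupt is not None:
        idx, bitmask = corrupt
        results[idx] ^= bitmask
    return majority_vote(*results)

# Toy 'image processing kernel': integer brightness scaling, 8-bit wrap.
kernel = lambda x: (x * 3) & 0xFF

clean = tmr_execute(kernel, 10)
upset = tmr_execute(kernel, 10, corrupt=(1, 0b1000))  # flip a bit in copy 1
print(clean, upset)  # prints "30 30": the vote masks the injected error
```

The trade-off mirrors the abstract's numbers: the redundant executions roughly double-to-triple the runtime in exchange for masking single-copy corruption.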
With the development of intelligent traffic systems, high-definition cameras are deployed along urban roads. These devices transmit real-time captured images to a data center for multi-purpose use, but these brin...
ISBN:
(digital) 9781728133201
ISBN:
(print) 9781728133218
Detection of strongly connected components (SCCs) on the GPU has become a fundamental operation for accelerating graph computing. Existing SCC detection methods on multiple GPUs introduce massive unnecessary data transfers between the GPUs. In this paper, we propose a novel distributed SCC detection approach using multiple GPUs plus a CPU. Our approach includes three key ideas: (1) segmentation and labeling over large-scale datasets; (2) collecting and merging the segmented SCCs; and (3) assigning running tasks over the multiple GPUs and the CPU. We implement our approach under a hybrid distributed architecture with multiple GPUs plus a CPU. Our approach achieves device-level optimization and is compatible with state-of-the-art algorithms. We conduct extensive theoretical and experimental analysis to demonstrate the efficiency and accuracy of our approach. The experimental results show that our approach achieves 11.2×, 1.2× and 1.2× speedups for SCC detection on an NVIDIA K80 compared with Tarjan's, FB-Trim and FB-Hybrid algorithms, respectively.
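For reference, Tarjan's algorithm, the sequential CPU baseline named above, can be sketched as follows (a standard textbook implementation on a toy adjacency-list graph, not the paper's GPU code):

```python
def tarjan_scc(graph):
    """Tarjan's single-pass SCC algorithm over a dict-of-lists digraph."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, ()):
            if w not in index:                     # tree edge: recurse
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:                    # back edge into the stack
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:                 # v is an SCC root
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs

# Two cycles joined by a bridge edge: SCCs {0,1,2} and {3,4}.
g = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [3]}
print(sorted(sorted(c) for c in tarjan_scc(g)))  # [[0, 1, 2], [3, 4]]
```

The sequential depth-first traversal here is exactly what makes SCC detection hard to parallelise, motivating GPU-friendly alternatives such as the FB-Trim and FB-Hybrid baselines.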
Background: Bioinformatics research has entered an era of big data. Mining the potential value of biological big data is of vital significance for scientific research and health care. Deep learning, a new class of machine learning algorithms built on big data and high-performance distributed parallel computing, shows excellent performance in biological big data processing. Objective: To provide a valuable reference for researchers using deep learning in their studies of large biological data. Methods: This paper introduces new models of data storage and computational facilities for big data analysis. Then, applications of deep learning in three areas, including biological omics data processing, biological image processing and biomedical diagnosis, are summarized. Aiming at the problem of large-scale biological data processing, accelerated methods for deep learning models are described. Conclusion: The paper summarizes the new storage modes, the existing methods and platforms for biological big data processing, and the progress and challenges of applying deep learning to biological big data processing.
ISBN:
(digital) 9781728129037
ISBN:
(print) 9781728129044
Relationships in online social networks often imply social connections in real life. An accurate understanding of relationship types benefits many applications, e.g. social advertising and recommendation. Some recent attempts classify user relationships into predefined types with the help of pre-labeled relationships or abundant interaction features on relationships. Unfortunately, both relationship feature data and label data are very sparse in real social platforms like WeChat, rendering existing methods inapplicable. In this paper, we present an in-depth analysis of WeChat relationships to identify the major challenges for the relationship classification task. To tackle these challenges, we propose a Local Community-based Edge Classification (LoCEC) framework that classifies user relationships in a social network into real-world social connection types. LoCEC performs three-phase processing, namely local community detection, community classification and relationship classification, to address the sparsity of relationship features and labels. Moreover, LoCEC is designed to handle large-scale networks by allowing parallel and distributed processing. We conduct extensive experiments on the real-world WeChat network with hundreds of billions of edges to validate the effectiveness and efficiency of LoCEC.
We propose an approach to image segmentation that views it as pixel classification using simple features defined over the local neighborhood. We use a support vector machine for pixel classification, making the approach automatically adaptable to a large number of image segmentation applications. Since our approach uses only local information for classification, both training and application of the image segmentor can be done on a distributed computing platform, which makes the approach scalable to larger images than the ones tested. This article describes the methodology in detail and tests its efficacy against five other comparable segmentation methods on two well-known image segmentation databases. We present the results together with analysis supporting the following conclusions: (i) the approach is as effective as, and often better than, its studied competitors; (ii) it suffers from very little overfitting and hence generalizes well to unseen images; (iii) the trained image segmentation program can be run on a distributed computing environment, resulting in linear scalability. The overall message of this paper is that a strong classifier with simple pixel-centered features gives segmentation results as good as or better than some sophisticated competitors, and does so in a computationally scalable fashion.
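A minimal sketch of the segmentation-as-pixel-classification view: each pixel gets a feature vector from its local neighborhood, and a linear classifier is trained on those features. A simple perceptron stands in for the paper's SVM, and the toy image, feature radius and training schedule are all assumptions:

```python
import numpy as np

def local_features(img, r=1):
    """Stack each pixel's (2r+1)x(2r+1) neighborhood into a feature vector,
    edge-padded at the borders -- simple pixel-centered local features."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    shifts = [pad[dy:dy + h, dx:dx + w]
              for dy in range(2 * r + 1) for dx in range(2 * r + 1)]
    return np.stack(shifts, axis=-1).reshape(-1, (2 * r + 1) ** 2)

# Toy image: a bright square on a dark background, plus mild noise.
rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0
img += rng.normal(0, 0.05, img.shape)
y = np.where(img.ravel() > 0.5, 1, -1)   # ground-truth fg/bg pixel labels

X = local_features(img)

# Linear perceptron on the local features (standing in for the SVM).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    errors = 0
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:       # misclassified pixel: update
            w += yi * xi
            b += yi
            errors += 1
    if errors == 0:                      # converged on separable data
        break

pred = np.where(X @ w + b > 0, 1, -1)
acc = (pred == y).mean()
print(f"training pixel accuracy: {acc:.2f}")
```

Because every pixel's feature vector and prediction depend only on its own neighborhood, rows of `X` can be built and classified in independent partitions, which is the locality property behind the distributed, linearly scalable training the abstract claims.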