In the context of next-generation radio telescopes, like the Square Kilometre Array (SKA), the efficient processing of large-scale data sets is extremely important. Convex optimization tasks under the compressive sensing framework have recently emerged and provide both enhanced image reconstruction quality and scalability to increasingly larger data sets. We focus herein mainly on scalability and propose two new convex optimization algorithmic structures able to solve the convex optimization tasks arising in radio-interferometric imaging. They rely on proximal splitting and forward-backward iterations and can be seen, by analogy with the CLEAN major-minor cycle, as running sophisticated CLEAN-like iterations in parallel in multiple data, prior, and image spaces. Both methods support any convex regularization function, in particular the well-studied ℓ1 priors promoting image sparsity in an adequate domain. Tailored for big data, they employ parallel and distributed computations to achieve scalability in terms of memory and computational requirements. One of them also exploits randomization, over data blocks at each iteration, offering further flexibility. We present simulation results showing the feasibility of the proposed methods as well as their advantages compared to state-of-the-art algorithmic solvers. Our MATLAB code is available online on GitHub.
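As an illustration of the forward-backward, proximal-splitting iterations this abstract refers to, the following is a minimal single-node Python sketch (the paper's released code is MATLAB) of a forward-backward update with an ℓ1 prior, whose proximal step is soft-thresholding. The operator A, data y, and weight lam are placeholder assumptions; the paper's actual algorithms distribute such updates over multiple data, prior, and image blocks.

    # Minimal sketch of a forward-backward (ISTA-style) iteration with an l1 prior.
    # A, y, lam and the step size are illustrative placeholders, not the paper's setup.
    import numpy as np

    def soft_threshold(x, t):
        """Proximal operator of t * ||x||_1 (soft-thresholding)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def forward_backward(A, y, lam, n_iter=200):
        """Solve min_x 0.5*||A x - y||^2 + lam*||x||_1 by forward-backward splitting."""
        x = np.zeros(A.shape[1])
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            grad = A.T @ (A @ x - y)                          # forward (gradient) step on the data term
            x = soft_threshold(x - step * grad, step * lam)   # backward (proximal) step on the prior
        return x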
Convolutional Neural Networks (CNNs) have been shown to be powerful classification tools in tasks that range from check reading to medical diagnosis, reaching close to human perception, and in some cases surpassing it. How...
The inversion of linear systems is a fundamental step in many inverse problems. Computational challenges exist when trying to invert large linear systems, where limited computing resources mean that only part of the s...
ISBN (print): 9783319321523; 9783319321516
Distributed algorithms for graph searching require a high-performance, CPU-efficient hash table that supports find-or-put. This operation either inserts data or indicates that it has already been added before. This paper focuses on the design and evaluation of such a hash table, targeting supercomputers. The latency of find-or-put is minimized by using one-sided RDMA operations. These operations are overlapped as much as possible to reduce waiting times for round-trips. In contrast to existing work, we use linear probing and argue that this requires fewer round-trips. The hash table is implemented in UPC. A peak throughput of 114.9 million op/s is reached on an InfiniBand cluster. With a load factor of 0.9, find-or-put can be performed in 4.5 μs on average. The hash table performance remains very high, even under high loads.
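The core find-or-put semantics over a linearly probed table can be sketched as follows. This is a hedged, single-process Python illustration only, with no UPC, RDMA, or distribution; all names are illustrative rather than taken from the paper.

    # Sketch of find-or-put with linear probing (single process; the paper's version
    # instead issues one-sided RDMA reads/writes against a distributed table).
    EMPTY = None

    class LinearProbingTable:
        def __init__(self, capacity):
            self.slots = [EMPTY] * capacity
            self.capacity = capacity

        def find_or_put(self, key):
            """Return True if key was already present, False if it was inserted."""
            h = hash(key) % self.capacity
            for i in range(self.capacity):
                idx = (h + i) % self.capacity      # linear probe sequence
                if self.slots[idx] is EMPTY:
                    self.slots[idx] = key           # empty slot: insert here
                    return False
                if self.slots[idx] == key:
                    return True                     # found: already added before
            raise RuntimeError("hash table is full")

    table = LinearProbingTable(capacity=16)
    assert table.find_or_put("state-42") is False   # first call inserts
    assert table.find_or_put("state-42") is True    # second call finds it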
ISBN (print): 9781509041527
Automatic extraction of crop organs from images is a crucial step for quantitatively acquiring crop growth information in precision agriculture. There have been some attempts at this task, but the performance is not satisfactory. In this paper, we propose an image-based method based on low-rank matrix recovery to extract organs accurately. In our method, a crop image is considered to be composed of two components: background and organ. In a certain feature space, the image is represented as a low-rank matrix plus sparse noise. The organ is then extracted by identifying the sparse noise with a low-rank matrix recovery algorithm. To ensure that the rank of the background is low, a linear transform of the feature space is introduced and learned from historical data. Dynamic threshold segmentation followed by vegetation removal is adopted in the final step. Experimental results on the benchmark farmland dataset show that our method achieves competitive performance compared with other well-established methods, yielding the highest performance of 93.9% with the lowest standard deviation of 2.86%, which means our method is more robust and less sensitive to complex environmental elements and different cultivars.
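The "low-rank matrix plus sparse noise" model in this abstract is the standard robust-PCA decomposition. Below is a generic Python sketch of that decomposition via a textbook ADMM/inexact-ALM scheme; it is not the authors' exact algorithm and omits the learned linear transform of the feature space and the final segmentation steps.

    # Hedged sketch of low-rank + sparse decomposition (robust PCA): a feature matrix M
    # is split into a low-rank background L and a sparse component S flagging the organ.
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def svt(x, t):
        """Singular value thresholding: proximal operator of the nuclear norm."""
        u, s, vt = np.linalg.svd(x, full_matrices=False)
        return u @ np.diag(soft_threshold(s, t)) @ vt

    def robust_pca(M, lam=None, mu=1.0, n_iter=100):
        """Decompose M ~= L + S with L low-rank (background) and S sparse (organ)."""
        if lam is None:
            lam = 1.0 / np.sqrt(max(M.shape))     # common default weight, an assumption here
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        Y = np.zeros_like(M)                      # scaled dual variable
        for _ in range(n_iter):
            L = svt(M - S + Y / mu, 1.0 / mu)                 # low-rank update
            S = soft_threshold(M - L + Y / mu, lam / mu)      # sparse update
            Y = Y + mu * (M - L - S)                          # dual ascent on M = L + S
        return L, S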
Uncertainty quantification is a critical missing component in radio interferometric imaging that will only become increasingly important as the big-data era of radio interferometry emerges. Statistical sampling approa...
ISBN (print): 9781509036691
Sputum smear conventional microscopy (CM) is used as the primary bacteriological test for detection of TB. This technique is the most preferred in low- and middle-income countries due to its availability as well as accessibility. Manual screening of bacilli using CM is time consuming and labor intensive. As a result, the sensitivity of TB detection is compromised, leading to misdiagnosis of 33-50% of active cases. Automated methods can increase the sensitivity and specificity of TB detection. Currently, remote areas of TB-endemic developing countries have easy access to portable, camera-enabled smartphone microscopes for capturing images from ZN-stained smear slides. In this paper, the performance of a watershed segmentation method for detection and classification of bacilli from camera-enabled smartphone microscopic images is presented. Several preprocessing techniques have been implemented prior to watershed segmentation. The current method has achieved a sensitivity of 93.3% and a specificity of 87% for classifying an image as TB positive or negative.
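A marker-based watershed pipeline of the kind described can be sketched with scikit-image as below. The Otsu thresholding, distance transform, and min_distance value are illustrative assumptions, not the paper's specific preprocessing chain.

    # Generic marker-based watershed segmentation sketch (Python / scikit-image).
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.filters import threshold_otsu
    from skimage.feature import peak_local_max
    from skimage.segmentation import watershed

    def segment_objects(gray):
        """gray: 2D float array, a preprocessed (e.g. contrast-enhanced) smear image."""
        mask = gray > threshold_otsu(gray)              # rough foreground mask
        distance = ndi.distance_transform_edt(mask)     # distance to background
        coords = peak_local_max(distance, min_distance=5, labels=mask)
        markers = np.zeros_like(distance, dtype=int)
        markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)   # one marker per peak
        labels = watershed(-distance, markers, mask=mask)           # split touching objects
        return labels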
ISBN (print): 9783662493908; 9783662493892
Image enhancement is a challenging problem in the world of digital technology. Currently, producers of video cameras are providing devices working with increasingly higher frequencies and resolutions of acquired video frames. However, this creates a need for the development of methods used in image enhancement. Furthermore, real-time processing of a significant amount of data requires parallel processing. The paper presents the results of research on the problem of real-time histogram stretching. The proposed solution utilizes parallel processing, supported by an FPGA (Field Programmable Gate Array). The problem of histogram stretching is closely associated with thermal imaging, in which, in most cases, the range of measured infrared radiation in the observed scene is narrow in relation to the total range measurable by the IRFPA (Infrared Focal Plane Array), which results in a narrow histogram and thus low contrast of the acquired image. The proposed solution can be used in every branch of contemporary industry, wherever video cameras are used. The real-time hardware implementation eliminates the need for further post-processing and may be considered useful in many modern applications.
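For reference, the histogram (contrast) stretching mapping itself can be written compactly in software. The sketch below is a NumPy illustration of the operation the FPGA implements in real time; the percentile clipping bounds are an arbitrary illustrative choice, not the paper's design.

    # Minimal software sketch of linear histogram stretching for a low-contrast frame.
    import numpy as np

    def stretch_histogram(frame, low_pct=1.0, high_pct=99.0, out_max=255):
        """Linearly map the [low, high] percentile range of a frame onto [0, out_max]."""
        lo, hi = np.percentile(frame, [low_pct, high_pct])
        if hi <= lo:                                    # flat frame: nothing to stretch
            return np.zeros_like(frame, dtype=np.uint8)
        stretched = (frame.astype(np.float32) - lo) / (hi - lo)
        stretched = np.clip(stretched, 0.0, 1.0) * out_max
        return stretched.astype(np.uint8)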
ISBN (print): 9783319436593; 9783319436586
The summed-area table (SAT), also known as the integral image, is a data structure extensively used in computer graphics and vision for fast image filtering. The parallelization of its construction has been thoroughly investigated and many algorithms have been proposed for GPUs. Generally speaking, state-of-the-art methods cannot efficiently solve this problem on multi-core and many-core (Xeon Phi) systems due to cache misses and strided and/or remote memory accesses. This work proposes three novel cache-aware parallel SAT algorithms, which generalize parallel block-based prefix-sums algorithms. In addition, we discuss 2D matrix partitioning policies, which play an important role in the efficient operation of the cache subsystem. The combination of a SAT algorithm and a partition is manually tuned according to the matrix layout and the number of threads. Experimental evaluation of our algorithms on two NUMA systems and Intel's Xeon Phi, for three datatypes (int, float, double) and utilizing all system cores, shows better performance in all experimental settings compared to the best known CPU and GPU approaches (up to 4.55x on NUMA and 2.8x on Xeon Phi).
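The SAT construction that these cache-aware algorithms generalize reduces to two passes of prefix sums. A plain NumPy sketch of the data structure and its O(1) box-sum query is given below; the paper's contribution lies in block-partitioning and tuning these passes for NUMA and Xeon Phi, which this sketch does not attempt.

    # Summed-area table (integral image) as two passes of prefix sums, plus O(1) queries.
    import numpy as np

    def summed_area_table(a):
        """SAT[i, j] = sum of a[:i+1, :j+1]; cumulative sums along rows then columns."""
        return np.cumsum(np.cumsum(a, axis=0), axis=1)

    def box_sum(sat, r0, c0, r1, c1):
        """Sum of a[r0:r1+1, c0:c1+1] in O(1) using the SAT (inclusive corners)."""
        total = sat[r1, c1]
        if r0 > 0:
            total -= sat[r0 - 1, c1]
        if c0 > 0:
            total -= sat[r1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += sat[r0 - 1, c0 - 1]
        return total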
Bayesian methods and their implementations by means of sophisticated Monte Carlo techniques have become very popular in signal processing in recent years. Importance Sampling (IS) is a well-known Monte Carlo techn...