Algorithms based on the technique of optimal k-thresholding (OT) were recently proposed for signal recovery, and they differ substantially from the traditional family of hard thresholding methods. However, the computational cost of OT-based algorithms remains high at the current stage of their development. This motivates the development of the so-called natural thresholding (NT) algorithm and its variants in this paper. The family of NT algorithms is derived through a first-order approximation of the regularized optimal k-thresholding model, so its computational cost is significantly lower than that of the OT-based algorithms. The guaranteed performance of NT-type algorithms for signal recovery from noisy measurements is established under the restricted isometry property and the concavity of the objective function of the regularized optimal k-thresholding model. Empirical results indicate that the NT-type algorithms are robust and highly comparable to several mainstream algorithms for sparse signal recovery.
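As background for the contrast drawn in this abstract, the classical hard thresholding operator H_k (keep the k largest-magnitude entries) and the plain iterative hard thresholding loop built on it can be sketched in NumPy. This is a minimal illustration of the traditional family the paper compares against, not the NT or OT algorithms themselves; all function names and parameters are ours:

```python
import numpy as np

def hard_threshold(x, k):
    """H_k: keep the k largest-magnitude entries of x, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]   # indices of the k largest magnitudes
    z[idx] = x[idx]
    return z

def iht(A, y, k, step=1.0, iters=100):
    """Plain iterative hard thresholding: x <- H_k(x + step * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + step * A.T @ (y - A @ x), k)
    return x
```

OT-based and NT-type methods replace the greedy "keep the k largest" selection with an optimization-driven choice of the support; the loop structure stays similar.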
ISBN (print): 9781509037971
Image and video processing algorithms implemented in software require more computation time as the image size increases. Moreover, some algorithms, such as image thresholding in high-throughput real-time applications, must run at high speed. To meet this requirement, the algorithms must be efficiently implemented in hardware. In this paper, we present hardware architectures for the ISODATA and Otsu thresholding algorithms, comparing area, latency, throughput and power consumption. The designs are described in generic structural VHDL and synthesized on the FPGA EP4CE115F29C7N. The designed architectures were verified using Signal Tap and an image acquisition system based on the D5M camera and the DE2-115 development kit from Terasic.
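The ISODATA threshold computed by one of the two architectures admits a compact software reference model: iterate t ← (mean of pixels below t + mean of pixels above t) / 2 until it stabilizes. The sketch below is an assumed software-level reference, not the paper's VHDL design; the function name and tolerance are ours:

```python
import numpy as np

def isodata_threshold(img, tol=0.5):
    """ISODATA: iterate t = (mean below t + mean above t) / 2 until stable."""
    t = img.mean()
    while True:
        below = img[img <= t]
        above = img[img > t]
        if below.size == 0 or above.size == 0:
            return t                      # degenerate: one empty class
        t_new = (below.mean() + above.mean()) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
```

In hardware the same recurrence is typically unrolled over histogram bins rather than raw pixels, which is what makes a low-latency implementation feasible.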
The proposed research aims to restore deteriorated text sections that are affected by stain markings, ink seepage and document ageing in ancient document photographs, challenges that confront document enhancement. A tri-level semi-adaptive thresholding technique is developed in this paper to overcome these issues. The primary focus, however, is on removing deteriorations that obscure text sections. The proposed algorithm includes three levels of degradation removal as well as pre- and post-enhancement processes. In level-wise degradation removal a global thresholding approach is used, whereas pseudo-colouring uses local thresholding procedures. Experiments on palm leaf and DIBCO document photos reveal decent performance in removing ink/oil stains whilst retaining obscured text sections. On the DIBCO and palm leaf datasets, our system also showed efficacy in removing common deteriorations such as uneven illumination, show-through, discolouration and writing marks. The proposed technique compares directly to other thresholding-based benchmark techniques, producing an average F-measure and precision of 65.73 and 93% on the DIBCO datasets and 55.24 and 94% on the palm leaf datasets. Subjective analysis shows the robustness of the proposed model towards the removal of stain degradations, with a qualitative score of 3 for 45% of samples, indicating degradation removal with fairly readable text.
This research presents an efficient automatic thresholding technique based on Otsu's method that can be used in edge detection algorithms and then applied as a plug-in for real-time image processing devices. The proposed thresholding technique uses an iterative clustering-based method that targets a reduced number of operations. It is well known that Otsu's method computes the global threshold by splitting the image into two classes, foreground and background, and choosing the threshold that minimizes the intra-class variance (equivalently, maximizes the inter-class variance) of the thresholded black and white pixels. In this paper, a faster version of Otsu's method is proposed based on the observation that the only pixels that have to be moved from one class to the other are those with values between the previous two thresholds. This procedure yields the same set of thresholds as the original method, but the redundant computation has been removed, so only a few operations are required. The proposed thresholding technique has been implemented in software using the C# programming language and in reconfigurable hardware on a Spartan 3E XC3S500E FPGA board using VHDL. The results obtained, presented for different digital images, confirm that the proposed iterative thresholding algorithm and its FPGA architecture can meet the requirements of real-time image processing systems.
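For reference, the baseline that the faster version improves on is classic Otsu: sweep all candidate thresholds over the histogram and keep the one maximizing between-class variance. A minimal Python sketch of that baseline for 8-bit images (not the paper's reduced-operation variant or its FPGA architecture; names are ours):

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    total = hist.sum()
    sum_all = np.dot(np.arange(256), hist)  # sum of all pixel intensities
    w0 = 0.0       # pixel count of the background class
    sum0 = 0.0     # intensity sum of the background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0 = sum0 / w0
        m1 = (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

The speed-up described in the abstract comes from updating class means incrementally, touching only the histogram bins between two successive threshold candidates, rather than recomputing both class statistics from scratch at every sweep position.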
Compressed sensing studies linear recovery problems under structure assumptions. We introduce a new class of measurement operators, coined hierarchical measurement operators, and prove results guaranteeing the efficient, stable and robust recovery of hierarchically structured signals from such measurements. We derive bounds on their hierarchical restricted isometry properties based on the restricted isometry constants of their constituent matrices, generalizing and extending prior work on Kronecker-product measurements. As an exemplary application, we apply the theory to two communication scenarios. The fast and scalable HiHTP algorithm is shown to be suitable for solving these types of problems, and its performance is evaluated numerically in terms of sparse signal recovery and block detection capability. (c) 2021 The Author(s). Published by Elsevier Inc. This is an open access article under the CC BY license (http://***/licenses/by/4.0/).
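The hierarchical sparsity model behind HiHTP replaces plain k-sparse thresholding with a two-level operator: keep the sigma largest-magnitude entries inside each block, then keep only the s strongest blocks. A small NumPy sketch of such an (s, sigma)-thresholding step under our own naming (the full HiHTP iteration also involves a gradient step and a least-squares fit on the selected support, not shown here):

```python
import numpy as np

def hier_threshold(x, s, sigma):
    """Hierarchical (s, sigma)-thresholding for x of shape
    (num_blocks, block_len): within each block keep the sigma
    largest-magnitude entries, then keep only the s blocks with
    the largest resulting l2 norm, zeroing everything else."""
    z = np.zeros_like(x)
    # step 1: sigma-sparse approximation inside every block
    for b in range(x.shape[0]):
        idx = np.argsort(np.abs(x[b]))[-sigma:]
        z[b, idx] = x[b, idx]
    # step 2: keep only the s strongest blocks
    norms = np.linalg.norm(z, axis=1)
    weak = np.argsort(norms)[:-s] if s < x.shape[0] else []
    z[list(weak)] = 0.0
    return z
```

This two-level selection is what makes the projection scale linearly in the signal length, which is the source of HiHTP's speed relative to generic structured-sparsity solvers.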
In this paper, a reduced half thresholding algorithm and its accelerated version, a reduced fast half thresholding algorithm, are proposed for solving large-scale L1/2-regularized problems. At each iteration, the large-scale original problem is reduced to a sequence of small-scale subproblems, which are solved by the (fast) half thresholding algorithm. Global convergence of the reduced half thresholding algorithm is proven, and two techniques, a Newton acceleration technique and a shrinking technique, are presented to improve performance. Numerical results show that, compared with state-of-the-art algorithms, the presented algorithms are promising. As an example, our algorithms efficiently solve a signal recovery problem with 19,264,097 samples and signal length 29,890,095.
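The elementwise half-thresholding operator applied within the subproblem solver has a known closed form for the L1/2 penalty (due to Xu et al.). The sketch below implements that assumed closed form only, not the paper's reduction scheme, Newton acceleration, or shrinking technique; the function name and test values are ours:

```python
import numpy as np

def half_threshold(x, lam):
    """Elementwise half-thresholding operator for the L_{1/2} penalty
    (closed form of Xu et al., taken here as an assumption):
    entries below the threshold are set to zero, larger entries are
    shrunk via a trigonometric formula."""
    z = np.zeros_like(x)
    thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    big = np.abs(x) > thresh
    phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** -1.5)
    z[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return z
```

Unlike soft thresholding, this operator has a jump at the threshold and leaves large entries almost unshrunk, which is what gives L1/2 regularization its sparser solutions.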
Purpose: To evaluate relative diagnostic precision and test-retest variability of 2 devices, the Compass (CMP, CenterVue, Padova, Italy) fundus perimeter and the Humphrey Field Analyzer (HFA, Zeiss, Dublin, CA), in detecting glaucomatous optic neuropathy (GON). Design: Multicenter, cross-sectional, case-control study. Participants: We sequentially enrolled 499 patients with glaucoma and 444 normal subjects to analyze relative precision. A separate group of 44 patients with glaucoma and 54 normal subjects was analyzed to assess test-retest variability. Methods: One eye of recruited subjects was tested with the index tests: HFA (Swedish interactive thresholding algorithm [SITA] standard strategy) and CMP (Zippy Estimation by Sequential Testing [ZEST] strategy), 24-2 grid. The reference test for GON was specialist evaluation of fundus photographs or OCT, independent of the visual field (VF). For both devices, linear regression was used to calculate the sensitivity decrease with age in the normal group to compute pointwise total deviation (TD) values and mean deviation (MD). We derived 5% and 1% pointwise normative limits. The MD and the total number of TD values below 5% (TD 5%) or 1% (TD 1%) limits per field were used as classifiers. Main Outcome Measures: We used partial receiver operating characteristic (pROC) curves and partial area under the curve (pAUC) to compare the diagnostic precision of the devices. Pointwise mean absolute deviation and Bland-Altman plots for the mean sensitivity (MS) were computed to assess test-retest variability. Results: Retinal sensitivity was generally lower with CMP, with an average mean difference of 1.85 +/- 0.06 decibels (dB) (mean +/- standard error, P < 0.001) in healthy subjects and 1.46 +/- 0.05 dB (mean +/- standard error, P < 0.001) in patients with glaucoma. Both devices showed similar discriminative power. The MD metric had marginally better discrimination with CMP (pAUC difference +/- standard error, 0.019 +/- 0.009, P =
In this paper we consider the problem of inverting linear homogeneous transforms by Vaguelette–Wavelet decomposition and stabilized hard thresholding of noisy wavelet coefficients. We also prove asymptotic normality ...
The following mini-review attempts to guide researchers in the quantification of fluorescently-labelled proteins within cultured thick sections, or chromogenically-stained proteins within thin sections, of brain tissue. It follows from our examination of the utility of Fiji ImageJ thresholding and binarization algorithms. After describing how we identified the maximum intensity projection as the best of six methods tested for two-dimensional (2D) rendering of three-dimensional (3D) images derived from a series of z-stacked micrographs, the review summarises our comparison of 16 global and 9 local algorithms for their ability to accurately quantify the expression of astrocytic glial fibrillary acidic protein (GFAP), microglial ionized calcium binding adapter molecule 1 (IBA1) and oligodendrocyte lineage Olig2 within fixed cultured rat hippocampal brain sections. The application of these algorithms to chromogenically-stained GFAP and IBA1 within thin tissue sections is also examined. Fiji's BioVoxxel plugin allowed categorisation of algorithms according to their sensitivity, specificity, accuracy and relative quality. The Percentile algorithm was deemed best for quantifying levels of GFAP, the Li algorithm was best when quantifying IBA1 expression, while the Otsu algorithm was optimum for Olig2 staining, albeit with over-quantification of oligodendrocyte number when compared to a stereological count. Finally, GFAP and IBA1 expression in 3,3′-diaminobenzidine (DAB)/haematoxylin-stained cerebellar tissue was best quantified with the Default, Isodata and Moments algorithms. The workflow presented in Figure 1 could help to improve the quality of research outcomes that are based on the quantification of protein within brain tissue.
Motivation: Computational drug repositioning is an important and efficient approach towards identifying novel treatments for diseases in drug discovery. The emergence of large-scale, heterogeneous biological and biomedical datasets has provided an unprecedented opportunity for developing computational drug repositioning methods. The drug repositioning problem can be modeled as a recommendation system that recommends novel treatments based on known drug-disease associations. The formulation under this recommendation system is matrix completion, assuming that the hidden factors contributing to drug-disease associations are highly correlated and thus the corresponding data matrix is low-rank. Under this assumption, the matrix completion algorithm fills in the unknown entries of the drug-disease matrix by constructing a low-rank matrix approximation, from which new drug-disease associations that have not yet been validated can be screened. Results: In this work, we propose a drug repositioning recommendation system (DRRS) to predict novel drug indications by integrating related data sources and validated information of drugs and diseases. Firstly, we construct a heterogeneous drug-disease interaction network by integrating drug-drug, disease-disease and drug-disease networks. The heterogeneous network is represented by a large drug-disease adjacency matrix, whose entries include drug pairs, disease pairs, known drug-disease interaction pairs and unknown drug-disease pairs. Then, we adopt a fast Singular Value Thresholding (SVT) algorithm to complete the drug-disease adjacency matrix with predicted scores for unknown drug-disease pairs. The comprehensive experimental results show that DRRS improves the prediction accuracy compared with the other state-of-the-art approaches. In addition, case studies for several selected drugs further demonstrate the practical usefulness of the proposed method.
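The core SVT step used to complete the adjacency matrix is singular value shrinkage: soft-threshold the singular values and reconstruct. Below is a minimal NumPy sketch of that step together with a basic completion loop in the style of Cai, Candes and Shen; it is an illustration of the generic technique, not the DRRS pipeline, and the names and parameters are ours:

```python
import numpy as np

def svt_step(M, tau):
    """Singular value shrinkage: soft-threshold the singular values by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def svt_complete(M_obs, mask, tau=5.0, step=1.0, iters=200):
    """Basic SVT matrix completion sketch: alternate singular value
    shrinkage with a correction that pulls the observed entries
    (where mask == 1) back toward their known values."""
    Y = np.zeros_like(M_obs)
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        X = svt_step(Y, tau)
        Y = Y + step * mask * (M_obs - X)
    return X
```

After convergence, the unobserved entries of the returned low-rank matrix serve as predicted association scores, which is exactly the role the completed drug-disease matrix plays in a repositioning recommender.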