This study considers the problem of determining an unknown abrupt change in the power and frequency parameters of band fast-fluctuating Gaussian processes. To solve it, new approximations of decision stat...
ISBN (digital): 9798350371406
ISBN (print): 9798350371413
Several computational features distinguish one type of abnormality from another, including the nature of the abnormality, its size, shape, volume, number of lesions, and distribution. Because it is challenging to classify distinct types of abnormalities with an automated approach, we use the term abnormalities to refer to tumours, blood clots, and strokes collectively. Segmentation of brain images attempts to assign tissue labels to individual pixels. Since MRI gives greater contrast of soft-tissue structures, it is the preferred modality for imaging the brain. The appropriate segmentation method depends strongly on the image-capture modality and the tissue of interest. To devise a suitable treatment plan, it is crucial to correctly diagnose brain abnormalities such as tumours, strokes, or bleeding lesions. Studies have focused on developing better image-processing algorithms for use in CAD systems, with the ultimate goal of assessing images of various brain disorders. In this study, the researchers employ an innovative automated method to examine brain MRIs for signs of disease. Image segmentation, area and volume calculation, and localization are only a few of the many steps of the implemented algorithm. A statistical comparison of the proposed method's output with the reference image shows encouraging results.
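The pipeline the abstract describes (segment the image, then compute the lesion's area) can be sketched minimally. The paper's actual segmentation algorithm is not given above, so this sketch assumes simple intensity thresholding; `segment_and_measure` and its parameters are illustrative names, not the authors' API.

```python
import numpy as np

def segment_and_measure(slice_2d, threshold, pixel_area_mm2):
    """Label pixels above an intensity threshold as abnormal and
    report the lesion area. A generic sketch of the segmentation ->
    area-calculation steps; the paper's actual pipeline is richer."""
    mask = slice_2d > threshold          # crude lesion labelling
    area_mm2 = mask.sum() * pixel_area_mm2
    return mask, area_mm2

# toy 4x4 "MRI slice" with a bright 2x2 lesion
img = np.zeros((4, 4))
img[1:3, 1:3] = 200.0
mask, area = segment_and_measure(img, threshold=100.0, pixel_area_mm2=0.25)
print(area)  # 4 pixels * 0.25 mm^2 = 1.0
```

Volume follows the same pattern: sum the per-slice areas and multiply by the slice thickness.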
Authors:
Aidos, Helena; Tomás, Pedro
LASIGE, Faculdade de Ciências, Universidade de Lisboa, Lisbon, Portugal
INESC-ID, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
Missing values are a fundamental issue in many applications, constraining the use of different learning methods or impairing the attained results. Many solutions have been proposed by relying on statistic...
In recent years, deep learning has been widely used in the field of medical image processing, for tasks such as symptom identification and organ detection. Due to the complexity of medical images, in the model training, the...
Machine learning is a machine's acquisition of rules or algorithms. It is the process of providing scientific algorithms and statistical models to a computer system, which uses them to perform specific task...
ISBN (print): 9789082797060
In recent trends, Computer Assisted Diagnosis (CAD) enables pathologists to diagnose cancer from histopathology images very efficiently. Color normalization is a pre-processing step prior to the cancer-classification task that can reduce the computational complexity of the classifier. However, existing color normalization methods suffer from data loss and high computational complexity. The purpose of color normalization is to reduce the color variation among a set of histopathology images so that, in the next step, the classifier can efficiently extract the prominent features for cancer grading. This color variation generally arises from the use of different scanners, variability in stain concentration, and poor tissue sectioning during preparation of the histopathology slides. In this paper, a modified Reinhard algorithm is proposed for color normalization of Hematoxylin and Eosin (H&E) stained colorectal cancer histopathology images. The proposed algorithm alleviates the limitations of the Reinhard algorithm. Moreover, a statistical analysis is provided to prove that the proposed algorithm does not cause any data loss and consequently satisfies all four hypotheses of color normalization. Furthermore, the performance of the proposed algorithm is compared with other existing color normalization methods both qualitatively and quantitatively.
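For reference, the classic Reinhard baseline that the paper modifies is plain statistics matching: each channel of the source image, in a perceptual color space, is shifted and scaled to the target image's mean and standard deviation. The sketch below assumes the images are already converted to Lab (e.g. via `skimage.color.rgb2lab`); the paper's modification is not reproduced here.

```python
import numpy as np

def reinhard_normalize(src_lab, tgt_lab):
    """Classic Reinhard statistics matching: per channel, map the
    source's mean/std onto the target's. Inputs are (H, W, 3) arrays
    assumed to be in Lab space; the paper's modified variant differs."""
    out = np.empty(src_lab.shape, dtype=float)
    for c in range(3):
        s = src_lab[..., c].astype(float)
        mu_s, sd_s = s.mean(), s.std()
        mu_t, sd_t = tgt_lab[..., c].mean(), tgt_lab[..., c].std()
        out[..., c] = (s - mu_s) * (sd_t / (sd_s + 1e-8)) + mu_t
    return out

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(8, 8, 3))   # stand-in "source slide"
tgt = rng.uniform(20, 60, size=(8, 8, 3))   # stand-in "target slide"
out = reinhard_normalize(src, tgt)
```

After the transform, each channel of `out` has (up to floating-point error) the target's mean and standard deviation, which is exactly the color-variation reduction the abstract refers to.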
Significant technical constraints, such as limited data storage on the satellite platform and limited bandwidth for communication with the ground station, prevent satellite sensors from simultaneously recor...
ISBN (digital): 9781510646018
ISBN (print): 9781510646018
With an ever-increasing amount of image data, the manual labeling process has become the bottleneck in many machine learning applications. Plankton taxa labeling is especially challenging due to its complex nature, and the manual labeling effort places a large burden on the domain experts. The Active Learning (AL) paradigm is a promising research direction adopted in the literature to minimize the manual labeling effort exerted by domain experts. Many AL approaches have been proposed over recent years to improve the labeling task by supporting the construction of large datasets suitable for training machine learning models while minimizing human involvement in the process. Our empirical study suggests that many modern active learning methods fail to incorporate both the samples that represent the statistical pattern of the data and the samples about which the machine learning model is not confident. Motivated by these limitations, we propose an algorithm that combines these two types of sampling in order to capture the data distribution of the whole feature space, prevent redundant sampling from correlated uncertainty queries, and fine-tune the inter-class decision boundary. Our experiments show that the proposed method outperforms each of the component methods used separately. Furthermore, it proves efficient on both the CIFAR dataset and the more complex Kaggle plankton dataset.
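The two sampling criteria the abstract combines can be sketched in a few lines: predictive entropy captures where the model is unsure, and density in feature space captures which samples represent the data's statistical pattern. The scoring rule below (a convex combination of the two, with an assumed RBF density) is an illustrative stand-in, not the paper's exact method.

```python
import numpy as np

def hybrid_query(probs, feats, k, alpha=0.5):
    """Rank unlabeled samples by a convex combination of predictive
    entropy (uncertainty) and mean RBF similarity to the pool
    (representativeness), returning the top-k indices to label.
    A simplified sketch of the combined-sampling idea."""
    eps = 1e-12
    entropy = -(probs * np.log(probs + eps)).sum(axis=1)
    dist = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    density = np.exp(-dist).mean(axis=1)
    score = (alpha * entropy / (entropy.max() + eps)
             + (1 - alpha) * density / (density.max() + eps))
    return np.argsort(-score)[:k]

# toy pool: sample 0 is both uncertain (p = 0.5/0.5) and central
probs = np.array([[0.5, 0.5], [0.99, 0.01], [0.9, 0.1]])
feats = np.array([[0.0, 0.0], [5.0, 5.0], [-5.0, -5.0]])
picked = hybrid_query(probs, feats, k=1)
print(picked)  # [0]
```

A pure-uncertainty query can repeatedly pick correlated points near one decision boundary; mixing in the density term is what spreads queries across the feature space, as the abstract argues.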
The generalized normal (GN) distribution is one of the most used models for imagery feature processing. Alternatives more flexible than the GN are sought for some real phenomena. We propose a new four-parameter distribution for modeling medical imagery, called the Marshall-Olkin generalized normal (MOGN). Some of its properties are derived, including the quantile function, expansions for the density and cumulative distribution functions, and ordinary and incomplete moments. We estimate the model parameters using maximum likelihood and provide a stochastic expectation-maximization (SEM)-based segmentation algorithm for features in medical images. The performance of our proposals is quantified in an application to real data, in contrast with the results furnished by well-known segmentation methods: k-means and another based on the generalized-normal SEM. Results indicate that the segmenter induced by the MOGN distribution may be an efficient tool for processing medical images.
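The Marshall-Olkin family adds one tilt parameter α > 0 to any base CDF F via G(x) = F(x) / (α + (1 − α) F(x)), so applying it to the three-parameter GN gives the four-parameter MOGN. A dependency-free sketch, using the normal (the GN at shape β = 2) as the base CDF; the paper's exact parameterization may differ.

```python
import math

def marshall_olkin_cdf(F, alpha):
    """Marshall-Olkin transform of a base CDF value F:
    G = F / (alpha + (1 - alpha) * F). alpha = 1 recovers the base
    distribution; other values tilt the tails."""
    return F / (alpha + (1.0 - alpha) * F)

def normal_cdf(x, mu=0.0, sigma=1.0):
    """The GN reduces to the normal at shape beta = 2; used here as
    the base CDF so the sketch needs no external libraries."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

# alpha = 1 gives back the base CDF exactly
print(marshall_olkin_cdf(normal_cdf(0.0), 1.0))  # 0.5
```

Since G is a monotone rational function of F with G(−∞) = 0 and G(+∞) = 1, it is a valid CDF for any α > 0, which is what makes the construction a drop-in flexibility boost for the GN inside the SEM segmenter.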
ISBN (digital): 9798350368741
ISBN (print): 9798350368758
Remote sensing spatio-temporal fusion (STF) aims at fusing temporally dense coarse-resolution images and temporally sparse fine-resolution images to reconstruct images with high spatio-temporal resolution. Multi-band remote sensing images, whose bands have complementary characteristics for high-fidelity land-surface reconstruction, are often accepted as inputs for STF, yet existing STF frameworks often treat different bands uniformly without considering the statistical correlation between them, resulting in unsatisfying results. To address this problem, this paper presents a group-wise semantic-enhanced interactive network for STF, dubbed GSINet. Based on statistical observations, the feature correlation between the visible-light group and the invisible-light group is weak, while the intra-group correlation is strong. Therefore, GSINet first separates the inputs into visible-light and invisible-light groups, which are fed into different branches with independent encoders for feature extraction and fusion. Afterwards, to address the issue of land-cover changes between the prediction coarse-resolution and reference fine-resolution images, a Semantic-Enhancement Fusion Module (SEFM) is designed that lets features from the same group interact, with enhanced semantic information captured in an unsupervised-learning manner. Then, the semantic-enhanced fused features from different bands are fed into an Interleaved Cross-attention Module (ICM) for further fusion. Finally, the output fusion features, which fully encode the intra- and inter-group information, are fed into the decoder to reconstruct the spatio-temporally high-resolution images. Extensive experiments on the CIA and LGC benchmark datasets demonstrate that GSINet outperforms a variety of state-of-the-art methods on multiple metrics.
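The first stage of the design, routing bands into a visible-light branch and an invisible-light branch, reduces to a channel split. The band layout below is an assumption (a Landsat-like ordering with three visible bands first), and the per-branch encoders are stand-ins, not the paper's layers.

```python
import numpy as np

# Assumed band ordering: first three channels visible, rest infrared.
VISIBLE, INVISIBLE = [0, 1, 2], [3, 4, 5]

def group_split(image):
    """Route visible-light and invisible-light bands into separate
    branches so each independent encoder only sees strongly
    intra-correlated channels, per the GSINet grouping idea."""
    return image[VISIBLE], image[INVISIBLE]

img = np.random.rand(6, 32, 32)   # (bands, H, W) multi-band input
vis, inv = group_split(img)
print(vis.shape, inv.shape)        # (3, 32, 32) (3, 32, 32)
```

Everything downstream (SEFM, ICM, decoder) then operates on these two feature streams before the intra- and inter-group information is merged for reconstruction.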