ISBN (digital): 9783031337833
ISBN (print): 9783031337826; 9783031337833
The control of weeds at early stages is one of the most relevant tasks in agriculture. However, the detection of plants in environments with uncontrolled conditions is still a challenge. Hence, a deep learning-based approach to address this problem is proposed in this work. On the one hand, a CNN model based on the UNet architecture is used to segment plants. On the other hand, a MobileNetV2-based architecture is implemented to classify different types of plants, in this case corn, monocotyledon weed species, and dicotyledon weed species. To train the models, a large image dataset was first created and manually annotated. The segmentation network achieves a Dice Similarity Coefficient (DSC) of 84.27% and a mean Intersection over Union (mIoU) of 74.21%. The classification model obtained an accuracy, precision, recall, and F1-score of 96.36%, 96.68%, 96.34%, and 96.34%, respectively. As the results indicate, the proposed approach is an advantageous alternative for farmers since it provides a way to detect crops and weeds in natural field conditions.
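For illustration only, the sketch below shows how such a two-stage pipeline could be assembled with PyTorch/torchvision: a MobileNetV2 backbone receives a three-class head (corn, monocot weed, dicot weed), and a small Dice helper mirrors the reported segmentation metric. The class count, input size, and use of torchvision are assumptions; this is not the authors' code.

```python
# Hedged sketch, not the paper's implementation: MobileNetV2 classifier head plus a
# Dice metric helper for the segmentation stage of a crop/weed pipeline.
import torch
import torch.nn as nn
from torchvision import models

class WeedClassifier(nn.Module):
    """MobileNetV2 backbone with a 3-class head (corn, monocot weed, dicot weed - assumed labels)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.backbone = models.mobilenet_v2(weights=None)
        in_features = self.backbone.classifier[1].in_features
        self.backbone.classifier[1] = nn.Linear(in_features, num_classes)

    def forward(self, x):
        return self.backbone(x)

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6):
    """Dice Similarity Coefficient, the metric reported for the segmentation network."""
    pred = (pred_mask > 0.5).float()
    inter = (pred * true_mask).sum()
    return (2 * inter + eps) / (pred.sum() + true_mask.sum() + eps)

# Usage: segment plants with a U-Net (not shown), crop the plant regions,
# then classify each cropped region with the MobileNetV2 head.
classifier = WeedClassifier()
logits = classifier(torch.randn(1, 3, 224, 224))   # one cropped plant region
print(logits.softmax(dim=1))
```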
In the healthcare sector, magnetic resonance imaging (MRI) images are acquired for multiple sclerosis (MS) assessment, classification, and management. However, interpreting an MRI scan requires exceptional skill because abnormalities on scans are frequently inconsistent with clinical symptoms, making it difficult to convert the findings into effective treatment strategies. Furthermore, MRI is an expensive procedure, and its frequent use to monitor an illness increases healthcare costs. To overcome these drawbacks, this research employs advanced technological approaches to develop a deep learning system for classifying types of MS from clinical brain MRI scans. The major innovation of this model is to combine a convolutional network with an attention mechanism and recurrent deep learning for classifying the disorder; an optimization algorithm is also proposed for tuning the parameters to enhance performance. Initially, a total of 3427 images are collected from the database and divided into training and testing sets. Segmentation is carried out by an adaptive and attentive-based mask regional convolutional neural network (AA-MRCNN). In this phase, the MRCNN's parameters are finely tuned with an enhanced pine cone optimization algorithm (EPCOA) to guarantee outstanding efficiency. The segmented images are then given to a recurrent MobileNet with long short-term memory (RM-LSTM) to obtain the classification outcomes. In experimental analysis, this deep learning model achieved 95.4% accuracy, 95.3% sensitivity, and 95.4% specificity. These results demonstrate its high potential for appropriately classifying the sclerosis disorder.
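As a rough sketch of the segmentation stage only, a stock Mask R-CNN from torchvision could stand in for the AA-MRCNN described above; the adaptive/attentive modules and the EPCOA parameter tuning are not reproduced, and the two-class setup (background vs. lesion) is an assumption.

```python
# Hedged sketch: off-the-shelf Mask R-CNN as a lesion-segmentation stage.
# This is NOT the paper's AA-MRCNN; it only illustrates the Mask R-CNN starting point.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=2)  # background + lesion (assumed)
model.eval()
with torch.no_grad():
    predictions = model([torch.rand(3, 256, 256)])   # one MRI slice treated as a 3-channel image
print(predictions[0]["masks"].shape)                  # (N, 1, 256, 256) per-detection soft masks
```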
Segmentation and classification of breast tumors are two critical tasks since they provide significant information for computer-aided breast cancer diagnosis. Combining these tasks leverages their intrinsic relevance to enhance performance, but the variability and complexity of tumor characteristics remain challenging. We propose a novel multi-task deep learning network (NMTNet) for the joint segmentation and classification of breast tumors, based on a convolutional neural network (CNN) and a U-shaped architecture. It mainly comprises a shared encoder, a multi-scale fusion channel refinement (MFCR) module, a segmentation branch, and a classification branch. First, ResNet18 is used as the backbone network in the encoding part to enhance the feature representation capability. Then, the MFCR module is introduced to enrich feature depth and diversity. In addition, the segmentation branch incorporates a lesion region enhancement (LRE) module between the encoder and decoder, aiming to capture more detailed texture and edge information of irregular tumors to improve segmentation accuracy. The classification branch incorporates a fine-grained classifier that reuses valuable segmentation information to discriminate between benign and malignant tumors. The proposed NMTNet is evaluated on both ultrasound and magnetic resonance imaging datasets. It achieves segmentation Dice scores of 90.30% and 91.50%, and Jaccard indices of 84.70% and 88.10%, respectively, with classification accuracies of 87.50% and 99.64% on the corresponding datasets. Experimental results demonstrate the superiority of NMTNet over state-of-the-art methods on breast tumor segmentation and classification tasks.
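A minimal sketch of the shared-encoder idea follows, assuming PyTorch and a deliberately simplified decoder (the MFCR and LRE modules are omitted); this is not the published NMTNet, only an illustration of one encoder feeding both a segmentation head and a benign/malignant head.

```python
# Hedged sketch of a shared ResNet18 encoder with two task heads (segmentation + classification).
import torch
import torch.nn as nn
from torchvision import models

class JointSegCls(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        self.encoder = nn.Sequential(*list(backbone.children())[:-2])  # (B, 512, H/32, W/32)
        self.seg_head = nn.Sequential(                                 # toy decoder, not the paper's
            nn.Conv2d(512, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 2))

    def forward(self, x):
        feat = self.encoder(x)
        return self.seg_head(feat), self.cls_head(feat)

mask_logits, cls_logits = JointSegCls()(torch.randn(1, 3, 256, 256))
print(mask_logits.shape, cls_logits.shape)   # (1, 1, 256, 256) and (1, 2)
```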
Skin lesion (SL) prediction and segmentation is a trending topic in medical imaging for determining disease severity. However, noisy pixels in the image data can reduce the exactness of image feature analysis, resulting in average or poor segmentation accuracy, so traditional methods do not succeed in this specific domain. Hence, proper preprocessing is required to filter noise features at a high level in order to attain the expected SL segmentation and classification exactness score. To address these issues, the current study implements a novel Sailfish-based Gradient Boosting Framework (SbGBF) for recognizing and segmenting the SL region with a high exactness rate. The boosting mechanism is rich in noise-removal features; hence, this study considers an optimized boosting mechanism. The boosting parameters are initially activated to eliminate the noise variables in the trained SL data. Then the sailfish fitness function is processed to trace the region features in the preprocessed SL images, and the segmentation is performed. Finally, the SL types are classified and the processing outcomes are measured. The SL segmentation and classification accuracy recorded by the novel SbGBF is 98%, with a reported error score of 0.02%. In addition, the recorded minimum computation time is 3 s, which is considerably lower than that of the compared models.
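To make the boosting side concrete, here is a hedged sketch using scikit-learn's GradientBoostingClassifier on placeholder lesion feature vectors, with random search standing in for the sailfish optimizer; the feature dimensionality, labels, and parameter ranges are all assumptions, not values from the study.

```python
# Hedged sketch: gradient boosting over placeholder lesion features, with
# hyperparameters tuned by random search in place of the sailfish optimizer.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))       # placeholder lesion feature vectors
y = rng.integers(0, 2, size=200)     # placeholder lesion-type labels

search = RandomizedSearchCV(
    GradientBoostingClassifier(),
    {"n_estimators": [100, 200, 400], "learning_rate": [0.01, 0.05, 0.1], "max_depth": [2, 3, 4]},
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```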
Thyroid nodule segmentation and classification in ultrasound images are two essential but challenging tasks for computer-aided diagnosis of thyroid nodules. Since these two tasks are inherently related to each other and share some common features, solving them jointly with multi-task learning is a promising direction. However, both previous studies and our experimental results confirm the problem of inconsistent predictions among these related tasks. In this paper, we summarize two types of task inconsistency according to the relationship among different tasks: intra-task inconsistency between homogeneous tasks (e.g., both tasks are pixel-wise segmentation tasks), and inter-task inconsistency between heterogeneous tasks (e.g., a pixel-wise segmentation task and a categorical classification task). To address the task inconsistency problems, we propose intra- and inter-task consistent learning on top of the designed multi-stage and multi-task learning network to enforce consistent predictions for all the tasks during network training. Our experimental results on a large clinical thyroid ultrasound image dataset indicate that the proposed intra- and inter-task consistent learning can effectively eliminate both types of task inconsistency and thus improve the performance of all tasks for thyroid nodule segmentation and classification.
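As an illustration of the inter-task consistency idea (not the paper's exact loss), the sketch below penalizes disagreement between the image-level class probability and evidence aggregated from the pixel-wise mask; the top-k aggregation and the value of k are my assumptions.

```python
# Hedged sketch of an inter-task consistency term between a segmentation map
# and an image-level classification output.
import torch
import torch.nn.functional as F

def inter_task_consistency(seg_logits: torch.Tensor, cls_logits: torch.Tensor) -> torch.Tensor:
    """seg_logits: (B, 1, H, W) nodule-mask logits; cls_logits: (B, 2) class logits."""
    seg_prob = torch.sigmoid(seg_logits)
    # Image-level evidence from the mask: mean of the top-k nodule probabilities (k assumed).
    mask_evidence = seg_prob.flatten(1).topk(k=100, dim=1).values.mean(dim=1)   # (B,)
    cls_prob = F.softmax(cls_logits, dim=1)[:, 1]                               # P(positive class)
    return F.mse_loss(mask_evidence, cls_prob)

loss = inter_task_consistency(torch.randn(4, 1, 64, 64), torch.randn(4, 2))
print(loss.item())
```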
Liver cancer is a major disease that leads to deaths worldwide. Image classification based on Computed Axial Tomography (CAT) with manual segmentation is the classical approach, and it requires considerable time when applied to large datasets. Computer-Aided Diagnosis (CAD) helps identify liver disease at an early stage and hence can reduce the death rate. Several standard optimization techniques are widely used in research for classification and segmentation processes. However, they face challenges in segmenting the tumor, which affects the classification process. Hence, in this paper, a novel deep learning system is presented for diagnosing liver tumor disease. The implemented approach involves three major phases. The required images are collected from standard sources. Subsequently, the collected images are fed into a Transformer Residual EfficientNet B7 Unet++ (Trans-REB7Unet++), where the abnormal regions are segmented. The segmented images are then given to an Adaptive EfficientNet B7 with LSTM layer (AEB7-LSTM), in which EfficientNet B7 is combined with LSTM layers. To further achieve optimal results, an Adaptive Circle-Inspired Optimization (ACIO) algorithm is proposed that optimizes the parameters of the model. Finally, the efficiency of the technique is validated with various standard metrics and compared with baseline models. The results demonstrate that the suggested system accurately identifies liver tumor disease.
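A hedged reading of the EfficientNet-B7-plus-LSTM classifier is sketched below: feature maps are flattened into a spatial sequence and passed through an LSTM before the final decision layer. The two-class head and the sequence construction are assumptions, and the ACIO parameter tuning is not reproduced; this is not the authors' AEB7-LSTM.

```python
# Hedged sketch: EfficientNet-B7 features treated as a spatial sequence for an LSTM classifier.
import torch
import torch.nn as nn
from torchvision import models

class EffB7LSTM(nn.Module):
    def __init__(self, num_classes: int = 2, hidden: int = 256):   # 2 classes assumed
        super().__init__()
        self.features = models.efficientnet_b7(weights=None).features   # (B, 2560, H/32, W/32)
        self.lstm = nn.LSTM(input_size=2560, hidden_size=hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, x):
        seq = self.features(x).flatten(2).permute(0, 2, 1)   # spatial grid as a sequence
        _, (h_n, _) = self.lstm(seq)
        return self.fc(h_n[-1])

print(EffB7LSTM()(torch.randn(1, 3, 224, 224)).shape)   # torch.Size([1, 2])
```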
Accurate segmentation and classification of pulmonary nodules are of great significance to early detection and diagnosis of lung diseases, which can reduce the risk of developing lung cancer and improve patient survival rates. In this paper, we propose an effective network that performs pulmonary nodule segmentation and classification simultaneously, based on an adversarial training scheme. The segmentation network consists of a High-Resolution network with Multi-scale Progressive Fusion (HR-MPF) and a proposed Progressive Decoding Module (PDM) that recovers the final pixel-wise predictions. Specifically, the proposed HR-MPF incorporates a boosted module into the High-Resolution Network (HRNet) in a progressive feature fusion manner, so that feature communication is augmented among all levels of this high-resolution network. The downstream classification module then identifies benign and malignant pulmonary nodules based on the feature map from the PDM. In the adversarial training scheme, a discriminator is set to optimize HR-MPF and PDM through back-propagation, while a carefully designed multi-task loss function optimizes the overall performance of segmentation and classification. To improve the accuracy of boundary prediction, which is crucial to nodule segmentation, a boundary consistency constraint is designed and incorporated into the segmentation loss function. Experiments on the publicly available LUNA16 dataset show that the framework outperforms relevant advanced methods in quantitative evaluation and visual perception.
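A boundary consistency term could look roughly like the following (my illustration, not the paper's formulation): boundaries are extracted from the predicted and reference masks with a max-pooling "morphological gradient" and compared with an L1 penalty; the kernel size and the L1 choice are assumptions.

```python
# Hedged sketch of a boundary consistency loss for nodule segmentation.
import torch
import torch.nn.functional as F

def soft_boundary(mask: torch.Tensor, k: int = 3) -> torch.Tensor:
    """mask: (B, 1, H, W) probabilities; returns a soft boundary map via dilation minus erosion."""
    dilated = F.max_pool2d(mask, kernel_size=k, stride=1, padding=k // 2)
    eroded = -F.max_pool2d(-mask, kernel_size=k, stride=1, padding=k // 2)
    return dilated - eroded

def boundary_consistency_loss(pred_prob: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    return F.l1_loss(soft_boundary(pred_prob), soft_boundary(target))

pred = torch.sigmoid(torch.randn(2, 1, 64, 64))
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
print(boundary_consistency_loss(pred, target).item())
```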
Automatic diagnosis based on medical imaging necessitates both lesion segmentation and disease classification. Lesion segmentation requires pixel-level annotations, while disease classification only requires image-level annotations. The two tasks are usually studied separately, even though the latter relies on the former. Motivated by the close correlation between them, we propose a mixed-supervision guided method and a residual-aided classification U-Net model (ResCU-Net) for joint segmentation and benign-malignant classification. By coupling strong supervision in the form of a segmentation mask and weak supervision in the form of a benign-malignant label through a simple annotation procedure, our method efficiently segments tumor regions while simultaneously predicting a discriminative map for identifying the benign or malignant type of a tumor. Our network, ResCU-Net, extends U-Net by incorporating the residual module and the SegNet architecture to exploit multilevel information for improved tissue identification. With experiments on the public mammogram database INbreast, we validate the effectiveness of our method and achieve consistent improvements over state-of-the-art models.
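A minimal sketch of mixed supervision under stated assumptions: samples with a pixel mask contribute a segmentation loss, samples with only an image-level label contribute a classification loss. The equal weighting of the two terms is an assumption, and this is not the ResCU-Net training code.

```python
# Hedged sketch of a mixed-supervision loss combining pixel-level and image-level labels.
import torch
import torch.nn.functional as F

def mixed_supervision_loss(seg_logits, cls_logits, masks, labels, has_mask):
    """has_mask: (B,) bool; masks: (B, 1, H, W) in {0,1}; labels: (B,) long."""
    cls_loss = F.cross_entropy(cls_logits, labels)            # weak, image-level supervision
    if has_mask.any():
        seg_loss = F.binary_cross_entropy_with_logits(
            seg_logits[has_mask], masks[has_mask])             # strong, pixel-level supervision
    else:
        seg_loss = seg_logits.new_zeros(())
    return cls_loss + seg_loss                                 # equal weighting assumed

loss = mixed_supervision_loss(
    torch.randn(4, 1, 32, 32), torch.randn(4, 2),
    (torch.rand(4, 1, 32, 32) > 0.5).float(), torch.randint(0, 2, (4,)),
    torch.tensor([True, True, False, False]))
print(loss.item())
```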
Objectives: We aimed to develop machine learning (ML) models based on diffusion- and perfusion-weighted imaging fusion (DP fusion) for identifying stroke within 4.5 h, to compare them with DWI- and/or PWI-based ML models, and to construct an automatic segmentation-classification model and compare it with manual labeling methods. Methods: ML models were developed from multimodal MRI datasets of acute stroke patients within 24 h of clear symptom onset from two centers. The process included manual segmentation, registration, DP fusion, feature extraction, and model establishment (logistic regression (LR) and support vector machine (SVM)). A segmentation-classification model (X-Net) was proposed for automatically identifying stroke within 4.5 h. The area under the receiver operating characteristic curve (AUC), sensitivity, Dice coefficients, decision curve analysis, and calibration curves were used to evaluate model performance. Results: A total of 418 patients (≤4.5 h: 214; >4.5 h: 204) were evaluated. The DP fusion model achieved the highest AUC in identifying onset time in the training (LR: 0.95; SVM: 0.92) and test sets (LR: 0.91; SVM: 0.90). The DP fusion-LR model displayed consistent positive and greater net benefits than other models across a broad range of risk thresholds. The calibration curve demonstrated the good calibration of the DP fusion-LR model (average absolute error: 0.049). The X-Net model obtained the highest Dice coefficients (DWI: 0.81; Tmax: 0.83) and achieved performance similar to manual labeling (AUC: 0.84). Conclusions: The automatic segmentation-classification models based on DWI and PWI fusion images showed high performance in identifying stroke within 4.5 h.
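As a sketch of the classical ML stage only (not the study's pipeline), logistic regression and an SVM can be cross-validated on fused-feature vectors with scikit-learn; the feature dimensionality and labels below are placeholders, not the study's data.

```python
# Hedged sketch: LR and SVM cross-validated by AUC on placeholder DWI/PWI-fusion features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(418, 30))     # placeholder fused radiomics-style features
y = rng.integers(0, 2, size=418)   # 1 = onset within 4.5 h (placeholder labels)

for name, model in [("LR", LogisticRegression(max_iter=1000)),
                    ("SVM", SVC(probability=True))]:
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(name, round(auc, 3))
```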
The advent of three-dimensional convolutional neural networks (3D CNNs) has revolutionized the detection and analysis of COVID-19 cases. As imaging technologies have advanced, 3D CNNs have emerged as a powerful tool for segmenting and classifying COVID-19 in medical images. These networks have demonstrated both high accuracy and rapid detection capabilities, making them crucial for effective COVID-19 diagnostics. This study offers a thorough review of various 3D CNN algorithms, evaluating their efficacy in segmenting and classifying COVID-19 across a range of medical imaging modalities. This review systematically examines recent advancements in 3D CNN methodologies. The process involved a comprehensive screening of abstracts and titles to ensure relevance, followed by a meticulous selection and analysis of research papers from academic repositories. The study evaluates these papers based on specific criteria and provides detailed insights into the network architectures and algorithms used for COVID-19 detection. The review reveals significant trends in the use of 3D CNNs for COVID-19 segmentation and classification. It highlights key findings, including the diverse range of networks employed for COVID-19 detection compared to other diseases, which predominantly utilize encoder/decoder frameworks. The study provides an in-depth analysis of these methods, discussing their strengths, limitations, and potential areas for future research. The study reviewed a total of 60 papers published across various repositories, including Springer and Elsevier. The insights from this study have implications for clinical diagnosis and treatment strategies. Despite some limitations, the accuracy and efficiency of 3D CNN algorithms underscore their potential for advancing medical image segmentation and classification. The findings suggest that 3D CNNs could significantly enhance the detection and management of COVID-19, contributing to improved healthcare outcomes.