The HT-29 cell line, derived from human colon cancer, is valuable for biological and cancer research applications. Early detection is crucial for improving the chances of survival, and researchers are introducing new techniques for accurate cancer diagnosis. This study introduces an efficient deep learning-based method for detecting and counting colorectal cancer cells (HT-29). The colorectal cancer cell line was procured commercially, the cells were cultured, and a transwell experiment was conducted in the lab to collect a dataset of colorectal cancer cell images via fluorescence microscopy. Of the 566 images, 80% were allocated to the training set and the remaining 20% to the testing set. HT-29 cell detection and counting in the medical images is performed by integrating the YOLOv2 detector with ResNet-50 and ResNet-18 architectures. The accuracy achieved is 98.70% with ResNet-18 and 96.66% with ResNet-50. The study achieves its primary objective by focusing on detecting and quantifying congested and overlapping colorectal cancer cells within the images. This work constitutes a significant development in overlapping cancer cell detection and counting, paving the way for novel advancements and opening new avenues for research and clinical applications. Researchers can extend the study by exploring variations in ResNet and YOLO architectures to optimize object detection performance, and further investigation into real-time deployment strategies would enhance the practical applicability of these models.
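As a rough illustration of the counting step, the sketch below tallies detector outputs that survive confidence filtering and greedy non-maximum suppression, using a relatively low IoU threshold so that overlapping cells are still counted separately. The box format, threshold values, and the count_cells helper are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box [x1, y1, x2, y2] and an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def count_cells(boxes, scores, conf_thresh=0.5, iou_thresh=0.45):
    """Count detections surviving confidence filtering and greedy NMS.

    A moderate IoU threshold keeps heavily overlapping cells as separate counts.
    """
    keep = scores >= conf_thresh
    boxes, scores = boxes[keep], scores[keep]
    order = np.argsort(scores)[::-1]
    kept = []
    while order.size > 0:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) < iou_thresh]
    return len(kept)

# Toy example: three raw detections, two of which heavily overlap -> count of 2.
boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [80, 80, 120, 120]], float)
scores = np.array([0.9, 0.6, 0.8])
print(count_cells(boxes, scores))
```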
Most CNN (convolutional neural network) methods require face alignment, which affects the efficiency of verification. This paper proposes a deep face verification framework without alignment. First and foremost, the framework consists of two training stages and one testing stage. In the first training stage, the CNN is fully trained on a large face dataset. In the second training stage, triplet embedding is adopted to fine-tune the models. Furthermore, in the testing stage, SIFT (scale-invariant feature transform) descriptors are extracted from intermediate pooling results for cascading verification, which effectively improves the accuracy of face verification without alignment. Last but not least, two CNN architectures are designed for different scenarios. CNN1 (convolutional neural network 1), with fewer layers and parameters, requires little memory and computation in training and testing, so it is suitable for real-time systems. CNN2 (convolutional neural network 2), with more layers and parameters, delivers excellent face verification performance. Through long-term training on the WEB-face dataset and experiments on the LFW (Labeled Faces in the Wild) and YTB (YouTube) datasets, the results show that the proposed method has superior performance compared with some state-of-the-art methods.
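The second training stage (triplet fine-tuning of the embedding) can be pictured with the minimal PyTorch sketch below. The tiny EmbeddingNet backbone, margin, and tensor shapes are placeholders and are not the CNN1/CNN2 architectures described in the paper.

```python
import torch
import torch.nn as nn

class EmbeddingNet(nn.Module):
    """Placeholder embedding network standing in for a pre-trained CNN backbone."""
    def __init__(self, dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, dim)

    def forward(self, x):
        emb = self.fc(self.features(x).flatten(1))
        return nn.functional.normalize(emb, dim=1)  # L2-normalized embedding

model = EmbeddingNet()
criterion = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One fine-tuning step on a random (anchor, positive, negative) batch.
anchor, positive, negative = (torch.randn(8, 3, 112, 112) for _ in range(3))
loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"triplet loss: {loss.item():.4f}")
```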
In recent years, the resolution of satellite remote sensing images has been continuously improved. People can obtain more useful data and information from remote sensing images. Remote sensing images are widely used i...
Cracks are one of the most common defects affecting the quality of concrete surfaces, so it is necessary to detect concrete surface cracks. However, the current practice of manual crack detection is labor-intensive and time-consuming. This study implements a novel lightweight neural network based on the YOLOv4 algorithm to detect cracks on a concrete surface in fog. Drawing on computer vision techniques and the GhostNet module concept, the backbone network architecture of YOLOv4 is improved: feature redundancy within the network is reduced and the entire network is compressed. A multi-scale fusion method is adopted to effectively detect cracks on concrete surfaces. In addition, the detection of concrete surface cracks is seriously affected by the frequent occurrence of fog. To address the degradation of images acquired in fog and the resulting low accuracy of crack detection, the network model is integrated with the dark channel prior concept and the Inception module. Image crack features are extracted at multiple scales, and BReLU bilateral constraints are adopted to maintain local linearity. The improved model for crack detection in fog achieved an mAP of 96.50% with 132 M and 2.24 GMACs. The experimental results show that the detection performance of the proposed model improves in both subjective visual quality and objective evaluation metrics, performing better at detecting concrete surface cracks in fog.
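The dark channel prior mentioned above is a standard single-image dehazing formulation: the dark channel is the patch-wise minimum over all color channels, atmospheric light is estimated from the brightest dark-channel pixels, and transmission is t = 1 - ω·dark(I/A). A rough NumPy/SciPy sketch follows; the patch size and ω are conventional defaults, not values taken from this paper.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dehaze_dark_channel(img, patch=15, omega=0.95, t_min=0.1):
    """Approximate single-image dehazing with the dark channel prior.

    img: HxWx3 float array in [0, 1]. Returns the recovered scene radiance.
    """
    # Dark channel: per-pixel minimum over RGB, then a patch-wise minimum.
    dark = minimum_filter(img.min(axis=2), size=patch)

    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n_top = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n_top:], dark.shape)
    A = img[idx].mean(axis=0)

    # Transmission estimate: t = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * minimum_filter((img / A).min(axis=2), size=patch)
    t = np.clip(t, t_min, 1.0)[..., None]

    # Recover radiance J = (I - A) / t + A and clip to the valid range.
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Toy usage on a synthetic hazy image.
hazy = np.clip(np.random.rand(64, 64, 3) * 0.5 + 0.4, 0, 1)
clear = dehaze_dark_channel(hazy)
print(clear.shape, clear.min(), clear.max())
```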
Spatial sampling density and data size are important determinants of the imaging speed of photoacoustic microscopy (PAM). Therefore, undersampling methods that reduce the number of scanning points by increasing the scanning step size are typically adopted to enhance the imaging speed of PAM. Since undersampling sacrifices spatial sampling density, in this study we report a deep learning-based method that fully reconstructs undersampled 3D PAM data, taking into account the number of data points, the data size, and the fact that PAM provides three-dimensional (3D) volume data. Quantitative analyses demonstrate that the proposed method is robust and outperforms interpolation-based reconstruction methods at various undersampling ratios, enhancing the PAM system performance with 80-times faster imaging speed and an 800-times smaller data size. The proposed method is demonstrated to be the model best suited to experimental conditions, effectively shortening the imaging time while significantly reducing the data size for processing.
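For context on the interpolation baselines the method is compared against, here is a hedged sketch that undersamples a synthetic 3D volume along the lateral scan axes (mimicking a larger scanning step size) and restores it with cubic-spline interpolation. The volume shape and undersampling factor are illustrative only.

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic stand-in for a PAM volume: (depth, y-scan, x-scan).
volume = np.random.rand(64, 128, 128).astype(np.float32)

# Undersample only the lateral scan axes (larger scanning step size).
factor = 4
undersampled = volume[:, ::factor, ::factor]

# Interpolation baseline: cubic-spline upsampling back to the original grid.
restored = zoom(undersampled,
                (1, volume.shape[1] / undersampled.shape[1],
                    volume.shape[2] / undersampled.shape[2]),
                order=3)

mse = np.mean((restored - volume) ** 2)
print(undersampled.shape, restored.shape, f"MSE vs. fully sampled: {mse:.4f}")
```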
ISBN (digital): 9798350376036
ISBN (print): 9798350376043
Low employment rates in Latin America have contributed to a substantial rise in crime, prompting the emergence of new criminal tactics. For instance, “express robbery” has become a common crime committed by armed thieves, in which they drive motorcycles and assault people in public in a matter of seconds. Recent research has approached the problem by embedding weapon detectors in surveillance cameras; however, these systems are prone to false positives if no counterpart confirms the event. In light of this, we present a distributed IoT system that integrates a computer vision pipeline and object detection capabilities into multiple end-devices, constantly monitoring for the presence of firearms and sharp weapons. Once a weapon is detected, the end-device sends a series of frames to a cloud server that implements a 3DCNN to classify the scene as either a robbery or a normal situation, thus minimizing false positives. The deep learning process to train and deploy the weapon detection models uses a custom dataset with 16,799 images of firearms and sharp weapons. The best-performing model, YOLOv5s, optimized using TensorRT, achieved a final mAP of 0.87 running at 4.43 FPS. Additionally, the 3DCNN demonstrated 0.88 accuracy in detecting abnormal situations. Extensive experiments validate that the proposed system significantly reduces false positives while autonomously monitoring multiple locations in real time.
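The cloud-side confirmation step can be pictured as a small 3D CNN over a short clip of frames. The layer sizes, clip length, and class labels in the sketch below are illustrative guesses, not the paper's exact network.

```python
import torch
import torch.nn as nn

class Robbery3DCNN(nn.Module):
    """Toy 3D CNN that classifies a short clip as 'normal' or 'robbery'."""

    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, clip):          # clip: (batch, 3, frames, H, W)
        return self.classifier(self.features(clip).flatten(1))

model = Robbery3DCNN()
clip = torch.randn(1, 3, 16, 112, 112)   # e.g. 16 frames sent by an end-device
probs = torch.softmax(model(clip), dim=1)
print("P(robbery) =", probs[0, 1].item())
```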
Deep learning (DL)-based methods for low-level tasks have many advantages over the traditional camera in terms of hardware prospects, error accumulation and imaging effects. Recently, the application of deep learni...
Objectives: Intravascular optical coherence tomography (IVOCT) is a crucial micro-resolution imaging modality used to assess the internal structure of blood vessels. Lumen segmentation in IVOCT images is vital for measuring the location and extent of vessel blockages and for guiding percutaneous coronary intervention. Obtaining such information in real time is essential, necessitating fast automated algorithms. In this paper, we propose an innovative polynomial-regression convolutional neural network (CNN) for fast and automated IVOCT lumen segmentation. Materials and methods: The polynomial-regression CNN architecture was uniquely crafted to enable single-pass extraction of lumen borders via IVOCT image regression, ensuring real-time processing efficiency without compromising accuracy. The architecture uses convolution for regression while omitting fully connected layers, producing a spatial output that represents the lumen as polynomial coefficients and thus enables the formation of interconnected lumen points. This approach equips the network to comprehend the intricate, continuous geometries and curvatures intrinsic to blood vessels in the transverse and longitudinal dimensions. The network was trained on a dataset of 16,165 images and evaluated using 7,016 images. Results: The predicted segmentations exhibited a distance error of less than 2 pixels (26.40 μm), a Dice coefficient of 0.982, a Jaccard index of 0.966, a sensitivity of 0.980, a specificity of 0.999, and a prediction time of 4 s for a pullback containing 360 images. This technique demonstrated significantly improved performance in both accuracy and speed compared to published techniques. Conclusion: The strong segmentation performance, fast speed, and robustness to image variations highlight the practical clinical utility of the proposed polynomial-regression network. (c) 2023 AGBM. Published by Elsevier Masson SAS. All rights reserved.
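The key idea of regressing polynomial coefficients instead of a per-pixel mask can be illustrated by evaluating a radius-versus-angle polynomial to rasterize a lumen mask and scoring it with the Dice and Jaccard metrics reported above. The polynomial degree, polar parameterisation, and image size below are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def polar_poly_to_mask(coeffs, size=256):
    """Rasterize a lumen region from radius-vs-angle polynomial coefficients."""
    cy = cx = size // 2
    yy, xx = np.mgrid[0:size, 0:size]
    theta = np.arctan2(yy - cy, xx - cx)     # angle of each pixel
    r_border = np.polyval(coeffs, theta)     # predicted lumen radius at that angle
    r_pixel = np.hypot(yy - cy, xx - cx)
    return r_pixel <= r_border               # boolean inside-lumen mask

def dice_jaccard(pred, gt):
    """Overlap metrics between two boolean masks."""
    inter = np.logical_and(pred, gt).sum()
    dice = 2 * inter / (pred.sum() + gt.sum() + 1e-9)
    jaccard = inter / (np.logical_or(pred, gt).sum() + 1e-9)
    return dice, jaccard

# Toy example: a slightly non-circular "prediction" vs. a circular "ground truth".
pred_mask = polar_poly_to_mask([2.0, 0.0, 80.0])   # radius = 2*theta**2 + 80
gt_mask = polar_poly_to_mask([90.0])               # constant radius of 90 px
print(dice_jaccard(pred_mask, gt_mask))
```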
This paper presents the dynamic modelling and linear matrix inequality (LMI) based controller design of a distributed fog computing framework for real-time robot vision applications. A mobile robot vision system acquires images from an application environment such as a warehouse, where articles are stacked in numerous racks. We characterise the mobile robot vision data (MRVD) using the frames per second (FPS) and the image resolution. From the MRVD, object detection is performed by an open-source deep learning (DL) platform for detecting and localising various objects. However, with higher FPS together with high-resolution images, the processing time of the DL algorithm increases significantly. This necessitates the deployment of a distributed computing platform with several computing nodes. In this work, we deploy a distributed fog computing environment (DFCE) for real-time object detection in an application environment. The processing time required to handle the MRVD is called the service time. For efficient auto-scaling performance, a mathematical model of the DFCE that takes into consideration the characteristics of the MRVD is necessary. In this context, we envisage the application of control theory to build the dynamic model of the DFCE. A Linear Parameter Varying (LPV) model is proposed for the DFCE with the service time as the output, the number of fog nodes as the input, and the characteristics of the MRVD as the time-varying parameters. At first, an LPV model of the DFCE is derived using system identification, and the model is validated using real-time test data. The LPV model is converted to a Polytopic LPV (PLPV) model for LMI-based controller design. Finally, we develop and validate an LMI-based LPV controller to meet the service time constraints for a given application environment. For localisation and trajectory tracking with obstacle avoidance in the application environment, the mobile robot implements an Extend
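To make the modelling idea concrete, the sketch below simulates a first-order discrete-time LPV model with the service time as the output, the number of fog nodes as the input, and FPS/resolution as scheduling parameters. The model structure and every coefficient are illustrative placeholders, not values identified from the paper's data.

```python
import numpy as np

def simulate_lpv(u, fps, res, y0=0.0):
    """First-order LPV model: y[k+1] = a(theta)*y[k] + b(theta)*u[k] + d(theta).

    u:        number of fog nodes at each step
    fps, res: time-varying scheduling parameters (frames per second, megapixels)
    Returns the simulated service-time trajectory (illustrative units).
    """
    y = np.empty(len(u) + 1)
    y[0] = y0
    for k in range(len(u)):
        a = 0.6 + 0.002 * fps[k] * res[k]    # heavier load -> slower dynamics
        b = -0.05 / (1.0 + 0.1 * u[k])       # more nodes -> shorter service time
        y[k + 1] = a * y[k] + b * u[k] + 0.01 * fps[k] * res[k]
    return y

steps = 50
u = np.full(steps, 4)                  # 4 fog nodes
fps = np.full(steps, 30.0)             # 30 FPS stream
res = np.full(steps, 2.0)              # ~2-megapixel frames
print(simulate_lpv(u, fps, res)[-5:])  # service time settles toward a steady state
```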