Artificial Intelligence is an essential tool for early disease recognition and for supporting patient condition monitoring in the future. Timely and exact conclusions about the type of disease are significant for treatmen...
ISBN (digital): 9781665487399
ISBN (print): 9781665487405
Herbage mass yield and composition estimation is an important tool for dairy farmers to ensure an adequate supply of high-quality herbage for grazing and, subsequently, milk production. By accurately estimating herbage mass and composition, targeted nitrogen fertiliser application strategies can be deployed to improve localised regions in a herbage field, effectively reducing the negative impacts of over-fertilisation on biodiversity and the environment. In this context, deep learning algorithms offer a tempting alternative to the usual means of sward composition estimation, which involves the destructive process of cutting a sample from the herbage field and sorting all plant species in the herbage by hand. This process is labour-intensive and time-consuming, and so is not utilised by farmers. Deep learning has been successfully applied in this context to images collected by high-resolution cameras on the ground. Moving the deep learning solution to drone imaging, however, has the potential to further improve the herbage mass yield and composition estimation task by extending the ground-level estimation to the large surfaces occupied by fields/paddocks. Drone images come at the cost of lower-resolution views of the fields taken from a high altitude and require further herbage ground-truth collection from the large surfaces they cover. This paper proposes to transfer knowledge learned on ground-level images to raw drone images in an unsupervised manner. To do so, we use unpaired image style translation to enhance the resolution of drone images by a factor of eight and modify them to appear closer to their ground-level counterparts. We then use the enhanced drone images to train a semi-supervised algorithm that uses ground-truthed, ground-level images as the labelled data together with a large amount of unlabelled drone images. We validate our results on a small held-out drone image test set to show the validity of our approach, which opens the way for automated herbage mass and composition estimation from drone imagery.
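The unpaired style-translation step lends itself to a short illustration. Below is a minimal PyTorch sketch of a CycleGAN-style setup in which a generator upsamples a low-resolution drone patch by a factor of eight while restyling it, and a discriminator trained on unpaired ground-level patches supplies the adversarial signal. The network sizes, the average-pooling stand-in for the inverse mapping, and the loss weights are simplifying assumptions for illustration, not the models used in the paper.

```python
# Minimal sketch of unpaired drone-to-ground style translation (CycleGAN-style).
# Model sizes, loss weights, and the cheap average-pooling "inverse" used for the
# consistency term are illustrative assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Upsamples a low-resolution drone patch by 8x and restyles it."""
    def __init__(self):
        super().__init__()
        layers = [nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True)]
        # Three stride-2 transposed convolutions give the 8x resolution increase.
        for _ in range(3):
            layers += [nn.ConvTranspose2d(64, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(64, 3, 3, padding=1), nn.Tanh()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """PatchGAN-style critic deciding whether a patch looks ground-level."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)

G_drone2ground = Generator()           # low-res drone -> ground-level style
D_ground = Discriminator()             # real ground-level vs. translated drone
downsample = nn.AvgPool2d(8)           # cheap inverse mapping for a cycle-style constraint

opt_g = torch.optim.Adam(G_drone2ground.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D_ground.parameters(), lr=2e-4)
adv_loss, cyc_loss = nn.MSELoss(), nn.L1Loss()

drone_batch = torch.rand(4, 3, 32, 32)     # unlabelled drone patches (placeholder data)
ground_batch = torch.rand(4, 3, 256, 256)  # unpaired ground-level patches (placeholder data)

# Generator step: fool the discriminator and stay consistent with the drone input.
fake_ground = G_drone2ground(drone_batch)
pred_fake = D_ground(fake_ground)
loss_g = adv_loss(pred_fake, torch.ones_like(pred_fake)) \
       + 10.0 * cyc_loss(downsample(fake_ground), drone_batch)
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Discriminator step: separate real ground-level images from translated drone images.
pred_real = D_ground(ground_batch)
pred_fake = D_ground(fake_ground.detach())
loss_d = 0.5 * (adv_loss(pred_real, torch.ones_like(pred_real))
              + adv_loss(pred_fake, torch.zeros_like(pred_fake)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```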
ISBN (digital): 9798331528539
ISBN (print): 9798331528546
Emotions play a crucial role in decision-making, moral judgments, and other cognitive processes. The goal of this work is to identify people's facial emotions from face images. Facial expression recognition (FER) is a challenging problem, with high variation within classes and many commonalities between them, and the capabilities of current FER systems remain limited in this regard. In this work, a multi-block Deep Convolutional Neural Network (DCNN) model was proposed to identify facial expressions. Five blocks with different computational components were defined in the multi-block DCNN to extract distinctive characteristics from the input images. A second model (DCNN-SVM) combines the DCNN with an SVM classifier ensemble to improve predictability and stability. A third model (DCNN-MV) is based on majority voting over four classifiers: SVM, KNN, Random Forest, and Decision Tree. The CK+ dataset was utilized and expanded via image data augmentation to enhance model performance and generalization. The accuracy of the proposed DCNN model was investigated by adjusting its hyperparameters. The DCNN model reports an accuracy of 97.71%, while both the DCNN-SVM and DCNN-MV models achieve 97.79%. On the CK+ dataset, the proposed models demonstrate good accuracy in identifying facial expressions and outperform existing models in terms of accuracy.
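As an illustration of the DCNN-MV idea, the sketch below feeds placeholder deep features into a hard-voting ensemble of the four classifiers named above, using scikit-learn. The random feature matrix stands in for embeddings produced by the multi-block DCNN; in the actual pipeline these would be extracted from the augmented CK+ images.

```python
# Minimal sketch of the DCNN-MV idea: deep features feeding a majority-voting ensemble
# of SVM, KNN, Random Forest, and Decision Tree. The random features below stand in for
# embeddings from the multi-block DCNN; the real pipeline would extract them from CK+ images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(980, 256))          # placeholder DCNN embeddings (980 images, 256-d)
y = rng.integers(0, 7, size=980)         # placeholder labels for 7 CK+ expression classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Hard voting = majority vote across the four base classifiers.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(kernel="rbf", C=1.0)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ],
    voting="hard",
)
ensemble.fit(X_tr, y_tr)
print("majority-voting accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```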
To promote a patient-centered, sustainable healthcare ecosystem, this paper explores how blockchain and Federated Learning (FL) might be integrated into the Medical Internet of Things (MIoT). A decentralized architecture for improving data security, privacy, and interoperability between healthcare systems is proposed. The article addresses the drawbacks of centralized Machine Learning (ML) techniques, which frequently jeopardize the privacy of sensitive medical data, by utilizing the collaborative character of FL and the security properties of blockchain. ML models are trained jointly on distributed MIoT data while protecting patient privacy. To predict diseases effectively, the study couples FL with a variety of classifiers, including AdaBoost, Extra Trees, Decision Tree (DT), and Linear Discriminant Analysis (LDA). Extensive experimentation, encompassing feature selection, data splitting, cross-validation, and hyperparameter tuning, supports the effectiveness and confidentiality of the proposed methodology. To assess its efficacy, performance measures such as Accuracy, Balanced Accuracy (BA), Fowlkes-Mallows Index (FM), Matthews Correlation Coefficient (MCC), and Bookmaker Informedness (BI) are calculated. The findings demonstrate improved accuracy compared with conventional techniques. The research represents a novel approach to medical data analysis: the best algorithm is selected using several evaluation metrics and then incorporated into FL.
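One simple way to picture FL with classical classifiers is the sketch below: each simulated MIoT client trains an AdaBoost model on its own partition, and the server aggregates only their predictions by majority vote, so raw data never leaves a client. This is an illustrative scheme under simplifying assumptions; the paper's exact aggregation strategy and blockchain layer are not reproduced here.

```python
# Minimal sketch of federating classical classifiers over MIoT clients: each client trains
# a local model on its own data, and the server aggregates their predictions by voting.
# This is one simple FL-style scheme for illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1500, n_features=20, random_state=0)  # placeholder medical data
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Partition the training data across three simulated MIoT clients; raw data never leaves a client.
client_splits = np.array_split(np.arange(len(X_train)), 3)
local_models = []
for idx in client_splits:
    model = AdaBoostClassifier(n_estimators=100, random_state=0)
    model.fit(X_train[idx], y_train[idx])      # local training only
    local_models.append(model)

# Server-side aggregation: majority vote over the clients' predictions.
votes = np.stack([m.predict(X_test) for m in local_models])
federated_pred = np.round(votes.mean(axis=0)).astype(int)
print("federated ensemble accuracy:", accuracy_score(y_test, federated_pred))
```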
Efficient monitoring of carbon capture and storage (CCS) systems relies heavily on sensor data. However, sensors are susceptible to multiple potential faults, leading to performance degradation and posing a risk of ca...
In this paper, the classification of colon cancer tissues by means of machine learning approaches is evaluated. In today's world, a revolutionary advancement has come in the classification and diagnosis of diseases in the medical and healthcare sectors. Deep learning classifiers and machine learning methods are now broadly applied to accurately diagnose a number of diseases. Cancer is one of the world's most significant causes of death, claiming the life of roughly one person in every six. As per the National Library of Medicine, colorectal cancer is the third most common cancer and a leading cause of cancer death worldwide. Identifying the illness at an early stage increases the chances of survival. Automated diagnosis and classification of tissues from images can be completed much more quickly with the use of artificial intelligence. A publicly available dataset, CRC-VAL-HE-7K, consisting of 7180 images distributed among nine types of colorectal tissue (background, lymphocytes, adipose, mucus, colorectal adenocarcinoma epithelium, normal colon mucosa, debris, cancer-associated stroma, and smooth muscle), is used after preprocessing. Feature extraction is performed by applying the Differential Box-Counting method to all image blocks. The extracted features are evaluated with the following Machine Learning (ML) methods: K-Nearest Neighbor, Support Vector Machine, Decision Tree, Random Forest, Extreme Gradient Boosting, and Gaussian Naive Bayes. Results show that Extreme Gradient Boosting delivers the best performance and is the most viable approach.
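The feature-extraction and classification stages can be sketched as follows: a Differential Box-Counting estimate of fractal dimension is computed per image block, the block-wise values are concatenated into a feature vector, and an XGBoost classifier is trained on top. The synthetic patches, block size, and box scales below are illustrative assumptions rather than the paper's exact settings.

```python
# Sketch of Differential Box-Counting (DBC) fractal features per image block, followed by
# an XGBoost classifier. Synthetic data, block size, and box scales are illustrative.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def dbc_fractal_dimension(block, scales=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a grayscale block via differential box counting."""
    M = block.shape[0]
    G = 256                                   # number of grey levels
    log_counts, log_inv_r = [], []
    for s in scales:
        h = max(1.0, s * G / M)               # box height in the intensity direction
        count = 0
        for i in range(0, M, s):
            for j in range(0, M, s):
                cell = block[i:i + s, j:j + s]
                l = int(np.floor(cell.max() / h))
                k = int(np.floor(cell.min() / h))
                count += l - k + 1            # boxes covering this cell's intensity range
        log_counts.append(np.log(count))
        log_inv_r.append(np.log(M / s))
    # Fractal dimension = slope of log(N_r) versus log(1/r).
    return np.polyfit(log_inv_r, log_counts, 1)[0]

def image_features(img, block=56):
    """Concatenate DBC dimensions of non-overlapping blocks into one feature vector."""
    feats = [dbc_fractal_dimension(img[i:i + block, j:j + block])
             for i in range(0, img.shape[0], block)
             for j in range(0, img.shape[1], block)]
    return np.array(feats)

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(200, 112, 112)).astype(float)  # placeholder tissue patches
labels = rng.integers(0, 9, size=200)                              # placeholder 9-class labels

X = np.stack([image_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("XGBoost accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```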
ISBN (digital): 9798350356816
ISBN (print): 9798350356823
Skin pathologies encompass a spectrum of conditions, with malignancies such as melanoma representing a critical diagnostic urgency. This investigation delineates the deployment of Convolutional Neural Networks (CNNs) for the classification of dermatological anomalies, benchmarking CNN diagnostic fidelity against dermatological expert evaluations. The study underscores the efficacy of CNN classifiers in expediting the diagnostic workflow for various cutaneous disorders. Advocating for an automated diagnostic framework, the research introduces a CNN-based system aimed at reducing the human diagnostic load, accelerating diagnostic timelines, and enhancing survival outcomes. Utilizing advanced image processing algorithms and deep learning architectures, the research presents an automated classification system for skin pathologies, addressing both benign and malignant presentations. The classification scheme covers nine dermatological conditions: actinic keratosis, basal cell carcinoma, benign keratosis, dermatofibroma, melanoma, nevus, seborrheic keratosis, squamous cell carcinoma, and vascular lesions. The objective is to engineer a CNN model with robust diagnostic performance across a diverse lesion dataset, ensuring accurate identification and categorization of dermatological conditions.
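A minimal CNN classifier for the nine lesion categories listed above might look like the PyTorch sketch below. The architecture, input resolution, and single training step on placeholder tensors are illustrative assumptions, not the model evaluated in this study.

```python
# Minimal sketch of a CNN classifier for the nine lesion categories listed above.
# The architecture, input size, and training step are illustrative assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 9  # actinic keratosis, basal cell carcinoma, ..., vascular lesions

class LesionCNN(nn.Module):
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = LesionCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on placeholder data standing in for dermoscopic images.
images = torch.rand(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
logits = model(images)
loss = criterion(logits, labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()
print("predicted classes:", logits.argmax(dim=1).tolist())
```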
Exploratory data analysis and visualization are essential tools for discovering patterns, highlighting anomalies, and extracting insights from datasets. In addition, visualizations tend to present information in a way that is more accessible for the audience to understand. Underprivileged communities need an easier way to understand health issues, challenges, and threats, such as how to avoid diseases. Exploratory data analysis and visualizations of diabetes datasets can serve as a means to make this topic easier to understand for underprivileged communities. A methodology was developed to explore a dataset of diabetes patients. The dataset was obtained first-hand from a Palestinian community medical center. It was then prepared and cleaned, with all rows containing missing or invalid values removed, and an exploratory data analysis with a large number of data visualizations was produced. Finally, a sample of the produced visualizations was used in a social acceptance experiment to educate an underprivileged population in the Palestinian community about the risks of diabetes. The results of the experiment were very promising: participants understood the content much more easily and provided very positive feedback on utilizing visualizations of diabetes datasets for public awareness and good.
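The cleaning and visualization workflow can be sketched with pandas and matplotlib as below. The file name and column names (glucose, bmi, age, diabetic) are hypothetical placeholders rather than the actual fields of the clinic's dataset.

```python
# Minimal sketch of the cleaning and visualization steps described above.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("diabetes_patients.csv")          # hypothetical first-hand dataset

# Cleaning: drop rows with missing values or physiologically impossible readings.
df = df.dropna()
df = df[(df["glucose"] > 0) & (df["bmi"] > 0) & (df["age"] > 0)]

# A few exploratory visualizations of the kind used in the awareness experiment.
fig, axes = plt.subplots(1, 3, figsize=(15, 4))
df["glucose"].plot.hist(bins=30, ax=axes[0], title="Glucose distribution")
df.boxplot(column="bmi", by="diabetic", ax=axes[1])
axes[1].set_title("BMI by diabetes status")
df.plot.scatter(x="age", y="glucose", c="diabetic", colormap="coolwarm", ax=axes[2],
                title="Age vs. glucose")
plt.tight_layout()
plt.savefig("diabetes_eda.png")
```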
Automated radiotherapy treatment planning aims to improve treatment accuracy and efficiency. However, the prevalent Knowledge-Based Planning (KBP) method faces issues such as lengthy manual problem formulation and challenges in accurately modeling human anatomy and processing high-dimensional data. This work focuses on treatment plan optimization and considers deep learning as a potential solution. Deep learning algorithms, by virtue of their ability to learn from large quantities of data and model complex relationships, can automate the formulation of the optimization problem in KBP, saving significant time and effort. Despite these advantages, however, the current convolution-based encoder-decoder models used for radiotherapy treatment plan optimization have a limited capability to capture long-range dependencies between distant voxels. This work introduces ‘DE-ConvGraph 3D UNet’, a novel deep learning model, to address these limitations and optimize radiotherapy treatment plans for oropharyngeal cancer. The proposed model includes a Graph Convolutional Network (GCN) component to capture long-range dependencies between distant voxels. Furthermore, its dual-encoder structure combines the strengths of the GCN and a 3D Convolutional U-Net, enabling the capture of both global relationships and local patterns in a 3D patient volume. Experiments were conducted using the benchmark dataset OpenKBP-Opt. The proposed model shows improved performance compared with state-of-the-art U-Net variants in terms of mean squared voxel-wise error, dose-volume histogram points, and clinical criteria satisfaction.
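The dual-encoder idea can be illustrated with a compact PyTorch sketch: a 3D convolutional branch captures local patterns, a simple graph-convolution branch over a coarsened voxel grid passes messages over long ranges, and the two are fused before a dose-prediction head. The layer sizes, graph construction, and fusion scheme are assumptions for illustration and do not reproduce the DE-ConvGraph 3D UNet architecture itself.

```python
# Illustrative dual-encoder sketch: 3D conv branch (local) + simple graph-conv branch
# over coarsened voxels (long-range), fused before predicting a dose volume.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph convolution: normalized adjacency @ node features @ weights."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        return torch.relu(self.linear((adj @ x) / deg))

class DualEncoderDosePredictor(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv_enc = nn.Sequential(              # local 3D convolutional encoder
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool3d(8)         # coarsen to an 8x8x8 grid of graph nodes
        self.gcn = SimpleGCNLayer(16, 16)
        self.decoder = nn.Conv3d(32, 1, 3, padding=1)  # fuse both branches, predict dose

    def forward(self, ct, adj):
        local_feats = self.conv_enc(ct)                             # (B, 16, D, H, W)
        nodes = self.pool(local_feats).flatten(2).transpose(1, 2)   # (B, 512, 16) graph nodes
        global_feats = self.gcn(nodes, adj)                         # long-range message passing
        global_vol = global_feats.transpose(1, 2).reshape(ct.size(0), 16, 8, 8, 8)
        global_vol = nn.functional.interpolate(global_vol, size=ct.shape[2:], mode="trilinear")
        return self.decoder(torch.cat([local_feats, global_vol], dim=1))

ct = torch.rand(1, 1, 32, 32, 32)                    # placeholder CT volume
adj = (torch.rand(1, 512, 512) > 0.99).float()       # placeholder voxel-node adjacency
adj = adj + torch.eye(512)                           # add self-loops
dose = DualEncoderDosePredictor()(ct, adj)
print(dose.shape)                                    # torch.Size([1, 1, 32, 32, 32])
```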
Transformer models exploit multi-head self-attention to capture long-range information. Further, window-based self-attention avoids the quadratic computational complexity of full attention and provides a new solution for dense prediction on 3D images. However, Transformers miss structural information due to their naive tokenization scheme, and single-scale attention fails to achieve a balance between feature representation and semantic information. Aiming at these problems, we propose a window-based dynamic cross-scale cross-attention transformer (DCS-Former) for precise representation of diverse features. DCS-Former first constructs dual compound feature representations through an Ante-hoc Structure-aware Module and a Post-hoc Class-aware Module. Then, a bidirectional attention structure is designed to interactively fuse structural features with class representations. Experimental results show that our method outperforms various competing segmentation models on three different public datasets.
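Window-based self-attention, the building block referred to above, can be illustrated with the short PyTorch sketch below: tokens attend only within local 3D windows, avoiding the quadratic cost of attending over the whole volume. This shows the windowing mechanism only; DCS-Former's cross-scale cross-attention and structure/class-aware modules are not reproduced here.

```python
# Minimal sketch of window-based self-attention on a 3D feature volume: tokens attend only
# within local windows, avoiding the quadratic cost of full-volume attention.
import torch
import torch.nn as nn

def window_self_attention(x, window=4, heads=4):
    """x: (B, C, D, H, W) volume; attention is computed inside each window**3 block."""
    B, C, D, H, W = x.shape
    attn = nn.MultiheadAttention(embed_dim=C, num_heads=heads, batch_first=True)
    # Partition the volume into non-overlapping windows of window**3 tokens each.
    x = x.reshape(B, C, D // window, window, H // window, window, W // window, window)
    x = x.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, window ** 3, C)  # (num_windows*B, tokens, C)
    out, _ = attn(x, x, x)                                             # self-attention per window
    # Reverse the partitioning back to the original volume layout.
    out = out.reshape(B, D // window, H // window, W // window, window, window, window, C)
    return out.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)

feats = torch.rand(1, 32, 16, 16, 16)      # placeholder 3D feature map (C=32)
print(window_self_attention(feats).shape)  # torch.Size([1, 32, 16, 16, 16])
```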