ISBN (Digital): 9798350359688
ISBN (Print): 9798350359695
Brain strokes, characterized by sudden interruptions in cerebral blood flow, pose a significant health concern, especially in children, where detection is intrinsically challenging due to limitations in existing methodologies. Recent attempts leveraging machine learning and deep learning techniques have grappled with accuracy and efficiency shortcomings in addressing this critical issue. In response, our model integrates the Extreme Learning Machine (ELM), Convolutional Neural Network (CNN), and Generative Adversarial Network (GAN) algorithms. Rigorously evaluated on a comprehensive dataset, our ELM + CNN + GAN model demonstrates superior accuracy, precision, and recall, surpassing benchmarks set by conventional approaches such as the Random Forest algorithm and the AlexNet + SVM hybrid model. Notably, the model excels in diagnostic speed, offering a promising avenue for swift and efficient early detection of pediatric brain strokes. The strategic incorporation of ELM, recognized for its efficiency in single-layer feedforward networks, along with the analytical power of CNNs for image analysis and GANs for synthetic data generation, establishes a robust and comprehensive model. By outperforming existing methods, our approach represents a substantial step forward in addressing this critical health concern, improving stroke detection in both accuracy and speed.
The trilemma trade-off problem between decentralisation, scalability, and security states that in blockchain systems these three properties are negatively correlated. Infrastructure, node configuration, choice of Consen...
Because of mask wearing, COVID-19 has given faces a new identity. Detecting masked faces accurately and efficiently is becoming increasingly important. Face mask recognition is a tough task because of the high occlusions that cause t...
ISBN (Digital): 9798350374629
ISBN (Print): 9798350374636
Machine learning is a powerful technology that enables computer systems to recognize patterns and make predictions and choices without explicit programming. The basic notion behind this technology is to create models and algorithms that can take in input information, predict an outcome using statistical computation, and adjust outputs in response to fresh information. In this research paper, we examine Big Mart shopping-centre scenarios in order to forecast sales of various product categories and understand the impact of various factors on those sales. This paper focuses on leveraging machine learning techniques to predict sales for Big Mart with the aim of optimizing inventory management and maximizing revenue. Using a variety of characteristics from a dataset gathered by Big Mart, along with the methodology used to construct a prediction model, highly accurate results are produced, and these findings can be utilized to make decisions aimed at increasing sales and improving operational efficiency.
ISBN (Digital): 9798331505462
ISBN (Print): 9798331505479
Poverty in India is a complex issue with multiple causes. This paper mainly focuses on three of the main causes of poverty: education, unemployment, and the Consumer Price Index. This study makes use of data received from the National Informatics Centre (NIC) of India to predict poverty across the various states of India. By applying preprocessing techniques and employing the HPCC Systems Visualization bundle, the research shows that Bihar faces a higher level of poverty, whereas Mizoram exhibits the converse trend. Subsequently, the HPCC ML_Core bundle is applied to conduct linear regression analysis for predicting poverty scores. This predictive model demonstrated an accuracy of 97%. Lastly, the runtime performance of a single-node cluster is compared against an 8-node cluster, which clearly shows that the 8-node cluster has an advantage for similar computational tasks and is thus well suited to them.
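The linear-regression step above runs on HPCC Systems ML_Core (written in ECL); a Python sketch of the same idea on synthetic data makes the fitting and goodness-of-fit computation concrete. The feature names and coefficients are hypothetical, and R² here stands in for whatever accuracy measure the paper reports:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical state-level features: [literacy_rate, unemployment_rate, cpi]
n = 100
X = rng.uniform(0, 1, size=(n, 3))
# Synthetic poverty score: higher literacy lowers it, the others raise it
poverty = 60 - 35 * X[:, 0] + 20 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 1, n)

# Linear regression via least squares with an intercept column
A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, poverty, rcond=None)
pred = A @ coef

# Coefficient of determination (R^2) as a goodness-of-fit measure
ss_res = np.sum((poverty - pred) ** 2)
ss_tot = np.sum((poverty - poverty.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

On an HPCC cluster the same regression is distributed across nodes, which is where the 8-node runtime advantage reported above comes from.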
ISBN (Digital): 9798350307795
ISBN (Print): 9798350307801
Text summarization is the process of shortening a large body of text in a document into a shorter form while preserving its most important information. With the exponential growth of digital data, there is a need for automated text summarization to help users quickly comprehend content. The use of machine learning techniques, particularly deep learning models, has shown great promise in generating high-quality summaries. Among these models, BERT (Bidirectional Encoder Representations from Transformers) has emerged as a state-of-the-art method for various natural language processing (NLP) tasks, including text summarization. In this paper, we propose to implement a text summarization system using the BERT model. Apart from standard text summarization, we will implement a language simplification model that simplifies complex sentence structures or uses simpler vocabulary while preserving the original meaning. This model transforms convoluted or verbose sentences into shorter, more straightforward constructions. The paper aims to develop an end-to-end summarization system that takes a long document as input and generates a summary of the most important information it contains; for example, taking an entire news article from an online portal and generating its summary.
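The pipeline above — split a document into sentences, score them, and keep the top ones — can be shown with a deliberately simplified extractive summarizer. The paper's system scores sentences with BERT representations; plain word-frequency scoring stands in for that step here so the overall structure is visible without model weights:

```python
import re
from collections import Counter

def summarize(text, n_sentences=2):
    # Split into sentences at terminal punctuation
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Document-wide word frequencies (BERT embeddings would replace this)
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(s):
        toks = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    # Keep the top-scoring sentences, in their original document order
    ranked = sorted(sentences, key=score, reverse=True)[:n_sentences]
    return " ".join(s for s in sentences if s in ranked)

doc = ("Text summarization shortens a document. "
       "Summarization keeps the important information. "
       "The weather was pleasant yesterday.")
summary = summarize(doc, n_sentences=2)
```

Swapping the frequency score for a BERT-based sentence relevance score turns this sketch into the extractive setup the abstract describes; abstractive generation and the simplification model would require a sequence-to-sequence component on top.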
In the current era of chatbots, this research delves into the advancements in AI chatbots, drawing on artificial intelligence (AI) and natural language processing (NLP) techniques to mimic human-like conversations. A ...
The classification of raw robusta coffee beans is a pivotal process with profound implications for the coffee industry. Recognizing the critical significance of this classification, this research endeavors to establis...
The early and accurate diagnosis of glaucoma, a primary cause of permanent blindness, is critical for efficient treatment and prevention of vision loss. Although the exact causes of glaucoma are not yet fully understood, it is thought to result from several factors, including raised pressure inside the eye and decreased blood supply to the optic nerve. We have developed a convolutional neural network model for accurate detection of glaucoma. Methods based on deep learning have been effective at classifying diseases in retinal fundus images, facilitating the evaluation of the growing number of images. The goal of this work is to create and train a unique deep CNN model that makes use of the connections between related eye-fundus tasks and metrics used to identify glaucoma. We have meticulously selected two distinct datasets to underpin this research endeavor: the ACRIMA dataset and the LAG dataset. Notably, our model attains a remarkable accuracy score of 99.29% on the ACRIMA dataset and an equally commendable accuracy score of 97.22% on the LAG dataset. This performance eclipses that of the majority of contemporary deep CNN models, underscoring the prowess and sophistication of our approach.
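The core operation of the CNNs used throughout work like this is the 2-D convolution, which slides a small learned filter over the image to produce a feature map. A NumPy sketch of one such layer on a toy patch (the paper's actual deep architecture for ACRIMA/LAG fundus images is not reproduced here):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, as used in CNN forward passes."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            # Dot product of the filter with each image window
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy patch with a vertical step edge between columns 2 and 3
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0]])     # horizontal-difference (edge) filter
feature_map = np.maximum(conv2d(image, kernel), 0)  # ReLU activation
```

In a trained network the filter values are learned rather than hand-set, and many such layers are stacked with pooling so that features like the optic-disc boundary can be detected regardless of position.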
One of the most challenging tasks in image processing is image segmentation. To identify the objects of interest in an image, we segment the image into different parts and extract the interesting objects. Previous studies have addressed various techniques for image segmentation, such as clustering, thresholding, watershed, and neural networks. However, a segmentation technique alone cannot achieve better segmentation results. Much research is being carried out on combining a segmentation technique with either an optimization algorithm or an entropy measure to enhance the quality of the segmented results. So, there is a need to study various segmentation techniques combined with optimization algorithms or entropy measures. Recent surveys focused only on segmentation techniques, entropy measures, or optimization algorithms in isolation rather than on these combinations. This paper presents various works based on optimization techniques and entropy measures combined with clustering and thresholding techniques for image segmentation. From the study, it is observed that most of the work combines thresholding-based image segmentation techniques with optimization algorithms and entropy measures.
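One concrete instance of the thresholding + entropy combination this survey covers is Kapur's method: choose the threshold that maximizes the summed entropies of the foreground and background histogram classes. The sketch below uses exhaustive search over thresholds where the surveyed works typically plug in a metaheuristic optimizer; the synthetic bimodal image is for illustration only:

```python
import numpy as np

def kapur_threshold(image, levels=256):
    """Entropy-maximizing threshold (Kapur's method) via exhaustive search."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        # Class-conditional distributions (zero bins dropped for log)
        q0 = p[:t][p[:t] > 0] / p0
        q1 = p[t:][p[t:] > 0] / p1
        # Sum of the two class entropies -- the quantity Kapur maximizes
        h = -np.sum(q0 * np.log(q0)) - np.sum(q1 * np.log(q1))
        if h > best_h:
            best_h, best_t = h, t
    return best_t

# Synthetic bimodal image: dark background (~40), bright object (~200)
rng = np.random.default_rng(3)
img = np.full((64, 64), 40, dtype=np.int64)
img[16:48, 16:48] = 200
img = np.clip(img + rng.integers(-10, 11, img.shape), 0, 255)
t = kapur_threshold(img)
mask = img > t          # segmented foreground
```

Replacing the `for t in range(...)` scan with, say, particle swarm or a genetic algorithm over candidate thresholds is exactly the thresholding-plus-optimization pairing the survey finds most common, and it matters once multilevel thresholding makes exhaustive search too expensive.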