Lung cancer is a dangerous disease whose treatment plans differ based on the type and location of the cancerous cells. The overall 5-year survival rate across all stages of lung cancer is around 15%. People who smoke are at the highest risk of developing lung cancer. Early detection of lung cancer is crucial for starting treatment early and preventing the disease from spreading, and it can therefore improve people's chances of survival. Imaging tests, such as a chest computed tomography (CT) scan, can detect lung cancer by providing a more detailed picture. However, the examination of chest CT scans is a challenging task and is prone to inter-subject variability. For this reason, researchers have developed many computer-aided diagnostic (CAD) systems for the automatic detection of cancer from CT scan images. Misdiagnoses can occur in the manual interpretation of images; an automated neural network trained on images of healthy and malignant lung cells helps reduce this problem. Convolutional neural network (CNN)-based pretrained deep learning models have been used successfully to detect lung cancer. High classification accuracy is essential to avoid false predictions. This research presents a metalearning-based approach for identifying the common types of lung cancer tissue, namely benign tissue, squamous cell carcinoma, and adenocarcinoma, using the LC25000 dataset. All the experiments have been conducted on this publicly available benchmark dataset of lung histopathological images. The features extracted from the penultimate layer (global average pooling) of the transfer-learning-based CNN models, namely InceptionResNetV1, EfficientNetB7, and DenseNet121, have been fused together, and dimensionality reduction has been applied to them before passing them to the metaclassifier, which is a Support Vector Machine (SVM) classifier in our case. A quantitative analysis of the proposed algorithm has been conducted through classification accuracy and confusion matrix computation. When compared wit...
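The fuse-then-reduce step described in this abstract can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' code: the toy feature values are assumptions, the variance-based reducer stands in for the paper's dimensionality reduction, and a real pipeline would pass the reduced vectors to an SVM metaclassifier.

```python
# Late fusion of per-backbone feature vectors, then dimensionality reduction.
# Illustrative sketch only; feature values below are made up.

def fuse_features(*feature_vectors):
    """Concatenate penultimate-layer features from several CNN backbones."""
    fused = []
    for vec in feature_vectors:
        fused.extend(vec)
    return fused

def reduce_by_variance(samples, k):
    """Keep the k feature columns with the highest variance across samples
    (a simple stand-in for PCA-style reduction)."""
    n = len(samples)
    dims = len(samples[0])
    variances = []
    for j in range(dims):
        col = [s[j] for s in samples]
        mean = sum(col) / n
        variances.append(sum((x - mean) ** 2 for x in col) / n)
    keep = sorted(range(dims), key=lambda j: variances[j], reverse=True)[:k]
    keep.sort()
    return [[s[j] for j in keep] for s in samples]

# Toy vectors standing in for InceptionResNet/EfficientNet/DenseNet outputs.
a = fuse_features([0.1, 0.9], [0.5], [0.2, 0.8])
b = fuse_features([0.1, 0.1], [0.5], [0.9, 0.8])
reduced = reduce_by_variance([a, b], k=2)
print(len(a), len(reduced[0]))  # 5 fused dims -> 2 retained dims
```

The constant columns (which carry no class information in this toy example) are discarded, and only the discriminative dimensions reach the metaclassifier.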
Parkinson's disease (PD) is one of the most prevalent and harmful neurodegenerative conditions. Even today, PD diagnosis and monitoring remain costly and inconvenient processes. With the unprecedented progress of arti...
Over the years, numerous optimization problems have been addressed using meta-heuristic algorithms, and there have been continuing initiatives to create and develop new, practical algorithms. This work proposes a nov...
Suicide is a significant public health issue that devastates individuals and society. Early warning systems are crucial in preventing suicide. The purpose of this research is to create a deep learning model to identif...
Over the past few years, the application and usage of Machine Learning (ML) techniques have increased exponentially due to the continuously increasing size of data and computing resources. Despite the popularity of ML techniques, only a few research studies have focused on the application of ML, especially supervised learning techniques, in Requirements Engineering (RE) activities to solve the problems that occur in the RE process. The authors focus on a systematic mapping of past work to investigate the studies that applied supervised learning techniques to RE activities in the period from 2002 to ... The authors aim to investigate the research trends, main RE activities, ML algorithms, and data sources that were studied during this period. ...-five research studies were selected based on our exclusion and inclusion criteria. The results show that the scientific community used 57 ML algorithms. Of those algorithms, researchers mostly used the following five in RE activities: Decision Tree, Support Vector Machine, Naïve Bayes, K-Nearest Neighbour Classifier, and Random Forest. The results also show that researchers used these algorithms in eight major RE activities: requirements analysis, failure prediction, effort estimation, quality, traceability, business rules identification, content classification, and detection of problems in requirements written in natural language. The selected research studies used 32 private and 41 public data sources. The most popular data sources detected in the selected studies are the Metric Data Programme from NASA, Predictor Models in Software Engineering, and the iTrust Electronic Health Care System.
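To make the surveyed use of supervised learning in RE concrete, here is a deliberately minimal sketch of one of the mapped tasks (requirements classification). It is illustrative only: the keyword lists and example sentences are assumptions, and the surveyed studies use trained classifiers such as Decision Trees or SVMs rather than hand-written rules.

```python
# Toy rule-based stand-in for a requirements classifier of the kind surveyed:
# tag a requirement as non-functional ("NF") if it mentions a quality
# attribute, else functional ("F"). Keyword list is an assumption.

NF_CUES = {"performance", "secure", "security", "usability", "available",
           "reliability", "response", "latency"}

def classify_requirement(sentence):
    """Label one requirement sentence as functional or non-functional."""
    tokens = {t.strip(".,").lower() for t in sentence.split()}
    return "NF" if tokens & NF_CUES else "F"

print(classify_requirement("The system shall respond with low latency."))   # NF
print(classify_requirement("The user shall be able to export reports."))    # F
```

A supervised variant would learn such cues from labelled requirements instead of hard-coding them, which is precisely what the mapped studies investigate.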
In the contemporary landscape, autonomous vehicles (AVs) have emerged as a prominent technological advancement globally. Despite their widespread adoption, significant hurdles remain, with security standing out as a c...
The Internet of Things (IoT) is a form of Internet-based distributed computing that allows devices and their services to interact and execute tasks for each other. Consequently, the footprint of the IoT is increasing ...
The proliferation of misleading data such as fake news and phony reviews on news blogs, online publications, and internet business apps has been aided by the availability of the web, cell phones, and social media. Users can quickly fabricate comments and news on social media. The most difficult challenge is determining which news is real or fake. Hence, tracking down automated techniques to recognize fake news online is essential. With an emphasis on false news, this study presents the evolution of artificial intelligence techniques for detecting spurious social media content. The study shows past, current, and possible future methods for fake news detection. ... different publicly available datasets containing political news are utilized for performing experiments. Supervised learning algorithms are used, and their results show that the conventional Machine Learning (ML) algorithms that were used in the past perform better on shorter text. In contrast, the currently used Recurrent Neural Network (RNN) and transformer-based algorithms perform better on longer text. Finally, a brief comparison of all these techniques is provided, and it is concluded that transformers have the potential to revolutionize Natural Language Processing (NLP) methods in the near future.
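The "conventional ML" baseline this abstract contrasts with transformers can be sketched as a bag-of-words classifier. The following is an illustrative toy, not the study's setup: the headlines and labels are invented, and a real experiment would use TF-IDF features with, e.g., Naïve Bayes or an SVM on the political-news datasets mentioned above.

```python
# Minimal bag-of-words centroid classifier as a stand-in for the surveyed
# conventional ML baselines. Training headlines below are made up.

from collections import Counter

def train_centroids(labelled_docs):
    """Sum word counts per class to get one centroid Counter per label."""
    centroids = {}
    for text, label in labelled_docs:
        centroids.setdefault(label, Counter()).update(text.lower().split())
    return centroids

def predict(text, centroids):
    """Pick the label whose centroid shares the most word mass with the text."""
    words = Counter(text.lower().split())
    def overlap(centroid):
        return sum(min(words[w], centroid[w]) for w in words)
    return max(centroids, key=lambda lbl: overlap(centroids[lbl]))

docs = [("shocking miracle cure doctors hate", "fake"),
        ("senate passes budget bill after debate", "real"),
        ("celebrity secret exposed shocking photos", "fake"),
        ("court rules on budget case", "real")]
model = train_centroids(docs)
print(predict("shocking secret cure", model))  # fake
```

Such word-overlap models work tolerably on short statements but ignore word order and long-range context, which is why RNNs and transformers pull ahead on longer texts.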
In recent years, it has been evident that the internet is the most effective means of transmitting information in the form of documents, photographs, or videos around the world. The purpose of an image compression method is to encode a picture with fewer bits while retaining the decompressed image's visual quality. During transmission, this massive data necessitates a lot of channel capacity. In order to overcome this problem, an effective visual compression approach is required to resize this large amount of data. This work is based on lossy image compression and is offered for static color images. The quantization procedure determines the compressed data quality. The images are converted from RGB to the International Commission on Illumination CIE La*b* and YCbCr color spaces before being compressed. In the transform domain, the color planes are encoded using the proposed quantization matrix. To improve the efficiency and quality of the compressed image, the standard quantization matrix is updated with the respective image data. We used seven discrete orthogonal transforms, including five variations of the Complex Hadamard Transform, the Discrete Fourier Transform, and the Discrete Cosine Transform, as well as thresholding, quantization, de-quantization, and inverse discrete orthogonal transforms with CIE La*b* and YCbCr to RGB conversion. Peak signal to noise ratio, signal to noise ratio, picture similarity index, and compression ratio are all used to assess the quality of the compressed images. For the relevant transforms, the image size and bits per pixel are also calculated. In the (n, n) block of the transform, adaptive scanning is used to acquire the best feasible compression ratio. Because of these characteristics, multimedia systems and services have a wide range of possible applications.
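The transform / quantize / de-quantize / inverse-transform pipeline described above can be sketched on a single 1-D block. This is an illustrative sketch under simplifying assumptions: the paper works on 2-D image blocks in several transform domains with image-adapted quantization matrices, whereas the toy below uses a 1-D orthonormal DCT and a single uniform quantization step.

```python
# Toy lossy pipeline: DCT -> quantize -> de-quantize -> inverse DCT.
# Pixel values below are made up; a real codec operates on 2-D blocks.

import math

def dct(block):
    """Orthonormal DCT-II of a 1-D block."""
    n = len(block)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(block))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(coeffs)
    out = []
    for i in range(n):
        s = coeffs[0] * math.sqrt(1 / n)
        s += sum(coeffs[k] * math.sqrt(2 / n) *
                 math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                 for k in range(1, n))
        out.append(s)
    return out

def quantize(coeffs, step):
    """Lossy step: small coefficients collapse toward zero."""
    return [round(c / step) for c in coeffs]

def dequantize(qcoeffs, step):
    return [q * step for q in qcoeffs]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]
q = quantize(dct(pixels), step=10)
restored = idct(dequantize(q, step=10))
print([round(p) for p in restored])
```

Without quantization the round trip is exact (the transform is orthonormal); all of the loss, and all of the compression, comes from the quantization step, which is why the paper's adaptive quantization matrix drives the quality/ratio trade-off.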
Software trustworthiness is an essential criterion for evaluating software quality. In component-based software, different components play different roles, and different users give different grades of trustworthiness after using the software. These two elements both affect the trustworthiness of the software. When software quality is evaluated comprehensively, it is necessary to consider the weights of components and user feedback. According to the different construction of components, different trustworthiness measurement models are established based on the weights of components and user feedback. Algorithms for these trustworthiness measurement models are designed to obtain the corresponding trustworthiness measurement values automatically. The feasibility of these trustworthiness measurement models is demonstrated by a train ticket purchase system.
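A measure combining component weights with user feedback can be sketched as follows. This is a minimal sketch in the spirit of the models above, not the paper's exact formulas: the blending parameter, the weighting scheme, and the sample values are all assumptions.

```python
# Blend component-weighted trust with the mean user-feedback grade.
# The linear blend and all sample values are illustrative assumptions.

def software_trustworthiness(components, feedback_grades, alpha=0.7):
    """components: (trust_value, weight) pairs, weights summing to 1.
    feedback_grades: user grades on the same scale as trust values.
    alpha: share of the final score taken from the component view."""
    component_score = sum(t * w for t, w in components)
    feedback_score = sum(feedback_grades) / len(feedback_grades)
    return alpha * component_score + (1 - alpha) * feedback_score

# E.g. a ticket-purchase system with the payment component weighted highest.
comps = [(0.9, 0.5), (0.8, 0.3), (0.6, 0.2)]   # (trust, weight) per component
grades = [0.8, 0.9, 0.7]                       # user feedback grades
print(round(software_trustworthiness(comps, grades), 3))
```

The weights let a critical component (here, payment) dominate the component view, while alpha controls how much user experience can pull the overall value up or down.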