In this paper, a new approach for mining image association rules is presented, which involves a fine-tuned CNN model together with the proposed FIAR and OFIAR algorithms. Initially, the image transactional database is...
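The FIAR and OFIAR algorithms themselves are not shown in the excerpt above, but the core of any image association-rule miner is a frequent-itemset pass over an image transactional database. Below is a minimal, dependency-free Apriori-style sketch, assuming each transaction is the set of labels detected in one image; the names and toy database are illustrative, not taken from the paper.

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """Level-wise (Apriori-style) frequent-itemset mining.

    transactions: list of sets of image labels (one set per image).
    min_support:  minimum fraction of transactions an itemset must appear in.
    """
    n = len(transactions)
    current = [frozenset([i]) for i in {item for t in transactions for item in t}]
    result, k = {}, 1
    while current:
        counts = {c: sum(1 for t in transactions if c <= t) for c in current}
        frequent = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        result.update(frequent)
        # join frequent k-itemsets into candidate (k + 1)-itemsets
        current = list({a | b for a, b in combinations(frequent, 2)
                        if len(a | b) == k + 1})
        k += 1
    return result

# Toy image transactional database: labels detected in each image
db = [{"sky", "tree"}, {"sky", "road"}, {"sky", "tree", "road"}, {"tree", "road"}]
print(frequent_itemsets(db, min_support=0.5))
```

Association rules such as {sky} → {tree} would then be scored from these support values.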
The agriculture industry's production and food quality have been impacted by plant leaf diseases in recent years. Hence, it is vital to have a system that can automatically identify and diagnose diseases at an ini...
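The abstract is truncated before it describes the detection system, so the following is only a generic transfer-learning sketch of how leaf-disease classifiers are commonly built: a pretrained ResNet-18 backbone (an assumption, not the paper's architecture) is frozen and a new classification head is fine-tuned.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # assumption: the abstract does not state the number of disease classes

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                                # freeze the backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)    # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One training step on a dummy batch standing in for real leaf images
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print(f"step loss: {loss.item():.4f}")
```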
An information system stores outside data in the backend database to process them efficiently and protects sensitive data from illegitimate flow or unauthorised users. However, most information systems are made in suc...
Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favourable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation that covers only a small region of an image, whereas Vision Transformers (ViT) use self-attention, a "global" operation that gathers information from the entire image. As a result, a ViT can capture distant semantic relationships within an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. We trained and tested the ViT model on the IEEE fundus image dataset (1750 healthy and glaucoma images) and the LAG fundus image dataset (4800 healthy and glaucoma images). During preprocessing, the datasets underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization. The results demonstrated that preprocessing the datasets and varying the optimizer improved accuracy and other performance metrics. In particular, the Nadam optimizer raised accuracy to 97.8% on the adaptively equalized IEEE dataset and to 92% on the adaptively equalized LAG dataset, followed in both cases by the auto-rotation and image-resizing pipelines. In addition to integrating our Vision Transformer with the shift-tokenization model, we combined the ViT with a hybrid model consisting of six classifiers (SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest), using whichever optimizer was most successful for each dataset. Empirical results show that the SVM model performed well, improving accuracy by up to 93% with precision of up to 94% in the adaptive equalization preprocess...
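As a rough illustration of the hybrid pipeline described above, the sketch below chains adaptive equalization, a ViT feature extractor, and an SVM. It uses torchvision's vit_b_16 and scikit-learn's SVC as stand-ins; the paper's actual model, shift tokenization, and dataset loading are not reproduced, and all data here is synthetic.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from skimage import exposure
from sklearn.svm import SVC

# 1) adaptive-equalization preprocessing, 2) ViT as a feature extractor,
# 3) an SVM trained on the extracted embeddings (one member of the hybrid).
vit = models.vit_b_16(weights=models.ViT_B_16_Weights.DEFAULT)
vit.heads = nn.Identity()             # drop the classifier head -> 768-d features
vit.eval()

def preprocess(img):
    """img: HxWx3 float array in [0, 1] -> 1x3x224x224 tensor."""
    img = exposure.equalize_adapthist(img)   # adaptive (CLAHE-style) equalization
    return torch.from_numpy(img).permute(2, 0, 1).float().unsqueeze(0)

# Synthetic stand-ins for fundus images; real code would load the IEEE/LAG sets.
rng = np.random.default_rng(0)
X = [rng.random((224, 224, 3)) for _ in range(8)]
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 0 = healthy, 1 = glaucoma (dummy)

with torch.no_grad():
    feats = torch.cat([vit(preprocess(img)) for img in X]).numpy()

clf = SVC(kernel="rbf").fit(feats, y)     # the SVM member of the hybrid ensemble
print(clf.predict(feats[:2]))
```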
The rapid advancement and proliferation of Cyber-Physical Systems (CPS) have led to an exponential increase in the volume of data generated continuously. Efficient classification of this streaming data is crucial for ...
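The excerpt does not reveal the paper's classifier, but streaming data is typically handled with incremental updates rather than full retraining. A minimal scikit-learn sketch, assuming mini-batches of sensor readings arrive continuously (synthetic data and an illustrative labelling rule):

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(42)
clf = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])

for step in range(100):                       # each iteration = one stream mini-batch
    X = rng.normal(size=(32, 6))              # 32 readings, 6 sensor channels
    y = (X[:, 0] + X[:, 1] > 0).astype(int)   # illustrative labelling rule
    clf.partial_fit(X, y, classes=classes)    # incremental update, no retraining

X_test = rng.normal(size=(200, 6))
y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
print("held-out accuracy:", clf.score(X_test, y_test))
```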
Scalability and information privacy are vital for training and deploying large-scale deep learning models. Federated learning trains models on exclusive information by aggregating weights from various devices and taking advantage of the device-agnostic environment of web browsers. However, relying on a main central server for internet browser-based federated systems can prohibit scalability and interfere with the training process as a result of growing client numbers. Additionally, information relating to the training dataset can possibly be extracted from the distributed weights, potentially reducing the privacy of the local data used for training. In this research paper, we aim to investigate the challenges of scalability and data privacy to increase the efficiency of distributed training systems. As a result, we propose a web-federated learning exchange (WebFLex) framework, which intends to improve the decentralization of the federated learning process. WebFLex is additionally developed to secure distributed and scalable federated learning systems that operate in web browsers across heterogeneous devices. Furthermore, WebFLex utilizes peer-to-peer interactions and secure weight exchanges using browser-to-browser web real-time communication (WebRTC), efficiently preventing the need for a main central server. WebFLex has been evaluated in various setups using the MNIST dataset. Experimental results show WebFLex's ability to improve the scalability of federated learning systems, allowing a smooth increase in the number of participating devices without central data aggregation. In addition, WebFLex can maintain a durable federated learning procedure even when faced with device disconnections and network instability. Moreover, it improves data privacy by utilizing artificial noise, which accomplishes an appropriate balance between accuracy and privacy preservation.
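A minimal sketch of the weight-exchange idea described above: each client perturbs its locally trained weights with artificial (Gaussian) noise before sharing, and peers average what they receive. The WebRTC transport and browser runtime are omitted, and the noise level and toy weights are assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_weights(local_weights, noise_std=0.01):
    """Add zero-mean Gaussian noise before the browser-to-browser exchange."""
    return [w + rng.normal(0.0, noise_std, w.shape) for w in local_weights]

def peer_average(all_client_weights):
    """Decentralized FedAvg step: element-wise mean over peers' weights."""
    return [np.mean(layer, axis=0) for layer in zip(*all_client_weights)]

# Three simulated clients, each with one weight matrix and one bias vector
clients = [[rng.normal(size=(4, 2)), rng.normal(size=2)] for _ in range(3)]
shared = [noisy_weights(w) for w in clients]   # what actually leaves a device
global_model = peer_average(shared)
print([p.shape for p in global_model])
```

The noise standard deviation governs the accuracy-privacy trade-off the abstract mentions: larger noise hides more about the local data but degrades the averaged model.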
The development of defect prediction plays a significant role in improving software quality. Such predictions are used to identify defective modules before testing and to minimize time and cost. Software with defects negatively impacts operational costs and ultimately affects customer satisfaction. Numerous approaches exist to predict software defects; however, timely and accurate defect prediction remains a major challenge. To improve timely and accurate software defect prediction, a novel technique called Nonparametric Statistical feature scaled QuAdratic regressive convolution Deep nEural Network (SQADEN) is introduced. The proposed SQADEN technique mainly includes two major processes, namely metric (feature) selection and classification. First, SQADEN uses the nonparametric statistical Torgerson–Gower scaling technique to identify the relevant software metrics, measuring their similarity with the dice coefficient. The feature-selection step reduces the time complexity of software fault prediction. With the selected metrics, software faults are predicted with the help of Quadratic Censored regressive convolution deep neural network-based classification. The deep learning classifier analyzes the training and testing samples using the contingency correlation coefficient. The softstep activation function provides the final fault-prediction results. To minimize the error, the Nelder–Mead method is applied to solve non-linear least-squares problems. Finally, accurate classification results with minimum error are obtained at the output layer. Experimental evaluation is carried out with different quantitative metrics such as accuracy, precision, recall, F-measure, and time complexity. The results demonstrate the superior performance of the proposed SQADEN technique, with accuracy, sensitivity and specificity higher by 3%, 3%, 2% and 3% and time and space lower by 13% and 15% when compared with the two sta...
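Two of the named ingredients can be illustrated in isolation: dice-coefficient relevance scoring over binary software metrics, and a derivative-free Nelder–Mead fit of a nonlinear least-squares objective via SciPy. This is not the full SQADEN pipeline; the data and the top-k choice below are synthetic assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(50, 5))    # 50 modules, 5 binary software metrics
y = rng.integers(0, 2, size=50)         # 1 = defective module (synthetic)

def dice(a, b):
    """Dice coefficient of two binary vectors: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.sum(a & b) / (np.sum(a) + np.sum(b) + 1e-12)

scores = [dice(X[:, j], y) for j in range(X.shape[1])]
selected = np.argsort(scores)[-3:]      # keep the 3 most target-similar metrics
print("selected metric indices:", selected)

# Nelder-Mead on a nonlinear least-squares objective (derivative-free)
t = np.linspace(0.0, 1.0, 20)
obs = 2.0 * np.exp(-3.0 * t) + rng.normal(0.0, 0.01, t.size)
sse = lambda p: np.sum((p[0] * np.exp(-p[1] * t) - obs) ** 2)
fit = minimize(sse, x0=[1.0, 1.0], method="Nelder-Mead")
print("fitted parameters:", fit.x)
```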
Wireless body sensor networks have gained significant importance across diverse fields, including environmental monitoring, healthcare, and sports. This research is concentrated on sports applications, specifically ex...
The successful execution and management of Offshore Software Maintenance Outsourcing (OSMO) can be very beneficial for OSMO vendors and the OSMO client. Although a lot of research on software outsourcing is going on, most of the existing literature on offshore outsourcing deals with the outsourcing of software development only. Several frameworks have been developed focusing on guiding software system managers concerning offshore software outsourcing. However, none of these studies delivered comprehensive guidelines for managing the whole process of OSMO. There is a considerable lack of research working on managing OSMO from a vendor's perspective. Therefore, to find the best practices for managing an OSMO process, it is necessary to further investigate such complex and multifaceted phenomena from the vendor's perspective. This study validated the preliminary OSMO process model via a case study research approach. The results showed that the OSMO process model is applicable in an industrial setting with few changes. The industrial data collected during the case study enabled this paper to extend the preliminary OSMO process model. The refined version of the OSMO process model has four major phases: (i) Project Assessment, (ii) SLA, (iii) Execution, and (iv) Risk.
Abnormal event detection in video surveillance is critical for security, traffic management, and industrial monitoring applications. This paper introduces an innovative methodology for anomaly detection in video data,...
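The methodology itself is cut off above, so the following is only a frame-differencing baseline for context: score each frame by its mean absolute difference from the previous frame and flag frames whose score exceeds a robust threshold (synthetic video, injected anomaly).

```python
import numpy as np

rng = np.random.default_rng(7)
frames = rng.random((100, 64, 64)).astype(np.float32)   # synthetic grayscale video
frames[60] += 0.5                                       # inject an "abnormal" frame

diffs = np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))   # per-frame motion score
threshold = np.median(diffs) + 3 * diffs.std()
anomalies = np.where(diffs > threshold)[0] + 1              # +1: diff i is frame i+1
print("anomalous frames:", anomalies)
```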