Banknote authentication is a critical aspect of the financial industry, playing a vital role in ensuring the integrity of transactions and safeguarding against counterfeiting and fraud. Preventing counterfeit banknotes is of utmost importance given the significant economic and national-security risks such activities pose. To address this challenge, machine learning algorithms trained on extensive datasets are employed to identify distinctive features and patterns characteristic of authentic banknotes. This work compares and evaluates the performance of several machine learning models on the banknote authentication task: Logistic Regression with LBFGS, Logistic Regression with SGD, SVM with SGD, Random Forest, XGBoost, and a Neural Network. By analyzing various parameters, the study aims to identify the most accurate and efficient approach for verifying banknote authenticity, thereby ensuring the integrity and security of financial transactions. The impact of counterfeit banknotes extends beyond the financial sector, affecting the economy and public trust in the currency's value; an effective and accurate banknote authentication system is therefore crucial for maintaining a stable economy. The findings from this study contribute to improved accuracy and efficiency in detecting counterfeit banknotes, mitigating financial losses, and preserving the economy's security and stability.
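As an illustration of one of the models compared above (logistic regression trained with plain SGD), a minimal pure-Python sketch is shown below. The toy 2-D points stand in for the banknote feature vectors; all data values and hyperparameters here are hypothetical, not taken from the paper.

```python
import math
import random

def train_logreg_sgd(X, y, lr=0.1, epochs=100, seed=0):
    """Logistic regression trained with plain SGD (illustrative sketch)."""
    rng = random.Random(seed)
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            z = sum(wj * xj for wj, xj in zip(w, X[i])) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y[i]  # gradient of the log-loss w.r.t. z
            w = [wj - lr * err * xj for wj, xj in zip(w, X[i])]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else 0

# Toy 2-D data standing in for banknote features:
# class 0 clustered near (0, 0), class 1 near (3, 3)
X = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.4], [3.1, 2.9], [2.8, 3.2], [3.0, 3.1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logreg_sgd(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(y)
```

On this separable toy set the model fits perfectly; the study's comparison of LBFGS against SGD hinges on how such per-sample updates trade convergence speed for accuracy on the real data.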
ISBN:
(Print) 9781450397148
The support vector machine (SVM) is one of the most successful classifiers in data mining. Its performance depends mainly on the parameters and features used. Several approaches have been proposed to maximize SVM performance, typically using evolutionary computation or swarm intelligence algorithms to learn the optimal parameters or select the best feature subset. However, these procedures are usually conducted separately, which makes it difficult to obtain a globally optimal SVM classifier because features and parameters interact with each other. This paper proposes to determine the parameters and perform feature selection for SVM simultaneously using the Artificial Bee Colony (ABC) algorithm, which can approach the overall optimal SVM classifier to the largest extent. The proposed method was run on several UCI datasets, and particle swarm optimization (PSO) and a genetic algorithm (GA) were used to optimize SVM in the same way for comparison. Experimental results show that the proposed method has good adaptability and classification accuracy: it can simultaneously obtain an optimal SVM classifier, and it outperforms PSO and GA in terms of optimization ability.
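The joint search described above can be sketched as an ABC loop over a mixed solution: continuous SVM parameters plus a binary feature mask. The fitness function below is a toy surrogate standing in for cross-validated SVM accuracy (the paper's actual objective), and all names and settings are illustrative assumptions, not the authors' implementation.

```python
import random

def abc_optimize(fitness, dim_cont, dim_bits, food_sources=10, limit=5,
                 iterations=50, seed=0):
    """Simplified Artificial Bee Colony over a mixed solution:
    continuous SVM parameters plus a binary feature mask (illustrative)."""
    rng = random.Random(seed)

    def random_solution():
        return ([rng.uniform(-3, 3) for _ in range(dim_cont)],
                [rng.randint(0, 1) for _ in range(dim_bits)])

    def neighbour(sol, other):
        cont, bits = sol
        oc, ob = other
        j = rng.randrange(dim_cont + dim_bits)
        if j < dim_cont:  # perturb a continuous parameter toward/away a peer
            cont = cont[:]
            cont[j] += rng.uniform(-1, 1) * (cont[j] - oc[j])
        else:             # crossover or flip a feature-mask bit
            bits = bits[:]
            k = j - dim_cont
            bits[k] = ob[k] if rng.random() < 0.5 else 1 - bits[k]
        return cont, bits

    sols = [random_solution() for _ in range(food_sources)]
    fits = [fitness(s) for s in sols]
    trials = [0] * food_sources
    for _ in range(iterations):
        # employed-bee phase: local search around each food source
        for i in range(food_sources):
            cand = neighbour(sols[i], sols[rng.randrange(food_sources)])
            f = fitness(cand)
            if f > fits[i]:
                sols[i], fits[i], trials[i] = cand, f, 0
            else:
                trials[i] += 1
        # onlooker phase: revisit sources proportionally to fitness
        total = sum(max(f, 1e-9) for f in fits)
        for _ in range(food_sources):
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for i, f in enumerate(fits):
                acc += max(f, 1e-9)
                if acc >= r:
                    break
            cand = neighbour(sols[i], sols[rng.randrange(food_sources)])
            f = fitness(cand)
            if f > fits[i]:
                sols[i], fits[i], trials[i] = cand, f, 0
        # scout phase: abandon exhausted food sources
        for i in range(food_sources):
            if trials[i] > limit:
                sols[i] = random_solution()
                fits[i] = fitness(sols[i])
                trials[i] = 0
    best = max(range(food_sources), key=lambda i: fits[i])
    return sols[best], fits[best]

# Toy surrogate for cross-validated SVM accuracy: prefers parameters near 0
# and a mask that keeps only the first two "informative" features.
def surrogate(sol):
    cont, bits = sol
    return -sum(c * c for c in cont) + bits[0] + bits[1] - sum(bits[2:])

(best_params, best_mask), best_fit = abc_optimize(surrogate, dim_cont=2, dim_bits=5)
```

Because one candidate encodes both parameters and mask, each fitness evaluation scores the pair jointly, which is precisely what the separate-stage approaches the paper criticizes cannot do.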
This paper examines the response of affective and cognitive processes under human–computer interaction (HCI) toward interactive video mobile learning known as the IVML prototype. The idea is to determine the potentia...
Recent generative methods have revolutionized the way of human motion synthesis, such as Variational Autoencoders (VAEs), Generative Adversarial Networks (GANs), and Denoising Diffusion Probabilistic Models (DMs). The...
ISBN:
(Digital) 9798331523657
ISBN:
(Print) 9798331523664
Early detection of Acute Lymphoblastic Leukemia (ALL) is vital for providing appropriate treatment, which requires an accurate diagnosis that identifies the type of the disease. This study therefore evaluates the performance of deep learning models in classifying different types of leukemia, using three pre-trained CNN models — MobileNet V2, DenseNet201, and EfficientNet — to extract deep features, which are then combined into a single set. An information-exchange technique was used to select the best features after merging, and SMOTEENN was employed to address class imbalance. The results showed that combining the features extracted by the pre-trained CNN models achieved high accuracy and reliability, reaching 99.66%, enabling patients to be classified accurately and quickly. This approach not only improves diagnostic accuracy but also accelerates treatment decision-making, enabling doctors to better tailor therapies and thereby improve patient outcomes. The findings underscore the importance of ongoing research in this field and offer recommendations for further improving performance and classification accuracy in similar medical applications.
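The fusion step described above — concatenating per-backbone feature vectors and then pruning the merged set — can be sketched as follows. The variance-based scoring here is a simple stand-in for the paper's information-exchange selection, and all feature values are hypothetical.

```python
def fuse_features(feature_sets):
    """Concatenate per-model deep-feature vectors sample-wise."""
    n_samples = len(feature_sets[0])
    return [sum((fs[i] for fs in feature_sets), []) for i in range(n_samples)]

def select_top_k_by_variance(X, k):
    """Keep the k columns with the highest variance (a simple stand-in
    for the paper's feature-selection step)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    var = [sum((row[j] - means[j]) ** 2 for row in X) / n for j in range(d)]
    keep = sorted(range(d), key=lambda j: var[j], reverse=True)[:k]
    keep.sort()
    return [[row[j] for j in keep] for row in X], keep

# Two samples, three backbone extractors with 2-D outputs each (toy values)
mobilenet = [[0.1, 0.9], [0.2, 0.8]]
densenet  = [[5.0, 0.0], [1.0, 0.0]]
effnet    = [[0.3, 0.3], [0.3, 0.3]]
fused = fuse_features([mobilenet, densenet, effnet])  # 2 samples x 6 features
reduced, kept = select_top_k_by_variance(fused, k=2)
```

In the actual pipeline the concatenated vectors would then be resampled with SMOTEENN before classification; that step needs a dedicated library (e.g. imbalanced-learn) and is omitted here.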
Traditional clustering methods, which rely on sample distances to construct clusters, cannot be applied to high-dimensional data because of its intricate structure and numerous irrelevant attributes. In practice, high-dimensional data often possess a latent low-dimensional subspace structure, and multi-view clustering achieves consistency and complementarity across views by uncovering this underlying subspace. In recent years, researchers have proposed many subspace-representation clustering methods, which have improved the performance of multi-view clustering to a certain extent. However, general clustering methods must construct an adjacency matrix and compute the eigendecomposition of its Laplacian matrix, which makes the algorithms very complex and restricts their use on large-scale problems. To address these issues, we propose a new multi-view subspace clustering model. The model learns a data-anchor bipartite graph to achieve a fast clustering process, where the edge weights of the bipartite graph represent the similarity between data points and anchors, and non-negative matrix factorization is used to project each view into a low-dimensional space. In addition, the $\ell_{1,2}$-norm is applied to the inter-view similarity matrix to mitigate the impact of outliers in the original data on the factorization, and a Schatten p-norm based on the t-SVD is employed to capture the high-order correlations between different views. Extensive experimental results on various multi-view datasets demonstrate the effectiveness of the proposed method.
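The data-anchor bipartite graph at the core of this model can be sketched minimally: edge weights are similarities between each sample and a small set of anchors, so no full n-by-n adjacency matrix or Laplacian eigendecomposition is needed. The Gaussian kernel and nearest-anchor assignment below are illustrative assumptions, not the paper's exact formulation.

```python
import math

def anchor_bipartite_graph(X, anchors, sigma=1.0):
    """Edge weights of a data-anchor bipartite graph via a Gaussian
    kernel, row-normalised so each sample's weights sum to 1."""
    Z = []
    for x in X:
        row = []
        for a in anchors:
            d2 = sum((xi - ai) ** 2 for xi, ai in zip(x, a))
            row.append(math.exp(-d2 / (2 * sigma ** 2)))
        s = sum(row)
        Z.append([w / s for w in row])
    return Z

def assign_clusters(Z):
    """Assign each sample to its most similar anchor (a cheap proxy
    for the spectral step that bipartite-graph methods avoid)."""
    return [max(range(len(row)), key=row.__getitem__) for row in Z]

# Six 2-D points and two anchors (toy values)
X = [[0.0, 0.1], [0.1, 0.0], [0.2, 0.2], [4.0, 4.1], [3.9, 4.0], [4.2, 3.8]]
anchors = [[0.1, 0.1], [4.0, 4.0]]
Z = anchor_bipartite_graph(X, anchors)
labels = assign_clusters(Z)
```

With m anchors and n samples, the graph costs O(nm) rather than O(n²), which is why anchor-based methods scale to the large problems the abstract targets.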
ISBN:
(Digital) 9798350348637
ISBN:
(Print) 9798350348644
Terrorism poses a significant global threat, necessitating comprehensive security measures to protect people and critical infrastructure. With the increasing frequency of explosive-based attacks, there is a pressing need for advanced detection systems to mitigate potential devastation. This paper presents the development of an area-based explosive trace detection system using deep transfer learning. Trained on a substantial dataset collected from sensor networks, the proposed model, Deep Transfer Learning for Explosive Trace Detection (DTLETD), demonstrates remarkable capability in classifying explosive gases from the concentrations of carbon (C), hydrogen (H), oxygen (O), and nitrogen (N). The methodology converts the dataset into 2D data using a serial-data-to-image generator, then adapts a pre-trained model for transfer learning. The DTLETD model outperformed conventional convolutional neural network (CNN) models, reducing training time to about 92 seconds from 1287 seconds for the CNN model. The transfer learning model also converges faster, with nearly zero loss during training and validation, yielding an impressive accuracy of 99.7% and an average AUC of approximately 0.89. This research advances the field of explosive trace detection by integrating machine learning approaches with deep transfer learning techniques, thereby enhancing security protocols and mitigating potential threats.
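The serial-data-to-image step mentioned above — folding a flat sequence of sensor readings into a 2-D array that a pre-trained image model can consume — can be sketched as follows. The width and the sample values are hypothetical; the paper's generator may map readings to pixels differently.

```python
def series_to_image(samples, width):
    """Reshape a flat sensor reading (e.g. C, H, O, N concentrations
    over time) into a 2-D "image" row by row, zero-padding the tail."""
    padded = samples + [0.0] * ((-len(samples)) % width)
    return [padded[i:i + width] for i in range(0, len(padded), width)]

# Ten readings folded into a 4-column image (hypothetical values)
reading = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
img = series_to_image(reading, width=4)
```

Once the readings are in image form, a pre-trained CNN backbone can be fine-tuned on them, which is what lets transfer learning cut the training time reported in the abstract.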
Bangladesh has seen an absurdly steep price hike over the last couple of years in one of the most consumed foods, eaten by millions of people every single day: rice. The impact of this phenomenon, however, is indispens...
Learning-based video deraining methods generally integrate temporal correlation within the network. But their non-transparency (i.e., difficult to comprehend how to exploit temporal correlation) seriously limits the d...
This paper introduces a novel approach for creating a visual place recognition (VPR) database for localization in indoor environments from RGBD scanning sequences. The proposed method formulates the problem as a minimization challenge by applying a dominating set algorithm to a graph constructed from spatial information, referred to as the “DominatingSet” algorithm. Experimental results on various datasets, including 7-scenes, BundleFusion, RISEdb, and specifically recorded sequences in a highly repetitive office setting, demonstrate that the technique significantly reduces database size while maintaining VPR performance comparable to state-of-the-art approaches in challenging environments. The solution also enables weakly supervised labeling of all images in the sequences, facilitating automatic fine-tuning of the VPR algorithm to the target environment. In addition, the paper presents a fully automated pipeline for creating VPR databases from RGBD scanning sequences and introduces a set of metrics for evaluating the performance of VPR databases. The code and released data are available on our web page: https://***/place-recognition-db/.
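The database-minimization idea above reduces to a dominating set problem: keep the smallest set of frames whose spatial neighbourhoods cover every frame in the sequence. A minimal greedy sketch is shown below; the toy graph and the greedy heuristic are illustrative assumptions, not the paper's exact solver.

```python
def greedy_dominating_set(adjacency):
    """Greedy approximation of a minimum dominating set: repeatedly
    pick the node whose closed neighbourhood covers the most
    still-uncovered nodes."""
    uncovered = set(adjacency)
    chosen = []
    while uncovered:
        best = max(adjacency,
                   key=lambda v: len(({v} | adjacency[v]) & uncovered))
        chosen.append(best)
        uncovered -= {best} | adjacency[best]
    return chosen

# Frames as nodes; edges connect frames whose camera poses are close (toy graph)
graph = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1, 3},
    3: {2, 4},
    4: {3, 5},
    5: {4},
}
keyframes = greedy_dominating_set(graph)
```

Here two frames suffice to cover all six, mirroring how the method shrinks the VPR database while every location still has a nearby reference image.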