The measurement property of the six-particle maximally entangled state was used by Sun [Mod. Phys. Lett. A 37, 2250149 (2022)] to design a quantum private comparison (QPC) protocol. However, this study points out that...
k-Nearest Neighbor (k-NN) achieves fairly good accuracy on data whose classes are roughly balanced, whereas on data with an imbalanced class distribution its accuracy tends to be low. Moreover, k-NN does not differentiate between features: every feature carries the same weight when determining the class of new data, so before classification a feature selection step is needed to pick the features most relevant to the class. To overcome these problems, this research proposes a framework that uses the Synthetic Minority Oversampling Technique (SMOTE) to address class imbalance and the Gain Ratio (GR) to perform feature selection, yielding a new dataset with a balanced class distribution and features relevant to the class. The datasets used in this research are E-Coli, Glass, and New Thyroid. For objective results, 10-fold cross-validation is used as the evaluation method, with k values from 1 to 10. The results show that SMOTE and GR can increase the accuracy of the k-NN method: the largest improvement occurred on the Glass dataset, with an increase of 18.43%, and the smallest on the New Thyroid dataset, with an increase of 8.73%. Accuracy improved on all datasets used, with an average increase of 12.87%.
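As a rough illustration of the pipeline above, the sketch below chains gain-ratio feature selection, SMOTE oversampling, and 10-fold cross-validated k-NN for k = 1..10. It assumes scikit-learn and imbalanced-learn are installed; the gain-ratio implementation, the number of discretization bins, the selection cutoff, and the stand-in dataset are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gain_ratio(feature, y, bins=10):
    """Information gain of a discretized feature divided by its split info."""
    edges = np.histogram(feature, bins=bins)[1]
    binned = np.digitize(feature, edges[1:-1])
    cond_entropy = sum(
        np.mean(binned == v) * entropy(y[binned == v]) for v in np.unique(binned)
    )
    info_gain = entropy(y) - cond_entropy
    split_info = entropy(binned)
    return info_gain / split_info if split_info > 0 else 0.0

# Stand-in imbalanced dataset (the paper used E-Coli, Glass, and New Thyroid).
X, y = load_breast_cancer(return_X_y=True)

# 1) Feature selection: keep the features with the highest gain ratio.
scores = np.array([gain_ratio(X[:, j], y) for j in range(X.shape[1])])
top_k = 10  # illustrative cutoff
X_sel = X[:, np.argsort(scores)[::-1][:top_k]]

# 2) Balance the class distribution with SMOTE.
X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_sel, y)

# 3) 10-fold cross-validated k-NN for k = 1..10, as in the evaluation above.
for k in range(1, 11):
    acc = cross_val_score(KNeighborsClassifier(n_neighbors=k), X_bal, y_bal, cv=10)
    print(f"k={k:2d}  mean accuracy={acc.mean():.3f}")
```

Note that, following the order described in the abstract, oversampling is applied to the whole dataset before cross-validation; a leakage-safe variant would apply SMOTE inside each training fold instead.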
The Naïve Bayes method is proven to be fast when applied to large datasets, but it has a weakness in attribute handling: as a statistical classifier based solely on Bayes' theorem, it can only predict class-membership probabilities under the assumption that attributes are independent, and it cannot account for attributes that are highly correlated with the class or with one another, which can affect accuracy. The Weighted Naïve Bayesian model has been shown to provide better accuracy than conventional Naïve Bayes. The highest accuracy value, 88.57%, was obtained on the Water Quality dataset with the Weighted Naïve Bayesian model, while the lowest, 78.95%, was obtained on the Haberman dataset with the conventional Naïve Bayesian model. The Weighted Naïve Bayesian model improved accuracy by 2.9% on the Water Quality dataset and by 1.8% on the Haberman dataset, an average improvement of 2.35% across the datasets. Based on the testing performed on all test data, the Weighted Naïve Bayesian classification model can be said to provide better accuracy than the conventional Naïve Bayesian classification model.
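One common way to realize a weighted Naïve Bayes of the kind described above is to raise each feature's class-conditional likelihood to a per-feature weight, i.e. log P(c | x) ∝ log P(c) + Σ_j w_j log p(x_j | c). The sketch below does this with Gaussian likelihoods and weights derived from mutual information; the abstract does not specify the paper's exact weighting scheme, so treat this as an assumed variant.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import train_test_split

class WeightedGaussianNB:
    """Gaussian Naive Bayes with per-feature weights on the log-likelihoods."""

    def fit(self, X, y, weights):
        self.classes_ = np.unique(y)
        self.w = weights
        self.priors = np.array([np.mean(y == c) for c in self.classes_])
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        return self

    def predict(self, X):
        # Per-feature Gaussian log-likelihood, shape (n_classes, n_samples, n_features).
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None, :, :] - self.mu[:, None, :]) ** 2 / self.var[:, None, :])
        # Weight each feature's contribution by w_j before summing.
        scores = np.log(self.priors)[:, None] + (self.w * ll).sum(axis=2)
        return self.classes_[scores.argmax(axis=0)]

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Illustrative weights: normalized mutual information between feature and class.
w = mutual_info_classif(X_tr, y_tr, random_state=0)
w = w / w.sum() * len(w)  # rescale so the weights average to 1

model = WeightedGaussianNB().fit(X_tr, y_tr, w)
print(f"weighted NB accuracy: {(model.predict(X_te) == y_te).mean():.3f}")
```

Setting all weights to 1 recovers conventional Gaussian Naïve Bayes, which makes this formulation convenient for the kind of weighted-versus-conventional comparison the abstract reports.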
The objectives of the research article were: to create an online historical map of the Pak Phraek road community, Kanchanaburi. The samples and key informants included villagers, philosophers of the Pak Phraek road co...
Phased arrays are crucial in various technologies, such as radar and wireless communications, due to their ability to precisely control and steer electromagnetic waves. This precise control improves signal processing ...
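The beam steering the abstract refers to can be made concrete with the standard uniform-linear-array factor: applying a progressive phase shift of −kd·n·sin θ₀ across elements moves the main lobe to θ₀. The element count, spacing, and steering angle below are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

N = 16                    # number of array elements (illustrative)
d = 0.5                   # element spacing in wavelengths
theta0 = np.deg2rad(30)   # desired steering angle

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
n = np.arange(N)[:, None]

# Array factor AF(theta) = sum_n exp(j*2*pi*d*n*(sin(theta) - sin(theta0))):
# the -sin(theta0) term is the progressive phase shift that steers the beam.
af = np.sum(np.exp(1j * 2 * np.pi * d * n * (np.sin(theta) - np.sin(theta0))), axis=0)

peak = theta[np.argmax(np.abs(af))]
print(f"main lobe at {np.rad2deg(peak):.1f} degrees (target 30.0)")
```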
The most adopted definition of landslide hazard combines spatial information about landslide location (susceptibility), threat (intensity), and frequency (return period). Only the first two elements are usually considered and estimated when working over vast areas. Even then, separate models constitute the standard, with frequency being rarely investigated. Frequency and intensity depend on each other, because larger events occur less frequently and vice versa. However, due to the lack of multi-temporal inventories and joint statistical models, modeling such properties via a unified hazard model has always been challenging and had yet to be attempted. Here, we develop a unified model to estimate landslide hazard at the slope-unit level to address these gaps. We employ deep learning, combined with extreme value theory, to analyze an inventory of 30 years of observed rainfall-triggered landslides in Nepal and assess landslide hazard for multiple return periods. We also use our model to further explore landslide hazard for the same return periods under different climate change scenarios up to the end of the century. Our model performs excellently (with an accuracy of 0.78 and an area under the curve of 0.86) and can be used to model landslide hazard in a unified manner. Geomorphologically, we find that under climate change scenarios (SSP245 and SSP585), landslide hazard is likely to increase up to two times on average in the lower Himalayan regions (Siwalik and lower Himalayas; ≈110–3,500 m), remain roughly the same in the middle Himalayan region (≈3,500–5,000 m), and decrease slightly in the upper Himalayan region (≳5,000 m). Key Points: A landslide hazard model is developed by combining deep learning and the extended generalized Pareto distribution from extreme value theory. Thirty years of past observations are used to model the current landslide hazard scenario under different return periods of precipitation. Landslide hazard is predicted for multiple return periods...
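The statistical core of such a unified model can be sketched compactly: a small network maps slope-unit covariates to the parameters of an extended generalized Pareto distribution (eGPD), G(x) = H(x/σ)^κ with H the GPD CDF, trained by maximum likelihood. The architecture, the fixed shape parameter, and the synthetic data below are assumptions for illustration, not the paper's actual setup.

```python
import torch
import torch.nn as nn

XI = 0.1  # fixed GPD shape parameter (assumed for the sketch)

class EGPDNet(nn.Module):
    """Maps slope-unit covariates to eGPD parameters (kappa, sigma)."""
    def __init__(self, n_features):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                  nn.Linear(32, 2))
    def forward(self, x):
        out = self.body(x)
        kappa = nn.functional.softplus(out[:, 0]) + 1e-6  # kappa > 0
        sigma = nn.functional.softplus(out[:, 1]) + 1e-6  # sigma > 0
        return kappa, sigma

def egpd_nll(y, kappa, sigma, xi=XI):
    """Negative log-likelihood of the eGPD density
    g(y) = (kappa / sigma) * h(y/sigma) * H(y/sigma)**(kappa - 1)."""
    z = y / sigma
    H = 1.0 - (1.0 + xi * z).clamp(min=1e-12) ** (-1.0 / xi)  # GPD CDF
    log_h = (-1.0 / xi - 1.0) * torch.log1p(xi * z)           # GPD log-pdf kernel
    log_g = (torch.log(kappa) - torch.log(sigma) + log_h
             + (kappa - 1.0) * torch.log(H.clamp(min=1e-12)))
    return -log_g.mean()

# Synthetic stand-in data: covariates and positive "landslide size" targets.
torch.manual_seed(0)
X = torch.randn(512, 8)
y = torch.rand(512) * 2 + 0.1

net = EGPDNet(8)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
for step in range(200):
    kappa, sigma = net(X)
    loss = egpd_nll(y, kappa, sigma)
    opt.zero_grad(); loss.backward(); opt.step()

# Hazard per slope unit for a given size x: P(size > x) = 1 - H(x/sigma)**kappa.
with torch.no_grad():
    kappa, sigma = net(X[:5])
    x = torch.tensor(1.0)
    exceed = 1.0 - (1.0 - (1.0 + XI * x / sigma) ** (-1.0 / XI)) ** kappa
    print(exceed)
```

Because the fitted distribution covers the full size spectrum, both frequency (exceedance probability) and intensity (size quantiles for a given return period) fall out of the same model, which is the sense in which the hazard estimate is "unified".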
This research introduces a deep learning framework that combines convolutional neural networks with autoencoders to improve the diagnostic accuracy of knee osteoarthritis. The study utilized a publicly available datas...
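A minimal sketch of the architecture family described above, a convolutional autoencoder whose encoder features also feed a classification head, is given below. The layer sizes, input resolution, and five-grade output (the Kellgren-Lawrence osteoarthritis scale) are assumptions for illustration, not the paper's actual network.

```python
import torch
import torch.nn as nn

class CAEClassifier(nn.Module):
    """Convolutional autoencoder with a shared encoder feeding a classifier head."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 224 -> 112
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 112 -> 56
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),     # back to 224
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(32, n_classes))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)  # reconstruction + class logits

x = torch.randn(4, 1, 224, 224)  # batch of grayscale knee X-rays (synthetic)
model = CAEClassifier()
recon, logits = model(x)
# Joint objective: reconstruction (autoencoder) + cross-entropy (diagnosis).
loss = (nn.functional.mse_loss(recon, x)
        + nn.functional.cross_entropy(logits, torch.randint(0, 5, (4,))))
print(recon.shape, logits.shape, float(loss))
```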
Billions of photos are uploaded to the web daily through various types of social networks. Some of these images receive millions of views and become popular, whereas others remain completely unnoticed. This raises the...
The development of technology has been very significant, not only in the fields of information, industry, and education, but also in agriculture. Therefore, technological sophistication is also utilized by corn farmers to obtai...
ISBN (digital): 9798350389210
ISBN (print): 9798350389227
With the recent rapid development of the Internet of Vehicles and autonomous driving technology, the demand for digital vehicle identification has also increased. Coupled with the gradual maturing of quantum technology, the Internet of Vehicles, which relies mainly on wireless communication for data transmission, will face both the threat of quantum computing and privacy issues. This research proposes the Vehicle Forensics Cloud System (VFCS), a framework combining post-quantum cryptography and blockchain technology to address these issues and provide a comprehensive solution for vehicle digital forensics.
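The record-keeping idea behind a system like VFCS can be sketched as a hash-chained ledger of signed vehicle events, so that tampering with any past record breaks the chain. The sketch below uses the Dilithium scheme via liboqs-python for the post-quantum signature; the library calls and the ledger layout are assumptions for illustration, not the paper's actual design.

```python
import hashlib, json, time
import oqs  # pip install liboqs-python (assumed available)

signer = oqs.Signature("Dilithium2")      # post-quantum signature scheme
public_key = signer.generate_keypair()

chain = []  # in-memory stand-in for the blockchain ledger

def append_event(event: dict):
    """Sign a vehicle event and link it to the previous block by hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps({"event": event, "prev": prev_hash,
                          "ts": time.time()}, sort_keys=True).encode()
    chain.append({
        "payload": payload.decode(),
        "signature": signer.sign(payload).hex(),      # PQ signature over payload
        "hash": hashlib.sha256(payload).hexdigest(),  # link for the next block
    })

def verify_chain():
    """Check every hash link and every post-quantum signature."""
    verifier = oqs.Signature("Dilithium2")
    prev = "0" * 64
    for block in chain:
        payload = block["payload"].encode()
        assert json.loads(block["payload"])["prev"] == prev, "broken link"
        assert verifier.verify(payload, bytes.fromhex(block["signature"]),
                               public_key), "bad signature"
        prev = block["hash"]
    return True

append_event({"vin": "TEST123", "type": "collision", "speed_kmh": 42})
append_event({"vin": "TEST123", "type": "airbag_deployed"})
print("ledger valid:", verify_chain())
```

Any change to a past payload invalidates both its signature and the `prev` hash stored in every later block, which is the tamper-evidence property a forensic ledger needs.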