During the last three decades, the importance of application software has increased tremendously. Every organization has automated all of its day-to-day activities. For automation purposes, they used the procedural or...
Analyzing incomplete data is one of the prime concerns in data analysis. Discarding missing records or values can result in inaccurate analysis outcomes or loss of useful information, especially when the dataset is small. A preferable alternative is to substitute the missing values through imputation such that the substituted values are very close to the actual missing values, which is a challenging task. Despite the existence of many imputation algorithms, there is no universal imputation algorithm that yields the best values for all types of datasets. This is mainly because the performance of an imputation algorithm depends on the inherent properties of the data, including the type of data distribution, data size, dimensionality, presence of outliers, data dependency among the attributes, and so on. In the literature, there is no straightforward method for determining a suitable imputation algorithm based on the data characteristics. The existing practice is to conduct exhaustive experimentation with the available imputation techniques on every dataset, which requires a lot of time and effort. Moreover, the current approaches for checking the suitability of an imputation cannot be applied when ground truth data is not available. In this paper, we propose a new method for the systematic selection of a suitable imputation algorithm based on the inherent properties of the dataset, which eliminates the need for exhaustive experimentation. Our method determines the imputation technique that consistently gives lower errors when imputing datasets with specific properties. It is also particularly useful when real-world data lack the ground truth for missing values needed to check imputation performance and suitability. Once the suitability of a DI technique is established based on the data properties, this selection remains valid for another dataset with similar properties. Thus, our method can save time and...
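The abstract does not detail the property-based selection procedure itself, so the following minimal sketch only illustrates the exhaustive-experimentation baseline it aims to replace: mask values in a complete dataset, impute them with several candidate algorithms, and compare the imputation error against the held-out ground truth. The choice of dataset, the candidate imputers, and the 10% missing-completely-at-random rate are assumptions made purely for illustration.

```python
# Minimal sketch (not the paper's method): the conventional "exhaustive
# experimentation" baseline -- mask known values, impute with several
# candidate algorithms, and compare RMSE against the held-out ground truth.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.impute import SimpleImputer, KNNImputer

rng = np.random.default_rng(0)
X = load_diabetes().data.copy()          # complete dataset = ground truth

# Introduce 10% missing values completely at random (MCAR) -- assumed rate.
mask = rng.random(X.shape) < 0.10
X_missing = X.copy()
X_missing[mask] = np.nan

# Hypothetical candidate set; real studies would include many more imputers.
candidates = {
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
    "knn": KNNImputer(n_neighbors=5),
}

for name, imputer in candidates.items():
    X_imputed = imputer.fit_transform(X_missing)
    rmse = np.sqrt(np.mean((X_imputed[mask] - X[mask]) ** 2))
    print(f"{name:>6}: RMSE = {rmse:.4f}")
```

The paper's contribution is precisely to avoid running such a sweep for every new dataset by mapping data properties to an imputer that has already been shown to perform well on similar data.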
Graph convolutional networks (GCNs) have emerged as a powerful tool for action recognition, leveraging skeletal graphs to encapsulate human motion. Despite their efficacy, a significant challenge remains the dependenc...
Data fusion generates fused data by combining multiple sources, resulting in information that is more consistent, accurate, and useful than any individual source and more reliable and consistent than the raw original data, which are often imperfect, inconsistent, and complex. Traditional data fusion methods such as probabilistic fusion, set-based fusion, and evidential belief reasoning are computationally complex and require accurate classification and proper handling of raw data. Data fusion is the process of integrating multiple data sources, while data filtering means examining a dataset to exclude, rearrange, or apportion data. Sensors generate large amounts of data, requiring the development of machine learning (ML) algorithms to overcome the challenges of traditional methods. The advancement in hardware acceleration and the abundance of data from various sensors have led to the development of ML algorithms that are expected to address the limitations of traditional methods. However, many open issues still exist when machine learning algorithms are used for data fusion. In the literature, nine issues have been identified irrespective of the domain, and decision-makers should pay attention to these issues as data fusion becomes more applicable and successful. A fuzzy analytical hierarchical process (FAHP) enables us to handle these issues: it helps to obtain the weight for each corresponding issue and to rank the issues based on these calculated weights. The most significant issue identified is the lack of deep learning models used for data fusion that improve accuracy and learning quality, which received the highest weight; the least significant one is cross-domain multimodal data fusion, weighted 0.076, because the whole semantic knowledge for multimodal data cannot be captured.
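The abstract does not give the pairwise comparison matrices or the exact FAHP formulation used, so the sketch below shows one common variant (Buckley's geometric-mean method with triangular fuzzy numbers) applied to an invented three-issue comparison matrix, purely to illustrate how fuzzy weights are computed, defuzzified, and ranked.

```python
# Illustrative FAHP sketch (Buckley's geometric-mean method with triangular
# fuzzy numbers). The 3x3 comparison matrix is invented for demonstration
# and is not from the paper, which compares nine issues.
import numpy as np

# Triangular fuzzy pairwise comparisons, shape (n, n, 3) = (low, mid, high);
# the lower triangle holds the reciprocals of the upper triangle.
M = np.array([
    [[1, 1, 1],        [2, 3, 4],       [4, 5, 6]],
    [[1/4, 1/3, 1/2],  [1, 1, 1],       [1, 2, 3]],
    [[1/6, 1/5, 1/4],  [1/3, 1/2, 1],   [1, 1, 1]],
])

n = M.shape[0]
# Fuzzy geometric mean of each row (componentwise over low, mid, high).
r = np.prod(M, axis=1) ** (1.0 / n)          # shape (n, 3)
# Fuzzy weight: r_i * (sum_j r_j)^(-1); inverting a TFN reverses (l, m, u).
total = r.sum(axis=0)                        # (sum_l, sum_m, sum_u)
w_fuzzy = r * (1.0 / total[::-1])            # shape (n, 3)
# Defuzzify by the centroid and normalise to obtain crisp weights.
w = w_fuzzy.mean(axis=1)
w /= w.sum()

for i in np.argsort(-w):                     # rank issues by weight
    print(f"issue {i + 1}: weight = {w[i]:.3f}")
```

Ranking the issues then reduces to sorting the crisp weights, exactly as the paper ranks its nine issues from most to least significant.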
Histopathology image analysis is crucial for the accurate diagnosis of breast cancer (BC), which is the most prevalent cancer among women. Prompt secondary opinions based on automated analysis can assist pathologi...
Ensuring the security and protection of digital content transmitted over a network is a critical challenge, as data can take various forms, such as text, images, audio, and videos, all of which can be easily manipulat...
Aims/Background: Twitter has rapidly become a go-to source for current events coverage. The more people rely on it, the more important it is to provide accurate data. Twitter makes it easy to spread misinformation, wh...
Social media is so widely used that travelers frequently post texts, photos, and videos about their experiences online. This user-generated content (UGC) significantly affects how travelers perceive the Tourist Destination Image (TDI) and directly influences their decision-making. UGC photos represent travelers' visual preferences for a particular place. Considering the significance of photographs, many researchers have tried to analyze them using machine learning methods. Nevertheless, a limitation of research efforts employing these techniques for tourism image analysis is that they cannot precisely categorize the unique photos found at specific tourist destinations using predetermined images. To overcome these challenges, this work proposes a tourism image classification model termed the Improved Wiener Filtering and Ensemble of Classification model for Tourism Image Classification (IWF-ECTIC), which includes preprocessing, segmentation, feature extraction, and classification steps. Preprocessing is done via improved Wiener filtering, which provides a more robust assessment of image quality. Segmentation is performed with the Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) model, which can effectively handle outliers and noisy points. Following segmentation, MLGBPHS (Modified Local Gabor Binary Pattern Histogram Sequence), MBP (Median Binary Pattern), color, shape, and Improved Entropy-based features are extracted in the feature extraction stage to reduce the dimensionality of the data. Finally, ensemble classification is performed by combining the Multihead Convolutional Neural Network with Attention Mechanism (MHCNN-AM), Bidirectional Long Short-Term Memory (Bi-LSTM), and Recurrent Neural Network (RNN) classifiers to classify the images. To improve classification accuracy, the MHCNN-AM, Bi-LSTM, and RNN classifiers are optimally trained using the proposed Cuckoo Updated Chimp Optimization (CUCO) algorithm...
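The abstract does not include the network architectures, the details of the improved filtering, the HDBSCAN and feature-extraction settings, or the CUCO training procedure, so the sketch below only shows the general shape of such a pipeline: SciPy's standard Wiener filter stands in for the improved filtering step, and simple soft voting over class probabilities stands in for the three-classifier ensemble. The function names `preprocess` and `soft_vote` and all of the toy data are hypothetical.

```python
# Pipeline-shape sketch only: a classic Wiener filter replaces the paper's
# "improved" filtering, and soft voting over class probabilities replaces the
# MHCNN-AM / Bi-LSTM / RNN ensemble and its CUCO-based training.
import numpy as np
from scipy.signal import wiener

def preprocess(gray_image: np.ndarray) -> np.ndarray:
    """Denoise a 2-D grayscale image with a standard Wiener filter."""
    return wiener(gray_image, mysize=5)

def soft_vote(prob_list: list[np.ndarray]) -> np.ndarray:
    """Average class-probability matrices (n_samples x n_classes) from
    several classifiers and return the most probable class per sample."""
    avg = np.mean(np.stack(prob_list, axis=0), axis=0)
    return np.argmax(avg, axis=1)

# Toy usage with random stand-ins for the three classifiers' outputs.
rng = np.random.default_rng(0)
image = preprocess(rng.random((64, 64)))                        # one denoised image
probs = [rng.dirichlet(np.ones(4), size=10) for _ in range(3)]  # 3 models, 10 images, 4 classes
print(soft_vote(probs))
```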
The modernization of IoT technology has introduced new cybersecurity challenges, with Distributed Denial of Service (DDoS) attacks posing significant risks to the availability of IoT services. These attacks disrupt ta...
This study examines the influence of vegetation on sediment movement in a 180° river bend with rigid emergent vegetation on the outer side. Using an Acoustic Doppler Velocimeter, researchers collected 504 velocit...