Deep learning is an artificial intelligence approach used to understand complex data. It uses large amounts of data to capture the complicated relationships between various factors. With deep learning, models can be built that are capable of recognizing the intricate patterns within big datasets. In this paper, we explore the use of deep learning for regression and classification tasks on large datasets. In particular, we use deep learning techniques to build a regression model that predicts a target value based on the features in a large dataset, and a classification model that can assign an object to one of the many classes in an extensive dataset. We then compare the performance of these models against standard models and discuss potential areas for improvement. Moreover, we analyze the scalability of deep learning techniques applied to large datasets and discuss best practices for building deep learning models on massive datasets. Finally, we summarize our findings and offer our conclusions about the use of deep learning for regression and classification tasks on large datasets.
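At its simplest, the regression task described above reduces to fitting model parameters by gradient descent on a squared-error loss. A deliberately tiny, pure-Python sketch of that training loop follows (one linear unit rather than a deep network; the data, learning rate, and epoch count are illustrative assumptions, not the paper's actual setup):

```python
import random

# Minimal sketch of the gradient-descent loop underlying deep-learning
# regression: a single linear unit trained with squared-error loss.
# Data and hyperparameters are illustrative, not from the paper.
random.seed(1)

# Training data for the target function y = 2x + 1.
data = [(x / 10.0, 2 * (x / 10.0) + 1) for x in range(-10, 11)]

w, b = random.random(), random.random()  # randomly initialized parameters
lr = 0.1                                 # learning rate

for _ in range(2000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        # Gradients of 0.5 * err**2 with respect to w and b.
        w -= lr * err * x
        b -= lr * err

print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

A deep model replaces the single unit with stacked nonlinear layers, but the loop (predict, measure error, step parameters against the gradient) is the same.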
Sports science is an interdisciplinary and multidisciplinary field that strives to increase athletic performance and endurance, and to recognize and prevent injuries. Sensors and statistics formalize sports science. Runners need coaches and teams to support them before, during, and after a race, and coaches generate running training plans to boost performance. Race performance may be affected by air pollution exposure during training, so coaches should consider limiting that exposure. One of the relevant external factors is particulate matter (PM2.5 and PM10). Internet-connected sensors can record these external factors and produce CSV data. The foundation of supervised machine learning is the labeling process. Labeling a dataset is one of the most laborious and time-consuming phases of any machine-learning application, because it requires verifying the accuracy of the labels and making any necessary revisions. This research aimed to automatically label large amounts of raw air particulate matter data using a rule based on the parameters, in order to reduce manual work and human error and to speed up the process. The labeled data will later be used for supervised machine-learning classification to support coaches in generating training programs for runners in a sports information system. Using a rule-based approach derived from the Indonesia Air Quality Index, labeled CSV data was generated and tested with the PM2.5 and PM10 parameters in three scenarios, with a 100% success rate. The labeling process could be automated, and the automation produced fast and accurate results.
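The rule-based labeling idea can be sketched with a threshold table applied row-by-row to CSV readings. Note the category breakpoints below are placeholder values, not the official Indonesia Air Quality Index table, and the column names are illustrative:

```python
import csv
import io

# Illustrative rule table: (upper bound, label) pairs for PM2.5 readings.
# These thresholds are placeholders; substitute the published index values.
PM25_RULES = [
    (15.5, "Good"),
    (55.4, "Moderate"),
    (150.4, "Unhealthy"),
    (250.4, "Very Unhealthy"),
    (float("inf"), "Hazardous"),
]

def label_pm25(value):
    # Return the first category whose upper bound the reading falls under.
    for upper, label in PM25_RULES:
        if value <= upper:
            return label

# Hypothetical sensor output in CSV form (column names are assumptions).
raw = "timestamp,pm25\n2024-01-01T00:00,12.0\n2024-01-01T01:00,80.3\n"
reader = csv.DictReader(io.StringIO(raw))
labeled = [{**row, "label": label_pm25(float(row["pm25"]))} for row in reader]
print([r["label"] for r in labeled])  # ['Good', 'Unhealthy']
```

The same pattern extends to PM10 with a second rule table, and the labeled rows can be written back out with `csv.DictWriter` for the downstream classifier.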
Low-Rate Denial of Service (LDoS) attacks, an emerging breed of DoS attacks, present a formidable detection challenge. Within the realm of network security, these attacks cast a substantial shadow over the availability and integrity of services. This study delves into the mechanics of LDoS attacks and explores the landscape of detection methodologies developed against them. Using bibliometric analysis, we take a multifaceted approach to gathering research insights on LDoS attack detection: examining evolving research trends, scrutinizing widely cited scientific publications in the domain, identifying the prolific researchers shaping the field, and selecting reputable publishers for dependable information acquisition. Furthermore, our study carefully categorizes and classifies previously proposed LDoS detection methods documented in well-established scientific literature. This not only provides a deeper understanding of the evolutionary trajectory and thematic thrusts of LDoS attack detection research, but also facilitates the identification of gaps and directions for future exploration. These analytical efforts culminate in a comprehensive framework: a structured categorization, classification, and forward-looking research roadmap that together serve as valuable tools for both current and aspiring researchers engaged in LDoS attack detection. In summary, this research both examines the complexities of LDoS attacks and their mechanisms and augments the current understanding of detection capabilities, extending a helpful guide to the dynamic realm of LDoS attack detection and catalyzing future advancements and insights.
This paper examines the reproducibility of big data analytics under particular conditions. It proposes a "performing Scalable Inference" technique to cope with scalability issues and to exploit existing big data platforms for efficient computing and data storage. In particular, the paper describes how to perform leak-free, parallelizable visual analytics over massive datasets using existing big data analytics frameworks such as Apache Flink. This method provides an automated way to execute analytics that preserves reproducibility and the ability to make adjustments without re-running the entire process. The paper also demonstrates how these analytics can support several real-world use cases, such as exploring patient cohorts for research and developing stratified patient cohorts for medical care. Finally, the paper considers how the proposed method can be exercised in the real world. Scalable inference for big data analytics is pivotal for optimizing decision-making and resource allocation. Typically, such inferences are drawn from information accumulated from numerous sources, including databases, unstructured data, and other digital sources. To ensure scalability, a complete cloud-based platform should be employed. This solution will permit the ***; deploying the essential data-collection and analysis algorithms is key here. It would allow the platform to recognize the patterns within the data and discover any potential correlations or trends. Additionally, predictive analytics and machine-learning techniques can be incorporated to provide insights into the implications of the data. Ultimately, by leveraging these techniques, the platform can draw efficient inferences and accurately evaluate situations in an agile and efficient way.
ISBN (digital): 9798350364101
ISBN (print): 9798350364118
The energy control of a Wireless Sensor Network (WSN) often leads to an imbalance between the battery storage system, the energy extracted through photovoltaic systems, and the energy consumed by the WSN. These disparities result in suboptimal network performance, and the reliance on batteries is a major contributing factor. The Q-Learning Energy Management System (Q-EMS) is designed to address these challenges and improve energy-management strategies. The Q-EMS algorithm uses a learning process that yields optimal actions for sensor nodes in different situations: the rewards or punishments that nodes receive for their decisions drive the learning. The Q-learning approach aims to balance energy consumption and supply in WSNs. Energy supply is categorized into battery-based, transfer-based, and harvesting, while energy consumption is classified into task-cycle, mobility-based, and data-driven. The Q-learning algorithm model was developed for three scenarios: determining the energy needs of the WSN, an effective energy-harvesting strategy, and effective energy transfer. As a result, energy demand, harvesting, and transfer are balanced effectively by the Q-learning algorithm. However, the approach needs to be studied in more depth using real field data; future research will use real data and further optimize the use of the Q-learning algorithm.
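The reward-driven learning process described above can be sketched with a toy tabular Q-learning loop. The states, actions, rewards, and transitions below are illustrative assumptions standing in for the paper's Q-EMS model, not its actual formulation:

```python
import random

# Toy Q-EMS sketch: a node learns when to harvest, transfer, or sleep
# based on a coarse battery-level state. All modelling choices here are
# illustrative assumptions, not the paper's actual design.
STATES = ["low", "medium", "high"]          # battery-level buckets
ACTIONS = ["harvest", "transfer", "sleep"]  # energy-management actions
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2           # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def reward(state, action):
    # Toy rewards: harvest when low, offload surplus when high.
    if state == "low" and action == "harvest":
        return 1.0
    if state == "high" and action == "transfer":
        return 1.0
    if action == "sleep":
        return 0.1
    return -0.5

def step(state, action):
    # Toy transitions: harvesting raises the level, transferring lowers it.
    i = STATES.index(state)
    if action == "harvest":
        i = min(i + 1, 2)
    elif action == "transfer":
        i = max(i - 1, 0)
    return STATES[i]

random.seed(0)
state = "medium"
for _ in range(5000):
    # Epsilon-greedy action selection.
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    r = reward(state, action)
    nxt = step(state, action)
    # Standard Q-learning update.
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
print(policy)
```

Under these toy rewards the learned policy harvests when the battery is low and transfers when it is high, matching the balancing behaviour the abstract describes.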
The notion of a sub-exact sequence is a generalization of the exact sequence in algebra, especially for modules. A module over a ring R is a generalization of the notion of a vector space over a field F. Refers to a spec...
Educating the public on green products has been widely carried out. Consumers who buy green products can be profiled by segmenting green consumers. In green marketing, consumer education on sustainable consumption ...
This research proposes a methodology for identifying the optimal feature combination for a Support Vector Machine (SVM) based on edge and texture features. Canny, Sobel, and Prewitt for edge detection, and GLCM, Gabor, and LBP for texture analysis, are popular feature extraction techniques widely employed across applications. The approach combines edge and texture features with an SVM to classify rice into five types: Ipsala, Jasmine, Karacadag, Arborio, and Basmati. Experimental results demonstrate that an SVM using Canny, Sobel, and Prewitt edge features combined with Gabor texture features outperforms the comparative methods. A comparative analysis based on accuracy scores from confusion matrices was conducted. Combining all edge methods (Canny, Sobel, and Prewitt) with Gabor features in SVM classification yields the best model for rice classification, achieving an accuracy of 96%.
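The "combined features" idea can be sketched with toy stand-ins: a Sobel edge response and an LBP texture code computed on a tiny grayscale grid, concatenated into one feature vector for a downstream classifier such as an SVM. The 4x4 image and hand-rolled filters are illustrative; a real pipeline would use full-size images and library implementations (e.g. OpenCV or scikit-image):

```python
# Toy 4x4 grayscale image with a vertical edge down the middle.
img = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

def sobel_x(img, r, c):
    # Horizontal-gradient Sobel response at interior pixel (r, c).
    return sum(SOBEL_X[i][j] * img[r - 1 + i][c - 1 + j]
               for i in range(3) for j in range(3))

def lbp(img, r, c):
    # 8-neighbour local binary pattern code around (r, c).
    centre = img[r][c]
    nbrs = [img[r + dr][c + dc] for dr, dc in
            [(-1, -1), (-1, 0), (-1, 1), (0, 1),
             (1, 1), (1, 0), (1, -1), (0, -1)]]
    return sum((v >= centre) << k for k, v in enumerate(nbrs))

# Edge and texture descriptors over the interior pixels, concatenated:
edge_feats = [sobel_x(img, r, c) for r in range(1, 3) for c in range(1, 3)]
texture_feats = [lbp(img, r, c) for r in range(1, 3) for c in range(1, 3)]
feature_vector = edge_feats + texture_feats  # "combined" feature vector
print(feature_vector)
```

Concatenating descriptors this way is the standard route to a fixed-length input for an SVM; the paper's contribution lies in which combination of edge and texture extractors performs best.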
ISBN (digital): 9798350377088
ISBN (print): 9798350377095
As the global population ages, the number of elderly people who need hearing aids is increasing. This paper proposes an architecture designed to enhance speech according to noise conditions. After noise types are identified by a Convolutional Neural Network (CNN) noise-classification model, these noise conditions are combined with the noisy speech and fed into a conditional Generative Adversarial Network (cGAN) for speech enhancement. The CNN noise-classification model achieves an average recognition accuracy of nearly 92%, with a Kappa value as high as 0.88. The cGAN speech enhancement achieves lower error values on the Mel-Cepstral Distortion (MCD) measure. Although some parts of the human voice may be filtered out, the results demonstrate the potential of the proposed algorithm.
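The abstract reports Cohen's Kappa alongside accuracy; for readers unfamiliar with the measure, here is how Kappa is computed from a confusion matrix. The 2x2 counts below are made-up illustrative numbers, not the paper's results:

```python
# Confusion matrix: rows are true classes, columns are predicted classes.
# Counts are illustrative, not from the paper.
cm = [
    [45, 5],
    [3, 47],
]

total = sum(sum(row) for row in cm)
# Observed agreement: fraction of predictions on the diagonal.
observed = sum(cm[i][i] for i in range(len(cm))) / total
# Expected chance agreement: sum of products of marginal proportions.
expected = sum(
    (sum(cm[i]) / total) * (sum(row[i] for row in cm) / total)
    for i in range(len(cm))
)
# Kappa corrects observed agreement for agreement expected by chance.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 3))  # 0.84 for these counts
```

Because it discounts chance agreement, a Kappa of 0.88 is a stronger claim than raw accuracy alone, particularly when the noise classes are imbalanced.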
Every patient has a health record, written as a statement of the patient's conditions, treatments, and medications. Nowadays these records are digitized as Electronic Health Records (EHRs), which can be copied and shared easily; yet an EHR is by nature private, shared only between the patient and the healthcare provider, not with the public. Privacy can be preserved by storing EHRs on a blockchain network, where only authorized users or nodes can access and write data. A new block is created by consensus among nodes. In a public blockchain, any node can join, and reaching consensus requires a resource-intensive cryptographic challenge; in a private blockchain, only selected nodes can join and take part in the network or in specific transactions. In healthcare, data privacy is the main concern, and a secure, scalable, and efficient blockchain is needed; a consortium blockchain is therefore a good match for EHRs, letting multiple healthcare providers operate on a single platform to conduct transactions and share EHRs among members. This study analyzes 15 articles retrieved from IEEE Xplore, ScienceDirect, and Scopus, screened with the keywords "Consortium Blockchain" and "EHR", and proposes a new blockchain model that addresses challenges in access, storage, sharing, privacy, and performance.
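The tamper-evidence that makes a blockchain attractive for EHR storage comes from hash-linking: each block commits to the hash of its predecessor, so an altered record breaks every later link. A minimal stdlib sketch follows (field names and records are illustrative; a real consortium chain adds consensus, permissions, and signatures):

```python
import hashlib
import json

def block_hash(block):
    # Deterministic hash over the block's committed fields.
    payload = json.dumps(
        {"prev": block["prev"], "record": block["record"]}, sort_keys=True
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(prev_hash, record):
    return {"prev": prev_hash, "record": record}

# Two illustrative EHR entries chained together.
genesis = make_block("0" * 64, {"patient": "P001", "note": "baseline visit"})
second = make_block(block_hash(genesis), {"patient": "P001", "note": "follow-up"})

def valid(chain):
    # Each block must commit to the recomputed hash of its predecessor.
    return all(cur["prev"] == block_hash(prev)
               for prev, cur in zip(chain, chain[1:]))

ok_before = valid([genesis, second])
genesis["record"]["note"] = "altered"  # tamper with an earlier record
ok_after = valid([genesis, second])
print(ok_before, ok_after)  # True False
```

Validation recomputes each predecessor's hash rather than trusting a stored value, which is what makes the tampering detectable.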