Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system ***; due to the model's inherent uncertainty, rigorous validation is required before it can be applied to real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the "black-box" nature of DRL makes testing model behavior difficult. We propose a novel performance-improvement framework based on probabilistic automata, which proactively identifies and corrects critical vulnerabilities of DRL systems so that the performance of DRL models in real tasks can be improved with minimal model ***, a probabilistic automaton is constructed from the historical trajectories of the DRL system by abstracting states into probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) identifies the key PDMU-action pairs with the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, within each key PDMU, we search for the new action with the greatest impact on favorable outcomes. Finally, the key PDMU, the undesirable action, and the new action are encapsulated as a monitor that steers the DRL system toward more favorable results through real-time monitoring and correction. Evaluations in two standard reinforcement learning environments and three real job-scheduling scenarios confirm the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
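As an illustration of the abstraction-and-search idea described above, here is a minimal sketch, not the paper's implementation: the PDMU abstraction function, the "good"/"bad" outcome labels, and the trajectory layout are all assumptions. It builds a probabilistic automaton from state-action trajectories and walks backwards from the adverse terminal node with a reverse BFS:

```python
from collections import defaultdict, deque

def build_automaton(trajectories, abstract):
    """Count transitions between abstract states (PDMUs) under each action.
    Each trajectory is (list of (state, action) pairs, outcome in {"good", "bad"})."""
    trans = defaultdict(lambda: defaultdict(int))  # (pdmu, action) -> next node -> count
    bad_hits = defaultdict(int)                    # (pdmu, action) -> #trajectories ending badly
    for steps, outcome in trajectories:
        pdmus = [abstract(s) for s, _ in steps]
        for i, (_, action) in enumerate(steps):
            nxt = pdmus[i + 1] if i + 1 < len(steps) else outcome
            trans[(pdmus[i], action)][nxt] += 1
            if outcome == "bad":
                bad_hits[(pdmus[i], action)] += 1
    return trans, bad_hits

def key_pairs_reverse_bfs(trans, start="bad"):
    """Walk backwards from the adverse terminal; PDMU-action pairs reached
    earliest (closest to failure) are the candidates for monitoring."""
    preds = defaultdict(set)
    for (pdmu, action), nexts in trans.items():
        for nxt in nexts:
            preds[nxt].add((pdmu, action))
    seen_pairs, order = set(), []
    seen_nodes, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for pair in sorted(preds.get(node, ())):   # sorted for determinism
            if pair not in seen_pairs:
                seen_pairs.add(pair)
                order.append(pair)
            if pair[0] not in seen_nodes:
                seen_nodes.add(pair[0])
                queue.append(pair[0])
    return order
```

A monitor would then watch for the first pairs in `order` at run time and substitute the searched-for favorable action.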
Sleep apnea (SA) is a sleep-related breathing disorder characterized by breathing pauses during sleep. A person's sleep schedule is significantly influenced by their hectic lifestyle, which may include unhea...
Modern apps require high computing resources for real-time data processing, allowing app users (AUs) to access real-time information. Edge computing (EC) provides dynamic computing resources to AUs for real-time data processing. However, due to resource and coverage constraints, edge servers (ESs) in a specific area can only serve a limited number of AUs. Hence, the app user allocation problem (AUAP) becomes challenging in the EC environment. This paper proposes a quantum-inspired differential evolution algorithm (QDE-UA) for efficient user allocation in the EC environment. The quantum vector is designed to provide a complete solution to the AUAP. The fitness function considers the minimum use of ESs, the user allocation rate (UAR), energy consumption, and load balance. Extensive simulations and hypothesis-based statistical analyses (ANOVA, Friedman test) are performed to show the significance of the proposed QDE-UA. The results indicate that QDE-UA outperforms the majority of existing strategies, with an average UAR improvement of 112.42% and a 140.62% enhancement in load balance while utilizing 13.98% fewer ESs. Due to the higher UAR, QDE-UA shows 59.28% higher total energy consumption on average; however, the lower energy consumption per AU is evidence of its energy efficiency.
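A multi-objective fitness of the kind described above might be sketched as follows; the weights, normalizations, and capacity handling are illustrative assumptions, not the paper's actual formulation:

```python
def allocation_fitness(alloc, num_users, num_servers, capacity,
                       w=(0.5, 0.1, 0.2, 0.2)):
    """Score a candidate allocation (higher is better).
    alloc[i] is the ES index serving AU i, or None if the AU is unallocated.
    Combines UAR, ES frugality, load balance, and a per-AU energy proxy;
    the weights w are illustrative, not taken from the paper."""
    loads = [0] * num_servers
    allocated = 0
    for es in alloc:
        if es is not None:
            loads[es] += 1
            allocated += 1
    if any(l > c for l, c in zip(loads, capacity)):
        return 0.0                                   # infeasible: ES capacity exceeded
    uar = allocated / num_users                      # user allocation rate (maximize)
    used = sum(1 for l in loads if l > 0)
    es_frugality = 1 - used / num_servers            # fewer active ESs is better
    mean = allocated / max(used, 1)
    spread = sum(abs(l - mean) for l in loads if l > 0)
    balance = 1 - spread / max(allocated, 1)         # even loads score higher
    energy = 1 / (1 + used / max(allocated, 1))      # active-ES overhead shared per AU
    return w[0] * uar + w[1] * es_frugality + w[2] * balance + w[3] * energy
```

In a differential-evolution loop, each quantum vector would be decoded into such an `alloc` list and ranked by this score.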
The drug traceability model is used for ensuring drug quality and its safety for customers in the medical supply chain. The healthcare supply chain is a complex network, which is susceptible to failures and leakage of...
This systematic review gave special attention to diabetes and the advancements in food and nutrition needed to prevent or manage diabetes in all its forms. There are two main forms of diabetes mellitus: Type 1 (T1D) a...
The primary objective of fog computing is to minimize the reliance of IoT devices on the cloud by leveraging the resources of the fog network. Typically, IoT devices offload computation tasks to the fog to meet different task requirements such as latency in task execution, computation costs, etc. Selecting a fog node that meets the task requirements is therefore a crucial challenge. To choose an optimal fog node, access to each node's resource-availability information is essential. Existing approaches often assume state availability or depend on a subset of state information to design mechanisms tailored to different task requirements. In this paper, OptiFog, a cluster-based fog computing architecture for acquiring state information followed by optimal fog node selection and task offloading, is proposed. Additionally, a continuous-time Markov chain (CTMC) based stochastic model for predicting resource availability on fog nodes is proposed. This model removes the need to frequently synchronize the resource-availability status of fog nodes while still maintaining up-to-date state information. Extensive simulation results show that OptiFog lowers task-execution latency considerably and schedules almost all tasks at the fog layer, compared to the existing state of the art.
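For the CTMC-based availability prediction, a minimal two-state (free/busy) sketch — an assumed simplification, not the paper's model — gives the transient probability that a node is free at time t in closed form, so the cluster head can rank nodes without polling them:

```python
import math

def availability(t, lam, mu, p_free0=1.0):
    """Transient probability that a fog node is free at time t under a
    two-state (free <-> busy) continuous-time Markov chain.
    lam: rate free -> busy (task arrivals); mu: rate busy -> free (completions).
    Closed-form solution of the Kolmogorov forward equations: the probability
    decays exponentially from its initial value toward the stationary one."""
    pi_free = mu / (lam + mu)                        # stationary availability
    return pi_free + (p_free0 - pi_free) * math.exp(-(lam + mu) * t)
```

A node last observed free (`p_free0 = 1`) is predicted to drift toward its stationary availability `mu / (lam + mu)`, which is when a fresh synchronization would actually be worth its cost.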
In the enormous field of Natural Language Processing (NLP), deciphering the intended significance of a word among a multitude of possibilities is referred to as word sense disambiguation. This process is essential for...
The earthquake early warning (EEW) system provides advance notice of potentially damaging ground shaking. In EEW, early estimation of magnitude is crucial for timely rescue operations. A set of thirty-four features is extracted using the primary-wave earthquake precursor signal and site-specific *** Japan's earthquake magnitude dataset, there is a chance of high imbalance concerning earthquakes above strong impact. This imbalance causes a high prediction error when training advanced machine learning or deep learning models. In this work, Conditional Tabular Generative Adversarial Networks (CTGAN), a deep machine learning tool, is utilized to learn the characteristics of the first arrival of earthquake P-waves and to generate a synthetic dataset based on this information. The actual and mixed (synthetic and actual) datasets are used to train the stacked ensemble magnitude prediction model, MagPred, designed specifically for this study. There are 13,295, 3,989, and 1,710 records designated for training, testing, and validation, respectively. The mean absolute errors on the test dataset for single-station magnitude detection using the early three, four, and five seconds of the P-wave are 0.41, 0.40, and 0.38 MJMA. The study demonstrates that Generative Adversarial Networks (GANs) can provide good results for single-station magnitude prediction and can be effective where less seismic data is available. It also shows that the machine learning method yields better magnitude detection results than several regression models. The multi-station magnitude prediction study has been conducted on the prominent Osaka, Off-Fukushima, and Kumamoto earthquakes. Furthermore, to validate the performance of the model, an inter-region study has been performed on earthquakes of the India/Nepal region. The study demonstrates that GANs can deliver effective magnitude estimation compared with non-GAN-based methods. This has a high potential for wid...
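The dataset-mixing step — topping up the under-represented strong-magnitude class with synthetic rows before training — can be sketched as below; the record layout, the 5.0 threshold, and the function name are illustrative assumptions, not details from the paper:

```python
def mix_synthetic(real, synthetic, mag_key="mag", threshold=5.0, target=None):
    """Augment the rare strong-earthquake class (mag >= threshold) with
    synthetic records (e.g. CTGAN output) until it matches the count of
    weaker records, returning the mixed training set."""
    strong = [r for r in real if r[mag_key] >= threshold]
    weak = [r for r in real if r[mag_key] < threshold]
    if target is None:
        target = len(weak)                       # balance the two classes
    need = max(0, target - len(strong))
    extra = [s for s in synthetic if s[mag_key] >= threshold][:need]
    return real + extra
```

The mixed set, rather than the raw imbalanced one, would then feed the stacked ensemble so the rare strong events are no longer drowned out during training.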
Cardiovascular disease remains a major cause of mortality and morbidity, making accurate classification crucial. This paper introduces a novel heart disease classification model utilizing Electrocardiogram (ECG) sign...
Large language models (LLMs) have demonstrated promising in-context learning capabilities, especially with instructive prompts. However, recent studies have shown that existing large models still face challenges in sp...