Dynamic flexible job shop scheduling (DFJSS) aims to achieve optimal efficiency in production planning in the face of dynamic events. In practice, deep Q-network (DQN) algorithms have been intensively studied for solving various DFJSS problems. However, these algorithms often produce moving targets for a given job-shop state, which inevitably leads to unstable training and severe performance deterioration. In this paper, we propose a training algorithm based on a genetic algorithm to address this critical issue efficiently and effectively. Specifically, a state feature extraction method is first developed that can effectively represent different job-shop scenarios. Furthermore, a genetic encoding strategy is designed that reduces the encoding length to enhance search ability. In addition, an evaluation strategy is proposed to calculate a fixed target for each job-shop state, which avoids the parameter updates of target networks. With these designs, the DQNs can be trained stably, and their performance is greatly improved. Extensive experiments demonstrate that the proposed algorithm outperforms state-of-the-art peer competitors in terms of both effectiveness and generalizability across multiple scheduling scenarios of different scales. In addition, an ablation study reveals that the proposed algorithm outperforms DQN algorithms with different target-network update frequencies.
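The fixed-target idea can be illustrated with a minimal sketch (not the paper's actual algorithm): a simple elitist genetic algorithm evolves the weights of a toy linear Q-function against pre-computed, fixed per-state targets, so the regression objective never moves during training. The transition data, linear encoding, and hyperparameters below are all hypothetical.

```python
import random

random.seed(0)

# Toy transitions: (state_features, action, fixed_target).
# The fixed targets stand in for an evaluation strategy that assigns one
# stable target per job-shop state, with no target network to update.
DATA = [((1.0, 0.0), 0, 0.5), ((0.0, 1.0), 1, 1.0), ((1.0, 1.0), 0, 0.8)]
N_ACTIONS, N_FEATS = 2, 2

def q_value(weights, state, action):
    # Linear Q(s, a): one weight vector per action (hypothetical encoding).
    w = weights[action * N_FEATS:(action + 1) * N_FEATS]
    return sum(wi * si for wi, si in zip(w, state))

def fitness(weights):
    # Negative MSE against the *fixed* targets: a stationary training signal.
    err = sum((q_value(weights, s, a) - y) ** 2 for s, a, y in DATA)
    return -err / len(DATA)

def evolve(pop_size=30, gens=200, sigma=0.1):
    # Elitist GA: keep the best half, refill with Gaussian mutations of elites.
    pop = [[random.gauss(0, 1) for _ in range(N_ACTIONS * N_FEATS)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[:pop_size // 2]
        pop = elite + [[w + random.gauss(0, sigma)
                        for w in random.choice(elite)]
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

best = evolve()
```

Because the targets are fixed up front, the fitness landscape is stationary, which is the stability property the abstract attributes to avoiding target-network parameter updates.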
Multi-access edge computing has become an effective paradigm to provide offloading services for computation-intensive and delay-sensitive tasks on vehicles. However, high mobility of vehicles usually incurs spatio-tem...
On July 18, 2021, the PKU-DAIR Lab (Data and Intelligence Research Lab at Peking University) openly released the source code of Hetu, a highly efficient and easy-to-use distributed deep learning (DL) framework. Hetu is the first distributed DL system developed by an academic group at a Chinese university, and it takes into account both the high availability demanded by industry and the innovation pursued in academia. Developed through independent research, Hetu is completely decoupled from existing DL systems and has unique characteristics. The public release of the Hetu system will help researchers and practitioners carry out frontier MLSys (machine learning systems) research and promote innovation and industrial upgrading.
1 Introduction On-device deep learning (DL) on mobile and embedded IoT devices drives various applications [1] such as robotics image recognition [2] and drone swarm classification [3]. Efficient local data processing preserves privacy, enhances responsiveness, and saves energy. However, current on-device DL relies on predefined patterns, leading to accuracy and efficiency limitations. It is difficult to provide feedback on data processing performance during the data acquisition stage, as processing typically occurs after data acquisition.
Sleep apnea (SA) is a sleep-related breathing disorder characterized by breathing pauses during sleep. A person’s sleep schedule is significantly influenced by that person’s hectic lifestyle, which may include unhea...
The primary objective of fog computing is to minimize the reliance of IoT devices on the cloud by leveraging the resources of the fog network. Typically, IoT devices offload computation tasks to the fog to meet different task requirements such as task-execution latency, computation cost, etc. Selecting a fog node that meets these task requirements is therefore a crucial challenge. To choose an optimal fog node, access to each node's resource-availability information is essential. Existing approaches often assume state availability or depend on a subset of state information to design mechanisms tailored to different task requirements. In this paper, OptiFog, a cluster-based fog computing architecture for acquiring state information followed by optimal fog node selection and task offloading, is proposed. Additionally, a continuous-time Markov chain based stochastic model for predicting resource availability on fog nodes is proposed. This model removes the need to frequently synchronize the resource-availability status of fog nodes while keeping the state information up to date. Extensive simulation results show that OptiFog lowers task-execution latency considerably and schedules almost all tasks at the fog layer, compared with the existing state of the art.
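As an illustration of how a continuous-time Markov chain can predict resource availability (a minimal sketch, not OptiFog's actual model): if a fog node with c compute slots is modeled as an M/M/c/c birth-death chain with arrival rate λ and service rate μ, the stationary distribution gives the probability that at least one slot is free. The slot counts and rates below are assumed.

```python
from math import factorial

def erlang_b(c, a):
    """Blocking probability of an M/M/c/c loss system with offered load
    a = lambda/mu: the stationary probability that all c slots are busy."""
    probs = [a ** n / factorial(n) for n in range(c + 1)]
    return probs[-1] / sum(probs)

def availability(c, lam, mu):
    """Probability a fog node with c slots can accept an offloaded task."""
    return 1.0 - erlang_b(c, lam / mu)
```

A scheduler could rank candidate fog nodes by `availability(...)` instead of polling them, which is the kind of synchronization saving the abstract describes.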
Temporal knowledge graph (TKG) reasoning has seen widespread use for modeling real-world events, particularly in extrapolation settings. Nevertheless, most previous studies are embedding models, which require both entity and relation embeddings to make predictions, ignoring the semantic correlations among different entities and relations within the same timestamp. This can lead to random and nonsensical predictions when unseen entities or relations occur. Furthermore, many existing models exhibit limitations in handling highly correlated historical facts with extensive temporal depth: they often either overlook such facts or overly accentuate the relationships between recurring past occurrences and their current counterparts. Due to the dynamic nature of TKGs, effectively capturing the evolving semantics between different timestamps can be challenging. To address these shortcomings, we propose the recurrent semantic evidence-aware graph neural network (RE-SEGNN), a novel graph neural network that can learn the semantics of entities and relations simultaneously. For the former challenge, our model can predict a possible answer to a missing quadruple based on semantics when facing unseen entities or relations. For the latter problem, we build on the observation that both the recency and frequency of semantic history tend to confer a higher reference value on the present. We use the Hawkes process to compute the semantic trend, which allows the semantics of recent facts to gain more attention than those of distant facts. Experimental results show that RE-SEGNN outperforms all SOTA models in entity prediction on six widely used datasets, and on five datasets in relation prediction. Furthermore, a case study shows how our model deals with unseen entities and relations.
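The recency-and-frequency weighting can be sketched with the standard Hawkes intensity (a generic illustration, not RE-SEGNN's exact formulation): past occurrences of a semantic pattern at times t_i contribute exponentially decayed excitation α·exp(−δ(t − t_i)) on top of a base rate μ, so recent and frequent history dominates. The parameter values below are assumptions.

```python
from math import exp

def semantic_trend(event_times, now, mu=0.1, alpha=1.0, delta=0.5):
    """Hawkes-style intensity at time `now`: each past event adds an
    exponentially decayed contribution, so recent facts weigh more than
    distant ones and repeated facts weigh more than single ones."""
    return mu + sum(alpha * exp(-delta * (now - t))
                    for t in event_times if t < now)
```

Under this kernel, a fact seen yesterday outweighs one seen ten steps ago, and two recent occurrences outweigh one, matching the stated intuition.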
The drug traceability model is used for ensuring drug quality and its safety for customers in the medical supply chain. The healthcare supply chain is a complex network, which is susceptible to failures and leakage of...
Software-defined networks (SDNs) present a novel network architecture that is widely used in various datacenters. However, SDNs also suffer from many types of security threats, among which the distributed denial of service (DDoS) attack, which aims to drain the resources of SDN switches and controllers, is one of the most common. Once a switch or controller is damaged, the network services can be interrupted. Many defense schemes against DDoS attacks have been proposed from the perspective of attack detection; however, such schemes are known to suffer from time-consuming detection and unsatisfactory accuracy, which can leave the network service unavailable before specific countermeasures are taken. To address this issue through a systematic investigation, we propose an elaborate resource-management mechanism against DDoS attacks in an SDN. Specifically, by considering the SDN topology, we leverage the M/M/c queuing model to measure the resistance of an SDN to DDoS attacks. Network administrators can therefore invest a reasonable amount of resources in SDN switches and controllers to defend against DDoS attacks while guaranteeing quality of service (QoS). Comprehensive analyses and empirical data-based experiments demonstrate the effectiveness of the proposed approach.
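The M/M/c measurement can be made concrete with the textbook Erlang C formula (a generic worked example; the paper's topology-aware model is more elaborate): for a controller with c processing units, arrival rate λ (inflated by attack traffic), and per-unit service rate μ, the probability that a request queues and its mean waiting time follow directly. The rates below are illustrative.

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability an arriving request must wait in an M/M/c
    queue with offered load a = lambda/mu (requires a < c for stability)."""
    top = (a ** c / factorial(c)) * (c / (c - a))
    bottom = sum(a ** n / factorial(n) for n in range(c)) + top
    return top / bottom

def mean_wait(c, lam, mu):
    """Mean queueing delay W_q = C(c, a) / (c*mu - lambda): one way to
    quantify how close a DDoS-loaded controller is to QoS violation."""
    return erlang_c(c, lam / mu) / (c * mu - lam)
```

An administrator can sweep λ upward to model attack intensity and pick the smallest c keeping `mean_wait` under the QoS bound, which mirrors the resource-investment decision described above.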
Data race is one of the most important concurrency anomalies in multi-threaded programs. Recently, constraint-based techniques have been leveraged for race detection, which are able to find all the races that can be found by any other sound race detector. However, this constraint-based approach has serious limitations in helping programmers analyze and understand data races. First, it may report a large number of false positives due to the unrecognized dataflow propagation of the program. Second, it recommends a wide range of thread context switches to schedule each reported race (including the false ones) whenever the race is exposed during the constraint-solving process. This ad hoc recommendation imposes too many context switches, which complicates data race analysis. To address these two limitations in state-of-the-art constraint-based race detection, this paper proposes DFTracker, an improved constraint-based race detector that recommends each data race with minimal thread context switches. Specifically, we reduce the false positives by analyzing and tracking the dataflow in the program; by this means, DFTracker avoids the unnecessary analysis of false race reports. We further propose a novel algorithm to recommend an effective race schedule with minimal thread context switches for each data race. The experimental results on real applications demonstrate that 1) without removing any true data race, DFTracker effectively prunes false positives by 68% in comparison with the state-of-the-art constraint-based race detector, and 2) DFTracker recommends as few as 2.6-8.3 (4.7 on average) thread context switches per data race in real-world programs, which is 81.6% fewer context switches per data race than the state-of-the-art constraint-based race detector. Therefore, DFTracker can be used as an effective tool for programmers to understand data races.
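For background on what "sound race detection" checks, the core condition can be sketched with vector clocks (a simple happens-before check, not DFTracker's constraint-based formulation; all names below are illustrative): two accesses to the same location race if at least one is a write and neither access happens before the other.

```python
def concurrent(vc1, vc2):
    """Two vector clocks are concurrent (unordered) if neither
    happens-before the other, i.e. neither is pointwise <= the other."""
    le = all(a <= b for a, b in zip(vc1, vc2))
    ge = all(a >= b for a, b in zip(vc1, vc2))
    return not le and not ge

def is_race(acc1, acc2):
    """An access is (vector_clock, is_write); a race needs at least one
    write and no happens-before ordering between the two accesses."""
    (vc1, w1), (vc2, w2) = acc1, acc2
    return (w1 or w2) and concurrent(vc1, vc2)
```

Constraint-based detectors go further by encoding all feasible reorderings for an SMT solver, which is what makes them complete relative to sound detectors but also what produces the false positives and excessive context-switch schedules discussed above.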