This paper deals with the development of a new time-varying adaptive feedforward sliding mode control (SMC). The proposed controller is designed based on the original SMC by adding a feedforward term, which can be made adaptive in order to estimate the dynamic parameters. The main contribution of this study is a time-varying adaptation gain, proposed to identify these parameters efficiently; the objective is to outperform the original SMC scheme. The proposed controller has been designed for and applied to parallel kinematic manipulators (PKMs). It is validated through numerical simulation under different operating conditions on the VELOCE PKM. The obtained simulation results show the efficiency of the proposed controller in terms of tracking performance and robustness, with improvements of up to 55% with respect to another adaptive controller (Adaptive Feedforward PD: AFFPD). Copyright (C) 2022 The Authors.
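The abstract does not reproduce the control law; as a minimal sketch, a standard adaptive feedforward SMC of the kind described, with all symbols generic rather than taken from the paper, can be written as

\begin{align}
  s &= \dot{e} + \lambda e, \qquad e = q - q_d, \\
  u &= Y(q, \dot{q}, \dot{q}_d, \ddot{q}_d)\,\hat{\theta} - K\,\mathrm{sgn}(s), \\
  \dot{\hat{\theta}} &= -\Gamma(t)\, Y^{\top} s, \qquad \Gamma(t) = \Gamma(t)^{\top} \succ 0,
\end{align}

where \(Y\hat{\theta}\) is the adaptive feedforward term estimating the dynamic parameters, \(-K\,\mathrm{sgn}(s)\) is the original sliding mode term, and \(\Gamma(t)\) plays the role of the time-varying adaptation gain; with a constant \(\Gamma(t) = \Gamma_0\) this degenerates to a conventional fixed-gain adaptive SMC.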
Graphs are ubiquitous because their fields of application are varied. Well-known examples are social networks, biological networks, and path-finding in road networks. Real-world graph processing is very challenging becau...
ISBN (print): 9781728190747
A locking protocol is an essential component of resource management in real-time systems, coordinating mutually exclusive accesses to shared resources from different tasks. OpenMP is a promising framework for multi-core real-time embedded systems and provides spin locks to protect shared resources. In this paper, we propose a resource model for analyzing OpenMP programs with spin locks. Based on our resource model, we also develop a technique for analyzing the blocking time, which impacts the total workload. Notably, the resource model captures the detailed resource-access behavior of the programs, making our blocking analysis more accurate. Further, we derive a schedulability analysis for real-time OpenMP tasks with spin locks protecting shared resources. Experiments with realistic OpenMP programs are conducted to evaluate the performance of our method.
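The abstract does not state the blocking analysis itself; as a point of reference, a classic coarse per-task bound for spin locks, which a detailed resource model like the one described can tighten, is (notation assumed here, not taken from the paper)

\begin{equation}
  B_i \;\le\; \sum_{q \in \mathrm{res}(\tau_i)} N_{i,q}\,(m-1)\,L_q^{\max},
\end{equation}

where \(m\) is the number of cores, \(N_{i,q}\) is the number of requests task \(\tau_i\) issues for resource \(q\), and \(L_q^{\max}\) is the longest critical section guarded by the lock on \(q\). The blocking term \(B_i\) is added to the task's workload in the schedulability test, which is why a more accurate model of resource-access behavior directly sharpens the analysis.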
As a non-linear data structure consisting of nodes and edges, graph data span many different domains. In the real world, applications based on such data structures are often time-sensitive, that is, the value of graph data tends to decrease with time. Applications based on spatio-temporal graphs are typical representatives of this time sensitivity, since the time dimension is an inherent feature of spatio-temporal data. A distributed stream processing engine (DSPE), in which data are commonly partitioned and processed concurrently by a number of threads to maximize throughput, seems an excellent choice for this requirement. However, it is not feasible to handle such workloads directly using a traditional DSPE. In this paper, we propose a computational model suitable for handling spatio-temporal graphs in a DSPE by reconstructing the DSPE's parallel processing slots. Specifically, our proposal includes a general processing framework for the spatio-temporal graph data structure, a state-information compensation mechanism to ensure the correctness of processing such stateful operations in a DSPE, and a lightweight summary-information calculation method to ensure the performance of the system. Empirical studies on real-world stream applications validate the usefulness of our proposals and prove the considerable advantage of our approaches over state-of-the-art solutions in the literature.
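The abstract describes the architecture only at a high level; the following is a minimal sketch, not the paper's implementation, of the slot structure it implies: timestamped graph edges are partitioned across worker slots, each slot keeps stateful local adjacency data, and each slot publishes a lightweight summary (here just an edge count and latest timestamp) instead of its full state. All names and the summary contents are illustrative assumptions.

```python
# Sketch only: partitioned, stateful processing of a spatio-temporal edge
# stream across worker slots, with a cheap per-slot summary for a coordinator.
from collections import defaultdict
from queue import Queue
from threading import Thread

NUM_SLOTS = 4

def slot_worker(slot_id: int, inbox: Queue, summaries: dict) -> None:
    adjacency = defaultdict(set)          # local graph state for this slot
    count, latest_ts = 0, 0
    while True:
        item = inbox.get()
        if item is None:                  # poison pill: stream finished
            break
        ts, src, dst = item
        adjacency[src].add(dst)           # stateful update of the local subgraph
        count += 1
        latest_ts = max(latest_ts, ts)
        summaries[slot_id] = (count, latest_ts)  # lightweight summary, not full state

inboxes = [Queue() for _ in range(NUM_SLOTS)]
summaries = {}
workers = [Thread(target=slot_worker, args=(i, inboxes[i], summaries))
           for i in range(NUM_SLOTS)]
for w in workers:
    w.start()

edges = [(1, "a", "b"), (2, "b", "c"), (3, "a", "c"), (5, "c", "d")]
for ts, src, dst in edges:
    inboxes[hash(src) % NUM_SLOTS].put((ts, src, dst))  # partition by source vertex
for q in inboxes:
    q.put(None)
for w in workers:
    w.join()
print(summaries)  # per-slot (edge count, latest timestamp); keys depend on hashing
```

A real DSPE slot would additionally apply the state-information compensation step when state migrates or arrives late, which this sketch omits.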
Deep learning technologies are empowering IoT devices with an increasing number of intelligent services. However, the contradiction between resource-constrained IoT devices and intensive computing makes it common to transfer data to the cloud center for executing all DNN inference, or to dynamically allocate DNN computations between IoT devices and the cloud center. Existing approaches depend strongly on the cloud center and require the support of a reliable and stable network; thus, they may directly cause unreliable or even unavailable service in extreme or unstable environments. We propose DeColla, a decentralized and collaborative deep learning inference system for IoT devices, which completely migrates DNN computations from the cloud center to the IoT device side, relying on a collaborative mechanism to accelerate DNN inference that is difficult for an individual IoT device to accomplish. DeColla uses a parallel acceleration strategy via a DRL-based adaptive allocation for collaborative inference, which aims to improve inference efficiency and robustness. To illustrate the advantages and robustness of DeColla, we built a testbed and employed DeColla to evaluate a MobileNet DNN trained on the ImageNet dataset; we also performed object recognition for a mobile web AR application and conducted extensive experiments to analyze latency, resource usage, and robustness against existing methods. Numerical results show that DeColla outperforms other methods in terms of latency and resource usage; in particular, it reduces latency by at least 2.5 times compared with the hierarchical inference method when the collaboration is interrupted abnormally.
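The abstract does not detail how DNN computations are divided among devices; a common primitive in collaborative inference, which a DRL-based allocator could dispatch, is spatial tiling of a layer's input so that each device computes one tile of the output. The sketch below only demonstrates that primitive with a single-channel convolution; the shapes, halo width, and two-device split are illustrative assumptions, not DeColla's actual allocation.

```python
# Sketch: split one convolution's input spatially (with a halo) so two
# devices can each compute half of the output tile-by-tile.
import numpy as np

def conv2d_valid(x: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' 2-D convolution (single channel), for illustration only."""
    kh, kw = k.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

x = np.random.rand(32, 32)   # input feature map (illustrative size)
k = np.random.rand(3, 3)     # 3x3 kernel -> halo of kh - 1 = 2 rows
halo = k.shape[0] - 1

# Device 0 takes the top half plus a halo; device 1 takes the bottom half.
top = conv2d_valid(x[:16 + halo, :], k)   # would run on device 0
bottom = conv2d_valid(x[16:, :], k)       # would run on device 1
tiled = np.vstack([top, bottom])

assert np.allclose(tiled, conv2d_valid(x, k))  # tiling matches full inference
```

Splitting along height with a halo of \(k_h - 1\) rows keeps the tiled result identical to full inference while letting the two tiles run concurrently on different devices.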
ISBN (print): 9781665405072
This paper proposes a new method called the Compensated Distributed-Parameter Line Decoupling (CDLD) method, which is used to decouple distribution networks among parallel processing cores in a real-time multi-core environment. Due to the short length of distribution grid lines, decoupling the distribution network for real-time processing is challenging, and approaches such as Stublines and State Space Nodal (SSN) solvers have been proposed. Both approaches have limitations, and the method proposed in this work addresses them. Specifically, the CDLD method can be used as an enhanced Stubline decoupling, improving its accuracy and transient response, or it can be combined with an SSN solver to improve its computational performance and remove bottleneck issues. The CDLD method was tested on three IEEE benchmark systems in a real-time environment, and significant improvement was realized in network response and computational performance compared to the prevailing methods. The combined SSN-CDLD method proved to be the most promising approach for network decoupling.
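The abstract does not give the line model; the distributed-parameter (Bergeron) traveling-wave model that underlies Stubline-style decoupling, in standard notation rather than the paper's, is

\begin{align}
  i_{km}(t) &= \frac{v_k(t)}{Z_c} + I_k(t - \tau), &
  I_k(t - \tau) &= -\frac{v_m(t - \tau)}{Z_c} - i_{mk}(t - \tau),
\end{align}

with characteristic impedance \(Z_c = \sqrt{L'/C'}\) and travel time \(\tau = \ell\sqrt{L'C'}\). Since each end at time \(t\) depends on the other end only through history terms at \(t - \tau\), the two subnetworks decouple and can be solved on separate cores whenever \(\tau \ge \Delta t\). Short distribution lines have \(\tau < \Delta t\), so a Stubline artificially inflates \(\tau\) to \(\Delta t\), distorting the response; the compensation in CDLD is presumably aimed at correcting exactly this distortion.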
All existing coflow scheduling algorithms compute dynamic-rate schedules that change the rates of flows during transmission. In this paper, we make a crucial finding: although dynamically adjusting the rates of flows could lead to a better coflow completion time (CCT) in theory, it introduces additional pressure on the congestion control mechanism in the underlying network, which results in poor CCT in practice. This difference between theoretical CCT and practical CCT is further exacerbated in wide-area networks, where the topology does not provide any bisection guarantee as data center networks do. To this end, we designed a fixed-rate coflow scheduling policy called FSCO. Although in theory the best fixed-rate schedule is usually suboptimal, it keeps in-flight traffic relatively steady, reducing the risk of triggering congestion control. The core of FSCO is an efficient scheduling algorithm based on the classic network utility maximization (NUM) framework. We implement a prototype of FSCO and evaluate its performance extensively using real-world topologies and coflow traces. Experimental results show that the total CCT is reduced by up to 30% compared to baselines, while FSCO yields up to 12× speedups compared to the solver.
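The abstract names the NUM framework without details; a fixed-rate formulation of the kind FSCO's algorithm could be built on, with assumed rather than quoted notation, is

\begin{align}
  \max_{x \ge 0} \;\; \sum_{f} U_f(x_f)
  \quad \text{s.t.} \quad \sum_{f:\, e \in \mathrm{path}(f)} x_f \le c_e \quad \forall e,
\end{align}

where \(x_f\) is the single rate assigned to flow \(f\) for its entire lifetime and \(c_e\) is the capacity of link \(e\); the completion time of coflow \(k\) under the fixed-rate schedule is then \(T_k = \max_{f \in k} S_f / x_f\) for flow sizes \(S_f\). Because every \(x_f\) is constant while the flow is in flight, the aggregate traffic on each link stays steady, which is the property the paper credits for avoiding congestion-control pressure.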
The paper presents a new approach and a related real-time parallel simulation tool for modeling and analyzing a swarm of more than 60 distributed autonomous mobile robots communicating over an unreliable and capacity-restricted wireless communication network. It includes physical simulation of static obstacles, dynamic obstacles with scriptable movement, soil condition, active jammers, static and dynamic link obstacles with configurable damping, as well as thermal noise. The simulated ground-based mobile robots use CPBP [12] as a probabilistic model-predictive closed-loop controller in combination with gradient-free cost functions to evaluate complex, unsteady goodness aggregations. The goal of this approach is the development of connection-aware swarm behavior for complex missions such as terrain exploration, formations, convoy escorting, or creation of a mobile ad hoc network in dynamic disaster areas under realistic environmental conditions. The missions can be combined in any manner, and the target extraction of the high-level commands for each agent is done implicitly. To validate the developed behavior, the independent software control kernel can also be used on real robots. (C) 2021 The Authors. Published by Elsevier B.V.
Recently, several parallel frameworks have emerged to utilize the increasing computational capacity of multiprocessors. Parallel tasks are distinguished from traditional sequential tasks in that the subtasks contained in a single parallel task can execute simultaneously on multiple processors. In this study, we consider the scheduling problem of minimizing the number of processors on which parallel real-time tasks can feasibly run. In particular, we focus on scheduling sporadic parallel real-time tasks in which precedence constraints between the subtasks of each parallel task are expressed using a directed acyclic graph (DAG). To address the problem, we formulate an optimization problem that aims to minimize the maximum processing capacity for executing the given tasks. We then suggest a polynomial-time solution consisting of three steps: (1) transform each parallel real-time task into a series of multithreaded segments while respecting the precedence constraints of the DAG; (2) selectively extend the segment lengths; and (3) interpret the problem as a flow network to balance the flows on the terminal edges. We also provide the schedulability bound of the proposed solution: it has a capacity augmentation bound of 2. Our experimental results show that the proposed approach yields higher performance than one developed in a recent study.
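As a minimal sketch of step (1) only, and not the paper's algorithm, the decomposition into multithreaded segments can be illustrated by grouping unit-length subtasks by their longest-path depth in the DAG, so that every precedence edge runs from an earlier segment to a later one; the example DAG and unit execution times are assumptions.

```python
# Sketch: group DAG subtasks into segments by longest-path depth, so each
# segment's subtasks can run in parallel while segments execute in series.
from collections import defaultdict

dag = {                 # adjacency list: subtask -> successors (illustrative)
    "v1": ["v2", "v3"],
    "v2": ["v4"],
    "v3": ["v4"],
    "v4": [],
}

def segment_by_depth(dag: dict) -> list:
    depth = {}
    def longest_path_depth(v: str) -> int:   # memoised longest path from sources
        if v not in depth:
            preds = [u for u, succs in dag.items() if v in succs]
            depth[v] = 0 if not preds else 1 + max(longest_path_depth(u) for u in preds)
        return depth[v]
    for v in dag:
        longest_path_depth(v)
    segments = defaultdict(list)
    for v, d in depth.items():
        segments[d].append(v)               # one segment per depth level
    return [segments[d] for d in sorted(segments)]

print(segment_by_depth(dag))   # [['v1'], ['v2', 'v3'], ['v4']]
```

Steps (2) and (3) of the described solution would then stretch selected segments and balance the resulting per-segment demand via a flow network.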