Ego-pose estimation and dynamic object tracking are two critical problems for autonomous driving systems. The solutions to these problems are generally based on their respective assumptions, i.e., the static world ass...
ISBN (digital): 9798350365221
ISBN (print): 9798350365238
Kubernetes has become the basic platform for building cloud-native applications. However, existing horizontal scaling methods based on Kubernetes suffer from resource redundancy, and combined horizontal and vertical scaling on Kubernetes takes a long time. To address these issues, we first describe the execution processes of three scaling techniques and compare them along several dimensions; second, we construct a formal elastic scaling model for expansion scenarios with the goal of minimizing the total cost; and third, we introduce a heuristic expansion algorithm that balances resource redundancy against execution time as its heuristic information. The experimental results show that the elastic scaling algorithm proposed in this paper outperforms other algorithms.
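The abstract does not reproduce the formal model, but a minimal sketch of the kind of cost-driven expansion decision it describes might look as follows; the option names, cost weights, and numbers are illustrative assumptions, not the paper's actual model.

```python
# Hedged sketch (not the paper's model): a toy cost comparison between horizontal
# scaling (adding replicas), vertical scaling (resizing a pod), and a combined
# action, trading off resource redundancy against scaling execution time.
from dataclasses import dataclass

@dataclass
class ScalingOption:
    name: str
    redundant_cpu: float   # CPU reserved but unused after scaling (cores)
    exec_time_s: float     # time to complete the scaling action (seconds)

def total_cost(opt: ScalingOption, w_redundancy: float = 1.0, w_time: float = 0.1) -> float:
    """Weighted sum used as heuristic information; the weights are illustrative."""
    return w_redundancy * opt.redundant_cpu + w_time * opt.exec_time_s

def choose_expansion(options: list) -> ScalingOption:
    """Greedy heuristic: pick the candidate expansion with the smallest total cost."""
    return min(options, key=total_cost)

if __name__ == "__main__":
    options = [
        ScalingOption("horizontal: +2 replicas", redundant_cpu=1.5, exec_time_s=8),
        ScalingOption("vertical: 2x pod CPU", redundant_cpu=0.4, exec_time_s=45),
        ScalingOption("combined: +1 replica, 1.5x CPU", redundant_cpu=0.7, exec_time_s=30),
    ]
    best = choose_expansion(options)
    print(f"chosen: {best.name}, cost = {total_cost(best):.2f}")
```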
This paper presents an inertial estimator learning automata scheme by which both the short-term and long-term perspectives of the environment can be incorporated in the stochastic estimator-the long term information c...
Differential evolution (DE) is a widely recognized method for solving complex optimization problems, as shown by many researchers. Yet non-adaptive versions of DE suffer from insufficient exploration ability and use no historical information to enhance their performance. This work proposes Fractional Order Differential Evolution (FODE) to enhance DE from two aspects. First, a bi-strategy co-deployment framework is proposed, in which population-based and parameter-based strategies are combined to leverage their respective advantages. Second, fractional order calculus is applied, for the first time, to the differential vector, enhancing DE's exploration ability through the historical information of past populations and ensuring population diversity during the evolutionary process. We use the IEEE Congress on Evolutionary Computation (CEC) 2017 test functions and the CEC2011 real-world problems to evaluate FODE's performance. Its sensitivity to parameter changes is discussed, and an ablation study of the multiple strategies is systematically performed. Furthermore, the variation of exploration and exploitation in FODE is visualized and analyzed. Experimental results show that FODE is superior to other state-of-the-art DE variants, the winners of CEC competitions, other fractional order calculus-based algorithms, and some powerful variants of classic algorithms.
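As an illustration of the general idea, a hedged sketch of a DE/rand/1 mutation augmented with a fractional-order memory of past difference vectors is given below; the Grünwald-Letnikov weighting, parameter values, and function names are assumptions and may differ from the exact FODE formulation.

```python
# Hedged sketch: DE/rand/1 mutation with a fractional-order memory term.
# Not the exact FODE formulation, only an illustration of weighting historical
# difference vectors with Grunwald-Letnikov coefficients.
import numpy as np

def gl_coefficients(alpha: float, depth: int) -> np.ndarray:
    """Grunwald-Letnikov coefficients: c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k)."""
    c = np.empty(depth)
    c[0] = 1.0
    for k in range(1, depth):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    return c

def fractional_mutation(pop, history, F=0.5, alpha=0.4, rng=None):
    """Mutants from the current population plus a memory of past difference vectors.

    `pop` is an (n, d) array; `history` is a list of (n, d) arrays holding the
    difference vectors of previous generations, newest first. Requires n >= 4.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, _ = pop.shape
    coeffs = gl_coefficients(alpha, len(history) + 1)
    mutants = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        diff = pop[r2] - pop[r3]
        # memory term: GL-weighted sum of difference vectors from earlier generations
        memory = sum(c * h[i] for c, h in zip(coeffs[1:], history))
        mutants[i] = pop[r1] + F * (coeffs[0] * diff + memory)
    return mutants
```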
In this paper, we study capacity scaling laws of deterministic dissemination (DD) in random wireless networks under the generalized physical model (GphyM). This is not a new topic; our motivation to readdress it is two-fold. First, we aim to propose a more general result that unifies the network capacity for general homogeneous random models by investigating how different system parameters affect the network capacity. Second, we aim to close the open gaps between the upper and lower bounds on the network capacity in the literature. We derive general upper bounds on the capacity for the arbitrary case of (λ, n_d, n_s) by introducing the Poisson Boolean model of continuum percolation, where λ, n_d, and n_s are the general node density, the number of destinations for each session, and the number of sessions, respectively. We prove that the derived upper bounds are tight with respect to the existing general lower bounds constructed in the literature.
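For readers unfamiliar with the proof tool, a small, hedged sketch of the Poisson Boolean model of continuum percolation is shown below; the density, radius, and connectivity check are illustrative choices, not the paper's construction.

```python
# Hedged illustration: a Poisson Boolean model -- a Poisson point process of
# density lam on a square, with a disk of radius r around each point; two nodes
# are connected when their disks overlap (centers within 2r).
import numpy as np

def sample_poisson_boolean(lam: float, side: float, r: float, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = rng.poisson(lam * side * side)           # number of points ~ Poisson(lam * area)
    pts = rng.uniform(0.0, side, size=(n, 2))    # uniform positions given the count
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    adj = (d2 <= (2 * r) ** 2) & ~np.eye(n, dtype=bool)
    return pts, adj

def largest_cluster_fraction(adj: np.ndarray) -> float:
    """Fraction of nodes in the largest connected component (simple DFS)."""
    n = len(adj)
    seen, best = np.zeros(n, dtype=bool), 0
    for s in range(n):
        if seen[s]:
            continue
        stack, size = [s], 0
        seen[s] = True
        while stack:
            u = stack.pop()
            size += 1
            for v in np.flatnonzero(adj[u] & ~seen):
                seen[v] = True
                stack.append(v)
        best = max(best, size)
    return best / n if n else 0.0

pts, adj = sample_poisson_boolean(lam=1.0, side=30.0, r=0.6)
print(f"nodes: {len(pts)}, largest cluster fraction: {largest_cluster_fraction(adj):.2f}")
```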
IEEE 802.15.4/ZigBee sensor networks offer low power consumption and easy node expansion compared with other WSN standards. Body sensor networks (BSN) require a number of sensors to sense medical information from the human body and low power consumption to monitor a patient's status over a long period. However, ZigBee has limited bandwidth and, because it adopts CSMA-CA as its medium access control (MAC) protocol, it can hardly support real-time data transmission. In this paper, we analyze the appropriate number of nodes, payload size, and packet interval for the best QoS in such a network. It is found that an appropriate MAC parameter setting can improve the QoS compared with the default setting in the IEEE 802.15.4 specification. The effective data rate, average end-to-end delay, and packet delivery ratio are obtained via simulation for various network settings. The results are useful for constructing ZigBee networks for patient monitoring and care.
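A minimal sketch of how the three reported QoS metrics could be computed from per-packet records is shown below; the log fields and sample values are assumptions, not the paper's simulation output.

```python
# Hedged sketch: packet delivery ratio, average end-to-end delay, and effective
# data rate from a hypothetical per-packet simulation log.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class PacketRecord:
    sent_at: float              # seconds
    received_at: float | None   # None if the packet was dropped
    payload_bytes: int

def qos_metrics(log: list[PacketRecord], duration_s: float) -> dict:
    delivered = [p for p in log if p.received_at is not None]
    pdr = len(delivered) / len(log) if log else 0.0
    avg_delay = (sum(p.received_at - p.sent_at for p in delivered) / len(delivered)
                 if delivered else float("nan"))
    effective_rate_bps = 8 * sum(p.payload_bytes for p in delivered) / duration_s
    return {"packet_delivery_ratio": pdr,
            "avg_end_to_end_delay_s": avg_delay,
            "effective_data_rate_bps": effective_rate_bps}

log = [PacketRecord(0.00, 0.012, 70),
       PacketRecord(0.05, None, 70),   # dropped, e.g. after repeated CSMA-CA backoff failures
       PacketRecord(0.10, 0.118, 70)]
print(qos_metrics(log, duration_s=0.15))
```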
In this work, for a wireless sensor network (WSN) of n randomly placed sensors with node density λ ∈ [1, n], we study the tradeoffs between aggregation throughput and gathering efficiency. The gathering efficiency refers to the ratio of the number of sensors whose data have been gathered to the total number of sensors. Specifically, we design two efficient aggregation schemes, the single-hop-length (SLH) scheme and the multiple-hop-length (MLH) scheme. By integrating these two schemes in a novel way, we theoretically prove that our protocol achieves the optimal tradeoffs, and we derive the optimal aggregation throughput as a function of a given threshold value (lower bound) on the gathering efficiency. In particular, we show that under the MLH scheme, for a practically important set of symmetric functions called perfectly compressible functions, including the mean, the max, and various kinds of indicator functions, the data from Θ(n) sensors can be aggregated to the sink at a throughput of constant order Θ(1), implying that our MLH scheme is indeed scalable.
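The following toy example, assuming a hypothetical gathering tree, illustrates why perfectly compressible functions such as max let each relay forward a constant-size summary; it is not the paper's SLH/MLH construction.

```python
# Hedged illustration: in-network aggregation of a perfectly compressible
# function (max). Each relay forwards a single number to its parent regardless
# of how many readings it has gathered; mean works the same way via (sum, count).
def aggregate_max(tree: dict, readings: dict, node: int) -> float:
    """Post-order aggregation rooted at `node`; one constant-size message per link."""
    summary = readings[node]
    for child in tree.get(node, []):
        summary = max(summary, aggregate_max(tree, readings, child))
    return summary

# sink 0 gathers from a small tree: 0 -> {1, 2}, 1 -> {3, 4}
tree = {0: [1, 2], 1: [3, 4]}
readings = {0: 17.2, 1: 19.5, 2: 16.8, 3: 21.1, 4: 18.4}
print(aggregate_max(tree, readings, node=0))   # 21.1
```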
The ability of road networks to withstand external disturbances is a crucial measure of transportation system performance, where resilience distinctly emerges as an effective perspective for its unique insights into t...
The past recent years have witnessed more and more applications on image retrieval. As searching a large image database is often costly, to improve the efficiency, high dimensional indexes may help. This paper propose...
The present study applied motor imagery-based brain-computer interface technology combined with rehabilitation training in 20 stroke patients. Scores on the Berg Balance Scale and the Holden Walking Classification were significantly higher at 4 weeks after treatment (P < 0.01), which suggests that motor imagery-based brain-computer interface technology improved balance and walking in stroke patients.