ISBN (digital): 9781728183268
ISBN (print): 9781728183268
This paper introduces a novel charging strategy for wireless rechargeable sensor networks (WRSNs), in which a mobile charger (MC) moves and wirelessly transfers power to the sensor nodes. The first distinct point of this work is designing the MC's charging algorithm under the consideration of target coverage and connectivity. As a solution, we introduce a novel on-demand charging scheme for WRSNs that optimizes the charging time at each of the MC's charging locations. Moreover, we take advantage of the Q-learning technique (hence we name our algorithm Q-charging) to maximize the number of monitored targets. Q-charging can prioritize the sensor nodes that play a more critical role in the network. Hence, Q-charging can select a suitable charging location, aiming to provide sufficient power for the prioritized sensors. We have evaluated our proposal in comparison to previous works. The evaluation results show that Q-charging can prolong the time until the first target is not monitored by 5.2 times on average, and 14.3 times in the best case, compared to existing algorithms.
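The abstract does not give implementation details; the minimal sketch below is an illustration only, assuming a hypothetical formulation in which both states and actions are indices of candidate charging locations and the reward is the number of targets still monitored after the MC charges at the chosen location.

```python
import random
from collections import defaultdict

# Hypothetical illustration of a Q-learning loop for picking the MC's next
# charging location; constants and the reward signal are assumptions.
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1           # learning rate, discount, exploration
q_table = defaultdict(float)                    # Q[(current_location, next_location)]

def choose_next_location(current, candidates):
    """Epsilon-greedy selection of the MC's next charging location."""
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda a: q_table[(current, a)])

def update_q(current, chosen, reward, candidates):
    """Standard one-step Q-learning update applied after charging at `chosen`."""
    best_next = max(q_table[(chosen, a)] for a in candidates)
    td_target = reward + GAMMA * best_next
    q_table[(current, chosen)] += ALPHA * (td_target - q_table[(current, chosen)])

# Example step (all quantities are placeholders):
candidates = [0, 1, 2, 3]
current = 0
chosen = choose_next_location(current, candidates)
monitored_targets = 12                          # e.g. observed after the charging round
update_q(current, chosen, monitored_targets, candidates)
```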
Wireless rechargeable sensor networks (WRSNs) have emerged as a potential solution to the challenge of prolonging battery-powered sensor networks' lifetime. In a WRSN, a mobile charger moves around and charges the rechargeable sensors when stopping at charging spots. This paper newly considers a joint optimization of the charging location and the charging time to avoid node failure in WRSNs. We formulate the optimization problem and propose a solution for it. The proposal includes an algorithm to find a list of potential charging locations and their associated optimal charging times. Moreover, the Q-learning technique is adopted to determine the optimal next charging location among the candidates. We implement the algorithm and conduct experiments to compare it with the related works. The results show that the proposed algorithm outperforms the others in terms of network lifetime. Specifically, the highest performance gap between our proposal and the best of the other algorithms is 8.29 times.
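To make the idea of pairing each candidate location with a charging time concrete, here is a toy sketch under assumed linear charging and consumption rates; the energy model, the failure condition, and all numbers are placeholders rather than the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Node:
    energy: float        # residual energy (J)
    capacity: float      # battery capacity (J)
    consume: float       # consumption rate (J/s)
    charge_rate: float   # received charging power at this location (J/s); 0 if out of range

def max_safe_charging_time(in_range, out_of_range):
    """Longest stay that fully recharges in-range nodes without starving the rest."""
    # Time needed until the slowest in-range node is full (net rate = charge - consume).
    fill_times = [(n.capacity - n.energy) / (n.charge_rate - n.consume)
                  for n in in_range if n.charge_rate > n.consume]
    needed = max(fill_times, default=0.0)
    # Time remaining before the first out-of-range node would die.
    deadline = min((n.energy / n.consume for n in out_of_range), default=float("inf"))
    return min(needed, deadline)

# Usage with placeholder values:
in_range = [Node(20, 100, 0.05, 1.0), Node(50, 100, 0.02, 0.8)]
out_of_range = [Node(300, 1000, 0.04, 0.0)]
print(max_safe_charging_time(in_range, out_of_range))
```

A full solution in the spirit of the abstract would evaluate this kind of per-location charging time for every candidate and then let the Q-learning policy choose among the (location, time) pairs.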
The charging issue in Wireless Rechargeable Sensor Networks (WRSNs) is a popular research problem. With the help of wireless energy transfer technology, electrical energy can be transferred from Wireless Charging Equipment (WCE) to the sensor nodes, providing a new paradigm for prolonging the network lifetime. Existing research usually takes a periodical and deterministic charging approach, but ignores the limited energy of the WCE and the influence of non-deterministic factors such as topological changes and node failures, making it unsuitable for real networks. In this study, we aim to minimize the number of dead sensor nodes while maximizing the energy utilization of the WCE under its limited energy budget. Furthermore, the Swarm Reinforcement Learning (SRL) method is introduced for the first time to achieve autonomous planning by the WCE. Moreover, to solve the problem of insufficient search in existing SRL algorithms, we improve SRL with the firefly algorithm. A novel charging algorithm, named Swarm Reinforcement Learning based on the Firefly Algorithm (SRL-FA), is proposed for the on-demand charging architecture. To evaluate the performance of the proposed algorithm, SRL-FA is compared with existing swarm reinforcement learning algorithms and classic on-demand charging algorithms in two network scenarios. Extensive simulations show that the proposed algorithm achieves promising performance in the energy utilization of the WCE, the charging success rate, and other performance metrics.
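As a rough, hedged illustration of the general idea of combining swarm reinforcement learning with a firefly-style exchange (not SRL-FA's actual update rules): several Q-tables are trained in parallel, and each agent is periodically pulled toward agents with higher accumulated reward ("brightness"). The attraction formula and constants below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_AGENTS, N_STATES, N_ACTIONS = 4, 10, 10
BETA0, GAMMA_FA = 1.0, 0.5             # assumed firefly attractiveness and absorption

q_tables = rng.random((N_AGENTS, N_STATES, N_ACTIONS))
brightness = np.zeros(N_AGENTS)        # e.g. episode return of each agent

def firefly_exchange(q_tables, brightness):
    """Move dimmer agents' Q-tables toward brighter ones (firefly-style attraction)."""
    new_q = q_tables.copy()
    for i in range(N_AGENTS):
        for j in range(N_AGENTS):
            if brightness[j] > brightness[i]:
                dist = np.linalg.norm(q_tables[i] - q_tables[j])
                beta = BETA0 * np.exp(-GAMMA_FA * dist ** 2)
                new_q[i] += beta * (q_tables[j] - q_tables[i])
    return new_q

# After each learning round, agents share knowledge:
brightness[:] = [3.0, 7.5, 1.2, 5.0]   # placeholder episode returns
q_tables = firefly_exchange(q_tables, brightness)
```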
In wireless rechargeable sensor networks (WRSNs), a mobile charger (MC) moves around to compensate for sensor nodes' energy via a wireless medium. In such a context, designing a charging strategy that optimally prolongs the network lifetime is challenging. This work aims to solve the challenge by introducing a novel, on-demand charging algorithm for the MC that attempts to maximize the network lifetime, where the term "network lifetime" is defined as the interval from when the network starts until the first target is not monitored by any sensor. The algorithm, named Fuzzy Q-charging, optimizes both the time and the location at which the MC performs its charging tasks. Fuzzy Q-charging uses fuzzy logic to determine the optimal charging-energy amounts for the sensors. From that, we propose a method to find the optimal charging time at each charging location. Fuzzy Q-charging leverages Q-learning to determine the next charging location that maximizes the network lifetime. To this end, Fuzzy Q-charging prioritizes the sensor nodes according to their roles and selects a suitable charging location where the MC provides sufficient power for the prioritized sensors. We have extensively evaluated the effectiveness of Fuzzy Q-charging in comparison to the related works. The evaluation results show that Fuzzy Q-charging outperforms the others. First, Fuzzy Q-charging can guarantee an infinite lifetime in WRSNs that have a sufficiently large number of sensors or a commensurate number of targets. Second, in other cases, Fuzzy Q-charging can extend the time until the first target is not monitored by 6.8 times on average and 33.9 times in the best case, compared to existing algorithms.
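For intuition only, the sketch below shows one possible fuzzy inference step of the kind the abstract describes for deciding how much energy a sensor should receive; the membership functions, the rule base, and the defuzzification are illustrative assumptions, not the paper's actual design.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def charging_energy_fraction(residual_ratio, criticality):
    """Map (residual energy ratio, node criticality), both in [0, 1], to a charge fraction."""
    low_e, high_e = tri(residual_ratio, -0.5, 0.0, 0.6), tri(residual_ratio, 0.4, 1.0, 1.5)
    low_c, high_c = tri(criticality, -0.5, 0.0, 0.6), tri(criticality, 0.4, 1.0, 1.5)
    # Assumed rule base with singleton consequents (weighted-average defuzzification):
    rules = [
        (min(low_e, high_c), 1.0),   # depleted and critical  -> charge fully
        (min(low_e, low_c), 0.6),    # depleted, not critical -> moderate charge
        (min(high_e, high_c), 0.4),  # healthy but critical   -> top-up
        (min(high_e, low_c), 0.1),   # healthy, not critical  -> minimal charge
    ]
    num = sum(w * v for w, v in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

# A depleted, critical node is assigned the full charging amount:
print(charging_energy_fraction(residual_ratio=0.2, criticality=0.9))
```

In a Fuzzy Q-charging-style pipeline, such per-sensor energy amounts would feed the computation of the charging time at each location, while a Q-learning policy like the one sketched earlier selects the next location.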