Deep learning plays a growing and crucial role in the Internet of Things (IoT), especially in intelligent data analysis, decision support, and automation control. YOLOv5, as an efficient model for target detection in ...
Over the last few years, drone base station (DBS) technology has been recognized as a promising solution to the problem of network design for wireless communication systems, due to its highly flexible deployment and dynamic mobility features. This article focuses on the 3-D mobility control of the DBS to boost transmission coverage and network connectivity. We propose a dynamic and scalable control strategy for drone mobility using deep reinforcement learning (DRL). The design goal is to maximize communication coverage and network connectivity for multiple real-time users over a time horizon. The proposed method operates on the received signals of mobile users, without knowledge of user locations. It is divided into two hierarchical stages. First, a time-series convolutional neural network (CNN)-based link quality estimation model is used to determine the link quality at each timeslot. Second, a deep Q-learning algorithm is applied to control the movement of the DBS in hotspot areas to meet user requirements. Simulation results show that the proposed method achieves significantly better network performance in terms of both communication coverage and network throughput in a dynamic environment, compared with the Q-learning algorithm.
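The second stage described above can be illustrated with a minimal tabular Q-learning sketch: a DBS moves on a small 2-D grid (the paper controls 3-D motion) and is rewarded with the number of users inside its coverage radius at each timeslot. The grid size, user positions, coverage radius, and all hyperparameters are invented for illustration, and a lookup table stands in for the paper's deep Q-network.

```python
import random

GRID, RADIUS = 5, 1
USERS = [(0, 4), (1, 4), (4, 0)]                      # hypothetical hotspot users
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # four moves plus hover

def coverage(pos):
    """Reward: number of users within Chebyshev distance RADIUS of the DBS."""
    return sum(max(abs(pos[0] - u[0]), abs(pos[1] - u[1])) <= RADIUS
               for u in USERS)

def step(pos, a):
    """Apply a move, clipped to the grid boundary."""
    return (min(max(pos[0] + a[0], 0), GRID - 1),
            min(max(pos[1] + a[1], 0), GRID - 1))

def train(episodes=2000, horizon=20, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over (position, action) pairs."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        pos = (2, 2)
        for _ in range(horizon):
            if rng.random() < eps:                    # explore
                a = rng.randrange(len(ACTIONS))
            else:                                     # exploit current estimate
                a = max(range(len(ACTIONS)),
                        key=lambda i: Q.get((pos, i), 0.0))
            nxt = step(pos, ACTIONS[a])
            r = coverage(nxt)
            best = max(Q.get((nxt, i), 0.0) for i in range(len(ACTIONS)))
            q = Q.get((pos, a), 0.0)
            Q[(pos, a)] = q + alpha * (r + gamma * best - q)
            pos = nxt
    return Q

def greedy_rollout(Q, pos=(2, 2), steps=10):
    """Follow the learned greedy policy and return the final position."""
    for _ in range(steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((pos, i), 0.0))
        pos = step(pos, ACTIONS[a])
    return pos
```

With these toy settings the greedy policy steers the DBS from the uncovered start toward the two-user hotspot rather than the single isolated user.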
ISBN (Print): 9781450392594
Cloud-based game streaming is emerging as a convenient way to play games when clients have a good network connection. However, high-quality game streams need high bitrates and low latencies, a challenge when competing for network capacity with other flows. While some network aspects of cloud-based game streaming have been studied, missing are comparative performance and congestion responses to competing TCP flows. This paper presents results from experiments that measure how three popular commercial cloud-based game streaming systems - Google Stadia, NVIDIA GeForce Now, and Amazon Luna - respond to and then recover from TCP Cubic and TCP BBR flows on a congested network link. Analysis of bitrates, loss rates, and round-trip times shows the three systems have markedly different responses to the arrival and departure of competing network traffic.
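The bitrate analysis in such measurement studies reduces to binning captured packet sizes into fixed time windows. A minimal sketch, assuming a simple list of (timestamp, bytes) samples rather than the paper's actual capture tooling:

```python
from collections import defaultdict

def bitrate_per_interval(packets, interval=1.0):
    """Bin (timestamp_seconds, size_bytes) samples into fixed windows
    and return {bin_index: bitrate_in_Mb/s}."""
    bins = defaultdict(int)
    for ts, size in packets:
        bins[int(ts // interval)] += size          # total bytes per window
    return {b: bins[b] * 8 / (interval * 1e6)      # bytes -> megabits/s
            for b in sorted(bins)}
```

Plotting these per-second bitrates for the stream before, during, and after a competing TCP flow is exactly the kind of response/recovery view the paper reports.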
ISBN (Print): 9798350371000; 9798350370997
The concepts of Internet of Things (IoT) and Cyber-Physical Systems (CPS) are closely related to each other. IoT is often used to refer to small interconnected devices like those in a smart home, while CPS often refers to large interconnected devices like industrial machines and smart cars. In this paper, we present a unified view of IoT and CPS: from the perspective of network architecture, IoT and CPS are similar given that they are based on either the OSI model or the TCP/IP model. In both IoT and CPS, networking/communication modules are attached to original things so that isolated things can be integrated into cyber space. If needed, actuators can also be integrated with a thing so as to control the thing. With this unified view, we can perform risk assessment of an IoT/CPS system from six factors: hardware, networking, operating system (OS), software, data, and human. To illustrate the use of this risk-analysis framework, we analyze an air quality monitoring network, a smart home using smart plugs, and a building automation system (BAS). We also discuss challenges in IoT security, such as cost and secure OS.
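As a sketch of how a six-factor assessment like this could be operationalized, the snippet below combines per-factor ratings into one weighted score. The 0-5 scale, the equal default weights, and the example smart-plug ratings are all assumptions for illustration, not values from the paper.

```python
FACTORS = ("hardware", "networking", "os", "software", "data", "human")

def risk_score(ratings, weights=None):
    """Weighted mean of per-factor risk ratings (0 = none ... 5 = critical)."""
    if weights is None:
        weights = {f: 1.0 for f in FACTORS}   # equal weights by default
    total = sum(weights[f] for f in FACTORS)
    return sum(ratings[f] * weights[f] for f in FACTORS) / total

# Hypothetical smart-plug assessment: weakest on human factors and networking.
smart_plug = {"hardware": 1, "networking": 4, "os": 3,
              "software": 3, "data": 2, "human": 5}
```

Non-uniform weights let an assessor emphasize, say, the human factor for consumer smart-home deployments versus the networking factor for a BAS.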
ISBN (Print): 9798350387780; 9798350387797
The scheduling of real-time and dependent tasks on multicore systems plays an important role in influencing system performance. During the scheduling process, multiple constraints should be taken into account, e.g., task dependency, real-time response, and energy efficiency. The complexity of the task scheduling problem makes it difficult to balance solution quality and computation time. Existing works either use time-consuming methods to find optimal solutions or obtain feasible solutions through heuristic methods. In this paper, we propose a GAT (graph attention network)-based deep reinforcement learning (DRL) algorithm to solve task scheduling problems. This method combines the benefits of deep learning (DL) networks and reinforcement learning (RL) algorithms. It achieves adaptive learning and adjusts feature information by constructing a GAT to model the dependencies between tasks. At the same time, we use the soft actor-critic (SAC) algorithm to optimize the task scheduling policy to minimize the makespan of scheduling results. The experimental results show that our method outperforms other scheduling methods in terms of both scheduling efficiency (the quality of the task-scheduling solution) and algorithm running time.
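For reference, the makespan objective that such a scheduler minimizes can be computed for any dependency-respecting assignment with a classical greedy list scheduler. The sketch below (FIFO ready queue, earliest-free core) is an illustrative baseline, not the paper's GAT+SAC method, and the task graph in the test is invented.

```python
def list_schedule_makespan(durations, deps, cores):
    """Build a feasible greedy schedule of a dependent-task DAG on identical
    cores and return its makespan. deps[t] lists tasks that must finish
    before task t starts."""
    n = len(durations)
    indeg = [len(deps[t]) for t in range(n)]
    succ = [[] for _ in range(n)]
    for t in range(n):
        for d in deps[t]:
            succ[d].append(t)
    ready = [t for t in range(n) if indeg[t] == 0]   # FIFO ready queue
    core_free = [0.0] * cores    # time at which each core becomes free
    earliest = [0.0] * n         # start time imposed by finished dependencies
    finish = [0.0] * n
    while ready:
        t = ready.pop(0)
        c = min(range(cores), key=lambda i: core_free[i])
        start = max(core_free[c], earliest[t])
        finish[t] = start + durations[t]
        core_free[c] = finish[t]
        for s in succ[t]:
            earliest[s] = max(earliest[s], finish[t])
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish)
```

A learned policy improves on this baseline mainly through its task-ordering decisions; the makespan computation itself stays the same.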
Ultrareliable and delay-intolerant message delivery is one of the key components for Internet of Things (IoT) and cyber-physical systems to support real-time control and real-time interaction. In this article, we provide a spatiotemporal framework that captures the packet loss rate (PLR) for large-scale grant-free uplink IoT networks with multiple types of hard-deadline traffic. Independent and prioritized packet scheduling schemes are proposed and investigated for efficiently realizing frequency diversity. Tools from stochastic geometry and queuing theory are utilized to account for the macroscopic coverage probability and the microscopic PLR. Expressions for the devices' coverage, steady-state distribution, and per-service PLR are derived for both scheduling schemes. Detailed system-level simulations are used to identify network design guidelines. The results show that the independent scheme provides better network scalability and consumes lower average transmit power at the devices, while the prioritized scheme enhances the PLR performance of the high-priority service and requires lower peak transmit power at the devices.
In the context of satellite communication, unique challenges arise, such as long-delay links that can also impact network performance at the Transport Layer. With regard to the Transmission Control Protocol (TCP), ...
As wireless and mobile technologies continue to expand, it becomes necessary for future wireless communication to incorporate multiple networks with unique features. Mobile devices must seamlessly switch betwee...
The rise of smart cities, driverless automobiles, smart watches, and mobile banking has led to increased reliance on the Internet. Although technology has enormous advantages for people and society, it also introduces...
Internet of Things solutions typically involve interaction between sensors, actuators, the cloud, embedded systems, and user applications. Often in such cases, there are time constraints specifying the maximum response...