Software-defined networks (SDN) provide an efficient network architecture by separating the control plane from the data plane, which enhances global network monitoring and performance. In large-scale SDN deployments for the Internet of Things (IoT), achieving high scalability and reducing controller load necessitates multiple distributed controllers that collaboratively manage the network. Each controller oversees a subset of switches and gathers information about these switches and their interconnections, which can lead to imbalances in link and controller loads. Addressing these imbalances is crucial for improving quality of service (QoS) in SDN-enabled Industrial Internet of Things (IIoT) environments. In this paper, we establish the NP-hardness of the link and controller load balancing routing (LCLBR) problem in IIoT. To tackle it, we propose an enhanced flow control and load balancing approach for SDN-enabled Industrial IoT (EFLB-IIoT), an approximation-based technique that effectively balances network activity among distributed controllers. Simulation results indicate that the proposed strategy reduces the maximum link load by 76% and the maximum controller response time by 85% compared with existing techniques, demonstrating superior performance over state-of-the-art methods.
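The abstract does not detail EFLB-IIoT's internals, so as a minimal sketch of the load-balancing objective it targets, the snippet below greedily routes each flow over the candidate path that minimizes the resulting peak link utilization. All names (`candidate_paths`, `place_flows`) and the greedy placement itself are illustrative assumptions, not the paper's algorithm.

```python
# Hypothetical sketch of the LCLBR objective: place each flow so that the
# maximum relative link load after placement stays as small as possible.
from itertools import pairwise  # Python 3.10+

def place_flows(flows, candidate_paths, capacity):
    """flows: {flow_id: demand}; candidate_paths: {flow_id: [path, ...]},
    where a path is a list of nodes; capacity: {(u, v): link capacity}."""
    load = {link: 0.0 for link in capacity}
    routing = {}
    for fid, demand in flows.items():
        best_path, best_peak = None, float("inf")
        for path in candidate_paths[fid]:
            links = list(pairwise(path))
            # Peak relative load on this path if the flow were placed here.
            peak = max((load[l] + demand) / capacity[l] for l in links)
            if peak < best_peak:
                best_path, best_peak = path, peak
        for l in pairwise(best_path):
            load[l] += demand
        routing[fid] = best_path
    return routing, max(load[l] / capacity[l] for l in capacity)
```

The returned maximum relative load is exactly the quantity the paper reports reducing by 76%, which is why the greedy step compares candidate paths on that metric.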
The presence of diverse traffic types is well established in IoT networks. Various probability distributions have been found to describe packet inter-arrival times, in contrast with the familiar exponential distribution in traditional networks. These findings suggest the need to develop appropriate traffic models for performance analysis of IoT network systems. An essential component of an IoT network is the gateway, as it provides connectivity to the core Internet; the gateway also performs functions such as protocol translation and traffic aggregation. Efficient design of the IoT gateway is therefore necessary for better network management. This paper presents a new analytical model, N-Gamma/M/1, for analyzing the performance of an IoT gateway. The equivalence of the N-Gamma/M/1 and Gamma/M/1 models is proved mathematically. Additionally, an in-depth performance evaluation of the IoT gateway under various arrival patterns is conducted through simulation. The numerical analysis of the proposed N-Gamma/M/1 model emphasizes the need for more buffers at the gateway when traffic from input devices has varying gamma distribution parameters. It is also noted that the IoT gateway experiences a longer mean queue length, resulting in higher mean waiting time and packet loss, when the packet inter-arrival time follows generalized Pareto, Weibull, and lognormal distributions with different parameter values. This makes IoT network management challenging. Adaptive and intelligent resource allocation policies, along with dynamic congestion control algorithms, may provide a solution to minimize packet loss and ensure quality of service.
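The N-Gamma/M/1 model itself is analytical, but the behavior the abstract describes can be cross-checked with a quick Gamma/M/1 simulation using the Lindley recursion for waiting times. The parameter values below are illustrative only, not taken from the paper.

```python
# Illustrative Gamma/M/1 queue: gamma-distributed inter-arrival times,
# exponential service; mean waiting time via the Lindley recursion.
import numpy as np

def gamma_m1_mean_wait(shape, scale, mu, n=200_000, seed=0):
    rng = np.random.default_rng(seed)
    inter = rng.gamma(shape, scale, n)       # inter-arrival times
    service = rng.exponential(1.0 / mu, n)   # service times, rate mu
    w = 0.0
    total = 0.0
    for a, s in zip(inter, service):
        w = max(0.0, w + s - a)              # Lindley: wait of next arrival
        total += w
    return total / n

# Stable example: arrival rate 1/(shape*scale) = 0.5 < service rate mu = 1.0
print(gamma_m1_mean_wait(shape=2.0, scale=1.0, mu=1.0))
```

Sweeping `shape` and `scale` while holding the arrival rate fixed shows how burstier inter-arrival patterns inflate the mean wait, which is the buffering pressure the abstract highlights.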
The Internet of Things (IoT) represents the Internet's future by including devices that can connect with one another. IoT devices collect data from their surroundings and send it in bursts to a remote server. The growing number of applications using computer networks causes resource competition, which in turn causes congestion, and IoT congestion control is the most difficult task in enhancing network quality of service (QoS). Various protocols have emerged for resource-constrained IoT devices, but basic congestion control suffers from several shortcomings that increase bandwidth use, data loss, and delay. To solve this issue, this paper proposes a bio-inspired fuzzy inference system (FIS)-based constrained application protocol (BIFIS-CoAP) for avoiding congestion in the network. To determine the level of congestion, we use the bottleneck bandwidth gradient (BG-gradient) and round-trip time gradient (RT-gradient) as inputs to BIFIS-CoAP. Using this indication, BIFIS-CoAP can anticipate impending congestion and adjust the sending rate to prevent it. To achieve high performance for burst data transfer, BIFIS-CoAP continuously checks the available bandwidth and uses the congestion degree to update the variable retransmission timeout (RTO) for retransmissions. Using the adaptive Tasmanian devil optimization method (ATDO), input parameters such as the RT-gradient and BG-gradient are tuned to enhance the performance of the FIS. Extensive simulation studies show that BIFIS-CoAP is feasible and that it outperforms basic CoAP and fuzzy CoAP in terms of delay, throughput, delivery ratio, and retransmissions.
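The paper's exact membership functions and rule base are not given in the abstract; the sketch below is a hypothetical two-input fuzzy-style rule that maps the RT-gradient and BG-gradient to a congestion degree and scales the variable RTO accordingly. The triangular memberships, thresholds, and gain `k` are all assumptions.

```python
# Hypothetical sketch of the BIFIS-CoAP idea: fuse RTT and bandwidth
# gradients into a congestion degree, then scale the variable RTO with it.

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def congestion_degree(rt_grad, bg_grad):
    """rt_grad > 0: RTT rising; bg_grad < 0: bottleneck bandwidth shrinking."""
    rtt_high = tri(rt_grad, 0.0, 0.5, 1.0)   # RTT growing fast
    bw_drop = tri(-bg_grad, 0.0, 0.5, 1.0)   # bandwidth dropping fast
    # Simple max-of-rules aggregation: either signal implies congestion.
    return max(rtt_high, bw_drop)

def next_rto(base_rto, rt_grad, bg_grad, k=2.0):
    """Inflate the retransmission timeout as congestion builds."""
    return base_rto * (1.0 + k * congestion_degree(rt_grad, bg_grad))

print(next_rto(2.0, rt_grad=0.4, bg_grad=-0.3))  # 2.0 * (1 + 2 * 0.8) = 5.2
```

In the paper, ATDO would tune parameters like these gradient breakpoints rather than leaving them hand-set as here.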
Joint estimation is crucial in the Industrial Internet of Things (IIoT), integrating data from diverse devices to improve monitoring accuracy. Network slicing can meet the heterogeneous needs of devices through logical isolation. However, existing methods often overlook the effect of interactions among multiple slices on estimation performance, leading to potential estimation bias and wasted resource costs. To address this, we propose an Age of Task (AoT)-driven associated network slicing method tailored for joint estimation scenarios. Specifically, we design an association-oriented slicing architecture for joint estimation that considers both the heterogeneous requirements of individual slices and the interactive effects of multiple slices. We define slice association based on the AoT to quantify the coupling relationship between slicing strategies and estimation performance. Moreover, we develop a dynamic-fitness multivariable particle swarm optimization algorithm to achieve associated slicing. Simulation results show that the associated slicing scheme achieves a flexible balance between timeliness and accuracy.
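The dynamic-fitness multivariable PSO is not specified in the abstract; below is a generic PSO skeleton over per-slice resource shares, with a placeholder fitness that trades a timeliness proxy against an accuracy penalty. The fitness form, weights, and PSO coefficients are assumptions, not the paper's.

```python
# Generic PSO skeleton over per-slice resource shares; the fitness trading
# timeliness against accuracy is a stand-in, not the paper's AoT function.
import numpy as np

def fitness(shares):
    shares = np.clip(shares, 1e-3, None)
    shares = shares / shares.sum()                   # normalize to unit budget
    aot = (1.0 / shares).sum()                       # proxy: more share, fresher
    accuracy_loss = ((shares - 1 / len(shares))**2).sum()  # proxy penalty
    return aot + 10.0 * accuracy_loss                # minimize

def pso(n_slices=4, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, n_slices))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, 1e-3, 1.0)
        f = np.array([fitness(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g / g.sum()

print(pso())  # near-even shares under this toy fitness
```

The paper's "dynamic fitness" would presumably reweight such terms as slice association changes over time; here the weights are fixed for brevity.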
The highly heterogeneous ecosystem of Next Generation (NextG) wireless communication systems calls for novel networking paradigms where functionalities and operations can be dynamically and optimally reconfigured in real time to adapt to changing traffic conditions and satisfy stringent and diverse Quality of Service (QoS) demands. Open Radio Access Network (RAN) technologies, and specifically those being standardized by the O-RAN Alliance, make it possible to integrate network intelligence into the once monolithic RAN via intelligent applications, namely xApps and rApps. These applications enable flexible control of network resources and functionalities, network management, and orchestration through data-driven intelligent control loops. Recent work has shown how Deep Reinforcement Learning (DRL) is effective in dynamically controlling O-RAN systems. However, how to design these solutions so that they manage heterogeneous optimization goals and prevent unfair resource allocation is still an open challenge, with the logic within DRL agents often treated as an opaque system. In this paper, we introduce PandORA, a framework to automatically design and train DRL agents for Open RAN applications, package them as xApps, and evaluate them in the Colosseum wireless network emulator. We benchmark 23 xApps that embed DRL agents trained using different architectures, reward designs, action spaces, and decision-making timescales, with the ability to hierarchically control different network parameters. We test these agents on the Colosseum testbed under diverse traffic and channel conditions, in static and mobile setups. Our experimental results indicate how suitable fine-tuning of the RAN control timers, as well as proper selection of reward designs and DRL architectures, can boost network performance according to the network conditions and demand. Notably, finer decision-making granularities can improve Massive Machine-Type Communications (mMTC) performance by si...
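Reward design is one of the axes PandORA varies across its 23 xApps. As a hedged illustration of what per-slice reward shaping might look like, the sketch below scores KPIs differently per slice type; the KPI names, weights, and the dictionary-based observation are hypothetical, not PandORA's actual interface.

```python
# Hypothetical per-slice reward shaping of the kind an O-RAN xApp's DRL
# agent might use; KPI names and weighting are illustrative assumptions.

def slice_reward(kpis, slice_type):
    """kpis: dict with 'throughput_mbps', 'latency_ms', 'tx_pkts'."""
    if slice_type == "eMBB":        # broadband: reward raw throughput
        return kpis["throughput_mbps"]
    if slice_type == "URLLC":       # low latency: penalize delay hard
        return -kpis["latency_ms"]
    if slice_type == "mMTC":        # massive IoT: reward delivered packets
        return kpis["tx_pkts"]
    raise ValueError(slice_type)

# One control-loop step at some decision timescale: observe KPIs, score
# them, and let the agent pick the next resource split among slices.
obs = {"throughput_mbps": 42.0, "latency_ms": 8.5, "tx_pkts": 1200}
print(slice_reward(obs, "eMBB"))
```

The paper's finding that decision-making granularity matters corresponds to how often a loop like this runs relative to how fast the KPIs move.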
Considering the scientific and economic opportunities, several public and private organizations are planning to establish colonies on the Moon. In particular, lunar colonization can be a first step toward deep space missions, and the initial phase is accomplished with the deployment of many Internet of Things (IoT) devices and systems. Therefore, a dedicated Earth-Moon backbone, resulting from the combination of terrestrial and lunar satellite segments, must be designed. Considering that its elements are inherently mobile, the constituent devices must be programmed to operate properly during specific time intervals to ensure connectivity. The features of the Software-Defined Networking (SDN) paradigm allow achieving this aim, and the Temporal Networks (TNs) theoretical framework makes it possible to optimize the forwarding rules. In light of these principles, this paper proposes an SDN-based architecture and analyzes the overall communication scenario, proposing a specific strategy to optimize the data rate. The performance was evaluated considering the end-to-end (E2E) best path duration, the number of hops, the control-packet latency, the power budget, and the capacity. The results show that it is feasible to establish an on-demand networking strategy to support the transmission of continuous IoT data flows with limited overhead.
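To make the "best path duration" metric concrete, the sketch below applies a temporal-network style check: a candidate Earth-Moon path is usable only while every hop's visibility window overlaps, so the common window is the intersection of per-link intervals. The path names and window values are invented for illustration.

```python
# Hypothetical temporal-network check: a candidate path is up only while
# all of its links are simultaneously visible; prefer the longest-lived one.

def path_window(link_windows):
    """link_windows: list of (start, end) visibility intervals, one per hop."""
    start = max(w[0] for w in link_windows)
    end = min(w[1] for w in link_windows)
    return (start, end) if end > start else None   # None: never all up

def best_path(candidates):
    """candidates: {path_name: [(start, end), ...]} -> longest-lived path."""
    best, best_len = None, 0.0
    for name, windows in candidates.items():
        w = path_window(windows)
        if w and (w[1] - w[0]) > best_len:
            best, best_len = name, w[1] - w[0]
    return best, best_len

paths = {
    "gw-relay-lander": [(0, 90), (30, 120), (10, 80)],   # minutes
    "gw-sat-lander":   [(20, 200), (40, 160)],
}
print(best_path(paths))  # ('gw-sat-lander', 120)
```

An SDN controller can precompute such windows from orbital mechanics and install forwarding rules valid exactly for each path's common interval, which is the on-demand strategy the paper evaluates.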
VoIP is a revolutionary technology for transmitting voice over the Internet protocol rather than over plain telephony systems. However, current VoIP solutions have some important issues, namely packet loss, latency, and jitter e...
The Industrial Internet of Things (IIoT) enables real-time data collection, analysis, and decision making by tightly connecting physical devices, sensors, control systems, and information systems. Meanwhile, in the communication environment of industrial parks, a multi-dimensional heterogeneous information sensing network is constructed from sensor networks, shop-floor Ethernet, field buses, etc. In this kind of network, multi-source heterogeneous data types are complex and diverse, the data scale is huge, and the various data flows have different requirements for transmission volume and real-time performance. Especially in a communication environment with limited network resources, the transmission delay of real-time services makes it difficult to meet actual production requirements. This leads to problems such as poor timeliness and insufficient reliability of sensory data transmission. To address these problems, we propose an Adaptive Scheduling Algorithm for the Industrial Internet of Things based on Multi-swarm Co-evolution (AS-MPCA). The algorithm combines a two-stage multiple-swarm genetic algorithm with an adaptive routing mechanism. First, the two-stage multiple-swarm genetic algorithm expands the search space and enhances the diversity of the scheduling scheme by combining global search with multiple-swarm strategies, providing diversified path-selection strategies for the adaptive routing mechanism. Then, the adaptive routing mechanism dynamically adjusts the optimal path according to the scheduling results of the genetic algorithm. Simulation and experimental results demonstrate that the proposed method significantly enhances system schedulability. Compared with traditional algorithms, the proposed algorithm improves the task acceptance rate by an average of 10% across various conditions, effectively reduces the transmission delay of time-sensitive data, and ensures quality of service for industrial IoT communication systems.
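A two-stage multi-swarm GA of the kind AS-MPCA describes can be sketched as independent swarms evolved for diversity, then merged and refined. The encoding (a task-to-path index mapping) and the load-spreading fitness below are placeholders, not the paper's scheduling model.

```python
# Skeleton of a two-stage multi-swarm GA in the spirit of AS-MPCA; the
# encoding and fitness are placeholder assumptions.
import random

N_TASKS, N_PATHS = 8, 4

def fitness(ind):  # placeholder: prefer spreading tasks across paths
    return max(ind.count(p) for p in range(N_PATHS))

def evolve(pop, gens, mut=0.1):
    for _ in range(gens):
        pop.sort(key=fitness)                       # minimize fitness
        elite = pop[: len(pop) // 2]
        children = []
        while len(children) < len(pop) - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]               # one-point crossover
            if random.random() < mut:               # point mutation
                child[random.randrange(len(child))] = random.randrange(N_PATHS)
            children.append(child)
        pop = elite + children
    return pop

random.seed(0)
# Stage 1: several swarms evolve independently to preserve diversity.
swarms = [[[random.randrange(N_PATHS) for _ in range(N_TASKS)]
           for _ in range(20)] for _ in range(3)]
swarms = [evolve(s, gens=30) for s in swarms]
# Stage 2: merge the swarms and refine the combined population.
merged = evolve([ind for s in swarms for ind in s], gens=30)
print(min(merged, key=fitness))
```

In the full algorithm, the routing mechanism would then adapt paths online using the diverse schedules such a population provides.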
Software-Defined Networks (SDNs) support different applications' data and control operations through operational-plane differentiation. Such differentiation relies on the service providers' user density and processing capacity. This article introduces a Controlled Service Scheduling Scheme (CS3) to ensure responsive user service support. The scheme exploits the SDN's operation-plane differentiation to confine stagnant, unserved requests. A routed regression learning model decides the SDN plane selection; this learning is a modified version of linear learning in which the scheduling rate is the plane differentiator. The process is iterated until the combination of device processing capacity and number of devices falls below the observed service population. In the scheduling process, migrations from the operation plane to the data plane are decided using the maximum routed threshold. The threshold is computed for the operation and data planes, from which the rate of service response or the capacity of service admittance is decided. The routed regression analyzes the change in the threshold factor to ensure flexible scheduling regardless of dense IoT requests. The scheme achieves a high scheduling rate, maximizing service distribution under controlled delay. Experimental findings show that, compared with current models, the suggested method improves the scheduling rate by 13.92%, increases the distribution of services by 8.31%, and decreases delays by 11.58%. Further evidence of the approach's efficacy in managing heavy IoT traffic is its low distribution failure rate of 1.7%. These findings demonstrate that the scheme can enhance performance in ever-changing Internet of Things settings by optimizing the allocation of resources.
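One hedged reading of CS3's migration test is a threshold comparison: each plane's admittance capacity (device count times per-device processing rate, scaled by the scheduling rate) is compared against the observed service population, and requests shift toward the plane that can still admit them. The function names and the linear threshold form below are illustrative assumptions.

```python
# Hypothetical reading of CS3's routed-threshold migration decision; the
# threshold formula and plane tuple layout are illustrative assumptions.

def plane_threshold(n_devices, capacity_per_device, sched_rate):
    """Routed threshold: admittance capacity scaled by scheduling rate."""
    return n_devices * capacity_per_device * sched_rate

def select_plane(population, op_plane, data_plane):
    """Each plane: (n_devices, capacity_per_device, sched_rate)."""
    t_op = plane_threshold(*op_plane)
    t_data = plane_threshold(*data_plane)
    # Migrate to the data plane once the operation plane saturates.
    if population > t_op and t_data >= population:
        return "data"
    return "op" if t_op >= population else "overload"

print(select_plane(900, op_plane=(10, 50, 1.0), data_plane=(40, 30, 1.0)))
```

The routed regression in the paper would, in effect, learn how these thresholds drift with demand instead of recomputing them from fixed constants.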
Recently, 6G has attracted widespread attention from both academia and industry. 6G networks are expected to exhibit even more heterogeneity than 5G networks and to support various emerging scenarios and applications such as virtual and augmented reality (VR/AR), air/space/ground networks, and the Internet of Things. Such massive heterogeneous devices pose huge challenges for network control and management. Recent advances in artificial intelligence have brought a new class of networks, termed self-driving 6G networks, which utilize network telemetry, artificial intelligence, and DevOps to simplify networks and operations, helping network owners improve network quality and increase efficiency. However, the current network is built on closed merchant switching ASICs and homegrown management and control systems; self-driving an opaque system, without really knowing or controlling what it does, is a hazardous and fruitless endeavor. Fortunately, network programmability opens the possibility of running self-driving algorithms over the whole network. In this paper, we design a full-dimensional-programmability-empowered self-driving 6G network architecture. We discuss how network programmability can promote the release of network intelligence, from the view of both verticality (control and data planes) and horizontality (end to end). Moreover, three use cases are designed to demonstrate that the proposed architecture can automatically cope with a dynamically changing, complex network environment, realize automatic perception and decision making, and thus raise the automation level and enhance the performance of the network.
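The automatic perception and decision loop the architecture describes reduces, at its simplest, to a perceive-decide-act cycle over a programmable data plane. The sketch below is a minimal stand-in: the telemetry source, policy, and rule push are placeholders, not a real P4 or switch-ASIC API.

```python
# Minimal perceive-decide-act loop of the kind a self-driving controller
# would run; all three stages are stand-ins, not a real data-plane API.
import random
import time

def read_telemetry():
    """Stand-in for in-band network telemetry collection."""
    return {"link_util": random.random(), "queue_depth": random.randint(0, 100)}

def decide(t):
    """Stand-in policy: reroute when a link runs hot or a queue fills."""
    return "reroute" if t["link_util"] > 0.8 or t["queue_depth"] > 80 else "hold"

def act(decision):
    """Stand-in for pushing updated forwarding rules to the data plane."""
    print(f"action: {decision}")

random.seed(1)
for _ in range(3):   # closed control loop: perceive -> decide -> act
    act(decide(read_telemetry()))
    time.sleep(0.01)
```

Full-dimensional programmability is what lets each stage run inside the network itself, vertically across the control and data planes and horizontally end to end, rather than in an external management system.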