The orchestration of Service Function Chains (SFCs) in Mobile Edge Computing (MEC) is crucial for ensuring efficient service provision, especially under dynamic and uncertain demand. Meanwhile, the parallelization of Virtual Network Functions (VNFs) within an SFC can further optimize resource usage and reduce the risk of deadline violations. However, most existing works formulate the SFC orchestration problem in MEC with deterministic demands and rely on costly runtime resource reprovisioning to handle dynamic demands. This paper introduces a Robust Deadline-aware network function Parallelization framework under Demand Uncertainty (RDPDU) designed to address the challenges posed by unpredictable fluctuations in user demand and resource availability within MEC networks. RDPDU considers end-to-end latency for SFC assembly by modeling load-dependent processing latency and load-independent propagation latency. RDPDU also formulates the problem under uncertain demand as a Quadratic Integer Programming (QIP) model that is resistant to dynamic fluctuations in service demand. By discovering dependencies between VNFs, RDPDU effectively assembles multiple sub-SFCs instead of the original SFC. Finally, our framework uses Deep Reinforcement Learning (DRL) to assemble sub-SFCs with guaranteed latency and deadlines. By integrating DRL into the SFC orchestration problem, the framework adapts to changing network conditions and demand patterns, improving the overall system's flexibility and robustness. Experimental evaluations show that the proposed framework effectively handles demand fluctuations, latency, deadlines, and scalability, and improves performance over recent algorithms.
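As a rough illustration of the dependency-driven decomposition described in this abstract (not the paper's actual algorithm), the following Python sketch greedily splits an ordered chain of VNFs into parallel-safe sub-SFC stages; the VNF names and the depends_on relation are hypothetical inputs chosen only for the example.

    # Hypothetical illustration: greedily group a chain of VNFs into parallel
    # "sub-SFC" stages, placing a VNF in the current stage only if it does not
    # depend on any VNF already in that stage.

    def split_into_stages(chain, depends_on):
        """chain: ordered list of VNF names; depends_on: set of (later, earlier) pairs."""
        stages, current = [], []
        for vnf in chain:
            if any((vnf, prior) in depends_on for prior in current):
                stages.append(current)      # a dependency forces a new sequential stage
                current = [vnf]
            else:
                current.append(vnf)         # independent of the open stage -> run in parallel
        if current:
            stages.append(current)
        return stages

    chain = ["FW", "NAT", "DPI", "LB"]
    depends_on = {("LB", "NAT")}            # only LB must wait for NAT in this toy example
    print(split_into_stages(chain, depends_on))
    # [['FW', 'NAT', 'DPI'], ['LB']]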
Deciding on a resource distribution policy and how to assemble the Service Function Chain (SFC) in Multi-access Edge Computing (MEC) networks to meet service quality standards poses an important challenge for Network Function Virtualization (NFV) technology. Increasing the number of Virtual Network Functions (VNFs) leads to high-latency SFC assembly, which can be countered by network function parallelization. However, existing studies parallelize VNFs for resource allocation in MEC by assuming that the demanded resources do not change during SFC assembly. To address these issues, this paper develops a Latency-aware VNF Parallelization strategy under Resource demand Uncertainty (LVPRU) in MEC. We formulate LVPRU under the assumption of resource uncertainty in MEC via Quadratic Integer Programming (QIP) and show that the problem is NP-hard. LVPRU parallelizes VNFs by discovering dependencies between them and assembles multiple sub-SFCs instead of the original SFC. We apply Asynchronous Advantage Actor-Critic (A3C) as a deep reinforcement learning algorithm to assemble the sub-SFCs. We finally evaluate the performance of LVPRU through trace-driven simulations. The evaluation results of the proposed strategy are promising in different scenarios compared to benchmark algorithms.
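To make the resource-demand-uncertainty idea concrete, here is a minimal Python sketch assuming a budgeted (Bertsimas-Sim-style) uncertainty set rather than the paper's exact QIP formulation; the function names, per-VNF deviations, and the gamma budget are illustrative assumptions, not values from the paper.

    # Hypothetical sketch of robust capacity checking: protect a node against
    # the gamma largest upward demand deviations occurring simultaneously.

    def worst_case_load(nominal, deviation, gamma):
        """nominal/deviation: per-VNF resource demand and max upward deviation;
        gamma: how many demands may deviate at once (the uncertainty budget)."""
        protection = sorted(deviation, reverse=True)[:gamma]
        return sum(nominal) + sum(protection)

    def fits(nominal, deviation, gamma, capacity):
        return worst_case_load(nominal, deviation, gamma) <= capacity

    print(fits(nominal=[2.0, 3.0, 1.5], deviation=[0.5, 1.0, 0.5], gamma=2, capacity=8.0))
    # True: 6.5 nominal + 1.5 protection = 8.0 <= 8.0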
The emergence of Network Function Virtualization (NFV) and Service Function Chaining (SFC) together enables flexible and agile network management and traffic engineering. Recently, Network Function Parallelism (NFP) has been proposed to remove the need for sequential execution of services and, hence, significantly reduce the latency of an SFC. While some studies have investigated how to identify parallel paths to enhance the efficiency of parallelism, less effort has been devoted to efficient network function embedding that takes parallelism opportunities into account. Hence, this work aims at solving the instance deployment problem so as to prevent parallel functions from waiting for each other before continuing to the next stage. As it is extremely difficult to optimize VNF (Virtual Network Function) embedding for uncertain parallelism opportunities, we propose a practical embedding scoring mechanism to enhance the likelihood of low-cost function parallelism. Our scoring design tends to cluster independent functions for efficient parallelism while ensuring load balancing. After instance deployment, we then assign function instances to parallel chains so as to minimize their end-to-end latency while balancing the workload of instances. Our evaluation results show that the proposed scoring-based embedding scheme ensures homogeneous delays of parallel functions and, hence, reduces the end-to-end latency for NFP by up to 22% compared to an embedding algorithm that does not consider parallelism opportunities.
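The concrete scoring formula is not given in this abstract, so the Python sketch below only illustrates the stated intuition: prefer servers already hosting functions independent of the candidate (more local parallelism opportunities) while penalizing heavily loaded servers. The embed_score function, the alpha/beta weights, and the toy server data are assumptions for illustration only.

    # Hypothetical scoring sketch: reward co-location with independent functions,
    # penalize server load; weights are illustrative.

    def embed_score(candidate, server_funcs, server_load, depends, alpha=1.0, beta=0.5):
        independent = sum(
            1 for f in server_funcs
            if (candidate, f) not in depends and (f, candidate) not in depends
        )
        return alpha * independent - beta * server_load

    def pick_server(candidate, servers, depends):
        # servers: {name: (hosted_functions, normalized_load)}
        return max(
            servers,
            key=lambda s: embed_score(candidate, servers[s][0], servers[s][1], depends),
        )

    servers = {"edge-1": (["FW", "DPI"], 0.6), "edge-2": (["NAT"], 0.2)}
    depends = {("LB", "NAT")}
    print(pick_server("LB", servers, depends))   # "edge-1": two independent co-residents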
The emergence of Network Function Virtualization (NFV) and Service Function Chaining (SFC) together enables flexible and agile network management and traffic engineering. Due to the sequential execution nature of an SFC, its latency grows linearly with the number of functions. To resolve this issue, function parallelization has recently been proposed to let independent functions work simultaneously. Existing solutions, however, assume that all function instances are installed on the same physical machine and thus can be parallelized with little overhead. Nowadays, most networks deploy function instances on distributed servers for load balancing, so parallelization across different servers would in fact introduce a non-negligible cost of duplicating or merging packets. Hence, in this work, we propose PPC (Partial Parallel Chaining), which parallelizes functions only if parallelization can indeed reduce the latency after accounting for function placement and the required additional parallelization cost. To this end, we design two schemes, partial parallelism enumeration and instance assignment, to identify the optimal partial parallelism that minimizes the latency. Our simulation results show that PPC effectively adapts the degree of parallelism and hence outperforms both sequential chaining and full parallelism in general scenarios. Overall, the latency reduction can be up to 47.2% and 35.2% compared to sequential chaining and full parallelism, respectively.
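A minimal Python sketch of the cost-aware test that PPC's idea implies (not its actual enumeration or assignment schemes): two independent functions are parallelized only if the slower branch plus the packet duplicate/merge overhead beats running them back to back. The function name and the latency/overhead values are hypothetical.

    # Hypothetical "parallelize only when it pays off" check.

    def better_in_parallel(lat_a, lat_b, copy_merge_cost):
        sequential = lat_a + lat_b
        parallel = max(lat_a, lat_b) + copy_merge_cost
        return parallel < sequential

    print(better_in_parallel(4.0, 3.5, 1.0))   # True:  5.0 < 7.5
    print(better_in_parallel(4.0, 0.3, 1.0))   # False: 5.0 >= 4.3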