Enforcing scheduling policies with software schedulers at end-hosts leads to high CPU consumption, low throughput, and inaccurate policy enforcement. To address these issues, offloading packet schedulers to the network interface card presents a promising research direction. However, existing attempts suffer from inflexible on-NIC scheduling that cannot execute complex hierarchies of network policies. In this article, we propose FlowValve(+), a general framework for multi-queue packet scheduling on SoC-based SmartNICs. The key insight behind FlowValve(+) is to abstract the inherent queueing system as a single FIFO queue and perform specialized tail drop so that the FIFO queue carries traffic in the expected flow proportions. FlowValve(+) leverages hardware acceleration to deliver high throughput while substantially reducing CPU and memory usage on end-hosts. We prototype FlowValve(+) on Netronome Agilio 40GbE and NVIDIA BlueField-2 100GbE SmartNICs and demonstrate that it accurately enforces network policies while driving TCP traffic at 40 Gbps and 80 Gbps on the two platforms, respectively. Moreover, FlowValve(+) saves two CPU cores compared to DPDK packet schedulers.
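To illustrate the stated insight, the sketch below shows one way proportional tail drop in front of a single FIFO queue could work: per-flow byte counters over a fixed accounting window, with a packet dropped at the tail once its flow exceeds its weighted share of the window. This is a minimal illustration under assumed details, not the FlowValve(+) implementation; the constants, names, and windowing scheme (WINDOW_BYTES, admit_packet, etc.) are hypothetical.

```c
/* Hypothetical sketch of weighted tail drop in front of a single FIFO
 * queue, so admitted traffic roughly follows configured flow
 * proportions. Not the authors' code; all names and constants are
 * illustrative. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_FLOWS    4
#define WINDOW_BYTES (1u << 20)   /* reset accounting every ~1 MiB admitted */

struct flow_state {
    uint32_t weight;              /* relative share, e.g. 1, 2, 4 ... */
    uint64_t admitted;            /* bytes admitted in the current window */
};

static struct flow_state flows[NUM_FLOWS];
static uint64_t total_admitted;
static uint32_t total_weight;

void sched_init(const uint32_t weights[NUM_FLOWS])
{
    memset(flows, 0, sizeof(flows));
    total_admitted = 0;
    total_weight = 0;
    for (int i = 0; i < NUM_FLOWS; i++) {
        flows[i].weight = weights[i];
        total_weight += weights[i];
    }
}

/* Decide whether a packet of `len` bytes from `flow_id` may enter the
 * shared FIFO. The packet is tail-dropped when admitting it would push
 * the flow past its weight/total_weight share of the window. */
bool admit_packet(int flow_id, uint32_t len)
{
    struct flow_state *f = &flows[flow_id];

    /* Start a new accounting window once enough bytes were admitted. */
    if (total_admitted >= WINDOW_BYTES) {
        for (int i = 0; i < NUM_FLOWS; i++)
            flows[i].admitted = 0;
        total_admitted = 0;
    }

    /* Bytes of this window the flow is entitled to. */
    uint64_t quota = (uint64_t)WINDOW_BYTES * f->weight / total_weight;

    if (f->admitted + len > quota)
        return false;             /* specialized tail drop */

    f->admitted += len;
    total_admitted += len;
    return true;                  /* enqueue into the single FIFO */
}
```

In this reading, the scheduler never reorders packets inside the FIFO; the flow mix is shaped entirely by which packets are admitted at the tail, which is what makes the approach amenable to simple hardware-assisted queueing on the SmartNIC.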