Details
ISBN:
(Print) 9780769543017
Memory efficiency with compact data structures for Internet Protocol (IP) lookup has recently regained much interest in the research community. In this paper, we revisit the classic trie-based approach for solving the longest prefix matching (LPM) problem used in IP lookup. In particular, we target our solutions at a class of large and sparsely-distributed routing tables, such as those potentially arising in the next-generation IPv6 routing protocol. Due to the longer prefixes and much larger address space, a straightforward implementation of trie-based LPM can significantly increase the number of nodes and/or memory required for IP lookup. Additionally, due to the limited on-chip memory and number of I/O pins of state-of-the-art Field Programmable Gate Arrays (FPGAs), existing designs cannot support large IPv6 routing tables consisting of over 300K prefixes. We propose two algorithms to compress the uni-bit-trie representation of a given routing table: (1) single-prefix distance-bounded path compression and (2) multiple-prefix distance-bounded path compression. These algorithms determine the optimal maximum skip distance at each node of the trie to minimize the total memory requirement. Our algorithms demonstrate a substantial reduction in memory footprint compared with the uni-bit-trie algorithm (1.86x for IPv4 and 6.16x for IPv6), and with the original path compression algorithm (1.77x for IPv4 and 1.53x for IPv6). Furthermore, implementation on a state-of-the-art FPGA shows that our algorithms achieve 466 million lookups per second and are well suited for 100Gbps lookup. The implementation also scales to support larger routing tables and longer prefix lengths when moving from IPv4 to IPv6.
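To make the underlying idea concrete, here is a minimal sketch (not the paper's optimized algorithm) of LPM on a uni-bit trie with distance-bounded path compression: chains of single-child, prefix-free nodes are folded into a "skip" bit string at the surviving node, with the skip length capped by a `max_skip` bound. The class and function names, and the use of bit strings rather than packed words, are illustrative assumptions for readability.

```python
class Node:
    """One uni-bit trie node; `skip` holds bits merged in by compression."""
    def __init__(self):
        self.skip = ""     # run of bits consumed on arrival (illustrative)
        self.child = {}    # '0' / '1' -> child Node
        self.value = None  # next hop stored for the prefix ending here

def insert(root, prefix_bits, value):
    """Insert into the plain (uncompressed) uni-bit trie: one bit per edge."""
    node = root
    for b in prefix_bits:
        node = node.child.setdefault(b, Node())
    node.value = value

def compress(node, max_skip):
    """Distance-bounded path compression (sketch): fold chains of
    single-child, valueless nodes into the descendant's skip string,
    as long as the merged skip stays within max_skip bits."""
    for b in list(node.child):
        child = node.child[b]
        while child.value is None and len(child.child) == 1:
            (cb, grand), = child.child.items()
            merged = child.skip + cb + grand.skip
            if len(merged) > max_skip:
                break
            grand.skip = merged
            child = grand
        node.child[b] = child
        compress(child, max_skip)

def lookup(root, addr_bits):
    """Longest prefix match: walk edges one bit at a time and verify
    each node's skip bits against the address before descending."""
    node, i, best = root, 0, root.value
    while i < len(addr_bits):
        nxt = node.child.get(addr_bits[i])
        if nxt is None:
            break
        i += 1
        if not addr_bits.startswith(nxt.skip, i):
            break  # compressed bits mismatch: best match so far stands
        i += len(nxt.skip)
        node = nxt
        if node.value is not None:
            best = node.value
    return best
```

The `max_skip` cap mirrors the "distance bound" in the abstract: a larger bound removes more pass-through nodes but each surviving node must store more skip bits, so the optimal bound trades node count against per-node width; the paper's contribution is choosing that bound optimally per node.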