ISBN (print): 9798400712722
We introduce Nada, the first framework to autonomously design network algorithms by leveraging the generative capabilities of large language models (LLMs). Starting with an existing algorithm implementation, Nada enables LLMs to create a wide variety of alternative designs in the form of code blocks. It then efficiently identifies the top-performing designs through a series of filtering techniques, minimizing the need for full-scale evaluations and significantly reducing computational costs. Using adaptive bitrate (ABR) streaming as a case study, we demonstrate that Nada produces novel ABR algorithms, previously unknown to human developers, that consistently outperform the original algorithm in diverse network environments, including broadband, satellite, 4G, and 5G.
In this article, we explicitly derive the limiting degree distribution of the shortest path tree from a single source on various random network models with edge weights. We determine the asymptotics of the degree distribution for large degrees of this tree and compare it to the degree distribution of the original graph. We perform this analysis for the complete graph with edge weights that are powers of exponential random variables (weak disorder in the stochastic mean-field model of distance), as well as on the configuration model with edge weights drawn according to any continuous distribution. In the latter, the focus is on settings where the degrees obey a power law, and we show that the shortest path tree again obeys a power law with the same degree power-law exponent. We also consider random r-regular graphs for large r, and show that the degree distribution of the shortest path tree is closely related to the shortest path tree for the stochastic mean-field model of distance. We use our results to shed light on an empirically observed bias in network sampling methods. This is part of a general program initiated in previous works by Bhamidi, van der Hofstad and Hooghiemstra [Ann. Appl. Probab. 20 (2010) 1907-1965], [Combin. Probab. Comput. 20 (2011) 683-707], [Adv. in Appl. Probab. 42 (2010) 706-738] of analyzing the effect of attaching random edge lengths on the geometry of random network models.
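The first setting above (complete graph with exponential edge weights, i.e., the stochastic mean-field model of distance) is easy to probe empirically. The sketch below, with function and variable names of our own choosing, runs Dijkstra from a single source on K_n with lazily drawn i.i.d. weights and collects the degree sequence of the resulting shortest-path tree; it illustrates the object the article analyzes, not the article's proofs.

```python
import heapq
import random
from collections import Counter

def spt_degrees(n, weight, source=0):
    """Dijkstra from `source` on the complete graph K_n with i.i.d. edge
    weights drawn lazily by `weight()`; returns the degree of each node
    in the resulting shortest-path tree."""
    drawn = {}
    def w(u, v):
        key = (u, v) if u < v else (v, u)
        if key not in drawn:
            drawn[key] = weight()
        return drawn[key]

    dist = {source: 0.0}
    parent = {}
    heap = [(0.0, source)]
    done = set()
    while heap:
        d, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        for v in range(n):
            if v != u and v not in done:
                nd = d + w(u, v)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    parent[v] = u
                    heapq.heappush(heap, (nd, v))

    deg = Counter()
    for child, par in parent.items():
        deg[child] += 1
        deg[par] += 1
    return deg

random.seed(1)
deg = spt_degrees(60, lambda: random.expovariate(1.0))
assert sum(deg.values()) == 2 * 59   # a tree on 60 nodes has 59 edges
assert len(deg) == 60                # every node is reached
```

Repeating this over many seeds and plotting the empirical tail of `deg` is the experiment whose limit the article computes analytically.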
Blockchain leader-based protocols elect leaders for proposing the next block of transactions. Proposed blocks need to pass a validation routine in order to be added to the blockchain. Proposers may prioritize certain transactions based on their fees or accounts, which enables attackers to gain profits through block building while causing negative impacts on other users. A fair block selection follows a random selection of pending transactions among those that a proposer is aware of. We propose CFTO, a protocol that aims to encourage fair block selection in a leader-based blockchain network while taking into account real network conditions, such as the network's topology structure and the forwarding protocol. CFTO offers two main contributions. First, it provides incentives for acting honestly while penalizing malicious and dishonest nodes. To accomplish this, we use a reputation system, whereby each node is given a reputation score based on its actions. Second, it includes an algorithm that evaluates proposed blocks based on the zone structure of the network. Furthermore, we adapt the evaluation algorithm to fit the additional order constraints implied by Ethereum transaction ordering. We demonstrate the improved accuracy of CFTO in detecting fair blocks, in terms of increasing the probability of approving fair blocks and decreasing the probability of approving unfair blocks, by conducting experiments and comparing the results with Helix (Yakira et al., 2021), a previously proposed consensus algorithm for fair block selection. As part of our experiments, we also compare certain features with those of previous studies.
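The abstract does not specify the reputation update rule, so the following is a purely hypothetical sketch of the kind of score such a system might keep: honest behaviour slowly raises a node's score, while detected dishonesty cuts it sharply, making trust slower to regain than to lose. The function name, parameters, and constants are all our own.

```python
def update_reputation(rep, node, honest, gain=0.1, loss=0.3, floor=0.0, cap=1.0):
    """Hypothetical asymmetric reputation update: honest actions add a
    small `gain`, dishonest ones subtract a larger `loss`, clamped to
    [floor, cap]. New nodes start at a neutral 0.5."""
    r = rep.get(node, 0.5)
    r = min(cap, r + gain) if honest else max(floor, r - loss)
    rep[node] = r
    return r

rep = {}
update_reputation(rep, "n1", honest=True)    # neutral 0.5 rises to 0.6
update_reputation(rep, "n1", honest=False)   # one offence drops it to 0.3
assert abs(rep["n1"] - 0.3) < 1e-9
```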
Payment channel networks (PCNs) are a leading method to scale the transaction throughput in cryptocurrencies. Two participants can use a bidirectional payment channel for making multiple mutual payments without committing them to the blockchain. Opening a payment channel is a slow operation that involves an on-chain transaction locking a certain amount of funds. These aspects limit the number of channels that can be opened or maintained. Users may route payments through a multi-hop path and thus avoid opening and maintaining a channel for each new destination. Unlike regular networks, in PCNs capacity depends on the usage patterns and, moreover, channels may become unidirectional. Since payments often fail due to channel depletion, a protection scheme to overcome failures is of interest. We define the stopping time of a payment channel as the time at which the channel becomes depleted. We analyze the mean stopping time of a channel as well as that of a network with a set of channels and examine the stopping time of channels in particular topologies. We then propose a scheme for optimizing the capacity distribution among the channels in order to increase the minimal stopping time in the network. We conduct experiments and demonstrate the accuracy of our model and the efficiency of the proposed optimization scheme.
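A channel's stopping time can be sketched with a gambler's-ruin model: the balance performs a random walk driven by unit payments in the two directions and the channel is depleted when either side's funds hit zero. The model and names below are our own simplification for illustration; the paper's actual payment model may differ.

```python
import random

def mean_stopping_time(capacity, start, p_right, trials, seed=0):
    """Monte Carlo estimate of the mean stopping time of a channel whose
    balance walks on {0, ..., capacity}: each unit payment moves the
    balance up with probability `p_right`, down otherwise; the channel
    is depleted when the balance reaches either boundary."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        bal, t = start, 0
        while 0 < bal < capacity:
            bal += 1 if rng.random() < p_right else -1
            t += 1
        total += t
    return total / trials

# For a symmetric walk the exact mean absorption time from balance b
# is b * (capacity - b); starting mid-channel here gives 5 * 5 = 25.
est = mean_stopping_time(10, 5, 0.5, trials=20000)
assert abs(est - 25) < 2.0
```

Under this toy model, concentrating capacity where payment flow is most asymmetric is exactly the kind of lever the paper's capacity-distribution optimization pulls on.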
We consider a stateless 'amnesiac' variant of the stateful distributed network flooding algorithm, expanding on our conference papers [PODC'19, STACS'20]. Flooding begins with a set of source 'initial' nodes I seeking to broadcast a message M in rounds, in a network represented by an undirected graph (G, E) with set of nodes G and edges E. In every round, nodes with M send M to all neighbours from which they did not receive M in the previous round. Nodes do not remember earlier rounds, raising the possibility that M circulates indefinitely. Stateful flooding overcomes this by nodes recording every message circulated and ignoring M if received again. This overhead was assumed to be necessary. We show that almost optimal broadcast can still be achieved without this overhead. We prove that amnesiac flooding terminates on every finite graph and derive sharp bounds for termination times. Define (G, E) to be I-bipartite if the quotient graph, contracting all nodes in I to a single node, is bipartite. We prove that, if d is the diameter and e(I) the eccentricity of the set I, flooding terminates in e(I) rounds if (G, E) is I-bipartite and in j rounds with e(I) < j <= e(I) + d + 1 <= 2d + 1 if (G, E) is non-I-bipartite. The separation in the termination times can be used for distributed discovery of topologies/distances in graphs. Termination is guaranteed if edges are lost during flooding but not, in general, if there is a delay at an edge. However, the cases of single-edge fixed delays of duration tau rounds in single-source bipartite graphs terminate by round 2d + tau - 1, and all cases of multiple-edge fixed delays in multiple-source cycles terminate.
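The sending rule is simple enough to simulate directly. A minimal sketch (our own function name; adjacency lists as a dict) that counts the rounds until no message is in flight; the two assertions reproduce the bipartite bound e(I) and the non-bipartite bound e(I) + d + 1 on tiny examples:

```python
def amnesiac_flooding(adj, sources):
    """Stateless flooding: each round, every node that received M sends
    it to all neighbours except those it received it from that round.
    Returns the number of rounds until no message is in flight."""
    # Messages in flight during the first round, as (sender, receiver).
    in_flight = {(s, v) for s in sources for v in adj[s]}
    rounds = 0
    while in_flight:
        rounds += 1
        received_from = {}
        for u, v in in_flight:
            received_from.setdefault(v, set()).add(u)
        # Next round: forward to every neighbour one did not just hear from.
        in_flight = {(v, w)
                     for v, senders in received_from.items()
                     for w in adj[v] if w not in senders}
    return rounds

path = {0: [1], 1: [0, 2], 2: [1]}          # bipartite, e(I) = 2
assert amnesiac_flooding(path, {0}) == 2
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}     # odd cycle, e(I) + d + 1 = 3
assert amnesiac_flooding(tri, {0}) == 3
```

The gap between 2 and 3 rounds on these two three-node graphs is the separation the abstract proposes to exploit for distributed topology discovery.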
The Invertible Bloom Lookup Table (IBLT) is a concise probabilistic data structure for set representation that supports a listing operation, i.e., the recovery of the elements in the represented set. Its applications can be found in network synchronization and traffic monitoring as well as in error-correction codes. An IBLT can list its elements with a success probability that depends on the size of the allocated memory and the size of the represented set, and it can fail with small probability even for relatively small sets. While previous works only studied the failure probability of the IBLT, this work initiates a worst-case analysis of the IBLT that guarantees successful listing for all sets of a certain size. The worst-case study is important since a failure of the IBLT imposes high overhead. We describe a novel approach that guarantees successful listing when the set satisfies a tunable upper bound on its size. To allow that, we develop multiple constructions based on various coding techniques, such as stopping sets and the stopping redundancy of error-correcting codes, and Steiner systems, as well as new methodologies we develop. We analyze the sizes of IBLTs with listing guarantees obtained by the various methods, as well as their mapping memory and runtime overheads. Lastly, we study lower bounds on the achievable sizes of IBLTs with listing guarantees and verify the results of the paper by simulations.
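To make the listing operation concrete, here is a toy insert-only IBLT. A real IBLT adds a hash/checksum field per cell (needed to detect pure cells once deletions are allowed) and uses proper hash functions; the simple arithmetic hashes below are for reproducibility of this sketch only.

```python
class IBLT:
    """Toy insert-only IBLT for set representation: each cell stores a
    count and the XOR of the keys mapped to it. With insertions only, a
    cell of count 1 is always pure, so listing is the standard peeling
    process the abstract refers to."""
    def __init__(self, m=30, k=3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.key_xor = [0] * m

    def _cells(self, key):
        per = self.m // self.k          # k partitioned subtables
        return [i * per + (key * (2 * i + 1)) % per for i in range(self.k)]

    def insert(self, key):
        for c in self._cells(key):
            self.count[c] += 1
            self.key_xor[c] ^= key

    def list_entries(self):
        """Peel pure cells until stuck; returns (recovered, success)."""
        out, progress = set(), True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] == 1:
                    key = self.key_xor[c]
                    out.add(key)
                    for c2 in self._cells(key):
                        self.count[c2] -= 1
                        self.key_xor[c2] ^= key
                    progress = True
        return out, all(c == 0 for c in self.count)

t = IBLT()
for x in (3, 17, 42, 99):
    t.insert(x)
recovered, ok = t.list_entries()
assert ok and recovered == {3, 17, 42, 99}
```

Listing fails exactly when peeling gets stuck (no pure cell remains); the paper's worst-case constructions choose the cell mappings so that this cannot happen for any set up to the guaranteed size.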
ISBN (print): 9798350376784; 9798350376777
With the rapid development of the Internet and mobile applications, traffic classification plays an increasingly crucial role in network management. Traditional traffic classification methods are limited because they rely on pre-defined features, and technologies in modern applications, such as HTTPS and content delivery networks (CDNs), may reduce their classification accuracy. Deep-learning-based methods can capture higher-level features, but overly complex models can lead to higher computational complexity and affect generalization performance. In this paper, we propose RFT-ACG, a method using a cascaded classifier to classify encrypted application traffic. Generally, the majority of encrypted flows can be classified using only the features of the primary session. Such flows are referred to as "simple flows", while the remaining encrypted flows, which require complicated session features of multi-party interactions to achieve accurate traffic classification, are referred to as "complex flows". Given traffic blocks generated by applications, a divide-and-conquer method is first exploited to determine whether they are simple or complex flows. A simple flow is directly classified using simple machine learning methods; otherwise, the complex flow is handed over to more complex learning methods for further processing. Our experimental results indicate that, compared to recently proposed methods, RFT-ACG can classify most of the network traffic in the first stage, effectively reducing the proportion of network traffic that needs to be classified by constructing complex traffic interaction graphs, while achieving an accuracy improvement of approximately 4%.
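The simple/complex split can be sketched as a generic confidence-gated cascade. The stand-in classifiers and features below are hypothetical illustrations, not RFT-ACG's actual models or session features:

```python
def cascade_classify(flows, simple_clf, complex_clf, confidence=0.9):
    """Two-stage cascade: `simple_clf` sees only primary-session
    features and returns (label, confidence); flows it is unsure about
    are deferred to `complex_clf`, which may use multi-party
    interaction features. Returns the labels and the deferred count."""
    labels, deferred = {}, []
    for fid, feats in flows.items():
        label, conf = simple_clf(feats)
        if conf >= confidence:
            labels[fid] = label          # "simple flow": stage 1 suffices
        else:
            deferred.append(fid)         # "complex flow": go to stage 2
    for fid in deferred:
        labels[fid] = complex_clf(flows[fid])
    return labels, len(deferred)

# Hypothetical stand-ins: stage 1 keys off mean packet size of the
# primary session, stage 2 off the number of interacting peers.
simple = lambda f: ("video", 0.95) if f["mean_pkt"] > 1000 else ("unknown", 0.3)
complex_ = lambda f: "chat" if f["peers"] > 2 else "web"
flows = {
    "f1": {"mean_pkt": 1400, "peers": 1},
    "f2": {"mean_pkt": 200,  "peers": 5},
}
labels, n_deferred = cascade_classify(flows, simple, complex_)
assert labels == {"f1": "video", "f2": "chat"} and n_deferred == 1
```

The reported gain comes from keeping `n_deferred` small: only the flows that genuinely need expensive interaction-graph features ever reach stage 2.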
ISBN (print): 9798350319552; 9798350319545
This paper addresses the problem of finding the optimal location for the hubs in a network under demand uncertainty, where the allocation of the nodes is treated as a second-stage decision. The contribution is a mathematical model tailored to the operational requirements of the GUROBI Optimizer, which allows us to preliminarily assess the performance of the proposed model.
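The abstract does not state the formulation. As a hedged illustration only, a two-stage stochastic hub location model with first-stage hub-opening variables and scenario-dependent allocation might take the following generic shape (the symbols $f_k$, $p_s$, $w_{ij}^s$, $d_{ik}$, the discount $\alpha$, and the variables $y$, $x$ are our own notation, not the paper's):

```latex
\begin{align*}
\min_{y \in \{0,1\}^n} \quad & \sum_{k} f_k\, y_k \;+\; \sum_{s \in S} p_s\, Q(y, s), \\
Q(y, s) \;=\; \min_{x \ge 0} \quad & \sum_{i,j,k,m} w_{ij}^{s}\,
      \bigl( d_{ik} + \alpha\, d_{km} + d_{mj} \bigr)\, x_{ijkm} \\
\text{s.t.} \quad & \sum_{k,m} x_{ijkm} = 1 \qquad \forall\, i, j, \\
 & \sum_{m} x_{ijkm} \le y_k \quad \forall\, i, j, k, \qquad
   \sum_{k} x_{ijkm} \le y_m \quad \forall\, i, j, m.
\end{align*}
```

Here $y_k$ opens hub $k$ at cost $f_k$ before demands are known, scenario $s$ occurs with probability $p_s$ and demand $w_{ij}^s$, and $x_{ijkm}$ allocates the flow from $i$ to $j$ through hubs $k$ and $m$ in the second stage, with $\alpha < 1$ discounting inter-hub transport.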
ISBN (print): 9781450398923
Emerging software-defined networking technologies enable more adaptive communication infrastructures, allowing for quick reactions to changes in networking requirements by exploiting the workload's temporal structure. However, operating networks adaptively is algorithmically challenging, as meeting networks' stringent dependability requirements relies on maintaining basic consistency and performance properties, such as loop freedom and congestion minimization, even during the update process. This paper leverages an augmentation-speed tradeoff to significantly speed up consistent network updates. We show that allowing for a small and short (hence practically tolerable, e.g., using buffering) oversubscription of links allows us to solve many network update instances much faster, as well as to reduce computational complexities (i.e., the running times of the algorithms). We first explore this tradeoff formally, revealing the computational complexity of scheduling updates. We then present and analyze algorithms that maintain logical and performance properties during the update. Using an extensive simulation study, we find that the tradeoff is even more favorable in practice than our analytical bounds suggest. In particular, we find that by allowing just 10% augmentation, update times reduce by more than 32% on average, across a spectrum of real-world networks.
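The augmentation-speed tradeoff can be illustrated with a toy round-based checker. We assume, as a simplification of our own, that a flow switching in a given round transiently occupies both its old and new routes during that round (in-flight packets on the old path, new rules already installed); the scenario below is hypothetical:

```python
def max_load_during_update(capacity, demand, old_route, new_route, schedule):
    """Worst relative link load over the rounds of an update schedule.
    `schedule` is a list of rounds, each a list of flows switching in
    that round; a switching flow occupies both routes for one round."""
    switched = set()
    worst = 0.0
    for batch in schedule:
        load = {}
        for f in demand:
            if f in switched:
                links = set(new_route[f])
            elif f in batch:
                links = set(old_route[f]) | set(new_route[f])
            else:
                links = set(old_route[f])
            for e in links:
                load[e] = load.get(e, 0.0) + demand[f]
        worst = max(worst, max(load[e] / capacity[e] for e in load))
        switched.update(batch)
    return worst

# Hypothetical scenario: f1 and f2 move from link x to link y while f3
# vacates y. Updating everything in one round overloads y by 20%, while
# the two-round schedule that lets f3 leave first needs no augmentation.
capacity = {"x": 10, "y": 10, "z": 10}
demand = {"f1": 4, "f2": 4, "f3": 4}
old = {"f1": ["x"], "f2": ["x"], "f3": ["y"]}
new = {"f1": ["y"], "f2": ["y"], "f3": ["z"]}
assert abs(max_load_during_update(capacity, demand, old, new,
                                  [["f1", "f2", "f3"]]) - 1.2) < 1e-9
assert abs(max_load_during_update(capacity, demand, old, new,
                                  [["f3"], ["f1", "f2"]]) - 0.8) < 1e-9
```

Tolerating the 20% transient buys a one-round update; refusing it costs an extra round, which is the tradeoff the paper formalizes and exploits.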
The survivable logical topology mapping (SLTM) problem in IP-over-WDM networks is to map each link in the logical topology (IP layer) onto a lightpath in the physical topology (optical layer) such that the failure of a physical link does not cause the logical topology to become disconnected. This problem is known to be NP-complete. For the SLTM problem, two lines of investigation have been reported in the literature: the mathematical programming approach [1] and the structural approach introduced by Kurant and Thiran in [2] and pursued by Thulasiraman et al. [3,4,5]. In this paper we present an integrated treatment of the theoretical foundation of the survivable topology mapping problem presented in [3,4,5]. We believe that the algorithmic strategy developed in this paper will serve as an important phase in any strategy in the emerging area of resilient slicing of elastic optical networks. We conclude with a comparative, simulation-based evaluation of the different algorithmic strategies developed in the paper, and also point to applications beyond IP-over-WDM optical networks, in particular, the survivable design of interdependent multi-layer cyber-physical systems such as smart power grids.
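While finding a survivable mapping is NP-complete, checking a candidate mapping is straightforward: fail each physical link in turn, drop every logical link whose lightpath traverses it, and test connectivity. A sketch with union-find connectivity (the link names and example mappings are hypothetical):

```python
def is_survivable(logical_edges, mapping, n_logical_nodes):
    """Check an SLTM candidate: for every single physical-link failure,
    the logical links whose lightpaths survive must still connect the
    logical topology. `mapping` maps each logical edge to the list of
    physical links its lightpath uses."""
    physical_links = {p for path in mapping.values() for p in path}
    for failed in physical_links:
        survivors = [e for e in logical_edges if failed not in mapping[e]]
        if not connected(n_logical_nodes, survivors):
            return False
    return True

def connected(n, edges):
    """Union-find connectivity test on nodes 0..n-1."""
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(i) for i in range(n)}) == 1

# Logical triangle on nodes 0, 1, 2 over hypothetical physical links p1..p3.
tri = [(0, 1), (1, 2), (0, 2)]
bad  = {(0, 1): ["p1"], (1, 2): ["p2"], (0, 2): ["p1", "p3"]}  # p1 shared
good = {(0, 1): ["p1"], (1, 2): ["p2"], (0, 2): ["p3"]}        # disjoint
assert not is_survivable(tri, bad, 3)   # losing p1 cuts two logical links
assert is_survivable(tri, good, 3)
```

The structural approaches cited above can be read as ways of constructing mappings for which this check provably succeeds, rather than verifying it failure by failure.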