In this paper we report on extensive experiments for determining small partial dominating sets in various types of real and synthetic social networks. Our experiments ran on several real network datasets made available by the Stanford Network Analysis Project and on synthetic power-law and random networks created with social network generators. To compute partial dominating sets on these networks we used the five algorithms compared in [4], adapted for partial dominating sets. Our experiments show that several algorithms can efficiently find high-quality approximations to the minimum-size partial dominating set problem; the best choice of algorithm depends on the network characteristics and on the value of the coverage parameter.
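The five algorithms from [4] are not reproduced here; as a rough illustration of the kind of heuristic involved, the following is a minimal sketch of a standard greedy for partial domination (repeatedly pick the vertex whose closed neighbourhood covers the most not-yet-dominated vertices, until a fraction rho of the vertices is dominated). The graph generator and parameter choices are illustrative only.

```python
# Hedged sketch: a standard greedy heuristic for partial domination,
# not a reproduction of any of the five algorithms compared in [4].
import math
import networkx as nx

def greedy_partial_dominating_set(G: nx.Graph, rho: float) -> set:
    """A vertex dominates itself and its neighbours; stop once at least
    ceil(rho * n) vertices are dominated."""
    target = math.ceil(rho * G.number_of_nodes())
    dominated, chosen = set(), set()
    while len(dominated) < target:
        # pick the vertex whose closed neighbourhood covers most new vertices
        best = max(
            (v for v in G if v not in chosen),
            key=lambda v: len(({v} | set(G[v])) - dominated),
        )
        chosen.add(best)
        dominated |= {best} | set(G[best])
    return chosen

# example on a synthetic power-law-like network
G = nx.barabasi_albert_graph(1000, 3, seed=0)
D = greedy_partial_dominating_set(G, rho=0.9)
print(len(D), "vertices dominate 90% of the graph")
```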
ISBN (print): 9781479969609
We study the spectrum assignment (SA) problem in ring networks with shortest-path (or, more generally, fixed) routing. With fixed routing, each traffic demand follows a predetermined path to its destination. In earlier work, we showed that the SA problem can be viewed as a multiprocessor scheduling problem. Based on this insight, we prove that, under the shortest-path assumption, the SA problem can be solved in polynomial time in small rings, and we develop constant-ratio approximation algorithms for large rings. For rings of up to 16 nodes (the maximum size of a SONET/SDH ring), the approximation ratios of our algorithms are strictly smaller than the best ratio known to date.
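As a hedged illustration of the setting only (not one of the constant-ratio algorithms developed in the paper), the sketch below assigns contiguous spectrum slots to demands with fixed paths on a ring using a simple first-fit rule; the demand paths and widths are hypothetical.

```python
# Hedged sketch: first-fit spectrum assignment with fixed routing.
def first_fit_sa(demands):
    """demands: list of (links, width) where `links` is the set of ring links
    on the demand's fixed path and `width` is the number of contiguous
    spectrum slots it needs.  Returns a starting slot for each demand."""
    used = {}          # link -> set of occupied slot indices
    assignment = []
    for links, width in demands:
        start = 0
        # slide the block of `width` slots up until it is free on every link
        while any(s in used.get(link, set())
                  for link in links for s in range(start, start + width)):
            start += 1
        for link in links:
            used.setdefault(link, set()).update(range(start, start + width))
        assignment.append(start)
    return assignment

# three demands on a 4-node ring with links 0..3 (hypothetical instance)
print(first_fit_sa([({0, 1}, 2), ({1, 2}, 1), ({3, 0}, 3)]))
```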
k-slow burning is a model for contagion in social networks. In this model, given an undirected graph G, in every time step first every burning vertex spreads the fire to up to k of its neighbours, and then one additional source of fire is ignited. The k-slow burning number, denoted by b_s(k, G), is the minimum number of time steps needed until the whole graph is burning. This model can be seen as a combination of the classic graph burning problem and the much older (k-)broadcasting problem. We prove NP-hardness of the k-slow burning problem for every fixed k on path forests, spider graphs and, most notably, the class of graphs of radius 1, where normal graph burning is solvable in polynomial time. Furthermore, we show that among all connected graphs on n vertices, the k-slow burning number of the star graph, b_s(k, S_{n-1}), is maximal for k ∈ {1, 2} and asymptotically maximal for fixed k ≥ 3. This observation motivates a generalisation of the burning number conjecture for k-slow burning. Finally, we give a 3/2-approximation for the k-slow burning problem on path forests and a 2-approximation on trees. (c) 2024 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://***/licenses/by-nc-nd/4.0/).
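The following is a minimal sketch of the k-slow burning process itself, not of the NP-hard optimisation or of the approximation algorithms in the paper: given an ignition schedule, each burning vertex passes fire to at most k currently unburned neighbours per step, and then one more source is lit. Because the choice of which k neighbours to ignite is made greedily here, the returned step count is only an upper bound on b_s(k, G) for that schedule.

```python
# Hedged sketch of the k-slow burning process under a given ignition schedule.
import networkx as nx

def simulate_slow_burning(G: nx.Graph, sources, k: int) -> int:
    """Assumes G is connected (or enough sources are supplied).
    Returns the number of steps until every vertex burns."""
    burning, steps = set(), 0
    sources = list(sources)
    while len(burning) < G.number_of_nodes():
        # spread phase: each burning vertex ignites up to k unburned neighbours
        newly = set()
        for v in burning:
            fresh = [u for u in G[v] if u not in burning and u not in newly]
            newly.update(fresh[:k])
        burning |= newly
        # ignition phase: one additional source of fire is ignited (if any left)
        if sources:
            burning.add(sources.pop(0))
        steps += 1
    return steps

P = nx.path_graph(10)
print(simulate_slow_burning(P, sources=[0, 9, 4, 2, 6], k=1))
```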
Identifying the source of epidemic-like spread in networks is crucial for removing internet viruses and for finding the source of rumors in online social networks. The challenge lies in tracing the source from a snapshot observation of infected nodes. How do we accurately pinpoint the source? Utilizing snapshot data, we apply a probabilistic approach, focusing on the graph boundary and the observed time, to detect sources via an effective maximum likelihood algorithm. A novel starlike tree approximation extends the applicability to general graphs, demonstrating versatility. Unlike previous works that rely heavily on structural properties alone, our method also incorporates temporal data for more precise source detection. We highlight the utility of the Gamma function for analyzing the asymptotic ratio of the likelihoods of different nodes being the source. Comprehensive evaluations confirm the effectiveness of the algorithm in diverse network scenarios, advancing source detection in large-scale network analysis and information dissemination strategies.
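The paper's boundary- and time-based maximum likelihood algorithm and its starlike tree approximation are not reproduced here. As a hedged baseline sketch of the snapshot-based ML idea, the code below scores each candidate source by the Monte Carlo probability that an SI spread started at it reproduces the observed snapshot after t steps; the graph, infection rate, and observation time are illustrative assumptions.

```python
# Hedged sketch: brute-force Monte Carlo maximum-likelihood source estimation.
import random
import networkx as nx

def simulate_si(G, source, t, beta, rng):
    """One SI spread of t steps from `source`, infecting across each edge
    from an infected node with probability beta per step."""
    infected = {source}
    for _ in range(t):
        infected |= {u for v in infected for u in G[v]
                     if u not in infected and rng.random() < beta}
    return infected

def mc_source_score(G, candidate, observed, t, beta, trials, rng):
    """Monte Carlo estimate of P(observed snapshot | candidate is the source)."""
    return sum(simulate_si(G, candidate, t, beta, rng) == observed
               for _ in range(trials)) / trials

rng = random.Random(7)
G = nx.random_regular_graph(3, 30, seed=7)
observed = simulate_si(G, source=0, t=2, beta=0.4, rng=rng)   # synthetic snapshot
best = max(observed, key=lambda v: mc_source_score(G, v, observed, t=2,
                                                   beta=0.4, trials=500, rng=rng))
print("true source: 0, maximum-likelihood estimate:", best)
```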
Nowadays, data storage, server replicas/mirrors, virtual machines, and various kinds of services can all be regarded as different types of resources. These resources play an important role in today's computer world because of the continuing advances in information technology. Similar resources are usually grouped together at the same site and can then be allocated to geographically distributed clients. This is the resource allocation paradigm considered in this thesis. Finding optimal solutions to the variety of problems arising from this paradigm remains a key challenge, since these problems are NP-hard. For all the resource allocation problems studied in this thesis, we are given a set of sites containing facilities as resources, a set of clients to access these facilities, an opening cost for each facility, and a connection cost for each allocation of a facility to a client. The general goal is to decide how many facilities to open at each site and to allocate the open facilities to clients so that the total cost incurred is minimized. This class of problems extends the classical NP-hard facility location problems with the additional ability to capture various practical resource allocation scenarios. To cope with the NP-hardness of these resource allocation problems, the thesis focuses on the design and analysis of approximation algorithms. The main techniques we adopt are based on linear programming, such as the primal-dual schema, LP rounding, and reductions via linear programs. The solutions we develop have great potential for optimizing the performance of many contemporary distributed systems, such as cloud computing, content delivery networks, Web caching, and Web services provisioning.
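None of the thesis's LP-based primal-dual or rounding algorithms are shown here; as a hedged sketch of the flavour of these problems, the code below runs the classical greedy for uncapacitated facility location, repeatedly opening the facility (or reusing an already open one) with the best cost-per-newly-connected-client ratio. The instance data are made up.

```python
# Hedged sketch: classical greedy for uncapacitated facility location.
def greedy_facility_location(open_cost, conn_cost):
    """open_cost[i]: cost of opening facility i.
    conn_cost[i][j]: cost of connecting client j to facility i."""
    unserved = set(range(len(conn_cost[0])))
    opened, total = set(), 0.0
    while unserved:
        best = None                                  # (ratio, facility, #clients)
        for i, fee in enumerate(open_cost):
            fee = 0.0 if i in opened else fee        # already-open facilities are free
            costs = sorted(conn_cost[i][j] for j in unserved)
            for m in range(1, len(costs) + 1):
                ratio = (fee + sum(costs[:m])) / m   # cost per newly connected client
                if best is None or ratio < best[0]:
                    best = (ratio, i, m)
        _, i, m = best
        clients = sorted(unserved, key=lambda j: conn_cost[i][j])[:m]
        total += (0.0 if i in opened else open_cost[i]) \
                 + sum(conn_cost[i][j] for j in clients)
        opened.add(i)
        unserved -= set(clients)
    return opened, total

# hypothetical instance: 2 facilities, 3 clients
print(greedy_facility_location([3, 2], [[1, 3, 6], [4, 2, 1]]))
```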
In this article we prove that the minimum-degree greedy algorithm, with adversarial tie-breaking, is a (2/3)-approximation for the MAXIMUM INDEPENDENT SET problem on interval graphs. We show that this is tight, even on unit interval graphs of maximum degree 3. We show that on chordal graphs, the greedy algorithm is a (1/2)-approximation and that this is again tight. These results contrast with the known (tight) approximation ratio of 3/(Δ+2) of the greedy algorithm for general graphs of maximum degree Δ. (c) 2024 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY license (http://***/licenses/by/4.0/).
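A minimal sketch of the minimum-degree greedy analysed in the article is given below: repeatedly pick a vertex of minimum degree in the remaining graph, add it to the independent set, and delete its closed neighbourhood. Tie-breaking here is whatever Python's min happens to choose, whereas the stated bounds hold even under adversarial tie-breaking; the interval instance is illustrative.

```python
# Hedged sketch: minimum-degree greedy for MAXIMUM INDEPENDENT SET.
import networkx as nx

def min_degree_greedy_mis(G: nx.Graph) -> set:
    H = G.copy()
    independent = set()
    while H.number_of_nodes() > 0:
        v = min(H.nodes, key=H.degree)               # a minimum-degree vertex
        independent.add(v)
        H.remove_nodes_from(list(H[v]) + [v])        # delete closed neighbourhood
    return independent

# unit-interval-style example: intervals [i, i+1.5] for i = 0..5,
# where the greedy picks three pairwise disjoint intervals
G = nx.interval_graph([(i, i + 1.5) for i in range(6)])
print(min_degree_greedy_mis(G))
```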
In this work, we consider convex optimization problems with a smooth objective function and nonsmooth functional constraints. We propose a new stochastic gradient algorithm, called the stochastic halfspace approximation method (SHAM), to solve this problem, where at each iteration we first take a gradient step for the objective function and then perform a projection step onto a halfspace approximation of a randomly chosen constraint. We propose various strategies to create this stochastic halfspace approximation and we provide a unified convergence analysis that yields new convergence rates for the SHAM algorithm in both optimality and feasibility criteria evaluated at some average point. In particular, we derive convergence rates of order O(1/√k) when the objective function is only convex, and O(1/k) when the objective function is strongly convex. The efficiency of SHAM is illustrated through detailed numerical simulations.
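A hedged sketch of a SHAM-style iteration for the simplest case, a smooth quadratic objective with linear inequality constraints (for which the halfspace approximation of a constraint is the constraint itself), is shown below. The constant step size and uniform constraint sampling are illustrative choices, not the specific strategies analysed in the paper.

```python
# Hedged sketch: gradient step on the objective, then projection onto the
# halfspace of one randomly chosen linear constraint a_j^T x <= b_j.
import numpy as np

rng = np.random.default_rng(0)

def sham_sketch(c, A, b, iters=5000, alpha=0.01):
    """Minimise 0.5*||x - c||^2 subject to A x <= b."""
    x = np.zeros_like(c)
    for _ in range(iters):
        x = x - alpha * (x - c)                 # gradient step on the objective
        j = rng.integers(len(b))                # pick one constraint at random
        viol = A[j] @ x - b[j]                  # g_j(x), with (sub)gradient A[j]
        if viol > 0:                            # project onto {z : A[j] z <= b[j]}
            x = x - viol / (A[j] @ A[j]) * A[j]
    return x

c = np.array([2.0, 2.0])
A = np.array([[1.0, 1.0]])                      # feasible set: x1 + x2 <= 1
b = np.array([1.0])
print(sham_sketch(c, A, b))                     # approx. projection of c, ~(0.5, 0.5)
```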
In this paper, we study a connected submodular function maximization problem, which arises from many applications including deploying UAV networks to serve users and placing sensors to cover Points of Interest (PoIs). Specifically, given a budget K, the problem is to find a subset S of K nodes from a graph G, so that a given submodular function f(S) is maximized and the subgraph G[S] induced by the nodes in S is connected, where the submodular function f can model many practical application problems, such as the number of users within the service areas of the deployed UAVs in S, the sum of data rates of users served by the UAVs, or the number of PoIs covered by placed sensors. We then propose a novel (1-1/e)/(2h+2)-approximation algorithm for the problem, improving on the best approximation ratio so far, (1-1/e)/(2h+3), by estimating a novel upper bound on the problem and designing a smart graph decomposition technique, where e is the base of the natural logarithm and h is a parameter that depends on the problem, whose typical value is 2. In addition, when h=2, the approximation ratio of the algorithm is at least (1-1/e)/5 (and may be as large as 1 in some special cases) when K <= 23, and is no less than (1-1/e)/6 when K >= 24, compared with the current best approximation ratio (1-1/e)/7 (= (1-1/e)/(2h+3)) for the problem. Finally, experimental results in the application of deploying a UAV network demonstrate that the number of users within the service area of the UAV network deployed by the proposed algorithm is up to 7.5% larger than those of existing algorithms, and the throughput of the UAV network deployed by the proposed algorithm is up to 9.7% larger than those of the existing algorithms. Furthermore, the empirical approximation ratio of the proposed algorithm is between 0.7 and 0.99, which is close to the theoretical maximum value of one.
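The paper's (1-1/e)/(2h+2)-approximation algorithm and its graph decomposition are not reproduced here. As a hedged baseline, the sketch below grows a connected set greedily from every possible start node, always adding the neighbouring node with the largest marginal gain of f; the coverage instance is a toy example.

```python
# Hedged sketch: connectivity-preserving greedy baseline for connected
# submodular maximization (not the paper's approximation algorithm).
import networkx as nx

def connected_greedy(G, f, K):
    best_set = set()
    for start in G:                              # try every starting node
        S = {start}
        while len(S) < K:
            frontier = {u for v in S for u in G[v]} - S
            if not frontier:
                break
            # add the neighbour with the largest marginal gain, keeping G[S] connected
            S.add(max(frontier, key=lambda u: f(S | {u}) - f(S)))
        if f(S) > f(best_set):
            best_set = S
    return best_set

# toy coverage instance: f(S) = number of users covered by the nodes in S
coverage = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d", "e"}, 3: {"e"}}
f = lambda S: len(set().union(*(coverage[v] for v in S))) if S else 0
G = nx.path_graph(4)
print(connected_greedy(G, f, K=2))
```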
ISBN (print): 9783959772495
We initiate a broad study of classical problems in the streaming model with insertions and deletions in the setting where we allow the approximation factor α to be much larger than 1. Such algorithms can use significantly less memory than the usual setting for which α = 1 + ϵ for an ϵ ∈ (0, 1). We study large approximations for a number of problems in sketching and streaming, assuming that the underlying n-dimensional vector has all coordinates bounded by M throughout the data stream:
1. For the ℓ_p norm/quasi-norm, 0 Θ(1), which holds even for randomly ordered streams or for streams in the bounded deletion model.
2. For estimating the ℓ_p norm, p > 2, we show an upper bound of O(n^{1-2/p} (log n log M)/α^2) bits for an α-approximation, and give a matching lower bound for linear sketches.
3. For the ℓ_2-heavy hitters problem, we show that the known lower bound of Ω(k log n log M) bits for identifying (1/k)-heavy hitters holds even if we are allowed to output items that are 1/(αk)-heavy, provided the algorithm succeeds with probability 1 - O(1/n). We also obtain a lower bound for linear sketches that is tight even for constant failure probability algorithms.
4. For estimating the number ℓ_0 of distinct elements, we give an n^{1/t}-approximation algorithm using O(t log log M) bits of space, as well as a lower bound of Ω(t) bits, both excluding the storage of random bits, where n is the dimension of the underlying frequency vector and M is an upper bound on the magnitude of its coordinates.
5. For α-approximation to the Schatten-p norm, we give near-optimal Õ(n^{2-4/p}/α^4) sketching dimension for every even integer p and every α ≥ 1, while for p not an even integer we obtain near-optimal sketching dimension once α = Ω(n^{1/q-1/p}), where q is the largest even integer less than p. The latter is surprising as it is unknown what the complexity of Schatten-p norm estimation is for constant approximation; we show that once the approximation factor is at least n^{1/q-1/p}, we can obtain near-optimal sketching dimension.
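To make "linear sketch" concrete, the following is a textbook AMS-style sketch for estimating the ℓ_2 norm of a frequency vector under insertions and deletions; it is a standard constant-factor estimator and does not implement the large-approximation trade-offs studied in the paper. Random signs are drawn lazily and independently here for simplicity, rather than from a 4-wise independent hash family.

```python
# Hedged sketch: classical AMS linear sketch for the squared l_2 norm
# of a frequency vector under turnstile (insert/delete) updates.
import random
import statistics

class AMSSketch:
    def __init__(self, rows=64, seed=0):
        self.rng = random.Random(seed)
        self.counters = [0.0] * rows
        self.signs = [dict() for _ in range(rows)]    # lazily drawn +/-1 hashes

    def _sign(self, r, item):
        return self.signs[r].setdefault(item, self.rng.choice((-1, 1)))

    def update(self, item, delta):                    # insertion (+) or deletion (-)
        for r in range(len(self.counters)):
            self.counters[r] += self._sign(r, item) * delta

    def estimate_l2_squared(self):
        # each squared counter is an unbiased estimate of sum_i f_i^2
        return statistics.fmean(c * c for c in self.counters)

sk = AMSSketch()
for item, delta in [("a", 3), ("b", -2), ("a", -1), ("c", 5)]:
    sk.update(item, delta)
print(sk.estimate_l2_squared())       # true value: 2^2 + (-2)^2 + 5^2 = 33
```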
In hyperspectral anomaly detection (HAD), tensor low-rankness is essential for effectively separating background from anomalies. However, most current low-rank-based methods do not exploit spatial-spectral low-rankness and nonlocal self-similarity simultaneously. To address this issue, we propose a tensor double nuclear norm-based tensor approximation (TDNN-TA) model that combines all of these priors in a unified convex framework and can be solved efficiently by a well-structured alternating direction method of multipliers. In particular, to thoroughly model the background by tensor approximation, we propose a tensor double nuclear norm (TDNN), which achieves a more precise and flexible exploration of low-rankness and nonlocal self-similarity by applying different low-rank constraints to the global tensor and the group tensor. Moreover, to exploit the intrinsic characteristics of the different priors via different tensor ranks, we employ a Fourier transform-based three-directional tensor nuclear norm to approximate the nonlocal group tensor rank, and a framelet-based three-modal tensor nuclear norm to approximate the global tensor rank. Experimental results on several real hyperspectral datasets demonstrate that TDNN-TA is effective in detecting anomalous targets of different sizes and achieves competitive results across various scenes.
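The TDNN-TA model, its Fourier- and framelet-based tensor norms, and its ADMM solver are not implemented here. As a hedged sketch of the basic building block inside such low-rank background models, the code below applies matrix singular value thresholding (the proximal operator of the nuclear norm) to separate a synthetic rank-one background from a single injected anomaly.

```python
# Hedged sketch: singular value thresholding (SVT), the proximal operator
# of the matrix nuclear norm used inside ADMM-based low-rank models.
import numpy as np

def svt(X, tau):
    """Proximal operator of tau * ||.||_* : shrink singular values by tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# toy use: separate a rank-1 "background" from a sparse "anomaly"
rng = np.random.default_rng(0)
background = np.outer(rng.standard_normal(50), rng.standard_normal(40))
anomalies = np.zeros((50, 40))
anomalies[10, 5] = 8.0
X = background + anomalies
L = svt(X, tau=2.0)                    # low-rank approximation of the scene
residual = np.abs(X - L)               # large residuals flag candidate anomalies
print(residual.argmax() == np.ravel_multi_index((10, 5), residual.shape))
```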