Malleable scheduling is a model that captures the possibility of parallelization to expedite the completion of time-critical tasks. A malleable job can be allocated and processed simultaneously on multiple machines, occupying the same time interval on all these machines. We study a general version of this setting, in which the functions determining the joint processing speed of machines for a given job follow different discrete concavity assumptions (subadditivity, fractional subadditivity, submodularity, and matroid ranks). We show that under these assumptions, the problem of scheduling malleable jobs at minimum makespan can be approximated by a considerably simpler assignment problem. Moreover, we provide efficient approximation algorithms for both the scheduling and the assignment problem, with increasingly stronger guarantees for increasingly stronger concavity assumptions, including a logarithmic approximation factor for the case of submodular processing speeds and a constant approximation factor when processing speeds are determined by matroid rank functions. Computational experiments indicate that our algorithms outperform the theoretical worst-case guarantees.
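To make the malleability model concrete, the following minimal Python sketch (the names and the particular speed function are illustrative assumptions, not taken from the paper) shows how a submodular processing-speed function determines the time a job occupies a chosen set of machines:

```python
# Hypothetical example of a submodular processing-speed function for a
# malleable job: a truncated rank f(S) = min(|S|, cap), meaning the job can
# exploit at most `cap` machines.  Truncated ranks are matroid rank
# functions, the strongest of the concavity assumptions listed above.

def speed(machine_set, cap=3):
    return min(len(machine_set), cap)

def processing_time(work, machine_set, cap=3):
    """Time the job occupies every machine in `machine_set` simultaneously."""
    if not machine_set:
        return float("inf")
    return work / speed(machine_set, cap)

work = 12.0  # total work of one malleable job
for k in range(1, 6):
    S = set(range(k))
    # Diminishing returns: adding machines beyond the cap no longer helps.
    print(k, processing_time(work, S))  # 12.0, 6.0, 4.0, 4.0, 4.0
```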
It remains an open problem to find the optimal configuration of phase shifts under the discrete constraint for intelligent reflecting surface (IRS) in polynomial time. The above problem is widely believed to be difficult because it is not linked to any known combinatorial problems that can be solved efficiently. The branch-and-bound algorithms and the approximation algorithms constitute the best results in this area. Nevertheless, this letter shows that the global optimum can actually be reached in linear time on average in terms of the number of reflective elements (REs) of IRS. The main idea is to geometrically interpret the discrete beamforming problem as choosing the optimal point on the unit circle. Although the number of possible combinations of phase shifts grows exponentially with the number of REs, it turns out that there are only a linear number of circular arcs that possibly contain the optimal point. Furthermore, the proposed algorithm can be viewed as a novel approach to a special case of the discrete quadratic program (QP).
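As a rough illustration of the geometric idea, the sketch below enumerates one candidate pointing direction per circular arc and quantizes each element's phase toward it. The function name, the O(NK log NK) sort-and-enumerate structure, and the omission of a direct link are assumptions of this sketch; the letter's actual algorithm sweeps the arcs incrementally to reach linear average time.

```python
import numpy as np

def discrete_beamforming_sketch(h, K):
    """Choose K-level phase shifts theta_n to maximize |sum_n h_n * exp(j*theta_n)|.

    For a fixed direction mu on the unit circle, the best theta_n rotates h_n
    as close to mu as possible; the resulting assignment only changes when mu
    crosses one of the N*K arc boundaries, so checking one mu per arc suffices.
    """
    alpha = np.angle(h)               # channel phases
    step = 2 * np.pi / K              # discrete phase resolution
    k = np.arange(K)

    # Arc boundaries: midpoints between consecutive rotated phases of each element.
    bounds = (alpha[:, None] + (k[None, :] + 0.5) * step).ravel() % (2 * np.pi)
    bounds.sort()
    # One candidate direction inside each arc (midpoint of the arc).
    gaps = np.diff(np.r_[bounds, bounds[0] + 2 * np.pi])
    cands = (bounds + gaps / 2) % (2 * np.pi)

    best_val, best_theta = -np.inf, None
    for mu in cands:
        # Quantize each element's required rotation to the nearest discrete level.
        theta = np.round((mu - alpha) / step) * step
        val = np.abs(np.sum(h * np.exp(1j * theta)))
        if val > best_val:
            best_val, best_theta = val, theta % (2 * np.pi)
    return best_val, best_theta

# Tiny usage example with random channels and K = 4 phase levels.
rng = np.random.default_rng(0)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)
val, theta = discrete_beamforming_sketch(h, K=4)
```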
Network lifetime maximization in the Internet of things (IoT) is of paramount importance to ensure uninterrupted data transmission and reduce the frequency of battery replacement. This letter deals with joint lifetime-outage optimization in relay-enabled IoT networks employing a multiple relay selection (MRS) scheme. The considered MRS problem is essentially a general nonlinear 0-1 program, which is NP-hard. In this work, we apply the double deep Q network (DDQN) algorithm to solve the MRS problem. Our results reveal that the proposed DDQN-MRS scheme achieves superior performance to the benchmark MRS schemes.
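For context, a minimal sketch of the double-DQN target computation (the ingredient that distinguishes DDQN from vanilla DQN) is given below; the function name, batch layout, and the toy relay-selection interpretation are illustrative assumptions, not the letter's actual network design.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Double-DQN bootstrap targets for a batch of transitions.

    The online network selects the next action, the target network evaluates
    it, which reduces the overestimation bias of vanilla DQN.
    q_online_next, q_target_next: arrays of shape (batch, n_actions).
    """
    best_actions = np.argmax(q_online_next, axis=1)
    next_q = q_target_next[np.arange(len(best_actions)), best_actions]
    return rewards + gamma * (1.0 - dones) * next_q

# Usage on a toy batch: here an "action" would encode a 0-1 relay-selection
# decision in the MRS setting (purely illustrative).
rng = np.random.default_rng(1)
q_on = rng.standard_normal((4, 8))
q_tg = rng.standard_normal((4, 8))
targets = ddqn_targets(q_on, q_tg, rewards=np.ones(4), dones=np.zeros(4))
```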
We consider robust single-machine scheduling problems. First, we prove that with uncertain processing times, minimizing the number of tardy jobs is NP-hard. Second, we show that the weighted variant of the problem has the same complexity as the nominal counterpart whenever only the weights are uncertain. Last, we provide approximation algorithms for the problems minimizing the weighted sum of completion times. Notably, our algorithms extend to more general robust combinatorial optimization problems with cost uncertainty, such as max-cut.
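For the nominal (certainty) counterpart of the weighted completion-time objective, Smith's WSPT rule is the classical optimal baseline; the sketch below shows only this deterministic baseline, not the robust approximation algorithms of the paper.

```python
def wspt_order(jobs):
    """Smith's rule (WSPT): on a single machine, sequencing jobs by
    non-increasing weight/processing-time ratio minimizes the weighted sum of
    completion times when processing times and weights are certain.
    `jobs` is a list of (weight, processing_time) pairs with positive times.
    """
    order = sorted(range(len(jobs)),
                   key=lambda j: jobs[j][0] / jobs[j][1],
                   reverse=True)
    t, cost = 0.0, 0.0
    for j in order:
        w, p = jobs[j]
        t += p            # completion time of job j
        cost += w * t     # weighted completion time accumulates
    return order, cost

# Example: three jobs given as (weight, processing_time).
print(wspt_order([(3, 2.0), (1, 1.0), (4, 4.0)]))
```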
The adaptive minimum symbol error rate (AMSER) equalizer is known to have better symbol error rate (SER) performance than the adaptive minimum mean square error equalizer. Furthermore, the normalized AMSER (NAMSER) equalizer, which can be regarded as an improvement of the normalized least mean square (NLMS) equalizer obtained by incorporating the minimum SER (MSER) criterion, outperforms the AMSER equalizer. Inspired by this, we propose an improved recursive least squares-based NAMSER (RLS-NAMSER) equalizer that takes advantage of the faster convergence of the RLS algorithm over the NLMS algorithm. The RLS algorithm is first revisited from an optimization perspective, and an approximate RLS (ARLS) algorithm is proposed that converges faster than the NLMS algorithm. The RLS-NAMSER equalizer is then obtained by combining the ARLS equalizer with the MSER criterion. Simulation results show that the RLS-NAMSER equalizer has better convergence performance than the NAMSER equalizer while having nearly the same steady-state performance.
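For reference, a minimal real-valued sketch of the standard RLS update that the letter builds on is shown below; the class name and the real-valued signal model are assumptions of this sketch, and the ARLS/RLS-NAMSER modifications themselves are not reproduced.

```python
import numpy as np

class RLSFilter:
    """Standard (real-valued) recursive least squares adaptive filter."""

    def __init__(self, n_taps, lam=0.99, delta=1e2):
        self.w = np.zeros(n_taps)            # equalizer taps
        self.P = delta * np.eye(n_taps)      # inverse correlation estimate
        self.lam = lam                       # forgetting factor

    def update(self, x, d):
        """One RLS step for regressor vector x and desired symbol d."""
        y = self.w @ x                       # filter output
        e = d - y                            # a priori error
        Px = self.P @ x
        g = Px / (self.lam + x @ Px)         # gain vector
        self.w = self.w + g * e              # tap update
        self.P = (self.P - np.outer(g, Px)) / self.lam
        return y, e
```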
The orthogonal matching pursuit (OMP) is one of the mainstream algorithms for sparse data reconstruction or approximation. It acts as a driving force for the development of several other greedy methods for sparse data reconstruction, and it also plays a vital role in the development of compressed sensing theory for sparse signal and image reconstruction. In this article, we propose the so-called dynamic orthogonal matching pursuit (DOMP) and enhanced dynamic orthogonal matching pursuit (EDOMP) algorithms, which are numerically more efficient than OMP for sparse data reconstruction. We carry out a rigorous analysis to establish the reconstruction error bound for DOMP under the restricted isometry property of the measurement matrix. The main result shows that the reconstruction error of DOMP can be controlled and measured in terms of the number of iterations, the sparsity level of the data, and the noise level of the measurements. Moreover, the finite convergence of DOMP for a class of large-scale compressed sensing problems is also shown.
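For comparison, a textbook implementation of the baseline OMP algorithm is sketched below (DOMP and EDOMP themselves are not reproduced); the function name and stopping rule are illustrative.

```python
import numpy as np

def omp(A, y, sparsity, tol=1e-10):
    """Orthogonal matching pursuit: greedily pick the column most correlated
    with the residual, then re-fit by least squares on the selected support."""
    m, n = A.shape
    support, residual = [], y.copy()
    x, coef = np.zeros(n), np.zeros(0)
    for _ in range(sparsity):
        if np.linalg.norm(residual) <= tol:
            break
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j in support:                      # no new information to add
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    if support:
        x[support] = coef
    return x

# Usage: recover a 3-sparse vector from 40 random measurements.
rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 17, 63]] = [1.0, -2.0, 0.5]
x_hat = omp(A, A @ x_true, sparsity=3)
```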
An easy-to-implement iterative algorithm that enables efficient and scalable spectral analysis of dense matrices is presented. The algorithm relies on the approximation of a matrix's singular values by those of a series of smaller matrices formed from uniform random sampling of its rows and columns. It is shown that, for sufficiently incoherent and rank-deficient matrices, the singular values are expected to decay at the same rate as those of matrices formed via this sampling scheme, which permits such matrices' ranks to be accurately estimated from the smaller matrices' spectra. Moreover, for such a matrix of size m x n, it is shown that the dominant singular values are expected to be √(mn)/k times those of a k x k matrix formed by randomly sampling k of its rows and columns. Starting from a small initial guess k = k_0, the algorithm repeatedly doubles k until two convergence criteria are met; the criteria that ensure k is sufficiently large to estimate the singular values to the desired accuracy are presented. The algorithm's properties are analyzed theoretically, and its efficacy is studied numerically for small to very large matrices that result from discretization of integral-equation operators with various physical kernels common in electromagnetics and acoustics, as well as for artificial matrices with various incoherence and rank-deficiency properties.
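A simplified sketch of the sampling-and-doubling scheme is given below; the single relative-change stopping rule stands in for the paper's two convergence criteria and is an assumption of this sketch.

```python
import numpy as np

def sampled_singular_values(A, k, rng):
    """Singular values of a k x k uniform row/column sample, rescaled by sqrt(mn)/k."""
    m, n = A.shape
    rows = rng.choice(m, size=k, replace=False)
    cols = rng.choice(n, size=k, replace=False)
    s = np.linalg.svd(A[np.ix_(rows, cols)], compute_uv=False)
    return np.sqrt(m * n) / k * s

def estimate_spectrum(A, k0=16, tol=0.05, rng=None):
    """Double k until the leading rescaled singular value stabilizes."""
    if rng is None:
        rng = np.random.default_rng()
    k = k0
    prev = sampled_singular_values(A, k, rng)
    while 2 * k <= min(A.shape):
        k *= 2
        cur = sampled_singular_values(A, k, rng)
        if abs(cur[0] - prev[0]) <= tol * abs(prev[0]):
            return cur, k
        prev = cur
    return prev, k
```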
We experimentally evaluate the performance of several Max Cut approximation algorithms. In particular, we compare the results of the Goemans and Williamson algorithm using semidefinite programming with Trevisan's ...
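To make the compared algorithm concrete, here is a standard Goemans-Williamson sketch (SDP relaxation via cvxpy plus random-hyperplane rounding); it is a generic implementation, not the experimental code evaluated in the paper, and it assumes cvxpy with an SDP-capable solver is available.

```python
import numpy as np
import cvxpy as cp

def goemans_williamson(W, trials=50, rng=None):
    """Goemans-Williamson Max Cut: solve the SDP relaxation with unit-diagonal
    constraint, then round with random hyperplanes.  W is a symmetric
    nonnegative weight matrix."""
    if rng is None:
        rng = np.random.default_rng()
    n = W.shape[0]
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    cp.Problem(objective, [cp.diag(X) == 1]).solve()

    # Factor X ~ V V^T and round with random hyperplanes.
    vals, vecs = np.linalg.eigh(X.value)
    V = vecs * np.sqrt(np.clip(vals, 0, None))
    best_cut, best_signs = -np.inf, None
    for _ in range(trials):
        r = rng.standard_normal(n)
        signs = np.where(V @ r >= 0, 1.0, -1.0)
        cut = np.sum(W * (1 - np.outer(signs, signs))) / 4
        if cut > best_cut:
            best_cut, best_signs = cut, signs
    return best_cut, best_signs
```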
ISBN (print): 9798400713293
Diversification is a useful tool for exploring large collections of information items. It has been used to reduce redundancy and cover multiple perspectives in information-search settings. Diversification finds applications in many different domains, including presenting search results of information-retrieval systems and selecting suggestions for recommender systems. Interestingly, existing measures of diversity are defined over sets of items, rather than evaluating sequences of items. This design choice stands in contrast with commonly used relevance measures, which are distinctly defined over sequences of items, taking into account the ranking of items. The importance of employing sequential measures is that information items are almost always presented in a sequential manner, and during their information-exploration activity users tend to prioritize items with higher ranking. In this paper, we study the problem of maximizing sequential diversity. This is a new measure of diversity, which accounts for the ranking of the items and incorporates item relevance and user behavior. The overarching framework can be instantiated with different diversity measures, and here we consider the measures of sum diversity and coverage diversity. The problem was recently proposed by Coppolillo et al. [11], who introduce empirical methods that work well in practice. Our paper is a theoretical treatment of the problem: we establish the hardness of the problem and present algorithms with constant approximation guarantees for both diversity measures we consider. Experimentally, we demonstrate that our methods are competitive against strong baselines.
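As a rough illustration of ranking-aware diversification, the greedy sketch below builds a sequence using a position-discounted mix of relevance and pairwise distance; this objective is a hypothetical stand-in, not the sequential-diversity measure or the constant-factor algorithms of the paper.

```python
import numpy as np

def greedy_sequential_diversification(relevance, dist, k, discount=0.85, lam=0.5):
    """Greedily build a ranked list of k items.

    At each position, pick the item maximizing a position-discounted blend of
    its relevance and its total distance to items already placed.
    relevance: (n,) array; dist: (n, n) symmetric pairwise-distance matrix.
    """
    n = len(relevance)
    k = min(k, n)
    chosen = []
    for pos in range(k):
        gains = np.full(n, -np.inf)
        for i in range(n):
            if i in chosen:
                continue
            div = dist[i, chosen].sum() if chosen else 0.0
            gains[i] = (discount ** pos) * (lam * relevance[i] + (1 - lam) * div)
        chosen.append(int(np.argmax(gains)))
    return chosen
```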
We study the problem of throughput maximization in multihop wireless networks with end-to-end delay constraints for each session. This problem has received much attention starting with the work of Grossglauser and Tse (2002), and it has been shown that there is a significant tradeoff between the end-to-end delays and the total achievable rate. We develop algorithms to compute such tradeoffs with provable performance guarantees for arbitrary instances, with general interference models. Given a target delay bound Δ_c for each session c, our algorithm gives a stable flow vector with a total throughput within a factor of O(log Δ_m / log log Δ_m) of the maximum, such that the per-session (end-to-end) delay is O(((log Δ_m / log log Δ_m) Δ_c)^2), where Δ_m = max_c Δ_c; note that these bounds depend only on the delays, and not on the network size, and this is the first such result, to our knowledge.