The simultaneous elementary E-matching problem for an equational theory E is to decide whether there is an E-matcher for a given system of equations in which the only nonconstant function symbols occurring in the terms to be matched are the ones constrained by the equational axioms of E. We study the computational complexity of simultaneous elementary matching problems for the equational theories A of semigroups, AC of commutative semigroups, and ACU of commutative monoids. In each case, we delineate the boundary between NP-completeness and solvability in polynomial time by considering two parameters, the number of equations in the systems and the number of constant symbols in the signature. Moreover, we analyze further the intractable cases of simultaneous elementary AC-matching and ACU-matching by also taking into account the maximum number of occurrences of each variable. Using combinatorial optimization techniques, we show that if each variable is restricted to having at most two occurrences, then several cases of simultaneous elementary AC-matching and ACU-matching can be solved in polynomial time.
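As a concrete illustration (not the paper's algorithm), the following brute-force sketch decides elementary AC-matching for a single equation: under AC, both sides are multisets over one binary symbol, so a matcher assigns each variable a nonempty multiset of constants whose multiplicity-weighted sum equals the ground side. The representation and function names are our own; the search is exponential in general, consistent with the NP-completeness results, whereas the paper's tractable cases rely on combinatorial optimization instead.

```python
from collections import Counter
from itertools import product

def ac_match(pattern_vars, pattern_consts, subject):
    """Brute-force elementary AC-matching for one equation.

    pattern_vars:   {variable: multiplicity in the pattern}, e.g. {'x': 2}
    pattern_consts: Counter of constants occurring in the pattern
    subject:        Counter of constants (the ground term as a multiset)
    Returns one matcher {variable: Counter} or None.
    """
    if +(pattern_consts - subject):        # a pattern constant exceeds the subject
        return None
    residue = +(subject - pattern_consts)  # multiset the variables must cover
    consts, vars_ = list(residue), list(pattern_vars)

    def solve(i, residue, assignment):
        if i == len(vars_):
            return dict(assignment) if not +residue else None
        v, m = vars_[i], pattern_vars[vars_[i]]
        # enumerate candidate multisets t such that m copies of t fit in residue
        for counts in product(*(range(residue[c] // m + 1) for c in consts)):
            t = Counter({c: k for c, k in zip(consts, counts) if k})
            if not t:                      # plain AC: variables must be nonempty
                continue
            left = residue.copy()
            for c, k in t.items():
                left[c] -= m * k
            result = solve(i + 1, +left, assignment | {v: t})
            if result is not None:
                return result
        return None

    return solve(0, residue, {})

# ac_match({'x': 2}, Counter(), Counter('aabb')) -> {'x': Counter({'a': 1, 'b': 1})}
```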
Spreading processes on networks are often analyzed to understand how the outcome of the process (e.g. the number of affected nodes) depends on structural properties of the underlying network. Most available results are ensemble averages over certain interesting graph classes such as random graphs or graphs with particular degree distributions. In this paper, we focus instead on determining the expected spreading size and the probability of large spreadings for a single (but arbitrary) given network, and we study the computational complexity of these problems using reductions from well-known network reliability problems. We show that computing both quantities exactly is intractable, but that the expected spreading size can be efficiently approximated with Monte Carlo sampling. When nodes are weighted to reflect their importance, the problem becomes as hard as the s-t reliability problem, for which no efficient randomized approximation scheme is known to date. Finally, we give a formal complexity-theoretic argument why there is most likely no randomized constant-factor approximation for the probability of large spreadings, even in the unweighted case. A hybrid Monte Carlo sampling algorithm is proposed that resorts to specialized s-t reliability algorithms to accurately estimate the infection probability of those nodes that are rarely affected by the spreading process.
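The efficient Monte Carlo approximation of the expected spreading size can be illustrated with a short sketch. Here spreading is modeled as bond percolation (each edge transmits independently with probability p), which is one common choice; the paper's model and estimator details may differ, and all names below are our own. Since the spread lies between 1 and n, a polynomial number of samples yields a good estimate by standard concentration bounds.

```python
import random
from collections import deque

def expected_spread(adj, seed, p, trials=10_000, rng=random.Random(0)):
    """Monte Carlo estimate of the expected spreading size from `seed`.

    adj: adjacency list {node: [neighbours]} of an undirected network;
    p:   probability that an edge transmits the infection.
    Averages the number of affected nodes over independent simulations.
    """
    total = 0
    for _ in range(trials):
        affected = {seed}
        frontier = deque([seed])
        while frontier:
            u = frontier.popleft()
            for v in adj[u]:
                # each edge is effectively sampled once, as in bond percolation
                if v not in affected and rng.random() < p:
                    affected.add(v)
                    frontier.append(v)
        total += len(affected)
    return total / trials
```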
N-K fitness landscapes have been used widely as examples and test functions in the field of evolutionary computation. Thus, the computational complexity of these landscapes as optimization problems is of interest. We investigate the computational complexity of optimizing N-K fitness functions and related fitness functions. We give an algorithm to optimize adjacent-model N-K fitness functions that is polynomial in N. We show that the decision problem corresponding to optimizing random-model N-K fitness functions is NP-complete for K > 1 and polynomial for K = 1. If the restriction that the ith component function depends on the ith bit is removed, then the problem is NP-complete even for K = 1. We also give a polynomial-time approximation algorithm for the arbitrary-model N-K optimization problem.
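For the adjacent model, where the ith component function depends on the K+1 consecutive bits starting at position i, the optimum can be found by dynamic programming over sliding windows. The sketch below is our own illustration of why this model is polynomial in N for fixed K, not necessarily the paper's algorithm, and it handles the non-circular variant only.

```python
from itertools import product

def optimize_adjacent_nk(N, K, f):
    """Maximise sum_{i=0}^{N-K-1} f(i, x[i:i+K+1]) over x in {0,1}^N by
    dynamic programming on (K+1)-bit windows: consecutive windows must
    overlap in K bits, so O(N * 2^(K+1)) states suffice -- polynomial in
    N for fixed K.  f(i, bits) evaluates component i on window `bits`.
    Returns (best_value, best_bit_string)."""
    windows = list(product((0, 1), repeat=K + 1))
    # best[w]: (value, bits-so-far) of the best prefix ending in window w
    best = {w: (f(0, w), list(w)) for w in windows}
    for i in range(1, N - K):
        nxt = {}
        for w, (val, bits) in best.items():
            for b in (0, 1):
                nw = w[1:] + (b,)           # shift the window by one bit
                cand = (val + f(i, nw), bits + [b])
                if nw not in nxt or cand[0] > nxt[nw][0]:
                    nxt[nw] = cand
        best = nxt
    val, bits = max(best.values(), key=lambda vb: vb[0])
    return val, ''.join(map(str, bits))
```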
An attempt to reduce the computational complexity of the advancing front triangulation is described. The method is first decomposed into subtasks, and the computational complexity is investigated separately for each of them. It is shown that a major subtask, namely the geometric compatibility (mesh correctness) checks, can be carried out with a linear growth rate. The applied techniques include modified advancing front management and a localization device in the form of a regular grid (stored as a hypermatrix). The other subtask (access to the mesh control function) could not be made of linear computational complexity for all modes of mesh control (ad hoc and adaptive). While ad hoc gradation control yields an algorithm with ideal overall computational complexity, adaptive gradation control still gives a suboptimal complexity (of order O(N log N)).
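The localization device can be illustrated with a minimal sketch: a regular grid of buckets over which a proximity query for the compatibility checks touches only a constant number of cells, so each check costs O(1) on average. This 2D, point-based version is our own simplification; the paper stores the grid as a hypermatrix and localizes front entities rather than bare points.

```python
from collections import defaultdict

class UniformGrid:
    """Regular-grid localization: entities are bucketed by cell, so a
    proximity query inspects only the few cells around the query point
    instead of the whole front."""

    def __init__(self, cell_size):
        self.cell = cell_size
        self.buckets = defaultdict(list)

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, x, y, item):
        self.buckets[self._key(x, y)].append((x, y, item))

    def nearby(self, x, y, radius):
        """Yield items within `radius` -- the candidate set for a
        geometric compatibility check of a tentative new element."""
        reach = int(radius // self.cell) + 1
        cx, cy = self._key(x, y)
        for gx in range(cx - reach, cx + reach + 1):
            for gy in range(cy - reach, cy + reach + 1):
                for px, py, item in self.buckets.get((gx, gy), ()):
                    if (px - x) ** 2 + (py - y) ** 2 <= radius * radius:
                        yield item
```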
The response property is a kind of liveness property. The response property problem is defined as follows: given two activities alpha and beta, whenever alpha is executed, is beta always executed after that? In this paper, we tackle the problem in terms of Workflow Petri nets (WF-nets for short). Our results are: (i) the response property problem for acyclic WF-nets is decidable; (ii) the problem is intractable for acyclic asymmetric choice (AC) WF-nets; and (iii) the problem for acyclic bridge-less well-structured WF-nets is solvable in polynomial time. We illustrate the usefulness of the decision procedure with an application example.
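On a fully acyclic control-flow abstraction, the intuition behind the response check is simple: the property is violated exactly when some path can leave alpha and reach a terminal node without passing beta. The sketch below works on a plain DAG of activities and is only an illustration; the paper's procedure operates on WF-net markings, which is where the complexity gaps between net classes arise.

```python
from functools import lru_cache

def violates_response(succ, alpha, beta):
    """Simplified response check on an acyclic activity graph.

    succ: {node: [successors]} describing a DAG; sinks have no entry or
    an empty list.  Assuming alpha is reachable from the source, the
    response property 'beta eventually follows alpha' fails iff some
    path from alpha reaches a sink without passing through beta.
    """
    @lru_cache(maxsize=None)
    def escapes(u):
        if u == beta:
            return False            # this branch executes beta: no escape
        nexts = succ.get(u, [])
        if not nexts:
            return True             # reached a sink without seeing beta
        return any(escapes(v) for v in nexts)

    return escapes(alpha)
```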
Experimental results based on offline processing reported at optical conferences increasingly rely on neural network-based equalizers for accurate data recovery. However, achieving low-complexity implementations that are efficient for real-time digital signal processing remains a challenge. This paper addresses this critical need by proposing a systematic approach to designing and evaluating low-complexity neural network equalizers. Our approach focuses on three key phases: training, inference, and hardware synthesis. We provide a comprehensive review of existing methods for reducing complexity in each phase, enabling informed choices during design. For the training and inference phases, we introduce a novel methodology for quantifying complexity. This includes new metrics that bridge software-to-hardware considerations, revealing the relationship between complexity and specific neural network architectures and hyperparameters. We guide the calculation of these metrics for both feed-forward and recurrent layers, highlighting the appropriate choice depending on the application's focus (software or hardware). Finally, to demonstrate the practical benefits of our approach, we showcase how the computational complexity of neural network equalizers can be significantly reduced and measured for both teacher (biLSTM+CNN) and student (1D-CNN) architectures in different scenarios. This work aims to standardize the estimation and optimization of computational complexity for neural networks applied to real-time digital signal processing, paving the way for more efficient and deployable optical communication systems.
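As a small illustration of software-level complexity metrics, the sketch below counts real multiplications per layer for the two layer types named in the abstract, using the standard MAC-count formulas (weights times inputs for a 1D convolution; four gate matrix-vector products plus elementwise gate products per LSTM step, doubled for a bidirectional pass). The function names are ours, and the paper's metrics are finer-grained and bridge to hardware synthesis, so treat this as a baseline estimate only.

```python
def conv1d_mults(in_channels, out_channels, kernel_size, out_len):
    """Real multiplications of one 1D convolutional layer
    (biases and activation functions not counted)."""
    return in_channels * out_channels * kernel_size * out_len

def lstm_mults(input_size, hidden_size, seq_len, bidirectional=False):
    """Real multiplications of an (bi)LSTM layer over a sequence:
    4 gates, each an (input_size + hidden_size) -> hidden_size matvec
    per step, plus 3 elementwise gate products of size hidden_size."""
    per_step = 4 * hidden_size * (input_size + hidden_size) + 3 * hidden_size
    total = per_step * seq_len
    return 2 * total if bidirectional else total
```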
Discrete tomography deals with problems of determining the shape of a discrete object from a set of projections. In this paper, we deal with a fundamental problem in discrete tomography: reconstructing a discrete object in R^3 from its orthogonal projections, which we call three-dimensional discrete tomography. This problem has mostly been studied under the assumption that complete data of the projections are available. In practice, however, there may be missing data in the projections, arising from, e.g., a lack of precision in the measurements. In this paper, we consider three-dimensional discrete tomography with missing data. Specifically, we consider the following three fundamental problems in discrete tomography: the consistency, counting, and uniqueness problems, and we classify the computational complexities of these problems in terms of the length of one dimension. We also generalize these results to higher-dimensional discrete tomography, which has applications in operations research and statistics.
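To make the consistency problem concrete, recall its classical two-dimensional analogue with complete data, which is decidable in polynomial time by the Gale-Ryser condition; the sketch below implements that check. The three-dimensional problems studied in the paper, and especially the variants with missing data, are substantially harder, so this is background illustration only.

```python
def consistent_2d(row_sums, col_sums):
    """Does a 0/1 matrix with the given row and column sums exist?
    Gale-Ryser: sort the column sums decreasingly; feasibility holds iff
    the totals agree and every prefix sum of the column sums is bounded
    by sum_j min(row_sums[j], k)."""
    if sum(row_sums) != sum(col_sums):
        return False
    c = sorted(col_sums, reverse=True)
    return all(sum(c[:k]) <= sum(min(r, k) for r in row_sums)
               for k in range(1, len(c) + 1))

# consistent_2d([1, 1], [2, 0])  -> True   (e.g. [[1, 0], [1, 0]])
# consistent_2d([2, 0], [2, 2])  -> False  (totals disagree)
```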
The main factors affecting the overall efficiency of any numerical procedure for the solution of large antenna or scattering problems, that is, the problem size, the memory occupation, and the computational cost, are introduced and briefly discussed. It is shown how the size can be rigorously defined and estimated, and the corresponding minimum, ideal computational cost is determined. Then the problem of developing algorithms approaching the ideal limit is examined, and possible ways to achieve the goal are enumerated. In particular, it is shown that in the case of large metallic scatterers in free space, the method of auxiliary sources, coupled to some kind of multilevel fast multipole algorithm, can allow development of numerical procedures whose effectiveness approaches the ideal limit.
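A back-of-the-envelope version of the size estimate can be sketched as follows, assuming the common rule of thumb that the degrees of freedom of the field scattered by a smooth body scale with its surface area measured in (wavelength/2)^2 cells. The constants and the rigorous definition in the paper differ, so the numbers below are illustrative only.

```python
import math

def ideal_unknowns(surface_area, wavelength, samples_per_half_wavelength=1.0):
    """Rule-of-thumb problem 'size': degrees of freedom of the scattered
    field ~ surface area divided into (wavelength/2)^2 cells.  Purely
    illustrative; the paper defines and estimates the size rigorously."""
    cell = (wavelength / 2.0) / samples_per_half_wavelength
    return math.ceil(surface_area / cell ** 2)

# A sphere of radius 10 wavelengths: roughly 5000 ideal unknowns, so an
# "ideal" solver would cost O(5000) up to logarithmic factors.
# ideal_unknowns(4 * math.pi * 10.0 ** 2, 1.0)
```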
Recently there has been a flurry of research in the area of production planning for multi-echelon production-distribution systems with deterministic non-stationary demands and no capacity constraints. A variety of algorithms have been proposed to optimally solve these problems, with varying success. This paper investigates the computational complexity of the problem for all commonly studied product structures, i.e. the single item, the serial system, the assembly system, the one-warehouse-N-retailer system, the distribution system, the joint replenishment system, and the general production-distribution system. Polynomial-time algorithms are available for the single-item, serial, and assembly systems. We prove that the remaining problems are NP-complete.
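The single-item polynomial case is the classical Wagner-Whitin dynamic program, sketched below in an O(T^2) form: an order placed in period t optimally covers a contiguous block of periods t..u (the zero-inventory-ordering property). Variable names and the cost model (per-period setup and linear holding costs) are our own simplifications.

```python
def wagner_whitin(demand, setup, hold):
    """O(T^2) dynamic program for single-item uncapacitated lot sizing.

    demand[t]: demand of period t;  setup[t]: fixed cost of ordering in t;
    hold[t]:   cost of carrying one unit from period t into period t+1.
    best[t] is the minimum cost of satisfying all demand before period t.
    """
    T = len(demand)
    best = [0.0] + [float('inf')] * T
    for t in range(T):                 # an order is placed in period t ...
        cost = best[t] + setup[t]
        carry = 0.0                    # holding cost accrued by this order
        h = 0.0                        # per-unit cost of carrying from t to u
        for u in range(t, T):          # ... and covers demand of periods t..u
            carry += demand[u] * h
            best[u + 1] = min(best[u + 1], cost + carry)
            h += hold[u]
    return best[T]
```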
In this paper we prove that the general avalanche problem AP is in NC for all decreasing sandpile models in one dimension. This extends the developments of [5] and requires careful attention to the general rule set considered, stressing the importance of the decreasing property. This work continues the study of dimension-sensitive problems, since in higher dimensions the problem is P-complete (for monotone sandpiles).
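For intuition, here is the obvious sequential simulation of a single-grain avalanche in one concrete decreasing model, SPM (a grain falls from column i to i+1 whenever the height difference is at least 2), starting from a stable configuration. The paper's contribution is the much stronger statement that the avalanche problem is in NC, i.e. decidable by a fast parallel algorithm, for the whole class of decreasing one-dimensional models; this sketch only fixes the problem statement.

```python
def spm_avalanche(heights, i0, target):
    """Sequentially simulate a single-grain SPM avalanche.

    heights: stable, non-increasing configuration (h[i] - h[i+1] <= 1)
    i0:      column receiving the extra grain (monotonicity preserved)
    target:  column whose toppling we ask about (the avalanche problem)
    In SPM the avalanche is a rightward wave, so one pass suffices.
    """
    h = list(heights) + [0]           # pad the flat region on the right
    h[i0] += 1
    fired, i = set(), i0
    while i + 1 < len(h) and h[i] >= h[i + 1] + 2:
        h[i] -= 1                     # column i topples: one grain moves right
        h[i + 1] += 1
        fired.add(i)
        i += 1
    return target in fired

# spm_avalanche([2, 1], 0, 1) -> True: the wave topples columns 0 and 1.
```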