Yajisan-Kazusan and Stained Glass are pencil puzzles by Nikoli. We study the computational complexity of both puzzles and show that deciding whether a given instance of either puzzle has a solution is NP-complete.
In this paper, the recoverable robust shortest path problem is investigated. Discrete budgeted interval uncertainty representation is used to model uncertain second-stage arc costs. The known complexity results for this problem are strengthened. Namely, it is shown that the recoverable robust shortest path problem is Sigma_3^p-hard for the arc exclusion and arc symmetric difference neighborhoods. Furthermore, it is also proven that the inner adversarial problem for these neighborhoods is Pi_2^p-hard. (c) 2025 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
In 1979, Hylland and Zeckhauser [J. Polit. Econ., 87 (1979), pp. 293--314] gave a simple and general mechanism for a one-sided matching market, given cardinal utilities of agents over goods. They use the power of a pricing mechanism, which endows their mechanism with several desirable properties: it produces an allocation that is Pareto optimal and envy free, and the mechanism is incentive compatible in the large. It therefore provides an attractive, off-the-shelf method for running an application involving such a market. With matching markets becoming ever more prevalent and impactful, it is imperative to characterize the computational complexity of this mechanism. We present the following results: (1) A combinatorial, strongly polynomial time algorithm for the dichotomous case, i.e., 0/1 utilities, and more generally, when each agent's utilities come from a bivalued set. (2) An example that has only irrational equilibria; hence this problem is not in PPAD. (3) A proof of membership of the problem in the class FIXP; as a corollary we get that a Hylland--Zeckhauser (HZ) equilibrium can always be expressed via algebraic numbers. For this purpose, we give a new proof of the existence of an HZ equilibrium using Brouwer's fixed point theorem; the proof of Hylland and Zeckhauser used Kakutani's fixed point theorem, which is more involved. (4) A proof of membership of the problem of computing an approximate HZ equilibrium in the class PPAD. In subsequent work [T. Chen et al., SODA 2022, SIAM, Philadelphia, pp. 2253--2268], the problem of computing an approximate HZ equilibrium was shown to be PPAD-hard, thereby establishing it to be PPAD-complete. We leave open the (difficult) question of determining if computing an exact HZ equilibrium is FIXP-hard. We also give pointers to the substantial body of work on cardinal-utility matching markets which followed [V. V. Vazirani and M. Yannakakis, LIPIcs. Leibniz Int. Proc. Inform. 185, Schloss Dagstuhl, Wadern, Germany, 59].
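The envy-freeness property above can be checked mechanically from the cardinal utilities. The following minimal sketch (not the paper's combinatorial algorithm; the utility and allocation matrices are hypothetical) verifies that every agent weakly prefers its own fractional bundle:

```python
import numpy as np

def is_envy_free(U, X, tol=1e-9):
    """Check envy-freeness of a fractional (lottery) allocation.

    U[i, j]: agent i's cardinal utility for good j.
    X[i, j]: probability that agent i receives good j (rows sum to 1).
    Envy-free: every agent values its own bundle at least as much as
    any other agent's bundle.
    """
    values = U @ X.T            # values[i, k] = agent i's utility for agent k's bundle
    own = np.diag(values)
    return bool(np.all(own[:, None] >= values - tol))

# Hypothetical dichotomous (0/1) utilities for 3 agents over 3 goods.
U = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
X = np.full((3, 3), 1 / 3)      # the uniform lottery is trivially envy-free
print(is_envy_free(U, X))       # True
```

This is only a property check; computing an HZ equilibrium itself is the hard problem the abstract discusses.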
Time reversal (TR) has much potential for more intelligent imaging applications. In TR imaging systems with large-scale arrays and wide bandwidths, however, the large number of elements and high sampling frequencies jointly increase the computational complexity significantly, while reducing the amount of computation typically degrades imaging accuracy. To balance computational complexity and imaging accuracy, this article proposes the optimal truncated space-frequency multiple signal classification (OTSF-MUSIC) imaging method. In this approach, the principal components of the space-frequency multistatic data matrix (SF-MDM) are first obtained by the optimal truncated space-frequency (OTSF) operator. Second, the orthogonality between the SF-MDM and the noise space decomposed by the singular value decomposition (SVD) is demonstrated. Finally, the equivalent noise subspace constructed by the OTSF operator, which is proved to also be orthogonal to the SF-MDM, can be used in OTSF-MUSIC. The numerical results show that the proposed OTSF-MUSIC without SVD saves approximately 91% of the runtime and reduces the imaging errors by 0.62 lambda_c under extremely strong perturbation conditions, indicating that OTSF-MUSIC achieves a tradeoff between computational complexity and imaging accuracy.
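For readers unfamiliar with the MUSIC family, the classical subspace step the paper builds on can be sketched as follows. This toy example uses a plain uniform linear array and an eigendecomposition in place of the paper's OTSF operator; all parameters are illustrative, not from the article:

```python
import numpy as np

def music_spectrum(R, steering, n_src):
    """Classical MUSIC pseudospectrum from a covariance matrix R.

    The noise subspace is spanned by the eigenvectors of R with the
    smallest eigenvalues (all but the n_src largest); sources appear
    where the steering vector is (nearly) orthogonal to it.
    """
    w, V = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = V[:, :R.shape[0] - n_src]       # noise-subspace eigenvectors
    def P(theta):
        a = steering(theta)
        proj = En.conj().T @ a
        return 1.0 / (np.real(proj.conj() @ proj) + 1e-12)
    return P

# Uniform linear array: 8 sensors, half-wavelength spacing, one source at 20 deg.
M, true_deg = 8, 20.0
a = lambda deg: np.exp(1j * np.pi * np.arange(M) * np.sin(np.deg2rad(deg)))
s = a(true_deg)
R = np.outer(s, s.conj()) + 0.01 * np.eye(M)   # rank-1 signal + noise floor
P = music_spectrum(R, a, n_src=1)
grid = np.arange(-90.0, 90.5, 0.5)
est = grid[np.argmax([P(t) for t in grid])]
print(est)   # 20.0
```

The OTSF contribution of the paper is precisely about obtaining an equivalent noise subspace without the SVD/eigendecomposition step shown here.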
Spot flexibility markets operate ahead of the real-time energy exchange, allowing demand-side management to reduce energy consumption during peak periods. In these markets, demand aggregators must quickly choose the customers' reduction bids that fulfill grid requirements. This clearing procedure is challenging due to the computational complexity of selecting the optimal bids. Therefore, developing a clearing mechanism that avoids searching the entire flexibility bid space while respecting grid constraints is essential for the smooth operation of the spot flexibility market. This paper presents a clearing mechanism that reduces the computational complexity of the winner determination problem in the spot flexibility market for demand aggregators carrying out reductions in energy consumption. The proposed approach transforms customers' flexibility bids into a reward-based function; gradient-based optimization then solves the bid selection problem. This approach helps demand aggregators achieve satisfactory energy reductions within a delay appropriate for spot flexibility markets. A comparative study demonstrates the effectiveness of the proposed approach against commonly used approaches: a hybrid particle swarm optimization genetic algorithm and combinatorial search.
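A minimal sketch of the general idea, relaxing winner determination into a differentiable objective solved by gradient descent, might look as follows. The reward shaping here (a quadratic penalty on missing the target reduction) and all bid data are hypothetical stand-ins for the paper's reward-based function:

```python
import numpy as np

def select_bids(reduction, cost, target, lam=0.01, lr=0.01, steps=2000):
    """Soft winner determination via projected gradient descent.

    The relaxed objective  cost @ s + lam * (s @ reduction - target)**2
    is convex in the selection vector s in [0, 1]^n; thresholding the
    minimizer at 0.5 yields the winning bids.
    """
    s = np.full(len(cost), 0.5)
    for _ in range(steps):
        gap = s @ reduction - target
        grad = cost + 2.0 * lam * gap * reduction
        s = np.clip(s - lr * grad, 0.0, 1.0)   # project back onto [0, 1]
    return s > 0.5

reduction = np.array([50.0, 30.0, 40.0, 20.0])  # kW offered per bid (hypothetical)
cost = np.array([5.0, 2.0, 6.0, 1.0])           # price asked per bid (hypothetical)
winners = select_bids(reduction, cost, target=50.0)
print(winners.tolist())   # the bids with the best price per kW win
```

Because the relaxation is convex, projected gradient descent reaches the global optimum of the soft problem in a few thousand cheap iterations, avoiding the exponential search over bid subsets.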
Navigation without Global Navigation Satellite Systems (GNSS) poses a significant challenge in aerospace engineering, particularly in environments where satellite signals are obstructed or unavailable. This paper offers an in-depth review of various methods, sensors, and algorithms for Unmanned Aerial Vehicle (UAV) localization in outdoor environments where GNSS signals are unavailable or denied. A key contribution of this study is the establishment of a critical classification system that divides GNSS-denied navigation techniques into two primary categories: absolute and relative localization. This classification enhances the understanding of the strengths and weaknesses of different strategies in various operational contexts. Vision-based localization is identified as the most effective approach in GNSS-denied environments. Nonetheless, no single-sensor localization algorithm can fulfill all the needs of a comprehensive navigation system in outdoor environments; a hybrid strategy that merges multiple algorithms and sensors is therefore essential for effective outcomes. This multi-faceted analysis highlights the challenges and potential pathways for achieving reliable and efficient outdoor UAV localization in environments where GNSS is unreliable or unavailable.
Biological systems are widely regarded as performing computations. It is much less clear, however, what exactly is computed and how biological computation fits within the framework of standard computer science. Here we explore the idea that evolution confines biological computation to subsets of instances that can be solved efficiently with algorithms that are 'hardcoded' in the system itself. We use RNA secondary structure prediction as a simple surrogate for developmental programs to demonstrate that the salient features of the genotype-phenotype map remain intact even if 'simpler' algorithms are employed that correctly compute the structures only for small subsets of instances, although quantitative differences depending on the choice of alternative algorithm can be observed.
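As a concrete example of such an algorithm, RNA secondary structure prediction in its simplest form is the Nussinov base-pair-maximization recursion. The sketch below (a textbook baseline, not one of the specific algorithms compared in the paper) counts the maximum number of nested base pairs:

```python
def nussinov_pairs(seq, min_loop=3):
    """Maximum nested base-pair count via the Nussinov recursion.

    N[i][j] = max pairs in seq[i..j]; a hairpin loop must span at
    least `min_loop` unpaired bases between paired positions.
    """
    ok = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    N = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = N[i][j - 1]                    # case 1: j unpaired
            for k in range(i, j - min_loop):      # case 2: j pairs with k
                if (seq[k], seq[j]) in ok:
                    left = N[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + N[k + 1][j - 1])
            N[i][j] = best
    return N[0][n - 1]

print(nussinov_pairs("GGGAAAUCCC"))   # 3: the GGG stem pairs with CCC
```

Energy-based folding algorithms refine this O(n^3) recursion; the paper's point is that even cruder variants preserve the qualitative genotype-phenotype map.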
Within the reductionist framework, researchers in the special sciences formulate key terms and concepts and try to explain them with lower-level science terms and concepts. For example, behavioural vision scientists describe contrast perception with a psychometric function, in which the perceived brightness increases logarithmically with the physical contrast of a light patch (the Weber-Fechner law). Visual neuroscientists describe the output of neural circuits with neurometric functions. Intuitively, the key terms from two adjacent scientific domains should map onto each other; for instance, psychometric and neurometric functions may map onto each other. Identifying such mappings has been the very goal of neuroscience for nearly two centuries. Yet mapping behaviour to brain measures has turned out to be difficult. Here, we provide various arguments as to why the conspicuous lack of robust brain-behaviour mappings is rather a rule than an exception. First, we provide an overview of methodological and conceptual issues that may stand in the way of successful brain-behaviour mapping. Second, extending previous theoretical work (Herzog, Doerig and Sachse, 2023), we show that brain-behaviour mapping may be limited by complexity barriers. In this case, reduction may be impossible.
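The Weber-Fechner law mentioned above can be made concrete in a few lines: a logarithmic psychometric function turns equal ratios of physical contrast into equal increments of perceived brightness (the constants here are illustrative, not fitted values):

```python
import math

def weber_fechner(I, I0=1.0, k=1.0):
    """Perceived magnitude S = k * ln(I / I0), the Weber-Fechner law."""
    return k * math.log(I / I0)

# Doubling the physical intensity always adds the same perceptual
# increment ln(2), regardless of the starting level.
steps = [weber_fechner(2.0 ** n) for n in range(4)]
diffs = [b - a for a, b in zip(steps, steps[1:])]
print([round(d, 3) for d in diffs])   # [0.693, 0.693, 0.693]
```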
The computational complexity of polynomial ideals and Gröbner bases has been studied since the 1980s. In recent years the related notions of polynomial subalgebras and SAGBI bases have gained more and more attent...
Local and sparse attention effectively reduce the high computational cost of global self-attention, but they suffer from a lack of global dependency and coarse feature capturing, respectively. While some subsequent models employ effective interaction techniques for better classification performance, we observe that the computation and memory of these models grow rapidly as the resolution increases; applying them to downstream tasks with large input resolutions is therefore costly. In response to this concern, we propose an effective backbone network based on a novel attention mechanism called Concatenating glObal tokens in Local Attention (COLA) with linear computational complexity. The implementation of COLA is straightforward, as it incorporates global information into the local attention in a concatenating manner. We introduce a learnable condensing feature (LCF) module to capture high-quality global information. LCF has the following properties: (1) it performs a function similar to clustering, aggregating image patches into a smaller number of tokens based on similarity; (2) the number of aggregated tokens is constant regardless of the image size, ensuring that LCF is a linear-complexity operator. Based on COLA, we build COLAFormer, which achieves global dependency and fine-grained feature capturing with linear computational complexity and demonstrates impressive performance across various vision tasks. Specifically, our COLAFormer-S achieves 84.5% classification accuracy, surpassing other advanced models by 0.4% with similar or less resource consumption. Furthermore, our COLAFormer-S achieves better object detection performance while consuming only 1/4 of the resources of other state-of-the-art models. The code and models will be made publicly available.
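The general pattern of concatenating condensed global tokens into local attention can be sketched in a few lines. The version below is a simplified stand-in (mean-pooled chunks instead of the learnable LCF module, and no query/key/value projections), not the COLA implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def local_attn_with_global(x, window=4, n_global=2):
    """Local window attention with concatenated global tokens.

    Each token attends to its own window plus `n_global` pooled tokens
    summarizing the whole sequence; since the number of global tokens
    is constant, cost is O(n * (window + n_global)), linear in n.
    """
    n, d = x.shape
    chunks = np.array_split(x, n_global)             # fixed grouping of tokens
    g = np.stack([c.mean(axis=0) for c in chunks])   # (n_global, d) global tokens
    out = np.empty_like(x)
    for w0 in range(0, n, window):
        q = x[w0:w0 + window]                        # queries in this window
        kv = np.concatenate([q, g])                  # local keys + global tokens
        attn = softmax(q @ kv.T / np.sqrt(d))
        out[w0:w0 + window] = attn @ kv
    return out

x = np.random.default_rng(0).normal(size=(16, 8))
y = local_attn_with_global(x)
print(y.shape)   # (16, 8)
```

Replacing the fixed mean-pooling with a learned, similarity-driven condensing module is precisely where the LCF design departs from this sketch.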