Link reversal is a versatile algorithm design paradigm, originally proposed by Gafni and Bertsekas in 1981 for routing and subsequently applied to other problems including mutual exclusion, leader election, and resource allocation. Although these algorithms are well known, until now there have been only preliminary results on time complexity, even for the simplest link reversal algorithm for routing, called Full Reversal. In Full Reversal, a sink reverses all its incident links, whereas in other link reversal algorithms (e.g., Partial Reversal), a sink reverses only some of its incident links. Charron-Bost et al. introduced a generalization, called LR, that includes Full and Partial Reversal as special cases. In this article, we present an exact expression for the time complexity of LR. The expression is stated in terms of simple properties of the initial graph. The result specializes to exact formulas for the time complexity of any node in any initial acyclic directed graph for both Full and Partial Reversal. Having the exact formulas provides insight into the behavior of Full and Partial Reversal on specific graph families. Our first technical insight is to describe the behavior of Full Reversal as a dynamical system and to observe that this system is linear in min-plus algebra. Our second technical insight is to overcome the difficulty posed by the fact that LR is not linear by transforming every execution of LR from an initial graph into an execution of Full Reversal from a different initial graph while maintaining the execution's work and time complexity.
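As a concrete illustration of the Full Reversal rule described above, the following sketch runs the algorithm round by round on a small directed graph. The graph, node names, and the round-based scheduler are illustrative assumptions, not taken from the article.

```python
def full_reversal(edges, destination):
    """Run Full Reversal: each sink (a non-destination node with no
    outgoing links) reverses all of its incident links, round by round,
    until the destination is the only sink.  Returns the final edge set,
    the number of rounds, and the total work (node reversals)."""
    edges = set(edges)
    nodes = {u for e in edges for u in e}
    rounds = work = 0
    while True:
        outdeg = {u: 0 for u in nodes}
        for u, _ in edges:
            outdeg[u] += 1
        sinks = {u for u in nodes if u != destination and outdeg[u] == 0}
        if not sinks:
            return edges, rounds, work
        # Two adjacent nodes cannot both be sinks (the link between them
        # leaves one of them), so all sinks can reverse simultaneously.
        edges = {(v, u) if v in sinks else (u, v) for (u, v) in edges}
        rounds += 1
        work += len(sinks)
```

On the chain A→B←C with destination C, node B reverses first, then A, after which every node has a directed path to C.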
ISBN: (Print) 9781728166957
Neural network algorithms, such as deep-learning approaches, have been widely applied to computer vision problems. The more sophisticated the neural network model, the more computing resources and processing time are consumed. Time-complexity analysis discloses the training and validation time versus the accuracy outcome of the neural network model. This paper proposes a rigorous framework for conducting time-complexity analysis of neural network models. From the experimental results on progressively more sophisticated neural network designs, the paper argues that pursuing an unreasonably high accuracy, or persisting in finding the perfect algorithm, may not be worthwhile or practical from the time-complexity perspective if the data quality is not consistently in favor of the chosen algorithm.
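A minimal sketch of the kind of measurement such a framework rests on: timing each model configuration's training run and pairing it with the accuracy achieved. The `profile` helper and its configuration format are hypothetical illustrations, not the paper's framework.

```python
import time

def profile(train_fn, configs):
    """Record wall-clock training time and resulting accuracy for each
    model configuration, supporting a time-vs-accuracy tradeoff analysis.
    train_fn(cfg) is assumed to train one model and return its accuracy."""
    results = []
    for cfg in configs:
        t0 = time.perf_counter()
        acc = train_fn(cfg)
        elapsed = time.perf_counter() - t0
        results.append((cfg, elapsed, acc))
    return results
```

In practice one would plot elapsed time against accuracy across configurations and stop where extra training time no longer buys meaningful accuracy.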
ISBN: (Print) 9781728132228
The rapid development of information technology brings a constant need to include new and attractive modules in ICT specialist curricula, and there are quite common attempts to reduce the less attractive parts; unfortunately, theoretical subjects are very often among the candidates for cuts or even replacement by something more applicable. We believe that the theoretical foundations of computer science are very important, and that it is possible to persuade our students that they must be aware of theoretical principles in order to become real specialists in this field. This paper describes our approach to teaching the module on algorithm complexity, with a special focus on the types of practical examples used to demonstrate the usefulness and applicability of the theory of algorithm complexity.
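One classroom-style example of the kind the abstract mentions (this sketch is our own illustration, not one from the paper): counting basic operations makes the gap between linear and quadratic growth tangible before any formal definition is introduced.

```python
def count_ops_linear(n):
    """Operation count of a single loop over n items: O(n)."""
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_ops_quadratic(n):
    """Operation count of two nested loops over n items: O(n^2)."""
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops
```

Doubling n doubles the first count but quadruples the second, which students can verify empirically before seeing the asymptotic argument.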
ISBN: (Print) 9781479984541
Scientists, business analysts, and others in a growing number of fields are trying to cope with the vast amount of data being generated. The lack of software that can efficiently process large data sets hinders insight into complex relationships. One of the most important concepts in learning how to construct efficient code is time complexity analysis with Big-O notation. Students often find time complexity difficult to learn and too abstract to apply in any meaningful way. Common instructional methods consist of a combination of mathematics and intuitive analysis which are often too cumbersome for practical application or cannot be extended to complex algorithms. Unfortunately, there are few tools available for teaching time complexity that students find concrete, straightforward, and applicable to real problems. In this paper, we present O-Charts, a first step in the development of a practical toolkit for teaching and applying time complexity analysis. O-Charts allow the systematic analysis of deeply nested loops where the use of control variables makes the number of executions difficult to define, calculate, and explain. We report initial results and our plans for future work.
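A hypothetical example of the kind of loop such a toolkit targets, where a control variable changes geometrically and the execution count is not obvious from the nesting depth alone (this sketch is ours, not from the paper):

```python
def count_executions(n):
    """Count inner-body executions of a nested loop whose inner bound
    depends on the outer control variable.  The outer loop doubles i,
    so the inner loop runs 1 + 2 + 4 + ... up to n times."""
    count = 0
    i = 1
    while i <= n:           # outer loop: i = 1, 2, 4, ..., O(log n) rounds
        for _ in range(i):  # inner loop: i iterations this round
            count += 1
        i *= 2
    return count
```

Despite two levels of nesting, the total is roughly 2n - 1 executions, i.e. O(n) rather than O(n log n): exactly the kind of case where intuition from nesting depth misleads and a systematic analysis pays off.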
We investigate the time complexities of finite difference methods for solving the high-dimensional linear heat equation, the high-dimensional linear hyperbolic equation, and the multiscale hyperbolic heat system with quantum algorithms (hence referred to as "quantum difference methods"). For the heat and linear hyperbolic equations we study the impact of explicit and implicit time discretizations on quantum advantages over the classical difference method. For the multiscale problem, we find that the time complexity of both the classical and quantum treatments of the explicit scheme scales as O(1/epsilon), where epsilon is the scaling parameter, while the scaling for the multiscale Asymptotic-Preserving (AP) schemes does not depend on epsilon. This indicates that it is still of great importance to develop AP schemes for multiscale problems in quantum computing.
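For readers unfamiliar with the classical baseline, here is a minimal sketch of one explicit (forward-Euler) finite difference step for the 1-D heat equation. The quantum treatments studied in the paper are not reproduced here; the discretization and stability bound below are standard textbook choices, not the paper's schemes.

```python
def heat_explicit_step(u, lam):
    """One explicit step u_j <- u_j + lam*(u_{j+1} - 2*u_j + u_{j-1})
    for the 1-D heat equation with fixed (Dirichlet) boundary values.
    Stable for lam = dt/dx**2 <= 0.5."""
    return [u[0]] + [
        u[j] + lam * (u[j + 1] - 2.0 * u[j] + u[j - 1])
        for j in range(1, len(u) - 1)
    ] + [u[-1]]
```

Repeating this step advances the solution in time; in d dimensions the grid, and hence the per-step cost, grows exponentially with d, which is the regime where quantum speedups become interesting.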
Cover Song Identification (CSI) is a task in Music Information Retrieval (MIR) that attempts to identify other versions of a song containing different structures, tonalities, and tempos, which brings several challenges to this task. Some of the frameworks proposed to identify cover songs were evaluated through the Music Information Retrieval Evaluation eXchange (MIREX) competition, which aims to assess algorithms for MIR tasks. Among the problems faced by researchers is the time complexity of the algorithms considered in the context of CSI, which limits or makes infeasible their application in real-world scenarios. Considering this challenge, we present a time complexity evaluation of the two most popular frameworks presented and ranked at the MIREX competition. This assessment confirms that both algorithms have the same worst-case complexity of O(n^2), besides a significant difference in terms of their constants, which impacts overall processing time. Finally, this analysis can support researchers in improving algorithms for the CSI task, thus leading to frameworks more suitable for real-world scenarios.
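As an illustration of where an O(n^2) worst case typically comes from in sequence-alignment-based CSI, here is a generic dynamic-time-warping cost computation. It is a standard technique sketched under our own assumptions, not the code of either MIREX framework.

```python
def dtw_cost(a, b):
    """Dynamic-time-warping alignment cost of two 1-D feature sequences.
    The full O(len(a) * len(b)) table is the quadratic bottleneck that
    dominates this family of similarity measures."""
    inf = float('inf')
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = abs(a[i - 1] - b[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

A tempo-shifted cover yields a low alignment cost, but every pairwise comparison pays the quadratic table cost, which is exactly what hurts large-scale deployment.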
ISBN: (Print) 9783037855744
The dynamical evolutionary algorithm (DEA) is a new evolutionary algorithm based on the theory of statistical mechanics; however, DEA converges slowly and often converges to local optima on some function optimization problems. In this paper, a hybrid dynamical evolutionary algorithm (HDEA) with multi-parent crossover and differential evolution mutation is proposed to accelerate convergence and to escape suboptimal solutions more easily. Moreover, the population of HDEA is initialized by chaos. To confirm the effectiveness of our algorithm, HDEA is applied to typical numerical function minimization problems. The computational complexity of HDEA is analyzed, and the experimental results show that HDEA outperforms DEA in convergence velocity and precision, even though the two algorithms have similar time complexity.
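A sketch of two of the ingredients named above, under the usual textbook definitions: logistic-map chaotic initialization and DE/rand/1 differential evolution mutation. The parameter choices (logistic parameter 4, F = 0.5) are illustrative assumptions, not the paper's settings.

```python
import random

def chaos_init(pop_size, dim, lo, hi, x0=0.3):
    """Chaotic population initialization via the logistic map
    x <- 4*x*(1 - x), rescaled into the search box [lo, hi]."""
    pop, x = [], x0
    for _ in range(pop_size):
        ind = []
        for _ in range(dim):
            x = 4.0 * x * (1.0 - x)       # logistic map stays in (0, 1)
            ind.append(lo + x * (hi - lo))
        pop.append(ind)
    return pop

def de_mutate(pop, i, F=0.5):
    """DE/rand/1 mutation: v = x_r1 + F*(x_r2 - x_r3) with three
    distinct individuals other than i."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][d] + F * (pop[r2][d] - pop[r3][d])
            for d in range(len(pop[i]))]
```

Chaotic initialization spreads the initial population more evenly than uniform random draws often do, while the differential mutation injects direction information from the current population.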
ISBN: (Print) 9789811001291; 9789811001277
String matching is an integral part of various important areas of computer science, viz. search engines, DNA matching, information retrieval, security, and biological systems. String matching algorithms are used to find a pattern in a string. In this paper, the authors give a novel algorithm to reduce the time complexity of string matching. The proposed algorithm is based on the concept of hashing with chaining. Further, the authors found the time complexity reduced in most cases and equal in a few.
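The general idea of hashing with chaining for string matching can be sketched as follows; the hash function, table size, and indexing scheme here are illustrative choices of ours, not the authors' algorithm.

```python
def find_pattern(text, pattern):
    """String matching via hashing with chaining: hash every window of
    len(pattern) characters into a chained table, then verify candidate
    positions character by character to discard hash collisions.
    Returns all starting indices where pattern occurs in text."""
    m, table_size = len(pattern), 101   # toy table size (a small prime)
    if m == 0 or m > len(text):
        return []
    chains = {}                          # hash bucket -> chained positions
    for i in range(len(text) - m + 1):
        h = sum(ord(c) for c in text[i:i + m]) % table_size
        chains.setdefault(h, []).append(i)
    hp = sum(ord(c) for c in pattern) % table_size
    # Only the chain matching the pattern's hash needs verification.
    return [i for i in chains.get(hp, []) if text[i:i + m] == pattern]
```

With a well-distributed hash, most chains are short, so the verification step touches only a small fraction of the text's windows.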
ISBN: (Print) 9789806560550
There exists a variety of techniques for the time complexity analysis of algorithms and functions. This analysis is used to find an upper bound on time complexity in big-oh notation, denoted by O(g(n)), where g(n) is a function of n and n is the size of the given problem. Besides big-oh complexity, there are other complexity notations, such as Omega, Theta, and little-o. Complexity analysis is used to select an appropriate algorithm for solving a given problem using computers. Unfortunately, there does not exist any efficient, generalized framework for time complexity analysis. Most of the existing techniques are complex, obsolete, and hard to use in practice. There is a misconception in some literature that O(g(n)) is a function rather than a set. In this paper, it has been established that O(g(n)) is basically a set, which includes all algorithms with an upper bound on the order of g(n). Moreover, a generalized framework for analyzing the time-complexity function f(n) of algorithms to obtain the big-oh notational complexity has been proposed. The method has been extended to functions f(n_1, n_2, ..., n_m), m = 1, 2, ..., involving multiple variables. Other complexity orders are discussed and compared with the big-oh complexity.
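The set view of big-oh argued for above is conventionally written as:

```latex
O(g(n)) \;=\; \bigl\{\, f \;\bigm|\; \exists\, c > 0,\ \exists\, n_0 \ge 1
\ \text{such that}\ 0 \le f(n) \le c\,g(n)\ \text{for all } n \ge n_0 \,\bigr\}
```

so membership is properly written as f(n) ∈ O(g(n)) rather than the common abuse of notation f(n) = O(g(n)).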
ISBN: (Print) 9783903176324
Generic Dijkstra is a novel algorithm for finding the optimal shortest path in both wavelength-division multiplexed (WDM) networks and elastic optical networks (EON), claimed to outperform known algorithms considerably. Because of its novelty, it has not been independently implemented and verified. Its time complexity also remains unknown. In this paper, we perform run-time analysis and show that Generic Dijkstra's running time grows quadratically with the number of graph vertices and logarithmically with the number of edge units. We also discover that the running time of the Generic Dijkstra algorithm as a function of network utilization is not monotonic: peak running time occurs at approximately 0.25 network utilization. Additionally, we provide an independent open-source implementation of Generic Dijkstra in the Python language. We confirm the correctness of the algorithm and its superior performance. In comparison to the Filtered Graphs algorithm, Generic Dijkstra is approximately 2.3 times faster in networks with 25 to 500 nodes, and its computation takes less time in 90% of calls.
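Generic Dijkstra itself is not reproduced here; for orientation, a sketch of the classical binary-heap Dijkstra baseline it generalizes (the per-edge spectrum-availability bookkeeping that distinguishes Generic Dijkstra is omitted):

```python
import heapq

def dijkstra(adj, src):
    """Classical single-source shortest paths with a binary heap.
    adj maps a node to a list of (neighbor, weight) pairs; returns a
    dict of shortest distances from src to every reachable node."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                       # stale heap entry, skip it
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist
```

In optical networks a path must additionally carry a contiguous, continuous block of spectrum, which is why a scalar distance label no longer suffices and labels must track available edge units.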