Lua (Ierusalimschy et al., 1996) is a well-known scripting language, popular among many programmers, most notably in the gaming industry. Remarkably, the only data-structuring mechanism in Lua is the associative array, called a table. With Lua 5.0, the reference implementation of Lua introduced hybrid tables, which implement tables by combining a hashmap with a dynamically growing array: the values associated with integer keys are stored in the array part, when suitable, and everything else is stored in the hashmap. All this is transparent to the user, who gets a single simple interface for handling tables. In this paper we carry out a theoretical analysis of the performance of Lua's tables, considering various worst-case and probabilistic scenarios. In particular, we uncover some problematic situations for the simple probabilistic model in which we add a new key with some fixed probability p > 1/2 and delete a key with probability 1 − p: the cost of performing T such operations is proved to be Θ(T log T) with high probability, whereas linear complexity would be expected for such a data structure. If there are no deletions, we prove that the tables behave better overall. In particular, we establish that inserting the T integers from 1 to T in a random order takes essentially linear time.
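As a concrete illustration of the hybrid-table mechanism this abstract describes, here is a minimal Python sketch. The class name and the migration rule are illustrative assumptions; Lua's actual C implementation uses a more involved "half-full" rehash criterion not reproduced here.

```python
# A minimal sketch of a hybrid table: integer keys 1..n live in a dense
# array part, everything else in a hashmap. Simplified policy, not Lua's.
class HybridTable:
    def __init__(self):
        self.array = []   # values for integer keys 1..len(self.array)
        self.hash = {}    # all other keys

    def __setitem__(self, key, value):
        if isinstance(key, int) and 1 <= key <= len(self.array):
            self.array[key - 1] = value
        elif isinstance(key, int) and key == len(self.array) + 1:
            # Appending one past the array part extends it, then migrates
            # any hash-part keys that have become contiguous.
            self.array.append(value)
            k = len(self.array) + 1
            while k in self.hash:
                self.array.append(self.hash.pop(k))
                k += 1
        else:
            self.hash[key] = value

    def __getitem__(self, key):
        if isinstance(key, int) and 1 <= key <= len(self.array):
            return self.array[key - 1]
        return self.hash.get(key)   # None plays the role of Lua's nil

t = HybridTable()
for i in range(1, 6):
    t[i] = i * i          # lands in the array part
t["name"] = "lua"         # lands in the hash part
print(t[3], t["name"])    # 9 lua
```

The user-facing interface is a single indexing operation, as in Lua; the split between the two parts is invisible from the outside.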
In this paper, an entirely novel discrete probabilistic model is presented for generating 0-1 Knapsack Problem instances. We analyze the expected behavior of the greedy algorithm, the eligible-first algorithm, and the linear relaxation algorithm on these instances; all three are used to bound the solution of the 0-1 Knapsack Problem (0-1 KP) and/or its approximation. The probabilistic setting is given and the main random variables are identified. The expected performance of each of the aforementioned algorithms is established analytically in closed form, which had not been done before.
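For reference, a minimal sketch of the classical greedy algorithm analyzed in this abstract: sort items by profit-to-weight ratio and take each item that still fits. This is the textbook algorithm; the paper's probabilistic instance model is not reproduced here.

```python
# Greedy lower bound for 0-1 Knapsack: best ratio first, skip what no
# longer fits. Returns the achieved profit and the chosen item indices.
def greedy_knapsack(profits, weights, capacity):
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i],
                   reverse=True)
    total_profit, remaining, chosen = 0, capacity, []
    for i in order:
        if weights[i] <= remaining:
            chosen.append(i)
            total_profit += profits[i]
            remaining -= weights[i]
    return total_profit, chosen

print(greedy_knapsack([60, 100, 120], [10, 20, 30], 50))  # (160, [0, 1])
```

The linear relaxation (Dantzig) upper bound differs only in that it also adds the fractional part of the first item that does not fit.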
In order to create artificial enzymatic networks capable of increasingly complex behavior, an improved methodology for understanding and controlling the kinetics of these networks is needed. Here, we introduce a Bayesian analysis method that allows for the accurate inference of enzyme kinetic parameters and the determination of the most likely reaction mechanisms, by combining data from different experiments and network topologies in a single probabilistic analysis framework. This Bayesian approach explicitly allows us to continuously improve our parameter estimates and behavior predictions by iteratively adding new data to our models, while automatically taking into account uncertainties introduced by the experimental setups or the chemical processes in general. We demonstrate the potential of this approach by characterizing systems of enzymes compartmentalized in beads inside flow reactors. The methods we introduce here provide a new approach to the design of increasingly complex artificial enzymatic networks, making their design more efficient and robust against the accumulation of experimental errors.
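A minimal sketch of the kind of Bayesian kinetic inference described here, assuming Michaelis-Menten kinetics and Gaussian measurement noise. The data, priors, and proposal scale are synthetic illustrations, not the paper's actual flow-reactor model.

```python
# Random-walk Metropolis inference of (Vmax, Km) from noisy rate data.
# New experiments can be folded in by adding their residual terms to
# log_post -- the "iteratively adding new data" idea from the abstract.
import numpy as np

rng = np.random.default_rng(0)
Vmax_true, Km_true, sigma = 2.0, 0.5, 0.05
S = np.linspace(0.05, 2.0, 20)                      # substrate concentrations
rates = Vmax_true * S / (Km_true + S) + rng.normal(0, sigma, S.size)

def log_post(theta):
    Vmax, Km = theta
    if Vmax <= 0 or Km <= 0:                        # flat positive prior
        return -np.inf
    resid = rates - Vmax * S / (Km + S)
    return -0.5 * np.sum(resid**2) / sigma**2

theta, samples = np.array([1.0, 1.0]), []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)           # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop
    samples.append(theta)
samples = np.array(samples[5000:])                  # drop burn-in
print("posterior mean (Vmax, Km):", samples.mean(axis=0))
```

The posterior spread directly quantifies the parameter uncertainty that the experimental noise induces, which is what the abstract's "automatically taking into account uncertainties" refers to.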
This paper proposes a new sequential probabilistic back analysis approach for probabilistically determining the uncertain geomechanical parameters of shield tunnels using time-series monitoring data. The approach builds on the recently developed Bayesian updating with subset simulation: within its framework, a complex Bayesian back analysis problem is transformed into an equivalent structural reliability problem, which is solved by subset simulation. Hermite polynomial chaos expansion-based surrogate models are constructed to improve the computational efficiency of the probabilistic back analysis, and the reliability of tunneling-induced ground settlements is updated over the course of the sequential back analyses. A real shield tunnel project on Nanchang Metro Line No. 1 in China is investigated to assess the effectiveness of the approach. The proposed approach infers the posterior distributions of the uncertain geomechanical parameters (i.e., the Young's moduli of the surrounding soil layers and the ground vehicle load), and the reliability of tunneling-induced ground settlements can be updated in real time by fully utilizing the time-series monitoring data. The results agree well with the variation trend of the field monitoring data of ground settlement and with the post-event investigations.
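A minimal sketch of the core reformulation used here (often called BUS): Bayesian updating is recast as the "failure" event u ≤ L(θ)/c for u ~ U(0,1), so posterior samples are exactly the prior samples falling in the failure domain. Plain rejection stands in for subset simulation below, and the one-parameter settlement model is a made-up illustration, not the paper's tunnel model.

```python
# BUS-style Bayesian updating via rejection: accept a prior sample of the
# soil modulus E when a uniform u falls below its normalized likelihood.
import numpy as np

rng = np.random.default_rng(1)
obs, noise = 12.0, 1.0                 # observed settlement (mm) and its std

def settlement_model(E):               # hypothetical response vs. modulus E
    return 240.0 / E

def likelihood(E):
    return np.exp(-0.5 * ((settlement_model(E) - obs) / noise) ** 2)

c = 1.0                                # upper bound on the likelihood
E_prior = rng.lognormal(mean=3.0, sigma=0.3, size=200000)  # prior on E (MPa)
u = rng.random(E_prior.size)
posterior = E_prior[u <= likelihood(E_prior) / c]          # BUS acceptance
print("posterior mean E:", posterior.mean(), "from", posterior.size, "samples")
```

Subset simulation replaces this brute-force rejection with a sequence of nested conditional levels, which is what makes the approach tractable when the acceptance probability is tiny; the surrogate models in the paper then replace the expensive settlement model itself.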
Recursive sequences of laws of random variables (and random vectors) are considered where an independence assumption which is usually made within the setting of the contraction method is dropped. This restricts the st...
Finding the largest triangle in an n-node edge-weighted graph belongs to a set of problems that are all equivalent under subcubic reductions: a truly subcubic algorithm for any one of them would imply that they are all subcubic. A recent strong conjecture states that none of them can be solved in less than Θ(n³) time, but this negative result does not rule out algorithms with average, rather than worst-case, subcubic running time. Indeed, in this work we describe the first truly subcubic average-complexity procedure for this problem on graphs whose edge lengths are uniformly distributed in [0, 1]. Our procedure finds the largest triangle in average quadratic time, which is the best possible complexity for any algorithm for this problem. We also give empirical evidence that the quadratic average complexity holds for many other random distributions of the edge lengths. A notable exception is when the lengths are distances between random points in Euclidean space, for which the algorithm takes average cubic time.
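A hedged sketch of why early termination helps on random weights: scan edges in decreasing weight order and stop once no remaining triangle can beat the best one found. This illustrates the pruning idea only; it is not the paper's exact procedure.

```python
# Exact maximum-weight triangle with a sorted-edge scan. A triangle is
# detected when its *lightest* edge is processed (its two heavier edges
# are already in adj), so once w + 2*wmax cannot beat `best`, every
# remaining triangle is dominated and we can stop.
import itertools, random

def largest_triangle(n, w):
    adj = {v: set() for v in range(n)}
    best, best_tri = float("-inf"), None
    edges = sorted(w, key=w.get, reverse=True)
    wmax = w[edges[0]] if edges else 0.0
    for (u, v) in edges:
        if w[(u, v)] + 2 * wmax <= best:   # safe pruning bound
            break
        for x in adj[u] & adj[v]:          # common neighbors seen so far
            s = (w[(u, v)]
                 + w[(min(u, x), max(u, x))]
                 + w[(min(v, x), max(v, x))])
            if s > best:
                best, best_tri = s, (u, v, x)
        adj[u].add(v); adj[v].add(u)
    return best_tri, best

n = 50
w = {(u, v): random.random() for u, v in itertools.combinations(range(n), 2)}
print(largest_triangle(n, w))
```

With uniform weights the best triangle is close to 3, so only the heaviest edges are ever scanned; with Euclidean distances the weights are strongly correlated and the pruning loses its bite, matching the cubic exception noted above.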
Recently Avis and Jordan demonstrated the efficiency of a simple technique called budgeting for the parallelization of a number of tree search algorithms. The idea is to limit the amount of work a processor performs before it terminates its search and returns any unexplored nodes to a master process. This limit is set by a critical budget parameter, which determines the overhead of the process. In this paper we study the behaviour of the budget parameter on conditional Galton–Watson trees, obtaining asymptotically tight bounds on this overhead. We present empirical results showing that these bounds are surprisingly accurate in practice.
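A minimal sketch of the budgeting idea: a worker runs depth-first search but stops after expanding a fixed number of nodes, handing the frontier of unexplored subtrees back to the master. The `children` interface is a made-up stand-in for any tree search problem.

```python
# Budgeted DFS: expand at most `budget` nodes, return both the visited
# nodes and the unexplored frontier for the master to redistribute.
def budgeted_dfs(root, budget, children):
    stack, visited, leftover = [root], [], []
    while stack:
        if budget == 0:
            leftover = stack       # unexplored subtrees go back to master
            break
        node = stack.pop()
        visited.append(node)
        budget -= 1
        stack.extend(children(node))
    return visited, leftover

# Example on a complete binary tree of depth 4, nodes encoded as tuples.
def kids(path):
    return [path + (0,), path + (1,)] if len(path) < 4 else []

done, frontier = budgeted_dfs((), budget=10, children=kids)
print(len(done), "expanded;", len(frontier), "returned to master")
```

The overhead the abstract analyzes is exactly the traffic generated by these returned frontiers: a small budget balances load well but floods the master, a large one starves idle workers.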
A recursive function on a tree is a function in which each leaf has a given value, and each internal node has a value equal to a function of the number of children, the values of the children, and possibly an explicitly specified random element U. The value of the root is, in general, the key quantity of interest. In this study, all node values and function values lie in a finite set S. In this note, we describe the limit behavior when the leaf values are drawn independently from a fixed distribution on S, and the tree T_n is a random Galton–Watson tree of size n.
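A minimal sketch of such a recursive function with values in the finite set S = {0, 1}: leaves draw i.i.d. Bernoulli values and each internal node applies f (here, majority with a random tiebreaker U) to its children. The offspring law (0 or 2 children) and the choice of f are illustrative, not the note's specific setting; depth is capped so the sampler always terminates.

```python
# Sample the root value of a random binary tree whose leaves carry i.i.d.
# Bernoulli(leaf_p1) values; internal nodes take the majority of their two
# children, breaking ties with an independent fair coin U.
import random

def root_value(depth=0, p_leaf=0.4, leaf_p1=0.5, max_depth=20):
    if depth >= max_depth or random.random() < p_leaf:   # leaf
        return 1 if random.random() < leaf_p1 else 0
    child_vals = [root_value(depth + 1, p_leaf, leaf_p1, max_depth)
                  for _ in range(2)]                     # two children
    s = sum(child_vals)
    if s == 1:                                           # tie: use U
        return 1 if random.random() < 0.5 else 0
    return 1 if s == 2 else 0

# Empirical law of the root value over many independent trees.
trials = 10000
print(sum(root_value() for _ in range(trials)) / trials)
```

The limit behavior studied in the note concerns precisely this law of the root value as the (conditioned) tree size n grows.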
A fundamental algorithm for selecting ranks from a finite subset of an ordered set is Radix Selection. This algorithm requires the data to be given as strings of symbols over an ordered alphabet, e.g., binary expansions of real numbers. Its complexity is measured by the number of symbols that have to be read. In this paper the model of independent data identically generated from a Markov chain is considered. The complexity is studied as a stochastic process indexed by the set of infinite strings over the given alphabet. The orders of the mean and variance of the complexity are derived and, after normalization, a limit theorem with a centered Gaussian process as the limit is obtained. This implies an analysis for two standard models for the ranks: uniformly chosen ranks, also called grand averages, and the worst-case rank complexities, which are of interest in computer science. For uniform data and the asymmetric Bernoulli model (i.e., memoryless sources), we also find weak convergence for the normalized process of complexities when indexed by the ranks, while for more general Markov sources these processes are not tight under the standard normalizations.
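A minimal sketch of Radix Selection on binary strings: to select the element of rank k, bucket the strings on their next symbol, recurse into the bucket containing rank k, and count every symbol read as one unit of cost. This is the standard algorithm (assuming distinct strings); the Markov-source input model is what the paper analyzes, not something encoded here.

```python
# Select the rank-k string (0-based, in lexicographic order) and report
# the number of symbols read, which is the complexity measure above.
def radix_select(strings, k, pos=0, cost=0):
    if len(strings) == 1:
        return strings[0], cost
    zeros = [s for s in strings if s[pos] == "0"]
    ones  = [s for s in strings if s[pos] == "1"]
    cost += len(strings)                     # one symbol read per string
    if k < len(zeros):
        return radix_select(zeros, k, pos + 1, cost)
    return radix_select(ones, k - len(zeros), pos + 1, cost)

data = ["0110", "1011", "0001", "1100", "0101"]
print(radix_select(data, k=2))               # median, plus symbols read
```

Running this over all ranks k gives the rank-indexed complexity process whose tightness the abstract discusses.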
This article analyzes the stochastic runtime of a Cross-Entropy algorithm mimicking a Max-Min Ant System with iteration-best reinforcement, investigating the impact of the magnitude of the sample size on the runtime to find optimal solutions for TSP instances. For simple TSP instances that have a {1, n}-valued distance function and a unique optimal solution, we show that a sample size N ∈ ω(ln n) results in a stochastically polynomial runtime, while N ∈ O(ln n) results in a stochastically exponential runtime, where "stochastically" means with probability 1 − n^(−ω(1)) and n is the number of cities. In particular, for N ∈ ω(ln n), we prove a stochastic runtime of O(N · n^6) with vertex-based random solution generation, and a stochastic runtime of O(N · n^3 ln n) with edge-based random solution generation. By choosing a small N ∈ ω(ln n), these runtimes come very close to the best known expected runtimes for variants of Max-Min Ant System with best-so-far reinforcement; moreover, they are obtained for the stronger notion of stochastic runtime, which bounds the runtime in all but a vanishing fraction of cases. We also inspect more complex instances with n vertices positioned on an m × m grid. When the n vertices span a convex polygon, we obtain a stochastic runtime of O(n^4 m^(3+ε)) with vertex-based random solution generation, and a stochastic runtime of O(n^3 m^(3+ε)) with edge-based random solution generation. When there are k ∈ O(1) vertices inside a convex polygon spanned by the other n − k vertices, we obtain a stochastic runtime of O(n^4 m^(5+ε) + n^(6k−1) m^ε) with vertex-based random solution generation, and a stochastic runtime of O(n^3 m^(5+ε) + n^(3k) m^ε) with edge-based random solution generation. These runtimes are better than the expected runtime of the so-called (μ+λ) EA reported in a r…
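A minimal sketch of the iteration-best reinforcement scheme analyzed here: sample N tours from edge pheromones, reinforce only the best tour of the iteration, and clamp pheromones to [tau_min, tau_max]. The Euclidean instance, rho, and the bounds are illustrative assumptions; the paper's {1, n}-valued instances and exact update rule are not reproduced.

```python
# Max-Min-style ant system with iteration-best reinforcement on a small
# random Euclidean TSP instance. Tours start at city 0; edges undirected.
import random

def tour_length(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def construct_tour(n, tau):
    tour, left = [0], list(range(1, n))
    while left:
        u = tour[-1]
        nxt = random.choices(left, weights=[tau[u][v] for v in left])[0]
        tour.append(nxt)
        left.remove(nxt)
    return tour

def mmas_iteration_best(dist, N=20, iters=200, rho=0.1,
                        tau_min=0.01, tau_max=1.0):
    n = len(dist)
    tau = [[tau_max] * n for _ in range(n)]
    best = None
    for _ in range(iters):
        tours = [construct_tour(n, tau) for _ in range(N)]
        ib = min(tours, key=lambda t: tour_length(t, dist))  # iteration best
        if best is None or tour_length(ib, dist) < tour_length(best, dist):
            best = ib
        ib_edges = {(ib[i], ib[(i + 1) % n]) for i in range(n)}
        for u in range(n):
            for v in range(n):
                hit = (u, v) in ib_edges or (v, u) in ib_edges
                t = (1 - rho) * tau[u][v] + (rho if hit else 0.0)
                tau[u][v] = min(tau_max, max(tau_min, t))    # MMAS clamping
    return best

n = 8
pts = [(random.random(), random.random()) for _ in range(n)]
dist = [[((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5 for b in pts] for a in pts]
b = mmas_iteration_best(dist)
print(b, tour_length(b, dist))
```

This corresponds to the edge-based random solution generation; the vertex-based variant keeps pheromones on (position, city) pairs instead of edges.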