Adaptive time-stepping based on linear digital control theory has several advantages: the algorithms can be analyzed in terms of stability and adaptivity, and they can be designed to produce smoother stepsize sequences, resulting in significantly improved regularity and computational stability. Here, we extend this approach by viewing the closed-loop transfer map $H_{\hat{\varphi}}: \log \hat{\varphi} \mapsto \log h$ as a digital filter, processing the signal $\log \hat{\varphi}$ (the principal error function) in the frequency domain in order to produce a smooth stepsize sequence $\log h$. The theory covers all previously considered control structures and offers new possibilities for constructing stepsize selection algorithms in the asymptotic stepsize-error regime. Without incurring extra computational costs, the controllers can be designed for special purposes such as a higher order of adaptivity (for smooth ODE problems) or a stronger ability to suppress high-frequency error components (nonsmooth problems, stochastic ODEs). Simulations verify the controllers' ability to produce stepsize sequences resulting in improved regularity and computational stability.
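To make the filter view concrete, here is a minimal Python sketch of a two-step digital-filter stepsize controller of the general kind this abstract describes: the update is linear in the log domain, combining the current and previous error estimates with the previous stepsize ratio. The specific coefficients below follow a textbook H211b-style choice and are an illustrative assumption, not taken from the paper.

```python
import random

def filtered_stepsize(h, h_prev, r, r_prev, tol, k):
    """One update of an H211b-style digital-filter stepsize controller.

    h, h_prev -- current and previous stepsizes
    r, r_prev -- current and previous error estimates
    tol       -- error tolerance
    k         -- order of the error estimator
    Coefficients beta1 = beta2 = 1/(4k), alpha = 1/4 are an assumed
    standard choice, not prescribed by this abstract.
    """
    b1 = b2 = 1.0 / (4.0 * k)
    a = 0.25
    # Log-domain linear filter:
    # log h_new = log h + b1*log(tol/r) + b2*log(tol/r_prev) - a*log(h/h_prev)
    return h * (tol / r) ** b1 * (tol / r_prev) ** b2 * (h / h_prev) ** (-a)

# Example: the filter responds smoothly to a noisy error signal.
rng = random.Random(0)
h, h_prev, r_prev = 0.01, 0.01, 1e-6
for _ in range(5):
    r = 1e-6 * rng.uniform(0.5, 2.0)      # noisy error estimate
    h, h_prev, r_prev = filtered_stepsize(h, h_prev, r, r_prev, 1e-6, k=4), h, r
    print(h)
```

Note how the filtered update damps the stepsize response to noise in the error estimate, which is exactly the smoothing effect the abstract attributes to frequency-domain design.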
Automated hyperparameter optimization (HPO) has gained great popularity and is an important component of most automated machine learning frameworks. However, designing HPO algorithms is still an unsystematic, manual process: new algorithms are often built on top of prior work, where limitations are identified and improvements are proposed. Even though this approach is guided by expert knowledge, it remains somewhat arbitrary: it rarely yields a holistic understanding of which algorithmic components drive performance, and it carries the risk of overlooking good algorithmic design choices. We present a principled approach to automated benchmark-driven algorithm design, applied to multifidelity HPO (MF-HPO). First, we formalize a rich space of MF-HPO candidates that includes, but is not limited to, common existing HPO algorithms, and we present a configurable framework covering this space. To find the best candidate automatically and systematically, we follow a programming-by-optimization approach and search over the space of algorithm candidates via Bayesian optimization. By performing an ablation analysis, we then examine whether the design choices found are necessary or could be replaced by simpler, more naive ones. We observe that a relatively simple configuration (in some ways simpler than established methods) performs very well, as long as a few critical configuration parameters are set to the right values.
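The programming-by-optimization idea can be sketched compactly: the design choices of an MF-HPO algorithm become a searchable configuration space, and an outer optimizer picks the configuration that scores best on a benchmark suite. The sketch below uses random search as a stand-in for the Bayesian optimization the paper employs, and the design space, scoring function, and all names are hypothetical illustrations, not the paper's actual framework.

```python
import random

# Hypothetical design space for an MF-HPO algorithm: each key is an
# algorithmic component, each value the choices it can take.
DESIGN_SPACE = {
    "sampling":       ["random", "model_based"],
    "fidelity_sched": ["geometric", "fixed"],
    "eta":            [2, 3, 4],          # successive-halving reduction factor
    "model_refit":    [True, False],
}

def sample_config(rng):
    return {k: rng.choice(v) for k, v in DESIGN_SPACE.items()}

def benchmark_score(config):
    # Toy stand-in for "mean performance over a benchmark suite"; a real
    # study would run the configured MF-HPO algorithm end to end here.
    score = 1.0 if config["sampling"] == "model_based" else 0.5
    score += 0.3 if config["fidelity_sched"] == "geometric" else 0.0
    score -= abs(config["eta"] - 3) * 0.1
    return score

def search(n_iters=100, seed=0):
    """Outer loop: optimize over algorithm designs (random search here,
    Bayesian optimization in the paper)."""
    rng = random.Random(seed)
    best_cfg, best_val = None, float("-inf")
    for _ in range(n_iters):
        cfg = sample_config(rng)
        val = benchmark_score(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val

print(search())
```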
Image segmentation is important in many fields. With the development of computer technology, computational methods for image segmentation have become increasingly effective, and here they are studied on the basis of partial differential equations. The curve representation method of plane differential geometry is expounded, and the SegNet-v2 segmentation model is analyzed and tested on medical image segmentation. The test results show that the partial-differential-equation image segmentation algorithm achieves more accurate segmentation, particularly on medical images, where it performs well and is worth promoting further in practice.
This paper reviews and evaluates several state-of-the-art feature description algorithms. The components of each feature description method are analyzed, and their applications in dealing with specific challenges are identified. We compare state-of-the-art feature description methods, including SIFT, DAISY, HRI-CSLTP, LIOP, MROGH, and MRRID, under a common measurement protocol. The quantitative comparative results demonstrate these algorithms' suitability for different scenes, providing guidance for designing novel feature description algorithms.
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, we show the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, a relationship that was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems likely to be encountered; the prediction can be read directly from the diagram without complex calculation.
Mining association rules from databases has attracted great interest because of its potentially very practical applications. Given a database, the problem of interest is how to mine association rules (which could describe patterns of consumer behavior) in an efficient and effective way. The databases involved in today's business environments can be very large, so fast and effective algorithms are needed to mine association rules out of large databases. Previous approaches may incur exponential computing resource consumption: a combinatorial explosion occurs because existing approaches exhaustively mine all the rules. The proposed algorithm takes a previously developed approach, called Randomized Algorithm 1 (RA1), and adapts it to mine association rules from a database efficiently. The original RA1 approach was primarily developed for inferring logical clauses (i.e., a Boolean function) from examples. Numerous computational results suggest that the new approach is very promising.
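For readers unfamiliar with the setting, the sketch below shows the basic objects being mined: given a transaction database, a rule X => Y is kept when its support and confidence clear user-chosen thresholds. This is the generic exhaustive formulation whose blow-up motivates the paper, not the RA1 adaptation itself (which the abstract does not detail); the toy data and thresholds are illustrative.

```python
from itertools import combinations

# Toy transaction database: each transaction is a set of purchased items.
TRANSACTIONS = [
    {"bread", "milk"},
    {"bread", "diapers", "beer"},
    {"milk", "diapers", "beer"},
    {"bread", "milk", "diapers"},
    {"bread", "milk", "beer"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in TRANSACTIONS) / len(TRANSACTIONS)

def rules(min_support=0.4, min_confidence=0.6):
    """Enumerate single-item rules X => Y exhaustively. It is exactly this
    exhaustive enumeration that explodes combinatorially on large databases."""
    items = sorted(set().union(*TRANSACTIONS))
    for x, y in combinations(items, 2):
        for lhs, rhs in ((x, y), (y, x)):
            s = support({lhs, rhs})
            if s >= min_support and support({lhs}) > 0:
                conf = s / support({lhs})
                if conf >= min_confidence:
                    yield lhs, rhs, s, conf

for lhs, rhs, s, conf in rules():
    print(f"{lhs} => {rhs}  support={s:.2f}  confidence={conf:.2f}")
```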
We present calf, a cost-aware logical framework for studying quantitative aspects of functional programs. Taking inspiration from recent work that reconstructs traditional aspects of programming languages in terms of a modal account of phase distinctions, we argue that the cost structure of programs motivates a phase distinction between intension and extension. Armed with this technology, we contribute a synthetic account of cost structure as a computational effect in which cost-aware programs enjoy an internal noninterference property: input/output behavior cannot depend on cost. As a full-spectrum dependent type theory, calf presents a unified language for programming and specification of both cost and behavior that can be integrated smoothly with existing mathematical libraries available in type-theoretic proof assistants. We evaluate calf as a general framework for cost analysis by implementing two fundamental techniques for algorithm analysis: the method of recurrence relations and the physicist's method for amortized analysis. We deploy these techniques on a variety of case studies: we prove a tight, closed bound for Euclid's algorithm, verify the amortized complexity of batched queues, and derive tight, closed bounds for the sequential and parallel complexity of merge sort, all fully mechanized in the Agda proof assistant. Lastly, we substantiate the soundness of quantitative reasoning in calf by means of a model construction.
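As a plain-language companion to one of the case studies: a batched (two-list) queue has amortized O(1) operations even though an individual dequeue can take O(n), which is what the physicist's method certifies via a potential function. The Python sketch below illustrates the data structure and its potential argument, not calf's Agda mechanization.

```python
class BatchedQueue:
    """Two-stack queue: enqueue pushes onto `back`; dequeue pops from
    `front`, reversing `back` into `front` only when `front` is empty.
    With potential Phi = len(back), each operation has amortized O(1)
    cost even though a single dequeue may reverse n elements."""

    def __init__(self):
        self.front, self.back = [], []

    def enqueue(self, x):
        self.back.append(x)          # actual cost 1, Phi rises by 1

    def dequeue(self):
        if not self.front:
            # Reverse step: actual cost len(back), but Phi drops by the
            # same amount, so the amortized cost stays O(1).
            self.front = self.back[::-1]
            self.back = []
        return self.front.pop()      # raises IndexError if queue is empty

q = BatchedQueue()
for i in range(3):
    q.enqueue(i)
assert [q.dequeue() for _ in range(3)] == [0, 1, 2]
```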
The task of finding the largest empty ellipsoid defined by a set of $n$ point sites in $\mathbb{R}^d$ is investigated. It is shown that this can be solved by enumerating the facets of the convex hull of the sites projected onto a manifold in $\mathbb{R}^{d(d+3)/2}$. While $O(n^{\lceil d(d+3)/4 \rceil})$ time is required in the worst case, it is found that $O(n^d)$ suffices on average for independent uniform points when $d$ is fixed and $n$ increases without bound. The effects on the running time caused by imposing certain restrictions on the orientation and shape of the ellipsoid are also described. Applications to motion planning and design centering are considered briefly.
We study the power-aware buffering problem in battery-powered sensor networks, focusing on fixed-size and fixed-interval buffering schemes. The main motivation is to address the still poorly understood effect of data-size variation on power-aware buffering schemes. Our theoretical analysis elucidates the fundamental differences between the fixed-size and fixed-interval buffering schemes in the presence of data-size variation. It shows that data-size variation generally has detrimental effects on the power expenditure of fixed-size buffering, and it reveals that these effects can be either mitigated by a positive skewness or amplified by a negative skewness in the size distribution. By contrast, the fixed-interval buffering scheme has the clear advantage of being immune to data-size variation, making it a risk-averse strategy that is robust in a variety of operational environments. In addition, based on the fixed-interval buffering scheme, we establish the power-consumption relationship between child nodes and the parent node in a static data-collection tree and give an in-depth analysis of the impact of the child bandwidth distribution on the parent's power consumption. This study is of practical significance: it sheds new light on the relationship among the power consumption of buffering schemes, the power parameters of the radio module and memory bank, the data arrival rate, and data-size variation, thereby providing well-informed guidance for determining an optimal buffer size (or interval) to maximize the operational lifespan of sensor networks.
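To illustrate the two schemes being compared: under fixed-size buffering a node transmits whenever the buffer reaches B bytes, while under fixed-interval buffering it transmits on a fixed schedule regardless of how much has accumulated, with each transmission paying a fixed radio wake-up cost plus a per-byte cost. The crude energy model and all constants below are our own illustrative assumptions, not the paper's analysis; the fixed interval is approximated as "every k arrivals" for simplicity.

```python
import random

# Crude illustrative energy model (assumed, not from the paper):
E_STARTUP = 50.0    # fixed cost per radio wake-up / transmission
E_PER_BYTE = 0.1    # marginal cost per byte sent

def fixed_size_energy(sizes, B):
    """Flush whenever buffered bytes reach B."""
    energy, buffered = 0.0, 0
    for s in sizes:
        buffered += s
        if buffered >= B:
            energy += E_STARTUP + E_PER_BYTE * buffered
            buffered = 0
    return energy

def fixed_interval_energy(sizes, arrivals_per_interval):
    """Flush after every fixed interval (here: every k arrivals)."""
    energy = 0.0
    for i in range(0, len(sizes), arrivals_per_interval):
        batch = sum(sizes[i:i + arrivals_per_interval])
        if batch:
            energy += E_STARTUP + E_PER_BYTE * batch
    return energy

rng = random.Random(1)
# Data sizes with variation (lognormal, hence positively skewed):
sizes = [rng.lognormvariate(3.0, 0.8) for _ in range(10_000)]
print("fixed-size:    ", fixed_size_energy(sizes, B=400))
print("fixed-interval:", fixed_interval_energy(sizes, arrivals_per_interval=16))
```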
Given a set of $n$ elements, each of which is colored one of $c$ colors, we must determine an element of the plurality (most frequently occurring) color by pairwise equal/unequal color comparisons of elements. We focus on the expected number of color comparisons when the $c^n$ colorings are equally probable. We analyze an obvious algorithm, showing that its expected performance is $\frac{c^2+c-2}{2c}\,n - O(c^2)$, with variance $\Theta(c^2 n)$. We present and analyze an algorithm for the case $c = 3$ colors whose average complexity on the $3^n$ equally probable inputs is $\frac{7083}{5425}\,n + O(\sqrt{n}) = 1.3056\ldots n + O(\sqrt{n})$, substantially better than the expected complexity $\frac{5}{3}\,n + O(1) = 1.6666\ldots n + O(1)$ of the obvious algorithm. We describe a similar algorithm for $c = 4$ colors whose average complexity on the $4^n$ equally probable inputs is $\frac{761311}{402850}\,n + O(\log n) = 1.8898\ldots n + O(\log n)$, substantially better than the expected complexity $\frac{9}{4}\,n + O(1) = 2.25\,n + O(1)$ of the obvious algorithm.
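The "obvious algorithm" analyzed above admits a compact description: scan the elements, keeping one representative per color class seen so far together with its count, and compare each new element against the representatives until an equal-color match is found. A minimal sketch with the comparison counter made explicit (function and variable names are ours):

```python
def plurality_obvious(colors):
    """Return (index of an element of the plurality color, #comparisons).

    `colors` is only accessed through equal/unequal tests, mirroring the
    pairwise color-comparison model of the paper.
    """
    comparisons = 0
    reps = []    # one representative index per distinct color seen
    counts = []  # matching class sizes
    for i, col in enumerate(colors):
        for j, r in enumerate(reps):
            comparisons += 1            # one equal/unequal color test
            if colors[r] == col:
                counts[j] += 1
                break
        else:                           # no match found: new color class
            reps.append(i)
            counts.append(1)
    best = max(range(len(reps)), key=counts.__getitem__)
    return reps[best], comparisons

idx, cmps = plurality_obvious(["r", "g", "r", "b", "r", "g"])
assert idx == 0   # element 0 has the plurality color "r"
```

The improved algorithms in the paper save comparisons by being cleverer about which pairs to test; the sketch shows only the baseline whose $\frac{5}{3}n$ and $\frac{9}{4}n$ expected costs they beat.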