The scientific field of optimization has witnessed an increasing trend in the development of metaheuristic algorithms over the current decade. The vast majority of the proposed algorithms have been proclaimed by their own developers as superior and highly efficient compared to their contemporary counterparts, claims that should be verified on a set of benchmark cases if they are to give meaningful insight into the algorithms' true capabilities. This study presents a comprehensive investigation of the general optimization capabilities of recently developed nature-inspired metaheuristic algorithms, which have not been thoroughly discussed in past literature studies owing to their recent emergence. To address this gap in the existing literature, optimization benchmark problems with different functional characteristics are solved by several of the widely used recent optimizers. Unconstrained standard test functions, comprising thirty-four unimodal scalable optimization problems with varying dimensionalities, are solved by these competing algorithms, and the estimated solutions are evaluated using performance metrics derived from a statistical analysis of the results. Convergence curves of the algorithms are analyzed to observe the evolution trends of the objective function values. To extend the analysis of unconstrained test cases, the CEC 2013 problems are adopted as comparison tools, since they resemble the characteristics of complex real-world problems. The optimization capabilities of eleven metaheuristic algorithms are comparatively analyzed on twenty-eight multidimensional problems. Finally, fourteen complex engineering problems are optimized by the algorithms to scrutinize their effectiveness in handling the imposed design constraints.
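As a concrete illustration of how such a benchmark comparison is typically organised, the Python sketch below runs an optimizer repeatedly on a scalable test function, records the best-so-far objective per iteration for the convergence curves, and summarises the final values with simple statistics. The sphere function and the random-search placeholder are illustrative assumptions only; they stand in for the test suite and the metaheuristics actually compared in the study.

```python
import numpy as np

def sphere(x):
    """Unimodal scalable test function: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def random_search(f, dim, bounds, iters, rng):
    """Placeholder optimizer: returns the best-so-far objective after each iteration."""
    lo, hi = bounds
    best = np.inf
    history = []
    for _ in range(iters):
        x = rng.uniform(lo, hi, size=dim)
        best = min(best, f(x))
        history.append(best)
    return history

def benchmark(optimizer, f, dim=30, bounds=(-100.0, 100.0), iters=1000, runs=30, seed=0):
    """Run the optimizer several times and summarise the final objective values."""
    rng = np.random.default_rng(seed)
    finals, curves = [], []
    for _ in range(runs):
        history = optimizer(f, dim, bounds, iters, rng)
        curves.append(history)
        finals.append(history[-1])
    finals = np.array(finals)
    stats = {"best": finals.min(), "worst": finals.max(),
             "mean": finals.mean(), "std": finals.std()}
    return stats, np.array(curves)  # rows of `curves` give the convergence curve of each run

stats, curves = benchmark(random_search, sphere)
print(stats)
```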
ISBN (print): 9798400702419
In the field of recommender systems, shallow autoencoders have recently gained significant attention. One of the most highly acclaimed shallow autoencoders is EASE(R), favored for its competitive recommendation accuracy and simultaneous simplicity. However, the poor scalability of EASE(R) (both in time and especially in memory) severely restricts its use in production environments with vast item sets. In this paper, we propose a hyperefficient factorization technique for sparse approximate inversion of the data-Gram matrix used in EASE(R). The resulting autoencoder, SANSA, is an end-to-end sparse solution with prescribable density and almost arbitrarily low memory requirements - even for training. As such, SANSA allows us to effortlessly scale the concept of EASE(R) to millions of items and beyond.
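For context, EASE(R) has a well-known closed-form solution built from the inverse of the regularized item-item Gram matrix, and that dense inverse is precisely the time and memory bottleneck that SANSA's sparse approximate factorization targets. The sketch below shows only the standard dense EASE(R) computation on a toy interaction matrix, not SANSA itself; the regularization value and data are illustrative assumptions.

```python
import numpy as np

def ease_weights(X, lam=100.0):
    """Dense EASE(R) closed form: B = I - P * diag(1 / diag(P)), with P = (X^T X + lam*I)^{-1}.
    X is a (users x items) interaction matrix; B has a zero diagonal by construction."""
    G = X.T @ X + lam * np.eye(X.shape[1])   # regularized item-item Gram matrix
    P = np.linalg.inv(G)                     # this dense inverse is the scalability bottleneck
    B = -P / np.diag(P)                      # divide each column j by P[j, j]
    np.fill_diagonal(B, 0.0)
    return B

# Toy usage: 4 users x 5 items of binary interactions, then score the items for user 0.
X = np.array([[1, 0, 1, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 1, 1, 1, 0],
              [1, 0, 0, 1, 1]], dtype=float)
scores = X[0] @ ease_weights(X)
print(scores)
```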
This research delves into the utilization of distributed machine learning methods for revenue forecasting in the realm of online advertising systems. As the domain of digital advertising continues to expand rapidly, t...
In order to extract the analytic eigenvalues from a parahermitian matrix, the computational cost of the current state-of-the-art method grows factorially with the matrix dimension. Even though the approach offers bene...
ISBN (print): 9781450340731
Finding decompositions of a graph into a family of communities is crucial to understanding its underlying structure. Algorithms for finding communities in networks often rely only on structural information and search for cohesive subsets of nodes. In practice, however, we would like to find communities that are not only cohesive but also influential or important. In order to capture such communities, Li, Qin, Yu, and Mao introduced a novel community model called the "k-influential community", based on the concept of the k-core, with numerical values representing "influence" assigned to the nodes. They formulate the problem of finding the top-r most important communities as finding r connected k-core subgraphs ordered by the lower bound of their importance. In this paper, our goal is to scale up the computation of top-r k-core communities to web-scale graphs of tens of billions of edges. We present several fast new algorithms for this problem. With our implementations, we show that we can efficiently handle massive networks using a single consumer-level machine within a reasonable amount of time.
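To make the underlying notion concrete, the k-influential community model builds on the classical k-core: the maximal subgraph in which every remaining node has at least k neighbours, obtainable by repeatedly peeling low-degree nodes. The sketch below is a minimal, centralized illustration of that peeling step only; it is not one of the paper's scalable top-r algorithms.

```python
from collections import defaultdict

def k_core(edges, k):
    """Return the node set of the k-core by repeatedly peeling nodes of degree < k."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in list(adj):
            if node in adj and len(adj[node]) < k:
                for nbr in adj.pop(node):
                    if nbr in adj:
                        adj[nbr].discard(node)
                changed = True
    return set(adj)

# Toy check: a triangle with one pendant node; the 2-core is the triangle itself.
edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(k_core(edges, 2))  # {1, 2, 3}
```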
Multi-fidelity optimization can be utilized for the efficient design of airfoil shapes. In this paper, we investigate the scaling properties of algorithms exploiting this methodology. In particular, we study the relationship between the computational cost and the size of the design space. We focus on a specific optimization technique where, in order to reduce the design cost, the accurate high-fidelity airfoil model is replaced by a cheap surrogate constructed from a low-fidelity model and the shape-preserving response prediction technique. In this study, we consider the design of transonic airfoils and use the compressible Euler equations in the high-fidelity computational fluid dynamics (CFD) model. The low-fidelity CFD model is the same as the high-fidelity one, but with a coarser mesh resolution and a reduced level of solver convergence. The number of design variables varies from 3 to 11, using NACA 4-digit airfoil shapes as well as airfoils constructed from Bézier curves. The results of the three optimization studies show that the total cost increases from about 12 equivalent high-fidelity model evaluations to 34. The number of high-fidelity evaluations increases from 4 to 9, whereas the number of low-fidelity evaluations increases more rapidly, from 600 to 2000. This indicates that, while the overall optimization cost scales more or less linearly with the dimensionality of the design space, further cost reduction can be obtained through more efficient optimization of the surrogate model.
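The following sketch illustrates, under simplifying assumptions, the general surrogate-based loop described above: evaluate the expensive model once per cycle, correct the cheap model so the surrogate matches it at the current design, optimize the surrogate with many cheap evaluations, and repeat. The analytic high- and low-fidelity stand-ins and the zero-order additive correction are assumptions for illustration; the paper itself uses CFD models and the shape-preserving response prediction technique.

```python
import numpy as np
from scipy.optimize import minimize

# Analytic stand-ins for the CFD models (illustrative assumptions only):
# the "low-fidelity" model is a cheap, biased approximation of the "high-fidelity" one.
def high_fidelity(x):
    return float(np.sum((x - 0.3) ** 2) + 0.1 * np.sin(5 * x[0]))

def low_fidelity(x):
    return float(np.sum((x - 0.3) ** 2))

def surrogate_optimize(x0, iters=5):
    """Sketch of surrogate-based optimization with a zero-order additive correction:
    s(x) = f_low(x) + (f_high(x_i) - f_low(x_i)), matched at the current iterate x_i."""
    x = np.asarray(x0, dtype=float)
    history = []
    for _ in range(iters):
        delta = high_fidelity(x) - low_fidelity(x)           # one expensive evaluation per cycle
        surrogate = lambda z, d=delta: low_fidelity(z) + d   # many cheap evaluations below
        res = minimize(surrogate, x, method="Nelder-Mead")
        x = res.x
        history.append(high_fidelity(x))                     # verify the new design
    return x, history

x_opt, hist = surrogate_optimize(np.zeros(3))
print(x_opt, hist[-1])
```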
ISBN (print): 9781424492688
In this paper, we propose a novel distributed navigation algorithm for people to escape from a critical event region in wireless sensor networks (WSNs). Unlike existing works, the scenario discussed in the paper has no goal or exit to serve as guidance, which poses a significant challenge for the navigation problem. To solve it, our proposed navigation algorithm computes the convex hull of the event region using only topological methods. Using the convex hull as a reference, people can easily be navigated out of the event region. Both the computational complexity and the communication overhead of the proposed algorithm are very low, as it only needs to flood two shortest-path trees in a limited area around the event region, within a distance of [L/2π] + 1 from the event boundary, where L is the length of the boundary. Simulations verify the effectiveness and scalability of the proposed algorithm.
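For geometric intuition only, the sketch below computes the convex hull of a sampled event region with Andrew's monotone chain, assuming node coordinates are available; the paper's contribution is precisely to obtain the hull in a distributed, coordinate-free way from topological information, which this centralized sketch does not reproduce.

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull of 2-D points, returned counter-clockwise."""
    pts = sorted(set(map(tuple, points)))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

# Toy event region sampled as grid points; the hull is its outer boundary polygon.
region = [(x, y) for x in range(4) for y in range(3)]
print(convex_hull(region))  # the four corners of the sampled rectangle
```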
The enormous increase in digital scholarly data and computing power, combined with recent advances in text mining, linguistics, network science, and scientometrics, makes it possible to scientifically study the structure and evolution of science on a large scale. This paper discusses the challenges of this 'BIG science of science', also called 'computational scientometrics', research in terms of data access, algorithm scalability, repeatability, and result communication and interpretation. It then introduces two infrastructures: (1) the Scholarly Database (SDB) ( http://*** ), which provides free online access to 22 million scholarly records (papers, patents, and funding awards) that can be cross-searched and downloaded as dumps, and (2) scientometrics-relevant plug-ins of the open-source Network Workbench (NWB) Tool ( http://*** ). The utility of these infrastructures is then demonstrated in three studies: a comparison of the funding portfolios and co-investigator networks of different universities, an examination of paper-citation and co-author networks of major network science researchers, and an analysis of topic bursts in streams of text. The article concludes with a discussion of related work that aims to provide practically useful and theoretically grounded cyberinfrastructure in support of computational scientometrics research, education, and practice.
The combination of classifiers (ensembles) is now a well-established research line. It has been observed that the predictive accuracy of a combination of independent classifiers exceeds that of the single best classifier. While ensembles of classifiers have mostly been employed to achieve higher recognition accuracy, this paper focuses on the use of combinations of individual classifiers for handling several practical problems in the machine learning, pattern recognition, and data mining domains. In particular, the study concentrates on managing the imbalanced training sample problem, scaling up some preprocessing algorithms, and filtering the training set. All these situations are examined mainly in connection with the nearest neighbour classifier. Experimental results show the potential of multiple classifier systems when applied to these situations.
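As an illustration of one of the situations mentioned above, the sketch below handles an imbalanced training sample with an ensemble of nearest-neighbour classifiers: each member is trained on a balanced subsample and the members vote on the final label. The toy data, member count, and 1-NN choice are illustrative assumptions, not the exact procedures evaluated in the study.

```python
import numpy as np

def balanced_subsample(X, y, rng):
    """Draw an equal number of samples from each class (the size of the minority class)."""
    classes, counts = np.unique(y, return_counts=True)
    n = counts.min()
    idx = np.concatenate([rng.choice(np.where(y == c)[0], n, replace=False) for c in classes])
    return X[idx], y[idx]

def nn_predict(Xtr, ytr, Xte):
    """Plain 1-nearest-neighbour prediction with Euclidean distance."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    return ytr[np.argmin(d, axis=1)]

def ensemble_nn_predict(X, y, Xte, n_members=15, seed=0):
    """Majority vote over 1-NN members, each trained on a balanced subsample."""
    rng = np.random.default_rng(seed)
    votes = np.stack([nn_predict(*balanced_subsample(X, y, rng), Xte) for _ in range(n_members)])
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)

# Toy imbalanced data: 40 points of class 0, 8 points of class 1.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(2.5, 1, (8, 2))])
y = np.array([0] * 40 + [1] * 8)
print(ensemble_nn_predict(X, y, np.array([[2.4, 2.6], [0.0, 0.0]])))
```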
The reconfigurable mesh (R-Mesh) has drawn much interest in recent years due, in part, to its ability to admit extremely fast algorithms for a large number of problems. The unrestricted R-Mesh creates a great variety of bus shapes that facilitate algorithm design and reduce running time. In this paper, we present a bus linearization procedure that transforms an arbitrary non-linear bus configuration of an R-Mesh into an equivalent acyclic linear bus configuration implementable on a linear R-Mesh (LR-Mesh), a weaker version of the unrestricted R-Mesh. This procedure gives an algorithm designer the liberty of using buses of arbitrary shape, while automatically translating the algorithm to run on a simpler platform. We illustrate our bus linearization method through two important applications. The first leads to a faster scaling simulation for the unrestricted R-Mesh. This scaling simulation has an overhead of log N (smaller than the log P log N/P overhead of the previous fastest scaling simulation) and uses an exclusive-write LR-Mesh as the simulating model; prior simulations needed concurrent write. The second application adapts algorithms designed for R-Meshes to run on models with pipelined optical buses. (C) 2002 Elsevier Science (USA).