Particle swarm optimization (PSO) uses a social topology for particles to share information among neighbors during optimization. A large body of existing literature has shown that the topology affects the performance of PSO and that the optimal topology is problem dependent, but this issue has so far received little systematic study. In this paper, we first analyze a class of deterministic regular topologies with regard to what affects the optimality of algorithmic parameters (e.g., the number of particles and the topological degree), so as to provide a guide to topology selection for PSO. Both theoretical analysis and numerical experiments are performed and reported in detail. The theoretical analysis reveals that the optimality of algorithmic parameters depends on the computational budget available. In particular, the optimal number of particles increases non-strictly as the computational budget increases, while for any fixed number of particles the optimal degree decreases non-strictly as the computational budget increases. The only condition is that the computational budget cannot exceed a constant measuring the hardness of the benchmark function set. With a total of 198 regular topologies and 9 different numbers of particles tested on 90 benchmark functions using a recently reported data-profiling technique, the numerical experiments verify the theoretical derivations. Based on these results, two formulas are developed to help choose optimal topology parameters, for increased ease and applicability of PSO to real-world problems. (C) 2016 Elsevier Inc. All rights reserved.
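To make the topology notion concrete, here is a minimal PSO sketch using a deterministic regular ring topology, where each particle exchanges information only with its nearest neighbors on a ring. Function names, coefficient values, and default parameters are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def pso_ring(f, dim, n_particles=30, degree=2, iters=200, seed=0):
    """Minimal PSO sketch with a regular ring topology of given degree."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7298, 1.49618, 1.49618           # common constriction-style coefficients
    x = rng.uniform(-5, 5, (n_particles, dim))      # particle positions
    v = np.zeros((n_particles, dim))                # particle velocities
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])

    # Regular ring topology: particle i's neighborhood is itself plus
    # `degree` neighbors split evenly on both sides of the ring.
    half = degree // 2
    nbrs = [[(i + d) % n_particles for d in range(-half, half + 1)]
            for i in range(n_particles)]

    for _ in range(iters):
        for i in range(n_particles):
            # best personal best within the ring neighborhood
            j = min(nbrs[i], key=lambda k: pbest_val[k])
            r1, r2 = rng.random(dim), rng.random(dim)
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (pbest[j] - x[i])
            x[i] = x[i] + v[i]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, x[i].copy()
    return pbest[np.argmin(pbest_val)], float(pbest_val.min())
```

Raising `degree` toward `n_particles - 1` recovers the fully connected (global-best) topology, which is the design axis the analysis above studies against the computational budget.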
Traditional image object classification and detection algorithms and strategies cannot cope with the demands of video image acquisition and processing. Deep learning deliberately simulates the hierarchical structure of the human brain and establishes mappings from low-level signals to high-level semantics, so as to achieve hierarchical feature representations of data. Its powerful visual information processing ability has made it the leading technology, and a research hotspot both domestically and internationally, for addressing this challenge. To solve the problems of target spatial localization in video surveillance systems, including their time-consuming nature, we propose an algorithm based on RNN-LSTM deep learning. At the same time, exploiting the consistency between OpenGL perspective imaging and photogrammetry, we use 3D scene simulation imaging technology and locate the target object through the correspondence between video images and simulated images. In the 3D virtual scene, we set up a virtual camera to simulate the imaging process of the actual camera; the pixel coordinates of the surveillance target in the video image are substituted into the simulated image, and the spatial coordinates of the target are then recovered by inverting the virtual imaging process. The experimental results show that the detection of target objects has high accuracy, which has important reference value for outdoor target localization from video surveillance images.
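The "inverse of the virtual imaging process" step can be illustrated with the standard pinhole back-projection: cast a ray through a pixel and intersect it with a known ground plane. This is a generic sketch of that geometric idea, not the paper's OpenGL-based pipeline; the matrices and the flat-ground assumption are illustrative.

```python
import numpy as np

def pixel_to_ground(u, v, K, R, t, ground_z=0.0):
    """Back-project pixel (u, v) onto the ground plane z = ground_z.

    K is the 3x3 intrinsic matrix; R, t map world to camera coordinates
    (x_cam = R @ x_world + t). Assumes the target lies on the ground plane.
    """
    # Ray direction for the pixel, first in camera then in world coordinates
    d_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    d_world = R.T @ d_cam
    origin = -R.T @ t                          # camera centre in the world frame
    s = (ground_z - origin[2]) / d_world[2]    # scale to hit z = ground_z
    return origin + s * d_world
```

With a calibrated virtual camera matching the real one, the same intersection recovers world coordinates from surveillance-image pixels, which is the essence of the localization step described above.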
In most past research on scheduling, it is conventional to model machine deterioration by decreasing production speed over time. However, as is observed in practice, there are a fair number of scenarios where deterioration has little impact on machine speed but a great impact on production accuracy, and a machine is unusable unless it has been calibrated recently. In this paper, we consider a single-machine short-term scheduling problem in distributed production systems where the engineer calibrates the machines in a round-robin manner. The objective is to minimize total job completion time, a popular criterion indicating total holding or inventory cost. We consider two cases of the problem and show that one of them can be solved optimally by an extended shortest-processing-time-first rule. For the other case, which is intractable, we propose a polynomial-time approximation scheme based on the extended rule and an exchange procedure. Computational experiments show that the proposed polynomial-time approximation scheme is very promising even when only a limited number of exchangeable jobs is allowed.
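For reference, the classical rule that the paper's extended version builds on can be sketched in a few lines: sequencing jobs in nondecreasing processing time minimizes total completion time on a single machine. The calibration constraint from the paper is deliberately omitted here.

```python
def spt_total_completion_time(processing_times):
    """Shortest-Processing-Time-first (SPT) on a single machine.

    Returns the SPT job order and the resulting total completion time,
    which SPT provably minimizes in the unconstrained setting.
    """
    order = sorted(processing_times)
    total, t = 0, 0
    for p in order:
        t += p          # completion time of this job
        total += t      # accumulate total (flow) time
    return order, total
```

For processing times [4, 1, 3], SPT yields the order [1, 3, 4] with completion times 1, 4, 8, so the total is 13; no other sequence does better.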
Proportional link linkage (PLL) clustering methods are a parametric family of monotone invariant agglomerative hierarchical clustering methods. This family includes the single, minimedian, and complete linkage clustering methods as special cases; its members are used in psychological and ecological applications. Since the literature on clustering space distortion is oriented to quantitative input data, we adapt its basic concepts to input data with only ordinal significance and analyze the space distortion properties of PLL methods. To enable PLL methods to be used when the number n of objects being clustered is large, we describe an efficient PLL algorithm that operates in O(n² log n) time and O(n²) space.
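The parametric idea behind PLL can be shown with a naive agglomerative sketch: the dissimilarity between two clusters is the ⌈π·m⌉-th smallest of their m pairwise dissimilarities, so π → 0 recovers single linkage and π = 1 recovers complete linkage. This is a straightforward O(n³)-style illustration under those assumptions, not the paper's O(n² log n) algorithm.

```python
import math

def pll_cluster(dissim, pi, n):
    """Naive proportional link linkage over objects 0..n-1.

    dissim[i][j] is the (symmetric) dissimilarity between objects i and j.
    Returns the merge history as (cluster_a, cluster_b, merge_level) triples.
    """
    clusters = [[i] for i in range(n)]
    merges = []
    while len(clusters) > 1:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                pairs = sorted(dissim[i][j] for i in clusters[a] for j in clusters[b])
                k = max(1, math.ceil(pi * len(pairs)))   # proportional link rank
                d = pairs[k - 1]
                if best is None or d < best[0]:
                    best = (d, a, b)
        d, a, b = best
        merges.append((sorted(clusters[a]), sorted(clusters[b]), d))
        merged = clusters[a] + clusters[b]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (a, b)] + [merged]
    return merges
```

On three objects with d(0,1)=1, d(0,2)=5, d(1,2)=4, both extremes first merge {0} and {1} at level 1, but the final merge happens at level 4 for near-zero π (single linkage) versus 5 for π = 1 (complete linkage).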
Kuiper's statistic is a good measure of the difference between an ideal distribution and an empirical distribution in the goodness-of-fit test. However, solving for the critical value and upper tail quantile, or simply the Kuiper pair, of Kuiper's statistic is challenging due to the difficulties of solving the nonlinear equation and of reasonably approximating the infinite series involved. In this work, the contributions lie in three perspectives: firstly, a second-order approximation of the infinite series for the cumulative distribution of the critical value is used to achieve higher precision; secondly, the principles and fixed-point algorithms for solving the Kuiper pair are presented in detail; finally, a mistake in Kuiper's distribution table concerning the critical value c_n^a for (a, n) = (0.01, 30) has been identified and corrected, where n is the sample size and a is the upper tail quantile. The algorithms are verified and validated by comparison with the table provided by Kuiper. The methods and algorithms proposed are enlightening and worth introducing to college students, computer programmers, engineers, experimental psychologists, and so on.
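The statistic itself is simple to compute from an ordered sample: Kuiper's V_n = D⁺ + D⁻ is the sum of the largest deviations of the empirical distribution above and below the hypothesized CDF. This sketch illustrates only the statistic; the fixed-point solver for the critical values described above is not reproduced here.

```python
def kuiper_statistic(sample, cdf):
    """Kuiper's V_n = D+ + D- for a sample against a hypothesized CDF.

    D+ is the largest amount the empirical CDF exceeds cdf; D- is the
    largest amount it falls below. Unlike Kolmogorov-Smirnov's max of
    the two, their sum is equally sensitive in both distribution tails.
    """
    xs = sorted(sample)
    n = len(xs)
    d_plus = max((i + 1) / n - cdf(x) for i, x in enumerate(xs))
    d_minus = max(cdf(x) - i / n for i, x in enumerate(xs))
    return d_plus + d_minus
```

For the sample {0.25, 0.5, 0.75} against the uniform CDF F(x) = x, both deviations equal 0.25, so V_3 = 0.5.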
In this paper, we address an optimization problem that arises in the context of cache placement in sensor networks. In particular, we consider the cache placement problem where the goal is to determine a set of nodes in the network to cache/store the given data item, such that the overall communication cost incurred in accessing the item is minimized, under the constraint that the total communication cost in updating the selected caches is less than a given constant. In our network model, there is a single server (containing the original copy of the data item) and multiple client nodes (that wish to access the data item). For various settings of the problem, we design optimal, near-optimal, heuristic-based, and distributed algorithms, and evaluate their performance through simulations on randomly generated sensor networks. (C) 2006 Published by Elsevier B.V.
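The budgeted trade-off in this problem (lower access cost versus bounded update cost) can be sketched with a simple greedy heuristic: repeatedly add the candidate cache that most reduces total access cost while the update budget allows. All names and the cost model here are illustrative assumptions; the paper's optimal and distributed algorithms are not reproduced.

```python
def greedy_cache_placement(access_cost, update_cost, budget):
    """Greedy heuristic for budgeted cache placement.

    access_cost[v][c] is client v's cost to reach candidate cache c;
    candidate 0 is the server (always available, update cost ignored).
    update_cost[c] is the cost of keeping candidate c up to date.
    Returns the chosen cache set and the resulting total access cost.
    """
    chosen, spent = {0}, 0

    def total_access(sel):
        # Each client reads from its cheapest selected cache.
        return sum(min(row[c] for c in sel) for row in access_cost)

    improved = True
    while improved:
        improved = False
        best = None
        for c in range(len(update_cost)):
            if c in chosen or spent + update_cost[c] > budget:
                continue
            gain = total_access(chosen) - total_access(chosen | {c})
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, c)
        if best is not None:
            chosen.add(best[1])
            spent += update_cost[best[1]]
            improved = True
    return chosen, total_access(chosen)
```

With two clients far from the server but one near a candidate cache, the greedy step places that cache as long as its update cost fits the budget.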
Data structures such as sets, lists, and arrays are fundamental in mathematics and computer science, playing a crucial role in numerous real-life applications. These structures represent a variety of entities, including solutions, conditions, and objectives. In scenarios involving large datasets, eliminating duplicate elements is essential to reduce complexity and enhance performance. This paper introduces a novel algorithm that uses logarithmic prime numbers to efficiently sort data structures and remove duplicates. The algorithm is mathematically rigorous, ensuring correctness and providing a thorough analysis of its time complexity. To demonstrate its practicality and effectiveness, we compare our method with existing algorithms, highlighting its superior speed and accuracy. An extensive experimental analysis across one thousand random test problems shows that our approach significantly outperforms two alternative techniques from the literature. By discussing the potential applications of the proposed algorithm in various domains, including computer science, engineering, and data management, we illustrate its adaptability through two practical examples in which our algorithm solves the problem more than 3×10⁴ and 7×10⁴ times faster than the existing algorithms in the literature. The results of these examples demonstrate that the superiority of our algorithm becomes increasingly pronounced with larger problem sizes.
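As a point of comparison for the task the paper addresses, the standard baseline is sort-then-deduplicate in O(n log n): sort the elements, then keep each one whose predecessor differs. This sketch shows that baseline, not the paper's logarithmic-prime method.

```python
def dedup_sorted(items):
    """Sort items and drop duplicates in a single pass over the sorted list.

    Sorting places equal elements adjacently, so one comparison with the
    previous kept element suffices to detect and skip duplicates.
    """
    out = []
    for x in sorted(items):
        if not out or out[-1] != x:
            out.append(x)
    return out
```

For example, `dedup_sorted([3, 1, 2, 3, 1])` returns `[1, 2, 3]`.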
In the quest for interpretable models, two versions of a neural network rule extraction algorithm were proposed and tested. The two algorithms are called the Piece-Wise Linear Artificial Neural Network (PWL-ANN) and enhanced Piece-Wise Linear Artificial Neural Network (enhanced PWL-ANN) algorithms. The PWL-ANN algorithm is a decompositional artificial neural network (ANN) rule extraction algorithm, and the enhanced PWL-ANN algorithm improves upon it, extracting multiple linear regression equations from a trained ANN model by approximating the hidden sigmoid activation functions using N-piece linear functions. In doing so, the algorithm provides interpretable models from the originally trained opaque ANN models. A detailed application case study illustrates how the generated enhanced-PWL-ANN models can provide understandable IF-THEN rules about a problem domain. Comparison of the results generated by the two versions of the PWL-ANN algorithm showed that, relative to the PWL-ANN models, the enhanced-PWL-ANN models achieve improved fidelity to the originally trained ANN models. The results also showed that more concise rule sets can be generated using the enhanced-PWL-ANN algorithm. If a more simplified set of rules is desired, the enhanced-PWL-ANN algorithm can be combined with the decision tree approach. Application of the algorithms to domains related to petroleum engineering can help enhance understanding of the problems.
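The core substitution idea (replacing a hidden sigmoid with a piecewise linear surrogate so each region of the input becomes a linear equation) can be illustrated with the well-known 3-piece "hard sigmoid". The breakpoints and slope here are a generic illustration, not the N-piece segments the PWL-ANN algorithms actually fit.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def hard_sigmoid(x):
    """3-piece linear surrogate for the sigmoid.

    Uses the sigmoid's slope at 0 (0.25) for the middle segment and
    clamps to 0 and 1 outside it, so within each piece the activation
    is an affine function of its input.
    """
    return max(0.0, min(1.0, 0.25 * x + 0.5))
```

Within each linear piece, the composition of network layers collapses to an affine map, which is what lets the algorithms read off linear regression equations and IF-THEN region rules from the trained network.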
Group therapy is one of the major clinical interventions to help patients with mental disorders. However, finding an effective therapy group is very challenging, since there are three crucial criteria that should be carefully considered simultaneously: 1) avoidance of isolation and loneliness; 2) unfamiliarity of patients; and 3) size of the therapy group. Manual selection of therapy group members, which is the common approach for most psychiatrists and mental health professionals today, may incur human bias and is very time-consuming. Therefore, we propose a new research problem, namely, Unfamiliarity-Aware Therapy Group Selection with Noah's Ark Principle (UTNA), for automatic selection of therapy group members from the social network while addressing the three crucial criteria mentioned above. In this paper, we first analyze the NP-hardness and inapproximability of the UTNA problem. Then, we consider a special case of UTNA on trees, namely, Tree-based UTNA, and propose a linear-time dynamic programming algorithm to find the optimal solution. We then design an efficient algorithm, Effective Therapy Group Discovery, to select suitable members efficiently and effectively for the general UTNA problem. We invited 10 psychiatrists and clinical psychologists to conduct an expert study for validating our problem formulation, where all experts agreed that our problem formulation is helpful for them, and the groups selected by our proposed approach are better than or similar to their manual configurations. In addition, we also conduct extensive experiments on three real data sets. Experimental results show that our proposed algorithms outperform other baselines in both efficiency and solution quality.
The authors describe how they achieved mainframe performance by coding the Cosmos/M finite-element analysis system, a commercial product, for a PC enhanced with a parallel-processing board. The discussion covers design goals, algorithm design, implementation and debugging, performance optimization, and scalability. They focus on financial, algorithmic, and numerical issues, with almost no reference to the numerical principles involved in solving stress-analysis equations.