Resource demands are crucial parameters for modeling and predicting the performance of software systems. Currently, resource demand estimators are usually executed once for system analysis. However, the monitored system, as well as the resource demand itself, is subject to constant change in runtime environments. These changes additionally impact the applicability, the required parametrization, and the resulting accuracy of individual estimation approaches. Over time, this leads to invalid or outdated estimates, which in turn negatively influence the decision-making of adaptive systems. In this article, we present SARDE, a framework for self-adaptive resource demand estimation in continuous environments. SARDE dynamically and continuously tunes, selects, and executes an ensemble of resource demand estimation approaches to adapt to changes in the environment. This creates an autonomous and unsupervised ensemble estimation technique that provides reliable resource demand estimates in dynamic environments. We evaluate SARDE using two realistic datasets: one set of micro-benchmarks reflecting different possible system states, and one dataset consisting of a continuously running application in a changing environment. Our results show that, by continuously applying online optimization, selection, and estimation, SARDE is able to efficiently adapt to the online trace and reduce the model error using the resulting ensemble technique.
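As a rough illustration of the ensemble idea described above, the following Python sketch re-weights a pool of estimators according to their error on the most recent monitoring window and combines them into a single estimate. The class name, the estimator interface, and the inverse-error weighting are assumptions made here for illustration; they are not SARDE's actual design or API.

```python
import numpy as np

class EnsembleDemandEstimator:
    """Toy ensemble: re-weights estimators by recent error, then combines them."""

    def __init__(self, estimators):
        # estimators: callables mapping a monitoring window (array) to a demand estimate
        self.estimators = estimators
        self.weights = np.ones(len(estimators)) / len(estimators)

    def adapt(self, window, observed_demand):
        # Score each approach on the latest window; lower error -> higher weight.
        errors = np.array([abs(est(window) - observed_demand) for est in self.estimators])
        inverse = 1.0 / (errors + 1e-9)
        self.weights = inverse / inverse.sum()

    def estimate(self, window):
        # Weighted combination of all individual estimates.
        return float(np.dot(self.weights, [est(window) for est in self.estimators]))

# Example usage with two hypothetical estimation approaches.
ensemble = EnsembleDemandEstimator([np.mean, np.median])
window = np.array([0.42, 0.55, 0.61])
ensemble.adapt(window, observed_demand=0.55)
print(ensemble.estimate(window))
```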
The relaxation of the probabilistic constraint of the fuzzy c-means clustering model was proposed to provide robust algorithms that are insensitive to strong noise and outlier data. These goals were achieved by the possibilistic c-means (PCM) algorithm, but these advantages came together with a sensitivity to cluster prototype initialization. According to the original recommendations, the probabilistic fuzzy c-means (FCM) algorithm should be applied to establish the cluster initialization and the possibilistic penalty terms for PCM. However, when FCM fails to provide valid cluster prototypes due to the presence of noise, PCM has no chance to recover and produce a fine partition. This paper proposes a two-stage c-means clustering algorithm to tackle most of the problems enumerated above. In the first stage, called initialization, FCM is performed with two modifications: (1) an extra cluster is added for noisy data; (2) an extra variable and constraint are added to handle clusters of various diameters. In the second stage, a modified PCM algorithm is carried out, which also contains a cluster width tuning mechanism that adaptively updates the possibilistic penalty terms. The proposed algorithm has fewer parameters than PCM when the number of clusters is c > 2. Numerical evaluation involving synthetic and standard test data sets demonstrated the advantages of the proposed clustering model.
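The first stage described above can be pictured with a small sketch: standard FCM extended with one extra "noise" cluster that sits at a fixed distance from every data point, so that outliers attach to it instead of distorting the real prototypes. The fixed noise distance delta and the omission of the paper's cluster-width variable are simplifications assumed here; this is not the authors' exact formulation.

```python
import numpy as np

def fcm_with_noise_cluster(X, c, m=2.0, delta=1.0, iters=100, seed=0):
    """FCM with an additional noise cluster at constant distance delta."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)].astype(float)
    for _ in range(iters):
        # Distances to the c real clusters plus a constant-distance noise cluster.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)   # (n, c)
        d = np.hstack([d, np.full((len(X), 1), delta)])                   # (n, c+1)
        d = np.fmax(d, 1e-12)
        # Standard FCM membership update over c + 1 clusters.
        u = 1.0 / (d ** (2.0 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
        # Update only the real cluster prototypes.
        um = u[:, :c] ** m
        centers = (um.T @ X) / um.sum(axis=0)[:, None]
    return centers, u

# Toy usage: memberships has c + 1 columns; the last column is the noise cluster.
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0], [5.0, 5.0]])
centers, memberships = fcm_with_noise_cluster(X, c=2, delta=1.5)
print(centers.shape, memberships.shape)   # (2, 2) (5, 3)
```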
Among the existing global optimization algorithms, Particle Swarm Optimization (PSO) is one of the most effective methods for non-linear and complex high-dimensional problems. Since PSO performance strongly depends on the choice of its settings (i.e., inertia, cognitive and social factors, minimum and maximum velocity), Fuzzy Logic (FL) was previously exploited to select these values. So far, FL-based implementations of PSO have aimed at calculating a single set of settings for the whole swarm. In this work we propose a novel self-tuning algorithm - called Fuzzy self-tuning PSO (FST-PSO) - which exploits FL to calculate the inertia, the cognitive and social factors, and the minimum and maximum velocity independently for each particle, thus realizing a completely settings-free version of PSO. The novelty and strength of FST-PSO lie in the fact that it does not require any expertise in PSO functioning, since the behavior of every particle is automatically and dynamically adjusted during the optimization. We compare the performance of FST-PSO with standard PSO, Proactive Particles in Swarm Optimization, Artificial Bee Colony, Covariance Matrix Adaptation Evolution Strategy, Differential Evolution and Genetic Algorithms. We empirically show that FST-PSO outperforms all tested algorithms with respect to convergence speed and is competitive concerning the best solutions found, notably with a reduced computational effort.
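To convey the per-particle tuning idea, the sketch below maps each particle's recent fitness improvement to its own inertia and acceleration coefficients. A single scalar interpolation stands in for the paper's fuzzy rule base, and the ranges and the direction of the mapping are assumptions made here for illustration only.

```python
import numpy as np

def particle_settings(prev_fitness, curr_fitness,
                      w_range=(0.3, 0.9), c1_range=(0.5, 2.5), c2_range=(0.5, 2.5)):
    """Derive this particle's own (inertia, cognitive, social) settings from how
    much its fitness improved in the last iteration (minimization assumed)."""
    # Normalized improvement in [0, 1]: 1 = strong improvement, 0 = stagnation/worsening.
    phi = float(np.clip((prev_fitness - curr_fitness) / (abs(prev_fitness) + 1e-12), 0.0, 1.0))
    # One plausible mapping: improving particles keep momentum and self-trust,
    # stagnating particles lean on the swarm's knowledge instead.
    w = w_range[0] + (w_range[1] - w_range[0]) * phi
    c1 = c1_range[0] + (c1_range[1] - c1_range[0]) * phi
    c2 = c2_range[1] - (c2_range[1] - c2_range[0]) * phi
    return w, c1, c2

print(particle_settings(10.0, 4.0))   # clear improvement -> more exploratory settings
print(particle_settings(10.0, 10.0))  # stagnation -> exploitative, swarm-following settings
```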
Computational Intelligence methods, which include Evolutionary Computation and Swarm Intelligence, can efficiently and effectively identify optimal solutions to complex optimization problems by exploiting the cooperative and competitive interplay among their individuals. The exploration and exploitation capabilities of these meta-heuristics are typically assessed by considering well-known suites of benchmark functions, specifically designed for numerical global optimization purposes. However, their performance could change drastically in the case of real-world optimization problems. In this paper, we investigate this issue by considering the Parameter Estimation (PE) of biochemical systems, a common computational problem in the field of Systems Biology. In order to evaluate the effectiveness of various meta-heuristics in solving the PE problem, we compare their performance by considering a set of benchmark functions and a set of synthetic biochemical models characterized by a search space with an increasing number of dimensions. Our results show that some state-of-the-art optimization methods - able to largely outperform the other meta-heuristics on benchmark functions - are characterized by considerably poor performance when applied to the PE problem. We also show that a limiting factor of these optimization methods concerns the representation of the solutions: indeed, by means of a simple semantic transformation, it is possible to turn these algorithms into competitive alternatives. We corroborate this finding by performing the PE of a model of metabolic pathways in red blood cells. Overall, in this work we show that classic benchmark functions cannot be fully representative of all the features that make real-world optimization problems hard to solve. This is the case, in particular, of the PE of biochemical systems. We also show that optimization problems must be carefully analyzed to select an appropriate representation, in order to actually obtain the performance…
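One plausible example of such a representation change, assumed here purely for illustration, is reparameterizing kinetic constants in log space so that values spanning several orders of magnitude occupy a comparable share of the search domain; whether this matches the transformation used in the paper is not asserted.

```python
import numpy as np

def to_search_space(theta, lo=1e-6, hi=1e3):
    """Encode kinetic parameters into [0, 1] on a log10 scale for the optimizer."""
    theta = np.asarray(theta, dtype=float)
    return (np.log10(theta) - np.log10(lo)) / (np.log10(hi) - np.log10(lo))

def from_search_space(x, lo=1e-6, hi=1e3):
    """Decode an optimizer solution back into kinetic parameter values."""
    x = np.asarray(x, dtype=float)
    return 10 ** (np.log10(lo) + x * (np.log10(hi) - np.log10(lo)))

# A rate constant of 1e-3 and one of 10 end up at comparable positions in [0, 1].
print(to_search_space([1e-3, 10.0]))                    # [0.333..., 0.777...]
print(from_search_space(to_search_space([1e-3, 10.0])))  # round trip recovers the values
```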
ISBN:
(Print) 9781467374286
Among the existing global optimization algorithms, Particle Swarm Optimization (PSO) is one of the most effective when dealing with non-linear and complex high-dimensional problems. However, the performance of PSO is strongly dependent on the choice of its settings. In this work we propose a novel self-tuning PSO algorithm - called Proactive Particles in Swarm Optimization (PPSO) - which exploits Fuzzy Logic to calculate the best settings for the inertia, cognitive factor and social factor. Thanks to additional heuristics, PPSO also automatically determines the best settings for the swarm size and for the particles' maximum velocity. PPSO significantly differs from other versions of PSO that exploit Fuzzy Logic, since specific settings are assigned to each particle according to its history, instead of being globally defined for the whole swarm. Thus, the novelty of PPSO is that particles gain a limited autonomous and proactive intelligence, instead of being simple reactive agents. Our results show that PPSO outperforms the standard PSO, both in terms of convergence speed and average quality of solutions, remarkably without the need for any user-defined settings.
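The following fragment sketches what "settings assigned to each particle" means in practice: the velocity update reads the inertia, cognitive and social factors, and the velocity clamp from a per-particle dictionary instead of global constants. Field names and the clamping rule are illustrative assumptions, not PPSO's implementation.

```python
import numpy as np

def update_particle(pos, vel, personal_best, swarm_best, settings, rng):
    """One velocity/position step using this particle's own settings."""
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = (settings["w"] * vel
           + settings["c1"] * r1 * (personal_best - pos)
           + settings["c2"] * r2 * (swarm_best - pos))
    # Per-particle velocity clamping instead of one global bound.
    vel = np.clip(vel, -settings["v_max"], settings["v_max"])
    return pos + vel, vel

# Each particle carries its own configuration dictionary.
rng = np.random.default_rng(0)
settings = {"w": 0.7, "c1": 1.5, "c2": 1.7, "v_max": 0.5}
pos, vel = np.zeros(3), np.zeros(3)
pos, vel = update_particle(pos, vel, np.ones(3), np.ones(3), settings, rng)
print(pos, vel)
```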
The general task of optimal adaptive control with recursive identification (self-tuning control) is a very complicated problem. This problem is usually solved by separating identification and control – the Certainty Equivalence (CE) principle. The aim of this paper is to present a solution to this problem using Dual Adaptive Control (the bicriterial approach). The main idea of this approach involves two cost functions: (1) the system output should cautiously track the desired reference signal; (2) the control signal should excite the controlled process sufficiently to accelerate the convergence of the parameter estimates. The approach was verified by real-time control of a nonlinear, time-varying laboratory model, the DTS200 Three Tank System.
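A heavily simplified sketch of the bicriterial idea, under assumptions made here (a precomputed cautious control input, a parameter-estimate covariance from the recursive identifier, and a sign-alternating probing term): the excitation grows with parameter uncertainty and is bounded, so the controller probes only as much as identification needs. The specific formulas are illustrative, not the paper's derivation.

```python
import numpy as np

def dual_control(u_cautious, param_covariance, step, eta=0.1, u_excite_max=0.5):
    """Cautious tracking control plus a bounded, uncertainty-driven probing term."""
    uncertainty = float(np.trace(param_covariance))    # crude scalar uncertainty measure
    excitation = min(eta * uncertainty, u_excite_max)  # probe harder when estimates are poor
    # Alternate the probing sign so the excitation averages out over time.
    return u_cautious + ((-1) ** step) * excitation

# Example: strong probing while the covariance is large, fading as it shrinks.
print(dual_control(1.2, np.diag([0.5, 0.5]), step=0))
print(dual_control(1.2, np.diag([0.01, 0.01]), step=1))
```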
ISBN:
(Print) 9789075815115; 9075815115
High-performance PMSM drives require accurate selection of the current vector reference to be fed to a suitable current control algorithm. This work implements a drive prototype with a current vector generation scheme which can be self-adjusted in "quasi real-time" to the actual maximum torque per ampere locus. The scheme does not need to rely on computed or estimated motor parameters, but only on the analysis of additional signals superimposed on the driving voltages/currents, without affecting the overall drive performance.
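The "self-adjusted in quasi real-time" behaviour can be illustrated with a parameter-free perturb-and-observe loop on the current angle at constant current magnitude; the real drive extracts the information from signals superimposed on the driving voltages/currents, so the measurement callback and step size below are purely hypothetical.

```python
def mtpa_angle_step(angle, measure_torque, delta=0.01):
    """Move the current angle toward the side producing more torque
    at constant current magnitude (perturb-and-observe)."""
    torque_plus = measure_torque(angle + delta)
    torque_minus = measure_torque(angle - delta)
    return angle + delta if torque_plus > torque_minus else angle - delta

# Hypothetical torque response with a maximum near 0.6 rad.
def measured(angle):
    return -(angle - 0.6) ** 2

angle = 0.2
for _ in range(50):
    angle = mtpa_angle_step(angle, measured)
print(round(angle, 2))   # settles close to 0.6
```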
ISBN:
(Print) 0769518400
The buffer pool in a DBMS is used to cache the disk pages of the database. Because typical database workloads are I/O-bound, the effectiveness of the buffer pool management algorithm is a crucial factor in the performance of the DBMS. In IBM's DB2 buffer pool, the page cleaning algorithm is used to write changed pages to disk before they are selected for replacement. We conducted a detailed study of page cleaning in DB2 version 7.1.0 for Windows using both trace-driven simulation and measurements. Our results show that system throughput can be increased by 19% when the page cleaning algorithm is carefully tuned. In practice, however, manual tuning of this algorithm is difficult. This paper proposes a self-tuning algorithm for page cleaning to automate this tuning task. Simulation results show that the self-tuning algorithm can achieve performance comparable to that of the best manually tuned system.
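As a rough illustration of what a self-tuning page cleaner adjusts, the sketch below implements a simple feedback rule: if too many replacement victims are still dirty, increase the cleaning rate; otherwise back off. The target ratio, multiplicative step, and bounds are assumptions made for this sketch and do not describe DB2's actual algorithm.

```python
def tune_cleaning_rate(rate, dirty_victim_ratio, target=0.01,
                       step=0.1, rate_min=0.05, rate_max=1.0):
    """Feedback rule: clean more aggressively when replacement victims are often
    dirty, back off when they rarely are (to save write bandwidth)."""
    if dirty_victim_ratio > target:
        return min(rate * (1 + step), rate_max)
    return max(rate * (1 - step), rate_min)

# Example: the rate ramps up while 5% of victims are dirty, then settles back down.
rate = 0.2
for observed in [0.05, 0.05, 0.05, 0.005, 0.005]:
    rate = tune_cleaning_rate(rate, observed)
    print(round(rate, 3))
```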