Meta-heuristic algorithms have been successfully applied to the redundancy allocation problem in recent years. Among them, the electromagnetism-like mechanism (EM) is a powerful population-based algorithm designed for continuous decision spaces. This paper presents an efficient memory-based electromagnetism-like mechanism, called MBEM, for solving the redundancy allocation problem. The proposed algorithm employs a memory matrix in its local search to save the features of good solutions and feed them back to the algorithm, making the search process more efficient. To verify the performance of MBEM, various test problems are examined, in particular the 33 well-known benchmark instances from the literature. The experimental results show not only that the optimal solutions of all benchmark instances are obtained within reasonable execution time, but also that MBEM outperforms EM in terms of solution quality, even for large problem instances. (C) 2015 Elsevier B.V. All rights reserved.
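The abstract does not specify how the memory matrix is structured, so the following is only a minimal sketch of the general idea: keep a fixed-size archive of elite solutions and let local search occasionally copy features from a remembered solution. All names (`update_memory`, `biased_local_search`, the `rate` parameter) are hypothetical, not from the paper.

```python
import random

def update_memory(memory, solution, fitness, capacity=10):
    """Keep the best `capacity` (solution, fitness) pairs seen so far."""
    memory.append((solution, fitness))
    memory.sort(key=lambda pair: pair[1], reverse=True)  # maximise fitness
    del memory[capacity:]

def biased_local_search(solution, memory, rate=0.3):
    """With probability `rate`, copy each feature from a remembered elite solution."""
    if not memory:
        return list(solution)
    elite, _ = random.choice(memory)
    return [e if random.random() < rate else s for s, e in zip(solution, elite)]

# Toy usage: solutions are redundancy levels per subsystem, fitness is reliability.
memory = []
update_memory(memory, [2, 1, 3], fitness=0.90)
update_memory(memory, [1, 1, 2], fitness=0.85)
candidate = biased_local_search([3, 2, 1], memory)
```

The feedback loop is the point: good features accumulate in the archive and are reinjected into new candidates, which is what makes the search "memory-based" rather than purely random.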
In recent years, model-agnostic meta-learning (MAML) has become a popular research area. However, the stochastic optimization of MAML is still underdeveloped. Existing MAML algorithms rely on the "episode" idea by sampling a few tasks and data points to update the meta-model at each iteration. Nonetheless, these algorithms either fail to guarantee convergence with a constant mini-batch size or require processing a large number of tasks at every iteration, which is unsuitable for continual learning or cross-device federated learning where only a small number of tasks are available per iteration or per round. To address these issues, this paper proposes memory-based stochastic algorithms for MAML that converge with vanishing error. The proposed algorithms require sampling a constant number of tasks and data samples per iteration, making them suitable for the continual learning scenario. Moreover, we introduce a communication-efficient memory-based MAML algorithm for personalized federated learning in cross-device (with client sampling) and cross-silo (without client sampling) settings. Our theoretical analysis improves the optimization theory for MAML, and our empirical results corroborate our theoretical findings. Interested readers can access our code at https://***/bokun-wang/moml.
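The core claim is that a per-task memory lets the algorithm sample only a constant number of tasks per iteration while the error still vanishes. A toy sketch of that mechanism, assuming a moving-average memory of per-task meta-gradient estimates (the function names, the toy quadratic objective, and the `beta` parameter are illustrative assumptions, not the paper's actual update):

```python
import numpy as np

rng = np.random.default_rng(0)

def task_grad(w, task_seed):
    """Noisy meta-gradient for one task (toy quadratic standing in for MAML's
    inner-loop adaptation followed by an outer-loop gradient)."""
    rng_t = np.random.default_rng(task_seed)
    target = rng_t.normal(size=w.shape)
    return w - target + 0.01 * rng_t.normal(size=w.shape)

def moml_step(w, memory, task_ids, lr=0.1, beta=0.5):
    """Refresh the memory of each sampled task, then descend on the memory mean."""
    for t in task_ids:
        g = task_grad(w, t)
        memory[t] = g if t not in memory else beta * memory[t] + (1 - beta) * g
    direction = np.mean(list(memory.values()), axis=0)
    return w - lr * direction

w = np.zeros(3)
memory = {}
for step in range(200):
    sampled = rng.integers(0, 5, size=1)  # a constant number of tasks per iteration
    w = moml_step(w, memory, sampled.tolist())
```

Because the descent direction averages over *all* remembered tasks, each iteration only needs to refresh one of them, which is the property that makes this style of algorithm attractive for continual and cross-device federated settings.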
Memory-based Collaborative Filtering (CF) has been a widely used approach for personalised recommendation, with considerable success in many applications. An important issue in memory-based CF lies in similarity computation: the sparsity of the rating matrix means similarities are computed from few co-rated items between users, resulting in highly sensitive predictions. Additionally, sparse similarity computation has a high computational cost, due to the dimensionality of the item space. In this paper, we address both of these issues. We propose a new model that computes similarity by representing users (or items) through their distances to preselected users, named landmarks. Such user modelling allows more ratings to enter the similarity computation through transitive relations created by the landmarks. Unlike conventional memory-based CF, the proposal builds a new user space defined by distances to landmarks, avoiding sensitivity in similarity computations. Our experiments show that the proposed modelling achieves better accuracy than the 'sparse' similarity representation on all tested datasets, and yields competitive accuracy against the compared model-based CF algorithms. Furthermore, the proposed implementation outperformed all compared methods in computational performance, making it a promising alternative to memory-based CF algorithms for large datasets. (C) 2019 Elsevier Inc. All rights reserved.
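The landmark idea can be sketched in a few lines: map each user from the sparse item space into a dense, low-dimensional space of distances to a few landmark users, and compute similarities there. This is only an illustration of the concept (the toy rating matrix, Euclidean distance, and cosine similarity are assumptions; the paper's actual distance and landmark-selection choices may differ):

```python
import numpy as np

# Toy rating matrix: rows = users, columns = items, 0 = unrated.
R = np.array([
    [5, 0, 3, 0],
    [4, 0, 0, 1],
    [0, 2, 3, 5],
], dtype=float)

def distances_to_landmarks(R, landmark_rows):
    """Represent each user by Euclidean distances to preselected landmark users."""
    landmarks = R[landmark_rows]
    return np.linalg.norm(R[:, None, :] - landmarks[None, :, :], axis=2)

def cosine_sim(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

D = distances_to_landmarks(R, landmark_rows=[0, 2])  # shape (n_users, n_landmarks)
sim_01 = cosine_sim(D[0], D[1])  # dense similarity despite few co-rated items
```

Note that users 0 and 1 share only one co-rated item in `R`, yet in the landmark space every user has a full dense vector, so the similarity no longer hinges on a handful of overlapping ratings, and its cost scales with the number of landmarks rather than the number of items.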
ISBN:
(Print) 9781728169293
In this paper, we propose an enhanced feature selection algorithm able to cope with the feature drift problem that may occur in data streams, where the set of relevant features changes over time. We use a dynamic multi-objective evolutionary algorithm to continuously search for the updated set of relevant features after every change in the environment. An artificial neural network then efficiently classifies new instances based on the up-to-date set of relevant features. Our algorithm exploits a change-detection mechanism to estimate the severity of each change and adaptively responds by introducing diversity into the algorithm's solutions. Furthermore, a fixed-size memory stores good solutions and reuses them after each change to accelerate the algorithm's convergence and search. Experimental results on three datasets, under different environmental parameters, show that the combination of our improved feature selection algorithm with the artificial neural network outperforms related work.
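The two memory-related mechanisms described above, a fixed-size archive of good solutions and severity-dependent reseeding after a change, can be sketched as follows. The class name, the `severity` scaling, and the reseeding policy are all hypothetical illustrations of the idea, not the paper's exact design:

```python
class SolutionMemory:
    """Fixed-size archive of good solutions, reinjected after a detected change."""

    def __init__(self, size):
        self.size = size
        self.archive = []  # list of (solution, fitness) pairs, best first

    def store(self, solution, fitness):
        self.archive.append((solution, fitness))
        self.archive.sort(key=lambda p: p[1], reverse=True)
        del self.archive[self.size:]

    def reseed(self, population, severity):
        """After a change: reinject remembered solutions. A higher estimated
        severity keeps fewer memory entries, leaving room for fresh diversity."""
        n_memory = max(1, int(len(self.archive) * (1 - severity)))
        seeds = [s for s, _ in self.archive[:n_memory]]
        return population[: len(population) - len(seeds)] + seeds

# Toy usage: solutions are feature subsets encoded as index lists.
mem = SolutionMemory(size=3)
for i in range(5):
    mem.store(solution=[i, i + 1], fitness=i / 5)
pop = [[0, 0]] * 6
new_pop = mem.reseed(pop, severity=0.5)
```

Coupling the injected fraction to the estimated severity is the key trade-off: after a mild drift old elites are likely still useful and speed up convergence, whereas after a severe drift the algorithm should rely more on new diversity than on stale memory.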