The hypervolume optimal µ-distribution is the distribution of µ solutions that maximizes the hypervolume indicator on a given Pareto front. Most studies have focused on simple Pareto fronts such as triangular and inverted triangular Pareto fronts, whereas almost no study has examined complex Pareto fronts such as disconnected and partially degenerate Pareto fronts. However, most real-world multi-objective optimization problems have such complex Pareto fronts. Thus, it is of great practical significance to study the hypervolume optimal µ-distribution on complex Pareto fronts. In this paper, we study this issue by empirically showing the hypervolume optimal µ-distributions on the Pareto fronts of some representative artificial and real-world test problems. Our results show that, in general, maximizing the hypervolume indicator does not lead to uniformly distributed solution sets on complex Pareto fronts. We also give some suggestions on the use of the hypervolume indicator for performance evaluation of evolutionary multi-objective optimization algorithms.
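As a concrete illustration of the indicator being maximized, the following is a minimal sketch of the hypervolume computation for the bi-objective minimization case. The function name `hypervolume_2d`, the reference point, and the example front are ours for illustration, not taken from the paper.

```python
import numpy as np

def hypervolume_2d(points, reference):
    """Hypervolume of a bi-objective (minimization) solution set w.r.t. a reference point."""
    pts = np.asarray(points, dtype=float)
    ref = np.asarray(reference, dtype=float)
    # Keep only points that strictly dominate the reference point.
    pts = pts[np.all(pts < ref, axis=1)]
    if len(pts) == 0:
        return 0.0
    # Sweep from left to right and accumulate the uncovered rectangular slices.
    pts = pts[np.argsort(pts[:, 0])]
    hv, best_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best_f2:  # point is non-dominated among those already swept
            hv += (ref[0] - f1) * (best_f2 - f2)
            best_f2 = f2
    return hv

# mu = 5 evenly spaced solutions on the linear front f1 + f2 = 1 (illustrative only).
front = np.array([[i / 4, 1 - i / 4] for i in range(5)])
print(hypervolume_2d(front, reference=[1.1, 1.1]))
```

Finding the µ points that maximize this value for a given front is exactly the hypervolume optimal µ-distribution problem studied in the paper.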
In evolutionary multi-objective optimization (EMO), one important issue is to efficiently remove dominated solutions from a large number of solutions examined by an EMO algorithm. An efficient approach to removing dominated solutions from a large solution set is to partition it into small subsets. Dominated solutions are removed from each subset independently. This partition approach is fast but cannot guarantee that all dominated solutions are removed. To further remove the remaining dominated solutions, a simple idea is to apply this approach iteratively. In this paper, we first examine three partition methods (random, objective value-based and cosine similarity-based methods) and their iterative versions through computational experiments on artificial test problems (DTLZ and WFG) and real-world problems. Our results show that the choice of an appropriate partition method is problem dependent. This observation motivates us to use a hybrid approach where different partition methods are used in an iterative manner. The results show that all dominated solutions are removed by the hybrid approach in most cases. Then, we examine the effects of the following factors on the computation time and the removal performance: the number of objectives, the shape of the Pareto front, and the number of subsets in each partition method.
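A minimal sketch of the random partition method with iteration is given below; `nondominated` and `iterative_partition_filter` are illustrative names of ours, and the objective value-based and cosine similarity-based partitions from the paper are not shown.

```python
import numpy as np

def nondominated(points):
    """Boolean mask of non-dominated points (minimization)."""
    pts = np.asarray(points, dtype=float)
    keep = np.ones(len(pts), dtype=bool)
    for i in range(len(pts)):
        # Point i is dominated if some point is <= in all objectives and < in at least one.
        dominated = np.any(np.all(pts <= pts[i], axis=1) & np.any(pts < pts[i], axis=1))
        keep[i] = not dominated
    return keep

def iterative_partition_filter(points, n_subsets=10, n_iterations=3, rng=None):
    """Randomly partition the set and filter each subset independently, repeated several times.

    A single pass is fast but, as noted in the abstract, cannot guarantee that all
    dominated solutions are removed; repeating with different random partitions removes more.
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, dtype=float)
    for _ in range(n_iterations):
        order = rng.permutation(len(pts))
        kept = []
        for subset in np.array_split(order, n_subsets):
            sub = pts[subset]
            kept.append(sub[nondominated(sub)])
        pts = np.vstack(kept)
    return pts
```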
Accurate and efficient methods for identifying and tracking each animal in a group are needed to study complex behaviors and social interactions. Traditional tracking methods (e.g., marking each animal with dye or surgically implanting microchips) can be invasive and may have an impact on the social behavior being studied. To overcome these shortcomings, video-based methods for tracking unmarked animals, such as fruit flies and zebrafish, have been developed. However, tracking individual mice in a group remains a challenging problem because of their flexible bodies and complicated interaction patterns. In this study, we report the development of a multi-object tracker for mice that uses the Faster region-based convolutional neural network (R-CNN) deep learning algorithm with geometric transformations in combination with multi-camera/multi-image fusion techniques. The system successfully tracked every individual in groups of unmarked mice and was applied to investigate chasing behavior. The proposed system constitutes a step forward in the noninvasive tracking of individual mice engaged in social behavior.
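As a rough sketch of the detection stage only (not the geometric transformations or the multi-camera fusion described above), a pretrained Faster R-CNN from torchvision could be applied per frame as follows. In the paper the detector would presumably be fine-tuned on labeled mouse frames; the score threshold and the `weights="DEFAULT"` argument (torchvision ≥ 0.13; older versions use `pretrained=True`) are our assumptions.

```python
import torch
import torchvision

# Generic COCO-pretrained Faster R-CNN; the paper's detector would be fine-tuned on mouse data.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_per_frame(frame, score_threshold=0.8):
    """Detect objects in one video frame given as a CxHxW float tensor with values in [0, 1]."""
    with torch.no_grad():
        output = model([frame])[0]
    keep = output["scores"] >= score_threshold
    return output["boxes"][keep], output["scores"][keep]

# Associating these boxes across frames and cameras is where the geometric
# transformations and multi-camera/multi-image fusion described in the abstract come in.
```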
Hypervolume subset selection (HSS) is a hot topic in the evolutionary multi-objective optimization (EMO) community since hypervolume is the most widely-used performance indicator. In the literature, most HSS algorithms were designed for small-scale HSS (e.g., environmental selection: select $N$ solutions from $2N$ solutions where $N$ is the population size). Few researchers have focused on large-scale HSS as a post-processing procedure in an unbounded external archive framework (i.e., subset selection from all examined solutions). In this paper, we propose a two-stage lazy greedy inclusion HSS (TGI-HSS) algorithm for large-scale HSS. In the first stage of TGI-HSS, a small solution set is selected from a large-scale candidate set using an efficient subset selection method (which is not based on exact hypervolume calculation). In the second stage, the final subset is selected from the small solution set using an existing efficient HSS algorithm. Experimental results show that the computational time can be significantly reduced by the proposed algorithm in comparison with other state-of-the-art HSS algorithms at the cost of only a small deterioration of the selected subset quality.
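For reference, a plain (non-lazy, single-stage) greedy inclusion HSS for the bi-objective case might look like the sketch below; the names `hv2d` and `greedy_inclusion_hss`, the reference point, and the example front are ours. TGI-HSS itself adds the cheap first-stage pre-selection and lazy evaluation described in the abstract on top of this basic greedy scheme.

```python
import numpy as np

def hv2d(points, ref):
    """Bi-objective hypervolume (minimization) via a left-to-right sweep."""
    pts = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    pts = pts[np.all(pts < ref, axis=1)]
    pts = pts[np.argsort(pts[:, 0])]
    hv, best = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < best:
            hv += (ref[0] - f1) * (best - f2)
            best = f2
    return hv

def greedy_inclusion_hss(candidates, k, ref):
    """Greedily add the candidate with the largest hypervolume gain until k solutions are chosen."""
    cand = np.asarray(candidates, dtype=float)
    selected, remaining, current_hv = [], list(range(len(cand))), 0.0
    for _ in range(k):
        gains = [hv2d(cand[selected + [j]], ref) - current_hv for j in remaining]
        best = int(np.argmax(gains))
        current_hv += gains[best]
        selected.append(remaining.pop(best))
    return cand[selected]

# Example: select 3 of 7 points on a linear front (illustrative only).
front = np.array([[i / 6, 1 - i / 6] for i in range(7)])
print(greedy_inclusion_hss(front, k=3, ref=[1.1, 1.1]))
```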
Recently, it has been pointed out in many studies that the performance of evolutionary multi-objective optimization (EMO) algorithms can be improved by selecting solutions from all examined solutions stored in an unbounded external archive. This is because, in general, the final population is not the best subset of the examined solutions. To obtain a good final solution set in such a solution selection framework, subset selection from a large candidate set (i.e., all examined solutions) has been studied. However, since good subsets cannot be obtained from poor candidate sets, a more important issue is how to find a good candidate set, which is the focus of this paper. In this paper, we first visually demonstrate that the entire Pareto front is not covered by the examined solutions through computational experiments using MOEA/D, NSGA-III and SMS-EMOA on DTLZ test problems. That is, the examined solution set stored in the unbounded archive has some large holes (i.e., some uncovered areas of the Pareto front). Next, to evaluate the quality of the examined solution set (i.e., to measure the size of the largest hole), we propose the use of a variant of the inverted generational distance (IGD) indicator. Then, we propose a simple modification of EMO algorithms to improve the quality of the examined solution set. Finally, we demonstrate the effectiveness of the proposed modification through computational experiments.
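The abstract does not specify the exact IGD variant, but one natural way to measure the largest hole is to replace the average over reference points in IGD by a maximum, as in the hedged sketch below (our formulation, not necessarily the one used in the paper).

```python
import numpy as np

def igd(reference_points, solutions):
    """Standard IGD: average over reference points of the distance to the nearest solution."""
    R = np.asarray(reference_points, dtype=float)
    S = np.asarray(solutions, dtype=float)
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2).min(axis=1)
    return d.mean()

def largest_hole_igd(reference_points, solutions):
    """Max-based variant: the worst-covered reference point indicates the largest hole."""
    R = np.asarray(reference_points, dtype=float)
    S = np.asarray(solutions, dtype=float)
    d = np.linalg.norm(R[:, None, :] - S[None, :, :], axis=2).min(axis=1)
    return d.max()
```

Here `reference_points` would be a dense, uniform sample of the true Pareto front, and `solutions` the examined solution set stored in the unbounded archive.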
Initialization plays a crucial role in surrogate-based multiobjective evolutionary algorithms (MOEAs) when tackling computationally expensive multiobjective optimization problems. During the initialization process, solutions are generated to train surrogate models. Consequently, the accuracy of these surrogate models depends on the quality of the initial solutions, which in turn directly impacts the performance of surrogate-based MOEAs. Despite the widespread use of Latin hypercube sampling as an initialization method in surrogate-based MOEAs, there is a lack of comprehensive research examining the effectiveness of different initialization methods. Additionally, the impact of the number of initial solutions on the performance of surrogate-based MOEAs remains largely unexplored. This paper aims to bridge these research gaps by comparing the usefulness of two commonly employed initialization methods (i.e., random sampling and Latin hypercube sampling) in surrogate-based MOEAs. Furthermore, it investigates how varying the number of initial solutions influences the performance of surrogate-based MOEAs.
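A minimal sketch of the two compared initialization methods is given below, using `scipy.stats.qmc` for Latin hypercube sampling; the dimensionality, sample size, and bounds in the example are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.stats import qmc

def random_init(n, lower, upper, seed=0):
    """Uniform random sampling in the decision space."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lower, upper, size=(n, len(lower)))

def lhs_init(n, lower, upper, seed=0):
    """Latin hypercube sampling: one sample per axis-aligned stratum in each dimension."""
    sampler = qmc.LatinHypercube(d=len(lower), seed=seed)
    return qmc.scale(sampler.random(n), lower, upper)

# Example: 100 initial solutions in an 11-dimensional unit decision space (illustrative only).
lower, upper = np.zeros(11), np.ones(11)
X_random = random_init(100, lower, upper)
X_lhs = lhs_init(100, lower, upper)
```

These initial solutions would then be evaluated on the expensive objective functions and used to train the surrogate models.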
Hypervolume-based multi-objective evolutionary algorithms (HV-MOEAs) have proven to be highly effective in solving multi-objective optimization problems. However, the computation time of the hypervolume calculation increases significantly as the number of objectives increases. To address this issue, an R2-based hypervolume contribution approximation (R2-HVC) method was proposed. Nevertheless, the original R2-HVC method uses a large number of direction vectors and computes the HVC approximation only once. In this study, we propose an ensemble method based on the R2-HVC method. By repeatedly using small sets of vectors and applying majority voting, the ensemble method can reduce the probability of making incorrect choices. Experimental results show that the proposed method can improve the approximation accuracy while maintaining a similar computation time to the original R2-HVC method.
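The sketch below illustrates the voting idea: several small random direction-vector sets each nominate the solution with the smallest approximated contribution, and the majority decides which solution to discard. The per-vote ray-length approximation here is a simplified stand-in in the spirit of R2-HVC, not the exact formula of the original method, and all function names are ours.

```python
import numpy as np

def ray_hvc_approx(points, i, vectors, ref):
    """Approximate the hypervolume contribution of points[i] (minimization) as the average
    m-th power of the distance a ray from points[i] can travel along each direction vector
    before leaving the reference box or entering a region dominated by another solution.
    A simplified stand-in for the R2-HVC approximation."""
    A = np.asarray(points, dtype=float)
    ref = np.asarray(ref, dtype=float)
    a, others = A[i], np.delete(A, i, axis=0)
    m, total = A.shape[1], 0.0
    for lam in vectors:
        t = np.min((ref - a) / lam)                                 # exit the reference box
        if len(others) > 0:
            t = min(t, np.min(np.max((others - a) / lam, axis=1)))  # enter a dominated region
        total += max(t, 0.0) ** m
    return total / len(vectors)

def worst_solution_by_voting(points, ref, n_votes=5, vectors_per_vote=20, seed=0):
    """Each small random vector set votes for the least-contributing solution; majority decides."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    votes = np.zeros(len(pts), dtype=int)
    for _ in range(n_votes):
        vecs = rng.random((vectors_per_vote, pts.shape[1])) + 1e-12
        vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)         # unit direction vectors
        hvc = [ray_hvc_approx(pts, i, vecs, ref) for i in range(len(pts))]
        votes[int(np.argmin(hvc))] += 1
    return int(np.argmax(votes))
```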
In this paper, we examine the effect of normalization in R2-based hypervolume and hypervolume contribution approximation. Regions with different scales in the objective space introduce approximation bias. The basic idea of normalization is to perform a coordinate transformation so that the shape of the approximated region becomes more regular, and then to transform the result back to obtain the final value based on the properties of the hypervolume and the hypervolume contribution. The performance of normalization is evaluated on different datasets by comparing it with the original R2-based method. We use two different metrics to evaluate the hypervolume and the hypervolume contribution separately, and the results indicate that normalization indeed improves the approximation accuracy and outperforms the original R2-based method.
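The sketch below only shows the generic relationship that makes such a transformation reversible for the hypervolume: an axis-aligned linear rescaling multiplies any volume by the product of the per-objective ranges. The specific coordinate transformation and the handling of the hypervolume contribution in the paper may differ; the function names and bounds here are illustrative.

```python
import numpy as np

def normalize(points, ideal, nadir):
    """Linearly map each objective to [0, 1] using the ideal and nadir points."""
    ideal = np.asarray(ideal, dtype=float)
    nadir = np.asarray(nadir, dtype=float)
    return (np.asarray(points, dtype=float) - ideal) / (nadir - ideal)

def rescale_hypervolume(hv_normalized, ideal, nadir):
    """Recover the original-scale hypervolume from the value computed in the normalized space.

    An axis-aligned linear rescaling multiplies any volume by the product of the per-objective
    ranges; the same factor applies to hypervolume contributions, since they are differences
    of hypervolumes.
    """
    ranges = np.asarray(nadir, dtype=float) - np.asarray(ideal, dtype=float)
    return hv_normalized * np.prod(ranges)
```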
Source-free domain adaptation (SFDA) aims to adapt a model pre-trained on a labeled source domain to an unlabeled target domain without access to source data, preserving the source domain’s privacy. While SFDA is prevalent in computer vision, it remains largely unexplored in time series analysis. Existing SFDA methods, designed for visual data, struggle to capture the inherent temporal dynamics of time series, hindering adaptation performance. This paper proposes MAsk And imPUte (MAPU), a novel and effective approach for time series SFDA. MAPU addresses the critical challenge of temporal consistency by introducing a novel temporal imputation task. This task involves randomly masking time series signals and leveraging a dedicated temporal imputer to recover the original signal within the learned embedding space, bypassing the complexities of noisy raw data. During subsequent adaptation, the imputer network guides the target model to generate target features that exhibit temporal consistency with the source features. Notably, MAPU is the first method to explicitly address temporal consistency in the context of time series SFDA. Additionally, it offers seamless integration with existing SFDA methods, providing greater flexibility. We further introduce E-MAPU, which incorporates evidential uncertainty estimation to address the overconfidence issue inherent in softmax predictions. To achieve this, we leverage evidential deep learning to obtain a better-calibrated pre-trained model and identify out-of-support target samples (those falling outside the source domain’s support) by predicting them with higher entropy than source samples. During adaptation, the target classifier remains fixed while the feature extractor is trained to minimize the evidential entropy of out-of-support target samples. This is achieved by adapting the target encoder to map these samples to a new feature representation closer to the source domain’s support. This fosters better alignment, ultimately enhancing adaptation performance.
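A heavily simplified PyTorch sketch of the temporal-imputation idea is given below: time steps are randomly masked and an imputer is trained to recover the embedding of the clean signal from the embedding of the masked one. The masking scheme, imputer architecture, and training objective here are our assumptions and do not reproduce the actual MAPU implementation.

```python
import torch
import torch.nn as nn

def random_mask(x, mask_ratio=0.3):
    """Zero out a random fraction of time steps in a batch of series of shape (B, L, C)."""
    keep = (torch.rand(x.shape[:2], device=x.device) > mask_ratio).float().unsqueeze(-1)
    return x * keep

class TemporalImputer(nn.Module):
    """Maps the embedding of a masked signal back toward the embedding of the clean signal."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, z):
        return self.net(z)

def imputation_loss(encoder, imputer, x):
    """Illustrative pre-training objective: recover the clean embedding in the learned space."""
    z_clean = encoder(x).detach()          # imputation target (kept fixed here for simplicity)
    z_masked = encoder(random_mask(x))     # embedding of the masked signal
    return nn.functional.mse_loss(imputer(z_masked), z_clean)
```

During adaptation, the pre-trained imputer would be frozen and the same loss used to guide the target encoder toward temporally consistent features, as described in the abstract.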
Insufficient prior knowledge of a captured hyperspectral image (HSI) scene may lead the experts or the automatic labeling systems to offer incorrect labels or ambiguous labels (i.e., assigning each training sample to ...