In this paper we address the problem of visualizing, as convex objects in a bounded region, a set of individuals to which a dissimilarity measure and a statistical value are attached. This problem, which extends standard Multidimensional Scaling Analysis, is written as a global optimization problem whose objective is the difference of two convex functions (dc). Suitable dc decompositions allow us to use the Difference of Convex algorithm (dcA) very efficiently. Our algorithmic approach is used to visualize two real-world datasets.
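The dcA iteration that this abstract (and several below) relies on is simple to state: given f = g - h with g, h convex, linearize h at the current iterate and minimize the resulting convex model. A minimal sketch on a toy objective of our own choosing (not from the paper), the double well f(x) = x^4/4 - x^2/2 with g(x) = x^4/4 and h(x) = x^2/2, where the convex subproblem has the closed form x_{k+1} = cbrt(x_k):

```python
import numpy as np

# Generic dcA sketch on f = g - h:
#   x_{k+1} = argmin_x g(x) - h'(x_k) * x.
# For g(x) = x^4/4 the subproblem's optimality condition is x^3 = h'(x_k),
# i.e. x_{k+1} = cbrt(h'(x_k)), and here h'(x) = x.

def dca_double_well(x0, iters=50):
    x = x0
    for _ in range(iters):
        grad_h = x              # gradient of the subtracted convex part h
        x = np.cbrt(grad_h)     # closed-form solution of the convex subproblem
    return x

x_star = dca_double_well(0.5)
print(round(x_star, 6))         # converges to the stationary point x = 1
```

Starting from x0 = 0.5, the iterates increase monotonically toward the local minimizer x = 1 of the double well, illustrating dcA's typical behavior: monotone descent to a stationary point of the nonconvex objective.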
The proximal support vector machine (PSVM), a variant of the support vector machine (SVM), generates a pair of non-parallel hyperplanes for classification. Although PSVM is a powerful classification tool, its feature-selection ability is weak. To overcome this defect, we introduce l0-norm regularization into PSVM, which enables PSVM to select important features and remove redundant features simultaneously for classification. This variant is called the sparse proximal support vector machine (SPSVM). Due to the presence of the l0-norm, the resulting optimization problem of SPSVM is neither convex nor smooth and is thus difficult to solve. In this paper, we introduce a continuous nonconvex function to approximate the l0-norm and propose a novel difference of convex functions algorithm (dcA) to solve SPSVM. The main merit of the proposed method is that all subproblems are smooth and admit closed-form solutions. The effectiveness of the proposed method is illustrated by theoretical analysis as well as numerical experiments on both simulated and real-world datasets. (C) 2020 Elsevier Inc. All rights reserved.
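A standard example of such a continuous nonconvex l0 surrogate (a generic illustration, not necessarily the paper's exact choice) is the capped-l1 function phi(t) = min(|t|/tau, 1), which admits the dc decomposition phi(t) = |t|/tau - max(|t|/tau - 1, 0), so dcA applies directly:

```python
import numpy as np

# Capped-l1 surrogate of the l0 indicator (|t| != 0), with its dc split:
#   phi(t) = min(|t|/tau, 1) = g(t) - h(t),
#   g(t) = |t|/tau (convex),  h(t) = max(|t|/tau - 1, 0) (convex).

def capped_l1(w, tau=0.1):
    return np.minimum(np.abs(w) / tau, 1.0)

def dc_parts(w, tau=0.1):
    g = np.abs(w) / tau                        # convex part
    h = np.maximum(np.abs(w) / tau - 1.0, 0.0) # convex part to subtract
    return g, h

w = np.array([0.0, 0.05, 0.5, -2.0])
g, h = dc_parts(w)
print(np.allclose(capped_l1(w), g - h))   # True: phi = g - h elementwise
print(capped_l1(w))                       # [0. 0.5 1. 1.] approximates l0
```

Entries well above the threshold tau contribute approximately 1 (as the l0-norm would), while exact zeros contribute 0, which is what makes the surrogate useful for simultaneous feature selection and classification.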
In this paper, an NP-hard problem of minimizing a sum of pointwise minima of two functions is considered. Using a new equivalent reformulation, we propose a smooth approximation and an ADMM algorithm for solving the problem. In numerical experiments, we compare four methods, including the algorithms proposed in this paper and existing methods. The results indicate that the performance of each algorithm can depend strongly on the problem and simulation settings.
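One standard way to smooth a pointwise minimum (an illustration of the general idea; the paper's specific reformulation may differ) uses the identity min(a, b) = (a + b - |a - b|)/2 and replaces |t| by the smooth surrogate sqrt(t^2 + mu^2), whose approximation error is at most mu/2:

```python
import numpy as np

# Smooth approximation of min(a, b):
#   min(a, b) = (a + b - |a - b|) / 2,
# with |t| replaced by sqrt(t^2 + mu^2). Since sqrt(t^2 + mu^2) >= |t|,
# the approximation lies below the true minimum, within mu/2 of it.

def smooth_min(a, b, mu=1e-3):
    t = a - b
    return 0.5 * (a + b - np.sqrt(t * t + mu * mu))

a, b = 1.3, 0.7
approx = smooth_min(a, b)
print(abs(approx - min(a, b)) <= 0.5e-3)   # True: error bounded by mu/2
```

Because the surrogate is smooth in (a, b), a sum of such terms can be handled by gradient-based solvers or split across an ADMM-style scheme, which is the spirit of the approach described above.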
The goal of the affine matrix rank minimization problem is to reconstruct a low-rank or approximately low-rank matrix under linear constraints. In general, this problem is combinatorial and NP-hard. In this paper, a nonconvex fraction function is studied to approximate the rank of a matrix, translating this NP-hard problem into a transformed affine matrix rank minimization problem. The equivalence between the two problems is established, and we prove that, under some conditions, the unique global minimizer of the transformed problem also solves the original affine matrix rank minimization problem. Moreover, we prove that the optimal solution of the transformed problem can be approximately obtained by solving its regularization problem for some suitably small lambda > 0. Lastly, the dc algorithm is utilized to solve the regularized transformed problem, and numerical experiments on image inpainting show that our method performs effectively in recovering low-rank images compared with some state-of-the-art algorithms. (c) 2018 Elsevier B.V. All rights reserved.
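The fraction-function idea can be sketched as follows (a generic illustration under our own parameter choices, not the paper's exact formulation): apply rho_a(t) = a*t/(a*t + 1) to the singular values of X; as a grows, the sum of rho_a(sigma_i(X)) tends to rank(X), giving a continuous nonconvex rank surrogate:

```python
import numpy as np

# Continuous nonconvex surrogate for rank(X): sum of rho_a(sigma_i(X)),
# where rho_a(t) = a*t / (a*t + 1) maps any positive singular value toward 1
# and zero to 0, so the sum counts "effectively nonzero" singular values.

def fraction_rank(X, a=1e4):
    s = np.linalg.svd(X, compute_uv=False)
    return np.sum(a * s / (a * s + 1.0))

X = np.outer([1.0, 2.0, 3.0], [4.0, 5.0])   # a rank-one 3x2 matrix
print(round(fraction_rank(X), 3))            # close to rank(X) = 1
```

Unlike the nuclear norm, which penalizes large singular values proportionally, the fraction function saturates near 1, so it penalizes the count of nonzero singular values rather than their magnitudes, which is closer to the true rank objective.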
This paper proposes a new robust truncated L2-norm twin support vector machine, where the truncated L2-norm measures the empirical risk to make the classifiers more robust in the presence of many outliers. Meanwhile, chance constraints are employed to bound the false positive and false negative error rates. The proposed classifier requires solving a pair of chance-constrained nonconvex nonsmooth problems. To solve these difficult problems, we propose an efficient iterative method based on difference of convex functions (dc) programs and dc algorithms (dcA). Experiments on benchmark and artificial data sets demonstrate the significant virtues of the proposed classifier in terms of robustness and generalization performance.
In this paper, we study characterizations of differentiability for real-valued functions based on generalized differentiation. These characterizations provide the mathematical foundation for Nesterov's smoothing techniques in infinite dimensions. As an application, we provide a simple approach to image reconstruction based on Nesterov's smoothing, together with algorithms for minimizing differences of convex (dc) functions that involve regularization.
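A standard finite-dimensional instance of Nesterov's smoothing (a textbook example, not the paper's infinite-dimensional setting) is the absolute value: writing f(x) = |x| = max_{|u|<=1} u*x and subtracting the proximity term mu*u^2/2 inside the max yields the Huber function, a smooth approximation satisfying f(x) - mu/2 <= f_mu(x) <= f(x):

```python
import numpy as np

# Nesterov smoothing of f(x) = |x|:
#   f_mu(x) = max_{|u|<=1} (u*x - mu*u^2/2)
#           = x^2/(2*mu)      if |x| <= mu   (quadratic inner region)
#           = |x| - mu/2      if |x| >  mu   (linear outer region),
# which is exactly the Huber function.

def huber(x, mu=0.5):
    x = np.asarray(x, dtype=float)
    quad = x * x / (2.0 * mu)        # inner region |x| <= mu
    lin = np.abs(x) - mu / 2.0       # outer region |x| > mu
    return np.where(np.abs(x) <= mu, quad, lin)

x = np.array([-2.0, -0.25, 0.0, 0.25, 2.0])
gap = np.abs(x) - huber(x)
print(np.all((gap >= -1e-12) & (gap <= 0.25 + 1e-12)))  # True: within mu/2
```

The uniform error bound mu/2 is what makes the technique useful in dc algorithms: the smoothed term can replace the nonsmooth one without changing the minimizer by more than a controlled amount.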
In dose-finding clinical trials, it is becoming increasingly important to account for individual-level heterogeneity while searching for optimal doses, so that an optimal individualized dose rule (IDR) maximizes the expected beneficial clinical outcome for each individual. In this article, we advocate a randomized trial design in which the candidate dose levels assigned to study subjects are randomly chosen from a continuous distribution within a safe range. To estimate the optimal IDR from such data, we propose an outcome weighted learning method based on a nonconvex loss function, which can be solved efficiently using a difference of convex functions algorithm. The consistency and convergence rate of the estimated IDR are derived, and its small-sample performance is evaluated via simulation studies. We demonstrate that the proposed method outperforms competing approaches. Finally, we illustrate this method using data from a cohort study of warfarin (an anti-thrombotic drug) dosing. Supplementary materials for this article are available online.
By using error bounds for affine variational inequalities we prove that any iterative sequence generated by the Projection dc (Difference-of-Convex functions) decomposition algorithm in quadratic programming is R-linearly convergent, provided that the original problem has solutions. Our result solves in the affirmative the first part of the conjecture stated by Le Thi, Pham Dinh and Yen in their recent paper [8, p. 489]. (C) 2014 Elsevier Inc. All rights reserved.
In this paper, we study nearest prototype classifiers, which assign each data instance to the class of its nearest prototype. We propose a maximum-margin model for nearest prototype classifiers. To define the margin, we first define a class-wise discriminant function: the score of an instance for a class is the negative of the distance to its nearest prototype of that class. The margin is then the minimum, over instances, of the difference between the discriminant value for the class an instance belongs to and the largest value among the other classes. The optimization problem corresponding to the maximum-margin model is a difference of convex functions (dc) program. It is solved using a dc algorithm that resembles k-means, i.e., the memberships and positions of prototypes are optimized alternately. Through a numerical study, we analyze the effects of the hyperparameters of the maximum-margin model, with particular attention to classification performance.
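The class-wise discriminant and margin described above can be sketched directly (with hypothetical data and prototype positions of our own choosing):

```python
import numpy as np

# Class-wise discriminant for nearest prototype classifiers:
# the score of instance x for class c is the negative of the distance
# to the nearest prototype of c; the margin of a labeled instance (x, y)
# is score_y(x) minus the best score among the other classes.

def scores(x, prototypes):
    # prototypes: dict mapping class label -> array of prototype vectors
    return {c: -np.min(np.linalg.norm(P - x, axis=1))
            for c, P in prototypes.items()}

def margin(x, y, prototypes):
    s = scores(x, prototypes)
    return s[y] - max(v for c, v in s.items() if c != y)

protos = {0: np.array([[0.0, 0.0], [1.0, 0.0]]),
          1: np.array([[3.0, 0.0]])}
x = np.array([0.5, 0.0])
print(margin(x, 0, protos))   # positive: x is closer to class-0 prototypes
```

A positive margin means the instance is correctly classified with slack; the maximum-margin model maximizes the smallest such value over the training set, which is what yields the dc program solved by the k-means-like alternating algorithm.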
We prove that any iterative sequence generated by the projection decomposition algorithm of Pham Dinh et al. (Optim Methods Softw 23:609-629, 2008) in quadratic programming is bounded, provided that the quadratic program in question is two-dimensional and solvable.