Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding to be useful when the data dimension is high. We develop a computationally fast algorithm using a combination of coordinate descent and majorization-minimization (MM) auxiliary optimization. Our new algorithm decouples the joint estimation of multiple components into separate estimations and consists of closed-form elementwise updating formulas for each sparse principal component. The performance of the proposed algorithm is tested using simulation and high-dimensional real-world datasets. (C) 2013 Elsevier B.V. All rights reserved.
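A minimal sketch of the quadratic-majorization idea behind such algorithms, for a single sparse component (illustrative only; the curvature constant, penalty placement, and update order are assumptions, not Lee et al.'s exact scheme):

```python
import numpy as np

def sigmoid(t):
    # clip to avoid overflow in exp for extreme logits
    return 1.0 / (1.0 + np.exp(-np.clip(t, -30.0, 30.0)))

def soft_threshold(z, lam):
    # closed-form minimizer of 0.5*(x - z)**2 + lam*|x|
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_logistic_pc1(Y, lam=0.01, iters=50, seed=0):
    """One sparse component for binary Y (n x d) via MM: the Bernoulli
    negative log-likelihood at Theta = u v^T is majorized by a quadratic
    with curvature 1/4, giving a least-squares working response Z and
    closed-form elementwise updates for u and the lasso-penalized v."""
    rng = np.random.default_rng(seed)
    n, d = Y.shape
    u = rng.standard_normal(n)
    v = rng.standard_normal(d)
    eps = 1e-12
    for _ in range(iters):
        Theta = np.outer(u, v)
        Z = Theta + 4.0 * (Y - sigmoid(Theta))   # MM working response
        u = Z @ v / (v @ v + eps)                # plain least-squares step
        v = soft_threshold(u @ Z, 4.0 * lam) / (u @ u + eps)  # lasso step
    return u, v
```

Each pass first builds the quadratic surrogate at the current (u, v), then does one block-coordinate sweep on it, so the surrogate (and hence the objective) cannot increase.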
The constrained estimation in Cox's model for right-censored survival data is studied, and the asymptotic properties of the constrained estimators are derived using the Lagrangian method based on Karush-Kuhn-Tucker conditions. A novel minorization-maximization (MM) algorithm is developed for calculating the maximum likelihood estimates of the regression coefficients subject to box or linear inequality restrictions in the proportional hazards model. The first M-step of the proposed MM algorithm constructs a surrogate function with a diagonal Hessian matrix, which can be obtained by exploiting the convexity of the exponential function and the negative logarithm function. The second M-step maximizes this surrogate function subject to box constraints, which is equivalent to separately maximizing several one-dimensional concave functions with lower- and upper-bound constraints, yielding an explicit solution via a median function. The ascent property of the proposed MM algorithm under constraints is theoretically justified. Standard error estimation is also presented via a non-parametric bootstrap approach. Simulation studies are performed to compare the estimates with and without constraints. Two real data sets are used to illustrate the proposed methods. (C) 2014 Elsevier B.V. All rights reserved.
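The median-function step described above can be sketched for a single coordinate (a toy illustration, assuming a 1-D concave quadratic surrogate term, not the paper's full Cox surrogate):

```python
def box_max_quadratic(a, b, lo, hi):
    """Maximize the 1-D concave quadratic f(x) = -0.5*a*x**2 + b*x (a > 0)
    over the box [lo, hi]: the constrained maximizer is the unconstrained
    maximizer b/a clipped into the box, i.e. median(lo, b/a, hi)."""
    x_unc = b / a                      # unconstrained maximizer of f
    return sorted([lo, x_unc, hi])[1]  # median of three = clip into the box
```

Because the surrogate has a diagonal Hessian, applying this clip coordinate by coordinate solves the whole box-constrained M-step in closed form.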
We model the effect of a road safety measure on a set of target sites, each with its own control area, and we suppose that the accident data recorded at each site are classified into different mutually exclusive types. We adopt the before-after technique and assume that at any one target site the total number of accidents recorded is multinomially distributed between the periods and types of accidents. In this article, we propose a minorization-maximization (MM) algorithm for obtaining the constrained maximum likelihood estimates of the parameter vector, and we compare it with a gradient projection expectation-maximization (GP-EM) algorithm. The performance of the algorithms is examined through a simulation study of road safety data.
The minorization-maximization (MM) algorithm is an optimization technique for iteratively calculating the maximizer of a concave target function, rather than a root-finding tool. In this paper, we develop, for the first time, the MM algorithm as a new method for seeking the root x* of a univariate nonlinear equation g(x) = 0. The key idea is to recast root finding as iteratively calculating the maximizer of a concave target function via a new MM algorithm. By the ascent property of the MM algorithm, the proposed algorithm converges to the root x* regardless of the initial value, in contrast to Newton's method. Several statistical examples are provided to demonstrate the proposed algorithm.
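One simple way to realize this idea (an illustration, not necessarily the paper's construction): for a decreasing g, any antiderivative F is concave and its maximizer is the root of g; a quadratic minorizer of F then yields a monotone update.

```python
def mm_root(g, x0, L, iters=500):
    """Find a root of a decreasing, L-Lipschitz g by maximizing a concave
    F with F' = g.  The quadratic
        F(x_t) + g(x_t)*(x - x_t) - (L/2)*(x - x_t)**2
    minorizes F and touches it at x_t; maximizing it gives the monotone
    ascent update x <- x + g(x)/L, which converges to the root of g."""
    x = x0
    for _ in range(iters):
        x = x + g(x) / L
    return x
```

Unlike Newton's method, the step never uses g', and the ascent property rules out the divergence that a bad Newton starting value can cause.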
We consider a semiparametric mixture of two univariate density functions where one of them is known while the weight and the other function are unknown. We do not assume any additional structure on the unknown density function. For this mixture model, we derive a new sufficient identifiability condition and pinpoint a specific class of distributions describing the unknown component for which this condition is mostly satisfied. We also suggest a novel approach to estimation of this model, based on applying a maximum smoothed likelihood to what would otherwise have been an ill-posed problem. We introduce an iterative majorization-minimization (MM) algorithm that estimates all of the model parameters. We establish that the algorithm possesses a descent property with respect to a log-likelihood objective functional and prove that the algorithm converges. Finally, we illustrate the performance of our algorithm in a simulation study and apply it to a real dataset.
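A simplified sketch in the spirit of this setup (the exact smoothed-likelihood surrogate is not reproduced; the bandwidth, kernel, and update order are assumptions): alternate responsibility weights with a weighted kernel density estimate of the unknown component.

```python
import numpy as np

def norm_pdf(t):
    # standard normal density
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

def semiparam_mixture(x, f0, p0=0.5, h=0.4, iters=20):
    """Sketch for the mixture p*f + (1-p)*f0 with f0 known and f unknown:
    compute responsibilities under the current (p, f), update p as their
    mean, and re-estimate f by a responsibility-weighted kernel density
    estimate evaluated at the data points."""
    K = norm_pdf((x[:, None] - x[None, :]) / h) / h  # kernel matrix
    f = K.mean(axis=1)                               # initial (unweighted) KDE
    p = p0
    for _ in range(iters):
        w = p * f / (p * f + (1.0 - p) * f0(x))      # responsibilities
        p = w.mean()                                 # updated mixing weight
        f = K @ w / w.sum()                          # weighted KDE of f
    return p, f
```

The smoothing step is what keeps the nonparametric component from degenerating; the paper's MM construction makes this precise with a provable descent property.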
ISBN (digital): 9783031438356
ISBN (print): 9783031438349; 9783031438356
Prism and Storm are popular model checking tools that provide a number of powerful analysis techniques for continuous-time Markov chains (CTMCs). The outcome of the analysis depends strongly on the parameter values used in the model, which govern the timing and probability of events of the resulting CTMC. However, for some applications, parameter values have to be empirically estimated from partially-observable executions. In this work, we address the problem of estimating parameter values of CTMCs expressed as Prism models from a number of partially-observable executions which might miss some dwell time measurements. The semantics of the model is expressed as a parametric CTMC (pCTMC), i.e., a CTMC whose transition rates are polynomial functions over a set of parameters. Then, building on the theory of algorithms known by the initials MM, for minorization-maximization, we present an iterative maximum likelihood estimation algorithm for pCTMCs. We present an experimental evaluation of the proposed technique on a number of CTMCs from the quantitative verification benchmark set. We conclude by illustrating the use of our technique in a case study: the analysis of the spread of COVID-19 in the presence of lockdown countermeasures.
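For contrast with the partially-observable setting the paper tackles, the fully observed special case has a closed-form maximum likelihood estimate (standard CTMC theory, not the paper's MM algorithm):

```python
from collections import defaultdict

def ctmc_rate_mle(paths):
    """MLE of CTMC transition rates from fully observed executions:
    q_hat(i, j) = N(i, j) / R(i), the count of i -> j transitions divided
    by the total dwell time spent in state i.  Each path is a list of
    (state, dwell_time) pairs in visit order."""
    count = defaultdict(int)    # N(i, j): observed i -> j transitions
    hold = defaultdict(float)   # R(i): total dwell time in state i
    for path in paths:
        for (s, t), (s_next, _) in zip(path, path[1:]):
            hold[s] += t
            count[(s, s_next)] += 1
        s_last, t_last = path[-1]   # final sojourn: time but no transition
        hold[s_last] += t_last
    return {ij: n / hold[ij[0]] for ij, n in count.items()}
```

When some dwell times are missing, these sufficient statistics are no longer directly available, which is exactly the gap the MM-based estimator fills.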
ISBN (print): 078039335X
In this paper, the general channel estimator for MIMO OFDM is derived and MM algorithms are used to reduce computational complexity. The first M of MM stands for majorization (or minorization) and the second M stands for minimization (or maximization). It is well known that EM algorithms are powerful tools for channel estimation using iterative calculation. Indeed, every EM-type algorithm is a special case of the more general class of MM algorithms. Constructing an EM-type algorithm requires skill and is complicated; in contrast, MM algorithms are easier to understand and apply. In addition to constructing channel estimation algorithms based on the MM principle, we also analyze the convergence properties of MM-type algorithms. Finally, the simulation results demonstrate the performance of MM-type channel estimation algorithms.
We propose a method for high-dimensional multivariate regression that is robust to heavy-tailed distributions and outliers while preserving estimation accuracy under normally distributed random errors. We extend Wilcoxon-type regression to a multivariate regression model as a tuning-free approach to robustness. Furthermore, the proposed method combines L1 and L2 regularization with k-means-based clustering, extending the multivariate cluster elastic net. Estimation of the regression coefficients and variable selection are performed simultaneously. Moreover, accounting for the correlation among response variables through the clustering is expected to improve estimation performance. A numerical simulation demonstrates that our proposed method outperforms the multivariate cluster method and other multiple regression methods in the case of heavy-tailed error distributions and outliers, and it remains stable under normal error distributions. Finally, we confirm the efficacy of our proposed method using gene data.
Parameter estimation in logistic regression is a well-studied problem with the Newton-Raphson method being one of the most prominent optimization techniques used in practice. A number of monotone optimization methods including minorization-maximization (MM) algorithms, expectation-maximization (EM) algorithms and related variational Bayes approaches offer useful alternatives guaranteed to increase the logistic regression likelihood at every iteration. In this article, we propose and evaluate an optimization procedure that is based on a straightforward modification of an EM algorithm for logistic regression. Our method can substantially improve the computational efficiency of the EM algorithm while preserving the monotonicity of EM and the simplicity of the EM parameter updates. By introducing an additional latent parameter and selecting this parameter to maximize the penalized observed-data log-likelihood at every iteration, our iterative algorithm can be interpreted as a parameter-expanded expectation-conditional maximization either (ECME) algorithm, and we demonstrate how to use the parameter-expanded ECME with an arbitrary choice of weights and penalty function. In addition, we describe a generalized version of our parameter-expanded ECME algorithm that can be tailored to the challenges encountered in specific high-dimensional problems, and we study several interesting connections between this generalized algorithm and other well-known methods. Performance comparisons between our method, the EM algorithm, Newton-Raphson, and several other optimization methods are presented using an extensive series of simulation studies based upon both real and synthetic datasets.
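As a concrete example of the monotone MM baseline such methods improve upon (this is the classic curvature-bound MM for logistic regression, not the paper's PX-ECME):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def logistic_mm(X, y, iters=200):
    """Monotone MM for logistic regression via Bohning's curvature bound:
    the negative Hessian X^T diag(p(1-p)) X is dominated by X^T X / 4,
    so the surrogate update
        beta <- beta + (X^T X / 4)^{-1} X^T (y - p)
    never decreases the log-likelihood, with one fixed matrix factor."""
    H = X.T @ X / 4.0                  # fixed curvature bound
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        beta = beta + np.linalg.solve(H, X.T @ (y - sigmoid(X @ beta)))
    return beta
```

Because H is constant across iterations it can be factorized once, which is the source of the per-iteration cheapness that parameter expansion then accelerates further.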
This paper derives new algorithms for signomial programming, a generalization of geometric programming. The algorithms are based on a generic principle for optimization called the MM algorithm. In this setting, one can apply the geometric-arithmetic mean inequality and a supporting hyperplane inequality to create a surrogate function with parameters separated. Thus, unconstrained signomial programming reduces to a sequence of one-dimensional minimization problems. Simple examples demonstrate that the derived MM algorithm can converge to a boundary point or to one point of a continuum of minimum points. Conditions under which the minimum point is unique or occurs in the interior of parameter space are proved for geometric programming. Convergence to an interior point occurs at a linear rate. Finally, the MM framework easily accommodates equality and inequality constraints of signomial type. For the most important special case, constrained quadratic programming, the MM algorithm involves very simple updates.
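The AM-GM separation can be sketched on a toy unconstrained objective (our own illustration, not an example from the paper): minimize f(x, y) = xy + 1/x + 1/y over x, y > 0, whose minimum is f(1, 1) = 3.

```python
def signomial_mm(x0, y0, iters=100):
    """MM for f(x, y) = x*y + 1/x + 1/y, x, y > 0.  The AM-GM inequality
    majorizes the coupling term at the current iterate (xt, yt):
        x*y <= (yt/(2*xt))*x**2 + (xt/(2*yt))*y**2,  equality at (xt, yt),
    so the surrogate separates in x and y.  Setting each 1-D derivative
    to zero, e.g. (yt/xt)*x - 1/x**2 = 0, gives the closed-form updates
    x = (xt/yt)**(1/3) and y = (yt/xt)**(1/3)."""
    x, y = x0, y0
    for _ in range(iters):
        # joint surrogate minimization: both updates use the old (x, y)
        x, y = (x / y) ** (1.0 / 3.0), (y / x) ** (1.0 / 3.0)
    return x, y
```

Each sweep replaces the coupled problem by two one-dimensional minimizations, exactly the parameter-separation pattern the abstract describes; the descent property follows because the surrogate majorizes f and touches it at the current point.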