Recently, prefabricated construction has been vigorously promoted, resulting in high demand for precast concrete (PC) components. This gives rise to a transportation scheduling optimization problem for PC components of various kinds from multiple projects. Unlike conventional cargo, PC components are characterized by shape heterogeneity, large volume, and strict delivery time limits. Based on these three characteristics, a heterogeneous fixed fleet vehicle routing problem (HFFVRP) for PC components is introduced, in which heterogeneous vehicles, the allocation of PC components to size-matching vehicles, and hybrid time windows are considered. A two-stage solution strategy based on an improved ant colony optimization (ACO) and the Dijkstra algorithm is then designed to obtain optimal vehicle routes at minimum transportation cost. The results indicate that the improved ACO-Dijkstra algorithm outperforms manual decision-making and other heuristic algorithms in obtaining optimal transportation plans for heterogeneous vehicles. Sensitivity analysis indicates that utilizing heterogeneous vehicles contributes to reductions in transportation costs and that the vehicle configuration should be adjusted with the demand scale. The proposed model and algorithm extend the theoretical basis for applications in the construction industry.
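To make the two-stage idea concrete, the sketch below implements only the shortest-path step with Dijkstra's algorithm on a toy road network (the network, node labels, and function name are illustrative, not from the paper); the improved ACO stage would then construct heterogeneous vehicle routes over the resulting pairwise cost matrix.

```python
import heapq

def dijkstra(adj, source):
    """Shortest travel costs from `source` to every node in a road
    network given as an adjacency list {node: [(neighbor, cost), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, already settled with a shorter path
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Hypothetical road network: depot 0, project sites 1-3.
network = {
    0: [(1, 4.0), (2, 9.0)],
    1: [(0, 4.0), (2, 3.0), (3, 7.0)],
    2: [(0, 9.0), (1, 3.0), (3, 2.0)],
    3: [(1, 7.0), (2, 2.0)],
}
# Pairwise cost matrix the ACO stage would search over.
cost = {u: dijkstra(network, u) for u in network}
```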
In this paper, a two-stage nonlinear identification algorithm parameterized in terms of rational basis functions with fixed basis poles is studied when disturbances are subject to mild stochastic assumptions. The two-stage algorithm is the archetype for robust estimation algorithms in H-infinity, and its first stage is linear in the data. Conditions for the consistency of both stages are derived. It is shown that the two-stage algorithm enjoys better stochastic as well as deterministic performance than linear algorithms. Copyright (C) 2001 John Wiley & Sons, Ltd.
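As an illustration of the linear-in-data first stage, the sketch below fits frequency-response samples by least squares in a rational basis with fixed poles. The data, pole locations, and noise model are assumed toy choices, and the second stage of such algorithms (mapping the stage-one estimate to a stable model) is not shown.

```python
import numpy as np

# Hypothetical setup: noisy frequency-response samples and fixed, stable
# basis poles; the first stage is a least-squares fit in the span of the
# rational basis functions 1 / (z - p_k).
omegas = np.linspace(0.1, np.pi, 50)             # frequency grid
z = np.exp(1j * omegas)
poles = np.array([0.3, 0.5 + 0.2j, 0.5 - 0.2j])  # fixed basis poles, |p| < 1

true_G = 1.0 / (z - 0.4)                         # toy "measured" system
noisy_G = true_G + 0.01 * (np.random.randn(z.size)
                           + 1j * np.random.randn(z.size))

Phi = 1.0 / (z[:, None] - poles[None, :])        # basis matrix, one column per pole
coef, *_ = np.linalg.lstsq(Phi, noisy_G, rcond=None)  # stage 1: linear in data
G_hat = Phi @ coef                               # fitted frequency response
```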
Author: Dutilleul, P. (McGill Univ, Dept Plant Sci, Lab Appl Stat, Ste Anne de Bellevue, PQ H9X 3V9, Canada)
The maximum likelihood estimation (MLE) of the parameters of the matrix normal distribution is considered. In the absence of analytical solutions of the system of likelihood equations for the among-row and among-column covariance matrices, a two-stage algorithm must be used to obtain their maximum likelihood estimators. A necessary and sufficient condition for the existence of maximum likelihood estimators is given, and the question of their stability as solutions of the system of likelihood equations is addressed. In particular, the covariance matrix parameters and their maximum likelihood estimators are defined only up to a positive multiplicative constant; only their direct product is uniquely defined. Using simulated data under two variance-covariance structures that are otherwise indistinguishable by semivariance analysis, further specific aspects of the procedure are studied: (1) the convergence of the MLE algorithm is assessed; (2) the empirical bias of the direct product of covariance matrix estimators is calculated for various sample sizes; and (3) the consistency of the estimator is evaluated by its mean Euclidean distance from the parameter, as a function of the sample size. The adequacy of the matrix normal model, including the separability of the variance-covariance structure, is tested on multiple time series of dental medicine data; other applications to real doubly multivariate data are outlined.
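A minimal sketch of the alternating two-stage updates (often called the flip-flop scheme) is given below; the stopping rule and the trace normalization used to pin down the multiplicative constant are illustrative choices, not necessarily those of the paper.

```python
import numpy as np

def matrix_normal_mle(X, n_iter=100, tol=1e-8):
    """Alternating ("flip-flop") MLE updates for the among-row (U) and
    among-column (V) covariance matrices of a matrix normal sample.
    X has shape (N, n, p): N observed n-by-p matrices.  Assumes N is
    large enough for the estimators to exist (see the paper's condition)."""
    N, n, p = X.shape
    M = X.mean(axis=0)                       # MLE of the mean matrix
    R = X - M                                # centered observations
    U, V = np.eye(n), np.eye(p)
    for _ in range(n_iter):
        Vi = np.linalg.inv(V)
        U_new = sum(r @ Vi @ r.T for r in R) / (N * p)
        Ui = np.linalg.inv(U_new)
        V_new = sum(r.T @ Ui @ r for r in R) / (N * n)
        # U and V are identified only up to a positive constant, so fix
        # the scale (here trace(U) = n) to make the iteration well defined.
        c = np.trace(U_new) / n
        U_new, V_new = U_new / c, V_new * c
        if np.allclose(U_new, U, atol=tol) and np.allclose(V_new, V, atol=tol):
            U, V = U_new, V_new
            break
        U, V = U_new, V_new
    return M, U, V

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4, 6))          # N=30 samples of 4x6 matrices
M_hat, U_hat, V_hat = matrix_normal_mle(X)
```

Note that the normalization only removes the scale indeterminacy; it is the direct (Kronecker) product of the two estimates that is uniquely defined.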
Principal Component Analysis (PCA) is a popular multivariate analytic tool that can be used for dimension reduction without losing much information. Data vectors containing a large number of features and arriving sequentially may be correlated with each other, and an effective algorithm for such situations is online PCA. Existing online PCA research revolves around proposing efficient, scalable updating algorithms that focus on compression loss only. It does not take into account the dataset size at which the arrival of further data vectors can be terminated and dimension reduction applied. It is well known that the dataset size contributes to reducing the compression loss: the smaller the dataset, the larger the compression loss, and the larger the dataset, the smaller the compression loss. However, reducing the compression loss by increasing the dataset size increases the total data collection cost. In this paper, we move beyond the scalability and updating problems related to online PCA and focus on optimising a cost-compression loss that accounts for both the compression loss and the data collection cost. We minimise the corresponding risk using a two-stage PCA algorithm. The resulting two-stage algorithm is a fast and efficient alternative to online PCA and is shown to exhibit attractive convergence properties with no assumptions on specific data distributions. Experimental studies demonstrate these results, and further illustrations are provided using real data. As an extension, a multi-stage PCA algorithm is discussed as well. Given the time complexity, the two-stage PCA algorithm is emphasised over the multi-stage PCA algorithm for online data.
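The following sketch shows one plausible reading of such a two-stage scheme: a pilot sample estimates the compression loss of rank-k PCA, a total sample size is chosen to balance an assumed estimation-error term against the data collection cost, and PCA is run once on the collected data. The risk expression, function names, and toy stream are assumptions for illustration, not the paper's exact criterion.

```python
import numpy as np

def two_stage_pca(stream, pilot_size, cost_per_vector, k, max_n=10_000):
    """Two-stage PCA sketch: stage 1 uses a pilot sample to decide when
    further data collection stops paying off; stage 2 collects up to that
    size and applies PCA once."""
    pilot = np.array([next(stream) for _ in range(pilot_size)])
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(pilot, rowvar=False)))[::-1]
    tail = eigvals[k:].sum()               # pilot estimate of compression loss
    ns = np.arange(pilot_size, max_n + 1)
    # Assumed risk: loss plus an O(1/sqrt(n)) error term plus linear cost.
    risk = tail + eigvals[0] / np.sqrt(ns) + cost_per_vector * ns
    n_star = int(ns[np.argmin(risk)])      # stage-1 output: when to stop
    data = np.vstack([pilot] + [next(stream) for _ in range(n_star - pilot_size)])
    data -= data.mean(axis=0)
    _, _, Vt = np.linalg.svd(data, full_matrices=False)
    return data @ Vt[:k].T, n_star         # rank-k scores and chosen sample size

rng = np.random.default_rng(0)
stream = (rng.standard_normal(20) for _ in iter(int, 1))  # endless toy stream
scores, n_star = two_stage_pca(stream, pilot_size=100,
                               cost_per_vector=1e-4, k=5)
```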
Introduction: Identifying predictors of patient outcomes evaluated over time may require modeling interactions among variables while addressing within-subject correlation. Generalized linear mixed models (GLMMs) and generalized estimating equations (GEEs) address within-subject correlation, but identifying interactions can be difficult if they are not hypothesized a priori. We evaluate the performance of several variable selection approaches for clustered binary outcomes to provide guidance for choosing between the methods. Methods: We conducted simulations comparing stepwise selection, penalized GLMM, boosted GLMM, and boosted GEE for variable selection over main effects and two-way interactions in data with repeatedly measured binary outcomes, and we evaluated a two-stage approach to reduce bias and error in parameter estimates. We compared these approaches in real data applications: hypothermia during surgery and treatment response in lupus nephritis. Results: Penalized and boosted approaches recovered correct predictors and interactions more frequently than stepwise selection. Penalized GLMM recovered correct predictors more often than boosting but included many spurious predictors. Boosted GLMM yielded parsimonious models and identified correct predictors well at large sample and effect sizes, but required excessive computation time. Boosted GEE was computationally efficient and selected relatively parsimonious models, offering a compromise between computation and parsimony. The two-stage approach reduced the bias and error in regression parameters under all approaches. Conclusion: Penalized and boosted approaches are effective for variable selection in data with clustered binary outcomes. The two-stage approach reduces bias and error and should be applied regardless of the selection method. We provide guidance for choosing the most appropriate method in real applications.
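The two-stage idea of "select, then refit without penalty" can be sketched as below. Since the paper's exact selectors are not reproduced here, an L1-penalized working logistic model stands in for the penalized/boosted GLMM or GEE selector in stage one, and stage two refits an unpenalized GEE on the selected terms to reduce shrinkage bias; all data and tuning values are toy assumptions.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Toy clustered binary data: 100 subjects, 5 visits, 6 candidate predictors.
n_sub, n_vis, p = 100, 5, 6
groups = np.repeat(np.arange(n_sub), n_vis)
X = rng.standard_normal((n_sub * n_vis, p))
eta = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 1.0 * X[:, 0] * X[:, 1]
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))

# Candidate design: main effects plus all two-way interactions.
design = PolynomialFeatures(degree=2, interaction_only=True,
                            include_bias=False).fit_transform(X)

# Stage 1 (selection): an L1-penalized working logistic model stands in
# for the penalized/boosted GLMM or GEE selectors compared in the paper.
sel = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(design, y)
keep = np.flatnonzero(sel.coef_.ravel() != 0.0)

# Stage 2 (refit): unpenalized GEE on the selected terms only, which is
# the step that reduces shrinkage bias in the parameter estimates.
refit = sm.GEE(y, sm.add_constant(design[:, keep]), groups=groups,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable()).fit()
print(refit.summary())
```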