Predictive maintenance needs to forecast the number of rejections at any overhaul point before any failure occurs in order to accurately and proactively take adequate maintenance action. In healthcare, prediction has been applied to foretell when and how to administer medication to improve the health condition of the patient. The same is true for maintenance, where the application of prognostics can support better decisions. In this paper, an overview of prognostic maintenance strategies is presented. The proposed data-driven prognostics approach employs a statistical technique combining (i) parameter estimation methods applied to time-to-failure data to estimate the relevant statistical model parameters and (ii) prognostic modelling incorporating the reliability Weibull Cumulative Distribution Function to predict part rejection, replacement, and reuse. The analysis of the modelling uses synthetic data validated by industry domain experts. The outcome of the prediction can further offer solutions to designers, manufacturers and operators of industrial product-service systems. The novelty in this paper is the development of the through-life performance approach. The approach ascertains when the system needs to undergo maintenance, repair and overhaul before failure occurs.
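To make the Weibull-based prognostics step concrete, the following minimal sketch (not the authors' implementation) fits a two-parameter Weibull distribution to synthetic time-to-failure data by maximum likelihood and uses its CDF to estimate the expected fraction of rejections at a hypothetical overhaul point; the data values and overhaul interval are assumptions for illustration only.

```python
# Minimal sketch of Weibull-based prognostics: fit a two-parameter Weibull
# distribution to time-to-failure data, then use its CDF to estimate the
# expected fraction of parts rejected by a given overhaul point.
import numpy as np
from scipy import stats

# Hypothetical time-to-failure observations (operating hours); synthetic data.
ttf = np.array([1200, 1500, 1650, 1800, 1950, 2100, 2300, 2500, 2700, 3000])

# Maximum-likelihood estimation of shape (beta) and scale (eta),
# with the location parameter fixed at zero.
beta, loc, eta = stats.weibull_min.fit(ttf, floc=0)

# Weibull CDF F(t) = 1 - exp(-(t/eta)**beta) gives the expected
# cumulative fraction of rejections by time t.
overhaul_point = 2000.0  # hours; assumed overhaul interval
fraction_rejected = stats.weibull_min.cdf(overhaul_point, beta, loc=0, scale=eta)

print(f"shape beta = {beta:.2f}, scale eta = {eta:.0f} h")
print(f"expected fraction rejected by {overhaul_point:.0f} h: {fraction_rejected:.1%}")
```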
Predictive risk modelling is a computational method used to generate probabilities correlating events. The output of such systems is typically represented by a statistical score derived from various related and often arbitrary datasets. In many cases, the information generated by such systems is treated as a form of evidence to justify further action. This paper examines the nature of the information generated by such systems and compares it with more orthodox notions of evidence found in epistemology. The paper focuses on a specific example to illustrate the issues: the New Zealand Government has proposed implementing a predictive risk modelling system which purportedly identifies children at risk of a maltreatment event before the age of five. Timothy Williamson's (2002) conception of epistemology places a requirement on knowledge that it be explanatory. Furthermore, Williamson argues that knowledge is equivalent to evidence. This approach is compared with the claim that the output of such computational systems constitutes evidence. While there may be some utility in using predictive risk modelling systems, I argue that, since an explanatory account of the output of such algorithms that meets Williamson's requirements cannot be given, doubt is cast on the resulting statistical scores as constituting evidence on generally accepted epistemic grounds. The algorithms employed in such systems are geared towards identifying patterns that turn out to be good correlations. However, rather than providing information about specific individuals and their exposure to risk, a more valid explanation of a high probability score is that the particular variables related to incidents of maltreatment are simply more prevalent amongst certain subgroups of a population than amongst others. The paper concludes that any justification of the information generated by such systems is generalised and pragmatic at best, and that applying this information to individual cases raises various ethical concerns.
A shelf life model based on storage temperatures was developed for a nutricereal-based fermented baby food formulation. The formulated baby food samples were packaged and stored at 10, 25, 37 and 45 °C for a test storage period of 180 days. A shelf life study was conducted using consumer and semi-trained panels, along with chemical analysis (moisture and acidity). The chemical parameters (moisture and titratable acidity) were found inadequate for determining the shelf life of the formulated product. Weibull hazard analysis was used to determine the shelf life of the product based on sensory evaluation. Considering 25% and 50% rejection probability, the shelf life of the baby food formulation was predicted to be 98 and 322 days, 84 and 271 days, 71 and 221 days, and 58 and 171 days for the samples stored at 10, 25, 37 and 45 °C, respectively. A shelf life equation was proposed using the rejection times obtained from the consumer study. Finally, the formulated baby food samples were subjected to microbial analysis for the predicted shelf life period and were found microbiologically safe for consumption during the storage period of 360 days.
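As a worked illustration of how shelf life follows from a Weibull model of sensory rejection times: the shelf life at rejection probability p is the inverse CDF, t_p = eta * (-ln(1 - p))^(1/beta). The shape and scale values below are assumed for illustration and are not the study's fitted parameters.

```python
# Illustrative shelf-life calculation from an assumed Weibull model of
# sensory rejection times (assumed parameters, not the study's values).
import numpy as np

def shelf_life(eta, beta, p):
    """Storage time at which a fraction p of consumers reject the product."""
    return eta * (-np.log(1.0 - p)) ** (1.0 / beta)

# Hypothetical shape/scale values for one storage temperature.
beta, eta = 1.4, 450.0  # beta: shape, eta: scale in days

for p in (0.25, 0.50):
    print(f"{p:.0%} rejection -> shelf life approx. {shelf_life(eta, beta, p):.0f} days")
```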
Accurate estimation of running and dwell times is important for all levels of planning and control of railway traffic. The availability of historical track occupation data with a high degree of granularity inspired a data-driven approach for estimating these process times. In this paper we present and compare the accuracy of several approaches to model running and dwell times in railway traffic. Three global predictive model approaches are presented based on advanced statistical learning techniques: LTS robust linear regression, regression trees and random forests. Local models are also presented for a particular train line, station or block section, based on LTS robust linear regression with some refinements. The models are validated and compared using a test set independent from the training set. The applicability of the proposed data-driven approach for real-time applications is demonstrated by the accuracy of the obtained estimates and the low computation times. Overall, the local models perform best both in accuracy and computation time.
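A rough sketch of this model comparison on synthetic data follows; scikit-learn has no LTS estimator, so HuberRegressor stands in for the robust linear model, and the feature names and data-generating formula are invented for illustration.

```python
# Compare a robust linear model, a regression tree and a random forest on
# synthetic running-time data (stand-in for the paper's track occupation data).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.uniform(1, 10, n),      # block section length (km), illustrative
    rng.uniform(60, 160, n),    # scheduled speed (km/h), illustrative
    rng.integers(0, 2, n),      # peak-hour indicator, illustrative
])
y = X[:, 0] / X[:, 1] * 60 + 0.5 * X[:, 2] + rng.normal(0, 0.3, n)  # running time (min)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "robust linear": HuberRegressor(),
    "regression tree": DecisionTreeRegressor(max_depth=6, random_state=0),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    mae = mean_absolute_error(y_te, model.predict(X_te))
    print(f"{name:15s} MAE = {mae:.2f} min")
```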
Predictive modelling of gene expression provides a powerful framework for exploring the regulatory logic underpinning transcriptional regulation. Recent studies have demonstrated the utility of such models in identifying dysregulation of gene and miRNA expression associated with abnormal patterns of transcription factor (TF) binding or nucleosomal histone modifications (HMs). Despite the growing popularity of such approaches, a comparative review of the various modelling algorithms and feature extraction methods is lacking. We define and compare three methods of quantifying pairwise gene-TF/HM interactions and discuss their suitability for integrating the heterogeneous chromatin immunoprecipitation (ChIP)-seq binding patterns exhibited by TFs and HMs. We then construct log-linear and ε-support vector regression models from various mouse embryonic stem cell (mESC) and human lymphoblastoid (GM12878) data sets, considering both ChIP-seq- and position weight matrix (PWM)-derived in silico TF-binding. The two algorithms are evaluated both in terms of their modelling prediction accuracy and ability to identify the established regulatory roles of individual TFs and HMs. Our results demonstrate that TF-binding and HMs are highly predictive of gene expression as measured by mRNA transcript abundance, irrespective of algorithm or cell type selection and considering both ChIP-seq and PWM-derived TF-binding. As we encourage other researchers to explore and develop these results, our framework is implemented using open-source software and made available as a preconfigured bootable virtual environment.
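A schematic version of the two model families on a synthetic gene-by-feature matrix of TF/HM signals (not the mESC or GM12878 data) might look as follows; the feature counts, data-generating assumptions and hyperparameters are illustrative only.

```python
# Schematic comparison of a log-linear model and epsilon-SVR for predicting
# (log-transformed) expression from synthetic TF/HM signal features.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n_genes, n_features = 1000, 12                    # e.g. a handful of TFs and HMs
X = rng.gamma(2.0, 1.0, (n_genes, n_features))    # ChIP-seq-like signal strengths
w = rng.normal(0, 1, n_features)
expression = np.exp(X @ w * 0.1 + rng.normal(0, 0.3, n_genes))  # mRNA abundance
y = np.log1p(expression)                          # log-linear response

log_linear = make_pipeline(StandardScaler(), LinearRegression())
svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", epsilon=0.1))

for name, model in [("log-linear", log_linear), ("epsilon-SVR", svr)]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:12s} mean CV R^2 = {r2:.2f}")
```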
The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
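The speed gap between the two SVM implementations can be sensed with scikit-learn, whose LinearSVR wraps LIBLINEAR and whose SVR wraps libsvm; in the sketch below, sparse random binary-like features merely stand in for the signatures descriptor, the targets are random stand-ins for logD/solubility values, and the sizes are scaled well down from the paper's 1.2 million structures.

```python
# Rough timing comparison: LIBLINEAR-backed LinearSVR vs libsvm-backed SVR
# with an RBF kernel, on synthetic sparse features.
import time
import numpy as np
from scipy import sparse
from sklearn.svm import SVR, LinearSVR

rng = np.random.default_rng(0)
n_samples, n_features = 5_000, 5_000      # scaled down for a quick demonstration
X = sparse.random(n_samples, n_features, density=0.01, format="csr", random_state=0)
y = rng.normal(0, 1, n_samples)           # stand-in for logD / solubility values

for name, model in [("LIBLINEAR (LinearSVR)", LinearSVR(max_iter=5000)),
                    ("libsvm RBF (SVR)", SVR(kernel="rbf"))]:
    t0 = time.perf_counter()
    model.fit(X, y)
    print(f"{name:22s} fit time: {time.perf_counter() - t0:.1f} s")
```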
This study presents a genetic algorithm as an alternative to conventional approaches for predicting the optimal values of machining parameters leading to minimum surface roughness. A real machining experiment is used to verify the capability of the proposed model for the prediction and optimization of surface roughness. The values predicted by the proposed model show good agreement with the experimental values. The analysis in this study shows that the proposed approach is capable of determining the optimum machining parameters.
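A toy genetic algorithm in this spirit, searching cutting speed, feed rate and depth of cut for minimum surface roughness, is sketched below; the roughness function is an invented surrogate, and the parameter bounds and GA settings are assumptions, not the paper's experimental model.

```python
# Toy genetic algorithm minimizing a surrogate surface-roughness function
# over cutting speed, feed rate and depth of cut (all values illustrative).
import numpy as np

rng = np.random.default_rng(42)
BOUNDS = np.array([[100.0, 300.0],   # cutting speed (m/min)
                   [0.05, 0.40],     # feed rate (mm/rev)
                   [0.5, 2.5]])      # depth of cut (mm)

def roughness(p):
    """Invented surrogate for surface roughness Ra (lower is better)."""
    v, f, d = p
    return 0.8 + 40 * f**2 + 0.3 * d - 0.002 * v + 0.05 * f * d * v / 100

def evolve(pop_size=60, generations=100, mut_rate=0.15):
    dim = len(BOUNDS)
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], (pop_size, dim))
    for _ in range(generations):
        fitness = np.apply_along_axis(roughness, 1, pop)
        # Tournament selection: keep the better of two random individuals.
        a, b = rng.integers(0, pop_size, (2, pop_size))
        parents = pop[np.where(fitness[a] < fitness[b], a, b)]
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, dim)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Gaussian mutation, clipped to the parameter bounds.
        mutate = rng.random((pop_size, dim)) < mut_rate
        noise = rng.normal(0, 0.05, (pop_size, dim)) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(children + mutate * noise, BOUNDS[:, 0], BOUNDS[:, 1])
    best = pop[np.argmin(np.apply_along_axis(roughness, 1, pop))]
    return best, roughness(best)

best, ra = evolve()
print(f"best parameters: speed={best[0]:.0f} m/min, feed={best[1]:.3f} mm/rev, "
      f"depth={best[2]:.2f} mm -> Ra ~= {ra:.2f} um")
```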
Predictive modelling in drug discovery is challenging to automate, as it often contains multiple analysis steps and may involve cross-validation and parameter tuning that create complex dependencies between tasks. With large-scale data or when using computationally demanding modelling methods, e-infrastructures such as high-performance or cloud computing are required, adding to the existing challenges of fault-tolerant automation. Workflow management systems can aid with many of these challenges, but the currently available systems lack the functionality needed to enable agile and flexible predictive modelling. Here we present an approach inspired by elements of the flow-based programming paradigm, implemented as an extension of the Luigi system which we name SciLuigi. We also discuss the experiences from using the approach when modelling a large set of biochemical interactions using a shared computer cluster.
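For readers unfamiliar with Luigi, the minimal plain-Luigi sketch below shows the dependency mechanism that SciLuigi builds on; it deliberately does not use SciLuigi's own named-port API, and the task names and file paths are illustrative.

```python
# Minimal plain-Luigi pipeline: TrainModel depends on PrepareData, and each
# task declares its output target so completed steps are not re-run.
import luigi

class PrepareData(luigi.Task):
    dataset = luigi.Parameter()

    def output(self):
        return luigi.LocalTarget(f"data/{self.dataset}.cleaned.csv")

    def run(self):
        with self.output().open("w") as f:
            f.write("placeholder for cleaned training data\n")

class TrainModel(luigi.Task):
    dataset = luigi.Parameter()

    def requires(self):
        return PrepareData(dataset=self.dataset)

    def output(self):
        return luigi.LocalTarget(f"models/{self.dataset}.model")

    def run(self):
        with self.input().open() as fin, self.output().open("w") as fout:
            fout.write(f"model trained on {len(fin.read())} bytes of input\n")

if __name__ == "__main__":
    luigi.build([TrainModel(dataset="interactions")], local_scheduler=True)
```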
Accurate estimation of soil hydraulic conductivity (K) is crucial in groundwater hydrology and geo-environmental engineering applications. This study introduces novel K predictive modelling for sandy soils across a wide gradation spectrum, employing stepwise multiple linear regression (SMLR) and least absolute shrinkage and selection operator (LASSO) techniques. Using an 81-sample dataset with key variables, including particle sizes, gradation parameters, porosity, and dry density, this study addresses limitations in existing K prediction methods. Correlation analysis reveals variable associations and multicollinearity issues, necessitating feature selection and the development of SMLR and LASSO regression models. While both models perform well on the training dataset, LASSO excels in mitigating overfitting, achieving high coefficients of determination (R²) of 0.82 and 0.87 on the training and testing datasets, respectively. Comparative analyses with existing models in the literature underscore LASSO's superiority in approximating laboratory-measured K-values, establishing it as the preferred choice for hydraulic conductivity estimation.
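A sketch of the LASSO step with cross-validated penalty selection is given below; the feature names mirror the variable types mentioned in the abstract, but the data and the generating formula are synthetic stand-ins, not the 81-sample laboratory dataset.

```python
# LASSO regression with cross-validated penalty selection on standardized
# gradation/porosity/density features (synthetic stand-in data).
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n = 81
features = {
    "d10_mm": rng.uniform(0.05, 0.8, n),
    "d50_mm": rng.uniform(0.2, 2.0, n),
    "Cu": rng.uniform(1.5, 12.0, n),          # coefficient of uniformity
    "porosity": rng.uniform(0.25, 0.45, n),
    "dry_density": rng.uniform(1.4, 1.9, n),  # g/cm^3
}
X = np.column_stack(list(features.values()))
log_K = (2.0 * np.log10(features["d10_mm"]) + 3.0 * features["porosity"]
         - 4.0 + rng.normal(0, 0.15, n))      # synthetic log10(K), m/s

X_tr, X_te, y_tr, y_te = train_test_split(X, log_K, test_size=0.25, random_state=7)
model = make_pipeline(StandardScaler(), LassoCV(cv=5, random_state=7))
model.fit(X_tr, y_tr)

print("selected alpha:", round(model[-1].alpha_, 4))
print("test R^2:", round(r2_score(y_te, model.predict(X_te)), 2))
```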