A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes that automatically translate input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. The task of verifying the accuracy of such translated files, for instance by nuclear regulators, can therefore be very difficult and cumbersome. Translation errors may consequently go undetected, which may have disastrous consequences later on if a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore greatly ease the translation and verification process. It also removes human error from the process, which may significantly enhance accuracy and reliability. The developed algorithm also automatically creates a verification log file that permanently records the name and value of each variable used, as well as the meanings of all its possible values. This should greatly facilitate reactor licensing applications. (C) 2013 Elsevier B.V. All rights reserved.
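As a rough illustration of the translator-plus-log idea the abstract describes, here is a minimal Python sketch; the field names, the mapping table, and the log format are hypothetical stand-ins, not the actual VSOP-A or VSOP 99/05 input formats.

```python
# Hypothetical sketch: map each source field to a target field and append
# every variable's name, value, and meaning to a verification log.
# FIELD_MAP entries are illustrative placeholders, not real VSOP fields.

FIELD_MAP = {
    # source_key: (target_key, meaning)
    "NREG":  ("n_regions",   "number of spectrum regions"),
    "TEMPM": ("moderator_T", "moderator temperature [K]"),
}

def translate(source: dict, log_path: str = "verification.log") -> dict:
    target = {}
    with open(log_path, "w") as log:
        for src_key, (dst_key, meaning) in FIELD_MAP.items():
            value = source[src_key]
            target[dst_key] = value
            # Permanent record of name, value, and meaning for verification.
            log.write(f"{src_key} -> {dst_key} = {value!r}  ({meaning})\n")
    return target

if __name__ == "__main__":
    old_model = {"NREG": 12, "TEMPM": 570.0}
    print(translate(old_model))
```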
The paper presents IRSN's results for the OECD/NEA WPEC Subgroup 33 benchmark exercise, which focuses on the combined use of differential and integral data through an adjustment technique. The results were generated by the BERING code using different sets of input data: integral parameters and sensitivity coefficients for fast benchmark experiments and applications computed with the deterministic ERANOS code and the Monte Carlo SCALE sequences, COMMARA-2.0 and JENDL-4.0 cross-section covariance data, and integral correlations provided by JAEA. The paper compares the adjustment results obtained with the different input data sets and with the two adjustment algorithms implemented in BERING.
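For readers unfamiliar with the adjustment technique, such exercises are commonly formulated as a generalized least-squares update; the schematic form below uses our own notation and is not necessarily the formulation implemented in BERING.

```latex
% Schematic generalized least-squares cross-section adjustment (notation ours):
%   \sigma_0  prior cross sections,  M  their covariance matrix,
%   S         sensitivity coefficients of the integral parameters,
%   E, C      measured and calculated integral values,  V_E  their covariance.
\[
  \sigma' = \sigma_0
          + M S^{\mathsf{T}} \left( S M S^{\mathsf{T}} + V_E \right)^{-1}
            \bigl( E - C(\sigma_0) \bigr)
\]
```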
Matrix completion, which estimates missing values in visual data, is an important topic in computer vision. Most recent studies have focused on low-rank matrix approximation via the nuclear norm. However, visual data such as images are rich in texture, which may not be well approximated by a low-rank constraint. In this paper, we propose a novel matrix completion method that combines the nuclear norm with a local geometric regularizer to solve the matrix completion problem for redundant texture images. We mainly consider one of the most commonly used graph regularizers: the total variation norm, a widely used measure for enforcing intensity continuity and recovering piecewise smooth images. Experimental results on real texture images show that the proposed method obtains encouraging results compared to state-of-the-art methods.
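A plausible form of the combined objective is sketched below; the trade-off parameter λ and the equality constraint on observed entries are our assumptions, and the paper's exact formulation may differ.

```latex
% Schematic combined objective (\lambda and the constraint form are assumptions):
%   \|X\|_*          nuclear norm (low-rank prior)
%   \mathrm{TV}(X)   total-variation norm (piecewise-smooth texture prior)
%   P_\Omega         projection onto the observed entries of M
\[
  \min_{X} \; \|X\|_* + \lambda \,\mathrm{TV}(X)
  \quad \text{s.t.} \quad P_\Omega(X) = P_\Omega(M)
\]
```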
Our objective was to explore artificial neural networks (ANNs) as a possible tool for dosage individualization of warfarin. Demographic, clinical, and genetic data were gathered from a previously collected cohort of patients on a stable warfarin dosage who achieved an observed international normalized ratio of 2-3. Data from a cohort of 3,415 patients were used to develop an ANN dosing algorithm, and data from another cohort of 856 patients were used to validate it. The clinical significance of the ANN dosing algorithm was evaluated by calculating the percentage of patients whose predicted warfarin dosage was within 20% of the actual stable therapeutic dose; the clinical significance was also compared with that of a previously published dosing algorithm. A feed-forward neural network with three layers successfully predicted the ideal warfarin dosage in 48% of the patients. The neural network model explained 48% and 43% of the dosage variability observed among patients in the derivation and validation cohorts, respectively. ANN analysis identified several predictors of warfarin dosage, including race; age; height; weight; cytochrome P450 (CYP) 2C9 genotype; VKORC1 genotype; administration of sulfonamides, azole antifungals, or macrolides; administration of carbamazepine, phenytoin, or rifampicin; and administration of amiodarone. An ANN was applied to develop a warfarin dosing algorithm. The proposed dosing algorithm has the potential to recommend warfarin dosages close to the appropriate ones.
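A minimal sketch of such a three-layer feed-forward dose model, assuming scikit-learn, is shown below; the synthetic features, hidden-layer size, and training data are illustrative placeholders, not the cohort's actual encoding.

```python
# Sketch of a feed-forward dose regressor with one hidden layer (three layers
# counting input and output), plus the abstract's "within 20% of the actual
# dose" criterion. Features and data here are synthetic placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Hypothetical predictors: age, height, weight, CYP2C9 / VKORC1 genotype
# codes, and binary co-medication flags (e.g. amiodarone).
X = rng.random((200, 7))
y = 2.0 + 5.0 * X[:, 0] + rng.normal(0, 0.5, 200)  # synthetic weekly dose

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

pred = model.predict(X)
within_20pct = np.mean(np.abs(pred - y) / y <= 0.20)
print(f"fraction within 20% of actual dose: {within_20pct:.2f}")
```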
Topological centrality is a significant measure for characterising the relative importance of a node in a complex network. For directed networks that model dynamic processes, however, it is of more practical importance to quantify a vertex's ability to dominate (control or observe) the state of other vertices. In this paper, based on the determination of controllable and observable subspaces under the global minimum-cost condition, we introduce a novel direction-specific index, domination centrality, to assess the intervention capabilities of vertices in a directed network. Statistical studies demonstrate that the domination centrality is, to a great extent, encoded by the underlying network's degree distribution and that most network positions through which one can intervene in a system are vertices with high domination centrality rather than network hubs. To analyse the interaction and functional dependence between vertices when they are used to dominate a network, we define the domination similarity and detect significant functional modules in glossary and metabolic networks through clustering analysis. The experimental results provide strong evidence that our indices are effective and practical in accurately depicting the structure of directed networks.
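As a loose illustration of the direction-specific viewpoint, the toy sketch below ranks vertices by a crude reachability proxy; it is not the paper's minimum-cost controllable/observable-subspace computation, only a hint at why directed influence differs from undirected centrality.

```python
# Crude proxy: count how many vertices a node can reach (control side) or
# be reached from (observe side) in a directed network. This is NOT the
# paper's domination centrality; it only illustrates per-vertex directedness.
import networkx as nx

G = nx.gnp_random_graph(50, 0.06, seed=1, directed=True)

control_proxy = {v: len(nx.descendants(G, v)) for v in G}
observe_proxy = {v: len(nx.ancestors(G, v)) for v in G}

top = sorted(control_proxy, key=control_proxy.get, reverse=True)[:5]
print("vertices with largest downstream reach:", top)
```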
Network virtualisation is regarded as a fundamental technology for eradicating the ossified architecture of the Internet. Virtual network embedding (VNE) instantiates virtual networks on a substrate network to carry as many services as possible, whi...
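Since the abstract is cut off, only the VNE concept itself can be illustrated; the toy sketch below shows a classic greedy node-mapping heuristic, not this paper's algorithm.

```python
# Toy VNE node mapping: greedily place each virtual node on the substrate
# node with the most remaining CPU capacity. Capacities and demands are
# made-up numbers; link mapping is omitted entirely.
substrate_cpu = {"A": 10, "B": 8, "C": 6}
virtual_demand = {"v1": 4, "v2": 5}

mapping = {}
for vn, need in sorted(virtual_demand.items(), key=lambda kv: -kv[1]):
    host = max(substrate_cpu, key=substrate_cpu.get)
    if substrate_cpu[host] < need:
        raise RuntimeError(f"cannot embed {vn}")
    mapping[vn] = host
    substrate_cpu[host] -= need   # consume the embedded capacity

print(mapping)  # e.g. {'v2': 'A', 'v1': 'B'}
```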
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement. (c) 2013 The Authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
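For concreteness, a minimal sketch of greedy nearest-neighbor caliper matching without replacement follows; the caliper of 0.2 standard deviations of the logit of the propensity score is a common convention assumed here, not a detail taken from this paper's simulations.

```python
# Greedy nearest-neighbour caliper matching without replacement on the
# logit of the propensity score; treated subjects are processed in
# "lowest to highest" order, one of the sub-algorithms the paper examines.
import numpy as np

def greedy_caliper_match(ps_treated, ps_control, caliper_sd=0.2):
    logit = lambda p: np.log(p / (1 - p))
    lt, lc = logit(np.asarray(ps_treated)), logit(np.asarray(ps_control))
    caliper = caliper_sd * np.std(np.concatenate([lt, lc]))
    available = set(range(len(lc)))
    pairs = []
    for i in np.argsort(lt):                     # lowest-to-highest order
        if not available:
            break
        j = min(available, key=lambda k: abs(lt[i] - lc[k]))
        if abs(lt[i] - lc[j]) <= caliper:        # accept only within caliper
            pairs.append((i, j))
            available.remove(j)                  # without replacement
    return pairs

print(greedy_caliper_match([0.3, 0.5, 0.7], [0.28, 0.45, 0.9, 0.51]))
```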
In practical systems, face recognition under unconstrained conditions is a very challenging task; input images are first pre-processed and initially aligned by a face detection algorithm. However, some residual localisation errors remain after the initial alignment. If these errors are not taken into account, the recognition performance of most face recognition algorithms degrades greatly. Generally, when designing a practical face recognition system, one must trade off tolerance of residual errors against discriminating capability. Although it is feasible to apply an iterative alignment algorithm to fine-tune the alignment, doing so increases the computational load significantly. In this study, we propose an adaptive two-stage face recognition system consisting of two block-based recognition stages: a first stage with a relatively large cell size (i.e. the size of the local regions) to provide sufficient tolerance for geometric errors, followed by a second stage with a smaller cell size to accurately evaluate a most probable candidate subset, which is adaptively determined according to the proposed confidence measure. In addition, an iterative gradient-based alignment algorithm is incorporated into the two-stage system to refine the alignment, so that recognition performance improves while computational load is saved.
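A hedged toy sketch of the coarse-to-fine idea follows; the cell sizes, histogram features, distance, and confidence rule are all illustrative placeholders rather than the system's actual components.

```python
# Coarse-to-fine identification: a first pass with large cells ranks all
# gallery subjects, a toy confidence rule decides how many candidates
# survive, and a second pass with small cells re-scores only that subset.
import numpy as np

def block_histogram(img, cell):
    h, w = img.shape
    feats = [np.histogram(img[r:r+cell, c:c+cell], bins=8, range=(0, 1))[0]
             for r in range(0, h, cell) for c in range(0, w, cell)]
    return np.concatenate(feats).astype(float)

def two_stage_identify(probe, gallery, coarse=16, fine=4):
    d1 = np.array([np.linalg.norm(block_histogram(probe, coarse) -
                                  block_histogram(g, coarse)) for g in gallery])
    order = np.argsort(d1)
    # Toy confidence: keep more candidates when the top two scores are close.
    margin = (d1[order[1]] - d1[order[0]]) / (d1[order[1]] + 1e-9)
    subset = order[:1] if margin > 0.3 else order[:3]
    d2 = [np.linalg.norm(block_histogram(probe, fine) -
                         block_histogram(gallery[i], fine)) for i in subset]
    return subset[int(np.argmin(d2))]

rng = np.random.default_rng(0)
gallery = [rng.random((32, 32)) for _ in range(5)]
print("identity:", two_stage_identify(gallery[2] + 0.01, gallery))
```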
The dynamics of pneumatic systems are highly nonlinear and normally subject to a large extent of model uncertainty; the precision motion trajectory tracking control of pneumatic cylinders therefore remains a challenge. In this paper, two typical nonlinear controllers, an adaptive controller and a deterministic robust controller, are constructed first. Since each has both benefits and limitations, an adaptive robust controller (ARC) is further proposed. The ARC combines the first two controllers: it employs online recursive least squares estimation (RLSE) to reduce the extent of parametric uncertainties, and it uses the robust control method to attenuate the effects of parameter estimation errors, unmodeled dynamics, and disturbances. To resolve the conflict between the robust control design and the parameter adaptation law design, projection mapping is used to condition the RLSE algorithm so that the parameter estimates are kept within a known bounded convex set. Theoretically, the ARC possesses the advantages of both adaptive control and deterministic robust control, so an even better tracking performance can be expected. Extensive comparative experimental results are presented to illustrate the achievable performance of the three proposed controllers and the robustness of their performance to parameter variations and sudden disturbances.
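A minimal sketch of the projection-conditioned RLSE idea is given below, assuming a simple box as the known bounded convex set; the bounds, forgetting factor, and regression model are illustrative assumptions, not the paper's pneumatic-cylinder model.

```python
# Standard recursive least squares with a forgetting factor, followed by
# projection of the parameter estimate onto a known box, so the estimates
# stay inside a bounded convex set as the abstract describes.
import numpy as np

class ProjectedRLS:
    def __init__(self, theta0, lower, upper, lam=0.98):
        self.theta = np.asarray(theta0, float)
        self.P = np.eye(len(theta0)) * 100.0     # large initial covariance
        self.lower, self.upper = np.asarray(lower), np.asarray(upper)
        self.lam = lam                           # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)     # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)   # RLS update
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        # Projection mapping: keep estimates inside the known bounded set.
        self.theta = np.clip(self.theta, self.lower, self.upper)
        return self.theta

# Identify y = 2.0*u1 - 0.5*u2 with estimates confined to the box [-1, 3]^2.
rng = np.random.default_rng(0)
est = ProjectedRLS([0.0, 0.0], [-1.0, -1.0], [3.0, 3.0])
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 0.5 * phi[1] + rng.normal(0, 0.01)
    est.update(phi, y)
print("estimated parameters:", est.theta)
```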
Classification of subtomograms obtained by cryo-electron tomography (cryo-ET) is a powerful approach for studying the conformational landscapes of macromolecular complexes in situ. Major challenges in subtomogram classification are the low signal-to-noise ratio (SNR) of cryo-tomograms, their incomplete angular sampling, the unknown number of classes, and the typically unbalanced abundances of structurally distinct complexes. Here, we propose a clustering algorithm named AC3D that is based on a similarity measure which automatically focuses on the areas of major structural discrepancy between the respective subtomogram class averages. Furthermore, we incorporate a spherical-harmonics-based fast subtomogram alignment algorithm, which provides a significant speedup. Assessment of our approach on simulated data sets indicates substantially increased classification accuracy of the presented method compared to two state-of-the-art approaches. Application to experimental subtomograms depicting endoplasmic-reticulum-associated ribosomal particles shows that AC3D is well suited to deconvolute the compositional heterogeneity of macromolecular complexes in situ.
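As a rough illustration of the "focused" similarity idea, the sketch below weights voxels by how strongly the current class averages disagree; AC3D's actual measure, missing-wedge handling, and spherical-harmonics alignment differ.

```python
# Toy "focused" similarity: weight each voxel by the variance across the
# current class averages, so comparisons concentrate on regions of major
# structural discrepancy. Illustrative only, not AC3D's actual measure.
import numpy as np

def focus_mask(class_averages):
    # Per-voxel variance across class averages highlights discrepant regions.
    var = np.var(np.stack(class_averages), axis=0)
    return var / (var.max() + 1e-12)

def focused_similarity(a, b, mask):
    # Masked, mean-centred cross-correlation between two volumes.
    wa, wb = (a - a.mean()) * mask, (b - b.mean()) * mask
    return float(np.sum(wa * wb) /
                 (np.linalg.norm(wa) * np.linalg.norm(wb) + 1e-12))

rng = np.random.default_rng(0)
avg1, avg2 = rng.random((16, 16, 16)), rng.random((16, 16, 16))
mask = focus_mask([avg1, avg2])
print("focused similarity:", focused_similarity(avg1, avg2, mask))
```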