Mirroring and replication are common techniques for ensuring fault-tolerance and resiliency of client/server applications. Because such mirroring and replication procedures are not usually automated, they tend to be cumbersome. In this paper, we present an architecture in which the identification of sites for replicated servers and the generation of replicas are both automated. The design is based on a self-configuring mesh of computers and a communication mechanism between nodes that operates on a rooted spanning tree. A query-search component uses Java(TM) language-based query capsules traveling along the branches of the spanning tree, together with a caching scheme whereby the query and previous search results are cached at each node for improved efficiency. Furthermore, a security and anonymity component relies on one or more authentication servers and an anonymous communication scheme using link-local addresses and indirect communication between the nodes via the spanning tree. The architecture also includes components for resource advertising and for application replication.
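The spanning-tree search with per-node caching can be sketched as follows (node structure, resource names, and the substring query are all illustrative; the paper's query capsules are Java-based mobile code, simplified here to a recursive call):

```python
class Node:
    """A mesh node in the rooted spanning tree (names are illustrative)."""
    def __init__(self, name, resources=()):
        self.name = name
        self.resources = set(resources)
        self.children = []
        self.cache = {}                  # query -> results gathered from this subtree

    def search(self, query):
        if query in self.cache:          # answer cached by an earlier capsule
            return self.cache[query]
        hits = {r for r in self.resources if query in r}
        for child in self.children:      # the capsule travels down the branches
            hits |= child.search(query)
        self.cache[query] = hits
        return hits

root = Node("root", ["db-server"])
root.children.append(Node("n1", ["web-server", "db-replica"]))
print(sorted(root.search("db")))         # -> ['db-replica', 'db-server']
```

Because results are cached at every node on the way up, a repeated query is answered at the first node it reaches without revisiting the subtree.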
The present work explores, through a comprehensive sensitivity study, a new methodology for finding a suitable artificial neural network architecture that improves predictive performance for two significant parameters in safety assessment, i.e. the multiplication factor k(eff) and the fuel power peaks P-max of the benchmark 10 MW IAEA LEU core research reactor. The performance criteria under consideration were the improvement of network predictions during the validation process and the speed-up of computational time during the training phase. To reach this objective, we took advantage of the Neural Network MATLAB Toolbox to carry out a wide-ranging sensitivity study. The speed of several popular algorithms was assessed during the training process, and the neural system was subsequently trained with different transfer functions, numbers of hidden neurons, error levels, and sizes of the generalization corpus. Thus, using a personal computer with data created from preceding work, the final results obtained for the treated benchmark were improved both in the network generalization phase and, even more so, in computational time during the training process, in comparison with previously obtained results. (C) 2009 Elsevier B.V. All rights reserved.
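The sensitivity-study loop can be sketched as a grid sweep over training algorithm, transfer function, hidden-layer size, and error goal. The `train_and_score` stub below is a hypothetical stand-in (the actual study ran MATLAB Toolbox training functions such as `trainlm`); it is shown only to illustrate how configurations are enumerated and ranked:

```python
import itertools

def train_and_score(algorithm, transfer, hidden, goal):
    """HYPOTHETICAL stand-in for one MATLAB training run.
    Returns (validation_error, training_seconds) from a made-up cost model."""
    base = {"trainlm": 0.02, "trainrp": 0.05, "traingdx": 0.08}[algorithm]
    err = base + abs(hidden - 12) * 0.001 + (0.01 if transfer == "logsig" else 0.0)
    secs = {"trainlm": 9.0, "trainrp": 3.0, "traingdx": 2.0}[algorithm] + 0.1 * hidden
    return max(err, goal), secs

grid = itertools.product(
    ["trainlm", "trainrp", "traingdx"],   # candidate training algorithms
    ["tanh", "logsig"],                   # transfer functions
    [6, 12, 24],                          # hidden-layer sizes
    [1e-3, 1e-4],                         # error goals
)
results = [(cfg, *train_and_score(*cfg)) for cfg in grid]
best = min(results, key=lambda r: (r[1], r[2]))   # lowest error, then fastest
print(best[0])
```

The ranking key encodes the paper's two criteria: validation accuracy first, training time as the tie-breaker.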
In this study a new pattern recognition-based algorithm is presented for detecting high impedance faults (HIFs) in distribution networks with broken or unbroken conductors and distinguishing them from other similar phenomena such as capacitor bank switching, load switching, no-load transformer switching (through feeder switching), faults on adjacent feeders, insulator leakage current (ILC) and harmonic loads. The proposed method employs the multi-resolution morphological gradient (MMG) to extract time-based features from three half cycles of the post-disturbance current waveform. According to these features, three multi-layer perceptron neural networks are trained, and the outputs of these classifiers are combined by averaging. Applying data for HIF, ILC and harmonic loads from field tests, and for the other similar phenomena from simulations, has shown the high security and dependability of the proposed method. A comparison in the feature space is also made between the features from the proposed MMG-based procedure and features from the discrete Fourier transform, discrete S-transform, discrete TT-transform and discrete wavelet transform.
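A minimal sketch of a 1-D morphological gradient computed at several scales, and of combining classifier outputs by averaging (the scales, toy waveform, and score values are illustrative, not the paper's settings):

```python
def morph_gradient(signal, k):
    """Dilation minus erosion with a flat structuring element of length 2k+1."""
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - k): i + k + 1]
        out.append(max(window) - min(window))
    return out

def multi_resolution_gradient(signal, scales=(1, 2, 4)):
    """One gradient trace per scale: the multi-resolution feature stack."""
    return [morph_gradient(signal, k) for k in scales]

sig = [0, 0, 1, 5, 1, 0, 0, 3, 0]      # toy current samples
feats = multi_resolution_gradient(sig)
print(feats[0])                        # -> [0, 1, 5, 4, 5, 1, 3, 3, 3]

# The outputs of the three trained classifiers are combined by averaging:
scores = [0.9, 0.7, 0.8]               # illustrative per-network scores
fused = sum(scores) / len(scores)
```

Sharp transients (like the spike at index 3) produce large gradients at all scales, which is what makes the representation useful for arc-like fault signatures.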
A hybrid computational intelligence approach which combines a wavelet fuzzy neural network (WFNN) with a switching particle swarm optimization (SPSO) algorithm is proposed to handle the nonlinearity, wide load variation, time variation, and uncertain disturbances of a high-power AC servo system. The WFNN method integrates wavelet transforms with fuzzy rules and is proposed to achieve precise positioning control of the AC servo system. For the WFNN controller, the back-propagation method is used as the online learning algorithm. Moreover, the SPSO is proposed to adapt the learning rates of the WFNN online: its velocity updating equation switches according to a Markov chain, which makes it easier to escape local minima, and its acceleration coefficients depend on the switching mode. Furthermore, the stability of the closed-loop system is guaranteed by the Lyapunov method. The results of the simulation and the prototype test prove that the proposed approach can improve the steady-state performance and possesses strong robustness to both parameter perturbation and load disturbance.
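The learning-rate adaptation builds on a PSO whose parameters switch via a Markov chain; a minimal 1-D sketch (the objective, swarm size, mode parameters, and transition matrix are all illustrative, not the paper's values):

```python
import random
random.seed(1)

def f(x):                           # toy 1-D objective standing in for control error
    return (x - 3.0) ** 2

# Acceleration coefficients depend on a mode that evolves as a Markov chain
modes = [(1.7, 1.7), (1.2, 2.2)]    # (c1, c2) per mode (illustrative values)
P = [[0.9, 0.1], [0.2, 0.8]]        # mode transition probabilities

n, w = 8, 0.6                       # swarm size and inertia weight
xs = [random.uniform(-10, 10) for _ in range(n)]
vs = [0.0] * n
pbest = xs[:]                       # per-particle best positions
gbest = min(xs, key=f)              # global best position
mode = 0
for _ in range(200):
    mode = 0 if random.random() < P[mode][0] else 1   # Markov-chain mode switch
    c1, c2 = modes[mode]
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
        if f(xs[i]) < f(gbest):
            gbest = xs[i]
print(round(gbest, 3))              # converges near the optimum at x = 3
```

Mode 0 weights personal and global attraction equally, while mode 1 biases toward the global best; the random mode switches perturb the search dynamics, which is the mechanism credited with escaping local minima.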
Purpose: To assess the accuracy, against measurements, of two photon dose calculation algorithms (Acuros XB and the Anisotropic Analytical Algorithm, AAA) for small fields usable in stereotactic treatments, with particular focus on RapidArc (R). Methods: Acuros XB and AAA were configured for stereotactic use. Baseline accuracy was assessed on small jaw-collimated open fields for different values of the spot size parameter in the beam data: 0.0, 0.5, 1, and 2 mm. Data were calculated with a grid of 1 x 1 mm(2). Investigated fields were: 3 x 3, 2 x 2, 1 x 1, and 0.8 x 0.8 cm(2), with a 6 MV photon beam generated from a Clinac 2100iX (Varian, Palo Alto, CA). Profiles, PDDs, and output factors were measured in water with a PTW diamond detector (detector size: 4 mm(2), thickness 0.4 mm) and compared to calculations. Four RapidArc test plans were optimized, calculated, and delivered with jaw settings J3 x 3, J2 x 2, and J1 x 1 cm(2); the last was optimized twice to generate high (H) and low (L) modulation patterns. Each plan consisted of one partial arc (gantry 110 degrees to 250 degrees) with collimator at 45 degrees. Dose to isocenter was measured in a PTW Octavius phantom and compared to calculations. 2D measurements were performed by means of portal dosimetry with the GLAaS method developed at the authors' institute. Analysis was performed with a gamma pass-fail test with 3% dose difference and 2 mm distance-to-agreement thresholds. Results: Open square fields: penumbrae from open field profiles were in good agreement with diamond measurements for the 1 mm spot size setting for Acuros XB, and between 0.5 and 1 mm for AAA. The maximum MU difference between calculations and measurements was 1.7% for Acuros XB (0.2% for fields greater than 1 x 1 cm(2)) with 0.5 or 1 mm spot size. Agreement for AAA was within 0.7% (2.8%) for 0.5 (1 mm) spot size. RapidArc plans: doses were evaluated in a 4 mm diameter structure at isocenter and computed values differed from measurements by 0.0, -0.2, 5.5, and
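The gamma pass-fail criterion used in the analysis can be sketched as a simplified 1-D global gamma evaluation with the 3%/2 mm thresholds (the toy profiles are illustrative; clinical gamma analysis is 2-D/3-D with interpolation):

```python
import math

def gamma_index(dose_eval, dose_ref, spacing_mm, dd=0.03, dta_mm=2.0):
    """1-D global gamma: for each reference point, the minimum combined
    dose-difference / distance-to-agreement metric over evaluated points."""
    d_max = max(dose_ref)                       # global normalization dose
    gammas = []
    for i, dr in enumerate(dose_ref):
        best = float("inf")
        for j, de in enumerate(dose_eval):
            dist = abs(i - j) * spacing_mm
            ddiff = (de - dr) / (dd * d_max)
            best = min(best, math.sqrt((dist / dta_mm) ** 2 + ddiff ** 2))
        gammas.append(best)
    return gammas

ref  = [0.0, 0.5, 1.0, 0.5, 0.0]                # illustrative dose profiles
meas = [0.0, 0.52, 1.0, 0.48, 0.0]
g = gamma_index(meas, ref, spacing_mm=1.0)      # dd=3%, dta=2 mm as in the study
pass_rate = sum(x <= 1.0 for x in g) / len(g)
print(pass_rate)                                # -> 1.0
```

A point passes when its gamma value is at most 1, i.e. when it agrees within 3% of the maximum dose, within 2 mm, or some combination of the two.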
Document skew estimation and correction is a recurring issue for scanned document images and an active research area within document analysis and recognition. The literature is replete with document skew detection and correction techniques. The focus of this article is to present, compare, and analyze techniques for two subareas of document image analysis: skew angle estimation and skew correction. Several algorithms proposed in the literature are concisely described, and document skew estimation and correction techniques are broadly divided into several categories. The current status of the field is critically discussed, the persistent problems of each category are highlighted, and possible solutions are recommended.
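One classical family of skew estimators covered by such surveys is projection-profile analysis; a minimal sketch (the synthetic point cloud and the profile-energy score are illustrative choices):

```python
import math
from collections import Counter

def estimate_skew(points, angles_deg):
    """Projection-profile skew estimation: try candidate angles and keep the
    one whose horizontal projection is most concentrated (max profile energy)."""
    best_angle, best_score = None, -1.0
    for a in angles_deg:
        t = math.radians(a)
        hist = Counter()
        for x, y in points:
            hist[round(y * math.cos(t) - x * math.sin(t))] += 1  # projected row
        score = sum(c * c for c in hist.values())   # peaks when text rows align
        if score > best_score:
            best_angle, best_score = a, score
    return best_angle

# Synthetic "text lines" drawn at a 5-degree slope: y = x * tan(5 deg) + 10 * line
slope = math.tan(math.radians(5))
points = [(x, round(x * slope) + 10 * line) for line in range(3) for x in range(40)]
print(estimate_skew(points, range(-10, 11)))   # -> 5
```

When the trial angle matches the true skew, all pixels of a text line fall into the same projection bin, so the squared-count energy is maximized; correction is then a rotation by the negative of the estimated angle.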
The output of LC-MS metabolomics experiments consists of mass-peak intensities identified through a peak-picking/alignment procedure. Besides imperfections in biological samples and instrumentation, data accuracy is highly dependent on the applied algorithms and their parameters. Consequently, quality control (QC) is essential for further data analysis. Here, we present a QC approach that is based on discrepancies between replicate samples. First, quantile normalization of per-sample log-signal distributions is applied to each group of biologically homogeneous samples. Next, the overall quality of each replicate group is characterized by the Z-transformed correlation coefficients between samples. This general QC allows tuning the procedure's parameters to minimize the inter-replicate discrepancies in the generated output. Subsequently, an in-depth QC measure detects local neighborhoods on a template of aligned chromatograms that are enriched with divergences between the intensity profiles of replicate samples. These neighborhoods are determined through a segmentation algorithm. The retention time (RT)-m/z positions of the neighborhoods with local divergences are indicative of either incorrect alignment of chromatographic features, technical problems in the chromatograms, or a true biological discrepancy between replicates for particular metabolites. We expect this method to aid in the accurate analysis of metabolomics data and in the development of new peak-picking/alignment procedures.
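The two statistical building blocks, quantile normalization and the Fisher Z-transform of correlations, can be sketched as follows (the sample values are illustrative):

```python
import math

def quantile_normalize(samples):
    """Force every sample's signal distribution onto the across-sample mean
    distribution: each sample's value of rank r becomes the mean of all
    samples' rank-r values."""
    n = len(samples[0])
    sorted_cols = [sorted(s) for s in samples]
    rank_means = [sum(col[r] for col in sorted_cols) / len(samples)
                  for r in range(n)]
    out = []
    for s in samples:
        order = sorted(range(n), key=lambda i: s[i])   # indices by ascending value
        norm = [0.0] * n
        for rank, idx in enumerate(order):
            norm[idx] = rank_means[rank]
        out.append(norm)
    return out

a = [2.0, 4.0, 6.0]            # illustrative per-sample log-signals
b = [3.0, 6.0, 9.0]
na, nb = quantile_normalize([a, b])
print(na, nb)                  # both become [2.5, 5.0, 7.5]

r = 0.95                       # an inter-replicate Pearson correlation (example)
z = math.atanh(r)              # Fisher Z-transform used to compare correlations
```

After normalization the two samples share an identical value distribution, so any remaining differences reflect rank disagreements between replicates rather than global intensity shifts.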
A low-complexity compression algorithm for hyperspectral images based on distributed source coding (DSC) is proposed in this paper. The proposed distributed compression algorithm can realize both lossless and lossy compression, implemented by applying a scalar quantization strategy to the original hyperspectral images followed by distributed lossless compression. A multilinear regression model is introduced into the distributed lossless compression in order to improve the quality of the side information. The optimal quantization step is determined subject to the constraint of correct DSC decoding, which allows the proposed algorithm to achieve near-lossless compression. Moreover, an effective rate-distortion algorithm is introduced so that the proposed algorithm achieves a low bit rate. Experimental results show that the compression performance of the proposed algorithm is competitive with that of state-of-the-art compression algorithms for hyperspectral images.
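The near-lossless property of uniform scalar quantization, where reconstruction error is bounded by half the quantization step, can be sketched as follows (the band values and step are illustrative; the DSC coding stage itself is omitted):

```python
def quantize(values, step):
    """Uniform scalar quantization; the integer indices would then go to the
    distributed (Slepian-Wolf style) lossless coder."""
    return [round(v / step) for v in values]

def dequantize(indices, step):
    return [i * step for i in indices]

band = [10.2, 10.9, 55.4, 57.1, 300.0]   # illustrative pixel values of one band
step = 2.0                               # illustrative quantization step
rec = dequantize(quantize(band, step), step)
max_err = max(abs(a - b) for a, b in zip(band, rec))
print(rec, max_err)                      # error never exceeds step / 2
```

Choosing the largest step that still permits correct DSC decoding is what trades bit rate against this bounded error.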
In two-way contingency tables we sometimes find that frequencies along the diagonal cells are relatively larger (or smaller) compared to off-diagonal cells, particularly in square tables with common categories for the rows and the columns. In this case the quasi-independence model, with an additional parameter for each of the diagonal cells, is usually fitted to the data. A simpler model than the quasi-independence model is to assume a common additional parameter for all the diagonal cells. We consider testing the goodness of fit of this common diagonal effect by the Markov chain Monte Carlo (MCMC) method. We derive an explicit form of a Markov basis for performing the conditional test of the common diagonal effect. Once a Markov basis is given, the MCMC procedure can be easily implemented by techniques of algebraic statistics. We illustrate the procedure with some real data sets. (C) 2008 Elsevier B.V. All rights reserved.
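A crude sketch of a fiber walk using rectangle moves restricted to preserve the diagonal total (this simplification ignores both the connectivity guarantee of the paper's full Markov basis and the hypergeometric acceptance weighting a real conditional test needs):

```python
import random
random.seed(0)

def rectangle_move(table, i1, i2, j1, j2):
    """The standard +1/-1 rectangle swap preserving row and column sums."""
    t = [row[:] for row in table]
    t[i1][j1] += 1; t[i2][j2] += 1
    t[i1][j2] -= 1; t[i2][j1] -= 1
    return t

def diag_sum(t):
    return sum(t[i][i] for i in range(len(t)))

def step(table):
    n = len(table)
    i1, i2 = random.sample(range(n), 2)
    j1, j2 = random.sample(range(n), 2)
    cand = rectangle_move(table, i1, i2, j1, j2)
    ok = (all(v >= 0 for row in cand for v in row)
          and diag_sum(cand) == diag_sum(table))   # keep the diagonal statistic
    return cand if ok else table

t = [[5, 1, 1], [1, 5, 1], [1, 1, 5]]
rows, cols = [sum(r) for r in t], [sum(c) for c in zip(*t)]
for _ in range(1000):
    t = step(t)
assert [sum(r) for r in t] == rows and [sum(c) for c in zip(*t)] == cols
print(t)
```

The walk stays inside the fiber of tables sharing the sufficient statistics of the common-diagonal-effect model: row sums, column sums, and the diagonal total.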
To solve the problem of matching elements across different data collections, an improved coupled metric learning approach is proposed. First, we improve the supervised locality preserving projection algorithm and add its within-class and between-class information to coupled metric learning, yielding a novel coupled metric learning method. Furthermore, we extend this algorithm to a nonlinear space, obtaining a kernel coupled metric learning method based on supervised locality preserving projection. In the kernel coupled metric learning approach, elements of the two different collections are mapped into a unified high-dimensional feature space by a kernel function, and generalized metric learning is then performed in this space. Experiments on the Yale and CAS-PEAL-R1 face databases demonstrate that the proposed kernel coupled approach performs better in low-resolution and blurred face recognition and can reduce computing time; it is an effective metric learning method.
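As a loose analogy for mapping two collections into a shared space before matching, one can describe both against a common anchor set with an RBF kernel (the anchors, points, and gamma are illustrative; this is not the paper's learned coupled projection):

```python
import math

def rbf_features(x, anchors, gamma=0.5):
    """Describe a point by its RBF similarity to each shared anchor, putting
    both collections into one comparable feature space."""
    return [math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, c)))
            for c in anchors]

anchors  = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]   # shared reference points
low_res  = [(0.1, 0.0), (1.9, 0.1)]               # probe collection
high_res = [(2.0, 0.0), (0.0, 0.1)]               # gallery collection

def match(probe):
    pf = rbf_features(probe, anchors)
    dists = [sum((p - g) ** 2 for p, g in zip(pf, rbf_features(h, anchors)))
             for h in high_res]
    return min(range(len(high_res)), key=dists.__getitem__)

print([match(p) for p in low_res])   # -> [1, 0]
```

Once both collections live in the same feature space, matching reduces to nearest-neighbor search; the paper's contribution is to learn the two mappings jointly rather than fixing them by a shared kernel description as done here.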