The paper proposes a new digital distance relaying algorithm for first-zone protection for parallel transmission lines. The new method uses data from one end of the protected parallel lines to calculate the fault distance. It is shown that the new method is independent of fault resistance, remote infeed and source impedances. Extensive simulation studies using EMTP have verified that this approach can obtain a highly accurate fault distance estimation within one cycle after fault inception and hence is very much suitable for digital distance protection for parallel lines. Applications of this method for fault location are also presented.
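For context, a minimal sketch of the classic one-end reactance fault locator is given below; the paper's algorithm goes further by compensating for fault resistance, remote infeed and source impedances, which this simple estimate does not. The phasor values and per-km line reactance are hypothetical.

```python
import numpy as np

def apparent_reactance_distance(v_phasor, i_phasor, x_line_per_km):
    """Classic one-end reactance fault locator (illustrative only; the
    paper's method additionally compensates for fault resistance,
    remote infeed and source impedances, which this sketch does not)."""
    z_apparent = v_phasor / i_phasor          # impedance seen by the relay
    return z_apparent.imag / x_line_per_km    # estimated distance in km

# Hypothetical phasors measured at the relay after fault inception
v = 48_000 * np.exp(1j * np.deg2rad(5.0))     # volts
i = 1_200 * np.exp(1j * np.deg2rad(-60.0))    # amps
print(f"Estimated fault distance: {apparent_reactance_distance(v, i, 0.42):.1f} km")
```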
The finite-difference (FD) scheme is extensively applied in seismic modelling, imaging and inversion due to its advantages in large-scale parallel computing and programming. However, the numerical dispersion caused by substituting a difference operator for the differential operator is non-negligible, which reduces the accuracy of the modelling and can lead to misinterpretations. In addition, the computing resources required by the FD scheme are highly demanding when dealing with large models, which limits its applicability. In this paper, a new optimised FD scheme is proposed, based on an improved particle swarm optimisation (PSO) algorithm. We improve the conventional PSO algorithm by introducing local-learning and global-learning strategies, which accelerate the convergence rate and effectively avoid getting trapped in local extrema. The improved PSO algorithm is then used to improve the conventional FD scheme. Dispersion analysis and numerical modelling demonstrate that the low-order optimised FD scheme can achieve higher accuracy than a high-order conventional operator. Compared with the conventional FD scheme and an FD scheme based on the Remez exchange algorithm, the optimised FD scheme based on the improved PSO algorithm suppresses numerical dispersion more efficiently and increases computational efficiency.
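As a rough illustration of the idea, the sketch below uses a plain PSO (without the paper's local-learning and global-learning improvements) to fit the coefficients of a first-derivative FD stencil so that its spectral response tracks the exact derivative over a band of normalized wavenumbers; the stencil length, band limit and swarm parameters are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                   # half-length of the antisymmetric stencil
k = np.linspace(0.01, 2.4, 200)         # normalized wavenumbers kh over the band

def dispersion_error(c):
    # Spectral response sum_m 2*c_m*sin(m*kh) of the stencil should
    # approximate kh; the cost is the worst-case error over the band.
    approx = sum(2 * c[m] * np.sin((m + 1) * k) for m in range(M))
    return np.max(np.abs(approx - k))

# Plain PSO; the paper layers local- and global-learning strategies on top.
n, w, c1, c2 = 40, 0.7, 1.5, 1.5
x = rng.normal(0, 0.5, (n, M))
v = np.zeros((n, M))
pbest = x.copy()
pcost = np.array([dispersion_error(p) for p in x])
g = pbest[pcost.argmin()].copy()
for _ in range(300):
    r1, r2 = rng.random((n, M)), rng.random((n, M))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    cost = np.array([dispersion_error(p) for p in x])
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]
    g = pbest[pcost.argmin()].copy()
print("coefficients:", np.round(g, 4), "max band error:", round(float(pcost.min()), 5))
```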
In this paper, a fast and flexible algorithm for computing watersheds in digital grayscale images is introduced. A review of watersheds and related notions is first presented, and the major methods to determine watersheds are discussed. The present algorithm is based on an immersion-process analogy, in which the flooding of the water in the picture is efficiently simulated using a queue of pixels. It is described in detail and provided in a pseudo-C language. We prove that the accuracy of this algorithm is superior to that of the existing implementations. Furthermore, it is shown that its adaptation to any kind of digital grid and its generalization to n-dimensional images and even to graphs are straightforward. In addition, its strongest point is that it is faster than any other watershed algorithm. Applications of this algorithm with regard to picture segmentation are presented for MR imagery and for digital elevation models. An example of 3-D watershed is also provided. Lastly, some ideas are given on how to solve complex segmentation tasks using watersheds on graphs.
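The paper's pseudo-C listing is queue-based with a hierarchical sorting step; the sketch below is a compact priority-queue flooding variant in the same immersion spirit, which floods labels from marked minima in order of increasing gray level and omits the explicit watershed-line pixels of the original.

```python
import heapq
import numpy as np

def watershed_flood(image, markers):
    """Priority-queue flooding in the spirit of immersion watersheds.
    `markers` holds positive seed labels at the regional minima and 0
    elsewhere; unlabeled pixels are claimed by the basin that reaches
    them first at the lowest gray level."""
    labels = markers.copy()
    heap = []
    rows, cols = image.shape
    for r, c in zip(*np.nonzero(markers)):
        heapq.heappush(heap, (image[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                labels[nr, nc] = labels[r, c]     # flood from the lower basin
                heapq.heappush(heap, (image[nr, nc], nr, nc))
    return labels

img = np.array([[1, 2, 3, 2, 1],
                [1, 2, 4, 2, 1],
                [1, 2, 4, 2, 1]])
seeds = np.zeros_like(img)
seeds[0, 0], seeds[0, 4] = 1, 2                   # two hypothetical minima
print(watershed_flood(img, seeds))
```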
Spatial clustering, which groups similar spatial objects into classes, is an important component of spatial data mining [Han and Kamber, Data Mining: Concepts and Techniques, 2000]. Due to its immense applications in various areas, spatial clustering has been a highly active topic in data mining research, with fruitful, scalable clustering methods developed recently. These spatial clustering methods can be classified into four categories: partitioning methods, hierarchical methods, density-based methods and grid-based methods. Clustering large data sets of high dimensionality has always been a serious challenge for clustering algorithms. Many recently developed clustering algorithms have attempted to handle either data with a very large number of records or data sets with a very high number of dimensions. The new clustering method GCHL (a Grid-Clustering algorithm for High-dimensional very Large spatial databases) combines a novel density-grid-based clustering with an axis-parallel partitioning strategy to identify areas of high density in the input data space. The algorithm works equally well in the feature space of any data set. The method operates on a limited memory buffer and requires at most a single scan through the data. We demonstrate the high quality of the obtained clustering solutions, the capability of discovering concave/deeper and convex/higher regions, their robustness to outliers and noise, and GCHL's excellent scalability. (c) 2004 Elsevier B.V. All rights reserved.
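A toy version of the density-grid idea (not GCHL itself, which adds the axis-parallel partitioning strategy and bounded-memory, single-scan operation) might look like this: hash points into axis-parallel cells, keep cells above a density threshold, and grow clusters over face-adjacent dense cells. The cell size and density threshold are assumptions.

```python
import numpy as np
from collections import deque

def grid_cluster(points, cell_size, min_pts):
    """Toy density-grid clustering: bin points into cells, keep cells
    with at least `min_pts` points, and merge face-adjacent dense
    cells into clusters by breadth-first traversal."""
    cells = {}
    for p in points:
        cells.setdefault(tuple((p // cell_size).astype(int)), []).append(p)
    dense = {c for c, pts in cells.items() if len(pts) >= min_pts}
    labels, next_label = {}, 0
    for seed in dense:
        if seed in labels:
            continue
        next_label += 1
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            cell = queue.popleft()
            for dim in range(len(cell)):
                for step in (-1, 1):
                    nb = cell[:dim] + (cell[dim] + step,) + cell[dim + 1:]
                    if nb in dense and nb not in labels:
                        labels[nb] = next_label
                        queue.append(nb)
    return labels

rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(5, 0.3, (100, 2))])
print("cluster ids:", sorted(set(grid_cluster(data, cell_size=0.5, min_pts=5).values())))
```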
Bimetallic nanoparticles (AmBn) usually exhibit rich catalytic chemistry and have drawn tremendous attention in heterogeneous catalysis. However, challenged by the huge configuration space, little is known at the atomic level about the composition and distribution of the A/B elements, which hinders rational design. Here, we develop an on-the-fly training strategy combining a machine-learning model (SchNet) with a genetic algorithm (GA) search technique, which achieves fast and accurate energy prediction of complex bimetallic clusters at the DFT level. Taking the 38-atom PtmAu38-m nanoparticle as an example, the element-distribution identification problem and the stability trend as a function of Pt/Au composition are quantitatively resolved. Results show that on Pt-rich clusters Au atoms prefer to occupy the low-coordinated surface corner sites and form patch-like surface segregation patterns, while on Au-rich ones Pt atoms tend to sit in the core region and form core-shell (Pt@Au) structures. The thermodynamically most stable PtmAu38-m cluster is Pt6Au32, with all the core-region sites occupied by Pt, rationalized by the stronger Pt-Pt bond in comparison with the Pt-Au and Au-Au bonds. This work exemplifies the potent application of rapid global search enabled by machine learning in exploring the high-dimensional configuration space of bimetallic nanocatalysts.
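The sketch below mimics the search loop only: a genetic algorithm over Pt/Au site assignments scored by a stand-in energy function that favors Pt in the core, echoing the Pt-Pt bond-strength argument. In the actual work the scoring is a SchNet surrogate trained on-the-fly against DFT; the core-site indices, population sizes and mutation rate here are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N_SITES, N_PT = 38, 6              # 38-atom cluster at the Pt6Au32 composition
CORE = set(range(6))               # hypothetical core-site indices

def surrogate_energy(config):
    """Stand-in for a trained SchNet model: lower energy when Pt (1)
    occupies core sites, plus a little noise."""
    return -sum(int(config[i]) for i in CORE) + 0.1 * rng.standard_normal()

def random_config():
    c = np.zeros(N_SITES, dtype=int)
    c[rng.choice(N_SITES, N_PT, replace=False)] = 1
    return c

def crossover_mutate(a, b):
    child = np.where(rng.random(N_SITES) < 0.5, a, b)
    # Repair the fixed Pt/Au composition, then apply one swap mutation.
    while child.sum() > N_PT: child[rng.choice(np.nonzero(child)[0])] = 0
    while child.sum() < N_PT: child[rng.choice(np.nonzero(child == 0)[0])] = 1
    if rng.random() < 0.2:
        i = rng.choice(np.nonzero(child)[0])
        j = rng.choice(np.nonzero(child == 0)[0])
        child[i], child[j] = 0, 1
    return child

pop = [random_config() for _ in range(30)]
for _ in range(50):
    pop.sort(key=surrogate_energy)
    elite = pop[:10]
    pop = elite + [crossover_mutate(elite[rng.integers(10)], elite[rng.integers(10)])
                   for _ in range(20)]
best = min(pop, key=surrogate_energy)
print("Pt sites in best configuration:", np.nonzero(best)[0].tolist())
```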
This paper presents the first Learning Automaton-based solution to the dynamic single-source shortest path problem. It involves finding the shortest path in a single-source stochastic graph topology where there are continuous probabilistic updates to the edge weights. The algorithm is significantly more efficient than the existing solutions, and can be used to find the "statistical" shortest path tree in the "average" graph topology. It converges to this solution irrespective of whether new changes in edge weights are taking place or not. In such random settings, the proposed learning automata solution converges to the set of shortest paths. The existing algorithms, on the other hand, fail to exhibit such behavior and would recalculate the affected shortest paths after each weight change. The important contribution of the proposed algorithm is that not all the edges in a stochastic graph are probed, and even those that are probed are not all probed equally often. Indeed, the algorithm attempts to almost always probe only those edges that will be included in the shortest path graph, while probing the other edges minimally. This increases the performance of the proposed algorithm. All the algorithms were tested in environments where edge weights change stochastically, and where the graph topologies undergo multiple simultaneous edge-weight updates. Its superiority in terms of the average number of processed nodes, scanned edges and the time per update operation, when compared with the existing algorithms, was experimentally established. The algorithm is applicable in domains ranging from ground transportation to aerospace, from civilian applications to military, and from spatial database applications to telecommunications networking.
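The core primitive behind such schemes is a learning automaton that shifts probability mass toward rewarded actions. Below is a minimal sketch of a linear reward-inaction (L_RI) automaton choosing among the outgoing edges of a single node; the full algorithm couples many such automata to build the shortest path tree, which this fragment does not attempt. The edge-weight distributions and learning rate are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def lri_update(p, chosen, reward, lam=0.05):
    """Linear reward-inaction update: on reward, move probability mass
    toward the chosen action; on penalty, leave probabilities unchanged."""
    if reward:
        p = p * (1 - lam)
        p[chosen] += lam
    return p / p.sum()   # renormalize against floating-point drift

# Hypothetical node with three outgoing edges whose stochastic weights
# average 2.0, 5.0 and 3.0; rewarding the smallest sampled weight makes
# the automaton converge on the edge with the lowest mean weight.
means = np.array([2.0, 5.0, 3.0])
p = np.ones(3) / 3
for _ in range(2000):
    a = rng.choice(3, p=p)
    samples = means + rng.normal(0, 0.5, 3)
    p = lri_update(p, a, reward=(samples[a] == samples.min()))
print("edge-choice probabilities:", np.round(p, 3))
```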
Diagenetic features, such as vugs, fractures and dolomite bodies, can have significant impacts on carbonate reservoir quality. Challenges remain in characterizing these diagenetic features from well logs, as they are often mixed with changes in mineral and fluid concentrations. In this paper, a data-driven approach is developed to classify vuggy facies based on core and well logs from a key well penetrating the Arbuckle formation in Kansas. Three supervised machine-learning methods, namely artificial neural networks (ANN), support vector machines (SVM), and random forests (RF), are compared for their accuracy, stability, and computational efficiency. Hyperparameters are tuned using cross-validation and Bayesian optimization. Different feature selection methods and data labeling schemes are also evaluated to optimize the prediction. Results indicate that predicting a binary classification (vuggy/nonvuggy) achieves roughly 80% accuracy, compared to 65% accuracy using a five-class, vug-size-based classification label. A direct input of well logs as training features is recommended instead of using derived petrophysical properties. Among the three machine-learning algorithms, ANN outperforms the other two methods for vug/nonvug detection, whereas for vug-size classification, RF is the best algorithm to apply. This work also suggests that RF shows the least sensitivity to hyperparameters (i.e., maximum number of splits and minimum leaf sizes) according to the response surfaces constructed via Bayesian optimization. For the dataset used in this study, SVM is the most computationally efficient algorithm.
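A hedged sketch of the random-forest baseline, using scikit-learn and synthetic stand-ins for the well-log curves (the study's actual logs, labels and tuned hyperparameter values differ), is given below.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# Synthetic stand-ins for raw well-log curves (e.g., gamma ray, neutron
# porosity, resistivity); labels play the role of vuggy/nonvuggy.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

# Random forest with depth and leaf-size constraints standing in for the
# "maximum number of splits" and "minimum leaf size" hyperparameters.
clf = RandomForestClassifier(n_estimators=200, max_depth=8,
                             min_samples_leaf=5, random_state=0)
print("5-fold CV accuracy: %.2f" % cross_val_score(clf, X, y, cv=5).mean())
```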
Long-range interactions are known to be difficult to treat in statistical mechanics models. Some approaches introduce a cutoff in the interactions or make use of reaction-field approaches. However, those treatments are of limited use, in particular close to phase transitions. The use of open boundary conditions allows the sum of the long-range interactions over the entire system to be carried out; however, this approach demands a sum over all degrees of freedom in the system, which makes a numerical treatment prohibitive. Techniques like the Ewald summation or the fast multipole expansion account for the exact interactions but are still limited to a few thousand particles. In this paper we introduce a novel mean-field approach to treat long-range interactions. The method is based on dividing the system into cells. In the inner cell, which contains the particle in sight, the 'local' interactions are computed exactly; for each of the remaining cells, the 'far' contribution is computed as the interaction of the particle in sight with the average over the particles inside that cell. With this approach, the large- and small-cell limits are exact. At a fixed cell size, the method also becomes exact in the limit of large lattices. We have applied the procedure to the two-dimensional anisotropic dipolar Heisenberg model. A detailed comparison between our method, the exact calculation and the cutoff-radius approximation was carried out. Our results show that the cutoff-cell approach outperforms any cutoff-radius approach, as it maintains the long-range memory present in these interactions, contrary to the cutoff-radius approximation. Besides that, we calculated the critical temperature and the critical behavior of the specific heat of the anisotropic Heisenberg model using our method. The results are in excellent agreement with extensive Monte Carlo simulations using Ewald summation. (C) 2015 Elsevier B.V. All rights reserved.
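The sketch below illustrates the cutoff-cell idea on a toy 1/r³ scalar coupling rather than the paper's dipolar Hamiltonian: pairwise terms inside the particle's own cell are summed exactly, while every other cell contributes through its particle average.

```python
import numpy as np

def cutoff_cell_energy(pos, spins, i, cell_size):
    """Energy of particle i under a toy 1/r^3 long-range coupling:
    exact pairwise sums inside particle i's cell, cell-averaged
    ('mean-field') contributions from every other cell. A sketch of
    the cutoff-cell idea, not the paper's full dipolar Hamiltonian."""
    cells = {}
    for j, p in enumerate(pos):
        cells.setdefault(tuple((p // cell_size).astype(int)), []).append(j)
    home = tuple((pos[i] // cell_size).astype(int))
    energy = 0.0
    for cell, members in cells.items():
        if cell == home:                       # local: exact pair sum
            for j in members:
                if j != i:
                    r = np.linalg.norm(pos[i] - pos[j])
                    energy += spins[i] * spins[j] / r**3
        else:                                  # far: interact with the cell average
            r = np.linalg.norm(pos[i] - np.mean(pos[members], axis=0))
            energy += spins[i] * np.sum(spins[members]) / r**3
    return energy

rng = np.random.default_rng(5)
pos = rng.random((200, 2)) * 10
spins = rng.choice([-1.0, 1.0], 200)
print("E_0 =", cutoff_cell_energy(pos, spins, 0, cell_size=2.0))
```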
The rise of Web 2.0 is signaled by sites such as Flickr, Del.icio.us, and YouTube, and social tagging is essential to their success. A typical tagging action involves three components: user, item (e.g., photos in Flickr), and tags (i.e., words or phrases). Analyzing how tags are assigned by certain users to certain items has important implications for helping users search for desired information. In this paper, we develop a dual mining framework to explore tagging behavior. This framework is centered around two opposing measures, similarity and diversity, applied to one or more tagging components, and therefore enables a wide range of analysis scenarios, such as characterizing similar users tagging diverse items with similar tags, or diverse users tagging similar items with diverse tags. By adopting different concrete measures for similarity and diversity in the framework, we show that a wide range of concrete analysis problems can be defined and that they are NP-Complete in general. We design four sets of efficient algorithms for solving many of those problems and demonstrate, through comprehensive experiments over real data, that our algorithms significantly outperform the exact brute-force approach without compromising analysis result quality.
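One plausible instantiation of the two measures (not necessarily the paper's concrete choices) is Jaccard similarity over tag sets, with diversity taken as one minus the average pairwise similarity:

```python
def jaccard(a, b):
    """Set-overlap similarity between two tag sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def group_similarity(tag_sets):
    """Average pairwise Jaccard similarity of a group; one natural
    diversity measure for the same group is 1 minus this value."""
    pairs = [(i, j) for i in range(len(tag_sets))
             for j in range(i + 1, len(tag_sets))]
    return sum(jaccard(tag_sets[i], tag_sets[j]) for i, j in pairs) / len(pairs)

# Hypothetical users' tag profiles
users = [{"sunset", "beach", "travel"}, {"sunset", "beach"}, {"code", "python"}]
print("similarity:", round(group_similarity(users), 3))
print("diversity :", round(1 - group_similarity(users), 3))
```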
This paper is concerned with the design and analysis of improved algorithms for determining the optimal length resolution refutation (OLRR) of a system of difference constraints over an integral domain. The problem of finding short explanations for unsatisfiable Difference Constraint Systems (DCS) finds applications in a number of design domains including program verification, proof theory, real-time scheduling, and operations research. These explanations have also been called "certificates" and "refutations" in the literature. This problem was first studied in Subramani (J Autom Reason 43(2):121-137, 2009), wherein the first polynomial-time algorithm was proposed. In this paper, we propose two new strongly polynomial algorithms which improve on the existing time bound. Our first algorithm, which we call the edge-progression approach, runs in O(n²·k + m·n·k) time, while our second algorithm, which we call the edge-relaxation approach, runs in O(m·n·k) time, where m is the number of constraints in the DCS, n is the number of program variables, and k denotes the length of the shortest refutation. We conducted an extensive empirical analysis of the three OLRR algorithms discussed in this paper. Our experiments indicate that in the case of sparse graphs, the new algorithms discussed in this paper are superior to the algorithm in Subramani (J Autom Reason 43(2):121-137, 2009). Likewise, in the case of dense graphs, the approach in Subramani (J Autom Reason 43(2):121-137, 2009) is superior to the algorithms described in this paper. One surprising observation is the superiority of the edge-relaxation algorithm over the edge-progression algorithm in all cases, although both algorithms have the same asymptotic time complexity.
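To make the setting concrete: a difference constraint x_j - x_i <= c maps to an edge (i, j) of weight c in the constraint graph, the system is infeasible exactly when that graph contains a negative cycle, and a shortest refutation corresponds to a shortest negative cycle. The sketch below is a hedged illustration of the relaxation idea, not the paper's algorithm: it performs up to k rounds of edge relaxations over all sources, O(m·n·k) in total, and reports the smallest k for which a negative closed walk of exactly k edges exists.

```python
import math

def shortest_refutation(n, edges, k_max):
    """D[s][v] holds the minimum weight of a walk with exactly k edges
    from s to v; a negative D[s][s] certifies an infeasible DCS with a
    refutation of length k. Runs in O(m * n * k_max) time."""
    D = [[0.0 if v == s else math.inf for v in range(n)] for s in range(n)]
    for k in range(1, k_max + 1):
        ND = [[math.inf] * n for _ in range(n)]
        for u, v, w in edges:                  # one relaxation round
            for s in range(n):
                if D[s][u] + w < ND[s][v]:
                    ND[s][v] = D[s][u] + w
        D = ND
        if any(D[s][s] < 0 for s in range(n)):
            return k                           # shortest refutation length
    return None

# x1 - x0 <= -1, x2 - x1 <= -1, x0 - x2 <= 1: summing the three gives 0 <= -1
edges = [(0, 1, -1), (1, 2, -1), (2, 0, 1)]
print(shortest_refutation(3, edges, k_max=5))  # -> 3
```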