ISBN (print): 0780384032
Decision trees and extension matrices are two methodologies for (fuzzy) rule generation. This paper gives an initial study comparing the two methodologies, analyzing their computational complexity and the quality of the rules they generate. The experimental results show that the heuristic algorithm based on the extension matrix generates fewer rules than the decision tree algorithm. Moreover, regarding testing accuracy (i.e., the generalization capability on unknown cases), the experiments also show that the extension matrix method outperforms the decision tree method.
This paper addresses the problem of piecewise linear approximation of implicit surfaces. We first give a criterion ensuring that the zero-set of a smooth function and that of a piecewise linear approximation of it are isotopic. We then deduce from this criterion an implicit surface meshing algorithm certifying that the output mesh is isotopic to the actual implicit surface. This is the first algorithm achieving this goal in a provably correct way.
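For context, a generic, uncertified piecewise linear approximation of an implicit surface's zero-set can be obtained by sampling the function on a grid and running marching cubes. The sketch below assumes scikit-image's marching_cubes is available; the function mesh_implicit, its bounds, and its grid resolution are illustrative choices, not part of the paper, and unlike the paper's algorithm this baseline offers no isotopy guarantee.

```python
import numpy as np
from skimage.measure import marching_cubes  # assumed available; any marching-cubes routine would do

def mesh_implicit(f, bounds=(-1.5, 1.5), n=64, level=0.0):
    """Generic piecewise linear approximation of the zero-set of f, obtained by
    sampling f on an n x n x n grid and running marching cubes.  A too-coarse
    grid can miss or merge components, so the output mesh is NOT certified to
    be isotopic to the true surface (that certification is the paper's point)."""
    lo, hi = bounds
    xs = np.linspace(lo, hi, n)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"))
    values = f(grid[0], grid[1], grid[2])
    spacing = (xs[1] - xs[0],) * 3
    verts, faces, _, _ = marching_cubes(values, level=level, spacing=spacing)
    return verts + lo, faces  # shift vertices back into the original bounding box

# Example: the unit sphere x^2 + y^2 + z^2 - 1 = 0.
verts, faces = mesh_implicit(lambda x, y, z: x**2 + y**2 + z**2 - 1.0)
```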
This paper describes a novel mathematical model for segmenting the bone boundary in X-ray images that incorporates prior shape information into a geodesic active contour. The energy function of the model is minimized depend...
ISBN (print): 1581138881
We present an unusual algorithm involving classification trees, CARTwheels, in which two trees are grown in opposite directions so that they are joined at their leaves. This approach finds application in a new data mining task we formulate, called redescription mining. A redescription is a shift of vocabulary, or a different way of communicating information about a given subset of data; the goal of redescription mining is to find subsets of data that afford multiple descriptions. We highlight the importance of this problem in domains such as bioinformatics, which exhibit an underlying richness and diversity of data descriptors (e.g., genes can be studied in a variety of ways). CARTwheels exploits the duality between class partitions and path partitions in an induced classification tree to model and mine redescriptions. It helps integrate multiple forms of characterizing datasets, situates the knowledge gained from one dataset in the context of others, and harnesses high-level abstractions for uncovering cryptic and subtle features of data. Algorithm design decisions, implementation details, and experimental results are presented.
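As a toy illustration of what a redescription is (not of the CARTwheels algorithm itself, which grows paired classification trees), the sketch below brute-forces pairs of single descriptors from two vocabularies whose supporting object sets nearly coincide. The function name simple_redescriptions, the Jaccard threshold, and the pathway/cluster toy data are all hypothetical.

```python
from itertools import product

def simple_redescriptions(desc_a, desc_b, min_jaccard=0.9):
    """Report pairs of descriptors, one from each vocabulary, whose supporting
    object sets nearly coincide (Jaccard similarity >= min_jaccard).

    desc_a, desc_b: dicts mapping descriptor name -> set of object ids.
    """
    results = []
    for (name_a, objs_a), (name_b, objs_b) in product(desc_a.items(), desc_b.items()):
        union = objs_a | objs_b
        if not union:
            continue
        jaccard = len(objs_a & objs_b) / len(union)
        if jaccard >= min_jaccard:
            results.append((name_a, name_b, jaccard))
    return sorted(results, key=lambda r: -r[2])

# Hypothetical toy data: genes described by pathway membership vs. expression cluster.
pathways = {"pathway_P1": {1, 2, 3, 4}, "pathway_P2": {5, 6}}
clusters = {"cluster_C1": {1, 2, 3, 4}, "cluster_C2": {6, 7}}
print(simple_redescriptions(pathways, clusters, min_jaccard=0.8))
# -> [('pathway_P1', 'cluster_C1', 1.0)]
```

CARTwheels searches this space far more cleverly, alternating tree inductions so that the leaves of one tree define the classes for the other; the brute-force pairing above only conveys the notion of two vocabularies describing the same subset.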
This paper discusses algorithmic and implementation aspects of a remote visualization system, which adaptively decomposes and maps the visualization pipeline onto a wide-area network. Visualization pipeline modules su...
ISBN (print): 0780384032
We investigate in this paper the standard k-means clustering algorithm and give an improved version that selects better initial centroids for the algorithm to begin with. First, we evaluate the distances between every pair of data points; then we try to find those data points that are similar; and finally we construct initial centroids from these data points. Different initial centroids lead to different results, and if the initial centroids are consistent with the distribution of the data, better clustering can be obtained. According to our experimental results, the improved k-means clustering algorithm achieves higher accuracy than the original one.
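As a rough sketch of this kind of initialization (one plausible reading of the heuristic described above, not necessarily the authors' exact procedure), the following Python builds each initial centroid from a group of mutually close points; initial_centroids and group_size are illustrative names.

```python
import numpy as np

def initial_centroids(X, k):
    """Construct k initial centroids by repeatedly gathering a group of
    mutually close points and taking the group mean.  Sketch only: one
    interpretation of 'find similar data points, then build centroids'."""
    X = np.asarray(X, dtype=float)
    assert len(X) >= 2 * k, "sketch assumes at least 2*k points"
    remaining = list(range(len(X)))
    group_size = max(2, len(X) // k)
    centroids = []
    for _ in range(k):
        pts = X[remaining]
        # pairwise distances among the points not yet assigned to any group
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        # seed the group with the closest pair, then grow it greedily
        i, j = np.unravel_index(np.argmin(d), d.shape)
        group = {int(i), int(j)}
        while len(group) < min(group_size, len(remaining)):
            dist_to_group = d[list(group)].min(axis=0)
            dist_to_group[list(group)] = np.inf
            group.add(int(np.argmin(dist_to_group)))
        centroids.append(pts[list(group)].mean(axis=0))
        remaining = [r for idx, r in enumerate(remaining) if idx not in group]
    return np.vstack(centroids)
```

The resulting centroids could then be passed to a standard k-means implementation in place of random initialization, e.g. scikit-learn's KMeans via its init parameter with n_init=1.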
ISBN (print): 0780384032
It is important to study the relationship between pruning algorithms and the selection of parameters in fuzzy decision tree generation for controlling the tree size. This paper selects a pruning algorithm and a method of fuzzy decision tree generation to experimentally demonstrate this relationship on several existing databases. It aims to give some guidelines on how to select an appropriate parameter value in fuzzy decision tree generation. When a suitable parameter value is selected, pruning for fuzzy decision tree generation appears to be unnecessary.
Let S be a set of n points in ℝ². Given an integer 1 ≤ k ≤ n, we wish to find a maximally separated subset I ⊆ S of size k; this is a subset for which the minimum among the (k choose 2) pairwise distances between its points is as large as possible. The decision problem associated with this problem is to determine whether there exists I ⊆ S, |I| = k, so that all (k choose 2) pairwise distances in I are at least 2, say. This problem can also be formulated in terms of disk-intersection graphs: let D be the set of unit disks centered at the points of S. The disk-intersection graph G of D connects pairs of disks by an edge if they have nonempty intersection. I is then the set of centers of disks that form an independent set in the graph G. This problem is known to be NP-complete if k is part of the input. In this paper we first present a linear-time approximation algorithm for any constant k. Next we give O(n^{4/3} polylog(n)) exact algorithms for the cases k = 3 and k = 4. We also present a simpler n^{O(√k)}-time algorithm (as compared with the recent algorithm in [5]) for arbitrary values of k.
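For small, constant k the decision and optimization versions can be checked naively by enumerating all k-subsets. The sketch below is illustrative only: it is exponential in k and unrelated to the paper's near-linear and subexponential algorithms; the function names and the example points are made up here to make the problem statement concrete.

```python
from itertools import combinations
from math import dist

def max_min_separation(points, k):
    """Naive exact solver (assumes k >= 2): return the k-subset maximizing the
    minimum pairwise distance, together with that distance.  Runs in time
    O(n^k), so it is only sensible for small constant k."""
    best_subset, best_sep = None, -1.0
    for subset in combinations(points, k):
        sep = min(dist(p, q) for p, q in combinations(subset, 2))
        if sep > best_sep:
            best_subset, best_sep = subset, sep
    return best_subset, best_sep

def separated_subset_exists(points, k, r=2.0):
    """Decision version: is there a k-subset whose pairwise distances are all >= r?
    Equivalently, an independent set of size k in the intersection graph of
    disks of radius r/2 centered at the points."""
    _, sep = max_min_separation(points, k)
    return sep >= r

pts = [(0, 0), (1, 0), (3, 0), (0, 3)]
print(separated_subset_exists(pts, 3))  # True: {(0,0), (3,0), (0,3)} is pairwise >= 2 apart
```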