Segmentation of connected handwritten Chinese characters is a very difficult task in document image analysis. In this paper, a novel algorithm based on stroke analysis and background thinning is proposed to segment co...
The use of plane graphs for the description of image structure and shape representation poses two problems: (1) how to obtain the set of vertices, the set of edges and the incidence relation of the graph, and (2) how...
ISBN: (Print) 3540679251
An association rule typically aims to discover a dependency among attributes with respect to externally defined parameters such as a support threshold and a confidence threshold. As an important database discovery method, the kernel of association rule mining is the acquisition of large itemsets; representing the support and confidence of items that are purchased together, as in the supermarket domain, is an important task in data mining. In this paper, a novel limited concept lattice is first proposed for modeling transaction data itemsets. A concept lattice is a form of concept hierarchy in which each node represents a subset of objects (extent) together with their common properties (intent). The Hasse diagram of the lattice represents the generalization/specialization relationship between concepts. Therefore, the lattice and Hasse diagram corresponding to a set of objects described by some properties can be used as an effective tool for symbolic data analysis and knowledge acquisition. Based on this lattice structure, an algorithm, LCLL, is presented to incrementally generate large itemsets visually. The algorithm works by attaching frequency information to each lattice node, so that the corresponding support measure can be obtained from the limited lattice. In addition, the edges in the Hasse diagram of the new lattice must be modified: the generator of a new node is always its child, and the original parent of the generator is updated. When deletions reduce a node's frequency value to zero, the node and the edges between its parents and children are not deleted but only tagged. The key point lies in adding edges when searching for a new node's parents; the large itemsets can then be obtained by judging whether the cardinality and frequency value of a node exceed the thresholds, and association rules can be identified accordingly. The approach is especially efficient when the database is dynamically updated (insertion, deletion, or simultaneous insertion and deletion).
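As a rough illustration of the idea of attaching frequency information to lattice nodes and reading large itemsets off a size-limited structure, the Python sketch below counts itemset frequencies incrementally under an itemset-size cap. The class and method names (LimitedLattice, add_transaction, large_itemsets) and the cap itself are illustrative assumptions; the sketch does not reproduce the LCLL algorithm or its Hasse-diagram edge maintenance.

```python
# Hedged sketch of incremental large-itemset counting on a size-limited,
# lattice-like structure. Names are illustrative, not from the paper.
from itertools import combinations
from collections import defaultdict

class LimitedLattice:
    def __init__(self, max_itemset_size=3):
        # intent (frozenset of items) -> frequency (support count)
        self.freq = defaultdict(int)
        self.max_size = max_itemset_size   # "limited": cap on intent size

    def add_transaction(self, items):
        items = sorted(set(items))
        # attach frequency information to every node whose intent is a
        # subset of the transaction, up to the size limit
        for k in range(1, min(len(items), self.max_size) + 1):
            for subset in combinations(items, k):
                self.freq[frozenset(subset)] += 1

    def remove_transaction(self, items):
        # deletion: decrement counts; nodes reaching zero are kept but
        # effectively "tagged" by their zero frequency
        items = sorted(set(items))
        for k in range(1, min(len(items), self.max_size) + 1):
            for subset in combinations(items, k):
                self.freq[frozenset(subset)] -= 1

    def large_itemsets(self, min_support, n_transactions):
        # an itemset is large if its relative support reaches the threshold
        return {iset: c for iset, c in self.freq.items()
                if c / n_transactions >= min_support}

transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
lat = LimitedLattice()
for t in transactions:
    lat.add_transaction(t)
print(lat.large_itemsets(min_support=0.5, n_transactions=len(transactions)))
```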
In this paper we address the problem of reliably fitting parametric and semi-parametric models to spots in high density spot array images obtained in gene expression experiments. The goal is to measure the amount of l...
ISBN: (Print) 0769507506
A technique for the construction of invariant features of 3D sensor data is proposed. Invariant grey-scale features are characteristics of grey-scale sensor data which remain constant if the sensor data is transformed according to the action of a transformation group. The proposed features are capable of recognizing 3D objects independently of their orientation and position, which can be used, e.g., in medical image analysis. The computation of the proposed invariants needs no preprocessing such as filtering, segmentation, or registration. After introducing the general theory for the construction of invariant features of 3D sensor data, the paper focuses on the special case of 3D Euclidean motion, which is typical for rigid 3D objects. Because we use functions of local support, the calculated invariants are also robust with respect to independent Euclidean motion, articulated objects, and even topological deformations. The complexity of the method is linear in the data-set size, which may still be too high for large 3D objects; therefore, approaches for accelerating the computation are given. First experimental results for artificial 3D objects are presented to demonstrate the invariance properties of the proposed features.
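The group-averaging idea behind such invariants can be illustrated with the Python sketch below, which averages a local kernel over randomly sampled Euclidean motions of a voxel volume. The kernel, the nearest-neighbour lookup, and the sampling scheme are assumptions made for illustration and are not the paper's exact construction.

```python
# Hedged sketch of a group-average (Haar-integral style) invariant for a
# 3D voxel volume: average a local kernel over sampled Euclidean motions.
import numpy as np

def sample_value(volume, p):
    # nearest-neighbour lookup with clipping keeps the sketch short;
    # a real implementation would interpolate
    idx = np.clip(np.round(p).astype(int), 0, np.array(volume.shape) - 1)
    return volume[tuple(idx)]

def euclidean_invariant(volume, offset=(3.0, 0.0, 0.0), n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    acc = 0.0
    for _ in range(n_samples):
        # random rotation via QR decomposition of a Gaussian matrix
        q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(q) < 0:
            q[:, 0] *= -1.0
        # random translation inside the volume
        t = rng.uniform(0, np.array(volume.shape) - 1)
        a = sample_value(volume, t)
        b = sample_value(volume, t + q @ np.array(offset))
        # kernel f(X) = X(x) * X(x + R @ offset); averaging over (R, t)
        # approximates the group integral, yielding a motion-invariant value
        acc += a * b
    return acc / n_samples

vol = np.random.default_rng(1).random((32, 32, 32))
print(euclidean_invariant(vol))
```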
Because of the wide variation in gray levels and particle dimensions, the presence of many small gravel objects in the background, and the corruption of the image by noise, it is difficult to segment gravel objects. In this paper, we develop a partial entropy method and succeed in segmenting gravel objects. We give the entropy principles and the corresponding calculation methods. Moreover, we use the minimum entropy error to automatically select a threshold for segmenting the image, and we introduce a filtering method based on mathematical morphology. Segmentation experiments performed with different window dimensions on a group of gravel images demonstrate that the method achieves a high segmentation rate and low noise sensitivity.
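A minimal Python sketch of entropy-based threshold selection followed by a morphological clean-up is given below. It uses a Kapur-style maximum-entropy criterion as a stand-in, since the paper's partial-entropy and minimum-entropy-error formulas are not reproduced here, and the function names and window parameter are illustrative.

```python
# Hedged sketch: entropy-based thresholding plus morphological clean-up,
# standing in for the paper's partial-entropy / minimum-entropy-error method.
import numpy as np
from scipy import ndimage

def entropy_threshold(image, bins=256):
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, bins - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0, q1 = p[:t] / p0, p[t:] / p1
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        score = h0 + h1          # entropy sum over background and objects
        if score > best_score:
            best_score, best_t = score, t
    return edges[best_t]

def segment_gravel(image, window=3):
    t = entropy_threshold(image)
    mask = image > t
    # morphological opening removes small background specks and noise;
    # the window size plays the role of the experiments' window dimension
    return ndimage.binary_opening(mask, structure=np.ones((window, window)))

img = np.random.default_rng(0).random((128, 128))
print(segment_gravel(img).sum())
```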
In an earlier work, we presented a displacement field estimation algorithm based on a relaxed smoothness constraint; this algorithm can preserve discontinuities in the displacement field to some extent. When the image data is irregular and the images are noisy, the method produces some large residual errors in the residual map. In this paper we propose an improved displacement field estimation algorithm which uses the displacement information obtained by block matching to modify the matching process. Experimental results show that this leads to smaller residual error maps without introducing block artefacts, as would happen with simple block matching when there is much noise in the images. Moreover, the displacement field estimated with this method is more consistent than that of a method without the additional block matching.
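The block-matching step that supplies the guiding displacement information might look like the Python sketch below; the SSD criterion, block size, and search range are illustrative assumptions, and the relaxed-smoothness estimation itself is not shown.

```python
# Hedged sketch of per-block displacement estimation by exhaustive SSD search.
import numpy as np

def block_match(frame_a, frame_b, block=8, search=4):
    h, w = frame_a.shape
    flow = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = frame_a[y:y + block, x:x + block]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue
                    cand = frame_b[yy:yy + block, xx:xx + block]
                    ssd = np.sum((ref.astype(float) - cand) ** 2)
                    if ssd < best:           # keep the lowest-SSD displacement
                        best, best_d = ssd, (dy, dx)
            flow[by, bx] = best_d
    return flow   # per-block displacements used to guide the matching process

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, shift=(2, 1), axis=(0, 1))    # second frame shifted by (2, 1)
print(block_match(a, b)[0, 0])               # expect a displacement near (2, 1)
```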
ISBN: (Print) 0769507506
An algorithm to group edge points into digital line segments with the Hough transformation is described. The edge points are mapped onto the parameter domain, discretized at specific intervals, on which peaks appear that represent different line segments. By modeling each peak as a Gaussian function in the parameter domain, a region to which the edge points are supposed to be mapped is determined. The edge points are then grouped and the parameters of a line segment are computed. For edges comprising multiple line segments, a sequential Hough transformation that detects peaks one by one in the parameter domain is applied, and the points from the region around each peak are grouped, so that all line segments are described. Experiments on both generated edges disturbed by different noise levels and real images taken from an indoor environment show the robustness of the algorithm.
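A simplified Python sketch of the sequential peak detection and point grouping is shown below; it replaces the Gaussian peak model with a fixed window around each accumulator peak, which is an assumption made only to keep the example short, and all parameter names are illustrative.

```python
# Hedged sketch of sequential Hough peak detection and edge-point grouping.
import numpy as np

def sequential_hough(points, n_theta=180, rho_res=1.0, n_lines=2, window=2.0):
    points = np.asarray(points, dtype=float)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    segments = []
    remaining = points.copy()
    for _ in range(n_lines):
        if len(remaining) == 0:
            break
        # map each point onto the discretized (rho, theta) parameter domain
        rho = remaining[:, 0:1] * np.cos(thetas) + remaining[:, 1:2] * np.sin(thetas)
        rho_idx = np.round(rho / rho_res).astype(int)
        offset = rho_idx.min()
        acc = np.zeros((rho_idx.max() - offset + 1, n_theta), dtype=int)
        cols = np.tile(np.arange(n_theta), (len(remaining), 1))
        np.add.at(acc, (rho_idx - offset, cols), 1)      # accumulate votes
        r_pk, t_pk = np.unravel_index(acc.argmax(), acc.shape)
        # group the points mapped near the peak (fixed window, not Gaussian)
        near = np.abs(rho[:, t_pk] / rho_res - (r_pk + offset)) <= window
        segments.append((thetas[t_pk], (r_pk + offset) * rho_res, remaining[near]))
        remaining = remaining[~near]     # sequential: remove grouped points
    return segments

pts = [(x, 2 * x + 1) for x in range(20)] + [(x, 30 - x) for x in range(20)]
for theta, rho, grouped in sequential_hough(pts):
    print(f"theta={theta:.2f} rad, rho={rho:.1f}, points={len(grouped)}")
```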
ISBN: (Print) 0769507506
A framework for the geometric interpretation of a single polarization image taken of specular reflecting objects is presented. The task of recovering 3D shape information of specular objects from intensity images is a difficult, if not impossible, one, as no intensity-based features are available. Polarization analysis provides additional features to overcome this problem. In particular, the orientation of polarized light is measured, from which constraints on the object surface can be deduced. We show how to perform shape analysis from a single polarization image. The method is applied to measuring local deformations. Along with a discussion of polarization and related geometric issues, algorithms are presented and substantiated by experiments.
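As a hedged illustration of how the orientation of polarized light can be measured and turned into a surface constraint, the Python sketch below estimates the angle of polarization from intensity images taken through a linear polarizer at three orientations via the linear Stokes parameters; the 90-degree relation to the surface azimuth under specular reflection is a standard assumption, not a detail taken from this paper.

```python
# Hedged sketch: angle of polarization from three polarizer orientations,
# turned into a constraint on the surface azimuth (assumed specular case).
import numpy as np

def polarization_angle(i0, i45, i90):
    # linear Stokes parameters from polarizer angles 0, 45 and 90 degrees
    s1 = i0 - i90
    s2 = 2.0 * i45 - i0 - i90
    return 0.5 * np.arctan2(s2, s1)          # angle of polarization per pixel

def surface_azimuth_constraint(i0, i45, i90):
    aop = polarization_angle(i0, i45, i90)
    # for specular reflection the projected surface normal is assumed to lie
    # perpendicular to the measured polarization orientation
    return (aop + np.pi / 2.0) % np.pi

rng = np.random.default_rng(0)
i0, i45, i90 = (rng.random((8, 8)) for _ in range(3))
print(surface_azimuth_constraint(i0, i45, i90).shape)
```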