A modified version of the conventional D-S evidence combination rule is presented. A consensus index of evidence, based on distance measures between bodies of evidence, is introduced to modify the basic probability assignments (BPAs) of hypotheses together with evidence sufficiency and evidence importance. Using the D-S combination and new decision rules, the different sets of BPAs for each hypothesis are calculated. A numerical fault-diagnosis example demonstrates the validity of the method by comparison with other methods.
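As background for the combination step above, here is a minimal sketch of the classical (unmodified) Dempster rule for fusing two BPAs; the consensus-index weighting the abstract describes is not reproduced, and the fault hypotheses F1/F2 are invented for illustration.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic probability assignments with Dempster's rule.
    Focal elements are frozensets; mass on the empty intersection
    (the conflict) is renormalized away."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict  # normalization factor
    return {s: w / k for s, w in combined.items()}

# two bodies of evidence over invented fault hypotheses F1, F2
A, B = frozenset({"F1"}), frozenset({"F2"})
m1 = {A: 0.6, B: 0.1, A | B: 0.3}
m2 = {A: 0.5, B: 0.2, A | B: 0.3}
m = dempster_combine(m1, m2)
```

The combined BPA again sums to one, with mass on {F1} reinforced because both sources favour it.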
It is well known that machine learning from examples is an effective method for solving non-linear classification problems. A new dynamic method of machine learning from transition examples is given in this paper. This metho...
Rough set theory, proposed by Pawlak, is a complementary generalization of classical set theory. The relations between rough sets and algebraic systems endowed with two binary operations, such as rings, groups and semigroups, have already been considered. Wu, Xie and Cao defined a pair of rough approximation operators based on a subspace. In this paper we study the upper (lower) approximations further. We propose the concepts of W-upper (lower) rough subspaces in an approximation space and give some properties of the upper (lower) and rough subspaces. Some characterizations of the upper (lower) rough subspace are expressed with the addition and intersection operations of subspaces. Counterexamples are also offered to illustrate the relevant results.
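For readers new to the approximation operators involved, a minimal sketch of Pawlak's classical lower and upper approximations over a finite universe follows; the subspace-based operators of Wu, Xie and Cao are not reproduced, and the partition below is invented.

```python
def approximations(blocks, X):
    """Pawlak lower/upper approximations of X with respect to the
    partition `blocks` induced by an equivalence relation."""
    X = set(X)
    lower, upper = set(), set()
    for b in blocks:
        b = set(b)
        if b <= X:    # block entirely inside X: certainly in X
            lower |= b
        if b & X:     # block meeting X: possibly in X
            upper |= b
    return lower, upper

# invented partition of the universe {1, ..., 5}
blocks = [{1, 2}, {3, 4}, {5}]
lo, up = approximations(blocks, {1, 2, 3})
```

Here {1, 2, 3} is rough: its lower approximation is {1, 2} and its upper approximation is {1, 2, 3, 4}.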
ISBN: (print) 9781618391100
Dependency structure, as a first step towards semantics, is believed to be helpful for improving translation quality. However, previous works on dependency-structure-based models typically resort to insertion operations to complete translations, which makes it difficult to specify ordering information in translation rules. In the model of this paper, we handle this problem by directly specifying the ordering information in head-dependents rules, which represent the source side as head-dependents relations and the target side as strings. The head-dependents rules require only the substitution operation, so our model needs none of the heuristics or separate ordering models that previous works use to control the word order of translations. Large-scale experiments show that our model performs well on long-distance reordering, and outperforms the state-of-the-art constituency-to-string model (+1.47 BLEU on average) and the hierarchical phrase-based model (+0.46 BLEU on average) on two Chinese-English NIST test sets, without resorting to phrases or parse forests. For the first time, a source-dependency-structure-based model catches up with and surpasses the state-of-the-art translation models.
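To make the substitution-only idea concrete, here is a toy sketch: the target side of a head-dependents rule is a string with dependent slots, so word order is fixed by the rule itself and translations are completed purely by substitution. The rule and sentence below are invented for illustration and do not come from the paper.

```python
def apply_rule(target_template, slot_fills):
    """Substitute translated dependents into the target-side string
    of a head-dependents rule. Ordering is fully specified by the
    template, so no separate reordering step is needed."""
    out = target_template
    for slot, text in slot_fills.items():
        out = out.replace(slot, text)
    return out

# hypothetical rule for the head 举行 ("hold") with dependents
# x1 (time NP) and x2 (event NP); the target side makes the
# long-distance reordering explicit: "x2 was held x1"
translation = apply_rule("x2 was held x1",
                         {"x1": "yesterday", "x2": "the meeting"})
```

A full decoder would pick rules and dependent translations by model score; this only shows why substitution into an ordered template removes the need for insertion heuristics.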
In this paper, we propose a learning-based approach to estimating pixel disparities from the motion information extracted from input monoscopic video sequences. We represent each video frame with superpixels, and extract motion features from the superpixels and the frame boundary. These motion features account for the motion pattern of each superpixel as well as camera motion. In the learning phase, given a pair of stereoscopic video sequences, we employ a state-of-the-art stereo matching method to compute the disparity map of each frame as ground truth. A multi-label SVM is then trained on the estimated disparities and the corresponding motion features. In the testing phase, we use the learned SVM to predict the disparity of each superpixel in a monoscopic video sequence. Experimental results show that the proposed method achieves a low error rate in disparity estimation.
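The train-then-predict pipeline can be sketched compactly. The paper uses a multi-label SVM; the stand-in below substitutes a trivial nearest-centroid classifier in plain Python, with invented 2-D motion features (mean flow dx, dy per superpixel) and disparities quantized to integer labels, purely to show the shape of the pipeline.

```python
def fit_centroids(features, labels):
    """Average the feature vectors of each disparity label."""
    sums, counts = {}, {}
    for f, y in zip(features, labels):
        sums[y] = [a + b for a, b in zip(sums.get(y, [0.0] * len(f)), f)]
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in s] for y, s in sums.items()}

def predict(centroids, f):
    """Assign the label of the nearest centroid (squared distance)."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, f))
    return min(centroids, key=lambda y: dist2(centroids[y]))

# invented training data: small motion -> far (label 0), large -> near (3)
train_x = [[0.1, 0.0], [0.2, 0.1], [2.0, 1.5], [1.8, 1.2]]
train_y = [0, 0, 3, 3]
cents = fit_centroids(train_x, train_y)
label = predict(cents, [1.9, 1.4])  # a test superpixel's motion feature
```

In the actual method this classifier is a multi-label SVM and the labels come from stereo-matched ground-truth disparities.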
Image authentication is usually approached by checking the preservation of some invariant features, which are expected to be both robust and discriminative, so that content-preserving operations are accepted while content-altering manipulations are rejected. However, most existing features have not achieved convincing performance, owing to insufficient experiments and an over-emphasis on robustness. Motivated by the sparse coding strategy discovered in the primary visual cortex, we explore the possibility of using sparse coding coefficients for image authentication. Through extensive experiments, we find that the proposed feature offers strong discrimination as well as robustness, which indicates the effectiveness of sparse coding as a new invariant feature for image authentication.
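The abstract does not say which algorithm computes the sparse coefficients; orthogonal matching pursuit (OMP) is one standard way to obtain a sparse code over a dictionary, sketched here on a toy orthonormal dictionary rather than one learned from images.

```python
import numpy as np

def omp(D, x, k):
    """Orthogonal matching pursuit: greedily select up to k atoms
    (columns of D) and least-squares fit their coefficients."""
    residual, idx, coef = x.astype(float).copy(), [], np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))  # best-matching atom
        if j not in idx:
            idx.append(j)
        coef, *_ = np.linalg.lstsq(D[:, idx], x, rcond=None)
        residual = x - D[:, idx] @ coef
    code = np.zeros(D.shape[1])
    code[idx] = coef
    return code

D = np.eye(4)[:, :3]                 # toy dictionary, orthonormal atoms
x = np.array([0.0, 2.0, 0.0, 0.0])   # signal = 2 * second atom
code = omp(D, x, 1)                  # sparse coefficients as the feature
```

The sparse coefficient vector `code` is the kind of representation whose stability under content-preserving operations the paper evaluates.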
Part-based representation has become essential for describing objects, and different parts of a shape contribute differently to shape perception. Focusing on the contributions of local parts to the perception of global shapes, we introduce an importance measure of shape parts from the viewpoint of their “reconstructability”. Given one shape part as a local observation, we estimate the global object shape through a proposed novel shape reconstruction method. The reconstruction quality is determined by part geometry variation and part uniqueness. A conditional entropy formulation is then introduced to measure part importance, taking into account the reconstruction quality, part uniqueness and variation. Extensive experiments demonstrate the benefit of the proposed part importance in object detection applications. Moreover, psychological experiments are conducted to further support the consistency of our importance measure with human perception of shape parts.
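The exact conditional entropy formulation is not given in the abstract; the standard empirical H(Y|X), which such a measure builds on, can be sketched as follows. The (part, shape) observations below are invented: a part that uniquely identifies its shape has zero conditional entropy, while an ambiguous part does not.

```python
import math
from collections import Counter

def conditional_entropy(pairs):
    """Empirical H(Y|X) in bits from a list of (x, y) observations."""
    n = len(pairs)
    joint = Counter(pairs)                     # counts of (x, y)
    marg_x = Counter(x for x, _ in pairs)      # counts of x alone
    h = 0.0
    for (x, y), c in joint.items():
        # p(x, y) * log2 p(y | x), summed with a minus sign
        h -= (c / n) * math.log2(c / marg_x[x])
    return h

# part p0 always co-occurs with one shape: fully informative
h0 = conditional_entropy([("p0", "cat"), ("p0", "cat")])
# part p1 is split between two shapes: one bit of remaining uncertainty
h1 = conditional_entropy([("p1", "cat"), ("p1", "dog")])
```

A low conditional entropy given a part corresponds to the intuition that observing that part pins down the global shape.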
Cross-domain text categorization aims to adapt knowledge learnt from a labeled source domain to an unlabeled target domain, where the documents from the source and target domains are drawn from different dis...