The motivation for this work was that little is known about the construction of asymmetric quantum error-correcting codes from linear codes over finite rings. In this work, we construct asymmetric quantum error-correcting codes from linear codes over the finite rings Z_{p^2}, where p is any prime. Furthermore, we present explicit parameters for infinite families of asymmetric quantum error-correcting codes derived from these linear codes over finite rings.
TWSVM (Twin Support Vector Machine) is based on the idea of GEPSVM (proximal SVM based on generalized eigenvalues): it determines two nonparallel planes by solving two related SVM-type problems, so its computing cost in the training phase is about one quarter that of the standard SVM. In addition to keeping the superior characteristics of GEPSVM, TWSVM significantly outperforms GEPSVM in classification performance. To further improve the speed and accuracy of TWSVM, this paper proposes a twin support vector machine based on rough sets. First, rough set theory is used to reduce the attributes; then TWSVM is trained on, and used to predict, the reduced datasets. The experimental results and data analysis show that the proposed algorithm achieves higher accuracy and better efficiency than the traditional twin support vector machine.
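As an illustration of the pipeline described above, the following sketch performs a simple greedy, dependency-based rough-set attribute reduction and then trains a classifier on the reduced attributes. It is a minimal sketch under assumptions: the toy decision table is invented, and a standard scikit-learn SVC stands in for TWSVM, which is not available in common libraries; it is not the paper's algorithm.

```python
import numpy as np
from sklearn.svm import SVC

def dependency(X, y, attrs):
    """Rough-set dependency of the decision on a subset of (discrete) attributes:
    the fraction of objects whose equivalence class has a single decision value."""
    if not attrs:
        return 0.0
    keys = [tuple(row) for row in X[:, attrs]]
    consistent = 0
    for key in set(keys):
        idx = [i for i, k in enumerate(keys) if k == key]
        if len(set(y[i] for i in idx)) == 1:   # class is pure -> in the positive region
            consistent += len(idx)
    return consistent / len(y)

def quickreduct(X, y):
    """Greedy forward selection: add the attribute giving the largest dependency gain."""
    full = dependency(X, y, list(range(X.shape[1])))
    reduct = []
    while dependency(X, y, reduct) < full:
        gains = [(dependency(X, y, reduct + [a]), a)
                 for a in range(X.shape[1]) if a not in reduct]
        best_gain, best_attr = max(gains)
        reduct.append(best_attr)
    return reduct

# Toy discrete decision table (assumed); columns 1 and 2 are redundant for the decision.
X = np.array([[0, 1, 0], [0, 1, 1], [1, 0, 0], [1, 0, 1], [1, 1, 0], [1, 1, 1]])
y = np.array([0, 0, 1, 1, 1, 1])

reduct = quickreduct(X, y)
clf = SVC(kernel="linear").fit(X[:, reduct], y)   # stand-in for the TWSVM classifier
print("reduct:", reduct, "train acc:", clf.score(X[:, reduct], y))
```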
At present, most granularity-based attribute reduction algorithms simply compute the granularity of knowledge, and repeated calculation increases the time complexity. The binary discernibility matrix expresses the discernibility information in binary form, which yields clear gains in both space and time efficiency over the traditional discernibility matrix. Building on granularity-based attribute reduction, a method is proposed to preprocess the dataset using the binary discernibility matrix. First, the core attributes and a minimal reduction are found. Then, the granularity measure is used to compute the importance of each attribute, and the most important attribute is added to the reduction set, thereby achieving attribute reduction. The example analysis shows that the method can effectively improve the performance of traditional attribute reduction algorithms and is a feasible approach to attribute reduction.
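The preprocessing step can be illustrated as follows: build the binary discernibility matrix (one row per pair of objects with different decisions, one column per condition attribute) and read off the core attributes as those that are the only 1 in some row. This is a minimal sketch; the toy decision table is an assumption, and the granularity-based importance computation is not shown.

```python
import numpy as np
from itertools import combinations

def binary_discernibility_matrix(X, y):
    """Rows: object pairs with different decisions; columns: condition attributes.
    An entry is 1 where the pair of objects takes different attribute values."""
    rows = []
    for i, j in combinations(range(len(y)), 2):
        if y[i] != y[j]:
            rows.append((X[i] != X[j]).astype(int))
    return np.array(rows)

def core_attributes(M):
    """An attribute is in the core iff some pair can only be discerned by it,
    i.e. its column holds the single 1 of some row."""
    core = set()
    for row in M:
        if row.sum() == 1:
            core.add(int(np.argmax(row)))
    return sorted(core)

# Toy decision table (assumed): 4 objects, 3 condition attributes, binary decision y.
X = np.array([[0, 0, 1], [0, 1, 1], [1, 1, 1], [1, 0, 0]])
y = np.array([0, 0, 1, 1])

M = binary_discernibility_matrix(X, y)
print("binary discernibility matrix:\n", M)
print("core attributes:", core_attributes(M))
```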
The traditional support vector machine suffers from slow training and high time consumption when dealing with large-scale datasets. This paper proposes a support vector extraction method based on clustering membership, which preprocesses the training dataset and extracts all likely support vectors for SVM training according to their memberships. Since the training data may be linearly or nonlinearly separable, this paper uses FCM and KFCM, respectively, to extract the support vectors. Experimental results show that the proposed method greatly improves training speed while maintaining classification accuracy.
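A rough sketch of the FCM-based extraction step, under assumptions: the abstract does not spell out the exact membership criterion, so points with no dominant membership (i.e., near the cluster boundary) are kept as candidate support vectors, and the KFCM variant is omitted. The data and thresholds below are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC

def fcm(X, c=2, m=2.0, iters=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means; returns the membership matrix U of shape (n_samples, c)."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)   # standard FCM membership update
        if np.abs(U_new - U).max() < tol:
            return U_new
        U = U_new
    return U

# Toy two-class data (assumed): two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2.0, 1.0, (200, 2)), rng.normal(2.0, 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)

U = fcm(X, c=2)
# Illustrative selection rule (an assumption, not the paper's exact criterion):
# points whose largest membership is not dominant lie near the cluster boundary
# and are the most likely support vectors.
keep = U.max(axis=1) < 0.9
clf = SVC(kernel="linear").fit(X[keep], y[keep])
print(f"kept {keep.sum()} of {len(X)} points, full-set accuracy {clf.score(X, y):.3f}")
```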
Document clustering is an effective way to detect clusters of similar documents. The objective of this paper is to develop an effective methodology for clustering food complaint documents with the guidance of ontology and fuzzy set theory. Here, the terms in the controlled vocabulary of our food ontology describe the hazards in dairy products and are used to guide feature selection. Then, a fuzzy compatibility relation is constructed in the k-dimensional semantic space, and a fuzzy equivalence relation is obtained from it. Finally, a suitable λ-cut value is selected to determine the best number of clusters. Several sets of documents have been tested, and the results show that our methodology is effective and accurate.
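The last two steps can be sketched as follows, assuming the documents are already represented as term-weight vectors in the k-dimensional semantic space: a fuzzy compatibility relation is built from pairwise similarities, its max-min transitive closure gives a fuzzy equivalence relation, and a λ-cut of that relation yields hard clusters. The toy vectors and the λ value are assumptions.

```python
import numpy as np

def compatibility_relation(V):
    """Fuzzy compatibility relation (reflexive, symmetric) from cosine similarities,
    clipped to [0, 1]."""
    norms = np.linalg.norm(V, axis=1, keepdims=True)
    R = np.clip((V @ V.T) / (norms * norms.T), 0.0, 1.0)
    np.fill_diagonal(R, 1.0)
    return R

def transitive_closure(R):
    """Iterate max-min composition until a fixed point: the result is a fuzzy
    equivalence relation (reflexive, symmetric, max-min transitive)."""
    while True:
        # (R o R)[i, j] = max_k min(R[i, k], R[k, j])
        R2 = np.max(np.minimum(R[:, :, None], R[None, :, :]), axis=1)
        R2 = np.maximum(R, R2)
        if np.allclose(R2, R):
            return R2
        R = R2

def lambda_cut_clusters(R, lam):
    """Threshold the equivalence relation at λ and read off the equivalence classes."""
    A = R >= lam
    labels = -np.ones(len(R), dtype=int)
    cluster = 0
    for i in range(len(R)):
        if labels[i] == -1:
            labels[A[i]] = cluster   # all objects related to i at level λ
            cluster += 1
    return labels

# Toy document vectors in a 3-term semantic space (assumed).
V = np.array([[0.9, 0.1, 0.0], [0.8, 0.2, 0.1], [0.1, 0.9, 0.8], [0.0, 0.8, 0.9]])
E = transitive_closure(compatibility_relation(V))
print("clusters at λ=0.8:", lambda_cut_clusters(E, 0.8))
```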
Clustering is one of the key techniques in data mining. Most traditional clustering methods focus on single-label datasets and do not address the multi-label problem, where a data point can belong to more than one cluster. In this paper, we propose a new multi-label clustering method based on a distance measure (MCDM). F-measure is a widely used clustering validation measure, but its value may fall outside the range [0, 1] in multi-label clustering. We revise the F-measure to guarantee that its value stays in [0, 1]. Experiments on benchmark datasets demonstrate that the proposed multi-label clustering method is effective and that the revision of the F-measure is reasonable.
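For reference, the standard (unrevised) class-weighted clustering F-measure is sketched below, together with a small multi-label example in which it exceeds 1; the paper's revised measure is not specified in the abstract and is not reproduced here. The toy class and cluster assignments are assumptions.

```python
def f_measure(classes, clusters, n):
    """Standard class-weighted clustering F-measure:
    F = sum_i (|class_i| / n) * max_j F(class_i, cluster_j),
    with F(i, j) the harmonic mean of precision |i∩j|/|j| and recall |i∩j|/|i|."""
    total = 0.0
    for ci in classes:
        best = 0.0
        for kj in clusters:
            inter = len(ci & kj)
            if inter:
                p, r = inter / len(kj), inter / len(ci)
                best = max(best, 2 * p * r / (p + r))
        total += len(ci) / n * best
    return total

# Multi-label toy example (assumed): 4 points, one of which carries both labels,
# so class sizes sum to more than n and the standard measure can exceed 1.
n = 4
classes  = [{0, 1, 2}, {2, 3}]          # point 2 belongs to both classes
clusters = [{0, 1, 2}, {2, 3}]          # clusters happen to match the classes
print(f_measure(classes, clusters, n))  # 1.25 > 1, motivating the revision
```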
The van der Waerden number W(r, k) is the least integer N such that every r-coloring of {1, 2, ..., N} contains a monochromatic arithmetic progression of length at least k. Rabung gave a method to obtain lower bounds on W(2, k) based on quadratic residues, and performed computations on all primes no greater than 20117. By improving the efficiency of Rabung's algorithm, we perform the computation for all primes up to 6 × 10^7, and obtain lower bounds on W(2, k) for k between 11 and 23.
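A brute-force sketch of the underlying quadratic-residue construction: color each n in {1, ..., p-1} by whether n is a quadratic residue modulo p, and check that no arithmetic progression of length k is monochromatic; a coloring that passes the check witnesses a lower bound on W(2, k). Rabung's procedure and the authors' efficiency improvements are not reproduced, and the small prime below is chosen only for illustration.

```python
def is_qr(n, p):
    """Quadratic residue test via Euler's criterion (p an odd prime, p does not divide n)."""
    return pow(n, (p - 1) // 2, p) == 1

def coloring_avoids_k_ap(p, k):
    """2-color {1, ..., p-1} by quadratic-residue character mod p and check, by brute
    force, that no arithmetic progression of length k is monochromatic."""
    N = p - 1
    color = [is_qr(n, p) for n in range(1, N + 1)]       # color[n-1] for n in 1..N
    for a in range(1, N + 1):
        for d in range(1, (N - a) // (k - 1) + 1):
            terms = [color[a - 1 + i * d] for i in range(k)]
            if all(terms) or not any(terms):
                return False                              # found a monochromatic k-AP
    return True

# Small verifiable case: the QR coloring of {1,...,6} mod 7 has no monochromatic 3-AP,
# witnessing W(2,3) > 6. Primes as large as those in the paper (up to 6×10^7) need a
# much faster algorithm than this brute force.
print(coloring_avoids_k_ap(7, 3))   # True
```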
In this paper, we propose a method to improve adaptive loop filter (ALF) efficiency with temporal prediction. For each frame, one of two sets of adaptive loop filter parameters is selected by rate-distortion optimization. The first set of ALF parameters is estimated by minimizing the mean square error between the original frame and the current reconstructed frame; the second set is the one used in the latest prior frame. The proposed algorithm is implemented in the HM3.0 software. Compared with the HM3.0 anchor, the proposed method achieves 0.4%, 0.3%, and 0.3% BD-rate reduction on average for the high-efficiency low-delay B, low-delay P, and random-access configurations, respectively. The encoding and decoding times increase by 1% and 2% on average, respectively.
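The selection between the two candidate parameter sets can be sketched schematically as a rate-distortion decision, J = D + λR, where D is the distortion after filtering with a candidate set and R the bits needed to signal it (reusing the previous frame's parameters costs little more than a flag). The names and numbers below are illustrative assumptions, not values from HM3.0.

```python
def rd_select(candidates, lam):
    """Pick the candidate with minimal rate-distortion cost J = D + λ·R."""
    return min(candidates, key=lambda c: c["distortion"] + lam * c["rate_bits"])

# Illustrative numbers (assumed): the newly estimated filter reduces distortion more,
# but signalling its coefficients costs many more bits than a reuse flag.
candidates = [
    {"name": "new_wiener_filter",     "distortion": 1000.0, "rate_bits": 120},
    {"name": "reuse_previous_frame",  "distortion": 1010.0, "rate_bits": 1},
]
for lam in (0.05, 0.2):
    print(lam, "->", rd_select(candidates, lam)["name"])
```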
Short texts are brief and have limited ability to describe concepts, so conventional text classification methods are not well suited to short text classification. This paper presents a new classification method fo...
Parallel web pages contain noisy, non-parallel sentences, and web page structure alone is of limited use for the sentence alignment task on web page text. The most straightforward way of aligning sentences is to use a translation lexicon; however, a major obstacle to this approach is the lack of a dictionary for training. This paper presents a method for automatically aligning Mongolian-Chinese parallel text on the Web via the vector space model. The vector space model is an algebraic model that represents an object as a vector of identifiers, such as index terms. In this statistically based vector space model, a sentence is represented by a vector of keywords extracted from the text. The extracted keywords are content words, known as terms, and the weight of a term in a sentence vector is determined by the tf-idf method. The CHI statistic is used to compute the association between bilingual words. Once the term weights are determined, the similarity between sentence vectors is computed via the cosine measure. The experimental results indicate that the method is accurate and efficient enough to be applied without human intervention.
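A minimal sketch of the vector-space matching step, assuming the source sentences have already been projected into the shared term space (the CHI-based bilingual word association is not reproduced): sentences are represented as tf-idf vectors, and pairs with sufficiently high cosine similarity are taken as aligned. The example sentences and the threshold are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Source sentences already projected into the target-language term space
# (the CHI-based bilingual word association step is assumed, not shown).
source_projected = ["economy grow fast this year", "weather cold in winter"]
target_sentences = ["the economy grew fast this year",
                    "he bought a new car",
                    "the winter weather is very cold"]

# One tf-idf model over both sides so the vectors share the same term space.
vectorizer = TfidfVectorizer()
vectorizer.fit(source_projected + target_sentences)
S = vectorizer.transform(source_projected)
T = vectorizer.transform(target_sentences)

sim = cosine_similarity(S, T)   # sim[i, j]: source sentence i vs target sentence j
threshold = 0.3                 # assumed cut-off for accepting an aligned pair
for i, row in enumerate(sim):
    j = row.argmax()
    if row[j] >= threshold:
        print(f"source {i} <-> target {j}  (cosine={row[j]:.2f})")
```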