Untraceability is an important aspect of RFID security. This paper presents a BRS-based approach to modeling RFID untraceability: elements of an RFID protocol are represented as bigraphs, communications between elements as reaction rules, and untraceability itself as a behavioral congruence. We take an RFID air-interface protocol as a case study to show the usability of the approach.
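The reaction-rule idea above can be illustrated with a toy rewriting sketch. This is not a full bigraphical reactive system: states are simply sets of agent terms, a rule rewrites a matched sub-state, and the names (Reader, Tag, Msg, "pseudonym") are illustrative assumptions, not the paper's protocol. The untraceability intuition is that runs with different tags are observably equal.

```python
# Toy sketch of reaction-rule rewriting; names are hypothetical, not the
# paper's formal bigraph model.

def step(state, rules):
    """Apply the first applicable reaction rule (redex -> reactum)."""
    for lhs, rhs in rules:
        if lhs <= state:                  # the rule's redex occurs in the state
            return (state - lhs) | rhs    # rewrite it to the reactum
    return state

def run(tag_id):
    """One protocol round: the tag answers a query with a pseudonym."""
    rules = [
        (frozenset({("Reader", "query"), ("Tag", tag_id, "idle")}),
         frozenset({("Reader", "wait"), ("Tag", tag_id, "replied"),
                    ("Msg", "pseudonym")})),   # observable part hides tag_id
    ]
    state = frozenset({("Reader", "query"), ("Tag", tag_id, "idle")})
    state = step(state, rules)
    # an attacker observes only messages, not internal tag state
    return {t for t in state if t[0] == "Msg"}

# Untraceability, informally: runs of two different tags look identical.
assert run("tagA") == run("tagB")
```

In a real BRS the observation relation and the behavioral congruence would be defined over bigraph embeddings rather than a set filter; the sketch only conveys the rewrite-then-compare-observations shape of the argument.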
Diagnosability indicates whether a fault can be detected in finite time, and is an important property in model-based diagnosis. Because diagnosis depends on both sensor placement and the model, it is hard to choose in practice between placing more sensors for testing and computing more diagnosis pathways. This paper proposes a method that resolves this trade-off by defining key points: sensors that take priority for testing among those already placed. At these key points the observations can be optimized and diagnosis tests run efficiently. Experimental results indicate that the method distinguishes and identifies faults efficiently while reducing both the sensor cost and the computational complexity of diagnosis under normal behavior.
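One way to read "key points" is as a small sensor subset that still distinguishes every fault pair. A minimal greedy sketch under that assumption (the fault-signature data layout and the greedy criterion are ours, not the paper's formal definition):

```python
from itertools import combinations

def key_points(signatures):
    """Greedily pick sensors ('key points') that separate all fault pairs.

    signatures: {fault: {sensor: reading}} -- a hypothetical layout.
    """
    faults = list(signatures)
    pairs = set(combinations(faults, 2))
    sensors = sorted({s for sig in signatures.values() for s in sig})
    chosen = []
    while pairs:
        # pick the sensor separating the most still-confused fault pairs
        best = max(sensors, key=lambda s: sum(
            signatures[a].get(s) != signatures[b].get(s) for a, b in pairs))
        separated = {(a, b) for a, b in pairs
                     if signatures[a].get(best) != signatures[b].get(best)}
        if not separated:
            break                    # remaining faults are indistinguishable
        chosen.append(best)
        pairs -= separated
    return chosen

sigs = {
    "f1": {"s1": 0, "s2": 1, "s3": 0},
    "f2": {"s1": 0, "s2": 0, "s3": 0},
    "f3": {"s1": 1, "s2": 1, "s3": 0},
}
print(key_points(sigs))   # a subset of sensors that separates f1/f2/f3
```

Testing only the returned sensors (here s1 and s2, s3 carries no information) is what reduces sensor cost and diagnostic computation in the normal-behavior case.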
Filtering techniques are used in Constraint Satisfaction Problems to remove local inconsistencies in a preprocessing step or to prune the search tree efficiently during search. Local consistencies are used as pr...
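The filtering idea above can be sketched with the classic AC-3 arc-consistency algorithm: values with no support under any constraint are pruned from the domains before (or during) search. The data shapes below are illustrative, not tied to any particular solver API.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 filtering: prune values that have no support.

    domains: {var: set(values)}; constraints: {(x, y): relation(vx, vy)}.
    Returns False on a domain wipe-out (local inconsistency).
    """
    arcs = deque(constraints)
    while arcs:
        x, y = arcs.popleft()
        rel = constraints[(x, y)]
        # keep only x-values supported by some y-value
        supported = {vx for vx in domains[x]
                     if any(rel(vx, vy) for vy in domains[y])}
        if supported != domains[x]:
            domains[x] = supported
            if not supported:
                return False
            # a shrunken domain may invalidate supports elsewhere
            arcs.extend(a for a in constraints if a[1] == x)
    return True

# Example: x < y over {1,2,3} x {1,2,3}; filtering removes 3 from x, 1 from y.
doms = {"x": {1, 2, 3}, "y": {1, 2, 3}}
cons = {("x", "y"): lambda a, b: a < b,
        ("y", "x"): lambda b, a: a < b}
ac3(doms, cons)
```

After filtering, `doms` is `{"x": {1, 2}, "y": {2, 3}}`: every remaining value participates in at least one solution of the binary constraint, which is exactly the pruning effect the abstract describes.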
The current GPM algorithm needs many iterations to obtain process models with high fitness, which makes it time-consuming, and sometimes the result cannot be accepted. To mine higher-quality models in less time, we put forward a heuristic solution that adds a log-replay-based crossover operator and a mutation operator based on direct/indirect dependency relations. Experimental results on 25 benchmark logs are encouraging.
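Assuming GPM denotes genetic process mining, the two operators can be sketched on a deliberately simplified model: an individual is just a set of allowed direct-succession dependencies, replay fitness counts how many log successions the model allows, and mutation toggles a dependency actually observed in the log. The paper's individuals are richer than this; the names and shapes here are illustrative.

```python
import random

def replay_fitness(model, log):
    """Log-replay fitness: fraction of direct successions the model allows."""
    ok = total = 0
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            total += 1
            ok += (a, b) in model
    return ok / total if total else 1.0

def dependency_mutation(model, log, rng):
    """Mutate by toggling one dependency observed in the log.

    Restricting mutation to observed dependencies is the heuristic point:
    random edges unrelated to the log rarely improve fitness.
    """
    observed = {(a, b) for t in log for a, b in zip(t, t[1:])}
    dep = rng.choice(sorted(observed))
    return model ^ {dep}           # add it if missing, drop it if present

log = [list("abcd"), list("abd")]
model = {("a", "b"), ("b", "c")}
print(replay_fitness(model, log))          # 3 of 5 successions allowed
```

A log-replay-based crossover would similarly recombine two parents by preferring the dependencies that replay well on the log; the same `replay_fitness` score can drive that choice.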
Microarray data are highly redundant and noisy, and most genes are believed to be uninformative with respect to the studied classes, as only a fraction of genes may present distinct profiles for different classes of samples. This paper proposes a novel hybrid framework (NHF) for the classification of high-dimensional microarray data, combining information gain (IG), F-score, a genetic algorithm (GA), particle swarm optimization (PSO) and support vector machines (SVM). To identify a subset of informative genes embedded in a large dataset contaminated with high-dimensional noise, the method proceeds in three stages. In the first stage, IG builds a ranking list of features, and only the top 10% of the list is passed to the second stage. In the second stage, PSO performs feature selection in combination with an SVM, with F-score as part of the PSO objective function; the candidate feature subsets are filtered according to the first-stage ranking list, and the results seed the initialization of the GA. Both the SVM parameter optimization and the feature selection are performed dynamically by PSO. In the third stage, the GA initializes its population from the second-stage results and obtains an optimal feature selection using the GA integrated with the SVM; again, both SVM parameter optimization and feature selection are performed dynamically. The performance of the proposed method was compared with PSO-based, GA-based, ant colony optimization (ACO)-based and simulated annealing (SA)-based methods on five benchmark data sets: leukemia, colon, breast cancer, lung carcinoma and brain cancer. The numerical results and statistical analysis show that the proposed approach can select a subset of predictive genes from a large noisy data set and can capture the correlated structure in the data. In addition, NHF performs significantly better than th
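Stage 1 of the pipeline is the easiest to make concrete. A minimal sketch of IG ranking for discretised feature columns, using only the standard library (the later PSO/GA/SVM stages need a learning library and are omitted; `keep=0.10` mirrors the 10% cut described above):

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a class-label list."""
    n = len(labels)
    return -sum(c / n * log2(c / n) for c in Counter(labels).values())

def info_gain(feature_col, labels):
    """IG of one (discretised) feature column with respect to the class."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_col):
        sub = [y for x, y in zip(feature_col, labels) if x == v]
        cond += len(sub) / n * entropy(sub)
    return entropy(labels) - cond

def rank_features(X_cols, labels, keep=0.10):
    """Stage 1: rank columns by IG, keep only the top `keep` fraction."""
    ranked = sorted(range(len(X_cols)),
                    key=lambda j: info_gain(X_cols[j], labels), reverse=True)
    return ranked[:max(1, int(len(ranked) * keep))]
```

On a toy pair of columns where only the first tracks the class, `rank_features([[0, 0, 1, 1], [0, 1, 0, 1]], [0, 0, 1, 1], keep=0.5)` keeps just column 0, which is the behavior the filter stage relies on before the wrapper stages refine the subset.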
This paper introduces a qualitative spatial model, based on previously developed models, for representing and reasoning about spatial information described in natural language. The model integrates direction and...
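A common building block of such models is a qualitative direction relation between points. A sketch using the eight 45-degree cones (a standard choice in qualitative spatial reasoning, not necessarily the paper's exact calculus):

```python
from math import atan2, degrees

def direction(p, q):
    """Cone-based qualitative direction of point q as seen from point p.

    Eight relations plus 'same'; each cone spans 45 degrees around the
    compass axes.
    """
    if p == q:
        return "same"
    ang = degrees(atan2(q[1] - p[1], q[0] - p[0])) % 360
    sectors = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]
    return sectors[int(((ang + 22.5) % 360) // 45)]

print(direction((0, 0), (3, 3)))   # "NE"
```

Mapping natural-language phrases such as "north of" onto these symbolic relations, and composing them ("A north of B, B east of C"), is what the reasoning layer of such a model then operates on.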
Multi-label learning aims at predicting a proper label set for each unseen instance. Each instance in the dataset is associated with a set of predefined labels. Frequently used multi-label learning approaches choose an identical feature set to determine an instance's membership of each label.
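The contrast the abstract sets up, identical versus label-specific feature sets, can be sketched with a binary-relevance decomposition in which each label's classifier is trained on its own feature subset. The nearest-centroid classifier and data shapes are our simplification (both classes are assumed present for every label):

```python
def centroid_predict(X_train, y_col, X_test, feats):
    """Per-label nearest-centroid classifier on its own feature subset."""
    pos = [[x[f] for f in feats] for x, y in zip(X_train, y_col) if y]
    neg = [[x[f] for f in feats] for x, y in zip(X_train, y_col) if not y]

    def mean(rows):
        return [sum(col) / len(rows) for col in zip(*rows)]

    def d2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))

    cp, cn = mean(pos), mean(neg)
    return [int(d2([x[f] for f in feats], cp) < d2([x[f] for f in feats], cn))
            for x in X_test]

def binary_relevance(X_train, Y, X_test, feats_per_label):
    """Binary relevance where each label may use a different feature set."""
    return [centroid_predict(X_train, [row[l] for row in Y], X_test, fs)
            for l, fs in enumerate(feats_per_label)]

# Label 0 depends only on feature 0, label 1 only on feature 1.
X_train = [[0, 9], [0, 8], [1, 1], [1, 2]]
Y = [[1, 0], [1, 0], [0, 1], [0, 1]]
preds = binary_relevance(X_train, Y, [[0, 1]], feats_per_label=[[0], [1]])
```

Passing `feats_per_label=[[0, 1], [0, 1]]` instead recovers the identical-feature-set baseline the abstract mentions; label-specific approaches aim to do better when different labels are driven by different features.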
An integrity constraint is a formula that checks whether all necessary information has been explicitly provided; it can be added to an ontology to support data-centric applications. In this paper, a set of constraint axioms called IC-mapping axioms is stated. Based on these axioms, a special ontology with integrity constraints, suited to mapping ontology knowledge to data in relational databases, is defined; it is generated through our checking and modification procedure. Using traditional mapping approaches, it can be mapped to relational databases like a normal ontology. Unlike past work, a novel mapping approach named IC-based mapping is proposed for such special ontologies. The detailed algorithm is put forward and compared with other existing approaches used in Semantic Web applications. The results show that our method outperforms the traditional approaches.
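The core idea, an ontology-level integrity constraint becoming a database-level constraint, can be illustrated with one familiar case: a min-cardinality-1 constraint on a datatype property mapped to a NOT NULL column. The property-list shape and names below are hypothetical, not the paper's IC-mapping axioms:

```python
import sqlite3

def table_ddl(cls, props):
    """Map a class and its datatype properties to a relational table.

    props: [(name, sql_type, required)]; `required` models a
    min-cardinality-1 integrity constraint and becomes NOT NULL.
    """
    cols = ["id INTEGER PRIMARY KEY"]
    cols += [f"{n} {t}{' NOT NULL' if req else ''}" for n, t, req in props]
    return f"CREATE TABLE {cls} ({', '.join(cols)})"

ddl = table_ddl("Person", [("name", "TEXT", True), ("email", "TEXT", False)])
con = sqlite3.connect(":memory:")
con.execute(ddl)
con.execute("INSERT INTO Person (name) VALUES ('Ada')")       # IC satisfied
try:
    con.execute("INSERT INTO Person (email) VALUES ('x@y')")  # IC violated
    violated = False
except sqlite3.IntegrityError:
    violated = True
```

The payoff of an IC-based mapping is exactly this: data that would silently violate the ontology's constraints is instead rejected by the database engine.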
Loss assessment is an important operation in the claims process of the insurance industry. With the growing trend of making insurance information systems provide in-depth support for optimizing operations and serving the insured, a methodological framework for loss assessment is given based on SOA technology. Under this framework, the operation-process design, the client design, the service design and the database design are presented. These design results have been validated in an actual application system.
Standard pattern classifiers operate on all data features. However, some features are redundant or irrelevant, which reduces prediction accuracy and increases the running time of the classifier. The purpose of this stud...
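A minimal filter-style sketch of removing the two nuisances named above, irrelevant (here: constant) features and redundant (highly correlated) ones, using only the standard library; the thresholds and criteria are illustrative choices, not the study's method:

```python
from math import sqrt

def pearson(a, b):
    """Pearson correlation of two equal-length numeric columns."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sqrt(sum((x - ma) ** 2 for x in a))
    vb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

def filter_features(cols, corr_cutoff=0.95):
    """Drop constant columns, then drop one of each highly correlated pair."""
    keep = [j for j, c in enumerate(cols) if len(set(c)) > 1]
    selected = []
    for j in keep:
        if all(abs(pearson(cols[j], cols[k])) < corr_cutoff for k in selected):
            selected.append(j)
    return selected

cols = [[1, 2, 3, 4],      # kept
        [2, 4, 6, 8],      # perfectly correlated with column 0 -> dropped
        [5, 5, 5, 5],      # constant -> dropped
        [1, 0, 1, 0]]      # kept
print(filter_features(cols))
```

Running the classifier on the surviving columns is what yields the accuracy and runtime gains the abstract motivates; wrapper methods then search that reduced space more carefully.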