Radio frequency identification (RFID) is a technology in which a reader device can "sense" the presence of a nearby object by reading a tag device attached to it. To guarantee coverage quality, multiple RFID readers can be deployed in a given region. In this paper, we consider the problem of scheduling reader activation in a multi-reader environment. In particular, we design a schedule for readers that maximizes the number of served tags per time-slot while avoiding various interferences. We first develop a centralized algorithm under the assumption that different readers may have different interference and interrogation radii. Next, we propose a novel algorithm that does not need any location information about the readers. Finally, we extend the latter algorithm to a distributed setting, for the case where no central entity exists. We conduct extensive simulations to study the performance of our proposed algorithms, and the evaluation results corroborate our theoretical analysis.
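To make the scheduling idea concrete, here is a minimal greedy sketch of a single time-slot's activation decision, assuming a simple disk model in which two readers conflict when one's interference range reaches the other's interrogation range. The conflict rule, reader tuples, and tag counts are illustrative assumptions, not the paper's algorithm.

```python
# A minimal greedy sketch of one time-slot's reader activation, under the
# assumption (not from the paper) that two readers conflict when their
# distance is less than the sum of one's interrogation radius and the
# other's interference radius.
import math

def conflicts(r1, r2):
    """r = (x, y, interrogation_radius, interference_radius)."""
    d = math.hypot(r1[0] - r2[0], r1[1] - r2[1])
    # Reader-to-reader interference in either direction.
    return d < r1[2] + r2[3] or d < r2[2] + r1[3]

def schedule_slot(readers, tag_count):
    """Greedily activate non-conflicting readers, preferring those
    that currently cover the most unserved tags."""
    active = []
    for i in sorted(range(len(readers)), key=lambda i: -tag_count[i]):
        if all(not conflicts(readers[i], readers[j]) for j in active):
            active.append(i)
    return active

readers = [(0, 0, 2.0, 4.0), (10, 0, 2.0, 4.0), (3, 0, 2.0, 4.0)]
print(schedule_slot(readers, tag_count=[5, 3, 4]))  # -> [0, 1]
```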
Increasingly, software needs to dynamically adapt its structure and behavior at runtime in response to changing conditions in the supporting computing and network infrastructure and in the surrounding physical environment. Owing to this high complexity, assuring the dependability of such software is a great challenge. Effectively modeling behavior and flexibly specifying requirements are the key issues in developing trusted adaptive software. This paper introduces a formal model for the behavior of adaptive software and an extended linear temporal logic for specifying global properties. We use state machines to describe the programs in the different behavioral modes of adaptive software and treat these machines as different versions of the program. Specifications are classified into three categories, local, adaptation, and global properties, from the perspective of dynamic adaptation. To specify and verify global properties on our model, we propose versioned LTL (vLTL), which extends linear temporal logic with a version-related element and enables describing properties of different versions. We also discuss a verification approach for vLTL that transforms its formulae into LTL, and illustrate it with a case study.
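As one illustrative reading of the vLTL-to-LTL transformation, the sketch below encodes version membership as atomic propositions ver_k and rewrites a version-scoped subformula into an LTL implication guarded by that proposition. The concrete vLTL syntax and the `ver` operator here are assumptions, since the abstract does not give them.

```python
# A hypothetical sketch of the vLTL-to-LTL idea: version membership is
# encoded as an atomic proposition ver_k, and a version-scoped formula
# is rewritten so its body is only required while version k runs. The
# actual vLTL operators are not given in the abstract; this rewriting
# is an illustrative assumption.
def to_ltl(f):
    """Formulas are nested tuples, e.g. ('ver', 2, ('G', ('ap', 'safe')))."""
    op = f[0]
    if op == 'ap':                      # atomic proposition
        return f[1]
    if op == 'ver':                     # version-scoped subformula
        k, sub = f[1], f[2]
        return f"G (ver_{k} -> ({to_ltl(sub)}))"
    if op in ('G', 'F', 'X', '!'):      # unary operators
        return f"{op} ({to_ltl(f[1])})"
    if op in ('&', '|', 'U', '->'):     # binary operators
        return f"({to_ltl(f[1])}) {op} ({to_ltl(f[2])})"
    raise ValueError(f"unknown operator {op!r}")

print(to_ltl(('ver', 2, ('G', ('ap', 'safe')))))
# -> G (ver_2 -> (G (safe)))
```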
In this paper, we focus on two-thirds bisimulation, and the main aim is to describe the convergence mechanism of implementations under two-thirds bisimulation. Two-thirds limit bisimulation and two-thirds bisimulation...
ISBN (Print): 9781457700293
We propose a novel locality-sensitive vocabulary coding scheme to extract compact descriptors for low-bit-rate visual search. We employ Latent Dirichlet Allocation (LDA) to learn topic vocabularies of lower dimension that generate compact descriptors. To deal with diverse datasets, the LDA model subdivides a dataset into groups of images via a topic model, where the code word distributions in each group produce coherent statistics from generative learning. Moreover, our empirical study has shown that the original Bag-of-Words (BoW) representation is sparse, and the occurrences of non-zero words are coherent within a topic. Our proposed topic-wise vocabulary learning yields a more compact yet discriminative codebook for image search. Given a query image, multiple topics are determined and fed into the topic vocabularies to generate a more compact topical descriptor. Comparison experiments show that our topic-wise locality-sensitive vocabulary coding produces more compact and discriminative descriptors than state-of-the-art methods.
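A minimal sketch of the topic-wise vocabulary idea, using scikit-learn's LatentDirichletAllocation on Bag-of-Words histograms; the synthetic data, number of topics, and 64-word-per-topic codebook size are illustrative assumptions rather than the paper's settings.

```python
# A minimal sketch of topic-wise vocabulary learning with scikit-learn's
# LDA. Dataset, vocabulary size, and per-topic codebook size are assumed.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
bow = rng.poisson(0.05, size=(200, 1000))   # sparse BoW histograms (200 images)

lda = LatentDirichletAllocation(n_components=8, random_state=0).fit(bow)

# Topic vocabularies: the most probable visual words under each topic.
topic_vocab = np.argsort(-lda.components_, axis=1)[:, :64]   # 64 words/topic

def topical_descriptor(query_bow, n_topics=2):
    """Project a query onto its dominant topics and keep only the
    counts of words in those topics' compact vocabularies."""
    theta = lda.transform(query_bow[None, :])[0]
    topics = np.argsort(-theta)[:n_topics]
    words = np.unique(topic_vocab[topics].ravel())
    return words, query_bow[words]          # compact index/value pairs

words, vals = topical_descriptor(bow[0])
print(len(words), "words kept of", bow.shape[1])
```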
With the development of education and open software, item response models are more and more important for assessing testees' actual abilities. In an item response model, testees' abilities and item parameters are interrelated. Bayesian methods are important estimation methods for item response models, but the computation is difficult. This paper uses Scilab and a MySQL database to construct an item response parameter estimation framework, and provides a Bayesian joint likelihood with a Langevin MCMC method. We compare three ways of calculating item parameters and testees' abilities: random-walk sampling, simple Langevin sampling, and target-distribution Langevin sampling. In the actual calculations, target-distribution Langevin sampling improves the calculation speed and reduces the number of systematic sampling iterations.
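For concreteness, the following is a minimal Metropolis-adjusted Langevin (MALA) sketch for sampling one testee's ability under a Rasch model with known item difficulties; the Rasch choice, the standard-normal prior, and the step size are illustrative assumptions, not the paper's exact setup.

```python
# A minimal MALA sketch for sampling one testee's ability theta under a
# Rasch model with known item difficulties b. Model and step size assumed.
import numpy as np

def log_post(theta, y, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)) - 0.5 * theta**2

def grad_log_post(theta, y, b):
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    return np.sum(y - p) - theta        # standard-normal prior on theta

def mala(y, b, n=2000, eps=0.3, rng=np.random.default_rng(0)):
    theta, draws = 0.0, []
    for _ in range(n):
        mu = theta + 0.5 * eps**2 * grad_log_post(theta, y, b)
        prop = mu + eps * rng.standard_normal()
        mu_back = prop + 0.5 * eps**2 * grad_log_post(prop, y, b)
        # Metropolis correction for the asymmetric Langevin proposal.
        log_a = (log_post(prop, y, b) - log_post(theta, y, b)
                 - (theta - mu_back)**2 / (2 * eps**2)
                 + (prop - mu)**2 / (2 * eps**2))
        if np.log(rng.random()) < log_a:
            theta = prop
        draws.append(theta)
    return np.array(draws)

b = np.array([-1.0, 0.0, 0.5, 1.5])
y = np.array([1, 1, 1, 0])
print(mala(y, b)[1000:].mean())   # posterior mean ability estimate
```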
Information technology has been recognized as one of the most important means of improving health care and curbing its ever-increasing cost. However, existing efforts mainly focus on the informatization of hospitals or medical institutions within organizations, and few are directly oriented toward individuals. The strong demand for various health services from customers calls for powerful individual-oriented, personalized health care service systems. Web service composition (WSC) and related technologies can greatly help build such systems. This paper presents a newly developed platform called the Public-oriented Health care Information Service Platform (PHISP), along with several novel WSC techniques used to build it, including WSC techniques that support branch and parallel structures well.
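As a hypothetical illustration of branch and parallel structures in a composition, the asyncio sketch below invokes two independent services concurrently and then branches on the combined result; the service names and data are invented for illustration and are not PHISP's actual API.

```python
# A hypothetical composition with branch and parallel structures.
# Service names and payloads are illustrative stand-ins for web services.
import asyncio

async def fetch_vitals(uid):          # stand-in for a remote web service
    await asyncio.sleep(0.1)
    return {"heart_rate": 88}

async def fetch_history(uid):
    await asyncio.sleep(0.1)
    return {"diabetic": True}

async def dietary_advice(profile):
    return "low-sugar diet plan"

async def fitness_advice(profile):
    return "moderate cardio plan"

async def compose(uid):
    # Parallel structure: invoke independent services concurrently.
    vitals, history = await asyncio.gather(fetch_vitals(uid),
                                           fetch_history(uid))
    profile = {**vitals, **history}
    # Branch structure: route to a service based on intermediate results.
    if profile["diabetic"]:
        return await dietary_advice(profile)
    return await fitness_advice(profile)

print(asyncio.run(compose("user-42")))
```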
In traditional partial-duplicate image retrieval, images are commonly represented using the Bag-of-Visual-Words (BOV) model built from image local features, such as SIFT. In practice, partial-duplicate images share only a small similar portion, so such whole-image representation is not adequate for the partial-duplicate image retrieval task. In this paper, we propose a novel perspective on retrieving partial-duplicate images with Content-based Saliency Regions (CSRs). CSRs are sub-regions with abundant visual content and high visual attention in the image. The content of a CSR is represented with the BOV model, while saliency analysis is employed to ensure its high visual attention. Each CSR is treated as an independent unit to be retrieved in the dataset. To retrieve the CSRs effectively, we design a relative saliency ordering constraint, which captures a weak relative saliency layout among interest points in the CSR. Comparison experiments with four state-of-the-art methods on the standard partial-duplicate image dataset clearly verify the effectiveness of our scheme. Furthermore, our approach provides a more diverse retrieval result, which facilitates interaction for portable-device users.
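One plausible instantiation of the relative saliency ordering constraint is pairwise order concordance between the saliency values of matched interest points, sketched below; this particular agreement measure is an assumption, not necessarily the paper's formulation.

```python
# A minimal sketch of a relative saliency ordering check between matched
# interest points of two regions. The agreement measure (pairwise order
# concordance) is an illustrative assumption about the "weak saliency
# relative layout" constraint.
import itertools

def ordering_agreement(sal_query, sal_cand):
    """sal_query[i], sal_cand[i]: saliency of the i-th matched point
    in the query CSR and the candidate CSR, respectively."""
    pairs = list(itertools.combinations(range(len(sal_query)), 2))
    if not pairs:
        return 1.0
    ok = sum((sal_query[i] > sal_query[j]) == (sal_cand[i] > sal_cand[j])
             for i, j in pairs)
    return ok / len(pairs)       # 1.0 = identical saliency ordering

print(ordering_agreement([0.9, 0.5, 0.2], [0.8, 0.6, 0.1]))  # 1.0
print(ordering_agreement([0.9, 0.5, 0.2], [0.1, 0.6, 0.8]))  # 0.0
```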
In this paper, a new visual saliency detection method is proposed based on spatially weighted dissimilarity. We measure saliency by integrating three elements: the dissimilarities between image patches, evaluated in a reduced-dimensional space; the spatial distance between image patches; and the central bias. The dissimilarities are inversely weighted by the corresponding spatial distances. A weighting mechanism indicating a bias of human fixations toward the center of the image is employed. Principal component analysis (PCA) is the dimension-reduction method used in our system; we extract the principal components (PCs) by sampling patches from the current image. Our method is compared with four saliency detection approaches on three image datasets. Experimental results show that our method outperforms current state-of-the-art methods in predicting human fixations.
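A compact numpy sketch of the measure described above: patch dissimilarities in a PCA-reduced space, inversely weighted by spatial distance and modulated by a Gaussian center bias. The patch size, number of principal components, and the Gaussian width are illustrative assumptions.

```python
# A minimal sketch of spatially weighted dissimilarity saliency: PCA-reduced
# patch dissimilarity, inversely weighted by inter-patch distance, modulated
# by a Gaussian center bias. Parameters are assumed, not the paper's.
import numpy as np

def saliency(img, patch=8, n_pc=11):
    h, w = img.shape[0] // patch, img.shape[1] // patch
    # Flatten non-overlapping patches into row vectors.
    X = (img[:h*patch, :w*patch]
         .reshape(h, patch, w, patch)
         .transpose(0, 2, 1, 3)
         .reshape(h * w, -1))
    X = X - X.mean(axis=0)
    # PCA via SVD of the patches sampled from this image.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X @ Vt[:n_pc].T                                 # reduced coordinates
    ys, xs = np.divmod(np.arange(h * w), w)
    pos = np.stack([ys, xs], axis=1).astype(float)
    D = np.linalg.norm(Y[:, None] - Y[None], axis=2)            # dissimilarity
    S = 1.0 + np.linalg.norm(pos[:, None] - pos[None], axis=2)  # spatial dist
    sal = (D / S).sum(axis=1)           # dissimilarity inversely weighted
    # Center bias: Gaussian prior on patch position.
    c = np.array([(h - 1) / 2.0, (w - 1) / 2.0])
    bias = np.exp(-((pos - c)**2).sum(axis=1) / (2 * (max(h, w) / 3.0)**2))
    return (sal * bias).reshape(h, w)

print(saliency(np.random.default_rng(0).random((64, 64))).shape)  # (8, 8)
```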
Digital watermarking is an efficient method to protect multimedia documents. Robust watermarks, which survive changes or alterations of the protected documents, are typically used for copyright protection. Fragile watermarks, which are destroyed by even slight alterations, are typically used for content authentication. In this paper, we propose a hybrid watermarking method that joins a robust and a fragile watermark, thus combining copyright protection and content authentication. As a result, this approach is resistant to both tampering and copy attacks. Our contribution is establishing the relationship between the fragile watermark and the robust watermark. We use DCT coefficients to embed the watermark information. The experimental results show that the proposed method can resist tampering and copy attacks at the same time.
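To illustrate DCT-coefficient embedding, here is a minimal quantization-index-modulation sketch that hides one bit per 8x8 block in a mid-frequency coefficient. The coefficient position and quantization step are assumptions; the paper's hybrid scheme pairs a robust layer like this with a fragile one for authentication.

```python
# A minimal sketch of embedding one watermark bit per 8x8 block in a
# mid-frequency DCT coefficient via quantization index modulation (QIM).
# Coefficient position and step size are illustrative assumptions.
import numpy as np
from scipy.fft import dctn, idctn

POS, STEP = (3, 2), 24.0        # mid-frequency coefficient, QIM step

def embed_bit(block, bit):
    c = dctn(block, norm='ortho')
    q = np.round(c[POS] / STEP)
    if int(q) % 2 != bit:        # force parity of quantized value to the bit
        q += 1
    c[POS] = q * STEP
    return idctn(c, norm='ortho')

def extract_bit(block):
    c = dctn(block, norm='ortho')
    return int(np.round(c[POS] / STEP)) % 2

rng = np.random.default_rng(0)
block = rng.random((8, 8)) * 255
marked = embed_bit(block, 1)
print(extract_bit(marked))                             # -> 1
print(extract_bit(marked + rng.normal(0, 2, (8, 8))))  # robust to mild noise
```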