With the widespread use of mobile devices, location-based service (LBS) applications have become increasingly popular, introducing new security challenges in protecting users' location privacy. On one hand, a ...
The counterfeit coin problem is of long-standing importance and genuine interest in computer science, game theory, and mathematics. The objective is to detect the fake coin(s), identical in appearance but different in weight, in the minimum number of comparisons. The word counterfeit most frequently describes forgeries of currency or documents, but can also describe software, pharmaceuticals, clothing, and, more recently, motorcycles and other vehicles, especially when these result in patent or trademark infringement. In this paper we develop a new algorithm that solves the two counterfeit coins problem in linear time in n, the total number of coins given. To our knowledge, this is the first algorithm that identifies and solves the problem for false coins of type ω(ΔH) = ω(ΔL), i.e., one false coin heavier and the other lighter than a true coin, with their weight differences from the true coin being equal; this is the degenerate case of the two counterfeit coins problem.
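The degenerate case described here can be illustrated with a simple linear-weighing sketch. This is an illustrative reconstruction under stated assumptions (an even number of coins, exactly one heavy and one light fake of equal offset), not the paper's algorithm: pair the coins, weigh each pair, and use at most one extra weighing to tell the heavy fake's pair from the light fake's.

```python
def find_two_fakes(weights):
    """Locate one heavy and one light fake (equal weight offset) among
    coins of otherwise identical weight, using balance comparisons only.
    Returns (heavy_index, light_index). Assumes an even number of coins
    and exactly one heavy + one light fake (illustrative sketch)."""
    n = len(weights)

    def weigh(i, j):               # balance scale: +1, 0, or -1
        return (weights[i] > weights[j]) - (weights[i] < weights[j])

    tipped = []                    # pairs that did not balance
    for i in range(0, n, 2):
        r = weigh(i, i + 1)
        if r != 0:
            # store (heavier coin, lighter coin) of the pair
            tipped.append((i, i + 1) if r > 0 else (i + 1, i))

    if len(tipped) == 1:           # both fakes fell into the same pair
        heavy, light = tipped[0]
        return heavy, light

    (h1, l1), (h2, l2) = tipped    # the fakes sit in different pairs
    # The heavy fake outweighs a true coin, so comparing the two heavier
    # coins singles out the pair that holds the heavy fake; the other
    # tipped pair then holds the light fake on its lighter side.
    if weigh(h1, h2) > 0:
        return h1, l2
    else:
        return h2, l1
```

This uses n/2 + 1 weighings in the worst case, which is linear in n as the abstract states.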
In this paper we consider a cognitive radio network comprising a number of primary users (PUs) and a number of secondary users (SUs). The spectrum is divided into channels by frequency division multiple access (FDMA). The PUs hold licenses for the channels, and the SUs periodically check the channels for idleness. When the channels are not in use by the PUs, the SUs may bid for them, and the PUs auction the channels off to the SUs offering the most money, earning revenue for the spectrum in return. We compute one allocation of vacant channels to the SUs that maximizes the total revenue earned by all PUs, and another that maximizes the payoff earned by the SUs. Both optimizations are performed with a real-coded genetic algorithm (GA) and a simulated annealing (SA) algorithm. A comparative analysis of the two shows that GA produces solutions closer to the optimum than SA, but GA's computation time is higher, and GA suffers from premature convergence, which SA avoids.
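To illustrate the SA side of the comparison, here is a minimal simulated-annealing sketch for the revenue-maximization allocation. The bid model, move set, constraint (each SU wins at most one channel), and cooling schedule are all illustrative assumptions, not the paper's formulation.

```python
import math
import random

def anneal_allocation(bids, iters=20000, t0=2.0, cooling=0.9995, seed=1):
    """Simulated-annealing sketch: assign each vacant channel to at most
    one SU (each SU wins at most one channel) to maximize total PU
    revenue. bids[s][c] is SU s's bid for channel c."""
    rng = random.Random(seed)
    n_su, n_ch = len(bids), len(bids[0])
    assign = [None] * n_ch         # assign[c] = winning SU, or None

    def revenue(a):
        return sum(bids[s][c] for c, s in enumerate(a) if s is not None)

    best, best_rev, cur_rev, t = list(assign), 0.0, 0.0, t0
    for _ in range(iters):
        c = rng.randrange(n_ch)
        s = rng.choice([None] + list(range(n_su)))
        if s is not None and s in assign:
            continue               # SU already holds a channel
        new = list(assign)
        new[c] = s
        new_rev = revenue(new)
        # accept improvements always, worsenings with Boltzmann probability
        if new_rev >= cur_rev or rng.random() < math.exp((new_rev - cur_rev) / t):
            assign, cur_rev = new, new_rev
            if cur_rev > best_rev:
                best, best_rev = list(assign), cur_rev
        t *= cooling
    return best, best_rev
```

The SU-payoff variant would only change the objective in `revenue`; the annealing loop is the same.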
The counterfeit coin problem is well known and genuinely interesting in computer science, game theory, and mathematics. The objective is to detect the fake coin(s), identical in appearance but different in weight, in the minimum number of comparisons. The word counterfeit most frequently describes forgeries of currency or documents, but can also describe software, pharmaceuticals, clothing, and, more recently, motorcycles and cars, especially when these result in patent or trademark infringement. Finding one fake coin among n coins is hard enough; the problem becomes harder still with two fake coins, since the false pair can form several different combinations, which makes the problem particularly tricky and complex to solve. In this paper we develop a new algorithm that solves two versions of the two counterfeit coins problem in O(log n) time, where n is the number of coins given.
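The logarithmic bound rests on the standard divide-into-thirds weighing idea, shown below for the classic single-fake case (a known-lighter coin). This is the textbook building block such algorithms extend, not the paper's two-coin procedure.

```python
def find_light_fake(weights):
    """Ternary-search weighing: locate one known-lighter fake among
    otherwise identical coins in O(log n) weighings. Each round weighs
    one third of the remaining coins against another third; the fake is
    in the lighter pan, or in the unweighed third if the pans balance."""
    lo, hi = 0, len(weights)
    while hi - lo > 1:
        third = (hi - lo + 2) // 3
        wa = sum(weights[i] for i in range(lo, lo + third))
        wb = sum(weights[i] for i in range(lo + third, lo + 2 * third))
        if wa < wb:
            hi = lo + third                      # fake in the first third
        elif wb < wa:
            lo, hi = lo + third, lo + 2 * third  # fake in the second third
        else:
            lo = lo + 2 * third                  # fake in the remainder
    return lo
```

Each weighing discards about two thirds of the candidates, giving roughly log3(n) comparisons.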
ISBN:
(Print) 9781479942732
With the rapid accumulation of high-dimensional data, dimensionality reduction plays an increasingly important role in practical data processing and analysis tasks. This paper studies semi-supervised dimensionality reduction using pairwise constraints: domain knowledge is given as pairwise constraints specifying whether a pair of instances belongs to the same class (must-link constraint) or to different classes (cannot-link constraint). We propose a novel semi-supervised dimensionality reduction method called Adaptive Semi-Supervised Dimensionality Reduction (ASSDR), which obtains an optimized low-dimensional representation of the original data by adaptively adjusting the weights of the pairwise constraints while simultaneously optimizing the graph construction. Experiments on UCI classification and face recognition data sets show that ASSDR is superior to many existing dimensionality reduction methods.
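A fixed-weight sketch of constraint-driven dimensionality reduction shows the setting ASSDR starts from: pull must-link pairs together and push cannot-link pairs apart via a scatter eigenproblem. The adaptive reweighting and simultaneous graph optimization that are ASSDR's contributions are not reproduced here; the weights alpha and beta are illustrative constants.

```python
import numpy as np

def constrained_projection(X, must_link, cannot_link, dim, alpha=1.0, beta=20.0):
    """Project X (n x d) to `dim` dimensions so that cannot-link pairs
    spread apart and must-link pairs stay close. Fixed-weight sketch in
    the spirit of pairwise-constraint methods, not ASSDR itself."""
    n = X.shape[0]
    # pairwise weights: small positive baseline keeps overall variance,
    # positive for cannot-link (spread), negative for must-link (contract)
    W = np.full((n, n), 1.0 / (n * n))
    for i, j in cannot_link:
        W[i, j] += alpha / len(cannot_link)
        W[j, i] += alpha / len(cannot_link)
    for i, j in must_link:
        W[i, j] -= beta / len(must_link)
        W[j, i] -= beta / len(must_link)
    L = np.diag(W.sum(axis=1)) - W          # Laplacian of the weight graph
    # maximizing tr(P^T X^T L X P) -> top eigenvectors of X^T L X
    vals, vecs = np.linalg.eigh(X.T @ L @ X)
    P = vecs[:, ::-1][:, :dim]              # largest eigenvalues first
    return X @ P                            # low-dimensional embedding
```

ASSDR, as described, would replace the constant alpha/beta with weights adapted during optimization.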
ISBN:
(Print) 9781479956708
Inference of ancestral or extinct genomes is important in evolutionary biology, cancer research, and many other areas. Whole-genome data have become readily available thanks to advances in large-scale sequencing technology, and over the past decades a number of evolutionary models and related algorithms have been designed to infer ancestral genome sequences or gene orders. Since the true ancestral genomes are hard or even impossible to know, tools are needed to test the robustness of the adjacencies obtained from existing methods; however, no systematic work has yet tackled this problem. On the other hand, it is common practice in phylogenetic analysis to assess the confidence of inferred branches using techniques such as bootstrapping and jackknifing, which have proven to be good resampling tools for phylogenetic reconstruction methods, and some of these resampling methods can potentially serve as robustness tests for ancestral genomes. In this paper, we conduct large-scale experiments using three of the best existing ancestral genome inference methods and four different resampling techniques. Experimental results show that resampling techniques are useful for assessing the quality of ancestral genomes, though different inference methods have different preferred resampling techniques. We suggest a cut-off threshold of 75% for filtering out bad gene adjacencies.
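The assessment step can be sketched as follows: given the adjacency set inferred from each resampling replicate, score every adjacency by the fraction of replicates supporting it and keep those above the 75% cut-off. The inference method producing each replicate's adjacencies is assumed, not shown; the gene names are hypothetical.

```python
from collections import Counter

def adjacency_support(replicate_adjacencies, threshold=0.75):
    """Score each gene adjacency by its bootstrap/jackknife support.
    replicate_adjacencies: one collection of adjacencies (e.g. gene-name
    pairs) per replicate. Returns (support fractions, adjacencies kept
    at the threshold)."""
    b = len(replicate_adjacencies)
    # count each adjacency at most once per replicate
    counts = Counter(adj for reps in replicate_adjacencies for adj in set(reps))
    support = {adj: c / b for adj, c in counts.items()}
    kept = {adj for adj, s in support.items() if s >= threshold}
    return support, kept
```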
ISBN:
(Print) 9781479931897
The evolution of the web has changed the way it interacts with users. Web 2.0 encouraged contributions from users with varying levels of mapping experience, a practice known as crowdsourcing, and OpenStreetMap is one of its outcomes. Since it collects huge amounts of data with the help of the general public, researchers have begun analysing the data rather than merely collecting it. The aim of this study is to review research on the assessment of OpenStreetMap data. We conclude that most assessment work has been done for countries such as Germany, the UK, and the USA, yet the authenticity and accuracy of the reference data used remain unexamined. Another issue this review identifies, in the context of the Indian subcontinent, is the need for a thorough analysis of OpenStreetMap data there.
Protecting the integrity of outsourced data is a challenging task in cloud computing: the client or a third-party auditor needs to check the integrity of the outsourced data. Many approaches with different types of algorithms have been introduced for this purpose, such as proof of retrievability, provable data possession, and privacy-preserving auditing. This paper makes a comparative analysis of RSA-based and BLS-based homomorphic schemes with respect to running time, and identifies the better algorithm for data protection and time consumption, helping cloud users choose the mechanism best suited to their resources and requirements and making their integrity-checking process more effective.
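The time-factor comparison the paper describes needs a timing harness of roughly this shape. The harness below is a generic sketch; the hashing call is a placeholder standing in for the RSA-based and BLS-based homomorphic tag routines under test, which are not reproduced here.

```python
import hashlib
import time

def mean_seconds(op, data, repeats=200):
    """Run one integrity-tag operation `repeats` times on `data` and
    return the mean wall-clock seconds per call."""
    start = time.perf_counter()
    for _ in range(repeats):
        op(data)
    return (time.perf_counter() - start) / repeats

# Placeholder workload: in a real comparison, tag_a and tag_b would be
# the RSA-based and BLS-based tag-generation functions respectively.
block = b"x" * 4096
tag_a = lambda d: hashlib.sha256(d).digest()
tag_b = lambda d: hashlib.sha512(d).digest()
t_a, t_b = mean_seconds(tag_a, block), mean_seconds(tag_b, block)
```

Comparing `t_a` and `t_b` across block sizes is then the per-scheme time factor the abstract refers to.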
Ontologies represent domain knowledge through objects, their behaviour, and their properties. This paper presents a web-enabled approach to an agriculture semantic web using SPARQL and specified tools to increase the productivity of farmers. The work focuses on assessing query optimization tools and the results predicted from them, to determine the suitability of each method for different users, where structured ontologies serve as querying aids over an agriculture-based data set.
Cancer classification from microarray gene expression data is a challenging task in computational biology and bioinformatics, as a sufficient number of labeled samples (required to train traditional classifiers) is very expensive and difficult to collect. The prediction accuracy of classifiers trained with limited samples is therefore often very low. Although unlabeled samples are relatively inexpensive and readily available, traditional classifiers generally do not exploit their distribution. In this context, this article presents a novel `self-training' based semi-supervised classification method using the fuzzy K-nearest neighbour algorithm, which utilizes unlabeled samples alongside labeled samples to improve the prediction accuracy of cancer classification. The proposed method is evaluated on a number of microarray gene expression cancer data sets, and the experimental results demonstrate its potential for cancer classification in comparison with its supervised counterparts.
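The self-training loop the abstract describes can be sketched as follows: score each unlabeled sample with fuzzy k-NN class memberships, fold the most confident ones into the training set, and repeat. This is a minimal sketch under assumed hyper-parameters (k, fuzziness m, confidence threshold), not the paper's exact procedure.

```python
import numpy as np

def fuzzy_knn_proba(Xl, yl, Xq, k=3, m=2):
    """Fuzzy k-NN soft memberships: each query point gets a per-class
    membership weighted by inverse distance to its k nearest labeled
    neighbours (Keller-style fuzziness exponent m)."""
    classes = np.unique(yl)
    probs = []
    for x in Xq:
        d = np.linalg.norm(Xl - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] ** (2 / (m - 1)) + 1e-12)   # fuzzy weights
        mem = np.array([(w * (yl[idx] == c)).sum() for c in classes])
        probs.append(mem / mem.sum())
    return classes, np.array(probs)

def self_train(Xl, yl, Xu, k=3, conf=0.9, rounds=10):
    """Self-training loop: label the unlabeled samples whose top fuzzy
    membership exceeds `conf` and add them to the training set."""
    Xl, yl, Xu = Xl.copy(), yl.copy(), Xu.copy()
    for _ in range(rounds):
        if len(Xu) == 0:
            break
        classes, p = fuzzy_knn_proba(Xl, yl, Xu, k)
        take = p.max(axis=1) >= conf
        if not take.any():
            break                                  # nothing confident left
        Xl = np.vstack([Xl, Xu[take]])
        yl = np.concatenate([yl, classes[p[take].argmax(axis=1)]])
        Xu = Xu[~take]
    return Xl, yl
```

The final classifier is then the same fuzzy k-NN run on the enlarged labeled set.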