There is an inherent weakness in regular digital signatures: if the private key is exposed, all signatures become insecure, whether they were generated before the key exposure or not. Forward-secure signatures and forward-secure threshold signatures have been proposed to deal with this problem. In this work, we propose a forward-secure threshold signature scheme in the standard model. The complexity of each parameter in the scheme is at most log-squared in the total number of time periods. Another important trait is that a signature comprises only a one-time tag and three group elements. Finally, we prove that the proposed scheme is forward secure without random oracles.
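The core idea behind forward security, evolving the signing key one-way between periods so that exposing the current key reveals nothing about past keys, can be sketched as follows. This is an illustrative toy using a hash chain and an HMAC-style tag, not the paper's threshold scheme; all names here are hypothetical:

```python
import hashlib
import hmac

def evolve_key(sk: bytes) -> bytes:
    """One-way key update: sk_{t+1} = H(sk_t). Earlier keys cannot be
    recovered from the current key, which is the core of forward security."""
    return hashlib.sha256(sk).digest()

def sign(sk: bytes, period: int, msg: bytes) -> bytes:
    # Toy MAC-style "signature" bound to the time period.
    return hmac.new(sk, period.to_bytes(4, "big") + msg, hashlib.sha256).digest()

# Key for period 0, evolved forward each period; the node erases the old key.
sk0 = b"\x01" * 32
sk1 = evolve_key(sk0)
sig0 = sign(sk0, 0, b"hello")
# An adversary holding sk1 cannot recompute sk0 (SHA-256 is one-way),
# so period-0 signatures stay trustworthy even after a period-1 exposure.
```

In the paper's threshold setting the key is additionally shared among parties, and the tree-based key evolution is what yields the log-squared parameter sizes; the hash chain above only illustrates the one-way update.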
As regards extending the lifetime of a ZigBee network, the definition of a node's boundary is proposed. First, all the information for a node's boundary is stored when the ZigBee network is built. Then, th...
Considering potential attacks from cloud computing and quantum computing, it is becoming necessary to provide higher-security elliptic curves. Hidden Markov models are introduced for designing the trace-vector computation algorithm to accelerate the search for elliptic curve (EC) parameters. We present a new algorithm for secure Koblitz EC generation based on evolutionary cryptography theory. The algorithm is tested by selecting a secure Koblitz EC over the field F(2^2000), with experiments showing that both the base field and base point of the secure curve generated exceed the parameter range for Koblitz curves recommended by NIST. The base fields generated go beyond 1900 bits, which is higher than the 571 bits recommended by NIST. We also find new secure curves in the range F(2^163) to F(2^571) recommended by NIST. We perform a detailed security analysis of those secure curves, showing that the curves we propose satisfy the same security criteria as NIST.
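The evolutionary search framework the abstract refers to can be sketched generically: candidates are mutated and selected by a fitness function. In the paper the fitness would encode curve-security criteria (large prime-order subgroup, MOV and anomalous-curve checks); the sketch below substitutes a toy bitstring fitness, so it only illustrates the select-mutate-evaluate loop, not the cryptographic tests:

```python
import random

def evolve(fitness, genome_len=16, pop_size=20, generations=60, seed=0):
    """Minimal (mu+lambda)-style evolutionary search. `fitness` stands in
    for the curve-security tests applied to candidate parameters."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        for p in parents:
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # point mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve(sum)  # toy fitness: count of 1-bits in the genome
```

Because parents survive unchanged, the best fitness in the population is monotone non-decreasing across generations; the actual curve search replaces the genome with curve parameters and the fitness with the security criteria.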
Energy saving is a critical concern in wireless sensor networks (WSNs). Usually, duty cycling is applied to reduce the energy expended by the radio transceivers of nodes. In this paper, a Buffering- and Prediction-based Sleep Scheme (BPSS) is proposed for WSNs. BPSS has three features: only relay nodes duty-cycle while sample nodes do not; a buffer at each sample node holds the data it captures for events occurring during the sleeping periods of relay nodes; and the relay nodes predict the occurrence times of events using an exponentially weighted moving average with a decay parameter, so that their sleeping periods adapt to event traffic. BPSS is implemented on TinyOS and TelosW motes, and experiments show that it reduces energy consumption while maintaining a high event reception ratio at the sink node.
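The prediction step can be sketched with the standard EWMA update. The decay parameter `alpha` and the 80% guard factor below are illustrative choices, not values from the paper:

```python
def ewma_update(estimate, sample, alpha=0.3):
    """Exponentially weighted moving average with decay parameter alpha:
    new_estimate = alpha * sample + (1 - alpha) * estimate."""
    return alpha * sample + (1 - alpha) * estimate

# A relay node predicts the next inter-event gap from the gaps it has
# observed, then sleeps for a fraction of the prediction as a guard margin.
gaps = [10.0, 12.0, 8.0, 11.0]   # seconds between buffered events
est = gaps[0]
for g in gaps[1:]:
    est = ewma_update(est, g)
sleep_period = 0.8 * est         # hypothetical 80% guard factor
```

A larger `alpha` weights recent gaps more heavily, letting the relay react faster to bursts at the cost of noisier sleep periods.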
Most existing vulnerability taxonomies classify vulnerabilities by their idiosyncrasies, weaknesses, flaws, faults, and so on. The disadvantage of these taxonomies is that the classification standard is not unified and classes overlap. To solve this problem, we propose VUNClique, a virtual-grid-based clustering algorithm for uncertain data over a vulnerability database. First, this paper transforms the vulnerability database into an uncertain dataset using an existing vulnerability-database pretreatment model. Second, we define a virtual grid structure whose cells are divided into real cells and virtual cells; only the real cells, which contain data objects, are stored in memory. A probability attribute value similarity is defined to handle non-numeric attributes: it compares the number of non-numeric attributes with the same value between tuples to measure their similarity. We provide a secondary partition algorithm to improve the similarity between tuples in the same cell; it merges a tuple into the high-density neighbor cell with which the tuple has the maximum probability attribute value similarity. Then, a novel cluster-identification algorithm clusters the high-density real cells; it can identify clusters of arbitrary shapes by traversing the real cells twice. Finally, performance experiments are run over the uncertain dataset transformed from the NVD vulnerability database. The experimental results show that VUNClique can find clusters of arbitrary shapes and greatly improves the efficiency of clustering.
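Two of the building blocks, materialising only the non-empty ("real") grid cells and measuring similarity of non-numeric attributes by value matches, can be sketched as follows. This is a simplified illustration: the paper's probability attribute value similarity operates on uncertain tuples, whereas the stand-in below counts exact matches on crisp tuples:

```python
from collections import defaultdict

def attribute_similarity(t1, t2):
    """Fraction of non-numeric attributes with identical values --
    a crisp stand-in for the probability attribute value similarity."""
    matches = sum(1 for a, b in zip(t1, t2) if a == b)
    return matches / len(t1)

def build_real_cells(points, cell_size):
    """Virtual grid: the grid is never allocated in full; only cells
    that actually contain data objects (real cells) exist in the dict."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(x // cell_size) for x in p)
        cells[key].append(p)
    return cells

cells = build_real_cells([(0.2, 0.3), (0.4, 0.1), (5.0, 5.1)], cell_size=1.0)
# Two real cells: (0, 0) holds two points, (5, 5) holds one; the empty
# cells between them are virtual and consume no memory.
```

The secondary partition step would then move a tuple from a sparse cell into the neighboring high-density cell maximising this similarity.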
High-dimensional data clustering is an important issue in data mining. First, the records in the dataset are mapped to the vertices of a hypergraph, and the hyperedges of the hypergraph are composed of the vertices which hav...
ISBN (print): 9781467329637
With eXtensible Markup Language (XML) becoming more and more popular, XML schema design has become an important issue for avoiding redundancy, and the normalization of XML is therefore a research hotspot. Analogous to relational database normalization, this paper aims to eliminate data redundancy by studying path expressions in a Document Type Definition (DTD). XML is extended with functional dependencies (XFDs) and multi-valued dependencies (XMVDs), which are fundamental to semantic specification, and formal definitions of XFD and XMVD are given. Based on the concepts of the XML tree and data dependency, descriptions of keys and redundancy are provided. For the case where XFDs and XMVDs coexist, the paper further proposes conditions for a fourth normal form for XML (4XNF) and provides a theorem to determine that an XML document tree meeting those conditions is free of redundancy; the soundness of 4XNF is verified by experiment.
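The redundancy that functional dependencies cause can be illustrated on a toy flattening of an XML tree: once the values reached along the DTD path expressions are collected into tuples, checking an XFD X -> Y reduces to checking that equal X values always pair with equal Y values. The flattening and names below are hypothetical, not the paper's formalism:

```python
def fd_holds(tuples, x_idx, y_idx):
    """Check a functional dependency X -> Y over value tuples
    extracted along path expressions (toy flattening of an XML tree)."""
    seen = {}
    for t in tuples:
        x, y = t[x_idx], t[y_idx]
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

# (course, teacher, student): course -> teacher holds, so the teacher
# value repeats once per student -- exactly the redundancy 4XNF removes.
rows = [("db", "alice", "s1"), ("db", "alice", "s2"), ("os", "bob", "s1")]
assert fd_holds(rows, 0, 1)       # course -> teacher holds
assert not fd_holds(rows, 2, 0)   # student -> course does not hold
```

In 4XNF the document would be restructured so the teacher value is stored once under its course element instead of once per student.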
Because of the introduction of attribute weights, most existing dissimilarity measures cannot accurately reflect the difference between two heterogeneous objects, which degrades clustering quality. In this paper, we present HIDK-means, an approach for clustering heterogeneous data based on information dissimilarity. First, the algorithm defines a heterogeneous information dissimilarity between two heterogeneous objects based on Kolmogorov information theory and approximates it using a universal probability of an object. Then, in the clustering process, the algorithm selects the initial cluster centers by the maximum sum of dissimilarity. After that, each remaining object is assigned to the cluster center with which it has the smallest dissimilarity, and the criterion function is calculated. Cluster centers are updated iteratively until the criterion function converges or the iteration count reaches a preset threshold. The experimental results show that HIDK-means is effective in clustering heterogeneous objects and scalable to large datasets.
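The initialisation and assignment steps can be sketched as follows. The Kolmogorov-based dissimilarity is not computable directly, so the sketch substitutes a simple mismatch fraction; the greedy center selection by maximum sum of dissimilarity mirrors the step described above:

```python
def dissimilarity(a, b):
    """Toy stand-in for the Kolmogorov-based heterogeneous information
    dissimilarity: fraction of attributes whose values differ."""
    return sum(x != y for x, y in zip(a, b)) / len(a)

def initial_centers(objects, k):
    """Greedy selection by maximum sum of dissimilarity: the first center
    is the object farthest from everything, each later center is the
    object farthest (in summed dissimilarity) from the chosen centers."""
    centers = [max(objects,
                   key=lambda o: sum(dissimilarity(o, p) for p in objects))]
    while len(centers) < k:
        centers.append(max((o for o in objects if o not in centers),
                           key=lambda o: sum(dissimilarity(o, c)
                                             for c in centers)))
    return centers

def assign(objects, centers):
    # Each object joins the center with the smallest dissimilarity.
    return [min(range(len(centers)),
                key=lambda i: dissimilarity(o, centers[i]))
            for o in objects]

objs = [("a", "x"), ("a", "y"), ("b", "z"), ("b", "z")]
centers = initial_centers(objs, 2)
labels = assign(objs, centers)   # [0, 0, 1, 1]
```

The full algorithm would then recompute centers and iterate until the criterion function converges or the iteration limit is hit.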
The quantization error arising during JPEG compression is analyzed, and a new information hiding algorithm based on the quantization error of JPEG images is proposed. Experimental results show that the proposed algorithm ca...
In this paper, we present TKBT (top-k closed frequent mining based on TKTT), an algorithm to mine top-k closed frequent itemsets in data streams efficiently. First, according to the continuous and changeable characteristics of stream data in a sliding window, a novel structure, the BWT (bit-vector window table), is defined. In the horizontal direction of the BWT, bit vectors represent the transactions, and the counts of items in the oldest window, the newest window, and all windows at the current time are recorded, which decreases the time needed to compute item counts when a new window slides in. In the vertical direction, the BWT is partitioned by window, so when a new window arrives we only need to replace the oldest window's information with the corresponding newest window. The TKTT (top-k temporary table) is built on the BWT, with its itemsets ranked in descending count order. TKBT obtains the top-k closed frequent itemsets by joining the candidates in the TKTT with a top-down strategy. The number of candidates is reduced by letting a closed itemset stand in for its subsets, and fewer join operations lead to shorter runtime. Experimental results show that TKBT is effective and scalable.
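The bit-vector representation behind the BWT can be sketched as follows: one bit per transaction per item, so that the support of an itemset is computed by ANDing the item rows and counting set bits. This is a minimal illustration of the representation only, not of the TKTT joining strategy; the names are hypothetical:

```python
def item_bitvector(transactions, item):
    """One bit per transaction in the sliding window: bit i is set
    when transaction i contains the item (one BWT row)."""
    bits = 0
    for i, t in enumerate(transactions):
        if item in t:
            bits |= 1 << i
    return bits

def support(bits):
    # The support count is just the popcount of the bit vector.
    return bin(bits).count("1")

window = [{"a", "b"}, {"a"}, {"b", "c"}, {"a", "c"}]
a_bits = item_bitvector(window, "a")               # 0b1011, support 3
ab_bits = a_bits & item_bitvector(window, "b")     # itemset {a, b} via AND
```

When the window slides, only the bits of the expired transactions need replacing, which is the saving the window partitioning exploits.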