An identity-based threshold key management scheme without a secure channel is proposed for ad hoc networks. The master private key, which is shared among all nodes using Shamir's secret sharing scheme, is generated by all nodes when the network is formed. The nodes' public keys are derived from their identities. To obtain its private key, each node must prove its identity to the distributed CAs using a zero-knowledge proof protocol and then receive its share of the private key. Compared with earlier schemes, ours needs no local registration authority (LRA), which is easily compromised by an adversary. When a node leaves the network, the shares of the master private key are renewed. Finally, we prove that our scheme is correct and secure.
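For readers unfamiliar with the threshold sharing used above, the following is a minimal Python sketch of Shamir's (t, n) secret sharing over a prime field; the field prime, threshold, and test values are illustrative choices, not parameters from the paper.

```python
# Minimal sketch of (t, n) Shamir secret sharing over a prime field.
# The prime, threshold, and secret below are illustrative assumptions.
import random

PRIME = 2**127 - 1  # a Mersenne prime, chosen here only for illustration

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the secret from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```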
A variety of wavelet transform methods have been introduced to remove noise from images. However, many of these algorithms remove fine details and smooth the structures of the image while removing noise. The wavelet coefficient magnitude sum (WCMS) algorithm can preserve edges, but at the expense of noise removal. The non-local means algorithm removes noise effectively, but it tends to cause distortion (e.g., whitening artifacts), and when the noise level is high it is less effective. In this paper, we propose an efficient denoising algorithm: the image is denoised with the non-local means algorithm in the spatial domain and with the WCMS algorithm in the wavelet domain, and the two results are combined by a weighted average to obtain the final image. Experiments show that our algorithm improves the PSNR by 0.6 dB to 1.0 dB and renders image boundaries more clearly.
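As a rough illustration of the fusion step, the sketch below blends a spatial-domain non-local means result with a wavelet-domain result using a fixed weight; standard scikit-image denoisers stand in for the paper's NLM and WCMS implementations, and the weight alpha is an assumed value, not one reported by the authors.

```python
# Hedged sketch of weighted fusion of two denoised estimates.
import numpy as np
from skimage import data, img_as_float
from skimage.restoration import denoise_nl_means, denoise_wavelet

image = img_as_float(data.camera())
noisy = image + 0.05 * np.random.randn(*image.shape)

nlm = denoise_nl_means(noisy, h=0.06, patch_size=5, patch_distance=6)
wav = denoise_wavelet(noisy, wavelet='db4', mode='soft')  # stand-in for WCMS

alpha = 0.5                       # blending weight (assumed)
fused = alpha * nlm + (1 - alpha) * wav
```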
Cone-beam computed tomography (CBCT) has always been at the forefront of medical image processing. Denoising, as an image pre-processing step, greatly affects subsequent image analysis and recognition. In this paper, a new algorithm for image denoising is proposed. By thresholding the interscale wavelet coefficient magnitude sum (WCMS) within a cone of influence (COI), the wavelet coefficients are classified into two categories: irregular coefficients, and edge-related and regular coefficients, which are then processed in different ways. Meanwhile, according to the characteristics of the projection image sequences in the CBCT system, an effective noise variance estimation method is proposed. Experiments show that our algorithm improves the PSNR by 1.3 dB to 2.6 dB, and image borders are rendered more clearly.
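A minimal sketch of the coefficient-classification idea is given below: detail coefficients whose magnitudes, summed across scales, fall below a threshold are treated as noise-dominated ("irregular") and suppressed, while the rest are kept. The stationary wavelet transform, the wavelet choice, and the threshold rule are assumptions for illustration, not the paper's exact WCMS/COI procedure.

```python
# Hedged sketch of interscale magnitude-sum classification of coefficients.
import numpy as np
import pywt
from skimage import data, img_as_float

noisy = img_as_float(data.camera()) + 0.05 * np.random.randn(512, 512)

levels = 2
coeffs = pywt.swt2(noisy, 'db2', level=levels)   # undecimated: scales stay aligned

# Magnitude summed across scales at each pixel, per detail orientation.
mag_sum = [sum(abs(coeffs[l][1][k]) for l in range(levels)) for k in range(3)]

cD1 = pywt.dwt2(noisy, 'db2')[1][2]              # finest diagonal details
sigma = np.median(np.abs(cD1)) / 0.6745          # robust noise estimate
thr = levels * 3.0 * sigma                       # assumed threshold rule

denoised_coeffs = []
for l in range(levels):
    cA, details = coeffs[l]
    new_details = tuple(np.where(mag_sum[k] > thr, details[k], 0.0)
                        for k in range(3))
    denoised_coeffs.append((cA, new_details))

denoised = pywt.iswt2(denoised_coeffs, 'db2')
```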
Today, almost all C2C e-commerce community member reputation evaluation algorithms are time-sensitive. The reputation of a community member is accumulated after transactions, and the increment of reputation after a...
In order to enhance image information security, image encryption has become an important research direction. A new image encryption algorithm based on an encryption template is presented in this paper. This algorithm is...
In Web database integration, crawling data pages is important for data extraction. The fact that data are spread across multiple result pages increases the difficulty of accessing them for integration. Thus, it is necessary to accurately and automatically crawl query result pages from Web databases. To address this problem, we propose a novel approach based on URL classification to effectively identify result pages. In our approach, we compute the similarity between the URLs of hyperlinks in result pages and classify them into four categories. Each category maps to a set of similar Web pages, which separates result pages from others. Then, we use a page probing method to verify the correctness of the classification and improve the accuracy of the crawled result pages. Experimental results demonstrate that our approach is effective for identifying the collection of result pages in a Web database and can improve the quality and efficiency of data extraction.
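A minimal sketch of grouping hyperlinks by URL similarity is shown below; the token-based Jaccard measure, the similarity threshold, and the greedy grouping are assumptions for illustration, and the paper's own similarity measure and four-category scheme may differ.

```python
# Hedged sketch of grouping hyperlink URLs by a simple token similarity.
import re
from urllib.parse import urlparse

def url_tokens(url):
    """Break a URL into path segments and query parameter names."""
    p = urlparse(url)
    tokens = [seg for seg in p.path.split('/') if seg]
    tokens += re.findall(r'([^&=?]+)=', p.query)
    return set(tokens)

def url_similarity(u1, u2):
    a, b = url_tokens(u1), url_tokens(u2)
    return len(a & b) / len(a | b) if a | b else 0.0

def group_urls(urls, threshold=0.6):
    """Greedy clustering: a URL joins the first group it is similar to."""
    groups = []
    for url in urls:
        for g in groups:
            if url_similarity(url, g[0]) >= threshold:
                g.append(url)
                break
        else:
            groups.append([url])
    return groups
```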
Early detection of pulmonary nodules on multi-slice computed tomography (MSCT) is an important clinical indication for early-stage lung cancer diagnosis. Support vector machines (SVMs) are now widely used in pattern recognition. We are developing a nodule detection system based on a combination of SVMs. For training and testing the SVM classifier, a lung nodule simulation method in three-dimensional (3D) image space is developed to provide a better test environment. The detection method consists of multiple steps: 3D nodule simulation and insertion, lung field segmentation, initial nodule candidate extraction, 3D geometry and intensity feature calculation, and SVM-based false positive (FP) reduction. The proposed scheme is applied to three mixed chest MSCT datasets and evaluated using receiver operating characteristic (ROC) curve analysis. The experimental results illustrate the efficiency of the proposed method, and the sensitivity of the SVM classifier is found to be 86%.
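The sketch below illustrates the SVM-based false-positive reduction step on synthetic candidate features using scikit-learn; the feature set, kernel, and data are assumptions for illustration, not the paper's actual configuration or results.

```python
# Hedged sketch of SVM-based false-positive reduction with ROC evaluation.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(0)
# Columns: e.g. volume, sphericity, mean intensity, intensity std (assumed).
X = rng.normal(size=(400, 4))
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel='rbf', probability=True).fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]
fpr, tpr, _ = roc_curve(y_te, scores)
print("AUC:", auc(fpr, tpr))
```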
The problem of extracting data from a Web page has been studied in many works. In this paper, we present a novel approach that extracts data records from Web pages based on XML encoding techniques. First, our approach formats a given Web data page into an XML document. Then, instead of using DOM-based approaches, we use an XML encoding model to transform the XML document into a linear sequence. Our algorithm identifies the data records of a Web page from this sequence, which avoids complex matching between subtrees in the DOM model. Moreover, we address the problem of repetitive subparts in records and propose an algorithm for data alignment. Experimental results show that our approach can extract data records accurately from Web pages.
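The sketch below illustrates the linear-encoding idea on a toy fragment: the tree is flattened into a depth-first tag sequence, and a container whose children share the same encoded sequence is taken as a list of data records. The sample markup and the simple repetition test are assumptions, not the paper's exact encoding model.

```python
# Hedged sketch: flatten an XML tree into tag sequences, then find repeats.
import xml.etree.ElementTree as ET

xml_doc = """<div>
  <ul>
    <li><a/><span/></li>
    <li><a/><span/></li>
    <li><a/><span/></li>
  </ul>
</div>"""

def encode(node):
    """Depth-first tag sequence for one subtree."""
    seq = [node.tag]
    for child in node:
        seq += encode(child)
    return seq

root = ET.fromstring(xml_doc)

# A container whose children all share the same encoded sequence is treated
# as a list of data records.
for container in root.iter():
    child_seqs = [tuple(encode(c)) for c in container]
    if len(child_seqs) >= 2 and len(set(child_seqs)) == 1:
        print(f"<{container.tag}> holds {len(child_seqs)} repeated records:",
              child_seqs[0])
```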
Mobile users traveling along roads often issue KNN queries based on their current locations from their mobile terminals (e.g., where is the nearest gas station?). However, exact location information transmitted to an insecure server can easily allow the mobile user to be tracked. It is therefore important to protect mobile users' location privacy while providing location-based services. People traveling on roads always follow a road network. We observe that two cloaking subgraph structures, which we name the cloaking cycle and the cloaking tree, can be used to protect mobile users' location privacy effectively in a road network environment. Based on these two subgraph structures, we propose a novel location privacy preserving approach using cloaking cycles and forests, which can effectively protect mobile users' location privacy while efficiently providing exact location-based services.
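As a rough illustration of tree-style cloaking, the sketch below grows a connected subgraph from the user's road vertex until it covers at least k users and would submit that subgraph instead of the exact position; the toy graph, user counts, and k are assumptions, and the paper's cloaking-cycle and cloaking-tree constructions are more elaborate.

```python
# Hedged sketch of growing a cloaking subgraph on a road network graph.
import networkx as nx

road = nx.Graph()
road.add_edges_from([(0, 1), (1, 2), (2, 3), (1, 4), (4, 5), (5, 0)])
users_on_vertex = {0: 1, 1: 0, 2: 2, 3: 1, 4: 0, 5: 1}  # assumed counts

def cloaking_tree(graph, start, k):
    """Grow a BFS tree from `start` until it covers at least k users."""
    covered, users = {start}, users_on_vertex[start]
    for u, v in nx.bfs_edges(graph, start):
        if users >= k:
            break
        covered.add(v)
        users += users_on_vertex[v]
    return graph.subgraph(covered)

cloak = cloaking_tree(road, start=0, k=3)
print("cloaked vertices:", sorted(cloak.nodes))
```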
With the rapid development of Internet technology and information processing technology, images are commonly transmitted via the Internet. People enjoy the convenience, but they must also face the risk that important image information can easily be intercepted in transmission by unknown persons or hackers. To enhance image information security, image encryption has become an important research direction. An image encryption algorithm based on DNA sequences for large images is presented in this paper. The main purpose of this algorithm is to reduce the encryption time for large images. The algorithm uses natural DNA sequences as the main keys. The first part is pixel scrambling: the original image is permuted according to a scrambling sequence generated from one DNA sequence. The second part is pixel replacement: the pixel gray values of the scrambled image are XORed bit by bit, in turn, with those of three encryption templates generated from another DNA sequence. Experimental results demonstrate that the image encryption algorithm is feasible and simple. Performance analysis shows that the algorithm is robust against various attacks and offers high security.
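The sketch below illustrates the two stages on a toy image: a permutation derived from one DNA string scrambles the pixel order, and a byte template derived from a second DNA string is XORed with the scrambled pixels. The base-to-bits mapping, the toy DNA strings, and the single-template simplification are assumptions; the paper uses natural DNA sequences and three templates.

```python
# Hedged sketch of DNA-keyed scrambling followed by XOR with a template.
import numpy as np

BASE_BITS = {'A': 0, 'C': 1, 'G': 2, 'T': 3}   # 2 bits per base (assumed mapping)

def dna_to_bytes(dna, n):
    """Pack DNA bases (2 bits each) into n bytes, repeating the sequence."""
    vals = [BASE_BITS[b] for b in dna]
    out, i = bytearray(), 0
    while len(out) < n:
        byte = 0
        for _ in range(4):                      # 4 bases -> 1 byte
            byte = (byte << 2) | vals[i % len(vals)]
            i += 1
        out.append(byte)
    return np.frombuffer(bytes(out), dtype=np.uint8)

image = np.arange(64, dtype=np.uint8).reshape(8, 8)        # toy "image"
key_stream = dna_to_bytes("ACGTTGCACCGTAGCT", image.size)

perm = np.argsort(key_stream, kind="stable")               # scrambling order
template = dna_to_bytes("TTGACGCATACGGATC", image.size)    # encryption template

flat = image.flatten()[perm]                               # stage 1: scramble
cipher = (flat ^ template).reshape(image.shape)            # stage 2: XOR

# Decryption reverses the two stages.
recovered = np.empty_like(flat)
recovered[perm] = cipher.flatten() ^ template
assert np.array_equal(recovered.reshape(image.shape), image)
```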