Accurate and quick localization of randomly deployed nodes is required by many applications in wireless sensor networks and is usually formulated as a multidimensional optimization problem. Particle swarm optimization (PSO) is well suited to the localization problem because of its quick convergence and moderate demand for computing resources. This paper proposes a distributed two-phase PSO algorithm that solves the flip-ambiguity problem and improves efficiency and precision. In this work, the initial search space is defined by the bounding box method, and a refinement phase is introduced to correct the error caused by flip ambiguity. Moreover, unknown nodes that have only two references, or three near-collinear references, are also localized in our approach. Simulation results indicate that the proposed distributed localization algorithm is superior to previous algorithms.
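As a minimal illustration of the idea, the sketch below estimates a single unknown node's 2-D position with a plain (non-distributed) PSO, using the bounding-box intersection of the anchor ranges as the initial search space. Function names, parameter values and the ranging-error fitness are illustrative assumptions; the paper's two-phase refinement and flip-ambiguity correction are not reproduced here.

```python
import numpy as np

def pso_localize(anchors, dists, n_particles=30, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5):
    """Estimate one unknown node's 2-D position from anchor positions and
    measured ranges with a plain PSO.  Illustrative sketch only."""
    anchors = np.asarray(anchors, float)
    dists = np.asarray(dists, float)

    # bounding-box search space: intersection of squares centred on anchors
    lo = np.max(anchors - dists[:, None], axis=0)
    hi = np.min(anchors + dists[:, None], axis=0)
    lo, hi = np.minimum(lo, hi), np.maximum(lo, hi)

    def fitness(p):                     # mean squared ranging error
        return np.mean((np.linalg.norm(anchors - p, axis=1) - dists) ** 2)

    pos = np.random.uniform(lo, hi, size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = np.random.rand(n_particles, 1), np.random.rand(n_particles, 1)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)            # keep particles in the box
        f = np.array([fitness(p) for p in pos])
        better = f < pbest_f
        pbest[better], pbest_f[better] = pos[better], f[better]
        gbest = pbest[pbest_f.argmin()].copy()

    return gbest
```

For instance, pso_localize([(0, 0), (10, 0), (0, 10)], [3.61, 8.54, 7.28]) should return an estimate close to (2, 3) when the ranges are consistent.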
An unsupervised segmentation method and a performance evaluation technique are proposed for synthetic aperture radar (SAR) images based on the mixture multiscale autoregressive (MMAR) model and the bootstrap method. The segmentation-evaluation technique consists of detecting the number of image regions, estimating the MMAR parameters with a bootstrap stochastic annealing expectation-maximization (BSAEM) algorithm, and classifying pixels into regions with a Bayesian classifier. Experimental results demonstrate that the evaluation operation is robust, that the proposed segmentation method is superior to traditional single-resolution techniques, and that it considerably reduces computing time compared with the EM algorithm.
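The following sketch shows only the pixel-classification step in simplified form: a plain Gaussian mixture fitted by EM stands in for the MMAR model and the BSAEM estimator, and each pixel is assigned to the component with the highest posterior probability, which corresponds to the Bayesian classification step described above. It is an assumption-laden stand-in, not the paper's multiscale method.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_sar(image, n_regions):
    """Pixel-wise segmentation sketch: a Gaussian mixture over pixel
    intensities (a simplified stand-in for the MMAR model), with each
    pixel assigned to its most probable component (MAP assignment)."""
    pixels = image.reshape(-1, 1).astype(float)
    gmm = GaussianMixture(n_components=n_regions, covariance_type="full")
    labels = gmm.fit(pixels).predict(pixels)
    return labels.reshape(image.shape)
```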
In this paper, we propose a deep packet inspection system based on MapReduce. MapReduce, a parallel distributed programming model developed by Google, is applied to the task of deep packet inspection. N...
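Since the abstract is truncated here, the sketch below only illustrates the general map/reduce split for packet inspection: a map step that matches each payload against a small signature set and a reduce step that merges the partial match counts. The signature names, patterns and the use of a local process pool are illustrative assumptions rather than the system described in the paper.

```python
import re
from collections import Counter
from functools import reduce
from multiprocessing import Pool

# illustrative signature set; real DPI rule sets are far richer
SIGNATURES = {"http_get": re.compile(rb"GET /"),
              "sql_union": re.compile(rb"(?i)union\s+select")}

def map_packet(payload):
    """Map step: emit a count of 1 for every signature that matches a payload."""
    return Counter(name for name, pat in SIGNATURES.items()
                   if pat.search(payload))

def reduce_counts(a, b):
    """Reduce step: merge partial match counts."""
    a.update(b)
    return a

def inspect(payloads):
    """Run the map step over payloads in parallel, then reduce the results."""
    with Pool() as pool:
        partial = pool.map(map_packet, payloads)
    return reduce(reduce_counts, partial, Counter())
```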
This paper proposes a new nonlinear approach for 3D motion estimation of a planar object. Based on the premise that the object is a plane, or approximately planar, we use LSM (Least Squares Method) to estimate the 3...
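The abstract is truncated, so the sketch below covers only the stated premise: fitting a plane to 3-D points with the least squares method. The parameterisation z = a·x + b·y + c and the function name are assumptions for illustration; the full nonlinear motion-estimation procedure is not shown.

```python
import numpy as np

def fit_plane_lsm(points):
    """Least-squares fit of a plane z = a*x + b*y + c to 3-D points."""
    pts = np.asarray(points, float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coeffs  # (a, b, c)
```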
Research on product recommendation systems mainly focuses on user behavior or commodity contents, but rarely on commodity reviews. This paper extracts useful information hidden in commodity reviews with opinion mining technology; recommending products to users according to their favorite properties is more targeted. The main steps of opinion mining are the extraction of topic words and the polarity judgement of polar words. Because the time complexity of topic-extraction algorithms is high, this paper extracts explicit evaluation objects and evaluation words by matching noun phrases, and then builds a semantic mapping set of evaluation objects and evaluation words to determine implicit evaluation objects. K-means and BIRCH are combined to cluster the evaluation objects: k-means is used as a pre-clustering step for the BIRCH algorithm to avoid local optima, while BIRCH has the advantage of determining the number of clusters by self-learning. Clusters containing few items are deleted to prune the evaluation objects, which reduces the time complexity while preserving the clustering quality.
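A possible reading of the clustering pipeline described above is sketched below: the evaluation-object phrases are vectorised with TF-IDF (the abstract does not name a vectoriser, so this is an assumption), k-means pre-clusters the vectors, BIRCH with a self-determined number of clusters is applied to the k-means centroids, and small clusters are pruned. Parameter values are illustrative.

```python
from collections import Counter
from sklearn.cluster import KMeans, Birch
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_evaluation_objects(phrases, k_pre=50, min_size=3):
    """Cluster evaluation objects: k-means pre-clustering, BIRCH over the
    k-means centroids (BIRCH decides the final number of clusters), and
    pruning of clusters with few members (label -1)."""
    X = TfidfVectorizer().fit_transform(phrases)

    km = KMeans(n_clusters=k_pre, n_init=10).fit(X)   # pre-clustering
    centroids = km.cluster_centers_

    birch = Birch(n_clusters=None).fit(centroids)     # BIRCH picks cluster count
    coarse = birch.predict(centroids)                 # BIRCH label per centroid
    labels = coarse[km.labels_]                       # propagate to each phrase

    sizes = Counter(labels)
    return [lab if sizes[lab] >= min_size else -1 for lab in labels]
```

Note that k_pre must be smaller than the number of phrases; the pruned phrases (label -1) correspond to the deleted sparse clusters mentioned above.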
To address the defect that current time-dependent road network models cannot fully reflect road attribute information, and considering that road weights under traffic congestion should be based on a characterization of travel time and an impedance-function model of average speed, we put forward an improved road network model based on edge cost analysis. We then propose an ant colony optimization algorithm with a new hierarchical restricted search area and a corresponding dynamic switching strategy for traffic jams: by dynamically adjusting the search level according to road capacity under congestion, the algorithm improves the quality of route planning and avoids congested roads. Our simulation experiment randomly assigns speed values with a speed-fitting function and shows that the multi-population ant colony algorithm based on the layered restricted search area is significantly better than the alternatives.
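As a rough sketch of the route-search component, the code below runs a basic single-colony ant colony search over a road graph whose edge costs already encode the congestion-aware travel time. The hierarchical restricted search area, the dynamic switching strategy and the multi-population extension described above are not reproduced; parameter values are illustrative.

```python
import random

def aco_route(graph, src, dst, n_ants=20, n_iters=50,
              alpha=1.0, beta=2.0, rho=0.1, q=1.0):
    """Minimal ant colony search for a low-cost path.  `graph` maps every
    node to a dict {neighbour: positive travel-time cost}."""
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone levels
    best_path, best_cost = None, float("inf")

    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            node, path, visited = src, [src], {src}
            while node != dst:
                cands = [v for v in graph[node] if v not in visited]
                if not cands:                              # dead end, drop ant
                    path = None
                    break
                weights = [tau[(node, v)] ** alpha *
                           (1.0 / graph[node][v]) ** beta for v in cands]
                node = random.choices(cands, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                tours.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost

        # evaporate, then deposit pheromone proportional to tour quality
        for edge in tau:
            tau[edge] *= (1.0 - rho)
        for path, cost in tours:
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += q / cost

    return best_path, best_cost
```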
In this paper, in order to improve the accuracy of hepatitis diagnosis in a computer-aided diagnosis system, medical data from a public network database were optimized with a GA-BP neural network algorithm. Through this algorithm we obtained the important indices for judging hepatitis and their weights, regarded each weight as the single-point measure of a non-additive measure, and then used the fuzzy integral to calculate the probability of a person being diagnosed with hepatitis. The results show that using the fuzzy integral can greatly improve the accuracy of computer-aided hepatitis diagnosis.
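The fuzzy-integral step can be made concrete as follows, assuming the per-index weights are treated as the densities of a Sugeno λ-measure (each density in (0, 1)) and the Choquet integral is used as the fuzzy integral. The paper does not state which variant it uses, so this is an illustrative choice.

```python
import numpy as np
from scipy.optimize import brentq

def solve_lambda(densities):
    """Solve 1 + lam = prod(1 + lam * g_i) for the Sugeno lambda-measure."""
    g = np.asarray(densities, float)
    if np.isclose(g.sum(), 1.0):
        return 0.0                          # measure is already additive
    f = lambda lam: np.prod(1.0 + lam * g) - (1.0 + lam)
    if g.sum() > 1.0:                       # lambda lies in (-1, 0)
        return brentq(f, -1.0 + 1e-9, -1e-9)
    return brentq(f, 1e-9, 1e9)             # lambda lies in (0, +inf)

def choquet_integral(values, densities):
    """Choquet integral of the index values w.r.t. the lambda-fuzzy measure
    built from the single-point densities (the index weights)."""
    f = np.asarray(values, float)
    g = np.asarray(densities, float)
    lam = solve_lambda(g)
    order = np.argsort(f)                   # ascending f(x_(1)) <= ... <= f(x_(n))
    total, prev = 0.0, 0.0
    for k, idx in enumerate(order):
        rest = order[k:]                    # A_(k) = {x_(k), ..., x_(n)}
        if np.isclose(lam, 0.0):
            mu = g[rest].sum()
        else:
            mu = (np.prod(1.0 + lam * g[rest]) - 1.0) / lam
        total += (f[idx] - prev) * mu
        prev = f[idx]
    return total
```

For example, with index values f = [0.7, 0.4, 0.9] and densities g = [0.3, 0.2, 0.4], choquet_integral(f, g) returns a diagnosis score lying between the smallest and largest index value.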
With the growth of personal data, allowing users to efficiently re-find personal data items has become an important research issue. In common experience, tasks are a popular way to classify a personal dataset and are often used as a cue to re-access expected items. In this paper, we propose a framework called TaskSpace to help users re-find expected data items based on user tasks, and we present the conceptual model of TaskSpace, a framework for implementing a task-based system, methods to identify task relationships, and an interface for users to perform task-based queries. The TaskSpace framework provides users an alternative way to re-find personal information and illustrates some interesting research issues.
Spike sorting is difficult when there is high waveform similarity between different spikes or when there is a large number of superimposed spikes in the sample. A new sample optimization method, called the window-gradient feature, is proposed in this paper. Every spike waveform is segmented into successive fragments of width σ, and each segment is called a window. The gradient change of each window is then calculated, and the resulting ratio is used as the new alternative feature of the window. Finally, all window-gradient features of the spike waveforms replace the original waveform features for spike sorting. The method is verified on simulation data at different SNRs. The experimental results show that, when an SVM is used for spike sorting, the optimization effect of the proposed method is better than that of PCA, especially for data sets with high noise or large waveform similarity.
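A minimal sketch of the feature extraction is given below. The abstract does not spell out the exact ratio, so the mean first difference per window is used as an illustrative stand-in for the "gradient change" of a window; the SVM step uses scikit-learn.

```python
import numpy as np
from sklearn.svm import SVC

def window_gradient_features(waveforms, sigma):
    """Split each spike waveform into consecutive windows of `sigma` samples
    and summarise each window by its gradient (here: mean first difference,
    an illustrative interpretation of the window-gradient feature)."""
    X = []
    for w in waveforms:
        grads = np.diff(w)                                # per-sample gradient
        n_win = len(grads) // sigma
        X.append([grads[i * sigma:(i + 1) * sigma].mean() for i in range(n_win)])
    return np.asarray(X)

# usage sketch: `spikes` is an (n_spikes, n_samples) array, `labels` the unit ids
# X = window_gradient_features(spikes, sigma=8)
# clf = SVC(kernel="rbf").fit(X, labels)
```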
A new clustering classification approach based on the fuzzy closeness relationship (FCR) is studied in this paper. Fuzzy clustering classification is one of the important and valid methods for knowledge discovery. One problem in fuzzy clustering classification is determining the classification of a fuzzy sample in a given limited sample space. Another is validity: if samples are similar in the sample space, their fuzzy types should be similar as well. In our research, we first use a triangle arithmetic operator and triangle transference to extend the fuzzy equivalence relationship to a fuzzy closeness relationship and perform clustering. Secondly, by introducing fuzzy coverage based on the fuzzy closeness relationship, we judge the similar types of similar samples. Thirdly, we present the clustering classification method based on the fuzzy closeness relationship and fuzzy coverage; the method overcomes the greater information loss of fuzzy equivalence. Finally, we test its feasibility. The approach is applied to knowledge discovery for risk prediction in the electronic commerce market, and good results have been obtained.
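For reference, the sketch below implements the classical fuzzy-equivalence clustering (max-min transitive closure of a fuzzy similarity matrix followed by a λ-cut), which is the baseline the closeness-relationship method above sets out to improve; the triangle operators and fuzzy coverage themselves are not reproduced. The similarity matrix is assumed reflexive and symmetric.

```python
import numpy as np

def fuzzy_cluster(R, lam):
    """Cluster samples from a fuzzy similarity matrix R by computing its
    max-min transitive closure and cutting it at level `lam`."""
    T = np.array(R, dtype=float)
    while True:                               # max-min composition until stable
        T2 = np.max(np.minimum(T[:, :, None], T[None, :, :]), axis=1)
        if np.allclose(T2, T):
            break
        T = T2
    cut = T >= lam                            # lambda-cut equivalence classes
    labels, next_label = -np.ones(len(T), int), 0
    for i in range(len(T)):
        if labels[i] < 0:
            labels[cut[i]] = next_label
            next_label += 1
    return labels
```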