This work provides a brief roadmap of the development of neural network sensitivity analysis from the 1960s to the present. The two main streams of sensitivity measures, the partial derivative sensitivity measure (PD-SM) and the stochastic sensitivity measure (ST-SM), are compared. The PD-SM finds the rate of change of the network output with respect to parameter changes, while the ST-SM finds, in a statistical sense, the magnitudes of the output perturbations between the original training samples and the perturbed samples. Their computational complexities are compared. Furthermore, how to evaluate multiple parameters of the neural network, with or without correlation, is also explored, and the differences between the two measures in supervised pattern classification problems are discussed. The evaluations are based on three major applications of sensitivity analysis in supervised pattern classification: feature selection, sample selection, and neural network generalization assessment. The ST-SM and PD-SM of the RBFNN are used for the investigations.
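The contrast between the two measures can be sketched numerically. The toy single-input RBF network below, with illustrative (assumed) centers, width, weights, sample count, and noise level, estimates a PD-SM as the mean absolute derivative of the output with respect to the input, and an ST-SM as the root-mean-square output perturbation under stochastic input noise:

```python
import numpy as np

# Toy RBF network y(x) = sum_j w_j * exp(-(x - c_j)^2 / (2*s^2)).
# All parameter values here are illustrative assumptions, not from the paper.
rng = np.random.default_rng(0)
centers = np.array([-1.0, 0.0, 1.0])
width = 0.5
weights = np.array([0.8, -0.5, 1.2])

def rbf_out(x):
    # Evaluate the RBF network on an array of scalar inputs.
    phi = np.exp(-((x[:, None] - centers) ** 2) / (2 * width ** 2))
    return phi @ weights

x_train = rng.uniform(-2, 2, size=200)

# PD-SM: rate of change of the output w.r.t. the input, averaged over samples
# (finite-difference approximation of the partial derivative).
eps = 1e-5
pd_sm = np.mean(np.abs((rbf_out(x_train + eps) - rbf_out(x_train)) / eps))

# ST-SM: magnitude of output perturbations between original and perturbed
# samples, in a statistical (RMS) sense.
sigma = 0.1
noise = rng.normal(0.0, sigma, size=x_train.shape)
st_sm = np.sqrt(np.mean((rbf_out(x_train + noise) - rbf_out(x_train)) ** 2))
```

Note the structural difference: the PD-SM needs one derivative evaluation per sample, while the ST-SM needs perturbed forward passes, which is where their computational costs diverge.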
Low-density parity-check (LDPC) codes are an error detection and correction method able to achieve near-Shannon-limit channel communication. LDPC decoders involve a series of computations between two units, the check node and the bit node. In this paper we propose the use of an approximation unit to perform the check node operation. Additionally, we propose a ROM-based look-up table (LUT) as a function approximation technique to be used within an LDPC decoder. The paper shows that a ROM-based LUT achieves better performance than a piecewise linear approximation of the LDPC check node function. Furthermore, it shows that the ROM LUT method can gradually take over as the function approximation technique of choice for computationally demanding VLSI designs as the technology shifts to the nanometer era.
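A ROM-based LUT of this kind can be modeled in a few lines. In sum-product LDPC decoding the check node update uses the nonlinearity phi(x) = -ln(tanh(x/2)); the sketch below precomputes it on a quantized grid standing in for the ROM contents. The ROM depth and input range are illustrative assumptions, not the paper's design values:

```python
import numpy as np

def phi(x):
    # Check node nonlinearity used in sum-product LDPC decoding.
    return -np.log(np.tanh(x / 2.0))

N_ENTRIES = 256           # ROM depth (assumed)
X_MIN, X_MAX = 0.05, 8.0  # quantization range (assumed); phi diverges near 0
grid = np.linspace(X_MIN, X_MAX, N_ENTRIES)
rom = phi(grid)           # values "burned into" the ROM

def phi_lut(x):
    # Quantize the input to the nearest ROM address and read the stored value.
    addr = np.round((x - X_MIN) / (X_MAX - X_MIN) * (N_ENTRIES - 1))
    idx = np.clip(addr, 0, N_ENTRIES - 1).astype(int)
    return rom[idx]

x = np.linspace(0.1, 7.9, 1000)
max_err = np.max(np.abs(phi_lut(x) - phi(x)))
```

The approximation error is governed by the ROM depth and the slope of phi near zero, which is the trade-off a hardware designer would tune against a piecewise linear unit.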
ISBN (print): 0780378652
Mining sequential patterns in large databases is an important problem in data mining research. The enormous sizes of available datasets and the possibly large number of mined patterns demand efficient and scalable algorithms. In this paper, we present a new dynamic load-balancing algorithm based on HPSPM (a hash-based parallel algorithm for mining sequential patterns) for shared-nothing environments. Experiments on the Dawning 300 cluster system show that this algorithm achieves good speedup and is a substantial improvement over HPSPM.
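The core idea HPSPM builds on, hashing candidate sequences to processors so each node counts support for a disjoint partition, can be sketched on a single machine. The processor count, hash function, and candidate set below are illustrative assumptions:

```python
import zlib

NUM_PROCS = 4  # assumed processor count

def owner(candidate):
    # Deterministically hash a candidate sequence (tuple of items) to the
    # processor responsible for counting its support.
    return zlib.crc32("|".join(candidate).encode()) % NUM_PROCS

candidates = [("a", "b"), ("a", "c"), ("b", "c"), ("b", "d"), ("c", "d")]
partitions = {p: [] for p in range(NUM_PROCS)}
for c in candidates:
    partitions[owner(c)].append(c)

# Every candidate lands in exactly one partition, so no support counts
# need to be merged across processors for the same candidate.
total = sum(len(v) for v in partitions.values())
```

A dynamic load-balancing variant would additionally migrate candidates from overloaded partitions, which this static sketch does not show.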
The knapsack problem is very important in cryptosystems and in number theory. This paper proposes a new parallel algorithm for the knapsack problem in which the method of divide and conquer is adopted. Based on an EREW-SIMD machine with shared memory, the proposed algorithm utilizes O((2^(n/4))^(1-ε)) processors, 0 ≤ ε ≤ 1, and O(2^n) memory to find a solution for the n-element knapsack problem in time O(2^(n/4)·(2^(n/4))^ε). Thus the cost of the proposed parallel algorithm is O(2^n), which is optimal and an improvement over past results.
The knapsack problem is very important in cryptosystems and in number theory. We propose a new parallel algorithm for the knapsack problem in which the method of divide and conquer is adopted. Based on an EREW-SIMD machine with shared memory, the proposed algorithm utilizes O((2^(n/4))^(1-ε)) processors, 0 ≤ ε ≤ 1, and O(2^n) memory to find a solution for the n-element knapsack problem in time O(2^(n/4)·(2^(n/4))^ε). Thus the cost of the proposed parallel algorithm is O(2^n), which is optimal and an improvement over past results.
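The divide-and-conquer idea underlying such algorithms is the classic two-list (meet-in-the-middle) technique for the subset-sum form of the knapsack problem: split the n items into halves, enumerate the 2^(n/2) subset sums of each half, sort one list, and search it for complements. A sequential sketch (the instance values are illustrative, and the parallel refinement in the paper is not shown):

```python
from itertools import combinations
from bisect import bisect_left

def subset_sums(items):
    # Enumerate all 2^len(items) subset sums of one half.
    sums = []
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            sums.append(sum(combo))
    return sums

def knapsack_exists(items, target):
    # Two-list search: for each sum s in the left half, binary-search the
    # sorted right half for the complement target - s.
    half = len(items) // 2
    left = subset_sums(items[:half])
    right = sorted(subset_sums(items[half:]))
    for s in left:
        i = bisect_left(right, target - s)
        if i < len(right) and right[i] == target - s:
            return True
    return False

items = [3, 34, 4, 12, 5, 2]
found = knapsack_exists(items, 9)  # 4 + 5 = 9, so a solution exists
```

Enumerating and searching the two lists in parallel across processor groups is what yields the processor/time trade-off parameterized by ε in the abstract.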
As more and more people enjoy the various services brought by mobile computing, it is becoming a global trend in today’s world. At the same time, securing mobile computing has been paid increasing attention. In this ...
ISBN (print): 9780769519326
With the exponential growth of both the amount and diversity of the information that the Web encompasses, automatic classification of topic-specific Web sites is highly desirable. We propose a novel approach to Web site classification based on the content, structure, and context information of Web sites. In our approach, the site structure is represented as a two-layered tree in which each page is modeled as a DOM (document object model) tree and a site tree is used to hierarchically link all pages within the site. Two context models are presented to capture the topic dependences within the site. The hidden Markov tree (HMT) model is then utilized as the statistical model of the site tree and the DOM tree, and an HMT-based classifier is presented for their classification. Moreover, to reduce the download size of Web sites while keeping classification accuracy high, an entropy-based approach is introduced to dynamically prune the site trees. On this basis, we employ a two-phase classification system for classifying Web sites through a fine-to-coarse recursion. Experiments show that our approach offers high accuracy and efficient processing.
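The entropy-based pruning step can be illustrated with a minimal sketch: stop expanding (and downloading) a site-tree branch once the class posterior at a node is already near-certain. The posteriors and threshold below are illustrative assumptions, not the paper's values:

```python
import math

def entropy(posterior):
    # Shannon entropy (in bits) of a class posterior distribution.
    return -sum(p * math.log2(p) for p in posterior if p > 0)

PRUNE_THRESHOLD = 0.5  # bits; assumed cutoff

def should_prune(posterior):
    # Prune the subtree when the node's class distribution is confident enough
    # that downloading more pages is unlikely to change the decision.
    return entropy(posterior) < PRUNE_THRESHOLD

confident = should_prune([0.97, 0.02, 0.01])  # low entropy: prune
uncertain = should_prune([0.40, 0.35, 0.25])  # high entropy: keep expanding
```

This is what lets the system trade download size against accuracy: branches are fetched only while they still carry classification uncertainty.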
ISBN (print): 9781581135732
Controllable, reactive human motion is essential in many video games and training environments. Characters in these applications often perform tasks based on modified motion data, but response to unpredicted events is also important in order to maintain realism. We approach the problem of motion synthesis for interactive, humanlike characters by combining dynamic simulation and human motion capture data. Our control systems use trajectory tracking to follow motion capture data and a balance controller to keep the character upright while modifying sequences from a small motion library to accomplish specified tasks, such as throwing punches or swinging a racket. The system reacts to forces computed from a physical collision model by changing stiffness and damping terms. The freestanding, simulated humans respond automatically to impacts and smoothly return to tracking. We compare the resulting motion with video and recorded human data.
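Trajectory tracking of this kind is commonly realized as a per-joint PD servo, tau = kp*(theta_des - theta) - kd*theta_dot, with the stiffness kp and damping kd reduced on impact so the character yields before returning to tracking. A one-joint sketch with assumed gains (not the paper's values):

```python
# Stiff gains for normal tracking and compliant gains used right after a
# collision; all values are illustrative assumptions.
KP_TRACK, KD_TRACK = 300.0, 30.0
KP_IMPACT, KD_IMPACT = 60.0, 10.0

def pd_torque(theta, theta_dot, theta_des, in_contact):
    # PD tracking torque; switching gains on contact softens the response
    # to impacts, after which stiff tracking of the motion data resumes.
    kp, kd = (KP_IMPACT, KD_IMPACT) if in_contact else (KP_TRACK, KD_TRACK)
    return kp * (theta_des - theta) - kd * theta_dot

tau_free = pd_torque(0.0, 0.0, 0.1, in_contact=False)  # stiff: 300 * 0.1
tau_hit = pd_torque(0.0, 0.0, 0.1, in_contact=True)    # compliant: 60 * 0.1
```

The same tracking error thus produces a much smaller corrective torque during contact, which is what makes the simulated response to punches and impacts look compliant rather than rigid.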