In this paper, we consider a novel anycast-based integrated routing protocol (AIRP) to reduce the delay cost of communications in multihop WSNs. Without tight time synchronization or known geographic info...
We propose a new fractional-order chaotic complex system and study its dynamical properties including symmetry, equilibria and their stability, and chaotic attractors. Chaotic behavior is verified with phase portraits...
Static data-race detection is a powerful tool, as it provides clues that let dynamic approaches instrument only certain memory accesses. However, static data-race analysis suffers from a high false-positive rate. A key reaso...
ISBN (Print): 9781479909735
The continued increase of fault rates in integrated circuits makes processors, especially many-core processors, more susceptible to errors. Meanwhile, most systems and applications do not need full fault coverage, whose overhead is excessive, so on-demand fault tolerance is desirable for these applications. In this paper, we propose an adaptive low-overhead fault tolerance mechanism for many-core systems, called Device View Redundancy (DVR). It treats fault tolerance as a device that can be configured and used by an application when high reliability is needed. Moreover, DVR exploits idle resources for low-overhead fault tolerance, based on the observation that the utilization of many-core systems is low due to limited parallelism in applications. Experiments show that the performance overhead of DVR is reduced by 16% to 98% compared with full Dual Modular Redundancy (DMR).
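The "fault tolerance as a configurable device" idea can be illustrated with a minimal sketch. This is not the paper's implementation, only an assumed model: redundancy is switched on per task, and the redundant copy (which DVR would schedule on an idle core) is compared against the primary result.

```python
# Hypothetical sketch of on-demand Dual Modular Redundancy (DMR):
# fault tolerance is exposed as a switchable "device" that an
# application enables only when it needs high reliability.
def run_task(task, inputs, dmr_enabled=False):
    result = task(inputs)
    if dmr_enabled:
        # Redundant execution; DVR would place this on an idle core.
        shadow = task(inputs)
        if shadow != result:
            raise RuntimeError("DMR mismatch: transient fault detected")
    return result
```

Because the check only runs when `dmr_enabled` is set, tasks that do not need high reliability pay no redundancy overhead at all.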
A general hardware structure is proposed to accelerate variable data set management. It is designed to accept instructions flexibly and to carry out the commonly used functions, as well as some more complicated functions, of the linked-list data structure. The structure can access data through both pointer- and address-based mechanisms. To fully utilize the limited memory resources, we propose a memory-recycle scheme that reuses the memory space of deleted data. Experimental results on an FPGA show that our proposal accelerates variable data set management while using only a few hardware resources and consuming very little power. Compared with a software linked-list implementation on a PC, our FPGA design achieves high speedups.
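The memory-recycle idea can be sketched in software as a linked list backed by a fixed pool plus a free list of deleted slots, so deleted memory is reused before any fresh slot is taken. The class and field names below are assumptions for illustration, not the paper's hardware design.

```python
# Hypothetical software model of the memory-recycle scheme: deleted
# nodes go onto a free list so their slots can be reused.
class PoolList:
    def __init__(self, capacity):
        self.data = [None] * capacity      # fixed memory pool (as on an FPGA)
        self.next = [-1] * capacity        # per-slot "pointer" field
        self.free = list(range(capacity))  # free list of recyclable slots
        self.head = -1

    def insert(self, value):
        if not self.free:
            raise MemoryError("pool exhausted")
        slot = self.free.pop()             # reuse a recycled slot first
        self.data[slot] = value
        self.next[slot] = self.head
        self.head = slot
        return slot

    def delete_head(self):
        slot = self.head
        self.head = self.next[slot]
        self.free.append(slot)             # recycle the slot for reuse
        return self.data[slot]
```

With this scheme the pool never grows: a workload that repeatedly inserts and deletes stays within the fixed capacity.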
With CMOS technologies approaching the scaling ceiling, novel memory technologies have thrived in recent years, among which the memristor is a rather promising candidate for future resistive memory (RRAM). The memristor's potential to store multiple bits of information as different resistance levels allows its application in multilevel cell (MLC) technology, which can significantly increase memory capacity. However, most existing memristor models are built for binary or continuous memristance switching. In this paper, we propose simulation program with integrated circuits emphasis (SPICE) models of charge-controlled and flux-controlled memristors with multilevel resistance states based on the memristance-versus-state map. In our model, the memristance switches abruptly between neighboring resistance states. The proposed model allows users to easily set the number of resistance levels as a parameter, and it predicts the resistance switching time when the input current/voltage waveform is given. The functionality of our models has been validated in HSPICE. The models can be used in multilevel RRAM modeling as well as in artificial neural network simulations.
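The memristance-versus-state map with abrupt multilevel switching can be sketched outside SPICE as well. The following is a minimal assumed model, not the paper's SPICE code: accumulated charge is quantized into a configurable number of levels, and the memristance jumps between neighboring discrete states as the charge crosses level thresholds.

```python
# Hypothetical charge-controlled multilevel memristance map: the
# device resistance switches abruptly between `levels` discrete
# states as the accumulated charge q crosses uniform thresholds.
# r_on, r_off, and q_max are illustrative parameter values.
def memristance(q, r_on=100.0, r_off=16000.0, levels=4, q_max=1e-3):
    """Map accumulated charge q to one of `levels` resistance states."""
    step = q_max / levels
    state = min(int(max(q, 0.0) / step), levels - 1)  # abrupt level index
    # Linearly spaced resistance levels between r_off and r_on.
    return r_off - state * (r_off - r_on) / (levels - 1)
```

Because `levels` is just a parameter, the same map covers binary switching (`levels=2`) and denser multilevel cells alike, mirroring the configurability the model advertises.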
Multi-pattern matching methods based on numerical computation are presented in this paper. First, a multiple-pattern matching algorithm based on added information is advanced. In the process of accumulating info...
ISBN (Print): 9781479906505
Nonnegative matrix factorization (NMF) decomposes a nonnegative dataset X into two low-rank nonnegative factor matrices, i.e., W and H, by minimizing either the Kullback-Leibler (KL) divergence or the Euclidean distance between X and WH. NMF has been widely used in pattern recognition, data mining and computer vision because the non-negativity constraints on both W and H usually yield an intuitive parts-based representation. However, NMF suffers from two problems: 1) it ignores the geometric structure of the dataset, and 2) it does not explicitly guarantee a parts-based representation on all datasets. In this paper, we propose an orthogonal nonnegative locally linear embedding (ONLLE) method to overcome the aforementioned problems. ONLLE assumes that each example embeds in its nearest neighbors and keeps such relationships in the learned subspace to preserve the geometric structure of the dataset. To learn a parts-based representation, ONLLE explicitly incorporates an orthogonality constraint on the learned basis to keep its spatial locality. To optimize ONLLE, we apply an efficient fast gradient descent (FGD) method on the Stiefel manifold, which accelerates the popular multiplicative update rule (MUR). The experimental results on real-world datasets show that FGD converges much faster than MUR. To evaluate the effectiveness of ONLLE, we conduct both face recognition and image clustering on real-world datasets, comparing with representative NMF methods.
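The MUR baseline that ONLLE's FGD optimizer is compared against is standard and easy to sketch. The following implements plain Euclidean-distance NMF with multiplicative updates (not ONLLE itself, and with illustrative defaults for the iteration count and smoothing term):

```python
import numpy as np

# Sketch of the multiplicative update rule (MUR) for Euclidean NMF:
# X ≈ W H with W, H kept elementwise nonnegative by construction,
# since each update multiplies by a ratio of nonnegative terms.
def nmf_mur(X, rank, iters=200, eps=1e-9, seed=0):
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H
```

Nonnegativity never has to be enforced explicitly: starting from nonnegative factors, every update multiplies by a nonnegative ratio, which is exactly why MUR is popular despite its slow convergence.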
A GPU-based parallel computing method for the max ordinal number in universal combinatorics coding is presented in this paper. The core of universal combinatorics coding is ordinal-number computation, and the calculation of an ordinal number depends on the value of the max ordinal number. The calculation of the max ordinal number is divided into two parts: multiplication and division of large numbers. This paper focuses on GPU-parallel computation of the multiplication step in the max-ordinal-number calculation; the speed of the multiplication part is increased substantially, which improves the efficiency of the max-ordinal-number calculation as a whole.
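Why large-number multiplication maps well to a GPU can be seen from its structure: every digit-by-digit partial product is independent, and only the final carry propagation is sequential. The sequential sketch below (an illustration, not the paper's CUDA kernel) marks the parallelizable part:

```python
# Sketch of schoolbook large-number multiplication. Each (i, j)
# partial product is independent of all others, which is the part a
# GPU can compute in parallel; only the final carry propagation over
# the accumulated columns is inherently sequential.
def bignum_mul(a, b, base=10):
    """a, b: little-endian digit lists. Returns the product's digits."""
    out = [0] * (len(a) + len(b))
    for i, da in enumerate(a):          # parallelizable over (i, j)
        for j, db in enumerate(b):
            out[i + j] += da * db       # accumulate partial products
    carry = 0
    for k in range(len(out)):           # sequential carry propagation
        carry, out[k] = divmod(out[k] + carry, base)
    return out
```

For example, 12 × 34 with digits stored little-endian gives `bignum_mul([2, 1], [4, 3])`, i.e., the digits of 408.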
ISBN (Print): 9781467352185
The large number of software repositories on the Internet is fundamentally changing the traditional paradigms of software maintenance. Efficient categorization of the massive projects in these repositories, so that relevant software can be retrieved, is of vital importance for Internet-based maintenance tasks such as solution searching, best-practice learning and so on. Much previous work on software categorization mines source code or byte code, and has only been verified on relatively small collections of projects with coarse-grained categories or clusters. However, Internet-based software maintenance requires finer-grained, more scalable and language-independent categorization approaches. In this paper, we propose a novel approach that hierarchically categorizes software projects based on their online profiles across multiple repositories. We design an SVM-based categorization framework to classify the massive number of software projects hierarchically. To improve categorization performance, we aggregate different types of profile attributes from multiple repositories and design a weighted combination strategy that assigns greater weights to more important attributes. Extensive experiments are carried out on more than 18,000 projects across three repositories. The results show that our approach achieves significant improvements by using the weighted combination, and the overall precision, recall and F-measure reach 71.41%, 65.60% and 68.38% in appropriate settings. Compared to previous work, our approach presents competitive results with 123 finer-grained and multi-layered categories. In contrast to approaches using source code or byte code, ours is more effective for large-scale and language-independent software categorization.
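The weighted-combination strategy can be sketched in a few lines. The attribute names and weight values below are made up for illustration; the paper's actual attributes and weighting are not reproduced here.

```python
# Hypothetical sketch of the weighted combination: per-attribute
# scores (e.g. from description text, tags, project name) are merged
# with weights so that more informative attributes count for more.
def combine_scores(attr_scores, weights):
    """attr_scores, weights: dicts keyed by attribute name."""
    total = sum(weights.values())
    return sum(weights[a] * s for a, s in attr_scores.items()) / total

# Illustrative attributes and weights (assumed, not from the paper):
score = combine_scores(
    {"description": 0.8, "tags": 0.5, "name": 0.2},
    {"description": 3.0, "tags": 2.0, "name": 1.0},
)
```

Normalizing by the weight sum keeps the combined score on the same scale as the per-attribute scores, so attribute sets of different sizes remain comparable.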