ISBN: (Print) 9781450322591
Due to its low storage cost and fast query speed, hashing has been widely adopted for approximate nearest neighbor search in large-scale datasets. Traditional hashing methods try to learn hash codes in an unsupervised way that preserves the metric (Euclidean) structure of the training data. Very recently, supervised hashing methods, which try to preserve the semantic structure constructed from the semantic labels of the training points, have exhibited higher accuracy than unsupervised methods. In this paper, we propose a novel supervised hashing method, called latent factor hashing (LFH), to learn similarity-preserving binary codes based on latent factor models. An algorithm with a convergence guarantee is proposed to learn the parameters of LFH. Furthermore, a linear-time variant with stochastic learning is proposed for training LFH on large-scale datasets. Experimental results on two large datasets with semantic labels show that LFH can achieve higher accuracy than state-of-the-art methods with comparable training time. Copyright 2014 ACM.
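For intuition, the sketch below shows a latent-factor-style supervised hashing loop in the spirit of this abstract: real-valued latent factors are fitted so that a logistic function of their inner product tracks pairwise semantic similarity, then binarized by sign. The objective, update rule, function names, and hyperparameters are illustrative assumptions, not the paper's exact algorithm.

```python
import random
import numpy as np

def train_lfh_sketch(pairs, n, dim=16, lr=0.05, epochs=50, seed=0):
    """Illustrative latent-factor hashing sketch (not the paper's exact algorithm).

    pairs: list of ((i, j), s) with s = 1 for semantically similar points, 0 otherwise.
    Fits latent factors U (n x dim) so that sigmoid(0.5 * U[i] @ U[j]) tracks s,
    then binarizes each factor by sign to obtain the hash codes.
    """
    rng = np.random.default_rng(seed)
    random.seed(seed)
    U = rng.normal(scale=0.1, size=(n, dim))
    for _ in range(epochs):
        random.shuffle(pairs)                          # stochastic pass over labeled pairs
        for (i, j), s in pairs:
            ui, uj = U[i].copy(), U[j].copy()
            a = 1.0 / (1.0 + np.exp(-0.5 * ui @ uj))   # predicted similarity
            g = 0.5 * (s - a)                          # gradient of the log-likelihood term
            U[i] += lr * g * uj
            U[j] += lr * g * ui
    return np.where(U >= 0, 1, -1)                     # binary codes in {-1, +1}

# Usage with three toy points: 0 and 1 similar, both dissimilar to 2.
codes = train_lfh_sketch([((0, 1), 1), ((0, 2), 0), ((1, 2), 0)], n=3, dim=8)
print(codes)
```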
Building on the existing LEACH routing algorithm, this paper proposes a novel adaptive zone-crossing and virtual-node multi-hop routing algorithm for large-scale wireless sensor networks. First, we introduce the c...
Ackermann functions are examples of non-primitive recursive functions that grow extremely fast with respect to their arguments. In this paper, we present Ackermann functions in detail and introduce their influence on theoretical computer science.
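As a concrete illustration, the standard Ackermann-Peter definition can be written directly as a recursive function; memoization keeps small inputs tractable, but the values quickly become astronomically large.

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(10000)  # headroom for the deeply nested recursive calls

@lru_cache(maxsize=None)
def ackermann(m: int, n: int) -> int:
    """Ackermann-Peter function: total and computable, but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(3, n) = 2^(n+3) - 3 already grows exponentially; A(4, 2) has 19,729 decimal digits.
print(ackermann(3, 3))  # 61
```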
Data dependence analysis is a hard problem, particularly in the presence of pointer-like data structures. Inheritance and polymorphism in object-oriented languages provide program design and soft...
Interrupt-driven programs are often embedded in safety-critical systems to perform hardware/resource dependent data operation tasks, such as data acquisition, processing, and transformation. The interrupt programs and...
Context: In a large object-oriented software system, packages play the role of modules that group related classes together to provide well-identified services to the rest of the system. In this context, it is widely believed that modularization has a large influence on the quality of packages. Recently, Sarkar, Kak, and Rama proposed a set of new metrics to characterize the modularization quality of packages from important perspectives such as inter-module call traffic, state access violations, fragile base-class design, programming to interface, and plugin pollution. These package-modularization metrics are quite different from traditional package-level metrics, which measure software quality mainly from the size, extensibility, responsibility, independence, abstractness, and instability perspectives. As such, these package-modularization metrics are expected to be useful predictors of fault-proneness. However, little is currently known about their actual usefulness for fault-proneness prediction, especially compared with traditional package-level metrics. Objective: In this paper, we examine the role of these new package-modularization metrics in determining software fault-proneness in object-oriented systems. Method: We first use principal component analysis to analyze whether these new package-modularization metrics capture additional information compared with traditional package-level metrics. Second, we employ univariate prediction models to investigate how these new package-modularization metrics are related to fault-proneness. Finally, we build multivariate prediction models to examine the ability of these new package-modularization metrics to predict fault-prone packages. Results: Our results, based on six open-source object-oriented software systems, show that: (1) these new package-modularization metrics provide new and complementary views of software complexity compared with traditional package-level metrics; (2) most of these new package-modul...
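A minimal sketch of the three analysis steps (PCA, univariate models, multivariate model) is given below using scikit-learn; the metric matrix, labels, and evaluation choices are placeholder assumptions for illustration, not the paper's data or setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per package, columns are modularization metrics,
# y marks packages that contained at least one post-release fault.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # placeholder metric values
y = (rng.random(200) < 0.3).astype(int)  # placeholder fault labels

# Step 1: PCA to check whether the metrics contribute orthogonal dimensions of information.
pca = PCA().fit(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 3))

# Step 2: univariate models -- one logistic regression per metric.
for k in range(X.shape[1]):
    auc = cross_val_score(LogisticRegression(max_iter=1000), X[:, [k]], y,
                          cv=5, scoring="roc_auc").mean()
    print(f"metric {k}: univariate AUC = {auc:.3f}")

# Step 3: multivariate model combining all metrics.
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"all metrics: multivariate AUC = {auc:.3f}")
```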
Software test adequacy criteria are used to determine whether the testing of a software system is sufficient. Code coverage shows how thoroughly a program is tested according to the corresponding test adequacy criteria. T...
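As a toy illustration of coverage measurement, the sketch below records which lines of a single function execute during one call using Python's tracing hook; real coverage tools also enumerate all executable lines so they can report a percentage against an adequacy criterion. The function names and example are hypothetical.

```python
import sys

def traced_lines(func, *args):
    """Toy statement-coverage probe: record which lines of `func` run for one call."""
    executed = set()

    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            executed.add(frame.f_lineno - func.__code__.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return executed

def absolute(x):
    if x < 0:          # a single test input exercises only one branch
        return -x
    return x

print("executed line offsets:", sorted(traced_lines(absolute, 3)))
```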
Facial expression and emotion recognition from thermal infrared images has attracted more and more attention in recent years. However, the features adopted in current work are either temperature statistical parameters extracted from facial regions of interest or several hand-crafted features that are commonly used in the visible spectrum. To date, there are no image features specially designed for thermal infrared images. In this paper, we propose using the deep Boltzmann machine to learn thermal features for emotion recognition from thermal infrared facial images. First, the face is located and normalized from the thermal infrared images. Then, a deep Boltzmann machine model composed of two layers is trained. The parameters of the deep Boltzmann machine model are further fine-tuned for emotion recognition after pre-training for feature learning. Comparative experimental results on the NVIE database demonstrate that our approach outperforms other approaches using temperature statistic features or hand-crafted features borrowed from the visible domain. The features learned from the forehead, eye, and mouth regions are more effective for discriminating the valence dimension of emotion than those from other facial areas. In addition, our study shows that adding unlabeled data from another database during training can also improve feature learning performance.
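A rough sketch of this kind of pipeline is shown below, substituting a stack of two scikit-learn BernoulliRBM layers for the jointly trained deep Boltzmann machine and logistic regression for the fine-tuning stage; the data shapes, labels, and hyperparameters are placeholder assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical input: normalized thermal face crops flattened to [0, 1] vectors;
# y holds discrete emotion labels. A stacked-RBM pipeline only approximates the
# joint training of a true two-layer deep Boltzmann machine.
rng = np.random.default_rng(0)
X = rng.random((300, 32 * 32))         # placeholder thermal face images (32x32)
y = rng.integers(0, 3, size=300)       # placeholder emotion labels

model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=10, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=10, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),   # supervised fine-tuning stage
])
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```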
Spectral Graph Transducer (SGT) is one of the superior graph-based transductive learning methods for classification. For the Spectral Graph Transducer algorithm, a good graph representation for the data to be processed ...
ISBN: (Print) 9781450318884
It is well known that as a software system evolves, the source code tends to deviate from its design model, so maintaining their consistency is challenging. Our objective is to detect code changes that influence designed program behaviour, which are referred to as design-level changes, and to update the behavioural model timely and automatically to maintain consistency. We propose an approach that filters out low-level source code changes that do not influence program behaviour, abstracts code changes into updating operations for the behavioural model, and automates the integration and update of activity diagrams to maintain consistency. We have recognised that it is not uncommon for developers to introduce quick-and-dirty implementations that unnecessarily increase program complexity or introduce suboptimal behaviour changes. So while merging code changes into the behaviour model, our approach also calculates the cyclomatic complexity variation before and after the process so that developers can be alerted to significant and/or detrimental changes. Our tool allows the user to approve the change in code before merging and updating the model. Copyright 2012 ACM.
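As an illustration of the complexity-variation check, the sketch below computes a simplified McCabe cyclomatic complexity for Python source before and after a change; the paper's tool works on its own code model, so the node choices and function names here are assumptions.

```python
import ast

# AST node types counted as decision points in this simplified McCabe measure.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Simplified McCabe count: 1 + number of decision points in the source."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))

def complexity_delta(before: str, after: str) -> int:
    """Positive values mean the change made the code more complex."""
    return cyclomatic_complexity(after) - cyclomatic_complexity(before)

before = "def f(x):\n    return x + 1\n"
after = "def f(x):\n    if x > 0:\n        return x + 1\n    return -x\n"
print(complexity_delta(before, after))  # 1 -- the change adds one decision point
```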