Memory is a fundamental component of the human brain and plays a vital role in all mental processes. The analysis of memory systems through cognitive architectures can be performed at the computational, or functional, level on the basis of empirical data. In this paper we discuss memory systems in the extended Consciousness and Memory Model (CAM). The knowledge representations used in CAM for working memory, semantic memory, episodic memory, and procedural memory are introduced. We explain how, in CAM, all of these knowledge types are represented in dynamic description logic (DDL), a formal logic capable of describing and reasoning about dynamic application domains characterized by actions.
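The abstract does not give the DDL syntax itself, but the action-centric flavor of such a formalism can be sketched roughly. Below is a minimal, purely illustrative Python encoding of an action with preconditions and effects; the class name, fields, and the blocks-world example are assumptions, not CAM's actual representation.

```python
from dataclasses import dataclass

# Hypothetical encoding of a DDL-style atomic action; names and
# fields are illustrative assumptions, not the paper's formalism.
@dataclass
class Action:
    name: str
    preconditions: set   # assertions that must hold before execution
    add_effects: set     # assertions made true by the action
    del_effects: set     # assertions made false by the action

def applicable(action: Action, state: set) -> bool:
    """An action is executable only when all its preconditions hold."""
    return action.preconditions <= state

def execute(action: Action, state: set) -> set:
    """Progress the state: retract deleted assertions, assert new ones."""
    return (state - action.del_effects) | action.add_effects

# Example: a procedural-memory action for picking up a block.
pick_up = Action(
    name="pickUp(block1)",
    preconditions={"OnTable(block1)", "HandEmpty(robot)"},
    add_effects={"Holding(robot, block1)"},
    del_effects={"OnTable(block1)", "HandEmpty(robot)"},
)
state = {"OnTable(block1)", "HandEmpty(robot)"}
if applicable(pick_up, state):
    state = execute(pick_up, state)
print(state)  # {'Holding(robot, block1)'}
```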
Collecting massive commonsense knowledge (CSK) for commonsense reasoning has been a long-standing challenge in artificial intelligence research. Numerous methods and systems for acquiring CSK have been developed to overcome the knowledge acquisition bottleneck. Although some specific commonsense reasoning tasks have been proposed that allow researchers to measure and compare the performance of their CSK systems, we compare these systems at a higher level, along the following dimensions: the CSK acquisition task (what CSK is acquired, and from where), the technique used (how CSK can be acquired), and the CSK evaluation methods (how the acquired CSK is evaluated). In this survey, we first present a categorization of CSK acquisition systems and the major challenges in the field. Then, we review and compare the CSK acquisition systems in detail. Finally, we summarize the current progress in this field and explore some promising directions for future research.
The training algorithm of classical twin support vector regression (TSVR) can be attributed to the solution of a pair of quadratic programming problems (QPPs) with inequality constraints in the dual space. However, this solution is affected by time and memory constraints when dealing with large datasets. In this paper, we present a least squares version of TSVR in the primal space, termed primal least squares TSVR (PLSTSVR). By introducing the least squares method, the inequality constraints of TSVR are transformed into equality constraints. Furthermore, we attempt to directly solve the two QPPs with equality constraints in the primal space instead of the dual space; thus, we need only solve two systems of linear equations instead of two QPPs. Experimental results on artificial and benchmark datasets show that PLSTSVR has accuracy comparable to TSVR but with considerably less computational time. We further investigate its validity in predicting the opening price of stocks.
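As a rough illustration of why the primal least squares reformulation is cheap, the sketch below solves two regularized linear systems, one per bound function, and averages the two bounds for prediction. The exact matrices, regularization form, and hyperparameters (C1, C2, eps1, eps2 here) in the paper differ; this is an assumed simplified variant, not the authors' algorithm.

```python
import numpy as np

# Minimal sketch of the computational idea behind PLSTSVR: each of the
# two epsilon-insensitive bound regressors is obtained from a linear
# solve instead of a QPP. Hyperparameter names are assumptions.
def plstsvr_fit(X, y, C1=1.0, C2=1.0, eps1=0.1, eps2=0.1):
    G = np.hstack([X, np.ones((X.shape[0], 1))])   # augment with bias
    # Down-bound regressor: targets shifted down by eps1.
    A = G.T @ G + (1.0 / C1) * np.eye(G.shape[1])
    u1 = np.linalg.solve(A, G.T @ (y - eps1))
    # Up-bound regressor: targets shifted up by eps2.
    B = G.T @ G + (1.0 / C2) * np.eye(G.shape[1])
    u2 = np.linalg.solve(B, G.T @ (y + eps2))
    return u1, u2

def plstsvr_predict(X, u1, u2):
    G = np.hstack([X, np.ones((X.shape[0], 1))])
    return 0.5 * (G @ u1 + G @ u2)  # final estimate: mean of bounds

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
u1, u2 = plstsvr_fit(X, y)
print(plstsvr_predict(X[:5], u1, u2))
```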
ISBN: 9781479957521 (print)
Recently, several binary descriptors have been proposed that represent interest points in an image using binary codes. In these binary feature schemes, two descriptors are considered a match if the Hamming distance between them is below a threshold. Using the Hamming distance to measure the similarity between binary descriptors greatly improves computational efficiency. However, our experimental results show that a large number of bits in the binary feature vector cannot maintain their robustness when imaging conditions change. Rather than ignore the impact of these unstable bits, we take into account the differences in robustness among feature bits and propose a novel similarity measurement called the Fragile Bit Ratio (FBR). The FBR is used in binary feature matching to measure how two features differ: high FBRs are associated with genuine matches between two binary features, and low FBRs with impostor ones. Based on this metric, we propose a new binary feature matching scheme that fuses the Hamming distance and the Fragile Bit Ratio. In our approach, we first match descriptors coarsely using a Hamming distance threshold, and then filter the candidate set using the Fragile Bit Ratio. In experiments, the Fragile Bit Ratio effectively removes false matches and substantially improves the accuracy of image search. Furthermore, our method can easily be integrated into other well-established binary feature schemes.
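A minimal sketch of the two-stage matcher described above, assuming descriptors are bit vectors packed as NumPy uint8 arrays and that a fragility mask marking unstable bit positions is available; the threshold values are placeholders, not those of the paper.

```python
import numpy as np

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two packed uint8 descriptors."""
    return int(np.unpackbits(a ^ b).sum())

def fragile_bit_ratio(a, b, fragile_mask) -> float:
    """Fraction of differing bits that fall on fragile positions."""
    diff = np.unpackbits(a ^ b).astype(bool)
    if diff.sum() == 0:
        return 1.0  # identical descriptors: treat as trivially genuine
    return float((diff & fragile_mask).sum() / diff.sum())

def match(desc_a, desc_b, fragile_mask, t_hamming=60, t_fbr=0.5):
    # Stage 1: coarse filter by Hamming distance.
    if hamming(desc_a, desc_b) > t_hamming:
        return False
    # Stage 2: refine; per the paper, genuine matches show high FBR.
    return fragile_bit_ratio(desc_a, desc_b, fragile_mask) >= t_fbr

# Toy usage with a 256-bit descriptor and an assumed fragility mask.
rng = np.random.default_rng(1)
d1 = rng.integers(0, 256, size=32, dtype=np.uint8)
d2 = d1.copy()
d2[0] ^= 0b00000101                 # flip two bits
mask = rng.random(256) < 0.25       # hypothetical fragile-bit positions
print(match(d1, d2, mask))
```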
ISBN: 9781479965144 (print)
Messages spreading inside vehicular ad hoc networks (VANETs) generally need to achieve verifiability and content integrity while preserving user privacy. Otherwise, VANETs will either fall into chaos or deter users from embracing them. To achieve this goal, we propose a protocol containing a priori and a posteriori countermeasures that guarantees these features. The a priori process first verifies that each message is sent by a given vehicle only once. It then collects the message and checks whether its count exceeds a threshold value, improving the trustworthiness of the message. The a posteriori process verifies the integrity of the message, ensuring it is unchanged during transmission between the vehicle and the roadside unit. Privacy is preserved by applying a group signature scheme. In case of disruptive events, the proposed solution can trace a message back to the source vehicle that generated it.
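A toy sketch of the two countermeasures, assuming the group-signature layer is abstracted into an anonymous-but-verified sender token and that integrity is checked with a plain hash; the real protocol's cryptographic details and threshold value are not specified here.

```python
import hashlib
from collections import defaultdict

THRESHOLD = 3  # assumed: distinct reporters needed to trust a message

class RoadSideUnit:
    def __init__(self):
        # message digest -> set of sender tokens (one vote per vehicle)
        self.reporters = defaultdict(set)

    def a_priori(self, msg: bytes, sender_token: str) -> bool:
        """Count distinct reporters; trust once the threshold is met."""
        digest = hashlib.sha256(msg).hexdigest()
        self.reporters[digest].add(sender_token)  # set enforces "once"
        return len(self.reporters[digest]) >= THRESHOLD

    @staticmethod
    def a_posteriori(msg: bytes, received_digest: str) -> bool:
        """Accept only if the message was not altered in transit."""
        return hashlib.sha256(msg).hexdigest() == received_digest

# Toy usage: three anonymous vehicles report the same hazard.
rsu = RoadSideUnit()
hazard = b"obstacle at km 42"
for token in ("veh-a", "veh-b", "veh-c"):
    trusted = rsu.a_priori(hazard, token)
print(trusted, rsu.a_posteriori(hazard, hashlib.sha256(hazard).hexdigest()))
```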
ISBN: 9781479943012 (print)
Most existing ratable-aspect generation methods for aspect mining focus on identifying and rating aspects of reviews with overall ratings, while the huge volume of unrated reviews is beyond their reach. This drawback motivates the research problem of this paper: predicting aspect ratings and overall ratings for unrated reviews. To solve this problem, we propose a novel topic model based on Latent Dirichlet Allocation with indirect supervision. In contrast to the previous bag-of-words representation of review documents, we use quad-tuples of (head, modifier, rating, entity) to explicitly model the associations between modifiers and ratings. Specifically, our solution for aspect mining in unrated reviews is decomposed into three steps. First, ratable aspects are generated over sentiments from training reviews with overall ratings. Next, aspect identification and ratings are inferred for unrated reviews. Finally, overall ratings are predicted for unrated reviews. Under this framework, aspect and sentiment associations are captured in the form of joint probabilities through a generative process. The effectiveness of our approach is verified on a real-world dataset crawled from TripAdvisor (http://***/), and extensive experiments show that our method significantly outperforms state-of-the-art methods.
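The quad-tuple representation can be pictured with a toy example. The encoding below is an assumption about the shape of that representation, not the authors' preprocessing pipeline, and the naive overall-rating average merely stands in for the model's generative inference.

```python
from collections import namedtuple

# Each review is a set of quad-tuples rather than a bag of words.
Quad = namedtuple("Quad", ["head", "modifier", "rating", "entity"])

# "The room was spotless but the staff were rude." for a hypothetical
# entity hotel_42, with modifier polarity mapped to a 1-5 scale (assumed).
review = [
    Quad(head="room",  modifier="spotless", rating=5, entity="hotel_42"),
    Quad(head="staff", modifier="rude",     rating=1, entity="hotel_42"),
]

# Naive stand-in for overall-rating prediction: average aspect ratings.
overall = sum(q.rating for q in review) / len(review)
print(overall)  # 3.0
```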
ISBN: 9781479967162 (print)
The problem of efficiently finding the top-k frequent items has attracted much attention in recent years. Storage constraints at the processing node and the intrinsically evolving nature of data streams are two main challenges. In this paper, we propose a method to tackle these two challenges based on the space-saving and gossip-based algorithms, respectively. Our method is implemented on SAMOA, a scalable machine learning framework for advanced massive online analysis. The experimental results show its effectiveness and scalability.
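A compact sketch of the Space-Saving algorithm underlying the method: it keeps at most k counters and, when a new item arrives with all counters occupied, evicts the current minimum and lets the newcomer inherit its count. The gossip-based aggregation across SAMOA processing nodes is not shown.

```python
# Minimal Space-Saving sketch (Metwally et al.). Counts may
# overestimate true frequencies by at most the inherited count,
# but heavy hitters are retained.
def space_saving(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            victim = min(counters, key=counters.get)  # least-counted item
            count = counters.pop(victim)
            counters[item] = count + 1  # newcomer inherits victim's count
    return counters

print(space_saving("abracadabra", k=3))  # 'a' dominates the stream
```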
Scientific workflows integrate data and computing interfaces as configurable, semi-automatic graphs to solve a scientific problem. Kepler is such a software system for designing, executing, reusing, evolving, archiving, and sharing scientific workflows. Electron tomography (ET) enables high-resolution views of complex cellular structures, such as cytoskeletons, organelles, viruses, and chromosomes. Imaging investigations produce large datasets. For instance, in electron tomography, the size of a 16-fold image tilt series is about 65 gigabytes, with each projection image comprising 4096 × 4096 pixels. When serial sections or montage techniques are used for large-field ET, the datasets become even larger. For higher-resolution images with multiple tilt series, the data size may be in the terabyte range. The demands of mass data processing and complex algorithms require the integration of diverse codes into flexible software structures. This paper describes a workflow for Electron Tomography Programs in Kepler (EPiK). The EPiK workflow embeds the tracking process of IMOD and implements the main reconstruction algorithms, including filtered backprojection (FBP) from TxBR and iterative reconstruction methods. We have tested the three-dimensional (3D) reconstruction process using EPiK on ET data. EPiK can be a potential toolkit for biology researchers, with the advantages of logical viewing, easy handling, convenient sharing, and future extensibility.
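As a rough illustration of the FBP step that EPiK wraps, the sketch below reconstructs a 2-D slice from a sinogram using a ramp filter and backprojection. Real ET reconstruction in TxBR handles curvilinear projection models and far larger volumes, so this is a toy, not the EPiK pipeline.

```python
import numpy as np

# Toy 2-D filtered backprojection. `sinogram` has shape
# (n_angles, n_detectors); `angles` are in radians.
def fbp(sinogram, angles, size):
    n_det = sinogram.shape[1]
    # Ramp filter applied to each projection in the Fourier domain.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(
        np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1)
    )
    # Backproject: smear each filtered projection across the image.
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    recon = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        idx = np.clip(t.astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / len(angles)

# Toy usage: reconstruct from a synthetic sinogram of a point object.
angles = np.linspace(0, np.pi, 60, endpoint=False)
sino = np.zeros((60, 64))
sino[:, 32] = 1.0  # a point at the center projects to the same bin
print(fbp(sino, angles, size=64).shape)  # (64, 64)
```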