Peer-to-Peer (P2P) networks are largely used for file sharing and hence must provide efficient mechanisms for searching the files stored at various nodes. Existing structured P2P overlays support only "exact-match" lookup, which is hardly sufficient in a file-sharing network. This paper addresses the problem of keyword-based search in structured P2P networks. We propose a new keyword-based searching algorithm which can be implemented on top of any structured P2P overlay. We demonstrate that the proposed algorithm achieves very good search results, as it requires the minimum number of messages to find all the references to files containing at least the given set of keywords.
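A minimal sketch of the general idea behind keyword lookup on top of a DHT, assuming an inverted list is published under the hash of each keyword and a multi-keyword query intersects the retrieved lists; the dictionary standing in for the overlay's put/get primitive and all names are illustrative, not the algorithm proposed in the paper.

import hashlib

def dht_key(keyword: str) -> str:
    """Map a keyword to its DHT identifier (here, SHA-1 of the keyword)."""
    return hashlib.sha1(keyword.lower().encode()).hexdigest()

def publish(dht: dict, file_ref: str, keywords: list[str]) -> None:
    """Store the file reference in the inverted list of every keyword."""
    for kw in keywords:
        dht.setdefault(dht_key(kw), set()).add(file_ref)

def search(dht: dict, keywords: list[str]) -> set[str]:
    """Return references to files containing at least the given keywords.

    One lookup per keyword; posting lists are intersected starting from
    the smallest one to keep intermediate sets small.
    """
    postings = [dht.get(dht_key(kw), set()) for kw in keywords]
    postings.sort(key=len)
    result = set(postings[0]) if postings else set()
    for p in postings[1:]:
        result &= p
    return result

dht = {}
publish(dht, "node12:/music/song.mp3", ["music", "rock", "mp3"])
publish(dht, "node7:/docs/p2p.pdf", ["p2p", "search", "pdf"])
print(search(dht, ["music", "mp3"]))   # {'node12:/music/song.mp3'}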
ISBN: (Print) 9781424469826; 9788988678183
The Semantic Web is a project and vision of the World Wide Web Consortium to extend the current Web so that information is given well-defined meaning and structure, enabling computers and people to work in cooperation. Semantic technologies are being added to enterprise solutions to accommodate new techniques for discovering relationships across different databases, business applications and Web services. In this paper, we present an architectural model for a software tool which combines Semantic Web mechanisms with database metadata and data warehousing mechanisms. Combining the benefits of the Semantic Web concept with a powerful database server can substantially improve information management.
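For illustration only, a minimal sketch of one way database catalog metadata could be exposed as Semantic Web triples, assuming the rdflib Python library and an invented ex: vocabulary; the paper's actual architecture, ontology and tooling may differ.

from rdflib import Graph, Literal, Namespace, RDF

# Invented vocabulary for tables and columns (illustrative only).
EX = Namespace("http://example.org/dbmeta#")

def metadata_to_rdf(tables: dict[str, list[str]]) -> Graph:
    """Turn {table: [columns]} catalog metadata into an RDF graph."""
    g = Graph()
    g.bind("ex", EX)
    for table, columns in tables.items():
        t = EX[table]
        g.add((t, RDF.type, EX.Table))
        for col in columns:
            c = EX[f"{table}.{col}"]
            g.add((c, RDF.type, EX.Column))
            g.add((t, EX.hasColumn, c))
            g.add((c, EX.name, Literal(col)))
    return g

g = metadata_to_rdf({"customers": ["id", "name", "country"]})
print(g.serialize(format="turtle"))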
Since data is becoming more and more unstructured, clustering heterogeneous data is essential to obtaining structured information in response to user queries. In this paper, we test and validate the results of a new clustering technique, clustering by compression, when applied to metadata associated with heterogeneous sets of documents. The clustering-by-compression procedure is based on a parameter-free, universal similarity distance, the normalized compression distance (NCD), computed from the lengths of compressed data files (singly and in pair-wise concatenation). Experimental results show that using metadata can improve the average clustering performance by about 10% over clustering the same sample data set without metadata.
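A minimal sketch of the NCD computation at the core of clustering by compression, using zlib as a stand-in for whichever compressor the experiments actually used; the resulting distance matrix can then be fed to any standard clustering algorithm.

import zlib
from itertools import combinations

def clen(data: bytes) -> int:
    """Length of the compressed representation of `data`."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = clen(x), clen(y), clen(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Pairwise distances over documents (optionally prefixed with their metadata).
docs = {
    "a": b"title: p2p search; keywords: dht, overlay, lookup",
    "b": b"title: keyword search in structured p2p overlays",
    "c": b"title: cake recipes; keywords: baking, dessert",
}
for (i, x), (j, y) in combinations(docs.items(), 2):
    print(i, j, round(ncd(x, y), 3))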
Notice of Violation of IEEE Publication Principles: "Improving Heterogeneous Data Clustering by Using Metadata and Compression Algorithms" by Alexandra Cernian, Dorin Carstoiu, Valentin Sgarciu, in the Proceedings of the 2010 RoEduNet International Conference (RoEduNet), June 2010, pp. 169-173. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains portions of text from the paper cited below. A credit notice is used, but due to the absence of quotation marks or offset text, copied material is not clearly referenced or specifically identified. "Etude des Methodes de Classification par Compression" by Tudor Basarab IONESCU, published in Rapport interne 2005-06-28-DI-FB, http://***/fb/download/Articles/Rapport_***

Nowadays, we have to deal with a large quantity of unstructured, heterogeneous data, produced by an increasing number of sources. Clustering heterogeneous data is essential to obtaining structured information in response to user queries. In this paper, we assess the results of a new clustering technique, clustering by compression, when applied to metadata associated with heterogeneous sets of data. The clustering-by-compression procedure is based on a parameter-free, universal similarity distance, the normalized compression distance (NCD), computed from the lengths of compressed data files (singly and in pair-wise concatenation). Experimental results show that using metadata could improve the average clustering performance by about 20% over clustering the same sample data set without metadata.
The paper discusses a generic, open architecture for industrial or non-industrial robot controllers, allowing system designers and robot manufacturers to develop rapid-deployment automation solutions for the particular mechanics of robot manipulators. The paper describes a multiple-axis open controller design for a mobile AGV platform carrying a robotic arm, with inclination control to provide horizontal alignment in any terrain configuration. The navigation and locating of the mobile robot platform, the motion control of the robotic arm, as well as monitoring, learning, program editing, debugging and execution are embedded in a multiprocessor system developed around a motion control solution for which a structured programming language was developed. A strongly coupled multiprocessor architecture embedding model learning, control and man-machine GUI functionalities is described, both in terms of hardware implementation and basic software system design (RTOS, multitasking and operating modes). Experimental results are reported for the motion control of the 5-d.o.f. robot arm.
ISBN: (Print) 9781424473359
The current trend in processor design is to add multiple cores to increase the system's overall performance, but this is not a solution for increasing the performance of serial applications. Due to its potential to greatly accelerate a wide variety of serial applications, reconfigurable computing has become the subject of a great deal of research. Its key feature is the ability to perform computations in hardware in order to increase performance, while retaining much of the flexibility of a software solution. In this paper, we address the problem of fully automating the process of selecting the code to be used for hardware acceleration. We present a software-hardware partitioning system that transforms Impulse C source code into blocks of C and VHDL code. The resulting C code is run on the CPU, while the VHDL code is implemented on reconfigurable hardware, e.g. an FPGA.
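As a hypothetical illustration of the selection step only, the sketch below scores profiled code regions with a simplified cost model (estimated hardware cycles plus transfer overhead versus software cycles) and keeps those above a speedup threshold; the names, numbers and threshold are invented and do not reflect the paper's Impulse C based partitioning system.

from dataclasses import dataclass

@dataclass
class Region:
    name: str
    sw_cycles: float       # profiled CPU cycles per invocation
    hw_cycles: float       # estimated cycles after pipelining in hardware
    bytes_moved: int       # data transferred to/from the accelerator
    transfer_cycles_per_byte: float = 0.5   # assumed bus cost

    def speedup(self) -> float:
        hw_total = self.hw_cycles + self.bytes_moved * self.transfer_cycles_per_byte
        return self.sw_cycles / hw_total

def select_for_hardware(regions: list[Region], min_speedup: float = 1.5) -> list[str]:
    """Return the regions that pass the (assumed) profitability threshold."""
    return [r.name for r in regions if r.speedup() >= min_speedup]

regions = [
    Region("fir_filter", sw_cycles=2.0e6, hw_cycles=2.5e5, bytes_moved=64_000),
    Region("parse_header", sw_cycles=8.0e3, hw_cycles=4.0e3, bytes_moved=16_000),
]
print(select_for_hardware(regions))   # ['fir_filter']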
ISBN: (Print) 9781424473359
This paper defines mechanisms which ensure the correct functionality of systems based on the Volunteer Grid concept and proposes a new framework for Volunteer Grid computing management. Volunteer Grid systems allow high-performance computer networks to be set up easily, rapidly and at low cost, the main characteristic of their nodes being voluntary participation. Thus, a supercomputer is created, able to perform highly complex calculations in a relatively short period of time. This volunteering is itself a weakness of the system, because not only physical faults must be taken into consideration, but also sabotage through which participants could try to increase their rating.
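A minimal sketch of one generic sabotage-tolerance scheme in the spirit of volunteer computing: each work unit is replicated to several volunteers and a result is accepted only when a majority agrees. All names and parameters are illustrative; this is not the framework proposed in the paper.

import random
from collections import Counter

def validate(results: list[str], quorum: int) -> str | None:
    """Accept the result reported by at least `quorum` volunteers, else None."""
    value, count = Counter(results).most_common(1)[0]
    return value if count >= quorum else None

def run_work_unit(correct: str, volunteers: int, cheat_rate: float) -> str | None:
    """Simulate `volunteers` replicas, some of which return bogus results."""
    reports = [correct if random.random() > cheat_rate else "bogus"
               for _ in range(volunteers)]
    return validate(reports, quorum=volunteers // 2 + 1)

random.seed(0)
accepted = sum(run_work_unit("42", volunteers=3, cheat_rate=0.2) == "42"
               for _ in range(1000))
print(f"work units validated correctly: {accepted}/1000")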
Over the Internet today, computing and communications environments are more complex and chaotic than classical distributed systems, lacking any centralized organization or hierarchical control. Peer-to-Peer network overlays provide a good substrate for creating large-scale data sharing, content distribution and application-level multicast applications. We present DistHash, a P2P overlay network designed to share large sets of replicated distributed objects in the context of large-scale, highly dynamic infrastructures. The system uses original solutions to achieve optimal message routing in hop count and throughput, to provide adequate consistency among replicas, and to provide a fault-tolerant substrate. In this paper we present results proving that the system is able to scale to a large number of nodes, and we describe the fault tolerance and system orchestration mechanisms added in order to ensure the reliability and availability of the distributed system in an autonomic manner.
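A minimal sketch of replicated object placement on a hash ring, in the spirit of structured overlays such as DistHash, assuming each object is stored on the k successor nodes of its hash; the real system's routing, consistency and fault-tolerance mechanisms are considerably more involved.

import hashlib
from bisect import bisect_right

def h(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes: list[str], replicas: int = 3):
        self.replicas = replicas
        self.ring = sorted((h(n), n) for n in nodes)   # (hash, node) pairs

    def replica_set(self, object_id: str) -> list[str]:
        """The `replicas` successors of the object's hash hold its copies."""
        keys = [k for k, _ in self.ring]
        start = bisect_right(keys, h(object_id)) % len(self.ring)
        return [self.ring[(start + i) % len(self.ring)][1]
                for i in range(min(self.replicas, len(self.ring)))]

ring = Ring([f"node{i}" for i in range(8)], replicas=3)
print(ring.replica_set("object:report.pdf"))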
This article concerns a hardware iterative decoder for a subclass of LDPC (Low-Density Parity-Check) codes that are implementation oriented, known as Architecture Aware LDPC (AA-LDPC). The decoder has been implemented in the form of a parameterizable VHDL description. To achieve a high clock frequency in the decoder's hardware implementation, a large number of pipeline registers has been used in the processing chain. However, the registers increase the processing path delay, since the number of clock cycles required for data propagation is increased; thus, in general, idle cycles must be introduced between decoding subiterations. In this paper we provide a method for calculating the exact number of required idle cycles on the basis of the parity-check matrix of the code. We then propose a heuristic algorithm for parity-check matrix optimization that minimizes the total number of required idle cycles and hence maximizes the decoder throughput. The proposed matrix optimization, by sorting rows and columns, does not change the code properties, yet the decoder throughput can be significantly increased.
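A simplified, illustrative model of where such idle cycles come from, assuming a layered decoder with a fixed pipeline depth in which a column updated by one layer only becomes available several cycles later; the exact formula and the matrix-optimization heuristic of the paper are not reproduced here.

import numpy as np

def idle_cycles(H: np.ndarray, pipeline_depth: int) -> list[int]:
    """Idle cycles needed between consecutive layers (rows of H), assuming
    one column processed per clock and results written back after
    `pipeline_depth` cycles."""
    layers = [np.flatnonzero(row) for row in H]     # column positions per layer
    idles = []
    for i in range(len(layers)):
        cur, nxt = layers[i], layers[(i + 1) % len(layers)]
        write_pos = {c: p + pipeline_depth for p, c in enumerate(cur)}
        need = 0
        for p_read, c in enumerate(nxt):
            if c in write_pos:
                # reading c at cycle len(cur) + idle + p_read must not
                # precede its write-back at cycle write_pos[c]
                need = max(need, write_pos[c] - len(cur) - p_read)
        idles.append(max(0, need))
    return idles

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 1, 0, 1]])
print(idle_cycles(H, pipeline_depth=4))   # [2, 0, 1]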
An improved frequency tracker is proposed for the recently introduced self-optimizing narrowband interference canceller (SONIC). The scheme is designed for disturbances with quasi-linear frequency modulation and, under a second-order Gaussian random-walk assumption, can be shown to be statistically efficient. One real-world experiment and several simulations show that a considerable improvement in disturbance rejection can be achieved with the new algorithm.
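As a generic illustration only, a Kalman-style tracker of instantaneous frequency under a second-order Gaussian random-walk model (state = frequency and its slope); the actual SONIC canceller and its improved tracker are not reproduced here.

import numpy as np

def track_frequency(meas, q=1e-6, r=1e-2):
    F = np.array([[1.0, 1.0], [0.0, 1.0]])       # constant-slope frequency model
    Q = q * np.array([[0.25, 0.5], [0.5, 1.0]])  # random walk on the slope
    Hm = np.array([[1.0, 0.0]])                  # only the frequency is observed
    x = np.array([meas[0], 0.0])
    P = np.eye(2)
    estimates = []
    for z in meas:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        S = Hm @ P @ Hm.T + r
        K = (P @ Hm.T) / S
        x = x + (K * (z - Hm @ x)).ravel()
        P = (np.eye(2) - K @ Hm) @ P
        estimates.append(x[0])
    return np.array(estimates)

# Quasi-linear frequency modulation plus measurement noise.
n = np.arange(500)
true_f = 0.1 + 2e-4 * n
rng = np.random.default_rng(0)
est = track_frequency(true_f + 0.02 * rng.standard_normal(n.size))
print(f"final tracking error: {abs(est[-1] - true_f[-1]):.4f}")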