ISBN (Print): 9783902661913
A new methodology for the design of control systems for real-time applications is presented. It proposes a first stage of assisted design of nominal systems with simulation-based verification of the achieved performance. The control algorithm is then improved by robustification so that the performance obtained in simulation is reproduced as closely as possible on the real-time process. Finally, a supervisor is implemented to optimize the thermo-energetic process and to compute the best choice of reference signals.
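The following is a minimal sketch of what simulation-based verification of a nominal design can look like in practice; the plant model, PI gains and performance thresholds are illustrative assumptions and are not taken from the paper.

```python
# Sketch only: verify a nominal controller's simulated step response against
# performance targets. Plant, gains and thresholds below are assumptions.

def simulate_step_response(kp=2.0, ki=1.0, a=1.0, dt=0.01, t_end=10.0):
    """Discrete (Euler) simulation of a PI controller on the first-order plant
    dy/dt = -a*y + u, tracking a unit step reference."""
    y, integral, t = 0.0, 0.0, 0.0
    trace = []
    while t < t_end:
        error = 1.0 - y                      # unit step reference
        integral += error * dt
        u = kp * error + ki * integral       # PI control law
        y += (-a * y + u) * dt               # Euler step of the plant
        t += dt
        trace.append((t, y))
    return trace

def verify_performance(trace, max_overshoot=0.10, settle_band=0.05, settle_time=5.0):
    """Check the simulated response against nominal performance targets."""
    overshoot = max(y for _, y in trace) - 1.0
    settled = all(abs(y - 1.0) <= settle_band for t, y in trace if t >= settle_time)
    return overshoot <= max_overshoot and settled

if __name__ == "__main__":
    ok = verify_performance(simulate_step_response())
    print("nominal design meets the simulated specifications:", ok)
```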
The Semantic Web is a development of the World Wide Web in which the meaning (semantics) of web information is defined, making it possible for the web to "understand" and satisfy the requests of people. Semantic technologies are being added to enterprise solutions to accommodate new techniques for discovering relationships across different databases, business applications and Web services. In this paper, we present an architectural model for a distributed software tool which combines Semantic Web mechanisms with database metadata and data warehousing mechanisms. If the benefits of the Semantic Web concept are combined with a powerful database server and with the strong points of a distributed application, information management is improved.
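As a rough illustration of combining database metadata with Semantic Web mechanisms, the sketch below reads table and column metadata from an SQLite database and exposes it as RDF triples using rdflib. The EX vocabulary and helper names are made up for the example and do not correspond to the architecture proposed in the paper.

```python
# Hedged sketch: relational metadata as RDF triples (assumes rdflib and SQLite).
import sqlite3
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/dbmeta/")   # hypothetical vocabulary

def metadata_to_rdf(db_path):
    """Read table/column metadata from SQLite and express it as RDF triples."""
    g = Graph()
    conn = sqlite3.connect(db_path)
    tables = [r[0] for r in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table'")]
    for table in tables:
        t_uri = EX[table]
        g.add((t_uri, RDF.type, EX.Table))
        for _, col_name, col_type, *_ in conn.execute(f"PRAGMA table_info({table})"):
            c_uri = EX[f"{table}.{col_name}"]
            g.add((c_uri, RDF.type, EX.Column))
            g.add((c_uri, EX.partOf, t_uri))
            g.add((c_uri, EX.sqlType, Literal(col_type)))
    conn.close()
    return g

# Usage: print(metadata_to_rdf("warehouse.db").serialize(format="turtle"))
```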
ISBN (Print): 9781424459179; 9780769539676
Failure detection is a fundamental building block for ensuring fault tolerance in large-scale distributed systems. In this paper we present an innovative solution to this problem. The approach is based on adaptive, decentralized failure detectors, capable of working asynchronously and independently of the application flow. The proposed failure detectors are based on clustering, a gossip-based algorithm for detection at the local level, and a hierarchical structure among clusters of detectors along which traffic is channeled. We present results showing that the system is able to scale to a large number of nodes while still meeting the QoS requirements of both applications and resources, and that it includes the fault tolerance and system orchestration mechanisms added in order to assess the reliability and availability of distributed systems in an autonomic manner.
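To make the local detection level concrete, here is a minimal, single-process sketch of a gossip-style heartbeat failure detector. It illustrates only the gossip/merge/suspect mechanics; the clustering and the hierarchical aggregation between clusters described in the paper are omitted, and the timeout is an illustrative assumption.

```python
# Sketch of gossip-based heartbeat failure detection (local level only).
import random
import time

class GossipDetector:
    def __init__(self, node_id, peers, fail_timeout=3.0):
        self.node_id = node_id
        self.peers = peers                       # other detector instances
        self.fail_timeout = fail_timeout
        # heartbeat table: node -> (counter, local time of last increase)
        self.table = {node_id: (0, time.monotonic())}

    def tick(self):
        """Increase own heartbeat and gossip the table to one random peer."""
        counter, _ = self.table[self.node_id]
        self.table[self.node_id] = (counter + 1, time.monotonic())
        if self.peers:
            random.choice(self.peers).merge(self.table)

    def merge(self, remote_table):
        """Adopt any entry whose heartbeat counter is newer than ours."""
        now = time.monotonic()
        for node, (counter, _) in remote_table.items():
            local = self.table.get(node, (-1, now))
            if counter > local[0]:
                self.table[node] = (counter, now)

    def suspected(self):
        """Nodes whose heartbeat has not advanced within the timeout."""
        now = time.monotonic()
        return [n for n, (_, seen) in self.table.items()
                if n != self.node_id and now - seen > self.fail_timeout]
```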
The paper presents a software application for the modeling, simulation and analysis of urban traffic network behavior in different functioning contexts. A module of control techniques based on different approaches (such as genetic algorithms, fuzzy logic and neural networks) allows the platform to provide decision support for context-based traffic control in large urban areas. The application is based on a three-layered control architecture: the basic level monitors and controls junctions; the middle level ensures the smooth functioning of sub-networks by coordinating interconnected junctions; and the top level ensures a globally optimized functioning of the system.
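A minimal sketch of how the three layers could be separated in code follows; class and method names are invented for illustration and the placeholder rules do not reflect the paper's genetic, fuzzy or neural controllers.

```python
# Sketch of the three-layer responsibility split (all names are assumptions).

class JunctionController:
    """Basic level: monitors and controls a single junction."""
    def __init__(self, junction_id):
        self.junction_id = junction_id
        self.green_split = 0.5          # fraction of cycle for the main direction

    def update(self, queue_main, queue_side):
        # toy rule: shift green time toward the direction with the longer queue
        total = queue_main + queue_side
        self.green_split = queue_main / total if total else 0.5

class SubNetworkCoordinator:
    """Middle level: coordinates interconnected junctions of one sub-network."""
    def __init__(self, controllers):
        self.controllers = controllers

    def coordinate(self, offsets):
        # placeholder for green-wave offset coordination between junctions
        return dict(zip((c.junction_id for c in self.controllers), offsets))

class NetworkSupervisor:
    """Top level: steers sub-networks toward globally optimized operation."""
    def __init__(self, coordinators):
        self.coordinators = coordinators

    def optimize(self):
        # placeholder where a genetic / fuzzy / neural strategy would plug in
        for coord in self.coordinators:
            coord.coordinate([0.0] * len(coord.controllers))
```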
Low-Density Parity-Check codes are among the best modern error-correcting codes due to their excellent error-correcting performance and highly parallel decoding scheme. This article concerns a hardware iterative decoder for a subclass of implementation-oriented LDPC codes, also known as Architecture-Aware LDPC. The parameterizable decoder has been designed in the form of a synthesizable VHDL description. Implementation in Xilinx FPGA devices achieves a throughput of nearly 100 Mb/s. A significant part of the decoder area is occupied by the Configurable Interconnection Network, a set of multiplexers that propagate data from memory to the computation units. A behavioral description of the interconnection network gives quite poor synthesis results: the decoder area is large and grows exponentially with the number of inputs/outputs. Instead of a straightforward behavioral description, the switching network can be described structurally, making use of ideas known from the theory of telecommunication interconnection networks: Benes or Banyan switches. In this article I present in detail an interconnection network implementation based on a Banyan switch with an additional multiplexer stage to enable non-power-of-2 numbers of outputs. A comparison of synthesis results for the network obtained from the behavioral description and from the Banyan structural description shows a significant decrease in decoder area in the latter case.
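As a software-level illustration of why a Banyan-class network is cheaper than a full crossbar, the sketch below models bit-controlled self-routing through an omega-type (Banyan-class) shuffle-exchange network with log2(N) stages of 2x2 switches, i.e. O(N log N) multiplexers instead of O(N^2). It is not a model of the VHDL decoder or of the extra multiplexer stage for non-power-of-2 outputs.

```python
# Sketch: self-routing through an omega-type (Banyan-class) network.

def banyan_route(source, destination, n_ports):
    """Return the (stage, switch, output) decisions for one self-routing path.
    n_ports must be a power of two; each stage consumes one destination bit."""
    assert n_ports & (n_ports - 1) == 0, "model needs a power-of-2 port count"
    stages = n_ports.bit_length() - 1            # log2(n_ports) stages of 2x2 switches
    path, current = [], source
    for stage in range(stages):
        # the destination bit for this stage selects the upper/lower switch output
        bit = (destination >> (stages - 1 - stage)) & 1
        switch = current // 2
        path.append((stage, switch, bit))
        # shuffle-exchange wiring to the next stage: rotate in the selected bit
        current = ((current << 1) | bit) & (n_ports - 1)
    return path

# 8-port example: print(banyan_route(source=3, destination=6, n_ports=8))
```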
Peer-to-Peer (P2P) networks are largely used for file sharing and hence must provide efficient mechanisms for searching the files stored at various nodes. The existing structured P2P overlays support only "exact-match" look-up, which is hardly sufficient in a file-sharing network. This paper addresses the problem of keyword-based search in structured P2P networks. We propose a new keyword-based searching algorithm which can be implemented on top of any structured P2P overlay. We demonstrate that the proposed algorithm achieves very good search results, as it requires the minimum number of messages to be sent in order to find all the references to files containing at least the given set of keywords.
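To show the general idea of layering keyword search over an exact-match overlay, here is a generic sketch using an inverted index (keyword to file references) on top of a DHT-style put/get interface, with posting lists intersected at query time. It illustrates only the layering; the specific message-minimizing algorithm proposed in the paper is not reproduced, and all names are assumptions.

```python
# Sketch: keyword search layered on an exact-match DHT-style interface.
import hashlib

class InMemoryDHT:
    """Stand-in for a structured P2P overlay offering exact-match put/get."""
    def __init__(self):
        self.store = {}

    def key(self, text):
        return hashlib.sha1(text.encode()).hexdigest()

    def put(self, keyword, file_ref):
        self.store.setdefault(self.key(keyword), set()).add(file_ref)

    def get(self, keyword):
        return self.store.get(self.key(keyword), set())

def publish(dht, file_ref, keywords):
    """Index a file under each of its keywords."""
    for kw in keywords:
        dht.put(kw, file_ref)

def search(dht, keywords):
    """Return references to files containing at least the given keywords."""
    postings = [dht.get(kw) for kw in keywords]
    return set.intersection(*postings) if postings else set()

# Usage:
# dht = InMemoryDHT()
# publish(dht, "node42:/music/song.mp3", ["rock", "live", "2009"])
# print(search(dht, ["rock", "live"]))
```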
Notice of Violation of IEEE Publication Principles: "Improving Heterogeneous Data Clustering by Using Metadata and Compression Algorithms" by Alexandra Cernian, Dorin Carstoiu, Valentin Sgarciu, in the Proceedings of the 2010 RoEduNet International Conference (RoEduNet), June 2010, pp. 169-173. After careful and considered review of the content and authorship of this paper by a duly constituted expert committee, this paper has been found to be in violation of IEEE's Publication Principles. This paper contains portions of text from the paper cited below. A credit notice is used, but due to the absence of quotation marks or offset text, copied material is not clearly referenced or specifically identified. "Etude des Methodes de Classification par Compression" by Tudor Basarab Ionescu, published as internal report 2005-06-28-DI-FB, http://***/fb/download/Articles/Rapport_***
Nowadays, we have to deal with a large quantity of unstructured, heterogeneous data produced by an increasing number of sources. Clustering heterogeneous data is essential to getting structured information in response to user queries. In this paper, we assess the results of a new clustering technique - clustering by compression - when applied to metadata associated with heterogeneous sets of data. The clustering by compression procedure is based on a parameter-free, universal similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files (singly and in pair-wise concatenation). Experimental results show that using metadata could improve the average clustering performance by about 20% over clustering the same sample data set without metadata.
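The normalized compression distance mentioned above is commonly written as NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C(.) is the length of the compressed input. The sketch below uses zlib as the compressor; the papers do not state which compressor was used, so that choice is an assumption made only for illustration.

```python
# Sketch of the NCD, with zlib standing in for the compressor C.
import zlib

def compressed_size(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two byte strings."""
    cx, cy = compressed_size(x), compressed_size(y)
    cxy = compressed_size(x + y)                 # pair-wise concatenation
    return (cxy - min(cx, cy)) / max(cx, cy)

# Usage: ncd(open("a.txt", "rb").read(), open("b.txt", "rb").read())
```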
ISBN (Print): 9781424469826; 9788988678183
The Semantic Web is a project and vision of the World Wide Web Consortium to extend the current Web so that information is given well-defined meaning and structure, enabling computers and people to work in cooperation. Semantic technologies are being added to enterprise solutions to accommodate new techniques for discovering relationships across different databases, business applications and Web services. In this paper, we present an architectural model for a software tool which combines Semantic Web mechanisms with database metadata and data warehousing mechanisms. If the benefits of the Semantic Web concept are combined with a powerful database server, information management is much improved.
Since data is becoming more and more unstructured, clustering heterogeneous data is essential to getting structured information in response to user queries. In this paper, we test and validate the results of a new clustering technique - clustering by compression - when applied to metadata associated with heterogeneous sets of documents. The clustering by compression procedure is based on a parameter-free, universal similarity distance, the normalized compression distance or NCD, computed from the lengths of compressed data files (singly and in pair-wise concatenation). Experimental results show that using metadata could improve the average clustering performance by about 10% over clustering the same sample data set without metadata.
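Complementing the NCD sketch above, the following self-contained example shows how pairwise NCD values over metadata strings could drive a naive grouping step. The greedy threshold-based clustering and the 0.5 threshold are illustrative assumptions, not the procedure evaluated in the paper.

```python
# Sketch: naive clustering by compression over metadata strings.
import zlib

def ncd(x: bytes, y: bytes) -> float:
    c = lambda b: len(zlib.compress(b, 9))
    return (c(x + y) - min(c(x), c(y))) / max(c(x), c(y))

def cluster_by_compression(items, threshold=0.5):
    """Greedy grouping: an item joins the first cluster whose representative
    is within the NCD threshold, otherwise it starts a new cluster."""
    clusters = []                                 # list of (representative, members)
    for name, data in items:
        for rep, members in clusters:
            if ncd(rep, data) < threshold:
                members.append(name)
                break
        else:
            clusters.append((data, [name]))
    return [members for _, members in clusters]

# Usage with metadata attached to heterogeneous documents:
# docs = [("a.pdf", b"title=Report; author=Smith"),
#         ("b.doc", b"title=Memo; author=Smith")]
# print(cluster_by_compression(docs))
```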
The paper discusses a generic, open architecture for industrial or non-industrial robot controllers, allowing system designers and robot manufacturers to develop rapid-deployment automation solutions for the particular mechanics of robot manipulators. The paper describes a multiple-axis open controller design for a mobile AGV platform carrying a robotic arm, with inclination control providing horizontal alignment in any terrain configuration. Navigation and localization of the mobile robot platform, motion control of the robotic arm, as well as monitoring, learning, program editing, debugging and execution are embedded in a multiprocessor system built around a motion control solution for which a structured programming language was developed. A strongly coupled multi-processor architecture embedding model learning, control and man-machine GUI functionalities is described both as hardware implementation and as basic software system design (RTOS, multitasking and operating modes). Experimental results are reported for the motion control of the 5-d.o.f. robot arm.
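As a rough illustration of the inclination-control idea mentioned above, here is a minimal proportional loop that drives platform roll and pitch toward zero from IMU readings. The gains, interfaces and actuator model are assumptions for illustration and bear no relation to the actual embedded controller described in the paper.

```python
# Sketch: proportional leveling loop for platform roll/pitch (all values assumed).

class InclinationController:
    def __init__(self, kp=1.5, max_rate=0.2):
        self.kp = kp                  # proportional gain [1/s]
        self.max_rate = max_rate      # actuator rate limit [rad/s]

    def correction(self, roll, pitch):
        """Rate commands for the leveling actuators (rad/s), saturated."""
        cmd_roll = max(-self.max_rate, min(self.max_rate, -self.kp * roll))
        cmd_pitch = max(-self.max_rate, min(self.max_rate, -self.kp * pitch))
        return cmd_roll, cmd_pitch

# Usage inside a control loop fed by the IMU:
# ctrl = InclinationController()
# rate_roll, rate_pitch = ctrl.correction(roll=0.05, pitch=-0.02)
```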