A novel graph-theoretic model for describing the relation between a decomposed algorithm and its execution in a multiprocessor environment is developed. Called ATAMM, the model consists of a set of Petri-net marked graphs that incorporates the general specifications of a data-flow architecture. The model is useful for representing decision-free algorithms having large-grained, computationally complex primitive operations. Performance measures of computing speed and throughput capacity are defined. The ATAMM model is used to develop analytical lower bounds for these parameters.
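The abstract does not state the bound itself; as an illustration only, the Python sketch below computes the classical marked-graph bound on the iteration period, the maximum over directed cycles of total firing time divided by the tokens on the cycle, which is the style of analytical lower bound such models rely on. The graph, node names, and timings are invented for the example.

```python
from itertools import permutations

# Toy decomposed algorithm: each node is a large-grained primitive operation
# with a firing (execution) time.
firing_time = {"A": 3.0, "B": 5.0, "C": 2.0}

# Directed edges of the marked graph with their initial token counts: (u, v, tokens).
edges = [("A", "B", 0), ("B", "C", 0), ("C", "A", 1), ("B", "A", 1)]

def directed_cycles(nodes, edge_tokens):
    """Enumerate simple directed cycles by brute force (fine for toy graphs)."""
    cycles = []
    for r in range(1, len(nodes) + 1):
        for perm in permutations(nodes, r):
            arcs = list(zip(perm, perm[1:] + perm[:1]))
            if all(a in edge_tokens for a in arcs):
                cycles.append(arcs)
    return cycles

def iteration_period_bound(firing_time, edges):
    edge_tokens = {(u, v): m for u, v, m in edges}
    bound = 0.0
    for arcs in directed_cycles(list(firing_time), edge_tokens):
        work = sum(firing_time[u] for u, _ in arcs)   # computation time around the cycle
        tokens = sum(edge_tokens[a] for a in arcs)    # concurrency available on the cycle
        if tokens > 0:                                # a token-free cycle would deadlock
            bound = max(bound, work / tokens)
    return bound

print("iteration period lower bound:", iteration_period_bound(firing_time, edges))
```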
ISBN (print): 0897912586
The philosophy of composing new software tools from previously created tool fragments can facilitate the development of software systems. An examination is made of the extension of this philosophy to the design of program interpreters, demonstrating how the separation of interpretation into a core algorithm, value-kind definitions, and a computation model allows the capture of conventional execution models, symbolic execution models, dynamic dataflow tracking, and other useful forms of program interpretation. An interpretation system based on this separation, called ARIES, is currently under development.
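ARIES itself is not shown in the abstract; the Python sketch below is only a hedged illustration of the separation it describes, with one core evaluation algorithm parameterised by interchangeable value-kind definitions (concrete versus symbolic). All names and the expression encoding are invented for the example.

```python
class ConcreteDomain:
    """Value-kind definition for conventional execution."""
    def const(self, n): return n
    def add(self, a, b): return a + b
    def mul(self, a, b): return a * b

class SymbolicDomain:
    """Value-kind definition for symbolic execution: build expression strings."""
    def const(self, n): return str(n)
    def add(self, a, b): return f"({a} + {b})"
    def mul(self, a, b): return f"({a} * {b})"

def interpret(expr, env, dom):
    """Core algorithm: one evaluator, any value domain plugged in."""
    op = expr[0]
    if op == "const":
        return dom.const(expr[1])
    if op == "var":
        return env[expr[1]]
    lhs, rhs = interpret(expr[1], env, dom), interpret(expr[2], env, dom)
    return {"add": dom.add, "mul": dom.mul}[op](lhs, rhs)

prog = ("add", ("mul", ("var", "x"), ("const", 2)), ("const", 1))
print(interpret(prog, {"x": 5}, ConcreteDomain()))      # conventional execution -> 11
print(interpret(prog, {"x": "x0"}, SymbolicDomain()))   # symbolic execution -> ((x0 * 2) + 1)
```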
ISBN (print): 9780818623202
An overview is given of the main specification and design issues for parallel systems of programs from a software engineering perspective. A parallel system design approach based on the Large-Grain Dataflow 2 (LGDF2) computation model is outlined. An assessment of LGDF2 as the basis for unified specification, design, and implementation of parallel programs is given, along with a brief assessment of its potential impact on parallel software development and software project management.
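LGDF2's actual notation and semantics are not reproduced in the abstract; the minimal Python sketch below only suggests the large-grain dataflow style it assesses, with coarse program units that fire when their input datapaths hold data. The wiring scheme and unit names are hypothetical.

```python
from collections import deque

def source(_):            # large-grain units: whole program phases, not single operations
    return list(range(5))

def scale(xs):
    return [10 * x for x in xs]

def report(xs):
    print("result:", xs)

# (unit, input datapath, output datapath); None means no connection on that side.
wiring = [(source, None, "raw"), (scale, "raw", "scaled"), (report, "scaled", None)]

def run(wiring):
    paths = {}                               # datapath name -> value, written once per firing
    ready = deque(wiring)
    while ready:
        unit, inp, out = ready.popleft()
        if inp is not None and inp not in paths:
            ready.append((unit, inp, out))   # input not yet available, retry later
            continue
        result = unit(paths.get(inp))
        if out is not None:
            paths[out] = result

run(wiring)
```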
This paper introduces a powerful novel sequencer for controlling computational machines and for structured DMA (direct memory access) applications. It focuses mainly on applications using a 2-dimensional memory organization, from which most of the inherent speed-up is obtained. A classification scheme of computational sequencing patterns and storage schemes is derived. In the context of application-specific computing, the paper illustrates the sequencer's usefulness especially for data sequencing, recalling previously published examples as far as needed for completeness. The paper also discusses how the new sequencer hardware provides a substantial speed-up compared to the use of traditional sequencing hardware.
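The hardware sequencer and its pattern classification are in the paper, not the abstract; as a rough software analogue, the sketch below generates two common 2-dimensional address sequences (a plain row-major video scan and a block scan) of the kind such a data sequencer would produce. The pattern names and tile sizes are illustrative assumptions.

```python
def video_scan(rows, cols):
    """Plain row-major scan of a 2-D memory, yielding linear addresses."""
    for y in range(rows):
        for x in range(cols):
            yield y * cols + x

def block_scan(rows, cols, bh, bw):
    """Visit the array block by block (e.g. tiles for transform coding)."""
    for by in range(0, rows, bh):
        for bx in range(0, cols, bw):
            for y in range(by, min(by + bh, rows)):
                for x in range(bx, min(bx + bw, cols)):
                    yield y * cols + x

print(list(video_scan(2, 4)))        # 0..7 in row-major order
print(list(block_scan(4, 4, 2, 2)))  # 2x2 tiles: 0, 1, 4, 5, 2, 3, 6, 7, ...
```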
Hierarchical Signal Flow Graphs (HSFGs) are used to illustrate the computations and the dataflow required for the block regularised parameter estimation algorithm. This algorithm protects the parameter estimation from numerical difficulties associated with insufficiently exciting data or cases where the behaviour of the underlying model is unknown. HSFGs aid the user's understanding of the algorithm, as they clearly show how it differs from exponentially weighted recursive least squares, and they also allow the user to develop fast, efficient parallel algorithms easily and effectively, as demonstrated.
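The block regularised algorithm and its HSFG decomposition are not given in the abstract; for orientation, the sketch below implements the exponentially weighted recursive least squares baseline the abstract compares against, using the standard update equations on invented data, not the regularised variant itself.

```python
import numpy as np

def ewrls_step(theta, P, phi, y, lam=0.99):
    """One exponentially weighted recursive least squares update."""
    phi = phi.reshape(-1, 1)
    denom = lam + (phi.T @ P @ phi).item()
    k = P @ phi / denom                      # gain vector
    err = y - (phi.T @ theta).item()         # prediction error
    theta = theta + k * err                  # parameter update
    P = (P - k @ phi.T @ P) / lam            # covariance update
    return theta, P

# Identify y = 2*u - 1 from noisy samples.
rng = np.random.default_rng(0)
theta, P = np.zeros((2, 1)), 1e3 * np.eye(2)
for _ in range(200):
    u = rng.normal()
    phi = np.array([u, 1.0])
    y = 2.0 * u - 1.0 + 0.01 * rng.normal()
    theta, P = ewrls_step(theta, P, phi, y)
print(theta.ravel())   # close to [2, -1]
```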
Network accountability and forensic analysis have become increasingly important as a means of performing network diagnostics, identifying malicious nodes, enforcing trust management policies, and imposing diverse billing over the Internet. This has led to a body of work on better network support for accountability and on efficient mechanisms to trace packets and information flows through the Internet. In this paper, we make the following contributions. First, we show that network accountability and forensic analysis can be posed generally as data provenance computations and queries over distributed streams. In particular, one can utilize declarative networks with appropriate security and provenance extensions to provide a unified declarative framework for specifying, analyzing, and auditing networks. Second, we propose a taxonomy of data provenance along multiple axes and show that its categories map naturally to different use cases in networks. Third, we suggest techniques to efficiently compute and store network provenance, and we provide an initial performance evaluation on the P2 declarative networking system with modifications to support authenticated communication and provenance.
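The P2/NDlog machinery with its security and provenance extensions is not shown in the abstract; the toy Python sketch below merely illustrates the idea of derived network facts that carry their provenance, so that a forensic query can recover the supporting base facts. All rule and predicate names are invented.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Fact:
    name: str
    args: tuple
    how: tuple = field(default=(), compare=False)   # (rule, supporting facts)

def base(name, *args):
    return Fact(name, args)

def derive(rule, name, args, *support):
    return Fact(name, tuple(args), (rule, support))

# Link facts observed at routers, then a two-hop reachability derivation.
l1 = base("link", "a", "b")
l2 = base("link", "b", "c")
r1 = derive("r1", "reach", ("a", "b"), l1)
r2 = derive("r2", "reach", ("a", "c"), r1, l2)

def lineage(fact):
    """Forensic query: which base facts (and hence which nodes) support this fact?"""
    if not fact.how:
        return {fact}
    _, support = fact.how
    return set().union(*(lineage(s) for s in support))

print({(f.name, f.args) for f in lineage(r2)})   # {('link', ('a','b')), ('link', ('b','c'))}
```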
This paper presents an overview of VisiTile, a toolkit for developing domain-oriented visual languages. The class of visual languages that can be constructed with VisiTile is briefly described, followed by examples of such languages. An overview of the VisiTile architecture is presented, including discussion of the major components and features of the toolkit. The VisiTile toolkit facilitates the specification and implementation of a hybrid class of visual languages that combine data-flow with grammar-based layout. A two-dimensional layout grammar is used to specify legal constructions of data-flow processors. The language specification is used as the basis for syntax-directed editing and interpretation of visual programs.
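VisiTile's layout grammar formalism is not given in the abstract; the toy check below is a hedged illustration of how a two-dimensional adjacency rule might constrain legal constructions of data-flow tiles. The tile encoding and the rule itself are invented for the example.

```python
# Each tile occupies a grid cell and exposes ports on its sides.
tiles = {
    (0, 0): {"kind": "source", "right": "out"},
    (1, 0): {"kind": "filter", "left": "in", "right": "out"},
    (2, 0): {"kind": "sink",   "left": "in"},
}

def well_formed(tiles):
    """Toy layout rule: every 'out' port must abut an 'in' port of the tile to its right."""
    for (x, y), tile in tiles.items():
        if tile.get("right") == "out":
            neighbour = tiles.get((x + 1, y))
            if neighbour is None or neighbour.get("left") != "in":
                return False
    return True

print(well_formed(tiles))   # True: source -> filter -> sink is a legal layout
```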
We present an innovative methodology aimed at rapidly designing image processing (IP) systems. Within this environment, the first step consists of emulating an IP algorithm on a massively parallel dedicated computer. A compact and functionally equivalent VLSI circuit is then derived using a high-level synthesis system called ALPHA. The whole methodology is presented and illustrated with an IP algorithm that was actually designed with it.
Our research addresses "information appliances" used in modern large-scale distributed systems to: (1) virtualize their dataflows by applying actions such as filtering, format translation, etc., and (2) separate such actions from enterprise applications' business logic, to make it easier for future service-oriented codes to inter-operate in diverse and dynamic environments. Our specific contribution is the enrichment of the runtimes of these appliances with methods for QoS-awareness, thereby giving them the ability to deliver desired levels of QoS even under sudden requirement changes; we call the result IQ-appliances. For experimental evaluation, we prototype an IQ-appliance. Measurements demonstrate the feasibility and utility of the approach.
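The IQ-appliance runtime and its QoS machinery are not detailed in the abstract; the small sketch below only illustrates the separation it mentions, with dataflow actions such as filtering and format translation composed outside the application's business logic. Event fields and thresholds are invented.

```python
import json

def filter_action(events, min_priority):
    """Dataflow action: drop events below a priority threshold."""
    return (e for e in events if e["priority"] >= min_priority)

def translate_action(events):
    """Dataflow action: format translation, dicts -> newline-delimited JSON."""
    return (json.dumps(e, sort_keys=True) for e in events)

def appliance(events, min_priority=2):
    """Compose per-flow actions; the business logic never sees the raw input."""
    return translate_action(filter_action(events, min_priority))

stream = [{"id": 1, "priority": 1}, {"id": 2, "priority": 3}]
for line in appliance(stream):
    print(line)          # only the priority-3 event reaches the application
```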
Sharing resources among various users and the lack of centralized control are two key characteristics of many distributed heterogeneous computing systems. A critical challenge for designing applications in such systems is to coordinate the resources in a decentralized fashion while adapting to changes in the system. In this paper, we consider the computation of a large set of equal-sized independent tasks, which represents the computation paradigm of a variety of large-scale applications such as SETI@home and Monte Carlo simulations. We focus on performance optimization for a decentralized adaptive task allocation protocol. We develop a bandwidth allocation strategy based on our decentralized task allocation algorithm, together with a simple task buffer management policy. Simulation results show that our task allocation protocol achieves close to the optimal system throughput.
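The paper's decentralized protocol and bandwidth allocation strategy are not given in the abstract; the sketch below is a hedged toy of the buffer-threshold task-pull idea that a "simple task buffer management policy" suggests, with all parameters and structure invented for illustration.

```python
from collections import deque

class Worker:
    def __init__(self, speed, low_water=2, batch=4):
        self.speed = speed            # tasks completed per time step
        self.low_water = low_water    # refill threshold
        self.batch = batch            # tasks requested per refill
        self.buffer = deque()
        self.done = 0

    def step(self, server_queue):
        # Pull more work only when the local buffer runs low (a decentralized decision).
        if len(self.buffer) <= self.low_water:
            for _ in range(min(self.batch, len(server_queue))):
                self.buffer.append(server_queue.popleft())
        for _ in range(min(self.speed, len(self.buffer))):
            self.buffer.popleft()
            self.done += 1

server = deque(range(100))            # equal-sized independent tasks
workers = [Worker(speed=1), Worker(speed=3)]
for _ in range(40):
    for w in workers:
        w.step(server)
print([w.done for w in workers])      # the faster worker completes more tasks
```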