Financial systems are nowadays dependent on electronic data processing technology. One of the most frequent causes of failure in these highly sensitive systems is overvoltage, mostly caused by lightning. This paper describes how the lightning electromagnetic pulse intrudes into computer information systems and causes system damage. Damage prevention is also discussed.
This paper describes several new algorithms for computing lower bounds on the length of the schedule and the number of functional units in high-level synthesis.
Modern applications are often defined as sets of several computational tasks. This paper presents a synthesis algorithm for ASIC implementations that realize multiple computational tasks under hard real-time deadlines. The algorithm analyzes constraints imposed by task sharing as well as the traditional datapath synthesis criteria. In particular, we demonstrate an efficient technique to combine rate-monotonic scheduling, a widely used scheduling discipline for hard real-time systems, with estimation, scheduling, and allocation algorithms. Matching the number of bits in tasks assigned to the same processor was the most important factor in obtaining good designs. We have demonstrated the effectiveness of our algorithms on several multiple-task examples.
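Rate-monotonic scheduling, mentioned above, comes with a classical sufficient schedulability test: a set of n independent periodic tasks is guaranteed schedulable under fixed rate-monotonic priorities if total utilization stays below n(2^(1/n) − 1). A minimal sketch of that test (the function name and task format are ours, not the paper's; the paper combines RMS with its own estimation and allocation algorithms):

```python
def rms_schedulable(tasks):
    """Sufficient (not necessary) utilization-bound test for
    rate-monotonic scheduling: n periodic tasks are schedulable
    if total utilization <= n * (2**(1/n) - 1).
    Each task is a (worst-case execution time, period) pair."""
    n = len(tasks)
    utilization = sum(c / p for c, p in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return utilization <= bound

# Three tasks with utilization 0.65, below the n=3 bound of ~0.780
print(rms_schedulable([(1, 4), (1, 5), (2, 10)]))  # True
```

Note the test is only sufficient: a task set that fails the bound may still be schedulable, which an exact response-time analysis would reveal.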
Describes work in progress toward the development of integrated pen-based software systems for processing visual languages (VL). The primary assumption is that graphical input, editing, and VL parsing facilities can be treated as system resources to be shared among many applications. Prototypes of a pen-stroke editor, a VL parser, and a VL-based application are described.
A generic data-flow architecture for mapping large computation problems is designed. The architecture is based on reconfigurable shuffle buses, by which the complexity of interprocessor communications is largely simplified. The issues of representing the computation problems, deriving routing schemes for a generic linear array, and resolving the pipelining of multiple dataflows are addressed. It is shown that the shuffle bus provides a very efficient interconnection network for both data shuffling and I/O interface.
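The shuffle interconnection referred to above is conventionally defined by the perfect-shuffle permutation: processor i's n-bit index is rotated left by one bit. A minimal sketch of that permutation (the function name and parameters are ours; the paper's reconfigurable buses generalize this fixed pattern):

```python
def perfect_shuffle(i, n_bits):
    """Perfect-shuffle permutation on 2**n_bits processors:
    rotate the n-bit index left by one bit, so processor
    b_{n-1} b_{n-2} ... b_0 is connected to b_{n-2} ... b_0 b_{n-1}."""
    msb = (i >> (n_bits - 1)) & 1                     # bit shifted out on the left
    return ((i << 1) & ((1 << n_bits) - 1)) | msb     # reinsert it on the right

# 8 processors (3-bit indices)
print([perfect_shuffle(i, 3) for i in range(8)])  # [0, 2, 4, 6, 1, 3, 5, 7]
```

Applying the permutation n_bits times returns every index to itself, which is why log-many shuffle stages suffice to route arbitrary data rearrangements in shuffle-exchange networks.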
ISBN (print): 9780769530475
We have developed Argus, a novel approach for providing low-cost, comprehensive error detection for simple cores. The key to Argus is that the operation of a von Neumann core consists of four fundamental tasks - control flow, dataflow, computation, and memory access - that can be checked separately. We prove that Argus can detect any error by observing whether any of these tasks are performed incorrectly. We describe a prototype implementation, Argus-1, based on a single-issue, 4-stage, in-order processor to illustrate the potential of our approach. Experiments show that Argus-1 detects transient and permanent errors in simple cores with much lower impact on performance (<4% average overhead) and chip area (<17% overhead) than previous techniques.
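Of the four tasks Argus checks, computation is classically verified with low-cost arithmetic codes such as modular residue checking. The sketch below illustrates that standard technique only; Argus-1's actual checkers are described in the paper and need not match this scheme:

```python
def checked_add(a, b, modulus=3):
    """Residue-checked addition: recompute the sum modulo a small
    modulus on a cheap shadow path and compare it with the residue
    of the full-width result. A fault corrupting the result escapes
    detection only if the error is a multiple of the modulus."""
    result = a + b                                     # main (possibly faulty) datapath
    residue = (a % modulus + b % modulus) % modulus    # low-cost shadow computation
    if result % modulus != residue:
        raise RuntimeError("computation error detected")
    return result

print(checked_add(123456, 789))  # 124245 (check passes)
```

The appeal for simple cores is that the shadow datapath is only a few bits wide, which is how such checkers keep area overhead far below full duplication.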
The authors describe the STILE (Structure Interconnection Language and Environment) general-purpose computer-based graphical design and development system for describing logical relationships among components of systems. The syntax of the graphs produced using STILE is separate from the semantics, which are supplied by a postprocessor. A major advantage of this approach is the ability to address different models of computation, especially different concurrency models, using the same graphical environment. The authors show how to create primary building-block parts for analog computing and data-flow computing from parts defined in a different model of computing. Composing parts in this fashion eliminates the need to build postprocessors for these additional models of computation, thereby overcoming one of the major problems introduced by separating graphical syntax from semantics.
The n-dimensional twisted cube, denoted TQ_n, a variation of the hypercube, possesses some properties superior to those of the hypercube. In this paper, assuming that each vertex is incident with at least two fault-free links, we show that TQ_n can tolerate up to 2n - 5 edge faults while retaining a fault-free Hamiltonian cycle. The result is optimal with respect to the number of edge faults tolerated.
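For the ordinary hypercube Q_n, on which the twisted cube is based, a fault-free Hamiltonian cycle is given directly by the reflected Gray code. A minimal sketch of that baseline construction (this is the plain hypercube only; TQ_n redirects some edges, and the paper's fault-tolerance argument goes well beyond this fault-free case):

```python
def gray_cycle(n):
    """Hamiltonian cycle of the ordinary n-cube Q_n via the reflected
    Gray code: consecutive labels (and the last/first pair) differ in
    exactly one bit, i.e., each step traverses a hypercube edge."""
    return [i ^ (i >> 1) for i in range(2 ** n)]

cycle = gray_cycle(3)
print(cycle)  # [0, 1, 3, 2, 6, 7, 5, 4]
# every cyclically consecutive pair differs in exactly one bit
assert all(bin(cycle[i] ^ cycle[(i + 1) % 8]).count("1") == 1 for i in range(8))
```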
In distributed computer environments and parallel applications, many factors determine system performance. The main problem in increasing the performance of such systems is the impact of synchronization on processor utilization. Synchronization overhead is caused mainly by intrinsic properties of the application (inherent parallelism, data and control dependencies, etc.) and by architectural and organizational features of the system. Experiments are discussed which concern the profiling of a distributed system based on the client model. The basic goals are to compare the simulation results, such as the parallelism profile, synchronization profile, and speed-up profile, produced by a profiler tool with the corresponding results produced by executing the prototype implementation. Both kinds of results are presented, and the limitations and advantages of the simulation approach are discussed.
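A parallelism profile of the kind compared above yields an idealized speed-up prediction: each profile segment contributes work proportional to its duration times its degree of parallelism, executed at a rate capped by the processor count. A minimal sketch under our own assumptions (no synchronization or communication overhead, which is precisely what the prototype measurements would add back in; the function name and profile format are hypothetical):

```python
def speedup(profile, p):
    """Ideal speed-up on p processors predicted from a parallelism
    profile: a list of (duration, degree-of-parallelism) segments.
    Each segment holds duration*degree units of work, executed at
    rate min(degree, p)."""
    serial = sum(t * d for t, d in profile)                # T_1: one processor
    parallel = sum(t * d / min(d, p) for t, d in profile)  # T_p: p processors, no overhead
    return serial / parallel

# 2 time units serial, 4 units at parallelism 8, 1 unit at parallelism 2
print(round(speedup([(2, 1), (4, 8), (1, 2)], 4), 2))  # 3.27
```

Comparing this idealized curve against the measured speed-up of the prototype isolates how much performance is lost to synchronization.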
Noninterference requires that there is no information flow from sensitive to public data in a given system. However, many systems perform intentional release of sensitive information as part of their correct functioning and therefore violate noninterference. To control information flow while permitting intentional information release, some systems have a downgrading or declassification mechanism. A major danger of such a mechanism is that it may cause unintentional information release. This paper shows that a robustness property can be used to characterize programs in which declassification mechanisms cannot be exploited by attackers to release more information than intended. It describes a simple way to provably enforce this robustness property through a type-based compile-time program analysis. The paper also presents a generalization of robustness that supports upgrading (endorsing) data integrity.
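The distinction between forbidden flows and intentional declassification can be made concrete with labeled values. The paper enforces this statically with a type-based compile-time analysis; the sketch below is only a dynamic illustration with hypothetical names, and it does not capture robustness itself, which additionally demands that attacker-controlled data cannot influence what gets declassified:

```python
class Labeled:
    """A value tagged with a confidentiality label. 'secret' data may
    reach public output only through an explicit declassify step."""
    def __init__(self, value, label):
        self.value, self.label = value, label

def declassify(x):
    # Intentional, auditable release point for sensitive information.
    return Labeled(x.value, "public")

def public_output(x):
    # Releasing a still-secret value directly would violate noninterference.
    if x.label != "public":
        raise RuntimeError("information-flow violation")
    return x.value

pwd_ok = Labeled(True, "secret")          # e.g., result of a password check
print(public_output(declassify(pwd_ok)))  # True: intentional release
```

In the static setting, the analogue of the runtime check is a typing rule: an expression's label must flow to the label of the channel it is written to, with `declassify` as the sole labeled exception.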