Persistent data structures, such as key-value stores (KVSs), stored in a high-performance computing (HPC) system's nonvolatile memory have recently emerged as an attractive solution to a number of challenges, such as limited I/O performance. Data compression and encryption are two well-known techniques for improving several properties of such data-oriented systems. This article investigates how to efficiently integrate data compression and encryption into persistent KVSs for HPC, with the ultimate goal of hiding their costs and complexity in terms of performance and ease of use. Our compression technique exploits the deep memory hierarchy in an HPC system to achieve both storage reduction and performance improvement. Our encryption technique provides a practical level of security and enables secure sharing of sensitive data in complex scientific workflows at nearly imperceptible cost. We implement the proposed techniques on top of a distributed embedded KVS to evaluate the benefits and costs of incorporating these capabilities at different points along the dataflow path, illustrating differences in effective bandwidth, latency, and additional computational expense on the Swiss National Supercomputing Centre's Grand Tave and the National Energy Research Scientific Computing Center's Cori.
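The order of the two transformations matters: compressing before encrypting preserves compressibility, since ciphertext is effectively incompressible. The following Python sketch illustrates this wrapping of a KVS put/get path; it is an illustration only, with an in-memory dict standing in for the persistent KVS, and zlib plus Fernet (from the third-party cryptography package) standing in for whatever codecs the actual system employs.

    # Minimal sketch (not the paper's implementation) of compressing and
    # encrypting values on the put path and reversing both on the get path.
    import zlib
    from cryptography.fernet import Fernet

    class SecureCompressedKVS:
        def __init__(self):
            self._store = {}                 # stand-in for the persistent KVS
            self._cipher = Fernet(Fernet.generate_key())

        def put(self, key, value: bytes):
            # Compress first, then encrypt: ciphertext is incompressible,
            # so the reverse order would forfeit the storage reduction.
            self._store[key] = self._cipher.encrypt(zlib.compress(value))

        def get(self, key) -> bytes:
            return zlib.decompress(self._cipher.decrypt(self._store[key]))

    kvs = SecureCompressedKVS()
    kvs.put("snapshot/0", b"aaaa" * 1024)
    assert kvs.get("snapshot/0") == b"aaaa" * 1024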
Dramatic changes in the technology landscape, marked by the increasing scale and pervasiveness of compute and data, have resulted in the proliferation of edge applications aimed at processing data effectively and in a timely manner. As the level and fidelity of instrumentation increase and the types and volumes of available data grow, new classes of applications are being explored that seamlessly combine real-time data with complex models and data analytics to monitor and manage systems of interest. However, these applications require a fluid integration of resources at the edge, at the core, and along the data path to support dynamic and data-driven application workflows; that is, they need to leverage a computing continuum. In this article, we present our vision for enabling such a computing continuum and focus specifically on edge-to-cloud integration in support of data-driven workflows. The research is driven by an online data-driven tsunami warning use case that is supported by the deployment of large-scale national environmental observation systems. This article presents our overall approach as well as the current status and next steps.
Application-level interoperability is defined as the ability of an application to utilize multiple distributed heterogeneous resources. Such interoperability is becoming increasingly important with growing volumes of data, multiple data sources, and diverse resource types. The primary aim of this paper is to understand the different ways in which, and levels at which, application-level interoperability can be provided across distributed infrastructure. Our approach is: (i) given the simplicity of MapReduce, its widespread usage, and its ability to capture the primary challenges of developing distributed applications, use MapReduce as the underlying exemplar, and develop an interoperable implementation of MapReduce using SAGA, an API that supports distributed programming; (ii) using the canonical wordcount application on SAGA-based MapReduce, investigate its scale-out across clusters, clouds, and HPC resources; (iii) establish the execution of the wordcount application using MapReduce and other programming models, such as Sphere, concurrently. SAGA-based MapReduce, in addition to being interoperable across different distributed infrastructures, also provides user-level control over the relative placement of compute and data. We provide performance measurements and analysis of SAGA-MapReduce when using multiple, different, heterogeneous infrastructures concurrently for the same problem instance.
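For readers unfamiliar with the structure being scaled out, the wordcount application in map/reduce form looks roughly as follows. This sketch is plain Python rather than the SAGA API (whose calls are not reproduced here); only the map, shuffle, and reduce phases of the canonical application are shown.

    # Canonical wordcount: map emits (word, 1) pairs, shuffle groups them
    # by key, and reduce sums each group. Each chunk stands in for the
    # input split one worker would process.
    from collections import defaultdict
    from itertools import chain

    def map_phase(chunk: str):
        return [(word, 1) for word in chunk.split()]

    def shuffle(pairs):
        groups = defaultdict(list)
        for word, count in pairs:
            groups[word].append(count)
        return groups

    def reduce_phase(groups):
        return {word: sum(counts) for word, counts in groups.items()}

    chunks = ["to be or not", "not to be"]      # one chunk per worker
    counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, chunks))))
    assert counts["not"] == 2 and counts["to"] == 2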
It is shown that a recursively enumerable class F of total recursive functions is co-learnable in every numbering of F if and only if any two numberings of F are equivalent. This characterization is of interest both for the theory of programming systems and for inductive inference.
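One standard way to write the equivalence condition, with $\psi$ and $\eta$ ranging over the computable numberings of $F$ and $\mathcal{R}$ denoting the class of total recursive functions, is

    \psi \le \eta \iff \exists g \in \mathcal{R}\; \forall i\; (\psi_i = \eta_{g(i)}),
    \qquad
    \psi \equiv \eta \iff \psi \le \eta \wedge \eta \le \psi,

so the theorem asserts that $F$ is co-learnable in every numbering of $F$ if and only if $\psi \equiv \eta$ holds for all numberings $\psi, \eta$ of $F$.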
A list processor, SLP, is presented as a means for manipulating data structures represented as compact lists. The list space is both paged and segmented in order to handle large data structures on a conventional minicomputer. The list processing primitives are provided as procedure calls embedded in a high-level language, and the system includes aids for developing user programs. The implementation for a PDP 11/45 computer is presented along with a discussion of the use and performance of the system in practice.
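The following Python sketch illustrates the general idea behind compact lists in a paged list space (the layout, names, and primitives are illustrative assumptions, not SLP's actual ones): list elements are stored contiguously within fixed-size pages, so no per-cell link fields are needed and sequential access stays within a page.

    # A list is a (page, offset, length) descriptor into a paged list space;
    # elements occupy consecutive cells instead of linked cons cells.
    PAGE_SIZE = 8

    class ListSpace:
        def __init__(self):
            self.pages = []              # each page is a fixed-size cell array

        def make_list(self, elems):
            """Allocate elems contiguously over fresh pages; return a descriptor."""
            page_no = len(self.pages)
            for start in range(0, len(elems), PAGE_SIZE):
                page = elems[start:start + PAGE_SIZE]
                self.pages.append(page + [None] * (PAGE_SIZE - len(page)))
            return (page_no, 0, len(elems))

        def elem(self, ref, i):
            page_no, off, length = ref
            assert 0 <= i < length
            j = off + i
            return self.pages[page_no + j // PAGE_SIZE][j % PAGE_SIZE]

    space = ListSpace()
    ref = space.make_list(list("compact lists"))
    assert space.elem(ref, 0) == "c" and space.elem(ref, 8) == "l"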
As computer technology matures, our growing ability to create large systems is leading to basic changes in the nature of programming. Current programming language concepts will not be adequate for building and maintaining systems of the complexity called for by the tasks we attempt. Just as high level languages enabled the programmer to escape from the intricacies of a machine's order code, higher level programming systems can provide the means to understand and manipulate complex systems and components. In order to develop such systems, we need to shift our attention away from the detailed specification of algorithms, towards the description of the properties of the packages and objects with which we build. This paper analyzes some of the shortcomings of programming languages as they now exist, and lays out some possible directions for future research.
ISBN: (print) 081946211X
This paper presents a new method of finite state machine (FSM) logic synthesis intended for modern FPGAs with embedded memory blocks. Although functional decomposition is recognized as the most efficient synthesis method for digital circuits implemented in FPGAs, none of the known state encoding algorithms is effective for it. This is because traditional methods comprise two steps: internal state encoding and, then, mapping of the encoded state transition table onto the target architecture. In this paper a new method of FSM state encoding is presented. It is an inherent part of the serial decomposition process, and therefore no separate encoding step is required. It is shown that such state encoding guarantees the best solution. The paper presents examples from a standard benchmark set, which confirm that the proposed method reduces the utilization of both logic cells and embedded memory blocks.
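To make the target concrete: implementing an FSM in an embedded memory block amounts to storing the encoded state transition table as a ROM addressed by the concatenated state code and input bits. The Python sketch below shows this mapping for a tiny invented three-state machine; the machine and its encoding are illustrative assumptions, not taken from the paper's benchmarks.

    # Each ROM word packs the next-state code and the output bit; the
    # address is the current state code concatenated with the input bit.
    STATE_BITS, IN_BITS = 2, 1
    # (state, input) -> (next_state, output); a small "101" sequence detector.
    TT = {
        (0, 0): (0, 0), (0, 1): (1, 0),
        (1, 0): (2, 0), (1, 1): (1, 0),
        (2, 0): (0, 0), (2, 1): (1, 1),
    }

    rom = [0] * (1 << (STATE_BITS + IN_BITS))
    for (state, inp), (nxt, out) in TT.items():
        addr = (state << IN_BITS) | inp
        rom[addr] = (nxt << 1) | out         # word = next-state code . output

    def step(state, inp):
        word = rom[(state << IN_BITS) | inp]
        return word >> 1, word & 1           # (next_state, output)

    state, outputs = 0, []
    for bit in [1, 0, 1, 1]:
        state, out = step(state, bit)
        outputs.append(out)
    assert outputs == [0, 0, 1, 0]           # fires on the "101" pattern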
ISBN: (print) 9781538639146
A surprising development in recently announced HPC platforms is the addition of, sometimes massive amounts of, persistent (nonvolatile) memory (NVM) in order to increase memory capacity and compensate for plateauing I/O capabilities. However, there are no portable and scalable programming interfaces that use aggregate NVM effectively. This paper introduces Papyrus, a new software system built to exploit the emerging capability of NVM in HPC architectures. Papyrus (or Parallel Aggregate Persistent-YRU-Storage) is a novel programming system that provides features for scalable, aggregate, persistent memory in an extreme-scale system for typical HPC usage scenarios. Papyrus mainly consists of the Papyrus Virtual File System (VFS) and the Papyrus Template Container Library (TCL). Papyrus VFS provides a uniform aggregate NVM storage image across diverse NVM architectures. It enables Papyrus TCL to provide a portable and scalable high-level container programming interface whose data elements are distributed across multiple NVM nodes without requiring the user to handle complex communication, synchronization, replication, or consistency models. We evaluate Papyrus on two HPC systems, UTK Beacon and NERSC Cori, using real NVM storage devices.
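As a rough illustration of what a TCL-style container buys the user, the Python sketch below hashes each key-value pair to a per-node directory under a shared mount, so every process sees a single aggregate storage image. All names here (NVMMap, the mount path, the file layout) are hypothetical; this is not the Papyrus API.

    # A toy persistent map distributed across per-node directories under a
    # shared mount, mimicking an aggregate NVM storage image.
    import os, pickle, hashlib

    class NVMMap:
        def __init__(self, mount="/tmp/nvm_demo", nodes=4):
            self.mount, self.nodes = mount, nodes
            for n in range(nodes):
                os.makedirs(os.path.join(mount, f"node{n}"), exist_ok=True)

        def _path(self, key):
            # Hash the key to a home node, mimicking element distribution
            # across NVM nodes without user-visible communication.
            node = int(hashlib.sha1(key.encode()).hexdigest(), 16) % self.nodes
            return os.path.join(self.mount, f"node{node}", key)

        def put(self, key, value):
            with open(self._path(key), "wb") as f:
                pickle.dump(value, f)        # persists across restarts

        def get(self, key):
            with open(self._path(key), "rb") as f:
                return pickle.load(f)

    m = NVMMap()
    m.put("mesh_42", {"cells": 1024})
    assert m.get("mesh_42")["cells"] == 1024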
This paper reviews the evolutionary changes that have taken place in the systems approach to macroeconomic modeling of the Indian economy over the last 40 years in general and the last 5 years in particular. Considering that the systems approach has been used both at the official level, for planning purposes, and at the academic level, as input to the planning process, the paper covers both aspects in a thematic manner. The evolution is traced from the initial official efforts, with their emphasis on aggregate consistency models, through the intermediate phases of developing programming and nonoptimizing systems, to the current academic interest in control and chaotic systems, primarily against the backdrop of the changing perspectives of systems theorists and designers.