The proceedings contain 46 papers. The topics discussed include: Kite: efficient and available release consistency for the datacenter; Oak: a scalable off-heap allocated key-value map; optimizing batched Winograd convolution on GPUs; taming unbalanced training workloads in deep learning with partial collective operations; scalable top-K retrieval with Sparta; waveSZ: a hardware-algorithm co-design of efficient lossy compression for scientific data; scaling concurrent queues by using HTM to profit from failed atomic operations; a wait-free universal construction for large objects; universal wait-free memory reclamation; using sample-based time series data for automated diagnosis of scalability losses in parallel programs; scaling out speculative execution of finite-state machines with parallel merge; and detecting and reproducing error-code propagation bugs in MPI implementations.
ISBN: 9781450368186 (print)
The proceedings contain 13 papers. The topics discussed include: a guided walk into link key candidate extraction with relational concept analysis; reflections on profiling and cataloguing the content of SPARQL endpoints using SPORTAL; reflections on: modeling linked open statistical data; reflections on: DCAT-AP representation of Czech national open data catalog and its impact; reflections on: deep learning for noise-tolerant RDFS reasoning; reflections on: finding melanoma drugs through a probabilistic knowledge graph; reflections on: knowledge graph fact prediction via knowledge-enriched tensor factorization; the semantic sensor network ontology, revamped; and reflections on: KnowMore - knowledge base augmentation with structured web markup.
ISBN: 9798400714436 (print)
Molecular dynamics simulation has emerged as an important area in which HPC+AI helps investigate physical properties, with machine-learning interatomic potentials (MLIPs) at its core. General-purpose machine-learning (ML) tools have been leveraged to build MLIPs, but the two are not a perfect match, since many optimization opportunities in MLIPs are missed by ML tools. This inefficiency arises because HPC+AI applications involve far more computational complexity than pure AI scenarios. This paper develops an MLIP, named TensorMD, independently of any ML tool. TensorMD has been evaluated on two supercomputers and scaled to 51.8 billion atoms, roughly 3x more than the state of the art.
The proceedings contain 25 papers. Topics discussed include data and task parallelism, irregular applications, coherence protocols, shared memory, and compiler and performance issues.
ISBN: 9781450368186 (print)
We introduce a structure-aware parallel technique for context-bounded analysis of concurrent programs. The key intuition is to decompose the set of concurrent traces into symbolic subsets that are separately explored by multiple instances of the same decision procedure running in parallel. The decision procedures work on different partitions of the search space without cooperating, so distribution follows effortlessly. Our experiments on a selection of complex multi-threaded programs show significant analysis speedups and scalability, and greater performance gains than with general-purpose parallel solvers.
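A minimal sketch of the partition-and-solve idea described above (hypothetical names and a toy decision procedure, not the paper's tool): the bounded set of candidate schedules is split into disjoint partitions, and independent instances of the same sequential procedure explore their partitions in parallel without communicating.

```python
# Hypothetical sketch of partitioned, non-cooperating parallel search.
# Each worker receives a disjoint partition of the bounded search space
# (modeled here as context-switch schedules) and runs the same
# sequential decision procedure on its share.
from multiprocessing import Pool
from itertools import permutations

def decision_procedure(schedule):
    # Placeholder for a real bounded analysis query on one symbolic
    # subset; returns a counterexample or None.
    return schedule if sum(schedule) % 7 == 0 else None

def explore_partition(schedules):
    for s in schedules:
        cex = decision_procedure(s)
        if cex is not None:
            return cex          # report the first violation found
    return None

def parallel_analysis(threads, context_bound, workers=4):
    # Enumerate bounded schedules, split them round-robin into disjoint
    # partitions; workers never need to cooperate or share state.
    schedules = list(permutations(range(threads), context_bound))
    parts = [schedules[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        results = pool.map(explore_partition, parts)
    return next((r for r in results if r is not None), None)

if __name__ == "__main__":
    print(parallel_analysis(threads=5, context_bound=3))
```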
ISBN: 9781450368186 (print)
Data movement has a significant impact on program performance. For multithreaded programs, this impact is amplified, since different threads often interfere with each other by competing for shared cache space. However, recent de facto locality metrics either consider sequential execution only or derive locality for multithreaded programs in an inefficient way, i.e., by exhaustive simulation. This paper presents PLUM, a compiler solution for timescale locality analysis of parallel programs. Experiments demonstrate that the prediction accuracy is 93.97% on average. PLUM is the first tool that analyzes data locality for parallel programs at compile time; in addition, it provides an approach for efficiently studying representative interleaving patterns of parallel executions.
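For context, one common trace-based locality metric is reuse distance, which exhaustive-simulation approaches compute over a concrete interleaving of per-thread access streams. The sketch below (hypothetical inputs, not PLUM's compile-time analysis) shows the quantity such simulation measures and why enumerating interleavings is costly.

```python
# Sketch of the reuse-distance metric computed by trace-based tools over
# one interleaving of per-thread access streams. PLUM predicts locality
# at compile time instead; this only illustrates the measured quantity.
def reuse_distances(trace):
    """Reuse distance = number of distinct addresses touched since the
    previous access to the same address (inf for first accesses)."""
    last_pos, distances = {}, []
    for i, addr in enumerate(trace):
        if addr in last_pos:
            distances.append(len(set(trace[last_pos[addr] + 1:i])))
        else:
            distances.append(float("inf"))
        last_pos[addr] = i
    return distances

def interleave(*streams):
    # Round-robin interleaving of per-thread access streams; real
    # executions interleave nondeterministically, which is what makes
    # exhaustive simulation expensive.
    out, idx = [], 0
    while any(idx < len(s) for s in streams):
        for s in streams:
            if idx < len(s):
                out.append(s[idx])
        idx += 1
    return out

t0 = ["A", "B", "A", "C"]        # thread 0's accesses (hypothetical)
t1 = ["X", "A", "X", "B"]        # thread 1's accesses (hypothetical)
print(reuse_distances(interleave(t0, t1)))
```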
ISBN: 9781450368186 (print)
The use of futures can generate arbitrary dependences in the computation, making it difficult to detect races efficiently. Algorithms proposed by prior work to detect races on programs with futures all have to execute the program sequentially. We propose F-Order, the first known parallel race detection algorithm that detects races on programs that use futures. Given a computation with work $T_1$ and span $T_\infty$, our algorithm detects races in time $O((T_1 \lg \hat{k} + k^2)/P + T_\infty(k + \lg r \lg \hat{k}))$ on $P$ processors, where $k$ is the number of future operations, $r$ is the maximum number of readers per memory location, and $\hat{k}$ is the maximum number of future operations done by a single future task, which is typically small. We have also implemented a prototype system based on the proposed algorithm and empirically demonstrated its practical efficiency and scalability.
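As a point of reference only (this is a naive, sequential, quadratic check, not the F-Order algorithm), two accesses to the same location race when at least one is a write and neither task logically precedes the other in the computation DAG induced by spawn, join, and future edges; the example below uses hypothetical task and location names.

```python
# Naive reference race check (not F-Order): same location, at least one
# write, and neither access ordered before the other in the task DAG.
from collections import defaultdict

def reachable(dag, src, dst):
    # Depth-first reachability over the computation DAG.
    stack, seen = [src], set()
    while stack:
        n = stack.pop()
        if n == dst:
            return True
        if n not in seen:
            seen.add(n)
            stack.extend(dag.get(n, ()))
    return False

def find_races(dag, accesses):
    """accesses: list of (task, location, is_write) tuples."""
    by_loc = defaultdict(list)
    for acc in accesses:
        by_loc[acc[1]].append(acc)
    races = []
    for loc, accs in by_loc.items():
        for i in range(len(accs)):
            for j in range(i + 1, len(accs)):
                (t1, _, w1), (t2, _, w2) = accs[i], accs[j]
                ordered = reachable(dag, t1, t2) or reachable(dag, t2, t1)
                if (w1 or w2) and not ordered:
                    races.append((loc, t1, t2))
    return races

# Hypothetical computation: "f" is a future task whose write to x is
# unordered with the read of x in "g", so the check reports a race.
dag = {"main": ["f", "g"], "f": ["join"], "g": ["join"]}
accesses = [("f", "x", True), ("g", "x", False)]
print(find_races(dag, accesses))   # -> [('x', 'f', 'g')]
```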
ISBN: 9781450368186 (print)
Tensor computations present significant performance challenges that impact a wide spectrum of applications. Efforts to improve the performance of tensor computations include exploring data layouts, execution scheduling, and parallelism in common tensor kernels. This work presents a benchmark suite for arbitrary-order sparse tensor kernels using state-of-the-art tensor formats: coordinate (COO) and hierarchical coordinate (HiCOO). It demonstrates a set of reference tensor kernel implementations and some observations on Intel CPUs and NVIDIA GPUs. The full paper is available at http://***/abs/2001.00660.
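As a rough illustration of the baseline format (a generic sketch with made-up data, not the suite's reference code), COO stores one index tuple per nonzero, and a simple kernel such as tensor-times-vector streams over those nonzeros.

```python
# Generic sketch of the COO sparse tensor format and a simple
# tensor-times-vector (TTV) kernel over it; not the benchmark suite's
# reference implementation.
import numpy as np

class CooTensor:
    def __init__(self, shape, indices, values):
        self.shape = shape                  # e.g. (I, J, K)
        self.indices = np.asarray(indices)  # nnz x order index tuples
        self.values = np.asarray(values)    # nnz nonzero values

    def ttv(self, vec, mode):
        """Contract one mode with a dense vector; the result has order
        one lower and is accumulated densely here for simplicity."""
        out_shape = tuple(s for m, s in enumerate(self.shape) if m != mode)
        out = np.zeros(out_shape)
        for idx, val in zip(self.indices, self.values):
            out_idx = tuple(i for m, i in enumerate(idx) if m != mode)
            out[out_idx] += val * vec[idx[mode]]
        return out

# A 2 x 3 x 4 tensor with three nonzeros, contracted along mode 2.
t = CooTensor((2, 3, 4),
              indices=[(0, 0, 1), (0, 2, 3), (1, 1, 0)],
              values=[1.0, 2.0, 3.0])
print(t.ttv(np.arange(4, dtype=float), mode=2))
```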
ISBN: 9781450368186 (print)
We present XIndex, a concurrent ordered index designed for fast queries. Similar to a recent proposal of the learned index, XIndex uses learned models to optimize index efficiency. Compared with the learned index, XIndex can effectively handle concurrent writes without affecting query performance by leveraging fine-grained synchronization and a new compaction scheme, Two-Phase Compaction. Furthermore, XIndex adapts its structure according to runtime workload characteristics to support dynamic workloads. We demonstrate the advantages of XIndex with both YCSB and TPC-C (KV), a TPC-C variant for key-value stores. XIndex achieves up to 3.2x and 4.4x performance improvement compared with Masstree and Wormhole, respectively, on a 24-core machine, and it is open-sourced.
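The learned-index idea that XIndex builds on can be sketched as follows (a simplified, single-threaded illustration with hypothetical names; XIndex's fine-grained synchronization and Two-Phase Compaction are not shown): a model predicts a key's position in a sorted array, and an error-bounded local search corrects the prediction.

```python
# Simplified, single-threaded learned-index lookup: a linear model
# predicts where a key sits in a sorted array, and a bounded binary
# search around the prediction corrects the model's error.
import bisect

class LearnedIndex:
    def __init__(self, keys, values):
        self.keys, self.values = list(keys), list(values)
        n = len(self.keys)
        # Fit position ~ slope * key + intercept by least squares.
        xs, ys = self.keys, range(n)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        self.slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
        self.intercept = my - self.slope * mx
        # The maximum prediction error bounds the local search window.
        self.err = max(abs(self._predict(k) - i) for i, k in enumerate(xs))

    def _predict(self, key):
        return int(self.slope * key + self.intercept)

    def get(self, key):
        pos = self._predict(key)
        lo = max(0, pos - self.err)
        hi = min(len(self.keys), pos + self.err + 1)
        i = bisect.bisect_left(self.keys, key, lo, hi)
        if i < len(self.keys) and self.keys[i] == key:
            return self.values[i]
        return None

idx = LearnedIndex(keys=[2, 5, 8, 13, 21, 34],
                   values=["a", "b", "c", "d", "e", "f"])
print(idx.get(13), idx.get(4))   # -> d None
```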