ISBN (Print): 9781931971362
The proceedings contain 27 papers. The topics discussed include: tiny-tail flash: near-perfect elimination of garbage collection tail latencies in NAND SSDs; the logic of physical garbage collection in deduplicating storage; file systems fated for senescence? nonsense, says science!; to FUSE or not to FUSE: performance of user-space file systems; knockoff: cheap versions in the cloud; evolving EXT4 for shingled disks; SMaRT: an approach to shingled magnetic recording translation; and facilitating magnetic recording technology scaling for data center hard disk drives through filesystem-level transparent local erasure coding.
ISBN (Print): 9781939133267
The proceedings contain 28 papers. The topics discussed include: NyxCache: flexible and efficient multi-tenant persistent memory caching; HTMFS: strong consistency comes for free with hardware transactional memory in persistent memory file systems; ctFS: replacing file indexing with hardware memory translation through contiguous file allocation for persistent memory; FORD: fast one-sided RDMA-based distributed transactions for disaggregated persistent memory; closing the B+-tree vs. LSM-tree write amplification gap on modern storage hardware with built-in transparent compression; TVStore: automatically bounding time series storage via time-varying compression; removing double-logging with passive data persistence in LSM-tree based relational databases; hardware/software co-programmable framework for computational SSDs to accelerate deep learning service on large-scale graphs; and Aurogon: taming aborts in all phases for distributed in-memory transactions.
ISBN (Print): 9781939133205
The proceedings contain 28 papers. The topics discussed include: ROART: range-query optimized persistent ART; SpanDB: a fast, cost-effective LSM-tree based KV store on hybrid storage; evolution of development priorities in key-value stores serving large-scale applications: the RocksDB experience; high velocity kernel file systems with Bento; scalable persistent memory file system with kernel-userspace collaboration; rethinking file mapping for persistent memory; pattern-guided file compression with user-experience enhancement for log-structured file system on mobile devices; ArchTM: architecture-aware, high performance transaction for persistent memory; the dilemma between deduplication and locality: can both be achieved?; and remap-SSD: safely and efficiently exploiting SSD address remapping to eliminate duplicate writes.
ISBN (Print): 9781939133120
The proceedings contain 23 papers. The topics discussed include: lock-free collaboration support for cloud storage services with operation inference and transformation; Carver: finding important parameters for storage system tuning; uncovering access, reuse, and sharing characteristics of I/O-intensive files on large-scale production HPC systems; scalable parallel flash firmware for many-core architectures; an empirical guide to the behavior and use of scalable persistent memory; DC-store: eliminating noisy neighbor containers using deterministic I/O performance and resource isolation; and InfiniCache: exploiting ephemeral serverless functions to build a cost-effective memory cache.
ISBN (Print): 9798400702242
Block-layer caching systems improve I/O performance by using hybrid storage devices; the advent of fast, byte-addressable storage enables caching systems to further leverage new storage tiers (e.g., with persistent memory as the cache device and an SSD as the backend device) to achieve better caching performance. However, the new storage devices also challenge the design and implementation of existing block-based caching systems. This paper conducts a comprehensive performance study of a popular caching system, Open CAS, and identifies new, previously unrevealed software bottlenecks. Our observations and root-cause analysis cast light on optimizing the software stack of caching systems to incorporate emerging storage technologies.
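As context for the tiered architecture described above, the following is a minimal sketch of a write-through block cache with a fast tier (standing in for the persistent-memory cache device) in front of a slower backend (standing in for the SSD). The class name, the LRU policy, and the write-through choice are illustrative assumptions, not Open CAS's implementation.

```python
from collections import OrderedDict

class BlockCache:
    """LRU write-through block cache; the dict stands in for the PM cache tier."""

    def __init__(self, backend, capacity_blocks: int):
        self.cache = OrderedDict()          # block number -> data, in LRU order
        self.backend = backend              # stand-in for the SSD backend device
        self.capacity = capacity_blocks

    def read(self, blkno: int) -> bytes:
        if blkno in self.cache:             # hit: served from the fast tier
            self.cache.move_to_end(blkno)
            return self.cache[blkno]
        data = self.backend.read(blkno)     # miss: fetch from the backend
        self._insert(blkno, data)
        return data

    def write(self, blkno: int, data: bytes) -> None:
        self.backend.write(blkno, data)     # write-through keeps the backend current
        self._insert(blkno, data)

    def _insert(self, blkno: int, data: bytes) -> None:
        self.cache[blkno] = data
        self.cache.move_to_end(blkno)
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least-recently-used block
```

A production block-layer cache such as Open CAS layers additional machinery (multiple cache modes, dirty-data tracking, metadata persistence) on top of this basic lookup path, which is where the software bottlenecks studied in the paper arise.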
The paper titled "Avoiding the Disk Bottleneck in the Data Domain Deduplication File System" [3] describes several fundamental ideas behind the file system that drives Data Domain's deduplication storage pr...
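For readers new to deduplicating storage, the sketch below illustrates only the generic fingerprint-index idea that such systems build on. It is a deliberately simplified assumption (fixed-size segments, an in-memory index) and does not reproduce the paper's actual techniques, such as its summary vector or locality-preserved caching.

```python
import hashlib

SEGMENT = 8 * 1024                           # illustrative fixed segment size

def dedup_write(stream: bytes, store: dict) -> list:
    """Split a stream into segments, store each unique segment once, and
    return the recipe (list of fingerprints) needed to rebuild the stream."""
    recipe = []
    for off in range(0, len(stream), SEGMENT):
        seg = stream[off:off + SEGMENT]
        fp = hashlib.sha256(seg).digest()    # fingerprint identifies the content
        if fp not in store:                  # new content: store it exactly once
            store[fp] = seg
        recipe.append(fp)                    # duplicates add only a reference
    return recipe

def rebuild(recipe: list, store: dict) -> bytes:
    return b"".join(store[fp] for fp in recipe)

# Writing the same data twice stores its segments only once.
store = {}
r1 = dedup_write(b"A" * 32768, store)
r2 = dedup_write(b"A" * 32768, store)
assert rebuild(r1, store) == rebuild(r2, store) and len(store) == 1
```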
ISBN (Print): 9798400702242
The performance of recent data storage devices has significantly improved over previous generations, with lower latency, higher throughput, and greater parallelism. Since we now have Ultra-Low Latency (ULL) data storage devices capable of providing data in less than 10 microseconds, in this paper we question the need for IO schedulers for better performance and energy efficiency. Specifically, we measure the latency costs of Linux IO scheduling algorithms and investigate their impact on overall performance and energy efficiency using a ULL storage device, a power meter, and various IO workloads. Our observations indicate that IO schedulers for ULL storage either do not help or significantly increase request latencies, while also negatively impacting throughput and energy efficiency. Although we recognize the value of IO schedulers for slower devices or for other metrics such as fairness and QoS, we believe that IO schedulers have become unnecessary for improving the performance or energy efficiency of ULL devices.
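To make the measurement setup concrete, here is a rough sketch, under assumed parameters (device name, sample count, block size), of how one might switch the Linux block-layer scheduler through sysfs and time direct random reads on an NVMe device. It is not the authors' harness, requires root, and omits the power-meter side of the study entirely.

```python
import mmap, os, time

DEV = "nvme0n1"                                   # assumed device name
SCHED_PATH = f"/sys/block/{DEV}/queue/scheduler"  # standard sysfs knob
BLOCK = 4096

def set_scheduler(name: str) -> None:
    # Requires root. Reading this file lists the valid names,
    # e.g. "[none] mq-deadline kyber bfq".
    with open(SCHED_PATH, "w") as f:
        f.write(name)

def avg_read_latency_us(path: str, samples: int = 1000) -> float:
    buf = mmap.mmap(-1, BLOCK)                    # page-aligned buffer for O_DIRECT
    fd = os.open(path, os.O_RDONLY | os.O_DIRECT)
    n_blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK
    offsets = [((i * 7919) % (n_blocks - 1)) * BLOCK for i in range(samples)]
    start = time.perf_counter()
    for off in offsets:                           # aligned pseudo-random reads
        os.preadv(fd, [buf], off)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed / samples * 1e6                # average microseconds per read

if __name__ == "__main__":
    path = f"/dev/{DEV}"
    for sched in ("none", "mq-deadline", "kyber", "bfq"):
        set_scheduler(sched)
        print(f"{sched:12s} {avg_read_latency_us(path):7.1f} us/read")
```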
ISBN (Print): 9798400702242
The proceedings contain 17 papers. The topics discussed include: neural cloud storage: innovative cloud storage solution for cold video; SAND: a storage abstraction for video-based deep learning; P2Cache: an application-directed page cache for improving performance of data-intensive applications; when caching systems meet emerging storage devices: a case study; do we still need IO schedulers for low-latency disks?; deep note: can acoustic interference damage the availability of hard disk storage in underwater data centers?; energy implications of IO interface design choices; excessive SSD-internal parallelism considered harmful; hide-and-seek: hiding secrets in threshold voltage distributions of NAND flash memory cells; and a free-space adaptive runtime zone-reset algorithm for enhanced ZNS efficiency.
ISBN (Print): 9798400702242
Cloud storage providers offer different pricing tiers based on the access frequency of stored data. This pricing plan offers cost benefits for videos that are accessed less than once per month. However, the stringent access requirement falls short of addressing the large number of "cold" videos stored today. This paper proposes Neural Cloud Storage (NCS), a pioneering approach that addresses the problem by applying neural enhancement, specifically content-aware super-resolution (SR). According to our preliminary cost-benefit analysis, NCS can save a further 14% in annual total cost of ownership (TCO) compared to the cheapest AWS storage service for cold video. By reducing the cost, it expands the share of cold videos that can benefit from the multi-tiered service from 25% to 38%. As deep learning and computational resources continue to advance, we believe that neural enhancement will revolutionize the field of cloud storage.
ISBN (Print): 9798400702242
File systems need testing to discover bugs and to help ensure reliability. Many file system testing tools are evaluated based on their code coverage. We analyzed recently reported bugs in Ext4 and Btrfs and found a weak correlation between code coverage and test effectiveness: many bugs are missed because they depend on specific inputs, even though the code was covered by a test suite. Our position is that coverage of system call inputs and outputs is critically important for testing file systems. We thus suggest input and output coverage as criteria for file system testing and show how they can improve testing effectiveness. We built a prototype called IOCov to evaluate the input and output coverage of file system testing tools. IOCov identified many untested cases (specific inputs and outputs, or ranges thereof) for both CrashMonkey and xfstests. Additionally, we discuss a method and associated metrics to identify over- and under-testing using IOCov.
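As one concrete illustration of what input coverage can mean, the hypothetical sketch below tracks which open(2) flag combinations a test suite exercised and reports the untested ones. The flag universe and the reporting format are assumptions made for illustration; they do not reflect IOCov's actual design.

```python
import itertools, os

# A small universe of open() flags to track; real tools cover many more inputs.
FLAGS = {"O_RDONLY": os.O_RDONLY, "O_WRONLY": os.O_WRONLY, "O_RDWR": os.O_RDWR,
         "O_CREAT": os.O_CREAT, "O_TRUNC": os.O_TRUNC, "O_APPEND": os.O_APPEND}
ACCESS_MODES = ["O_RDONLY", "O_WRONLY", "O_RDWR"]
MODIFIERS = ["O_CREAT", "O_TRUNC", "O_APPEND"]

def all_flag_combos():
    """Every access mode combined with every subset of the modifier flags."""
    for mode in ACCESS_MODES:
        for r in range(len(MODIFIERS) + 1):
            for extra in itertools.combinations(MODIFIERS, r):
                yield frozenset((mode, *extra))

def combo_from_flags(flags: int) -> frozenset:
    """Decode a numeric flag word (e.g., from a syscall trace) into flag names."""
    mode = flags & os.O_ACCMODE
    names = [n for n in ACCESS_MODES if FLAGS[n] == mode]
    names += [n for n in MODIFIERS if flags & FLAGS[n]]
    return frozenset(names)

def report(observed_flag_words) -> None:
    seen = {combo_from_flags(f) for f in observed_flag_words}
    universe = set(all_flag_combos())
    print(f"input coverage: {len(seen & universe)}/{len(universe)} "
          "open() flag combinations")
    for missing in sorted(universe - seen, key=sorted):
        print("  untested:", "|".join(sorted(missing)))

# Flag words as a tracer might have recorded them for two test cases.
report([os.O_RDONLY, os.O_WRONLY | os.O_CREAT | os.O_TRUNC])
```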