Resistive random access memory (ReRAM) is a promising emerging non-volatile memory (NVM) technology that shows high potential for both data storage and computing. However, its crossbar array architecture leads to the sneak path (SP) problem, which may severely degrade the data storage reliability of ReRAM. Due to the complicated nature of the SP-induced interference (SPI) [1], it is difficult to derive an accurate channel model for it. The deep learning (DL)-based detection scheme (Zhong et al., 2020) can better mitigate the SPI, at the cost of additional power consumption and read latency. In this letter, we first propose a novel constrained coding (CC) scheme which can not only reduce the SPI, but also effectively separate the memory arrays into two categories: SPI-free and SPI-affected arrays. For the SPI-free arrays, a simple middle-point threshold detector suffices to detect the low- and high-resistance cells of ReRAM. For the SPI-affected arrays, a DL detector is first trained offline. To avoid the additional power consumption and latency introduced by the DL detector, we further propose a DL-based threshold detector, whose detection threshold is derived from the outputs of the DL detector. It is then used for the online data detection of all the identified SPI-affected arrays. Simulation results demonstrate that the above CC and DL-aided threshold detection scheme can effectively mitigate the SPI of the ReRAM array and achieve better error rate performance than prior-art detection schemes, without prior knowledge of the channel.
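A minimal sketch of the kind of threshold derivation described above, written as my own illustration rather than the letter's actual procedure: given the trained DL detector's hard decisions on a batch of readback values from SPI-affected arrays, one simple choice of read threshold is the midpoint between the means of the two decided classes. The function names, the midpoint rule, and the convention that larger readback values map to bit 1 are all assumptions.

    import numpy as np

    def derive_threshold(readings, dl_decisions):
        """Derive a scalar detection threshold from a DL detector's hard outputs.

        readings: 1-D array of readback values from SPI-affected arrays
        dl_decisions: 1-D array of 0/1 decisions produced by the trained DL detector
        Returns the midpoint between the two class-conditional means (an illustrative rule).
        """
        low = readings[dl_decisions == 0]
        high = readings[dl_decisions == 1]
        return 0.5 * (low.mean() + high.mean())

    def threshold_detect(readings, threshold):
        """Online detection with a single comparison per cell (assumed bit mapping: above-threshold -> 1)."""
        return (readings > threshold).astype(int)

Once the threshold is fixed, the online read path needs only the comparison in threshold_detect, which is where the power and latency savings over running the DL detector per read would come from.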
Advancements in DNA synthesis and sequencing technologies have enabled the storage of data on synthetic DNA strands. However, realizing its potential relies on the design of tailored coding techniques and algorithms. This survey paper offers an overview of past contributions, accompanied by a special issue that showcases recent developments in this field.
In this article, we propose a new coding algorithm for DNA storage over both error-free and error channels. For the error-free case, we propose a constrained code called the bit insertion-based constrained (BIC) code. BIC codes convert a binary data sequence to multiple oligo sequences satisfying the maximum homopolymer run (i.e., run-length (RL)) constraint by inserting dummy bits. We show that the BIC codes nearly achieve the capacity in terms of information density, while the simple structure of the BIC codes allows linear-time encoding and fast parallel decoding. Also, by combining a balancing technique with the BIC codes, we obtain a constrained coding algorithm that satisfies the GC-content constraint as well as the RL constraint. Next, for the DNA storage channel with errors, we integrate the proposed constrained coding algorithm with a rate-compatible low-density parity-check (LDPC) code to correct errors and erasures. Specifically, we incorporate the LDPC codes adopted in the 5G New Radio standard because they have powerful error-correction capability and appealing features for the integration. Simulation results show that the proposed integrated coding algorithm outperforms existing coding algorithms in terms of information density and error correctability.
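To make the dummy-bit idea concrete, here is a binary toy version of my own, not the actual BIC construction (which maps to quaternary oligo sequences): whenever a run in the output reaches the maximum allowed length, a complementary dummy bit is inserted, and the decoder removes any symbol that arrives immediately after a maximal run. The function names and the binary alphabet are illustrative assumptions.

    def rll_encode(bits, max_run=3):
        """Insert a complementary dummy bit whenever a run reaches max_run,
        so no run in the output exceeds max_run."""
        out, run = [], 0
        for b in bits:
            run = run + 1 if out and b == out[-1] else 1
            out.append(b)
            if run == max_run:
                out.append(1 - b)  # dummy bit breaks the run
                run = 1
        return out

    def rll_decode(coded, max_run=3):
        """Remove dummy bits: a symbol arriving right after a run of length max_run is a dummy."""
        data, last, run = [], None, 0
        for c in coded:
            if run == max_run:
                last, run = c, 1   # dummy: skip it, but it starts a new run in the coded stream
                continue
            run = run + 1 if c == last else 1
            last = c
            data.append(c)
        return data

    msg = [0, 0, 0, 0, 1, 1, 1, 1, 1]
    coded = rll_encode(msg)
    print(coded)                      # no run longer than 3
    print(rll_decode(coded) == msg)   # True

Because each dummy position is determined by the preceding symbols alone, decoding is a single left-to-right pass, which is the kind of structure that enables linear-time encoding and fast parallel decoding.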
The input-constrained binary erasure channel (BEC) with strictly causal feedback is studied. The channel input sequence must satisfy the (0, k)-runlength limited (RLL) constraint, i.e., no more than k consecutive '0's are allowed. The feedback capacity of this channel is derived for all k >= 1, and is given by
$$C^{\mathrm{fb}}_{(0,k)}(\epsilon) = \max \frac{\bar{\epsilon}\, H_2(\delta_0) + \sum_{i=1}^{k-1} \bar{\epsilon}^{\,i+1} H_2(\delta_i) \prod_{m=0}^{i-1} \delta_m}{1 + \sum_{i=0}^{k-1} \bar{\epsilon}^{\,i+1} \prod_{m=0}^{i} \delta_m},$$
where $\epsilon$ is the erasure probability, $\bar{\epsilon} = 1 - \epsilon$, and $H_2(\cdot)$ is the binary entropy function. The maximization is only over $\delta_{k-1}$, while the parameters $\delta_i$ for $i \le k-2$ are straightforward functions of $\delta_{k-1}$. The lower bound is obtained by constructing a simple coding scheme for all k >= 1. It is shown that the feedback capacity can be achieved using zero-error, variable-length coding. For the converse, an upper bound on the non-causal setting, where the erasure is available to the encoder just prior to the transmission, is derived. This upper bound coincides with the lower bound and concludes the search for both the feedback capacity and the non-causal capacity. As a result, non-causal knowledge of the erasures at the encoder does not increase the feedback capacity for the (0, k)-RLL input-constrained BEC. This property does not hold in general: the (2, infinity)-RLL input-constrained BEC, where every '1' is followed by at least two '0's, is used to show that the feedback capacity can be strictly smaller than the non-causal capacity.
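As a small numerical sanity check of my own (not from the paper), consider the k = 1 specialization of the formula above: the numerator sum is empty and the denominator sum has only the i = 0 term, so the capacity reduces to the maximum over $\delta_0$ of $\bar{\epsilon} H_2(\delta_0) / (1 + \bar{\epsilon}\delta_0)$. The function names and the grid-search maximization are illustrative assumptions; at $\epsilon = 0$ the value should approach $\log_2$ of the golden ratio, about 0.6942, the noiseless (0,1)-RLL capacity.

    import numpy as np

    def h2(p):
        """Binary entropy function in bits."""
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -p * np.log2(p) - (1 - p) * np.log2(1 - p)

    def c_fb_k1(eps, grid=100000):
        """Numerically maximize eps_bar*H2(d)/(1 + eps_bar*d) over d in (0,1) (the k = 1 case)."""
        eps_bar = 1.0 - eps
        d = np.linspace(1e-6, 1 - 1e-6, grid)
        return np.max(eps_bar * h2(d) / (1.0 + eps_bar * d))

    print(c_fb_k1(0.0))   # ~0.6942, the noiseless (0,1)-RLL capacity
    print(c_fb_k1(0.5))   # feedback capacity at 50% erasures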
In this paper, we study binary constrained codes that are resilient to bit-flip errors and erasures. In our first approach, we compute the sizes of constrained subcodes of linear codes. Since there exist well-known linear codes that achieve vanishing probabilities of error over the binary symmetric channel (which causes bit-flip errors) and the binary erasure channel, constrained subcodes of such linear codes are also resilient to random bit-flip errors and erasures. We employ a simple identity from the Fourier analysis of Boolean functions, which transforms the problem of counting constrained codewords of linear codes to a question about the structure of the dual code. We illustrate the utility of our method in providing explicit values or efficient algorithms for our counting problem, by showing that the Fourier transform of the indicator function of the constraint is computable, for different constraints. Our second approach is to obtain good upper bounds, using an extension of Delsarte's linear program (LP), on the largest sizes of constrained codes that can correct a fixed number of combinatorial errors or erasures. We observe that the numerical values of our LP-based upper bounds beat the generalized sphere packing bounds of Fazeli et al. (2015).
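The Fourier identity referred to above can be checked by brute force on a toy example of my own (the [7,4] Hamming code and a "no two adjacent ones" constraint are arbitrary choices, not the constraints studied in the paper): for a linear code C with dual code C_perp, the number of codewords of C satisfying a constraint A equals (|C| / 2^n) times the sum, over the dual code, of the Fourier transform of the indicator of A.

    import itertools
    import numpy as np

    n = 7
    # Generator matrix of the [7,4] Hamming code in systematic form (a toy choice).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]], dtype=int)

    # All codewords of C (the GF(2) span of the rows of G).
    msgs = np.array(list(itertools.product([0, 1], repeat=G.shape[0])))
    C = msgs.dot(G) % 2

    # Dual code: all length-n vectors orthogonal (mod 2) to every row of G.
    allvecs = np.array(list(itertools.product([0, 1], repeat=n)))
    C_dual = allvecs[(allvecs.dot(G.T) % 2 == 0).all(axis=1)]

    # Constraint A: binary words with no two adjacent ones (an arbitrary example constraint).
    def in_A(x):
        return all(not (x[i] and x[i + 1]) for i in range(len(x) - 1))

    # Direct count of constrained codewords.
    direct = sum(in_A(c) for c in C)

    # Fourier transform of the indicator of A: hat_1A(y) = sum_x 1A(x) * (-1)^(<x,y>).
    def hat_1A(y):
        return sum((-1) ** int(x.dot(y) % 2) for x in allvecs if in_A(x))

    # Poisson summation over the code: |C ∩ A| = (|C| / 2^n) * sum over C_dual of hat_1A.
    via_dual = len(C) / 2 ** n * sum(hat_1A(y) for y in C_dual)

    print(direct, via_dual)   # the two counts agree

The point of the dual-side expression is that when the constraint's Fourier transform is computable in closed form, the count needs only a sum over the (often much smaller or more structured) dual code rather than an enumeration of the primal code.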
In this paper, we propose a novel iterative encoding algorithm for DNA storage that satisfies both the GC-balance and run-length constraints using a greedy algorithm. DNA strands with run-length greater than three and a GC-balance ratio far from 50% are known to be prone to errors. The proposed encoding algorithm stores data with high flexibility: run-length at most m and GC balance within 0.5 +/- alpha for arbitrary m and alpha. More importantly, we propose a novel mapping method that reduces the average bit error compared to a randomly generated mapping. Using the proposed method, the average bit error caused by a single base error is 2.3455 bits, a reduction of 20.5% compared to the randomized mapping. The method is also robust to error propagation, since the input sequence is partitioned into small blocks during the mapping step. The proposed algorithm is implemented through iterative encoding, consisting of three main steps: randomization, M-ary mapping, and verification. It achieves an information density of 1.833 bits/nt in the case of m = 3 and alpha = 0.05.
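For reference, the two constraints targeted above can be stated as a small verification sketch (my own, with hypothetical function names, not part of the paper's encoder): the maximum homopolymer run must not exceed m, and the GC-content must lie within 0.5 +/- alpha.

    def max_run(oligo):
        """Length of the longest homopolymer run in a DNA string."""
        longest, run = 1, 1
        for prev, cur in zip(oligo, oligo[1:]):
            run = run + 1 if cur == prev else 1
            longest = max(longest, run)
        return longest

    def satisfies_constraints(oligo, m=3, alpha=0.05):
        """Check the run-length (<= m) and GC-balance (0.5 +/- alpha) constraints."""
        gc_ratio = sum(base in "GC" for base in oligo) / len(oligo)
        return max_run(oligo) <= m and abs(gc_ratio - 0.5) <= alpha

    print(satisfies_constraints("ACGTGCATCGAT"))   # True: GC ratio 0.5, longest run 1
    print(satisfies_constraints("AAAAGGGG"))       # False: run of length 4 exceeds m = 3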
ISBN (print): 9781450398008
The need for high-density non-volatile memory is addressed by shrinking cells and increasing the number of bits per cell. These approaches, however, reduce the signal-to-noise ratio and degrade read and write latencies. This paper presents a method to reduce the number of occupied levels in multi-level cells (MLC/TLC/QLC) to improve reliability and performance. The proposed concept compresses the data and then expands it back to its original size by coding it into patterns with fewer levels. The modulation maintains random access for page reads, where each bit of each cell can be sensed independently. For example, with 8-level TLC, the data is compressed by 14%, the number of utilized cell levels is reduced to six, and reliability, program bandwidth, and sequential read bandwidth improve by over 29%.
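An illustrative sketch of my own simplified reading of the expand-into-fewer-levels idea (not the paper's actual modulation code): treat a block of compressed bits as an integer and re-express it in base 6, so each TLC cell uses only six of its eight available levels. All names and block sizes here are assumptions.

    def bits_to_base6_levels(bits, cells):
        """Re-express a block of bits as base-6 symbols, i.e., TLC cells using only 6 of 8 levels.

        Requires 2**len(bits) <= 6**cells so the mapping is injective.
        """
        assert 2 ** len(bits) <= 6 ** cells, "not enough cells for this many bits"
        value = int("".join(map(str, bits)), 2)
        levels = []
        for _ in range(cells):
            levels.append(value % 6)
            value //= 6
        return levels[::-1]

    def base6_levels_to_bits(levels, nbits):
        """Inverse mapping: recover the original bit block from the base-6 cell levels."""
        value = 0
        for lvl in levels:
            value = value * 6 + lvl
        return [int(b) for b in format(value, f"0{nbits}b")]

    bits = [1, 0, 1, 1] * 7                        # 28 data bits
    levels = bits_to_base6_levels(bits, cells=11)  # 6**11 > 2**28, so 11 six-level cells suffice
    print(base6_levels_to_bits(levels, 28) == bits)  # True

Note that this radix sketch only illustrates the bits-per-cell accounting of using six of eight levels; it does not preserve the per-bit, per-page random access property that the paper's modulation maintains.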
ISBN (digital): 9781665483414
ISBN (print): 9781665483414
With the reduction in device size and the increase in cell bit-density, NAND flash memory suffers from larger inter-cell interference (ICI) and disturbance effects. Constrained coding can mitigate the ICI effects by avoiding problematic error-prone patterns, but designing powerful constrained codes requires a comprehensive understanding of the flash memory channel. Recently, we proposed a modeling approach using conditional generative networks to accurately capture the spatio-temporal characteristics of the read signals produced by arrays of flash memory cells under program/erase (P/E) cycling. In this paper, we introduce a novel machine learning framework that extends the generative modeling approach to the coded storage channel. To reduce the experimental overhead associated with collecting extensive measurements from constrained program/read data, we train the generative models by transferring knowledge from models pre-trained with pseudo-random data. This technique can accelerate the training process and improve model accuracy in reconstructing the read voltages induced by constrained input data throughout the flash memory lifetime. We analyze the quality of the model by comparing flash page bit error rates (BERs) derived from the generated and measured read voltage distributions. We envision that this machine learning framework will serve as a valuable tool in flash memory channel modeling to aid the design of stronger and more efficient coding schemes.
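A minimal sketch of the kind of model-quality check described above, under my own assumptions (a single hard read threshold and an above-threshold-reads-as-1 bit convention, neither taken from the paper): estimate page BERs by applying the same threshold to measured and generated read-voltage samples for the same programmed constrained data, and compare the two BER values.

    import numpy as np

    def page_ber(read_voltages, programmed_bits, read_threshold):
        """Estimate a page bit error rate from read voltages and the known programmed bits.

        read_voltages: 1-D array of read voltages for one page
        programmed_bits: 1-D array of the 0/1 bits that were programmed
        read_threshold: assumed single hard-decision read threshold
        """
        decisions = (read_voltages > read_threshold).astype(int)  # assumed bit mapping
        return np.mean(decisions != programmed_bits)

    # Model-quality comparison for the same constrained data pattern (arrays assumed available):
    # ber_measured  = page_ber(v_measured,  bits, vt)
    # ber_generated = page_ber(v_generated, bits, vt)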
In this paper, we consider the problem of encoding data into repeat-free sequences, in which any k-tuple is allowed to appear at most once (for a predefined k). First, the capacity of the repeat-free constraint is calculated. Then, an efficient algorithm, which uses two bits of redundancy, is presented to encode length-n sequences for k = 2 + 2 log(n). This algorithm is then improved to support any value of k of the form k = a log(n), for a > 1, while its redundancy is o(n). We also calculate the capacity of repeat-free sequences combined with local constraints that are given by a constrained system, and the capacity of multi-dimensional repeat-free codes.
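The repeat-free property itself is easy to state as a short checker (my own illustration, not the paper's encoder): a sequence is k-repeat-free if every contiguous length-k window occurs at most once.

    def is_repeat_free(seq, k):
        """Return True if every k-tuple (contiguous window of length k) appears at most once."""
        seen = set()
        for i in range(len(seq) - k + 1):
            window = tuple(seq[i:i + k])
            if window in seen:
                return False
            seen.add(window)
        return True

    print(is_repeat_free("00010111", 3))   # True: the six 3-windows are all distinct
    print(is_repeat_free("0101", 2))       # False: the window "01" repeats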
We show that the broadcast capacity of an infinite-depth tree-structured network of error-free half-duplex-constrained relays can be achieved using constrained coding at the source and symbol forwarding at the relays.