Let S_n(lambda) be the set of all permutations over the multiset {1^lambda, 2^lambda, ..., m^lambda}, where n = m lambda. A frequency permutation array (FPA) of minimum distance d is a subset of S_n(lambda) in which every two elements have distance at least d. FPAs have many applications related to error-correcting codes. In coding theory, the Gilbert-Varshamov bound and the sphere-packing bound are derived from the size of balls of certain radii. We propose two efficient algorithms that compute the ball size of frequency permutations under Chebyshev distance. Here this is equivalent to computing the permanent of a special type of matrix, which generalizes the Toeplitz matrix in some sense. Both methods extend previously known results. The first runs in O(C(2d lambda, d lambda)^2.376 log n) time and O(C(2d lambda, d lambda)^2) space, where C(a, b) denotes the binomial coefficient "a choose b". The second runs in O(C(2d lambda, d lambda) C(d lambda + lambda, lambda) n/lambda) time and O(C(2d lambda, d lambda)) space. For small constants lambda and d, both are efficient in time and use constant storage space. (C) 2012 Elsevier Inc. All rights reserved.
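For small parameters, the ball size under Chebyshev distance can be checked by brute force. The sketch below enumerates the distinct frequency permutations and counts those within distance d of the identity arrangement; the function name and structure are illustrative and are not the paper's algorithm, which avoids this exponential enumeration:

```python
from itertools import permutations

def ball_size(m, lam, d):
    """Brute-force ball size: count frequency permutations of the
    multiset {1^lam, ..., m^lam} whose Chebyshev distance to the
    identity arrangement is at most d.  Feasible only for tiny n."""
    center = tuple(s for s in range(1, m + 1) for _ in range(lam))
    return sum(
        1
        for p in set(permutations(center))  # distinct multiset permutations
        if max(abs(a - b) for a, b in zip(p, center)) <= d
    )
```

With lam = 1 this counts ordinary permutations of 1..m in which every element moves by at most d positions; for d = 1 the counts follow the Fibonacci numbers, which gives a quick sanity check.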
Recently, minimum and non-minimum delay perfect codes were proposed for any channel of dimension n. Their construction appears in the literature as a subset of cyclic division algebras over Q(zeta_3) only for the dimension n = 2^s n_1, where s is an element of {0, 1}, n_1 is odd and the signal constellations are isomorphic to Z[zeta_3]^n. In this work, we propose an innovative methodology to extend the construction of minimum and non-minimum delay perfect codes as a subset of cyclic division algebras over Q(zeta_3), where the signal constellations are isomorphic to the hexagonal A_2^n-rotated lattice, for any channel of any dimension n such that gcd(n, 3) = 1. (C) 2012 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
The only way to keep pace with Moore's Law is to use probabilistic computing for memory design. Probabilistic computing is 'unavoidable', especially when scaled memory dimensions go down to the levels where variability takes over. In order to print features below 20 nm, novel lithographies such as Extreme Ultraviolet (EUV) are required. However, transistor structures and memory arrays are strongly affected by pattern roughness caused by the randomness of such lithography, leading to variability-induced data errors in the memory read-out. This paper presents a probabilistic, holistic look at how to handle bit errors in NAND Flash memory and at the trade-offs between lithography processes and error-correcting codes that ensure data integrity. (C) 2012 Elsevier Ltd. All rights reserved.
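As a concrete illustration of the error-correcting side of that trade-off, the sketch below implements a single-error-correcting Hamming(7,4) code. Production NAND Flash controllers use much stronger BCH or LDPC codes; this toy code (names and layout are illustrative) only shows how redundancy turns a variability-induced bit flip into a correctable event:

```python
def hamming74_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit Hamming codeword laid out as
    (p1, p2, d1, p3, d2, d3, d4)."""
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    # each syndrome bit is the parity of one group of codeword positions
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]
```

Flipping any single bit of a codeword and decoding recovers the original data bits, at the cost of 3 parity bits per 4 data bits.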
A list successive cancellation (LSC) decoding algorithm to boost the performance of polar codes is proposed. Compared with traditional successive cancellation decoding algorithms, LSC simultaneously produces at most L locally best candidates during the decoding process to reduce the chance of missing the correct codeword. The complexity of the proposed algorithm is O(LN log N), where N and L are the code length and the list size, respectively. Simulation results of LSC decoding in the binary erasure channel and binary-input additive white Gaussian noise channel show a significant performance improvement.
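The core list-pruning idea can be sketched without the polar-code recursion itself. In the sketch below, every candidate path is extended by both bit values at each step and only the L lowest-metric paths survive; the per-bit penalty against a vector of log-likelihood ratios is a stand-in for the actual successive-cancellation path metric, so this is an illustration of the pruning step, not an LSC decoder:

```python
import heapq

def list_decode(llrs, L):
    """Decide bits one at a time, keeping at most L best candidate
    paths (positive LLR favours bit 0, negative favours bit 1)."""
    paths = [((), 0.0)]  # (decided bits, accumulated path metric)
    for llr in llrs:
        extended = []
        for bits, metric in paths:
            # penalise the bit value that disagrees with the LLR's sign
            extended.append((bits + (0,), metric + max(0.0, -llr)))
            extended.append((bits + (1,), metric + max(0.0, llr)))
        paths = heapq.nsmallest(L, extended, key=lambda p: p[1])
    return min(paths, key=lambda p: p[1])[0]
```

Each of the N steps touches at most 2L candidate paths, which is the source of the L factor in the O(LN log N) complexity once the polar-code butterfly structure supplies the log N factor.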
The complementary design theory is powerful for searching for an optimal design when its complementary design is smaller. This paper introduces a new class of sliced equidistance designs and develops the corresponding complementary design theory under the generalized minimum aberration criterion. Two rules are established to search for a generalized minimum aberration design through its complementary design in a sliced equidistance design. As a result, the developed theory covers, as special cases, the related results in which the whole designs are saturated designs. Some examples are presented to illustrate its usefulness. (C) 2011 Elsevier B.V. All rights reserved.
It is known that the number of different classical messages which can be communicated with a single use of a classical channel with zero probability of decoding error can sometimes be increased by using entanglement shared between sender and receiver. It has been an open question to determine whether entanglement can ever increase the zero-error communication rates achievable in the limit of many channel uses. In this paper we show, by explicit examples, that entanglement can indeed increase asymptotic zero-error capacity, even to the extent that it is equal to the normal capacity of the channel.
We consider a noisy communication channel in which discrete messages are transmitted by redundant digital signals. It is shown that the probability of selecting the true hypothesis can be made arbitrarily close to unity if optimal coding is used and the signal-to-noise ratio exceeds a threshold value. This allows us to obtain a formulation of the Shannon theorem in which the notions of channel capacity and entropy are not used.
A new method is introduced for the lossy compression of a LAS file. LAS files store the results from LiDAR scanning and contain a huge number of points with associated scalar values. The proposed method consists of four steps: eliminating points within the over-sampled regions, moving the positions of the remaining points within the user-specified limits, variable-length coding, and arithmetic compression. This method was compared with the only previously known lossy LAS file compression method, owned by LizardTech(TM). As shown by experimentation, the proposed method is considerably better than the referenced method.
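The variable-length coding step can be illustrated with a common delta-plus-varint scheme. This is an assumed, generic form, since the abstract does not specify the actual coder: consecutive coordinates along a scan line tend to be close, so their differences are small integers that fit in one or two bytes each:

```python
def encode(values):
    """Delta encoding followed by zigzag-mapped base-128 varints."""
    out = bytearray()
    prev = 0
    for v in values:
        delta = v - prev
        prev = v
        u = delta * 2 if delta >= 0 else -delta * 2 - 1  # zigzag: sign to LSB
        while u >= 0x80:
            out.append((u & 0x7F) | 0x80)  # 7 payload bits + continuation bit
            u >>= 7
        out.append(u)
    return bytes(out)

def decode(data):
    """Invert encode(): reassemble varints, undo zigzag, undo deltas."""
    values, prev, u, shift = [], 0, 0, 0
    for byte in data:
        u |= (byte & 0x7F) << shift
        shift += 7
        if not byte & 0x80:  # last byte of this varint
            delta = u // 2 if u % 2 == 0 else -(u + 1) // 2
            prev += delta
            values.append(prev)
            u, shift = 0, 0
    return values
```

On a run of nearby integer coordinates the encoded stream is several times smaller than fixed 32-bit storage, and an arithmetic coder can then squeeze the residual byte statistics further.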
Motivation: The high throughput sequencing (HTS) platforms generate unprecedented amounts of data that introduce challenges for the computational infrastructure. Data management, storage and analysis have become major logistical obstacles for those adopting the new platforms. The requirement for large investment for this purpose almost signalled the end of the Sequence Read Archive hosted at the National Center for Biotechnology Information (NCBI), which holds most of the sequence data generated worldwide. Currently, most HTS data are compressed through general-purpose algorithms such as gzip. These algorithms are not designed for compressing data generated by the HTS platforms; for example, they do not take advantage of the specific nature of genomic sequence data, that is, limited alphabet size and high similarity among reads. Fast and efficient compression algorithms designed specifically for HTS data should be able to address some of the issues in data management, storage and communication. Such algorithms would also help with analysis, provided they offer additional capabilities such as random access to any read and indexing for efficient sequence similarity search. Here we present SCALCE, a 'boosting' scheme based on the Locally Consistent Parsing technique, which reorganizes the reads in a way that results in a higher compression speed and compression rate, independent of the compression algorithm in use and without using a reference genome. Results: Our tests indicate that SCALCE can improve the compression rate achieved through gzip by a factor of 4.19 when the goal is to compress the reads alone. In fact, on SCALCE-reordered reads, gzip running time can improve by a factor of 15.06 on a standard PC with a single core and 6 GB memory. Interestingly, even the running time of SCALCE + gzip improves on that of gzip alone by a factor of 2.09. When compared with the recently published BEETL, which aims to sort the (inverted) reads in lexicographic order for improving bzi
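The effect of such a reordering 'boosting' scheme can be mimicked with a much cruder stand-in for Locally Consistent Parsing: group reads by a canonical k-mer so that reads sharing a substring become adjacent before a general-purpose compressor runs. All names below are illustrative and are not SCALCE's actual API or core-string construction:

```python
import gzip

def reorder_reads(reads, k=8):
    """Sort reads by their lexicographically smallest k-mer (a crude
    proxy for a shared 'core' substring), then by the read itself, so
    similar reads land next to each other in the compressor's window."""
    def core(read):
        return min(read[i:i + k] for i in range(len(read) - k + 1))
    return sorted(reads, key=lambda r: (core(r), r))

def pack(reads):
    """gzip the newline-joined, reordered read set."""
    return gzip.compress("\n".join(reorder_reads(reads)).encode())

def unpack(blob):
    return gzip.decompress(blob).decode().split("\n")
```

Like SCALCE, this discards the original read order; a pipeline that must restore it has to store the permutation separately, which is part of the trade-off between compression rate and metadata overhead.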
A hybrid coding scheme using repeat-accumulate (RA) code and polar code as subcodes is presented to improve the block error probability. As the simplest turbo-like code, punctured RA code is introduced to improve the reliability of the hybrid scheme. A low-complexity belief-propagation/successive-cancellation hybrid decoding algorithm is proposed. Compared with polar code, this hybrid scheme can provide 0.3 dB coding gain with code length 1024 and code rate 1/2. In contrast with RA code, it has no error floor in the additive white Gaussian noise channel.