This paper builds a novel bridge between algebraic coding theory and mathematical knot theory, with applications in both directions. We give methods to construct error-correcting codes starting from the colorings of a knot, describing through a series of results how the properties of the knot translate into code parameters. We show that knots can be used to obtain error-correcting codes with prescribed parameters and an efficient decoding algorithm.
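As a minimal illustration of the coloring-to-code idea (our sketch under stated assumptions, not the paper's construction): the Fox 3-colorings of the trefoil knot are the solutions of one linear relation per crossing over F_3, and the solution set can be read off directly as a ternary linear code.

```python
from itertools import product

# Fox 3-colorings of the trefoil knot: arcs a, b, c; each crossing imposes
# 2*(over-arc color) = sum of the two under-arc colors (mod 3). For the
# standard trefoil diagram all three relations reduce to a + b + c = 0 (mod 3).
def trefoil_colorings(p=3):
    crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]  # (over, under, under) arc indices
    code = []
    for word in product(range(p), repeat=3):
        if all((2 * word[o] - word[u] - word[v]) % p == 0 for o, u, v in crossings):
            code.append(word)
    return code

code = trefoil_colorings()
weights = sorted(sum(x != 0 for x in w) for w in code if any(w))
print(len(code))     # 9 colorings (3 monochromatic + 6 proper)
print(min(weights))  # minimum distance 2
```

The 9 colorings form the ternary parity-check code of length 3 (dimension 2, minimum distance 2); how knot invariants translate into such code parameters in general is exactly what the paper's results address.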
In this paper, a fast and efficient decoding algorithm for correcting the (23, 12, 7) Golay code up to four errors is presented. The aim of this paper is to develop a fast syndrome-group search method for finding the candidate codewords by utilizing the property that the syndrome of a weight-4 error pattern is identical to that of some weight-3 error pattern. Once the set of candidate codewords is constructed, the most likely one is determined by assessing the corresponding correlation metrics. The well-known Chase-II decoder, which needs to run the hard-decision decoder multiple times, serves as the basis for comparison. Simulation results over the additive white Gaussian noise channel show that the decoding complexity of the proposed method is reduced on average by at least 86% in terms of decoding time. Furthermore, the successful decoding percentage of the new decoder in the case of four errors is always superior to that of the Chase-II decoder. At a signal-to-noise ratio of 0 dB, the proposed algorithm can still correct up to 97.40% of weight-4 error patterns. The overall bit error rate performance of the proposed decoder is close to that of the Chase-II decoder, which implies that the new decoder is suitable for practical implementation. Copyright (c) 2011 John Wiley & Sons, Ltd.
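The syndrome-group idea — that heavier error patterns share syndromes with lighter ones — can be sketched on the much smaller (7, 4, 3) Hamming code (an illustrative stand-in of ours, not the paper's Golay decoder): since the Hamming code is perfect with covering radius 1, every weight-2 error pattern has the same syndrome as some weight-1 pattern, so candidates can be organized into syndrome groups.

```python
from itertools import combinations

# Parity-check matrix of the (7,4,3) Hamming code: column i is the
# binary representation of i+1, so all columns are distinct and nonzero.
H = [[(i + 1) >> k & 1 for i in range(7)] for k in range(3)]

def syndrome(e):
    """Syndrome H*e (mod 2) of an error pattern e of length 7."""
    return tuple(sum(h[i] * e[i] for i in range(7)) % 2 for h in H)

def weight_t_syndromes(t):
    """Group all weight-t error patterns by their syndrome."""
    groups = {}
    for pos in combinations(range(7), t):
        e = [1 if i in pos else 0 for i in range(7)]
        groups.setdefault(syndrome(e), []).append(pos)
    return groups

s1 = weight_t_syndromes(1)
s2 = weight_t_syndromes(2)
# Every weight-2 syndrome already occurs as a weight-1 syndrome, so a
# syndrome lookup immediately yields the group of candidate patterns.
print(all(s in s1 for s in s2))  # True
```

For the Golay code the analogous statement (covering radius 3, so weight-4 syndromes coincide with weight-3 syndromes) is what the paper exploits to enumerate candidates quickly.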
Quantum key distribution promises information-theoretically secure communication, with data post-processing playing a vital role in extracting secure keys from raw data. While hardware advancements have significantly improved practical implementations, optimizing post-processing techniques offers a cost-effective avenue to enhance performance. Advantage distillation, which extends beyond standard information reconciliation and privacy amplification, has proven instrumental in various post-processing methods. However, the optimal post-processing remains an open question. Therefore, it is important to develop a comprehensive framework to encapsulate and enhance these existing methods. In this work, we propose an advantage distillation framework for quantum key distribution, generalizing and unifying existing key distillation protocols. Inspired by entanglement distillation, our framework not only integrates current techniques but also improves upon them. Notably, by employing classical linear codes, we achieve higher key rates, particularly in scenarios where one-time pad encryption is not used for post-processing. Our approach provides insights into existing protocols and offers a systematic way for further enhancements in quantum key distribution.
The advancement of deep neural networks (NNs) has led to practical biometric recognition systems, such as face recognition, but has also increased threats to privacy, such as the recovery of original biometrics from templates. Efficiency, security, and usability are three important goals in template protection that are difficult to achieve simultaneously. IronMask (CVPR 2021) shows the importance of an efficient error-correcting mechanism on the metric used in the recognition system when designing template protection that satisfies all three at once. It is the first modular protection that can be added to any NN-based face recognition system independently (pre)trained by metric learning with cosine similarity. In addition, its performance on three datasets (Multi-PIE, FEI, Color FERET), which are widely used for evaluating template protection, is comparable to that of protection-recognition integrated systems, which limit usability due to inefficient registration. In this paper, we first demonstrate and analyze the limits of IronMask on wilder and larger face datasets (LFW, AgeDB-30, CFP-FP, IJB-C). On the basis of our analyses of IronMask, we propose a new face template protection that has several benefits over IronMask while preserving its modular nature. First, ours provides more flexibility to tune the error-correcting capacity, balancing the true accept rate (TAR) against the false accept rate (FAR). Second, ours minimizes performance degradation while keeping an appropriate level of security; even when evaluated on the large dataset IJB-C, we achieve a TAR of 96.31% at a FAR of 0.05% with 118-bit security when combined with ArcFace, which achieves a 96.97% TAR at a 0.01% FAR.
Two words u and v have a t-overlap if the length-t prefix of u is equal to the length-t suffix of v, or vice versa. A code C is t-overlap-free if no two words u and v in C (including u = v) have a t-overlap. A code of length n is said to be (t(1), t(2))-overlap-free if it is t-overlap-free for all t such that 1 <= t(1) <= t <= t(2) <= n - 1. A (1, n - 1)-overlap-free code of length n is called non-overlapping; such codes have applications in DNA-based data storage systems and frame synchronization. In this paper, we initiate the study of codes of length n which are simultaneously (1, k)-overlap-free and (n - k, n - 1)-overlap-free, and establish lower and upper bounds on the size of balanced and error-correcting (1, k)-overlap-free codes. (c) 2024 Elsevier B.V. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
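The overlap definitions above translate directly into code; a minimal sketch (function names are ours):

```python
def has_t_overlap(u, v, t):
    """Words u, v have a t-overlap if the length-t prefix of one
    equals the length-t suffix of the other."""
    return u[:t] == v[-t:] or v[:t] == u[-t:]

def is_overlap_free(code, t1, t2):
    """A code is (t1, t2)-overlap-free if no two words (including a
    word with itself) have a t-overlap for any t1 <= t <= t2."""
    return not any(has_t_overlap(u, v, t)
                   for u in code for v in code
                   for t in range(t1, t2 + 1))

def is_non_overlapping(code):
    """Non-overlapping = (1, n-1)-overlap-free for words of length n."""
    n = len(next(iter(code)))
    return is_overlap_free(code, 1, n - 1)

print(is_non_overlapping({"0111"}))          # True: no prefix matches a suffix
print(is_non_overlapping({"0111", "0011"}))  # False: "011" is both a prefix and a suffix
```

Note that a word can overlap with itself (u = v), which is why even a singleton code such as {"1010"} fails the non-overlapping test.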
Flash memory is a nonvolatile computer storage device which consists of blocks of cells. While increasing the voltage level of a single cell is fast and simple, reducing the level of a cell requires erasing the entire block containing the cell. Since block-erasures are costly, flash coding schemes have been developed to maximize the number of writes before a block-erasure is needed. A novel coding scheme based on error-correcting codes is presented that allows the cell levels to increase as evenly as possible and, as a result, increases the number of writes before a block-erasure. The scheme is based on the premise that cells whose levels are higher than others need not be increased. This introduces errors in the recorded data, which can be corrected by an error-correcting code provided that the number of erroneous cells is within the error-correcting capability of the code. The scheme is also capable of combating noise in flash memories, which causes additional errors and erasures, in order to enhance data reliability. For added flexibility, the scheme can be combined with other flash codes to yield concatenated schemes with high memory rates.
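A toy model of the premise (our illustration, not the paper's scheme): cell levels can only go up, so when a codeword is written relative to a base level, cells that already sit above their intended level are simply left alone, and the resulting bit errors are handed to an error-correcting code — here a length-5 repetition code with majority decoding.

```python
# Toy flash write: levels can only increase. Writing bit c_i at base level m
# means raising cell i to level m + c_i; a cell already above that level is
# left as-is, which may flip the bit read back later.
def flash_write(levels, codeword, base):
    return [max(lvl, base + c) for lvl, c in zip(levels, codeword)]

def flash_read(levels, base):
    return [1 if lvl > base else 0 for lvl in levels]

def repetition_decode(bits):
    return 1 if sum(bits) > len(bits) // 2 else 0  # majority vote

levels = [0, 0, 2, 0, 2]    # two cells were bumped high by earlier activity
base = 0
codeword = [0, 0, 0, 0, 0]  # data bit 0 encoded with a 5-fold repetition code
levels = flash_write(levels, codeword, base)
read = flash_read(levels, base)
print(read)                     # [0, 0, 1, 0, 1]: two stuck-high errors
print(repetition_decode(read))  # 0: the repetition code corrects them
```

The two pre-bumped cells read back as 1s, but because at most two of the five cells are in error, majority decoding still recovers the stored bit — the same division of labor the abstract describes, with a stronger code in place of repetition.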
The determination of bounds on the size of codes with a given minimum distance is an important problem in coding theory. In this paper, we construct codes based on partial linear maps of finite-dimensional vector spaces, define a distance measure via the rank function, and present several upper and lower bounds on the size of these codes.
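A hedged sketch of the rank metric the abstract alludes to (our illustration of the general idea, using full matrices over GF(2) rather than partial linear maps): the distance between two linear maps is the rank of their difference.

```python
def gf2_rank(matrix):
    """Rank over GF(2); rows are 0/1 lists, packed into bitmasks and
    reduced by Gaussian elimination keyed on the leading bit."""
    basis = {}  # leading-bit position -> basis vector
    for row in matrix:
        v = int("".join(map(str, row)), 2)
        while v:
            hb = v.bit_length() - 1
            if hb not in basis:
                basis[hb] = v
                break
            v ^= basis[hb]  # cancel the leading bit and keep reducing
    return len(basis)

def rank_distance(A, B):
    """d(A, B) = rank(A - B); over GF(2), subtraction is XOR."""
    diff = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]
    return gf2_rank(diff)

A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
Z = [[0, 0, 0]] * 3
print(rank_distance(A, A))  # 0: identical maps are at distance zero
print(rank_distance(A, Z))  # 3: distance to the zero map is rank(A)
```

The rank function is subadditive (rank(X + Y) <= rank(X) + rank(Y)), which is what makes rank_distance satisfy the triangle inequality and hence a genuine metric.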
We replace the usual setting for error-correcting codes (i.e. vector spaces over finite fields) with that of permutation groups. We give an algorithm which uses a combinatorial structure we call an uncovering-by-bases, related to covering designs, and we construct some examples of these. We also analyse the complexity of the algorithm. We then formulate a conjecture about uncoverings-by-bases, for which we give some supporting evidence and which we prove for some special cases. In particular, we consider the case of the symmetric group in its action on 2-subsets, where we make use of the theory of graph decompositions. Finally, we discuss the implications this conjecture has for the complexity of the decoding algorithm. (C) 2009 Elsevier B.V. All rights reserved.
ISBN (print): 9781479914203
Error-correcting codes based on quasigroups have been defined elsewhere. These codes are a combination of cryptographic algorithms and error-correcting codes. In a previous paper of ours, we succeeded in improving the speed of the decoding process by defining a new algorithm for coding and decoding, named the "cut-decoding algorithm". Here, a new modification of the cut-decoding algorithm is considered in order to obtain further improvements in the code's performance. We present several experimental results obtained with different decoding algorithms for these codes.
Error-correcting codes with both binary and ternary coordinates are considered. The maximum cardinality of such a code with n(2) binary coordinates, n(3) ternary coordinates, and minimum distance d is denoted by N(n(2), n(3), d). A computer-aided method based on backtrack search and isomorph rejection is used here to settle many values of N(n(2), n(3), 3); several new upper bounds on this function are also obtained. For small parameters, a complete classification of optimal codes is carried out. It is shown that the maximum cardinality of a ternary one-error-correcting code of length 6 is 38 and that this code is unique. (C) 2000 Elsevier Science B.V. All rights reserved.
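For very small parameters, N(n2, n3, d) can be computed by naive exhaustive search — a sketch of ours that conveys the setting, without the backtrack pruning and isomorph rejection the paper needs for the larger cases it settles:

```python
from itertools import product

def dist(u, v):
    """Hamming distance on mixed binary/ternary words."""
    return sum(a != b for a, b in zip(u, v))

def max_code_size(n2, n3, d):
    """N(n2, n3, d) by exhaustive search: largest code with n2 binary and
    n3 ternary coordinates and minimum distance d (tiny cases only)."""
    words = list(product(*([range(2)] * n2 + [range(3)] * n3)))

    def grow(code, candidates):
        best = len(code)
        for i, w in enumerate(candidates):
            # keep only later words still compatible with everything chosen
            rest = [x for x in candidates[i + 1:] if dist(x, w) >= d]
            best = max(best, grow(code + [w], rest))
        return best

    return grow([], words)

print(max_code_size(2, 1, 3))  # 2: a third word cannot differ in both binary coordinates
print(max_code_size(0, 3, 3))  # 3: e.g. {000, 111, 222} over the ternary alphabet
```

Even these toy values show why the binary coordinates are the bottleneck: with distance 3 on three coordinates, every pair of codewords must differ everywhere, and a binary coordinate only offers two values to go around.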