The main focus of this paper is the complete enumeration of self-dual abelian codes in non-principal ideal group algebras F_{2^k}[A × Z_2 × Z_{2^s}] with respect to both the Euclidean and Hermitian inner products, where k and s are positive integers and A is an abelian group of odd order. Based on the well-known characterization of Euclidean and Hermitian self-dual abelian codes, we show that such an enumeration can be obtained in terms of a suitable product of the number of cyclic codes, the number of Euclidean self-dual cyclic codes, and the number of Hermitian self-dual cyclic codes of length 2^s over some Galois extensions of the ring F_{2^k} + uF_{2^k}, where u^2 = 0. Subsequently, general results on the characterization and enumeration of cyclic codes and self-dual codes of length p^s over F_{p^k} + uF_{p^k} are given. Combining these results, the complete enumeration of self-dual abelian codes in F_{2^k}[A × Z_2 × Z_{2^s}] is therefore obtained.
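As a concrete illustration of the objects involved (a minimal sketch, not the paper's enumeration argument), the code below works over the smallest case k = 1 of the base ring, R = F_2 + uF_2 with u^2 = 0, and brute-forces the Euclidean dual of the "trivial" code u·R^n, confirming that it is self-dual. The pair representation of ring elements and the choice n = 2 are illustrative assumptions.

```python
# Minimal sketch: brute-force check that the "trivial" code u*R^n over
# R = F_2 + uF_2 (u^2 = 0) is Euclidean self-dual.  Not the paper's counting.
from itertools import product

# Represent a + b*u as the pair (a, b) with a, b in F_2.
def add(x, y):
    return (x[0] ^ y[0], x[1] ^ y[1])

def mul(x, y):
    # (a + bu)(c + du) = ac + (ad + bc)u, since u^2 = 0 and char = 2
    a, b = x
    c, d = y
    return (a & c, (a & d) ^ (b & c))

def inner(v, w):
    s = (0, 0)
    for vi, wi in zip(v, w):
        s = add(s, mul(vi, wi))
    return s

R = [(0, 0), (1, 0), (0, 1), (1, 1)]       # 0, 1, u, 1 + u
n = 2                                      # any small length works here
ambient = list(product(R, repeat=n))

# C = u*R^n: every coordinate lies in {0, u}
C = [v for v in ambient if all(vi[0] == 0 for vi in v)]
dual = [w for w in ambient if all(inner(w, c) == (0, 0) for c in C)]

print(sorted(C) == sorted(dual))           # True: C equals its Euclidean dual
```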
Sparse representation (or sparse coding) has been applied to deal with frontal face recognition. Two representative methods are the sparse representation-based classification (SRC) and the collaborative representation-based classification (CRC), in which the query face image is represented by a sparse linear combination of all the training samples. The difference between SRC and CRC is that the L_1-norm constraint on the coding is employed in the former to guarantee the sparse property, while the L_2-norm constraint is utilised in the latter. In this paper, we propose a novel loose L_{1/2}-regularised sparse representation (SR) for face recognition, named the L_{1/2} classification (LHC), which is inspired by L_{1/2} regularisation. Additionally, an iterative Tikhonov regularisation (ITR) is proposed to solve LHC efficiently compared with the original algorithm. Using ITR, the balance between the collaborative representation (CR) and the SR can be tuned by the number of iterations. Attributed to the sparser L_{1/2} regularisation and the iterative solution mechanism, better performance can be achieved by LHC. Extensive experiments on three benchmark face databases demonstrate that LHC is more effective than the state-of-the-art SR-based methods in dealing with frontal face recognition.
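The sketch below shows one standard iteratively reweighted Tikhonov (ridge) scheme for an L_{1/2}-penalised coding step, in the spirit of the ITR solver described above; the specific update rule, the regularisation parameter, and the class-wise residual classification rule are assumptions for illustration, not necessarily the paper's exact algorithm.

```python
# Sketch (assumed form): iteratively reweighted Tikhonov steps approximating
#   min_a ||y - X a||^2 + lam * sum_i |a_i|^{1/2}
# followed by an SRC-style class-wise residual decision.
import numpy as np

def lhc_coding(X, y, lam=0.1, iters=20, eps=1e-6):
    """X: (d, n) training samples as columns; y: query vector of length d."""
    n = X.shape[1]
    XtX, Xty = X.T @ X, X.T @ y
    a = np.linalg.solve(XtX + lam * np.eye(n), Xty)      # ridge initialisation
    for _ in range(iters):
        # |a_i|^{1/2} is locally approximated by w_i * a_i^2
        # with w_i = 0.25 * |a_i|^{-3/2}; eps guards against division by zero.
        w = 0.25 * (np.abs(a) + eps) ** (-1.5)
        a = np.linalg.solve(XtX + lam * np.diag(w), Xty)  # Tikhonov step
    return a

def classify(X, y, labels, lam=0.1):
    """Assign y to the class whose own coefficients give the smallest residual."""
    a = lhc_coding(X, y, lam)
    best, best_res = None, np.inf
    for c in np.unique(labels):
        mask = np.asarray(labels) == c
        res = np.linalg.norm(y - X[:, mask] @ a[mask])
        if res < best_res:
            best, best_res = c, res
    return best
```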
Linear predictive coding (LPC) is applied to transmit the samples in a binary system consisting of (+1, -1). This binary system provides a unified method of encoding signed quantities. The all-pole filters used in the LPC have been realized with different algorithms. LPC in this system is found to be suitable for encoding signed numbers in a unified way.
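The abstract is terse, so the sketch below shows only one plausible reading: predictive coding in which just the sign (+1/-1) of the prediction error is transmitted each sample, essentially delta modulation driven by a predictor. The predictor form and step size are illustrative assumptions.

```python
# Sketch (one plausible reading, not the paper's scheme): sign-only (+1/-1)
# predictive coding.  Encoder and decoder run the same predictor so they stay
# in sync; only the error sign is sent over the channel.
import numpy as np

def encode(x, step=0.1):
    pred, bits = 0.0, []
    for s in x:
        b = 1 if s >= pred else -1     # transmit only the sign of the error
        bits.append(b)
        pred += step * b               # encoder mirrors the decoder's update
    return bits

def decode(bits, step=0.1):
    pred, out = 0.0, []
    for b in bits:
        pred += step * b
        out.append(pred)               # running prediction is the reconstruction
    return out

x = np.sin(np.linspace(0, 4 * np.pi, 200))
y = np.array(decode(encode(x)))
print(np.mean((y - x) ** 2))           # small reconstruction error
```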
Progressive image transmission (PIT) is an elegant method for making effective use of communication bandwidth. Unlike conventional sequential transmission, an approximate image is transmitted first, which is then progressively improved over a number of transmission passes. PIT allows the user to quickly recognize an image and is essential for databases with large images and image transmission over low-bandwidth connections. This article presents a review of PIT techniques. A classification scheme based on the method used to progressively update the image is proposed. Four different classes of PIT methods are identified: successive approximation, transmission sequence-based, multistage residual quantization, and multiresolutional or hierarchical coding methods. Subclasses are defined based on the image compression method used. Using this classification, a comprehensive survey and comparison of these methods is performed. (C) 1999 John Wiley & Sons, Inc.
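As a minimal example of the first class (successive approximation), the sketch below transmits an 8-bit image bit plane by bit plane, from most to least significant, so each pass refines the approximation held by the receiver. It is a generic illustration, not drawn from any particular surveyed method.

```python
# Sketch: the simplest "successive approximation" flavour of PIT — send the
# bit planes of an 8-bit image from MSB to LSB, halving the worst-case error
# of the receiver's picture on every transmission pass.
import numpy as np

def bit_planes(img):
    """Yield (plane_index, plane_bits) from MSB to LSB for an 8-bit image."""
    for k in range(7, -1, -1):
        yield k, (img >> k) & 1

def progressive_receive(planes, shape):
    approx = np.zeros(shape, dtype=np.uint8)
    for k, bits in planes:                       # one transmission pass per plane
        approx = approx | (bits * (1 << k)).astype(np.uint8)
        yield approx.copy()                      # usable picture after every pass

img = np.random.default_rng(0).integers(0, 256, (4, 4)).astype(np.uint8)
for passno, approx in enumerate(progressive_receive(bit_planes(img), img.shape), 1):
    err = int(np.max(np.abs(img.astype(int) - approx.astype(int))))
    print(f"pass {passno}: max error = {err}")
```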
The problem of signal compression is to achieve a low bit rate in the digital representation of an input signal with minimum perceived loss of signal quality. In compressing signals such as speech, audio, image, and video, the ultimate criterion of signal quality is usually that judged or measured by the human receiver. As we seek lower bit rates in the digital representations of these signals, it is imperative that we design the compression (or coding) algorithm to minimize perceptually meaningful measures of signal distortion, rather than more traditional and tractable criteria such as the mean squared difference between the waveforms at the input and output of the coding system. This paper develops the notion of perceptual coding based on the concept of distortion masking by the signal being compressed, and describes how the field has progressed as a result of advances in classical coding theory, modeling of human perception, and digital signal processing. We propose that fundamental limits in the science can be expressed by the semi-quantitative concepts of perceptual entropy and the perceptual distortion-rate function, and we examine current compression technology with respect to that framework. We conclude with a summary of future challenges and research directions.
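A toy illustration of the masking idea (using made-up thresholds, not a real psychoacoustic model): choose each frequency band's quantiser step from an assumed masking threshold, so the quantisation noise injected in that band stays below what the signal itself masks.

```python
# Sketch: perceptually weighted quantisation.  The "masking threshold" here is
# a placeholder curve; a real coder derives it from a psychoacoustic model.
import numpy as np

def perceptual_quantise(spectrum, masking_threshold):
    # A uniform quantiser with step q injects noise of average power q^2 / 12,
    # so picking q = sqrt(12 * threshold) places that noise at the threshold.
    step = np.sqrt(12.0 * masking_threshold)
    indices = np.round(spectrum / step)          # what actually gets coded
    return indices, step

def dequantise(indices, step):
    return indices * step

rng = np.random.default_rng(1)
spectrum = rng.normal(0, 10.0, 32)               # toy transform coefficients
threshold = 0.05 * spectrum ** 2 + 1e-3          # toy per-band "masking" curve
idx, step = perceptual_quantise(spectrum, threshold)
noise = dequantise(idx, step) - spectrum
print(np.all(np.abs(noise) <= step / 2 + 1e-12)) # noise bounded band by band
```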
We have studied the locking effect of photon-echo responses in a three-level system and the information reproducibility upon coding the information in the temporal shape of the object laser pulse. We have shown that these effects differ from their analogs in the two-level system.
The paper proposes a new data hiding method based on deoxyribonucleic acid (DNA) coding, using a Word document as the carrier. The plain message becomes a cipher sequence after being encoded into a DNA sequence and encrypted by the addition operation. The cipher sequence is attached to a random DNA primer sequence and circularly shifted a finite number of times; the whole sequence is then hidden in a Word document by substituting each character's color. The plaintext can be extracted according to the keys, and the key space is large enough to resist brute-force attacks. Experimental results show the feasibility of the scheme. (C) 2013 Elsevier Ltd. All rights reserved.
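The sketch below follows the message-preparation steps described above (DNA encoding, addition-based encryption under a key, primer attachment, circular shift). The 2-bit base mapping, the key sequence, and the shift amount are illustrative assumptions, and the final colour-substitution embedding into the Word document is omitted.

```python
# Sketch of the payload-preparation stage (assumed parameters; the Word
# colour-substitution embedding step is not shown).
B2N = {'00': 'A', '01': 'C', '10': 'G', '11': 'T'}   # one common 2-bit coding
N2V = {'A': 0, 'C': 1, 'G': 2, 'T': 3}
V2N = {v: n for n, v in N2V.items()}

def to_dna(msg: bytes) -> str:
    """Encode the plain message as a DNA sequence, two bits per base."""
    bits = ''.join(f'{b:08b}' for b in msg)
    return ''.join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def dna_add(seq: str, key: str) -> str:
    """'Addition' encryption: add base values modulo 4 under a repeating key."""
    return ''.join(V2N[(N2V[s] + N2V[key[i % len(key)]]) % 4]
                   for i, s in enumerate(seq))

def circular_shift(seq: str, k: int) -> str:
    k %= len(seq)
    return seq[k:] + seq[:k]

primer = 'ACGTACGTAC'                     # stand-in for a random DNA primer
cipher = dna_add(to_dna(b'hi'), key='GATTACA')
payload = circular_shift(primer + cipher, 3)
print(payload)                            # sequence that would be embedded
```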
In this paper, a coding-theory construction of Cartesian authentication codes is presented. The construction is a generalization of some known constructions. Within the framework of this generic construction, several classes of authentication codes using certain classes of error-correcting codes are described. The authentication codes presented in this paper are better than known ones with comparable parameters. It is demonstrated that the construction is related to certain combinatorial designs, such as difference matrices and generalized Hadamard matrices.
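For context, the sketch below shows one classical member of this family: the polynomial-evaluation (Reed-Solomon based) Cartesian authentication code. It illustrates how an error-correcting code yields an authentication code, but it is not the paper's more general construction; the field size and key are illustrative.

```python
# Sketch: Reed-Solomon / polynomial-evaluation Cartesian authentication code.
# Sender and receiver share key = (x, y); the opponent never sees the key.
q = 257                                   # a prime, so arithmetic is in F_q

def tag(source, key):
    """source: k symbols of F_q used as coefficients of x, x^2, ..., x^k;
       key = (x, y): secret evaluation point and one-time pad."""
    x, y = key
    return (y + sum(s * pow(x, i + 1, q) for i, s in enumerate(source))) % q

key = (123, 201)
msg = (5, 17, 42)                         # k = 3 source symbols
t = tag(msg, key)
assert tag(msg, key) == t                 # receiver recomputes and accepts
print(msg, t)

# For this scheme, impersonation succeeds with probability 1/q and
# substitution with probability at most k/q, the usual coding-theory bounds.
```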
The source term of the 1 MW TRIGA Mark II research reactor core of the Democratic Republic of the Congo was derived in this study. Atmospheric dispersion modeling followed by radiation dose calculation was performed for two postulated accident scenarios. The derivation was based on an inventory of the peak radioisotope activities in the core, obtained with the Karlsruhe version of the isotope generation code KORIGEN. The atmospheric dispersion modeling was performed with the HotSpot code, which yielded the radiation dose profile around the site using meteorological parameters specific to the area under study. The two accident scenarios were selected from possible accident analyses for TRIGA and TRIGA-fueled reactors: destruction of the fuel element with the highest activity release, and a plane crash on the reactor building as the worst-case scenario. The deterministic effects of these scenarios will be used to update the Safety Analysis Report (SAR) of the reactor, whose current version does not yet incorporate them. Site-specific meteorological conditions were collected from two meteorological stations: one installed within the Atomic Energy Commission and another at the National Meteorological Agency (METTELSAT), which is not far from the site. Results show that in both accident scenarios the radiation doses remain within the limits, far below the maximum effective (whole-body) doses of 20 mSv/year for workers and 1 mSv/year for the general public recommended in the IAEA Basic Safety Standards 115, demonstrating the radiation safety of this reactor. This guarantees the safety of workers and of the population around the plant site. (C) 2014 Elsevier B.V. All rights reserved.
The time-varying modulated lapped transform (MLT) is used in speech and audio coding schemes to adjust the time-frequency resolution, to eliminate pre-echoes in the reconstructed signal, and to improve the coding quality. In order to maintain the perfect-reconstruction (PR) property in transition periods, an asymmetrical window has to be used at the cost of poorer frequency characteristics. We first generalize a window-design method for transition periods in the time-varying MLT, with a rigorous proof of its PR property, and then present a new window-design method in which the prototype window is designed so that the total reconstruction distortion in the presence of coefficient quantization is minimized. This leads to the time-varying minimum mean-square error (MMSE) MLT. Experiments have shown that the designed windows have better frequency characteristics than the sine window in both transition and regular periods. A general formulation of the quantization distortion for different quantization-error models and for all coding systems is given. A simplified optimal window-design algorithm without direct minimization of the distortion equation is suggested. As an example, a transform-coding scheme with the time-varying MMSE MLT for speech and audio signals is presented. (C) 2002 Elsevier Science B.V. All rights reserved.
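For reference, the sketch below runs a fixed-block MLT/MDCT analysis-synthesis round trip with the ordinary sine window and checks numerically the perfect-reconstruction property that the paper's transition windows must preserve; the transition-window construction and the MMSE optimisation themselves are not reproduced, and the block length is an illustrative choice.

```python
# Sketch: fixed-length MLT (MDCT) with the sine window; interior samples of
# the overlap-added output reproduce the input exactly (PR).
import numpy as np

N = 64                                          # half frame length (hop size)
n = np.arange(2 * N)
k = np.arange(N)
window = np.sin(np.pi / (2 * N) * (n + 0.5))    # sine window (Princen-Bradley)
basis = np.cos(np.pi / N * np.add.outer(n + 0.5 + N / 2, k + 0.5))  # (2N, N)

def mlt(frame):
    return (window * frame) @ basis             # N coefficients per frame

def imlt(coeffs):
    return window * (basis @ coeffs) * (2.0 / N)

rng = np.random.default_rng(0)
x = rng.normal(size=8 * N)
out = np.zeros_like(x)
for start in range(0, len(x) - 2 * N + 1, N):   # 50% overlapped frames
    out[start:start + 2 * N] += imlt(mlt(x[start:start + 2 * N]))

# Interior samples are covered by two frames; aliasing cancels there.
print(np.max(np.abs(out[N:-N] - x[N:-N])))      # numerically zero
```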