This tutorial paper reviews the theory and design of codes for hiding or embedding information in signals such as images, video, audio, graphics, and text. Such codes have also been called watermarking codes; they can be used in a variety of applications, including copyright protection for digital media, content authentication, media forensics, data binding, and covert communications. Some of these applications imply the presence of an adversary attempting to disrupt the transmission of information to the receiver; other applications involve a noisy, generally unknown, communication channel. Our focus is on the mathematical models, fundamental principles, and code design techniques that are applicable to data hiding. The approach draws from basic concepts in information theory, coding theory, game theory, and signal processing, and is illustrated with applications to the problem of hiding data in images.
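As a toy illustration of the embedding idea the abstract describes, the sketch below shows least-significant-bit (LSB) embedding, one of the simplest data-hiding schemes for images. This is only a minimal example of the embed/extract principle, not any of the robust codes the paper surveys; the function names `embed_bits` and `extract_bits` are hypothetical.

```python
def embed_bits(pixels, bits):
    """Replace the LSB of each 0-255 pixel value with one message bit."""
    if len(bits) > len(pixels):
        raise ValueError("message longer than cover signal")
    stego = list(pixels)
    for i, b in enumerate(bits):
        stego[i] = (stego[i] & ~1) | (b & 1)  # clear the LSB, then set it to b
    return stego

def extract_bits(pixels, n):
    """Read back the first n embedded bits."""
    return [p & 1 for p in pixels[:n]]

cover = [128, 64, 200, 33, 90, 17]   # a few cover pixels (illustrative)
msg = [1, 0, 1, 1]
stego = embed_bits(cover, msg)
assert extract_bits(stego, len(msg)) == msg
```

Such LSB embedding is fragile under even mild attacks, which is precisely why the game-theoretic and coding-theoretic machinery the paper discusses is needed in adversarial settings.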
This study assessed whether a speeded coding task that used a computer-based mouse response (CBMR) format was a measure of general processing speed (Gs). By analyzing the task within a network of tasks representing both traditional Gs tests and reaction time tasks, it was shown that a CBMR test can be used to measure the same construct as traditional paper-and-pencil (PP) tests and that this response format does not introduce variance associated with psychomotor performance. Differences between PP and CBMR formats were observed, and it is argued that these may provide information on individual differences in performance not available from traditional coding tests.
Secret sharing schemes have been studied for over twenty years. An important approach to the construction of secret sharing schemes is based on linear codes. The access structure of the secret sharing scheme based on a linear code is determined by the minimal codewords of the dual code. However, it is in general very hard to determine the minimal codewords of linear codes. Although every linear code gives rise to a secret sharing scheme, the access structures of secret sharing schemes based on only a few classes of codes have been determined. The main contributions of this thesis are a new characterization of the minimal codewords of linear codes, the construction of several classes of linear codes that are either optimal or almost optimal, and the determination of the access structures of the secret sharing schemes based on the duals of these linear codes. The access structures of the secret sharing schemes presented in this thesis are of two types, both with desirable properties.
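The best-known instance of secret sharing derived from a linear code is Shamir's threshold scheme, which arises from Reed-Solomon (MDS) codes. The sketch below implements it over a prime field as a concrete, hedged example of the code-based approach the thesis studies; the prime `P` and the function names are illustrative choices, not anything from the thesis itself.

```python
import random

P = 2_147_483_647  # a Mersenne prime; the field GF(P) is an illustrative choice

def make_shares(secret, k, n):
    """Split secret into n shares such that any k of them reconstruct it.
    Shares are evaluations of a random degree-(k-1) polynomial over GF(P)
    whose constant term is the secret (a Reed-Solomon codeword)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P) recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(123456789, k=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

For a general (non-MDS) linear code the access structure is no longer a simple threshold, which is exactly why characterizing the minimal codewords of the dual code matters.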
ISBN (digital): 9781847190246
ISBN (print): 9781904811350
This book is easy to read and fun. The examples and tutorials are short, focused and interesting. You can dip in and get what you want fast. This book is written by VirtualDub enthusiasts for new and intermediate users. It's ideal if you are just starting with video processing and want a powerful and free tool, or if you've already started with VirtualDub and want to take it further.
ISBN (print): 0780387201
The aim of this paper is to describe a new class of problems and some new results in coding theory arising from the analysis of the composition and functionality of the genetic code. The major goal of the proposed work is to initiate research on investigating possible connections between the regulatory network of gene interactions (RNGI) and the proofreading (error-control) mechanism of the processes of the central dogma of genetics. New results include establishing a direct relationship between Boolean Network (BN) Models of RNGI and Gallager's LDPC decoding algorithms. The proposed research topics and described results are expected to have a two-fold impact on coding theory and genetics research. Firstly, they may provide a different setting in which to analyze standard LDPC decoding algorithms, by using dynamical systems and Boolean function theory. Secondly, they may be of use in establishing deeper relationships between the DNA proofreading mechanism, RNGIs, as well as their joint influence on the development and possible treatment of genetic diseases like cancer.
This paper will focus on the construction of superimposed codes using incidence matrices. Such constructions require a set of elements and a partial order defined on the set. We will define a partial order on partitions. The construction will be made using elements from the partially ordered set of partitions of n elements. (C) 2005 Elsevier Ltd. All rights reserved.
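To make the notion concrete: the defining property of a (strength-1) superimposed code is that no column of the incidence matrix is covered by another column, so each item remains detectable from a superposition (Boolean OR) of columns. The checker below verifies this property for columns given as supports, using the incidence matrix of 2-subsets of a small set as a stand-in example; the paper's actual construction from partially ordered partitions is not reproduced here.

```python
from itertools import combinations

def is_superimposed(columns):
    """Check the strength-1 superimposed-code property: no column's support
    is contained in another column's support. Columns are sets of row
    indices, i.e. supports of an incidence-matrix column."""
    for i, ci in enumerate(columns):
        for j, cj in enumerate(columns):
            if i != j and ci <= cj:       # column i is covered by column j
                return False
    return True

# Incidence matrix of the 2-subsets of {0,1,2,3}: rows are points, each
# column is the support {a, b} of one pair; no pair contains another.
cols = [set(pair) for pair in combinations(range(4), 2)]
assert is_superimposed(cols)
assert not is_superimposed([{0}, {0, 1}])  # {0} is covered by {0, 1}
```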
Orthogonal frequency division multiplexing (OFDM) enables low-complexity equalization and has been adopted in several wireless standards. However, OFDM cannot exploit multipath diversity without computationally complex coding and decoding. We show here that by sampling at a rate higher than the symbol rate, also known as fractional sampling (FS), one can improve the diversity that the wireless channel can provide in an OFDM system. We propose maximal ratio combining at each sub-carrier for the FS-OFDM system, argue that the diversity gains acquired through this approach are related to the spectral shape of the pulse and its excess bandwidth, and derive analytical bit error and symbol error rate expressions for our scheme. We also explore extensions to differentially encoded systems that do not require channel state information at the receiver, multiple-input multiple-output (MIMO) systems that exploit space diversity, and low peak-to-average ratio (PAR) options such as zero-padded (ZP) and cyclic-prefix-only (CP-only) transmissions. We corroborate our approach with simulations.
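The per-sub-carrier maximal ratio combining step can be sketched in a few lines: each fractionally sampled branch is weighted by the conjugate of its channel gain and the result is normalized, which maximizes the post-combining SNR. The branch gains below are made-up illustrative values, and the noiseless setting is chosen only so the combiner's output can be checked exactly.

```python
import numpy as np

def mrc_combine(r, h):
    """Maximal ratio combining at one sub-carrier: weight branch samples r
    by conjugate channel gains h, then normalize by total branch power."""
    h = np.asarray(h, dtype=complex)
    r = np.asarray(r, dtype=complex)
    return np.vdot(h, r) / np.vdot(h, h).real  # vdot conjugates its first arg

s = 1 - 1j                                   # transmitted symbol (illustrative)
h = np.array([0.8 + 0.3j, 0.2 - 0.5j])       # gains of two FS branches
r = h * s                                    # noiseless received branches
assert abs(mrc_combine(r, h) - s) < 1e-12    # the symbol is recovered exactly
```

With noise present, the combiner no longer returns the symbol exactly, but among linear combiners it yields the highest output SNR, which is where the claimed diversity gain enters.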
This paper describes our approach to support the development of large-scale Web applications. Large development efforts have to be divided into a number of smaller tasks of different kinds that can be performed by multiple developers. Once this process has taken place, it is important to manage the consistency among the artifacts in an efficient and systematic manner. Our model-driven approach makes this possible. In this paper, we discuss how a metamodel is used to describe part of the specification as a central contract among the developers. We also describe a tool that we implemented on the basis of the metamodel. The tool provides a variety of code generators and a mechanism for checking whether view artifacts, such as JavaServer Pages (TM), are compliant with the model. This feature helps developers manage the consistency between a view artifact and the related business logic (HyperText Transfer Protocol request handlers).
Seismic data volumes, which require huge transmission capacities and massive storage media, continue to increase rapidly due to acquisition of 3D and 4D multiple streamer surveys, multicomponent data sets, reprocessing of prestack seismic data, calculation of post-stack seismic data attributes, etc. We consider lossy compression as an important tool for efficient handling of large seismic data sets. We present a 2D lossy seismic data compression algorithm, based on sub-band coding, and we focus on adaptation and optimization of the method for common-offset gathers. The sub-band coding algorithm consists of five stages: first, a preprocessing phase using an automatic gain control to decrease the non-stationary behaviour of seismic data; second, a decorrelation stage using a uniform analysis filter bank to concentrate the energy of seismic data into a minimum number of sub-bands; third, an iterative classification algorithm, based on an estimation of variances of blocks of sub-band samples, to classify the sub-band samples into a fixed number of classes with approximately the same statistics; fourth, a quantization step using a uniform scalar quantizer, which gives an approximation of the sub-band samples to allow for high compression ratios; and fifth, an entropy coding stage using a fixed number of arithmetic encoders matched to the corresponding statistics of the classified and quantized sub-band samples to achieve compression. Decompression basically performs the opposite operations in reverse order. We compare the proposed algorithm with three other seismic data compression algorithms. The high performance of our optimized sub-band coding method is supported by objective and subjective results.
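Two of the five stages above can be sketched generically: a sliding-window automatic gain control (stage one) and a uniform scalar quantizer (stage four). The window length, step size, and the synthetic trace below are arbitrary illustrative choices, not the parameters tuned in the paper.

```python
import numpy as np

def agc(trace, window=5, eps=1e-12):
    """Stage-1 sketch: divide each sample by a local RMS amplitude estimate,
    reducing the non-stationary amplitude behaviour of a seismic trace."""
    power = np.convolve(trace**2, np.ones(window) / window, mode="same")
    return trace / np.sqrt(power + eps)

def uniform_quantize(x, step):
    """Stage-4 sketch: uniform scalar quantizer (round to the nearest
    multiple of step); the error per sample is bounded by step / 2."""
    return step * np.round(x / step)

# Synthetic trace with growing amplitude, mimicking non-stationarity.
trace = np.sin(np.linspace(0, 8 * np.pi, 64)) * np.linspace(1, 10, 64)
leveled = agc(trace)
q = uniform_quantize(leveled, step=0.25)
assert np.max(np.abs(q - leveled)) <= 0.125 + 1e-12  # error within step / 2
```

The remaining stages (filter-bank decorrelation, variance-based classification, and adaptive arithmetic coding) are where the method's compression efficiency actually comes from, and they depend on design choices this sketch does not attempt to reproduce.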
This paper proposes a dictionary-based code compression technique that maps the source register operands to the nearest occurrence of a destination register in the predecessor instructions. The key idea is that most destination registers have a great possibility to be used as source registers in the following instructions. The dependent registers can be removed from the dictionary if this information can be specified otherwise. Such destination-source relationships are so common that making use of them can result in much better code compression. After removing the dependent register operands, the original dictionary size can be reduced significantly. As a result, the compression ratio can benefit from: (a) the reduction of dictionary size due to the removal of dependent registers, and (b) the reduction of program encoding due to the reduced number of dictionary entries. A set of programs has been compressed using this feature. The compression results show that the compression ratio is reduced to 38.41% on average for MediaBench benchmarks compiled for the MIPS R2000 processor, as opposed to 45% using operand factorization. (C) 2003 Elsevier Inc. All rights reserved.
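The destination-source mapping idea can be sketched as follows: each source register is replaced by its backward distance to the nearest earlier instruction that wrote it, so the register name need not be stored in the dictionary entry. The instruction representation and function name below are hypothetical simplifications, not the paper's actual MIPS encoding.

```python
def encode_source_regs(instructions):
    """Sketch of the destination-source idea: replace each source register
    with its backward distance to the nearest preceding instruction whose
    destination is that register; registers with no prior writer are kept
    by name. Instructions are (dest, [sources]) tuples (hypothetical form)."""
    encoded = []
    for i, (dest, srcs) in enumerate(instructions):
        out = []
        for s in srcs:
            dist = None
            for back in range(i - 1, -1, -1):
                if instructions[back][0] == s:
                    dist = i - back           # distance to the defining write
                    break
            out.append(("back", dist) if dist is not None else ("reg", s))
        encoded.append((dest, out))
    return encoded

prog = [("r1", []), ("r2", ["r1"]), ("r3", ["r1", "r2"])]
enc = encode_source_regs(prog)
assert enc[1][1] == [("back", 1)]             # r1 was written 1 instruction back
assert enc[2][1] == [("back", 2), ("back", 1)]
```

Because most backward distances are small, they compress far better than raw register numbers, which is the intuition behind the reported improvement over plain operand factorization.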