The recent availability of small personal digital assistants (PDAs) with a touchscreen and communication capabilities has been an influential factor in the renewed interest in telewriting, a technique for the exchange of handwritten information through telecommunications means. In this context, differential chain coding algorithms for compression of the handwritten ink are revisited. In particular, it is shown that the coding efficiency of multi-ring differential chain coding (MRDCC) is not always better than that of single-ring differential chain coding (DCC), as previously suggested. These algorithms were tested on over 300 handwritten messages using a relative compactness criterion and a per-length distortion measure. The probabilities of relative vectors in MRDCC and DCC are related, an expression for relative compactness in the MRDCC case is introduced, and the application of Freeman's criteria for the selection of the appropriate code for a family of curves is illustrated.
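To make the differential idea concrete, the sketch below encodes a sampled pen trace with a single-ring, 8-direction differential chain code: each pen displacement is quantized to a ring direction, and only the difference between successive directions is transmitted. This is an illustrative toy under assumed conditions (an 8-direction ring, an invented sample trace), not the MRDCC/DCC implementation evaluated in the paper.

```python
import math

# Single-ring differential chain coding (DCC) sketch: each pen movement is
# quantized to one of 8 directions on a unit ring, and the *difference*
# between successive directions is what gets encoded.
DIRECTIONS = 8  # single ring of 8 neighbours (illustrative assumption)

def quantize_direction(dx, dy):
    """Map a pen displacement to the nearest of 8 ring directions (0..7)."""
    angle = math.atan2(dy, dx)  # -pi .. pi
    return round(angle / (2 * math.pi / DIRECTIONS)) % DIRECTIONS

def dcc_encode(points):
    """Encode a pen trace (list of (x, y) samples) as differential directions."""
    dirs = [quantize_direction(x1 - x0, y1 - y0)
            for (x0, y0), (x1, y1) in zip(points, points[1:])]
    # first direction is sent absolutely, the rest as wrapped differences
    diffs = [dirs[0]] + [((d1 - d0 + DIRECTIONS // 2) % DIRECTIONS) - DIRECTIONS // 2
                         for d0, d1 in zip(dirs, dirs[1:])]
    return diffs

if __name__ == "__main__":
    trace = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3)]
    # smooth strokes change direction slowly, so small differences dominate,
    # which is what makes differential coding of ink compact
    print(dcc_encode(trace))
```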
ISBN:
(Print) 9783319641850; 9783319641843
A Steganographic Channel Model (SCM) is hard to build for different steganography algorithms in different embedding domains. Thus, theoretical analysis of important factors in steganography, such as capacity and distortion, is hard to obtain. In this paper, to avoid introducing significant distortion into HEVC video files, a novel HEVC SCM is presented and analyzed. It is first proposed that the distortion optimization in this SCM should target coding efficiency instead of visual quality. Based on this conclusion, a novel coding efficiency preserving steganography algorithm based on Prediction Units (PUs) is proposed for HEVC videos. The intra prediction modes of candidate PUs are used as the cover. This algorithm was tested on a dataset consisting of 17,136 HD sequences. The experimental results confirm the correctness of the previous conclusion and the practicability of the proposed channel model, and show that our algorithm outperforms the existing HEVC steganography algorithm in capacity and imperceptibility.
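The following toy sketch only illustrates the general idea of hiding bits in intra prediction mode decisions, here by forcing the parity of the mode chosen among the encoder's acceptable candidates. The candidate mode lists, rate-distortion costs, and parity mapping are hypothetical placeholders; they do not reproduce the paper's coding-efficiency-preserving algorithm or the HEVC reference encoder's mode search.

```python
# Toy mode-parity embedding: for each cover PU, pick the cheapest acceptable
# prediction mode whose parity equals the next message bit.

def embed_bits(candidate_pus, bits):
    """candidate_pus: list of {mode_index: rd_cost} dicts, one per cover PU."""
    chosen, bit_iter = [], iter(bits)
    for costs in candidate_pus:
        bit = next(bit_iter, None)
        if bit is None:                      # payload exhausted: plain best mode
            chosen.append(min(costs, key=costs.get))
            continue
        usable = {m: c for m, c in costs.items() if m % 2 == bit}
        if not usable:                       # a real scheme would skip this PU
            usable = costs
        chosen.append(min(usable, key=usable.get))
    return chosen

def extract_bits(chosen_modes, n_bits):
    """Receiver recovers the message from the parities of the chosen modes."""
    return [m % 2 for m in chosen_modes[:n_bits]]

if __name__ == "__main__":
    # hypothetical per-PU candidate modes with RD costs (placeholders)
    pus = [{0: 10.0, 1: 10.2, 5: 11.0}, {2: 8.0, 3: 9.5}, {7: 4.0, 8: 4.1}]
    message = [1, 0, 0]
    modes = embed_bits(pus, message)
    print(modes, extract_bits(modes, len(message)))  # [1, 2, 8] [1, 0, 0]
```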
ISBN:
(Print) 9781509059669
This work presents a coding efficiency evaluation of the recently published first release of the video coding scheme of the Alliance for Open Media (AOM), the so-called AOM/AV1, in comparison to the video coding standards H.264/MPEG-AVC (Advanced Video Coding) and H.265/MPEG-HEVC (High Efficiency Video Coding). As representatives of the two last-mentioned video coding standards, the corresponding reference software encoders JM and HM were selected, and for HEVC, in addition, the Fraunhofer HHI HEVC commercial software encoder and the open-source software implementation x265 were used. According to the experimental results, which were obtained by using similar configurations for all examined representative encoders, the H.265/MPEG-HEVC reference software implementation provides significant average bit-rate savings of 38.4% and 32.8% compared to AOM/AV1 and H.264/MPEG-AVC, respectively. In particular, when directly compared to the H.264/MPEG-AVC High Profile, the AOM/AV1 encoder produces an average bit-rate overhead of 10.5% at the same objective quality. In addition, it was observed that the AOM/AV1 encoding times are quite similar to those of the full-fledged HM and JM reference software encoders. On the other hand, the typical encoding times of the HM encoder are in the range of 30-300 times higher on average than those measured for the configurable HHI HEVC encoder, depending on its chosen trade-off between encoding speed and coding efficiency.
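Average bit-rate savings of this kind are typically reported as Bjontegaard-delta (BD) rates. A minimal sketch of the usual BD-rate calculation (cubic fit of log bit rate over PSNR, integrated over the overlapping quality range) is given below; the R-D points are invented placeholders, and the exact fitting variant used in the study may differ.

```python
import numpy as np

def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
    """Average bit-rate difference (%) of 'test' vs 'ref' at equal PSNR,
    using the standard Bjontegaard cubic fit of log10(rate) over PSNR."""
    lr_ref, lr_test = np.log10(rates_ref), np.log10(rates_test)
    p_ref = np.polyfit(psnr_ref, lr_ref, 3)
    p_test = np.polyfit(psnr_test, lr_test, 3)
    lo = max(min(psnr_ref), min(psnr_test))
    hi = min(max(psnr_ref), max(psnr_test))
    # integrate the fitted log-rate curves over the common PSNR interval
    int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_diff = (int_test - int_ref) / (hi - lo)
    return (10 ** avg_diff - 1) * 100  # negative => 'test' saves bit rate

if __name__ == "__main__":
    # placeholder R-D points (kbit/s, dB); real evaluations use several QPs per sequence
    ref = ([1000, 1800, 3200, 6000], [34.0, 36.5, 38.8, 41.0])
    test = ([900, 1600, 2900, 5500], [34.1, 36.6, 38.9, 41.1])
    print(f"BD-rate: {bd_rate(*ref, *test):.2f} %")
```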
The Bjontegaard model is widely used to calculate the coding efficiency difference between codecs. However, this model might not be an accurate predictor of the true coding efficiency as it relies on PSNR measurements. Therefore, in this paper, we propose a model to calculate the average coding efficiency based on subjective quality scores, i.e., mean opinion scores (MOS). We call this approach Subjective Comparison of ENcoders based on fitted Curves (SCENIC). To account for the intrinsic nature of bounded rating scales, a logistic function is used to fit the rate-distortion (R-D) values. The average MOS and bit-rate differences are computed between the fitted R-D curves. The statistical properties of the subjective scores are used to estimate confidence intervals on the calculated average MOS and bit-rate differences. The proposed model is expected to report more realistic coding efficiency figures, as PSNR is not always correlated with perceived visual quality. (C) 2013 Elsevier Inc. All rights reserved.
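A minimal sketch of the core idea follows: fit a bounded logistic function to MOS versus log bit rate for each encoder, then average the difference between the fitted curves over the common rate range. The parameterization, starting values, and sample scores are assumptions, and the confidence-interval estimation described in the paper is omitted here.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(log_rate, a, b, c, d):
    """Bounded R-D model: MOS saturates between d and d + a."""
    return d + a / (1.0 + np.exp(-b * (log_rate - c)))

def avg_mos_gain(rates_a, mos_a, rates_b, mos_b, n=200):
    """Mean MOS difference (B minus A) over the common bit-rate range."""
    la, lb = np.log10(rates_a), np.log10(rates_b)
    pa, _ = curve_fit(logistic, la, mos_a, p0=[4.0, 2.0, np.median(la), 1.0], maxfev=10000)
    pb, _ = curve_fit(logistic, lb, mos_b, p0=[4.0, 2.0, np.median(lb), 1.0], maxfev=10000)
    grid = np.linspace(max(la.min(), lb.min()), min(la.max(), lb.max()), n)
    return float(np.mean(logistic(grid, *pb) - logistic(grid, *pa)))

if __name__ == "__main__":
    # placeholder subjective scores on a 1-5 scale for two hypothetical encoders
    rates = np.array([500, 1000, 2000, 4000, 8000])
    mos_a = np.array([1.8, 2.6, 3.4, 4.0, 4.4])
    mos_b = np.array([2.2, 3.1, 3.9, 4.3, 4.6])
    print(f"average MOS gain of B over A: {avg_mos_gain(rates, mos_a, rates, mos_b):+.2f}")
```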
A macroblock-level deblocking method is proposed for H.264/AVC, in which blocking artifacts are effectively eliminated in the discrete cosine transform domain at the macroblock encoding stage. Experimental results show that the proposed algorithm outperforms conventional H.264 in terms of coding efficiency, with bit-rate savings of up to 5.7% and no loss in reconstruction quality.
Maximizing coding efficiency is important in applications where bandpass Sigma Delta modulation is used as a source encoder to synthesize a two-level pulse train in RF class-D amplifiers. A periodic pulse-train model is developed to mimic the encoding of a bandpass Sigma Delta modulator for sinusoidal source signals, and it is shown that the coding efficiency of both the model and the modulator varies similarly with changes in the carrier over-sample ratio. The relationship between coding efficiency and carrier over-sample ratio is not monotonic and has significant dips at certain ratios. The predictions of the model are compared with simulation results for a fourth-order bandpass modulator. The results show that coding efficiency is low in a modulator design where the input frequency is one-fourth the sample rate (f_s) and can be increased by as much as 15% by selecting a slightly lower over-sample ratio, such as an input frequency of (3/10)f_s.
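As a rough illustration of the efficiency metric, the sketch below measures the fraction of a two-level pulse train's power that falls in the carrier bin, using a simple periodic square-wave stand-in rather than an actual fourth-order bandpass modulator; the sequence length, carrier bin, and phase offset are arbitrary choices made for the example.

```python
import numpy as np

def coding_efficiency(pulse_train, carrier_bin):
    """Fraction of a two-level waveform's total power that falls in the carrier bin."""
    n = len(pulse_train)
    spectrum = np.fft.rfft(pulse_train) / n
    in_band = 2.0 * np.abs(spectrum[carrier_bin]) ** 2  # factor 2: mirrored bin
    total = np.mean(np.asarray(pulse_train, dtype=float) ** 2)
    return in_band / total

if __name__ == "__main__":
    # crude periodic stand-in for a bandpass sigma-delta output: a +/-1 square
    # wave whose fundamental sits at fs/8 (the phase offset avoids zero samples)
    n, carrier_bin = 4096, 512
    t = np.arange(n)
    pulses = np.sign(np.sin(2 * np.pi * carrier_bin * t / n + 0.3))
    print(f"coding efficiency ~ {coding_efficiency(pulses, carrier_bin):.3f}")
```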
More efficient data compression can be achieved in encoding line drawings by vector chain coding (VCC) than by traditional run-length coding (RLC), provided the total length of the lines within a line drawing is not excessive. More bandwidth or time can thus be saved in transmitting such pictures by using VCC. Although this has so far been established only intuitively, this paper presents a quantitative analysis and comparison of the coding efficiency of these two codes for line drawings. The coding efficiency is measured in terms of both per-length coding rate and data compression ratio, which are determined for a class of handwritten line drawings characterised by a proper statistical model. In particular, the critical point of line-drawing complexity is derived at which VCC becomes less efficient than RLC. Experimental observations are also presented to verify the theoretical results.
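The sketch below contrasts the two cost structures with deliberately simplified, hypothetical bit-cost models (a fixed raster width, 3 bits per chain link, and a run count assumed not to grow with ink length). It only illustrates why chain coding loses its advantage once the total line length becomes large; it does not reproduce the paper's statistical model of handwritten line drawings.

```python
import math

# Hypothetical bit-cost models for a toy comparison of VCC vs RLC.
WIDTH = 512                                        # assumed raster width
BITS_PER_RUN = 2 * math.ceil(math.log2(WIDTH))     # run start + run length
BITS_PER_LINK = 3                                  # 8-direction chain-code link
BITS_PER_START = 2 * math.ceil(math.log2(WIDTH))   # absolute stroke start point

def rlc_bits(n_runs):
    """Run-length coding: cost grows with the number of black runs."""
    return n_runs * BITS_PER_RUN

def vcc_bits(n_strokes, total_length):
    """Vector chain coding: one start point per stroke plus one link per ink pixel."""
    return n_strokes * BITS_PER_START + total_length * BITS_PER_LINK

if __name__ == "__main__":
    # toy assumption: for a fixed set of strokes, drawing more ink mostly
    # lengthens existing runs, so the run count stays roughly constant while
    # the chain-code cost grows linearly with the total line length
    n_strokes, n_runs = 10, 1500
    for total_length in (500, 2000, 8000, 16000):
        print(f"{total_length:6d} ink pixels:  "
              f"VCC {vcc_bits(n_strokes, total_length):6d} bits   "
              f"RLC {rlc_bits(n_runs):6d} bits")
```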
It has been widely believed that motion-compensated update steps performed in the motion-compensated temporal filtering (MCTF) framework improve coding efficiency in comparison to conventional hybrid coding. However, the coding efficiency of MCTF update steps for scalable video coding has not been examined sufficiently in the past. In this paper, we investigate the efficiency of the decoder-side update steps by analyzing the effect of performing the inverse update steps at the decoder side. We demonstrate through a theoretical analysis that the MCTF update step at the decoder side does not contribute significantly to the coding efficiency, except at high spatio-temporal resolutions with high bit rates. The results of our study provide a theoretical justification for not including MCTF update steps in the decoding process for scalable video coding.
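A minimal lifting sketch, with motion compensation omitted for brevity, shows what the predict and update steps (and their inverses) look like and what changes when the decoder skips the inverse update. The Haar-style filter and the synthetic frames are assumptions for illustration only, not the MCTF configuration analyzed in the paper.

```python
import numpy as np

# Haar-style lifting along the temporal axis.  'h' frames come from the
# predict step, 'l' frames from the update step; the decoder normally inverts
# both, and the question is what is lost when the inverse update is skipped.

def mctf_analysis(even, odd):
    h = odd - even              # predict step: high-pass (residual) frames
    l = even + 0.5 * h          # update step: low-pass frames
    return l, h

def mctf_synthesis(l, h, skip_inverse_update=False):
    even = l if skip_inverse_update else l - 0.5 * h   # inverse update
    odd = h + even                                      # inverse predict
    return even, odd

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    even = rng.random((4, 4))
    odd = even + 0.05 * rng.random((4, 4))   # the "next frame" is similar
    l, h = mctf_analysis(even, odd)
    e_full, _ = mctf_synthesis(l, h)
    e_skip, _ = mctf_synthesis(l, h, skip_inverse_update=True)
    print("full inverse error  :", float(np.abs(e_full - even).max()))
    print("skipped update error:", float(np.abs(e_skip - even).max()))
```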
The visual system employs a gain control mechanism in the cortical coding of contrast whereby the response of each cell is normalised by the integrated activity of neighbouring cells. While restricted in space, the normalisation pool is broadly tuned for spatial frequency and orientation, so that a cell's response is adapted by stimuli which fall outside its 'classical' receptive field. Various functions have been attributed to divisive gain control: in this paper we consider whether this output nonlinearity serves to increase the information-carrying capacity of the neural code. Forty-six natural scenes were analysed with oriented, frequency-tuned filters whose bandwidths were chosen to match those of mammalian striate cortical cells. The images were logarithmically transformed so that the filters responded to a luminance ratio, or contrast. In the first study, the response of each filter was calibrated relative to its response to a grating stimulus, and local image contrast was expressed in terms of the familiar Michelson metric. We found that the distribution of contrasts in natural images is highly kurtotic, peaking at low values and having a long exponential tail. There is considerable variability in local contrast, both within and between images. In the second study we compared the distribution of response activity before and after implementing contrast normalisation, and noted two major changes: response variability, both within and between scenes, is reduced by normalisation, and the entropy of the response distribution is increased, indicating a more efficient transfer of information.
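A small sketch of divisive gain control on synthetic filter responses is given below: each response is divided by a semi-saturation constant plus the pooled activity of its patch, and kurtosis and histogram entropy are compared before and after. The response model, pooling rule, and constants are assumptions for illustration, not the filters or calibration used in the study.

```python
import numpy as np

def divisive_normalize(responses, sigma=0.1):
    """Divide each response by a semi-saturation constant plus the pooled
    (RMS) activity; here the pool is simply the whole patch."""
    pool = np.sqrt(np.mean(responses ** 2, axis=-1, keepdims=True))
    return responses / (sigma + pool)

def entropy_bits(x, bins=64):
    """Entropy of a fixed-bin histogram of the responses, in bits."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

def kurtosis(x):
    x = x - x.mean()
    return float(np.mean(x ** 4) / np.mean(x ** 2) ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # stand-in for oriented filter outputs on many patches: heavy-tailed
    # responses whose local scale varies strongly from patch to patch
    scales = rng.lognormal(mean=0.0, sigma=1.0, size=(5000, 1))
    raw = scales * rng.laplace(size=(5000, 16))
    norm = divisive_normalize(raw)
    print("kurtosis raw/normalised:", kurtosis(raw.ravel()), kurtosis(norm.ravel()))
    print("entropy  raw/normalised:", entropy_bits(raw.ravel()), entropy_bits(norm.ravel()))
```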
Growing video resolutions have led to an increasing volume of Internet video traffic, which has created a need for more efficient video compression. New video coding standards, such as High Efficiency Video Coding (HEVC), enable a higher level of compression, but the complexity of the corresponding encoder implementations is also higher. Therefore, encoders that are efficient in terms of both compression and complexity are required. In this work, we implement four optimizations to the Kvazaar HEVC encoder: 1) uniform inter and intra cost comparison; 2) concurrency-oriented SAO implementation; 3) resolution-adaptive thread allocation; and 4) fast cost estimation of coding coefficients. Optimization 1 changes the selection criterion of the prediction mode in fast configurations, which greatly improves the coding efficiency. Optimization 2 replaces the implementation of one of the in-loop filters with one that better supports concurrent processing. This allows removing some dependencies between encoding tasks, which provides more opportunities for parallel processing to increase coding speed. Optimization 3 reduces the overhead of thread management by spawning fewer threads when there is not enough work for all available threads. Optimization 4 speeds up the computation of residual coefficient coding costs by switching to a faster but less accurate estimation. The impact of the optimizations is measured with two coding configurations of Kvazaar: the ultrafast preset, which aims for the fastest coding speed, and the veryslow preset, which aims for the best coding efficiency. Together, the introduced optimizations give a 2.8× speedup in the ultrafast configuration and a 3.4× speedup in the veryslow configuration. The trade-off for the speedup with the veryslow preset is a 0.15% bit-rate increase. However, with the ultrafast preset, the optimizations also improve coding efficiency by 14.39%.
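As an illustration of the idea behind optimization 3, the sketch below caps the number of worker threads by a rough estimate of the wavefront parallelism available at a given resolution. The heuristic, the assumed 64-pixel CTU, and the function itself are hypothetical and do not reflect Kvazaar's actual thread-allocation code.

```python
import os

CTU = 64  # HEVC coding tree unit size assumed for this sketch

def usable_threads(width, height, hw_threads=None, overlap=2):
    """Cap the thread count by the rough number of CTU rows that can be
    processed concurrently in a wavefront (a hypothetical heuristic)."""
    if hw_threads is None:
        hw_threads = os.cpu_count() or 1
    ctu_rows = (height + CTU - 1) // CTU
    # in wavefront parallel processing, roughly every second CTU row can be active
    parallel_rows = max(1, ctu_rows // overlap)
    return min(hw_threads, parallel_rows)

if __name__ == "__main__":
    # small frames offer little parallelism, so fewer threads are spawned
    for w, h in ((416, 240), (1280, 720), (1920, 1080), (3840, 2160)):
        print(f"{w}x{h}: spawn {usable_threads(w, h, hw_threads=16)} threads")
```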