ISBN (digital): 9781510679931
ISBN (print): 9781510679931; 9781510679924
In VVC, the intra prediction mode of a block is encoded using the most probable mode (MPM) mechanism. For this purpose, candidate modes are maintained in the MPM candidate list, which is constructed differently depending on the number of neighboring angular intra prediction modes. While maintaining a proper MPM candidate list is important, how the selected list index is encoded also matters. In this paper, we investigate context coding of the MPM candidate list index in place of the bypass coding currently used in VVC. The proposed method achieves a BDBR gain of 0.08% in the Y channel, and 0.10% and 0.06% in the Cb and Cr channels, respectively.
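As a toy illustration of why context coding can beat bypass coding here, the sketch below codes a skewed stream of truncated-unary list-index bins with a simple adaptive per-position probability model (a counts-based stand-in for a CABAC context; the index distribution and the list size of 6 are assumptions, not the VVC design) and compares the ideal arithmetic-coding cost against the one bit per bin spent by bypass coding:

```python
import math

class BinContext:
    """Adaptive probability model for one binary context: an illustrative
    stand-in for a CABAC context model, not the actual VVC state machine."""
    def __init__(self):
        self.counts = [1, 1]                  # Laplace-smoothed bit counts
    def cost(self, bit):
        return -math.log2(self.counts[bit] / sum(self.counts))
    def update(self, bit):
        self.counts[bit] += 1

def index_bins(idx, max_idx=5):
    """Truncated-unary binarization of a list index (list size assumed 6)."""
    return [1] * idx + ([0] if idx < max_idx else [])

# assumed skewed index stream: the first list entry is chosen most often
indices = [0] * 70 + [1] * 20 + [2] * 10
ctxs, ctx_bits, bypass_bits = {}, 0.0, 0
for idx in indices:
    for pos, b in enumerate(index_bins(idx)):
        c = ctxs.setdefault(pos, BinContext())  # one context per bin position
        ctx_bits += c.cost(b)
        c.update(b)
        bypass_bits += 1                        # bypass spends 1 bit per bin
print(ctx_bits < bypass_bits)  # True: the adaptive contexts beat bypass
```

Because the selected index is heavily skewed toward the front of the list, the adaptive contexts spend well under one bit per bin on average, which is the effect context coding of the index exploits.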
It is well established that in most species, the hippocampus shows extensive postnatal development. This delayed maturation has a number of implications, which can be thought of in three categories. First, the late maturation has the direct effect of depriving the developing organism of at least some of the functions of the hippocampus, in particular place learning, context coding and, in humans, episodic memory. Second, any learning that does occur very early in life, prior to hippocampal maturation, will largely bear the imprint and properties of those brain systems that, unlike the hippocampus, are fully functional early in life. Third, the active state of development of the hippocampus in the first weeks and months of life renders this structure susceptible to disruption by environmental and/or chromosomal factors. In this article, I discuss my efforts, with many colleagues over the past 40 years, to understand each of these implications.
This paper introduces an efficient method for lossless compression of depth map images, using a representation of the depth image in terms of three entities: 1) the crack-edges; 2) the constant-depth regions enclosed by them; and 3) the depth value over each region. The starting representation is identical to that used in a very efficient coder for palette images, the piecewise-constant image model coding, but the techniques used for coding the elements of the representation are more advanced and especially suited to the type of redundancy present in depth images. Initially, the vertical and horizontal crack-edges separating the constant-depth regions are transmitted by 2D context coding using optimally pruned context trees. Both the encoder and the decoder can reconstruct the regions of constant depth from the transmitted crack-edge image. The depth value in a given region is encoded using the depth values of the already-encoded neighboring regions, exploiting the natural smoothness of the depth variation and the mutual exclusiveness of the values in neighboring regions. The encoding method is suitable for lossless compression of depth images, achieving compression ratios of about 10-65, and can additionally be used as the entropy coding stage for lossy depth compression.
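A minimal sketch of the crack-edge representation, assuming a tiny hand-made depth map: vertical and horizontal crack-edges are derived from depth discontinuities, and the constant-depth regions are then recovered from the cracks alone by a flood fill that never crosses a crack (the context-tree coding of the crack image itself is not reproduced here):

```python
from collections import deque

# assumed toy depth map with three constant-depth regions
depth = [[5, 5, 9, 9],
         [5, 5, 9, 9],
         [3, 3, 9, 9]]
H, W = len(depth), len(depth[0])

# a vertical crack-edge sits between horizontally adjacent pixels of
# different depth; a horizontal crack-edge between vertically adjacent ones
v_crack = [[depth[r][c] != depth[r][c+1] for c in range(W-1)] for r in range(H)]
h_crack = [[depth[r][c] != depth[r+1][c] for c in range(W)] for r in range(H-1)]

# the decoder recovers the constant-depth regions from the cracks alone,
# flood-filling without ever crossing a crack-edge
label = [[-1] * W for _ in range(H)]
regions = 0
for r0 in range(H):
    for c0 in range(W):
        if label[r0][c0] != -1:
            continue
        label[r0][c0] = regions
        q = deque([(r0, c0)])
        while q:
            r, c = q.popleft()
            for nr, nc, blocked in ((r, c+1, c+1 < W and v_crack[r][c]),
                                    (r, c-1, c >= 1 and v_crack[r][c-1]),
                                    (r+1, c, r+1 < H and h_crack[r][c]),
                                    (r-1, c, r >= 1 and h_crack[r-1][c])):
                if 0 <= nr < H and 0 <= nc < W and not blocked \
                        and label[nr][nc] == -1:
                    label[nr][nc] = regions
                    q.append((nr, nc))
        regions += 1
print(regions)  # 3 regions; one depth value per region completes the coding
```

Transmitting the crack image plus one depth value per recovered region thus determines the depth map exactly, which is why the three entities suffice for lossless coding.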
ISBN (print): 9781665441155
This paper describes a novel lossless compression method for point cloud geometry, building on a recent lossy compression method that aimed at reconstructing only the bounding volume of a point cloud. The proposed scheme starts by partially reconstructing the geometry from the two depthmaps associated with a single projection direction. The partial reconstruction obtained from the depthmaps is completed to a full reconstruction of the point cloud by sweeping section by section along one direction and encoding the points that were not contained in the two depthmaps. The main ingredient is a list-based encoding of the inner points (those situated inside the feasible regions) by a novel arithmetic three-dimensional context coding procedure that efficiently exploits the rotational invariances present in the input data. State-of-the-art bits-per-voxel results are obtained on benchmark datasets.
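The first stage can be sketched as follows, under simplifying assumptions (a handful of hand-picked voxels and projection along z; the list-based 3D context coder that encodes the remaining points is not reproduced here):

```python
# The nearest and farthest depth per (x, y) cell form the two depthmaps;
# the "inner" points strictly between them are what remains to be signalled.
points = {(0, 0, 1), (0, 0, 2), (0, 0, 4), (1, 1, 2), (1, 1, 3)}

near, far = {}, {}
for x, y, z in points:
    near[(x, y)] = min(near.get((x, y), z), z)
    far[(x, y)] = max(far.get((x, y), z), z)

inner = sorted(p for p in points if near[p[:2]] < p[2] < far[p[:2]])
print(inner)  # [(0, 0, 2)]: the two depthmaps already cover the other four
```

The two depthmaps thus account for the nearest and farthest point of every occupied cell, and the sweep only has to encode the inner points of each section.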
ISBN (print): 9781665432870
This paper describes a novel lossless point cloud compression algorithm that uses a neural network to estimate the coding probabilities of the occupancy status of voxels, conditioned on wide three-dimensional contexts around the voxel to be encoded. The point cloud is represented as an octree, with each resolution layer sequentially encoded and decoded using arithmetic coding, starting from the lowest resolution until the final resolution is reached. The occupancy probability of each voxel of the splitting pattern at each node of the octree is modeled by a neural network whose input is the already-encoded occupancy status of several octree nodes (belonging to the past and current resolutions), corresponding to a 3D context surrounding the node to be encoded. The algorithm has a fast and a slow version; the fast version selects several context voxels differently, which allows increased parallelization by sending larger batches of templates to be estimated by the neural network, at both the encoder and the decoder. The proposed algorithms yield state-of-the-art results on benchmark datasets. The implementation will be available at https://***/marmus12/nnctx
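A heavily simplified stand-in for the probability model is sketched below: a hand-written rule maps a small occupancy context to a probability (where the paper uses a trained neural network over wide multi-resolution 3D contexts), and the ideal arithmetic-coding cost of one occupancy decision follows directly from that probability:

```python
import math

def occupancy_prob(context_bits):
    """Toy context model; assumption: more occupied neighbours make
    occupancy of the current voxel more likely."""
    ones = sum(context_bits)
    return (ones + 1) / (len(context_bits) + 2)

def code_bits(occupied, context_bits):
    """Ideal arithmetic-coding cost, in bits, of one occupancy decision."""
    p = occupancy_prob(context_bits)
    return -math.log2(p if occupied else 1.0 - p)

ctx = [1, 1, 1, 0, 1, 1]               # a mostly occupied neighbourhood
print(round(code_bits(1, ctx), 3))     # 0.415: well under one bit
```

The better the model predicts occupancy from its context, the closer each voxel's cost gets to zero bits, which is exactly what the wide learned contexts buy.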
ISBN (print): 9781450366205
A new efficient and fast context-based lossless image coding method, named Multi-ctx, is presented in this paper. Its high performance stems from choosing predictors from a very large database, while the simple rules determining the contexts keep the computational complexity of the method low. The technique can easily be adapted to a specific class of applications, as the algorithm can be trained on a set of exemplary images. To demonstrate its potential, it was trained on a set of 45 images of various types and its performance compared with that of several widely used fast techniques; the new method indeed proves to be the best, outperforming even the CALIC approach.
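As a classic, well-documented example of the same general idea, a simple local context test selecting the predictor per pixel, the sketch below shows the median edge detector of JPEG-LS/LOCO-I; the actual Multi-ctx context rules and its trained predictor database are different and are not reproduced here:

```python
def med_predict(w, n, nw):
    """Predict a pixel from its West, North and North-West neighbours
    (the JPEG-LS median edge detector)."""
    if nw >= max(w, n):
        return min(w, n)      # edge context: take the lower neighbour
    if nw <= min(w, n):
        return max(w, n)      # edge context: take the higher neighbour
    return w + n - nw         # smooth context: planar prediction

print(med_predict(10, 50, 12))  # 48 (planar prediction)
```

Multi-ctx generalizes this pattern: instead of three fixed rules, many contexts each map to a predictor drawn from a large database fitted on training images.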
ISBN (print): 9781467360371
This paper investigates the improvements obtainable from context-based lossless audio coding. The approach is not popular in audio compression; hence, the research concentrates on static forward predictors optimized using the MMSE criterion. Two- and three-context algorithms are tested on 16 popular benchmark recordings. Savings due to inter-channel audio dependencies are also considered. It is shown that the context approach indeed has the potential to improve the data-compaction properties of audio coding algorithms.
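For a small predictor order, MMSE optimization of a static forward predictor reduces to solving the normal equations; the sketch below does this in closed form for order 2 on a synthetic sinusoidal frame (the signal and order are illustrative choices, not the paper's test material):

```python
import math

def mmse_predictor2(x):
    """Closed-form MMSE solution of the order-2 normal equations for a
    forward predictor x[i] ~ a1*x[i-1] + a2*x[i-2]."""
    n = len(x)
    s11 = sum(x[i-1] * x[i-1] for i in range(2, n))
    s12 = sum(x[i-1] * x[i-2] for i in range(2, n))
    s22 = sum(x[i-2] * x[i-2] for i in range(2, n))
    b1 = sum(x[i] * x[i-1] for i in range(2, n))
    b2 = sum(x[i] * x[i-2] for i in range(2, n))
    det = s11 * s22 - s12 * s12
    return (b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det

# a pure sinusoid obeys x[i] = 2cos(w)*x[i-1] - x[i-2] exactly, so the
# MMSE predictor should recover a1 = 2cos(0.1) and a2 = -1
x = [math.sin(0.1 * i) for i in range(1000)]
a1, a2 = mmse_predictor2(x)
resid = [x[i] - a1 * x[i-1] - a2 * x[i-2] for i in range(2, len(x))]
print(max(abs(e) for e in resid) < 1e-6)  # True
```

A context-based coder fits one such predictor per context class rather than one per frame, which is where the reported savings come from.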
ISBN (print): 9781479970094
This paper presents an improved Context-Based Adaptive Linear Prediction (CoBALP_ultra) lossless image coding algorithm. Three principal and a few auxiliary contexts are defined. The CoBALP stage is followed by a two-step NLMS predictor, and the accumulated prediction error is then calculated on the basis of a new formula. Experiments show that the new technique is indeed time-effective: it outperforms well-known methods of reasonable time complexity and is inferior only to extremely computationally complex ones.
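The NLMS stage can be sketched as a generic normalized LMS adaptive predictor; the order, step size, and one-dimensional test signal below are illustrative assumptions, and the CoBALP context stages that precede the NLMS step are not reproduced:

```python
import math

def nlms_residuals(x, order=4, mu=0.5, eps=1e-6):
    """Run an order-`order` NLMS predictor over x, returning residuals."""
    w = [0.0] * order
    res = []
    for i in range(order, len(x)):
        past = x[i-order:i][::-1]                 # most recent sample first
        e = x[i] - sum(wi * pi for wi, pi in zip(w, past))
        res.append(e)
        norm = sum(p * p for p in past) + eps     # energy normalization
        w = [wi + mu * e * pi / norm for wi, pi in zip(w, past)]
    return res

# a two-tone signal satisfies an exact order-4 linear recurrence, so the
# adaptive predictor's error should shrink as the coefficients converge
x = [math.sin(0.1 * i) + math.sin(0.25 * i) for i in range(1000)]
res = nlms_residuals(x)
print(sum(abs(e) for e in res[-50:]) < sum(abs(e) for e in res[:50]))
```

The energy normalization is what makes the step size robust to signal scale, which matters when the same predictor follows contexts of very different activity.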
A new lossless image coding method, competitive with the best-known image coding techniques in terms of efficiency and complexity, is suggested. It is based on an adaptive color space transform, adaptive context coding, and improved prediction of the pixel values of the image color components. Examples of applying the new algorithm to a set of standard images are given, and a comparison with known algorithms is performed.
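The paper's color transform is chosen adaptively; as a fixed point of reference, the sketch below shows one well-known reversible color transform (the JPEG2000 RCT), whose integer lifting structure guarantees exact inversion and is therefore usable in lossless coding:

```python
def rct_forward(r, g, b):
    y = (r + 2 * g + b) >> 2          # luma-like average (floor division)
    u = r - g                          # chroma differences
    v = b - g
    return y, u, v

def rct_inverse(y, u, v):
    g = y - ((u + v) >> 2)            # undo the lifting steps in reverse
    r = u + g
    b = v + g
    return r, g, b

print(rct_inverse(*rct_forward(200, 100, 50)))  # (200, 100, 50)
```

Any adaptive scheme must preserve this exact-invertibility property per block or per image for the overall coder to remain lossless.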
The novel ideas for lossless audio coding analyzed in this paper concern forward predictor adaptation: the optimization of predictors on the basis of the zero-order entropy and MMAE criteria, and context-based sound coding. Direct use of the entropy criterion leads to exponential growth of the optimization procedure; hence, a suboptimal algorithm of polynomial complexity is proposed. It is shown that, on average, the new types of predictors are better than those obtained by the MMSE technique, while two- and three-context systems are on average better than a single-predictor one. It also appears that the 7-bit PARCOR coefficients in the MPEG-4 ALS standard have insufficient precision for some predictor lengths, and that for very long frames the coding results improve with the predictor rank practically without limit.
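The idea of judging a predictor by the zero-order entropy of its integer residuals, rather than by squared error, can be sketched as follows; the candidate predictor set and test signal are hypothetical, and the paper's polynomial-time suboptimal search is not reproduced:

```python
import math
from collections import Counter

def zero_order_entropy(symbols):
    """Empirical zero-order entropy in bits per symbol."""
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in Counter(symbols).values())

def residuals(x, coeffs):
    """Integer residuals of a rounded linear forward predictor."""
    order = len(coeffs)
    return [x[i] - round(sum(a * x[i - k - 1] for k, a in enumerate(coeffs)))
            for i in range(order, len(x))]

x = [round(100 * math.sin(0.05 * i)) for i in range(2000)]   # quantized tone
candidates = [(1.0,), (2.0, -1.0), (1.0, 0.0, 0.0)]          # assumed set
best = min(candidates, key=lambda c: zero_order_entropy(residuals(x, c)))
print(best)  # the second-order predictor matches the signal's recurrence
```

Minimizing residual entropy targets the actual code length of the subsequent entropy coder directly, which is why it can beat MSE-optimal predictors on average.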