To address the efficiency problem of organizing massive multiresolution vector data in the DGGS field, we propose a vector data organization method based on a progressive tile structure in DGGS. The method combines DGGS with the tile structure traditionally used in the field of GIS. This structure supports tile scheduling and expresses vector objects progressively within tiles, which reduces the amount of data transferred and improves the access and construction efficiency of vector objects while preserving the efficiency of data scheduling. To validate this design, we first resampled global 1:100,000 vector data containing multiple topological types into multiresolution vector data with resolutions from 1:559,000 to 1:35,000; after conversion to the tile structure, the data volume is 22,312 MB. We then carried out three experiments: data volume, data construction, and data access. The results show that, compared with the organization method based on the traditional tile-pyramid structure, our method achieves higher construction and access efficiency as well as smaller data storage volume. We therefore believe that this method provides an efficient way to organize massive multiresolution vector data based on DGGS and also offers an approach to organizing massive multiresolution data in the field of GIS.
While sensing imagery in space missions has broad applications, growing image resolution and data volume pose a major challenge because of limited deep-space channel capacities. To address this challenge, semantics-aware image compression is a promising direction. This paper explores lossy compression in the ultra-low-rate regime, a departure from the high-fidelity-oriented tradition. Specifically, we propose an ultra-low-rate deep image compression (DIC) codec that synthesizes multiple neural computing techniques, including a style generative adversarial network (GAN), inverse GAN mapping, and contrastive disentangled representation learning. In addition, a residual-based progressive encoding framework is proposed to enable smooth transitions from the ultra-low-rate regime to the near-lossless regime. Experiments on the FFHQ and DOTA datasets demonstrate that, compared with existing DICs, the proposed DIC can push the minimum rate boundary by about one order of magnitude while preserving semantic attributes and maintaining high perceptual quality. We further elaborate on the design considerations for cross-rate-regime progressive DIC. Our study confirms that a semantically disentangled DIC holds promise for bridging multiple rate regimes.
ISBN: 9781665441155 (print)
We present PLONQ, a progressive neural image compression scheme which pushes the boundary of variable-bitrate compression by allowing quality-scalable coding with a single bitstream. In contrast to existing learned variable-bitrate solutions, which produce a separate bitstream for each quality, it enables easier rate control and requires less storage. Leveraging the latent-scaling-based variable-bitrate solution, we introduce nested quantization, a method that defines multiple quantization levels with nested quantization grids and progressively refines all latents from the coarsest to the finest quantization level. To achieve finer progressiveness between any two quantization levels, latent elements are incrementally refined with an importance ordering defined in the rate-distortion sense. To the best of our knowledge, PLONQ is the first learning-based progressive image coding scheme, and it outperforms SPIHT, a well-known wavelet-based progressive image codec.
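The nested-quantization idea above can be illustrated with a minimal sketch: quantize the residual of a latent on successively finer grids whose step sizes nest (each divides the previous), so each layer refines the previous reconstruction without invalidating it. This is an illustrative toy, not the PLONQ implementation; the step sizes and function names are assumptions made for the example.

```python
import numpy as np

def nested_quantize(latent, steps=(8.0, 4.0, 2.0, 1.0)):
    """Progressively quantize `latent` on nested grids (each step size
    divides the previous one), yielding one refinement layer per level.
    Returns the per-level refinement layers and the final reconstruction."""
    latent = np.asarray(latent, dtype=float)
    layers = []
    recon = np.zeros_like(latent)
    for s in steps:
        # Quantize the current residual on this level's grid; because the
        # grids nest, earlier coarse layers remain valid prefixes.
        q = np.round((latent - recon) / s) * s
        recon = recon + q
        layers.append(q)
    return layers, recon
```

Decoding any prefix of the layers gives a valid coarse reconstruction; consuming all layers bounds the error by half the finest step.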
ISBN: 9781728163956 (print)
Point clouds are gaining importance as the format to represent complex 3D objects and scenes, offering high user immersion and interaction, although at the cost of requiring massive data. Scalable coding is an important feature for point cloud coding, especially for real-time applications, where fast and bitrate-efficient access to a decoded point cloud is important; however, this issue is still rather unexplored in the literature. With the rise of deep learning methods as a promising solution for efficient coding, this paper proposes the first deep-learning-based point cloud geometry scalable coding solution. Experimental results show that the proposed scalable coding solution consistently outperforms the MPEG standard for static point cloud geometry coding. In this way, a new research path opens for point cloud scalable coding technology.
In this paper, we propose an electrocardiogram (ECG) signal compression algorithm based on the wavelet transform and a new modified set partitioning in hierarchical trees (SPIHT) algorithm. The proposed algorithm includes preprocessing of the approximation subband by mean removal before the coding step. Three other modifications are also introduced to the SPIHT algorithm. The first is a new initialization of the list of insignificant points (LIP) and the list of insignificant sets (LIS); the second concerns the position at which new type-A entries are inserted into the LIS; and the last identifies and avoids the redundancy in checking type-B entries present in the original method. The proposed coding algorithm is applied to ECG signal compression, and the numerical results obtained on the MIT-BIH database show that the proposed SPIHT algorithm outperforms the original method and other existing methods.
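The mean-removal preprocessing described above can be sketched in a few lines: decompose the signal with a wavelet transform, subtract the mean of the approximation subband before coding, and transmit the mean as side information. This is a minimal illustration using a one-level Haar transform, not the authors' codec; the function names are illustrative.

```python
import numpy as np

def haar_dwt(signal):
    """One-level Haar wavelet transform: returns the approximation and
    detail subbands of an even-length 1-D signal."""
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

def preprocess_approximation(approx):
    """Mean-removal preprocessing of the approximation subband: the coder
    sees a zero-mean residual (smaller magnitudes, fewer significant bits
    at early SPIHT thresholds); the mean is sent as side information."""
    mean = approx.mean()
    return approx - mean, mean
```

The decoder simply adds the transmitted mean back to the decoded residual before the inverse transform.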
ISBN: 9781538641606 (print)
A high dynamic range (HDR) image has a larger luminance range than a conventional low dynamic range (LDR) image, which is more consistent with the human visual system (HVS). Recently, the JPEG committee released a new HDR image compression standard, JPEG XT, which decomposes an input HDR image into a base layer and an extension layer. However, this method does not make full use of the HVS, wasting bits on regions imperceptible to human eyes. In this paper, a visual-saliency-based HDR image compression scheme is proposed. The saliency map of the tone-mapped HDR image is first extracted and then used to guide extension-layer encoding, so that compression quality adapts to the saliency of the coded region. Extensive experimental results show that our method outperforms JPEG XT profiles A, B, and C while retaining JPEG compatibility. Moreover, our method can provide progressive coding of the extension layer.
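The core allocation step of such a saliency-guided scheme can be sketched as a mapping from per-block saliency scores to quality factors, so salient regions are quantized more finely. This is a toy illustration of the idea, not the paper's method; the quality range and function name are assumptions.

```python
import numpy as np

def saliency_to_quality(saliency, q_min=20, q_max=90):
    """Map per-block saliency scores in [0, 1] to JPEG-style quality
    factors: salient blocks receive higher quality (finer quantization),
    non-salient blocks receive coarser quantization to save bits."""
    s = np.clip(np.asarray(saliency, dtype=float), 0.0, 1.0)
    return np.round(q_min + s * (q_max - q_min)).astype(int)
```

An encoder would then quantize each extension-layer block with its assigned quality factor instead of a single global setting.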
ISBN: 9781424492688 (print)
Wireless data-gathering networks are often tasked with gathering correlated data under severe energy constraints. The use of simple channel codes with source-channel decoding can potentially provide good performance with low energy consumption. Here we consider progressive coding in multi-hop networks, where an intermediate node decodes its received noisy codewords. The estimated information is concatenated with the node's own information word and encoded; the resulting progressively encoded codeword is then transmitted to the next node. In non-progressive coding, the node simply forwards the received noisy codewords along with its own encoded data. We compare the performance of two codes with low decoding complexity, Repeat-Accumulate (RA) and Low-Density Parity-Check (LDPC) codes, in combination with two progressive coding schemes. Progressive channel coding uses only channel decoding at the intermediate node, while progressive source-channel coding uses source-channel decoding, exploiting the probabilistic dependency of the information words (caused by the correlation structure of the data) jointly with the deterministic dependency induced by channel coding. Two decoding schemes are considered at the data center: channel decoding only and iterative source-channel decoding. In simulation experiments, we consider a line network topology with systematic RA and LDPC coding. Results show that progressive coding performs better than non-progressive coding, and that RA codes outperform LDPC codes with lower computational complexity, for both channel-decoding-only and iterative source-channel decoding.
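The decode-concatenate-re-encode step at an intermediate node can be sketched with a toy repetition code standing in for the RA/LDPC codes the paper actually uses. Everything here is illustrative, assuming a rate-1/3 repetition code and majority-vote decoding; none of it reflects the authors' code constructions.

```python
import numpy as np

def rep_encode(bits, r=3):
    """Rate-1/r repetition encoder (toy stand-in for RA/LDPC encoding)."""
    return np.repeat(np.asarray(bits, dtype=int), r)

def rep_decode(codeword, r=3):
    """Majority-vote decoder for the repetition code."""
    return (np.asarray(codeword).reshape(-1, r).sum(axis=1) > r / 2).astype(int)

def progressive_relay(rx_codeword, own_bits, r=3):
    """Intermediate node in progressive coding: decode the received noisy
    codeword, concatenate the node's own information bits, and re-encode
    the combined word for transmission to the next hop."""
    est = rep_decode(rx_codeword, r)
    combined = np.concatenate([est, np.asarray(own_bits, dtype=int)])
    return rep_encode(combined, r)
```

In non-progressive coding, by contrast, the node would forward `rx_codeword` unchanged alongside `rep_encode(own_bits)`.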
ISBN: 9781509021758 (print)
Compression of arbitrary 3D geometry, such as a human figure in 3D space, is challenging. Existing 3D representations such as point clouds require encoding of input-specified 3D coordinates, resulting in a large overhead. In this paper, assuming that there exists an underlying smooth 2D manifold in 3D space that describes the geometric shape of a target object, we develop a new progressive 3D geometry representation that signal-adaptively identifies new samples on the manifold surface and encodes them efficiently as graph signals. Specifically, at each iteration, using previously encoded samples in 3D space, the encoder and decoder first synchronously interpolate a continuous sampling kernel (a 3D mesh), an approximation of the target surface. We next distribute new sample locations on the continuous kernel based on locally computed kernel curvatures, and compute the signed distances between the sample locations and the target surface as sample values. Finally, we connect the new discrete samples into a graph for graph-based transform coding of the sample values, which are transmitted to the decoder to refine the 3D reconstruction. Experimental results show that our coding scheme significantly outperforms an existing mesh-based approach in the low-bitrate region for two different datasets.
ISBN: 9789380544168 (print)
A new coding method that exploits intra-subband and inter-subband correlation of medical images in the transform domain is presented. Subbands corresponding to each direction are combined and treated as a set of hierarchical trees. If a set of hierarchical trees becomes significant, it is divided into four sets, and each set is processed separately. The image is encoded progressively, and the user can stop decoding at any instant, depending upon the extent of clarity required. Experiments are conducted on CT images. The results show a significant improvement in bit rate for the required peak signal-to-noise ratio (PSNR) compared with both progressive and region-based image coding methods.
This paper proposes an adaptive block-based compressed sensing (ABCS) technique to build a new progressive image coding scheme in which both image acquisition and reconstruction are carried out in two layers. At the base layer, the original image is sampled and restored by the block-based compressed sensing (BCS) method at a low, fixed measurement rate. At the enhancement layer, all blocks are then re-sampled at different rates according to a block classification. The final reconstruction of a block at the enhancement layer is performed in multiple stages, where each stage knows only a part of the sampled coefficients. Experimental results show that the proposed ABCS method outperforms the BCS method; in particular, it produces better visual quality in regions containing edges, patterns, and textures.
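The two-layer measurement allocation described above can be sketched as follows: a fixed base-layer rate for every block, plus an enhancement-layer budget distributed by a block classifier. This is a minimal sketch, assuming random Gaussian measurements and block variance as a stand-in for the paper's edge/texture classification; the function names and rate values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bcs_measure(block, rate):
    """Compressed-sensing measurement of a flattened image block with a
    random Gaussian matrix at the given measurement rate (fraction of
    the block size). Returns the measurements and the matrix."""
    x = np.asarray(block, dtype=float).ravel()
    m = max(1, int(round(rate * x.size)))
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)
    return phi @ x, phi

def enhancement_rates(blocks, base_rate=0.1, extra_budget=0.2):
    """Toy block classification: split an extra per-block measurement
    budget across blocks in proportion to their variance (a proxy for
    edge/texture activity), on top of the fixed base-layer rate."""
    var = np.array([np.asarray(b, dtype=float).var() for b in blocks])
    total = var.sum()
    share = var / total if total > 0 else np.full(len(blocks), 1.0 / len(blocks))
    return base_rate + extra_budget * len(blocks) * share
```

Smooth blocks thus stay near the base rate, while textured blocks receive most of the enhancement-layer measurements.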