ISBN (print): 9798350386851; 9798350386844
This paper discusses memory issues for advertising billboards built with multi-pigment color electronic paper (e-paper). Enlarging an electronic billboard increases the size of the memory buffer and the flash memory used to store display content. Because of the specific display mechanism of color e-paper, currently available image compression methods are not suitable for this application. This paper presents an efficient image compression method for color e-paper with a low-cost decompression mechanism; it preserves good display quality while significantly reducing the total image buffer size.
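Since multi-pigment e-paper drives each pixel to one of only a few pigment states, one plausible low-cost scheme in this spirit is run-length encoding over palette indices, whose decoder is a single pass cheap enough for a display controller. The sketch below is an illustrative assumption, not the paper's actual method:

```python
# Hypothetical sketch: run-length encoding over pigment-palette indices.
# The palette size and one-byte run cap are illustrative assumptions.

def rle_encode(indices):
    """Encode a flat list of palette indices as (run_length, index) pairs."""
    runs = []
    prev, count = indices[0], 1
    for idx in indices[1:]:
        if idx == prev and count < 255:   # cap run length at one byte
            count += 1
        else:
            runs.append((count, prev))
            prev, count = idx, 1
    runs.append((count, prev))
    return runs

def rle_decode(runs):
    """Decoding is a single sequential pass -- cheap for display hardware."""
    out = []
    for count, idx in runs:
        out.extend([idx] * count)
    return out

# Example: one scanline of a 4-pigment (2-bit) billboard image.
scanline = [0] * 120 + [3] * 40 + [1] * 96   # white, red, black regions
encoded = rle_encode(scanline)
assert rle_decode(encoded) == scanline
print(f"{len(scanline)} pixels -> {len(encoded)} runs")
```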
Future high-density and high channel count neural interfaces that enable simultaneous recording of tens of thousands of neurons will provide a gateway to study, restore and augment neural functions. However, building such technology within the bit-rate limit and power budget of a fully implantable device is challenging. The wired-OR compressive readout architecture addresses the data deluge challenge of a high channel count neural interface using lossy compression at the analog-to-digital interface. In this article, we assess the suitability of wired-OR for several steps that are important for neuroengineering, including spike detection, spike assignment and waveform estimation. For various wiring configurations of wired-OR and assumptions about the quality of the underlying signal, we characterize the trade-off between compression ratio and task-specific signal fidelity metrics. Using data from 18 large-scale microelectrode array recordings in macaque retina ex vivo, we find that for an event SNR of 7-10, wired-OR correctly detects and assigns at least 80% of the spikes with at least 50x compression. The wired-OR approach also robustly encodes action potential waveform information, enabling downstream processing such as cell-type classification. Finally, we show that by applying an LZ77-based lossless compressor (gzip) to the output of the wired-OR architecture, 1000x compression can be achieved over the baseline recordings.
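The final lossless stage is concrete enough to sketch: gzip's LZ77-based coding applied to a sparse digitized readout stream. The snippet below uses a synthetic spike raster as a stand-in for the wired-OR output, whose actual bitstream format is not reproduced here:

```python
# Minimal sketch of the lossless post-compression step: an LZ77-family
# compressor (zlib, same algorithm family as gzip) applied to a sparse
# event stream. The raster below is synthetic, not real wired-OR output.
import zlib
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 1024, 20_000
raster = np.zeros((n_channels, n_samples), dtype=np.uint8)
# Sparse events: roughly 0.1% of samples carry a spike (assumed rate).
events = rng.random(raster.shape) < 0.001
raster[events] = 1

raw = raster.tobytes()
compressed = zlib.compress(raw, level=9)
print(f"compression ratio: {len(raw) / len(compressed):.0f}x")
```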
ISBN (print): 9798350399462
Cloud-based data processing latency depends mainly on the transmission delay of data to the cloud and on the data processing algorithm used. To minimize the transmission delay, it is important to compress the transferred data without reducing its quality. When using data compression algorithms, it is important to validate their impact on detection quality. This work evaluates the effects of image compression and transmission over wireless interfaces on state-of-the-art neural networks. To this end, a modern image processing platform for next-generation automotive processing architectures, as used in software-defined vehicles, is introduced. The impacts of different image encoders as well as data transmission parameters are investigated and discussed.
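As an illustration of this kind of study, the sketch below re-encodes a camera frame at several JPEG quality levels and compares a detector's output against the uncompressed reference. OpenCV and the `detect`/`score` callables are assumed stand-ins, not the paper's platform or networks:

```python
# Illustrative evaluation sketch: JPEG encode/decode round trips at several
# quality levels, scored against detections on the raw frame. The `detect`
# and `score` callables are hypothetical placeholders.
import cv2
import numpy as np

def recompress(frame: np.ndarray, quality: int) -> np.ndarray:
    """Simulate lossy transmission by a JPEG encode/decode round trip."""
    ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
    assert ok
    return cv2.imdecode(buf, cv2.IMREAD_COLOR)

def sweep(frame, detect, score, qualities=(95, 75, 50, 25)):
    reference = detect(frame)                      # detections on raw frame
    results = {}
    for q in qualities:
        degraded = detect(recompress(frame, q))
        results[q] = score(reference, degraded)    # e.g. mAP against raw
    return results
```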
ISBN (print): 9798350304817
Distributed data-parallel (DDP) training improves overall application throughput as multiple devices train on a subset of data and aggregate updates to produce a globally shared model. The periodic synchronization at each iteration incurs considerable overhead, exacerbated by the increasing size and complexity of state-of-the-art neural networks. Although many gradient compression techniques have been proposed to reduce communication cost, the ideal compression factor that leads to maximum speedup or minimum data exchange remains an open problem, since it varies with the quality of compression, model size and structure, hardware, network topology, and bandwidth. We propose GraVAC, a framework that dynamically adjusts the compression factor throughout training by evaluating model progress and assessing the gradient information loss associated with compression. GraVAC works in an online, black-box manner without any prior assumptions about a model or its hyperparameters, while achieving the same or better accuracy than dense SGD (i.e., no compression) in the same number of iterations/epochs. Compared to using a static compression factor, GraVAC reduces end-to-end training time for ResNet101, VGG16 and LSTM by 4.32x, 1.95x and 6.67x, respectively. Compared to other adaptive schemes, our framework provides 1.94x to 5.63x overall speedup.
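A rough sketch of the adaptive idea (not GraVAC's exact algorithm) is shown below: compress gradients with top-k sparsification, estimate the information loss as the fraction of gradient energy retained, and relax the compression factor whenever retention drops below a target:

```python
# Assumed sketch of adaptive gradient compression: top-k sparsification
# with a compression factor adjusted from retained gradient energy.
import torch

def topk_compress(grad: torch.Tensor, cf: float):
    """Keep the largest 1/cf fraction of entries; cf is the compression factor."""
    flat = grad.flatten()
    k = max(1, int(flat.numel() / cf))
    _, idx = torch.topk(flat.abs(), k)
    sparse = torch.zeros_like(flat)
    sparse[idx] = flat[idx]
    return sparse.view_as(grad)

def adapt_cf(grad, cf, target=0.9, step=2.0):
    """Lower cf if too much gradient energy is lost, raise it otherwise."""
    compressed = topk_compress(grad, cf)
    retained = compressed.norm() ** 2 / (grad.norm() ** 2 + 1e-12)
    if retained < target:
        cf = max(1.0, cf / step)     # compress less aggressively
    else:
        cf = cf * step               # safe to compress more
    return compressed, cf
```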
Quantizing one single deep neural network into multiple compression rates (precisions) has been recently considered for flexible deployments in real-world scenarios. In this paper, we propose a novel scheme that achie...
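The preview above is truncated; as a generic illustration of serving one network at multiple precisions, the sketch below uniformly quantizes a shared weight tensor to several bit-widths on demand. It illustrates the flexible-deployment setting only, not the paper's proposed scheme:

```python
# Generic illustration (assumed, not the paper's method): one stored
# full-precision model, quantized to multiple bit-widths at deploy time.
import torch

def quantize(weights: torch.Tensor, bits: int) -> torch.Tensor:
    """Symmetric uniform quantization to `bits` bits, then dequantize."""
    qmax = 2 ** (bits - 1) - 1
    scale = weights.abs().max() / qmax
    return torch.clamp(torch.round(weights / scale), -qmax, qmax) * scale

shared = torch.randn(256, 256)            # one shared full-precision tensor
deployments = {b: quantize(shared, b) for b in (8, 6, 4, 2)}
for b, w in deployments.items():
    err = (w - shared).norm() / shared.norm()
    print(f"{b}-bit: relative error {err:.3f}")
```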
ISBN (print): 9798350390155; 9798350390162
Distributing the computational workload of neural networks across cloud servers and local devices is an effective strategy for deploying resource-intensive deep models to edge devices. Compressing the deep features without compromising performance is therefore crucial for saving server storage and transmission bandwidth. However, existing feature compression approaches are model- or task-specific and require training from scratch. In this paper, we propose a general and efficient framework for compressing deep features without requiring any prior knowledge of the semantics or task of the features. Our key observation is that different parts of the feature map have different importance levels for a specific task. We can compress the less relevant parts more aggressively to achieve a high compression rate, while preserving performance by applying a lower compression ratio to the more important parts. Following this idea, we use the activation map generated by GradCAM [1] to classify each deep feature channel as essential or peripheral. To improve classification accuracy, we utilise semantic segmentations to provide natural boundaries for scoring each semantic channel. Peripheral channels are compressed using binary compression to achieve a high compression rate, while essential channels are compressed using mask compression. To effectively separate the essential and peripheral channels for a given input feature, we adopt a data-driven approach that identifies the essential channels from datasets. Experimental results demonstrate that our method outperforms state-of-the-art feature compression methods and generalizes across various deep models.
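A schematic sketch of the channel-splitting idea follows: score each feature channel with a Grad-CAM-style spatial importance map, keep the top-scoring essential channels at higher fidelity, and binarize the peripheral rest. The scoring rule and the 1-bit peripheral coding are illustrative assumptions, not the paper's exact pipeline:

```python
# Assumed sketch of essential/peripheral channel splitting driven by a
# spatial importance map; not the paper's exact scoring or coding scheme.
import torch

def split_channels(features: torch.Tensor, cam: torch.Tensor, keep_ratio=0.25):
    """features: (C, H, W); cam: (H, W) importance map in [0, 1]."""
    scores = (features.abs() * cam).sum(dim=(1, 2))       # per-channel score
    k = max(1, int(keep_ratio * features.shape[0]))
    essential_idx = torch.topk(scores, k).indices
    mask = torch.zeros(features.shape[0], dtype=torch.bool)
    mask[essential_idx] = True
    return mask

def compress(features, mask):
    essential = features[mask]                        # kept at high fidelity
    peripheral_sign = features[~mask].sign()          # 1-bit "binary" coding
    peripheral_scale = features[~mask].abs().mean()   # single shared scale
    return essential, peripheral_sign, peripheral_scale
```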
ISBN (print): 9798350338416
EEG signals are known for their high temporal resolution and data volume, posing challenges in storage, transmission, and analysis. With wearable mobile BCI devices increasingly replacing large medical-grade recording devices, the need for efficient on-device processing and wireless transmission has grown. We propose a novel analyzability-aware EEG compression scheme optimized for machine consumption, facilitating efficient, real-time wearable processing while allowing reconstruction for detailed analysis and human understanding. Our method employs a wavelet-based compression technique that considers the compression factor, reconstruction error, and the impact of compression on on-device processing and data transmission. Experimental results using publicly available datasets demonstrate that machine learning models trained on data compressed with our method achieve performance comparable to the winning solution of BCI Competition II. Moreover, our approach offers multiple compression configurations suitable for various wearable computing scenarios.
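A minimal sketch of the wavelet-based core, under assumptions, keeps only the largest wavelet coefficients (controlling the compression factor) and measures reconstruction error. The wavelet family, decomposition level, and keep-rule are illustrative; the analyzability-aware criteria are not modeled:

```python
# Assumed sketch: wavelet decomposition, keep the top fraction of
# coefficients by magnitude, reconstruct, and report relative error.
import numpy as np
import pywt

def compress_eeg(signal: np.ndarray, keep: float = 0.1, wavelet="db4", level=5):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([c.ravel() for c in coeffs])
    threshold = np.quantile(np.abs(flat), 1.0 - keep)   # keep top fraction
    thresholded = [np.where(np.abs(c) >= threshold, c, 0.0) for c in coeffs]
    recon = pywt.waverec(thresholded, wavelet)[: len(signal)]
    nrmse = np.linalg.norm(signal - recon) / np.linalg.norm(signal)
    return thresholded, recon, nrmse

fs = 250                                            # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)  # toy signal
_, recon, err = compress_eeg(eeg, keep=0.1)
print(f"kept 10% of coefficients, relative error {err:.3f}")
```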
The k-Minimum Values (KMV) data sketch algorithm stores the k least hash keys generated by hashing the items in a dataset. We show that compression based on ordering the keys and encoding successive differences can of...
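The preview is truncated, but the stated idea is concrete: the k smallest hash keys are order statistics, so sorting them and coding successive differences yields small integers that pack compactly. The varint coding below is one common choice, assumed here for illustration:

```python
# Sketch of a KMV sketch plus delta + varint coding of the sorted keys.
# The hash function and varint format are assumptions for illustration.
import hashlib

def kmv_sketch(items, k=64):
    """Keep the k smallest 64-bit hash keys of the input items."""
    keys = sorted({int.from_bytes(hashlib.blake2b(
        str(x).encode(), digest_size=8).digest(), "big") for x in items})
    return keys[:k]

def delta_varint(keys):
    """Delta-encode sorted keys, then varint-encode each difference."""
    out = bytearray()
    prev = 0
    for key in keys:
        delta = key - prev
        prev = key
        while True:                      # LEB128-style 7-bit groups
            byte = delta & 0x7F
            delta >>= 7
            out.append(byte | (0x80 if delta else 0))
            if not delta:
                break
    return bytes(out)

sketch = kmv_sketch(range(100_000), k=64)
packed = delta_varint(sketch)
print(f"{64 * 8} bytes raw -> {len(packed)} bytes delta+varint")
```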
ISBN (print): 9798350349405; 9798350349399
Effective compression of 360-degree images, also referred to as omnidirectional images (ODIs), is of high interest for various virtual reality (VR) and related applications. 2D image compression methods ignore the equator-biased nature of ODIs and fail to address oversampling near the poles, leading to inefficient compression when applied to ODIs. We present a new learned saliency-aware 360-degree image compression architecture that prioritizes bit allocation to more significant regions, considering the unique properties of ODIs. By assigning fewer bits to less important regions, significant data size reduction can be achieved while maintaining high visual quality in the significant regions. To the best of our knowledge, this is the first study to propose an end-to-end variable-rate model for compressing 360-degree images by leveraging saliency information. The results show significant bit-rate savings over state-of-the-art learned and traditional ODI compression methods at similar perceptual visual quality. Supplementary materials are available at [Supplementary URL].
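A toy sketch of saliency-driven bit allocation (not the paper's learned codec) quantizes image blocks more coarsely where a saliency map is low, so bits concentrate in salient regions; real ODI codecs operate on learned latents rather than raw pixels:

```python
# Assumed toy model: per-block quantization step scaled by a saliency map.
# The equator-biased cosine prior stands in for a learned saliency model.
import numpy as np

def saliency_quantize(image: np.ndarray, saliency: np.ndarray,
                      block=16, q_min=2.0, q_max=32.0):
    """image, saliency: (H, W) arrays; saliency in [0, 1], higher = keep more."""
    out = np.empty_like(image, dtype=np.float64)
    h, w = image.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            s = saliency[y:y + block, x:x + block].mean()
            step = q_max - s * (q_max - q_min)     # salient -> finer step
            tile = image[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = np.round(tile / step) * step
    return out

h, w = 256, 512                                    # equirectangular aspect
img = np.random.rand(h, w) * 255
lat = np.linspace(-np.pi / 2, np.pi / 2, h)[:, None]
sal = np.broadcast_to(np.cos(lat), (h, w))         # equator-biased prior
coded = saliency_quantize(img, sal)
```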
This paper is based on the application of lossless compression algorithms in image compression, aiming to solve problems such as insufficient storage space, low transmission efficiency, and heavy data processing bur...
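The abstract is cut off, but as a generic illustration of lossless image coding for storage and transmission, the sketch below compares raw pixel bytes against zlib alone and against PNG-style per-scanline delta filtering followed by zlib; the paper's actual algorithms are unknown:

```python
# Generic lossless-coding comparison (assumed, not the paper's method):
# raw bytes vs. zlib vs. horizontal delta filtering + zlib.
import zlib
import numpy as np

img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))   # smooth toy image
raw = img.tobytes()

plain = zlib.compress(raw, level=9)
# Horizontal delta filter: smooth images become runs of small values,
# mirroring the per-scanline filtering PNG applies before DEFLATE.
deltas = np.diff(img.astype(np.int16), axis=1, prepend=0).astype(np.uint8)
filtered = zlib.compress(deltas.tobytes(), level=9)

print(f"raw {len(raw)} B, zlib {len(plain)} B, delta+zlib {len(filtered)} B")
```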