The Internet of Things (IoT) is a collection of physical entities that together form a large-scale communication network among IoT devices. The exchange of data between these IoT devices may lead to modification of data and transmissi...
ISBN (digital): 9781665484329
ISBN (print): 9781665484329
Smart grids typically incorporate a communication networking layer onto the electric power grid to exchange data and information between the intelligent electronic devices and the supervisory control and data acquisition (SCADA) system. Smart grid monitoring and control requires the acquisition and transmission of a large volume of data over such communication networks. This process sets new requirements for the existing data communication networks within the smart grid regarding network transmission capacity and data storage. Data compression is an effective means to compress the power quality (PQ) data and hence save the cost of data storage while meeting the communication network transmission capacity limits. Previous work on PQ data compression uses wavelet multiresolution signal decomposition into various decomposition levels. Thresholding is then applied to each wavelet decomposition level to capture the features needed for signal reconstruction. The quality of compression is significantly affected by the choice of the wavelet basis function as well as the choice of the number of decomposition levels. This paper investigates the most suitable wavelet basis function and the most suitable number of wavelet decomposition levels to achieve high compression of PQ disturbances. The study considered 80 wavelets and four categories of PQ disturbances: sag, swell, interruption and notches. The results are presented and conclusions drawn.
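A minimal sketch of the wavelet multiresolution decomposition and per-level thresholding described above, assuming PyWavelets and NumPy; the wavelet ('db4'), the number of decomposition levels, the keep ratio, and the synthetic sag signal are illustrative placeholders, not the choices evaluated in the paper.

```python
import numpy as np
import pywt

def compress_pq_signal(signal, wavelet="db4", levels=5, keep_ratio=0.05):
    """Decompose, threshold each detail level, and report a coefficient-count compression ratio."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    thresholded = [coeffs[0]]                      # keep the approximation coefficients intact
    kept = coeffs[0].size
    for detail in coeffs[1:]:
        # keep only the largest-magnitude coefficients at this level (hard threshold)
        thr = np.quantile(np.abs(detail), 1.0 - keep_ratio)
        d = pywt.threshold(detail, thr, mode="hard")
        kept += np.count_nonzero(d)
        thresholded.append(d)
    reconstructed = pywt.waverec(thresholded, wavelet)[: signal.size]
    ratio = signal.size / max(kept, 1)             # nonzero-coefficient compression ratio
    return reconstructed, ratio

# Example: a synthetic 50 Hz waveform with a voltage sag between samples 800 and 1200
t = np.linspace(0, 0.2, 2000)
v = np.sin(2 * np.pi * 50 * t)
v[800:1200] *= 0.5
rec, cr = compress_pq_signal(v)
print(f"compression ratio ~{cr:.1f}, max reconstruction error {np.max(np.abs(rec - v)):.4f}")
```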
Data transmission over the mobile voice channel (DoV) suffers from vocoder compression and smartphone de-noising. Most previous algorithms ignore these two factors, leading to the generated signal being severely distorted ...
After encoding audio, image, video, or text data, the output always takes the form of a bit stream. Therefore, whether the bit stream can be further compressed is an interesting question. Many approaches, including direct enco...
ISBN (digital): 9781665496209
ISBN (print): 9781665496209
Domain adaptation is an efficient technique to improve the performance of a system by adapting a pre-trained model to the given input data. The adaptation technique has generally been applied in conventional video codecs. For neural network-based systems, the encoder may adapt the decoder to the input data by fine-tuning a pre-trained model present at the decoder side. The weight update is then transferred to the decoder and the updated model is used to decode the bitstream. However, due to the large number of parameters in deep neural networks, the overhead of the weight update may diminish the gain from the adaptation technique. In recent years, various methods have been proposed to reduce this overhead without significantly compromising the gain. In this paper, we propose an adaptive multi-scale progressive probability model for lossless image compression. The proposed method uses the data that has already been processed at the inference stage to fine-tune the probability model. Importantly, the decoder can apply the fine-tuning by itself, so only a small adaptation overhead is needed to help the decoder perform the fine-tuning. The proposed method achieves up to 0.28 bits-per-pixel (BPP) reduction on four benchmark datasets compared to the state-of-the-art method.
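A minimal sketch of the decoder-side adaptation idea described above, assuming PyTorch; ContextModel, the three-pixel causal context, and the fine-tuning schedule are hypothetical stand-ins for the paper's multi-scale progressive probability model. The point illustrated is that encoder and decoder can both fine-tune on already-decoded data, so no weight update needs to be signalled.

```python
import torch
import torch.nn as nn

class ContextModel(nn.Module):
    """Predicts a distribution over 256 symbol values from three causal neighbours."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 256))

    def forward(self, ctx):                        # ctx: (N, 3) values in [0, 1]
        return self.net(ctx)                       # logits over 256 symbols

def adapt_on_decoded(model, decoded_ctx, decoded_sym, steps=10, lr=1e-3):
    """Fine-tune the probability model on symbols that have already been decoded.

    Encoder and decoder run this identical procedure on identical data, so the
    weight update itself never has to be transmitted.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()                # cross-entropy is a proxy for code length
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(decoded_ctx), decoded_sym)
        loss.backward()
        opt.step()
    return model

# Example with random stand-in data for the already-decoded region
model = ContextModel()
ctx = torch.rand(1024, 3)                          # causal contexts of decoded pixels
sym = torch.randint(0, 256, (1024,))               # their decoded symbol values
adapt_on_decoded(model, ctx, sym)
```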
ISBN (digital): 9781665484329
ISBN (print): 9781665484329
This study addresses the problem of data compression in the context of the IEC 61850 communication protocol used in smart grids. The study presents a new approach based on wavelet compression that uses predictor importance to identify the wavelet detail level that holds the salient features of the disturbance signal, and then applies a hybrid threshold, combining hard and soft thresholds, to the wavelet details. The effectiveness of the proposed approach has been tested and evaluated, and the results show that it was very effective in increasing the compression ratio, leading to a reduction in file sizes. Furthermore, the results of testing the proposed approach using a real-time digital simulator in the context of IEC 61850 showed that its implementation not only led to a significant reduction in the number of messages but also to a reduction in their sizes, while maintaining high-quality signal reconstruction following the compression process.
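A minimal sketch of the hybrid-threshold step described above, assuming PyWavelets and NumPy; the wavelet, the index of the salient detail level, and the threshold values are illustrative placeholders for the results of the paper's predictor-importance analysis.

```python
import numpy as np
import pywt

def hybrid_threshold(signal, wavelet="db4", levels=5, salient_level=1,
                     hard_thr=0.05, soft_thr=0.01):
    """Hard-threshold the salient detail level, soft-threshold the others, then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=levels)
    out = [coeffs[0]]                              # approximation coefficients are kept as-is
    for i, detail in enumerate(coeffs[1:], start=1):
        if i == salient_level:
            # level identified (e.g. via predictor importance) as holding the disturbance features
            out.append(pywt.threshold(detail, hard_thr, mode="hard"))
        else:
            # remaining levels: soft threshold shrinks small coefficients toward zero
            out.append(pywt.threshold(detail, soft_thr, mode="soft"))
    return pywt.waverec(out, wavelet)[: signal.size]
```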
With the development of edge computing and cloud computing in power scenarios, the cloud center collects a large amount of data from edge nodes every day; the edge nodes become overloaded and the transmission ...
In computer vision, neural network models typically require large amounts of manually annotated image or video data for training. To reduce annotation costs, self-supervised learning has gained significant attention...
Treatments of disuse-induced muscle atrophy entail unmet clinical needs due to the lack of medical devices capable of mimicking physicians' manual therapies. Therefore, in this paper we develop and model a wearable sof...
ISBN (print): 9798350344868; 9798350344851
Screen content images typically contain a mix of natural and synthetic image parts. Synthetic sections usually consist of uniformly colored areas and repeating colors and patterns. In the VVC standard, these properties are exploited using Intra Block Copy and Palette Mode. In this paper, we show that pixel-wise lossless coding can outperform lossy VVC coding in such areas. We propose an enhanced VVC coding approach for screen content images using the principle of soft context formation. First, the image is separated into two layers in a block-wise manner using a learning-based method with four block features. Synthetic image parts are coded losslessly using soft context formation, and the rest with VVC. We modify the available soft context formation coder to incorporate information gained from the decoded VVC layer for improved coding efficiency. Using this approach, we achieve Bjontegaard-Delta-rate gains of 4.98% on the evaluated data sets compared to VVC.
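A minimal sketch of the block-wise layer separation step described above, assuming NumPy; the two hand-picked features (distinct-colour count and dominant-colour fraction) and their thresholds are simplified stand-ins for the paper's learning-based method with four block features.

```python
import numpy as np

def classify_blocks(image, block=16, max_colors=8, dominant_frac=0.5):
    """Return a boolean map: True = synthetic block (lossless layer), False = natural block (VVC layer)."""
    h, w = image.shape[:2]
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            blk = image[by*block:(by+1)*block, bx*block:(bx+1)*block].reshape(-1, image.shape[-1])
            colors, counts = np.unique(blk, axis=0, return_counts=True)
            few_colors = len(colors) <= max_colors          # few distinct colours -> synthetic-like
            dominated = counts.max() / counts.sum() >= dominant_frac  # one colour dominates the block
            mask[by, bx] = few_colors or dominated
    return mask

# Example: left half flat colour (synthetic-like), right half random noise (natural-like)
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[:, 32:] = np.random.randint(0, 256, (64, 32, 3), dtype=np.uint8)
print(classify_blocks(img))
```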