
Pr-Ge-Ne: Efficient Encoding of Pervasive Video Sensing Streams by Pruned Generative Networks

Authors: Wu, Ji-Yan; Gamlath, Kasun; Misra, Archan

Affiliation: School of Computing and Information Systems, Singapore Management University, Singapore, Singapore

Publication: ACM Transactions on Multimedia Computing, Communications and Applications (ACM Trans. Multimedia Comput. Commun. Appl.)

Year/Volume/Issue: 2025, Vol. 21, No. 2

Pages: 1-22


Subject classification: 0810 [Engineering - Information and Communication Engineering]; 0809 [Engineering - Electronic Science and Technology (degree awardable in Engineering or Science)]; 08 [Engineering]; 0837 [Engineering - Safety Science and Engineering]; 0835 [Engineering - Software Engineering]; 0812 [Engineering - Computer Science and Technology (degree awardable in Engineering or Science)]

Funding: This work was supported by the National Research Foundation, Singapore under its NRF Investigatorship grant (NRFNRFI05-2019-0007). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore.

Keywords: Encoding (symbols)

Abstract: While video sensing, performed by resource-constrained pervasive devices, is a key enabler of many machine intelligence applications, the high energy and bandwidth overheads of streaming video transmission continue to present formidable deployment challenges. Motivated by recent advances in deep learning models, this article proposes the use of a generative network-based technique for resource-efficient streaming video compression and transmission. However, we empirically show that while such generative network-based models offer superior compression gains compared to H.265, additional DNN optimization mechanisms are needed to substantially reduce their encoder complexity. Our proposed optimized system, dubbed Pr-Ge-Ne, adopts a carefully pruned encoder-decoder DNN on the pervasive device to efficiently encode a latent vector representation of intra-frame relative motion, and then uses a generator network at the decoder to reconstruct the frames by overlaying such motion information to animate an initial reference frame. By evaluating three representative streaming video datasets, we show that Pr-Ge-Ne achieves around - fold reduction in video transmission rates (with negligible impact on the accuracy of machine perception tasks) compared to H.265, while simultaneously reducing latency and energy overheads on a pervasive device by 90% and 15-50%, respectively. © 2025 Copyright held by the owner/author(s). Publication rights licensed to ACM.
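
To make the pipeline in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: a small (prunable) encoder on the sensing device maps each incoming frame, together with a previously transmitted reference frame, to a compact motion latent, and a generator on the receiver animates the reference frame from that latent. The module names, layer sizes, latent dimension, and pruning ratio are illustrative assumptions, not the architecture described in the paper.

```python
# Conceptual sketch only: device-side pruned motion encoder + receiver-side generator.
# Sizes, names, and the 50% pruning ratio are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune


class MotionEncoder(nn.Module):
    """Device-side encoder: (reference frame, current frame) -> low-dimensional motion latent."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(6, 16, 3, stride=2, padding=1), nn.ReLU(),   # 6 channels = reference + current frame stacked
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.to_latent = nn.Linear(32, latent_dim)

    def forward(self, reference: torch.Tensor, current: torch.Tensor) -> torch.Tensor:
        x = torch.cat([reference, current], dim=1)
        return self.to_latent(self.features(x).flatten(1))


class Generator(nn.Module):
    """Receiver-side generator: reference frame + motion latent -> reconstructed frame."""
    def __init__(self, latent_dim: int = 32):
        super().__init__()
        self.latent_proj = nn.Linear(latent_dim, 64 * 64)
        self.decode = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, reference: torch.Tensor, latent: torch.Tensor) -> torch.Tensor:
        b, _, h, w = reference.shape
        # Expand the transmitted latent into a coarse motion map and overlay it on the reference frame.
        motion_map = self.latent_proj(latent).view(b, 1, 64, 64)
        motion_map = nn.functional.interpolate(motion_map, size=(h, w))
        return self.decode(torch.cat([reference, motion_map], dim=1))


if __name__ == "__main__":
    enc, gen = MotionEncoder(), Generator()

    # Magnitude-based pruning of the device-side encoder, standing in for the
    # DNN-optimization step that reduces encoder complexity on the pervasive device.
    for module in enc.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=0.5)

    reference = torch.rand(1, 3, 128, 128)   # sent once (e.g., conventionally coded)
    current = torch.rand(1, 3, 128, 128)     # each subsequent captured frame

    latent = enc(reference, current)          # only this small vector needs to be transmitted
    reconstructed = gen(reference, latent)    # receiver animates the reference frame
    print(latent.shape, reconstructed.shape)  # torch.Size([1, 32]) torch.Size([1, 3, 128, 128])
```

In this sketch the per-frame payload is just the latent vector, which is what makes the bandwidth savings over conventional codecs plausible; the paper's actual design, pruning strategy, and motion representation should be consulted for the real system.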
