
Refine Results

Document Type

  • 2 conference papers

Collection Scope

  • 2 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 2 Engineering
    • 2 Computer Science and Technology...
    • 1 Software Engineering

Topics

  • 2 converting autoe...
  • 1 dnn compression
  • 1 deep neural netw...
  • 1 early-exit dnns
  • 1 energy-efficienc...
  • 1 edge computing
  • 1 edge devices
  • 1 low latency
  • 1 energy efficienc...

Institution

  • 2 univ texas san a...

Author

  • 2 lama palden
  • 2 prasad sushil k.
  • 2 kang peng
  • 2 desai kevin
  • 2 mahmud hasanul

Language

  • 2 English
Search query: Subject = "Converting Autoencoder"
2 records, showing 1-10
A Converting Autoencoder Toward Low-latency and Energy-efficient DNN Inference at the Edge
1st International Conference on Smart Energy Systems and Artificial Intelligence (SESAI)
Authors: Mahmud, Hasanul; Kang, Peng; Desai, Kevin; Lama, Palden; Prasad, Sushil K. (Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249, USA)
Reducing inference time and energy usage while maintaining prediction accuracy has become a significant concern for deep neural network (DNN) inference on resource-constrained edge devices. To address this problem, w...
CAE-Net: Enhanced Converting Autoencoder based Framework for Low-latency Energy-efficient DNN with SLO-constraints
IEEE Cloud Summit
Authors: Mahmud, Hasanul; Kang, Peng; Lama, Palden; Desai, Kevin; Prasad, Sushil K. (Univ Texas San Antonio, Dept Comp Sci, San Antonio, TX 78249, USA)
As deep neural networks (DNNs) continue to be used on resource-limited edge devices with low-latency requirements for interactive applications, there is a growing need to reduce inference time and energy consumption w...