
Refine Search Results

Document Type

  • 3 journal articles
  • 2 conference papers

Collection Scope

  • 5 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 5 Engineering
    • 4 Computer Science and Technology...
    • 3 Electrical Engineering
    • 1 Information and Communication Engineering
    • 1 Software Engineering

Topic

  • 5 deep neural netw...
  • 1 motivation
  • 1 split point deci...
  • 1 parallel archite...
  • 1 data centers
  • 1 deep learning pr...
  • 1 secure multi-par...
  • 1 feedforward neur...
  • 1 central processi...
  • 1 semiconductor de...
  • 1 neural networks
  • 1 neural architect...
  • 1 privacy-preservi...
  • 1 split computing
  • 1 tensors
  • 1 tensile stress
  • 1 coprocessors
  • 1 semiconductor te...
  • 1 neural network a...
  • 1 machine learning

Institution

  • 1 hewlett packard ...
  • 1 lg elect commun ...
  • 1 google mountain ...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 pukyong natl uni...
  • 1 korea univ sch e...
  • 1 korea adv inst s...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 hewlett packard ...
  • 1 univ macau fac s...
  • 1 shenzhen univ co...

Author

  • 1 strachan john pa...
  • 1 pack sangheon
  • 1 aguiar glaucimar
  • 1 huang caishi
  • 1 lee jaewook
  • 1 kim heejae
  • 1 zhou jiantao
  • 1 milojicic dejan
  • 1 knuppe gustavo
  • 1 chatterjee soumi...
  • 1 chalamalasetti s...
  • 1 lakshiminarashim...
  • 1 jung daeyoung
  • 1 sharma amit
  • 1 patil nishant
  • 1 saenz felipe
  • 1 kolhe jitendra o...
  • 1 duan jia
  • 1 jouppi norman p.
  • 1 lee kyungchae
Language

  • 5 English

Search criteria: Subject term = "deep neural network inference"
5 records; results 1-5 are listed below
Fast and fair split computing for accelerating deep neural network (DNN) inference
ICT EXPRESS, 2025, Vol. 11, No. 1, pp. 47-52
Authors: Cha, Dongju; Lee, Jaewook; Jung, Daeyoung; Pack, Sangheon (LG Elect Commun & Media Stand Lab, Seoul, South Korea; Pukyong Natl Univ, Dept Informat & Commun Engn, Busan, South Korea; Korea Univ, Sch Elect Engn, Seoul, South Korea)
Conventional split computing approaches for AI models that generate large outputs suffer from long transmission and inference times. Due to the limited resources of the edge server and selfish mobile devices (MDs), some MDs cannot off... [More]
Privacy-preserving and verifiable deep learning inference based on secret sharing
NEUROCOMPUTING, 2022, Vol. 483, pp. 221-234
Authors: Duan, Jia; Zhou, Jiantao; Li, Yuanman; Huang, Caishi (Univ Macau, Fac Sci & Technol, Dept Comp & Informat Sci, State Key Lab Internet Things Smart City, Taipa, Macao, Peoples R China; Shenzhen Univ, Coll Elect & Informat Engn, Shenzhen, Peoples R China)
Deep learning inference, which provides the model utility of deep learning, is usually deployed as a cloud-based framework for resource-constrained clients. However, existing cloud-based frameworks suffer from s... [More]
Neural Architecture Search for Computation Offloading of DNNs from Mobile Devices to the Edge Server
12th International Conference on ICT Convergence (ICTC) - Beyond the Pandemic Era with ICT Convergence Innovation
Authors: Lee, KyungChae; Le, Vu Linh; Kim, Heejae; Youn, Chan-Hyun (Korea Adv Inst Sci & Technol (KAIST), Sch Elect Engn, Daejeon, South Korea)
With the rapid development of modern deep learning technology, deep neural network (DNN)-based mobile applications have been adopted in various areas. However, since mobile devices are not optimized to run th... [More]
FPGA Demonstrator of a Programmable Ultra-efficient Memristor-based Machine Learning Inference Accelerator
4th IEEE International Conference on Rebooting Computing (ICRC)
Authors: Foltin, Martin; Warner, Craig; Lee, Eddie; Chalamalasetti, Sai Rahul; Brueggen, Chris; Williams, Charles; Jansen, Nathaniel; Saenz, Felipe; Li, Luis Federico; Aguiar, Glaucimar; Antunes, Rodrigo; Silveira, Plinio; Knuppe, Gustavo; Ambrosi, Joao; Chatterjee, Soumitra; Kolhe, Jitendra Onkar; Lakshiminarashimha, Sunil; Milojicic, Dejan; Strachan, John Paul; Sharma, Amit (Hewlett Packard Enterprise Silicon Design Lab: Ft Collins, CO 80528, USA; Plano, TX, USA; Palo Alto, CA, USA; Houston, TX, USA; Heredia, Costa Rica. Hewlett Packard Enterprise Brazil Labs: Barueri, Brazil; Porto Alegre, RS, Brazil. Hewlett Packard Enterprise SSTO RnD, Bangalore, Karnataka, India. Hewlett Packard Enterprise Composable Engn, Bangalore, Karnataka, India)
Hybrid analog-digital neuromorphic accelerators show promise for a significant increase in performance per watt of deep learning inference and training compared with conventional technologies. In this work we present... [More]
Motivation for and Evaluation of the First Tensor Processing Unit
IEEE MICRO, 2018, Vol. 38, No. 3, pp. 10-19
Authors: Jouppi, Norman P.; Young, Cliff; Patil, Nishant; Patterson, David (Google, Mountain View, CA 94043, USA)
The first-generation tensor processing unit (TPU) runs deep neural network (DNN) inference 15-30 times faster, with 30-80 times better energy efficiency, than contemporary CPUs and GPUs in similar semiconductor technolo... [More]