
Refine Search Results

Document Type

  • 509 conference papers
  • 191 journal articles
  • 2 books

Collection Scope

  • 702 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 459 Engineering
    • 353 Computer Science and Technology...
    • 258 Software Engineering
    • 86 Information and Communication Engineering
    • 58 Electronic Science and Technology (...
    • 53 Control Science and Engineering
    • 35 Mechanical Engineering
    • 35 Bioengineering
    • 28 Electrical Engineering
    • 18 Instrument Science and Technology
    • 16 Power Engineering and Engineering Thermophysics...
    • 11 Civil Engineering
    • 10 Materials Science and Engineering (...
    • 10 Cyberspace Security
    • 8 Chemical Engineering and Technology
    • 8 Agricultural Engineering
    • 8 Environmental Science and Engineering (...
    • 7 Transportation Engineering
    • 6 Optical Engineering
  • 168 Science
    • 101 Mathematics
    • 36 Biology
    • 29 Systems Science
    • 25 Physics
    • 24 Statistics (...
    • 11 Chemistry
  • 120 Management
    • 81 Management Science and Engineering (...
    • 42 Library, Information and Archives Management...
    • 23 Business Administration
  • 13 Economics
    • 13 Applied Economics
  • 13 Law
    • 11 Sociology
  • 9 Agriculture
    • 8 Crop Science
  • 3 Education
  • 3 Literature
  • 3 Medicine
  • 3 Military Science
  • 1 Art

Topics

  • 32 computational mo...
  • 22 training
  • 19 benchmark testin...
  • 18 fault tolerance
  • 18 distributed proc...
  • 18 feature extracti...
  • 17 kernel
  • 16 computer archite...
  • 16 semantics
  • 15 deep learning
  • 15 concurrent compu...
  • 15 laboratories
  • 14 servers
  • 14 hardware
  • 13 algorithm design...
  • 13 cloud computing
  • 12 parallel process...
  • 12 graphics process...
  • 12 optimization
  • 12 protocols

Institutions

  • 112 college of compu...
  • 81 national laborat...
  • 77 science and tech...
  • 47 national laborat...
  • 35 school of comput...
  • 30 national laborat...
  • 22 science and tech...
  • 22 national key lab...
  • 18 national key lab...
  • 18 national laborat...
  • 16 national laborat...
  • 14 national laborat...
  • 13 science and tech...
  • 13 school of comput...
  • 12 national key lab...
  • 11 science and tech...
  • 11 national key lab...
  • 10 national laborat...
  • 10 national key lab...
  • 10 national key lab...

Authors

  • 32 dongsheng li
  • 28 yijie wang
  • 28 wang yijie
  • 26 li dongsheng
  • 25 wang huaimin
  • 21 huaimin wang
  • 20 zhigang luo
  • 18 naiyang guan
  • 18 peng yuxing
  • 16 yuxing peng
  • 14 dou yong
  • 14 liu jie
  • 14 ji wang
  • 14 yin gang
  • 13 wang ji
  • 13 ding bo
  • 13 jie liu
  • 12 xiang zhang
  • 12 lai zhiquan
  • 11 zhiquan lai

Language

  • 657 English
  • 42 Chinese
  • 3 Other
Search criteria: "Institution = National Laboratory for Parallel and Distributed Processing College of Computer"
702 records in total; showing items 61-70.
Local-Adaptive Transformer for Multivariate Time Series Anomaly Detection and Diagnosis
IEEE International Conference on Systems, Man and Cybernetics
Authors: Xiaohui Zhou, Yijie Wang, Hongzuo Xu, Mingyu Liu, Ruyi Zhang (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
Time series data are pervasive in varied real-world applications, and accurately identifying anomalies in time series is of great importance. Many current methods are insufficient to model long-term dependence, wherea...
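The abstract above concerns detecting anomalies in multivariate time series by modelling long-range temporal dependence. Purely as an illustrative sketch (a generic reconstruction-based detector, not the paper's Local-Adaptive Transformer; window size, model width and the random input are assumptions), a Transformer encoder can reconstruct sliding windows and flag windows with large reconstruction error:

    # Illustrative only: generic reconstruction-based anomaly scoring with a
    # Transformer encoder, not the Local-Adaptive Transformer from this paper.
    import torch
    import torch.nn as nn

    WINDOW, N_FEATURES, D_MODEL = 64, 8, 32           # assumed sizes

    class WindowReconstructor(nn.Module):
        def __init__(self):
            super().__init__()
            self.embed = nn.Linear(N_FEATURES, D_MODEL)
            layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(D_MODEL, N_FEATURES)

        def forward(self, x):                          # x: (batch, WINDOW, N_FEATURES)
            return self.head(self.encoder(self.embed(x)))

    model = WindowReconstructor()
    window = torch.randn(1, WINDOW, N_FEATURES)        # one sliding window of the series
    score = (model(window) - window).abs().mean()      # reconstruction error as anomaly score
    print(float(score))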
Rethinking the Distributed DNN Training Cluster Design from the Cost-effectiveness View
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Zhiquan Lai, Yujie Liu, Wei Wang, Yanqi Hao, Dongsheng Li (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
As deep learning grows rapidly, model training heavily relies on parallel methods and there exist numerous cluster configurations. However, current preferences for parallel training focus on data centers, overlooking ...
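For a back-of-the-envelope view of what the cost-effectiveness perspective means for cluster design (all throughput and price numbers below are invented placeholders, not figures from the paper), cluster options can be compared by training throughput delivered per unit of cost:

    # Hypothetical numbers, for illustration only; substitute measured
    # training throughput (samples/s) and the real cluster cost.
    clusters = {
        "8x data-center GPUs": {"throughput": 4000.0, "cost": 120_000.0},
        "16x commodity GPUs":  {"throughput": 3200.0, "cost": 40_000.0},
    }
    for name, c in clusters.items():
        # cost-effectiveness = useful training work per dollar
        print(f"{name}: {c['throughput'] / c['cost']:.4f} samples/s per dollar")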
Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Zhiquan Lai, Yanqi Hao, Shengwei Li, Dongsheng Li (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
Multidimensional parallel training has been widely applied to train large-scale deep learning models like GPT-3. The efficiency of parameter communication among training devices/processes is often the performance bott...
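For intuition on why parameter communication becomes the bottleneck (a textbook ring all-reduce estimate, not the paper's analysis; model size, precision and worker count are assumed), the per-step gradient traffic of plain data parallelism is roughly 2(N-1)/N times the gradient size per worker:

    # Rough per-worker, per-step traffic of ring all-reduce over gradients.
    # Assumptions: 1.3B parameters, fp16 gradients (2 bytes each), 8 workers.
    params, bytes_per_param, workers = 1_300_000_000, 2, 8
    grad_bytes = params * bytes_per_param
    traffic = 2 * (workers - 1) / workers * grad_bytes   # standard ring all-reduce volume
    print(f"{traffic / 1e9:.1f} GB sent (and received) per worker per step")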
Efficient Large Models Fine-tuning on Commodity Servers via Memory-balanced Pipeline Parallelism
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Yujie Liu, Zhiquan Lai, Weijie Liu, Wei Wang, Dongsheng Li (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
Large models have achieved impressive performance in many downstream tasks. Using pipeline parallelism to fine-tune large models on commodity GPU servers is an important way to make the excellent performance of large ...
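To make "memory-balanced" concrete (an illustrative greedy partition under assumed per-layer memory estimates, not the algorithm proposed in the paper), consecutive layers can be assigned to pipeline stages so that the estimated memory per stage stays close to even:

    # Greedy layer-to-stage partition balancing estimated memory (illustration only).
    def balance_stages(layer_mem, num_stages):
        """Assign consecutive layers to stages, aiming at even memory per stage."""
        target = sum(layer_mem) / num_stages
        stages, used = [[]], 0.0
        for m in layer_mem:
            if stages[-1] and used + m > target and len(stages) < num_stages:
                stages.append([])
                used = 0.0
            stages[-1].append(m)
            used += m
        return stages

    layer_mem = [1.2, 1.0, 1.0, 0.8, 2.5, 2.5, 0.6, 0.4]   # assumed GB per layer
    for i, stage in enumerate(balance_stages(layer_mem, 4)):
        print(f"stage {i}: {len(stage)} layers, {sum(stage):.1f} GB")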
Graph Structure Learning via Transfer Entropy for Multivariate Time Series Anomaly Detection
International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Mingyu Liu, Yijie Wang, Xiaohui Zhou, Yongjun Wang (National Key Laboratory of Parallel and Distributed Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha, China; College of Computer Science and Technology, National University of Defense Technology, Changsha, China)
Multivariate time series anomaly detection (MTAD) poses a challenge due to temporal and feature dependencies. The critical aspects of enhancing the detection performance lie in accurately capturing the dependencies be...
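Transfer entropy, referenced in the title, is a standard information-theoretic measure of directed dependence from a series X to a series Y; in its usual discrete form (with history lengths k and l) it reads:

    TE_{X \to Y} \;=\; \sum p\bigl(y_{t+1},\, y_t^{(k)},\, x_t^{(l)}\bigr)\,
        \log \frac{p\bigl(y_{t+1} \mid y_t^{(k)}, x_t^{(l)}\bigr)}
                  {p\bigl(y_{t+1} \mid y_t^{(k)}\bigr)}

A large value suggests that the past of X helps predict Y beyond what Y's own past provides, which is the kind of directed dependency a learned graph structure can encode.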
A Deeply Pipelined 64-bit Multiplier for High-Performance RISC-V Processors
IEEE International Conference on Frontiers Technology of Information and Computer (ICFTIC)
Authors: Wenyi Liu, Feng Hu, Guilan Li, Bangjian Xu, Xin Niu (College of Computer Science and Electronic, Hunan University, Changsha, China; Science and Technology on Parallel and Distributed Laboratory, College of Computer, National University of Defense Technology, Changsha, China)
The multiplier is an important component of the processor's computing unit. Multiplication, multiply-add, and multiply-subtract operations are widely used in various signal processing algo...
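A deeply pipelined wide multiplier typically decomposes its operands and accumulates partial products over several clock stages; the arithmetic identity behind splitting a 64x64-bit product into four 32x32-bit partial products (illustrative only, not the specific micro-architecture of this paper) can be checked directly:

    # 64x64-bit multiply decomposed into four 32x32-bit partial products,
    # the kind of split a pipelined multiplier accumulates over successive stages.
    import random

    MASK32 = (1 << 32) - 1

    def mul64_by_parts(a, b):
        a_hi, a_lo = a >> 32, a & MASK32
        b_hi, b_lo = b >> 32, b & MASK32
        # each term is producible by one 32x32 multiplier
        return (a_lo * b_lo) + ((a_lo * b_hi + a_hi * b_lo) << 32) + ((a_hi * b_hi) << 64)

    a, b = random.getrandbits(64), random.getrandbits(64)
    assert mul64_by_parts(a, b) == a * b        # full 128-bit product matches
    print(hex(mul64_by_parts(a, b)))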
Merak: An Efficient Distributed DNN Training Framework with Automated 3D Parallelism for Giant Foundation Models
arXiv, 2022
Authors: Lai, Zhiquan; Li, Shengwei; Tang, Xudong; Ge, Keshi; Liu, Weijie; Duan, Yabo; Qiao, Linbo; Li, Dongsheng (The National Laboratory for Parallel and Distributed Processing, College of Computer, National University of Defense Technology, Changsha, Hunan, China)
Foundation models are becoming the dominant deep learning technology. Pretraining a foundation model is always time-consuming due to the large scale of both the model parameters and the training dataset. ...
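In 3D parallelism the total device count factorizes into data-, tensor- and pipeline-parallel degrees, and every GPU is addressed by a coordinate in that grid; a tiny sanity check of such a device grid (the degrees below are assumptions for illustration, not Merak's defaults) looks like:

    # 3D parallelism: each GPU gets a (data, tensor, pipeline) coordinate,
    # and the three parallel degrees must multiply to the total GPU count.
    from itertools import product

    dp, tp, pp, gpus = 4, 2, 4, 32          # assumed degrees, for illustration
    assert dp * tp * pp == gpus
    grid = list(product(range(dp), range(tp), range(pp)))
    print(len(grid), "ranks; rank 0 has coordinate", grid[0])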
SCGraph: Accelerating Sample-based GNN Training by Staged Caching of Features on GPUs
IEEE International Conference on Big Data and Cloud Computing (BdCloud)
Authors: Yuqi He, Zhiquan Lai, Zhejiang Ran, Lizhi Zhang, Dongsheng Li (National Key Laboratory of Parallel and Distributed Processing, College of Computer, National University of Defense Technology, Changsha, China)
Graph neural networks (GNNs) have become important tools for processing structured graph data and have been successfully applied in many graph-based application scenarios. Existing GNN systems adopt sample-based ...
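The general idea behind GPU-side feature caching for sample-based GNN training (a toy sketch under an assumed degree-based cache policy, not the staged design of SCGraph) is to keep the features of frequently sampled vertices resident on the GPU and fetch the rest from host memory:

    # Toy model of a GPU feature cache for sampled mini-batches (illustration only).
    import numpy as np

    num_nodes, feat_dim, cache_size = 10_000, 64, 1_000
    host_features = np.random.rand(num_nodes, feat_dim).astype(np.float32)
    degrees = np.random.zipf(1.5, num_nodes).astype(np.float64)  # skewed access pattern
    cached_ids = np.argsort(-degrees)[:cache_size]
    gpu_cache = {int(i): host_features[i] for i in cached_ids}   # stand-in for GPU memory

    def gather(batch_ids):
        hits = sum(i in gpu_cache for i in batch_ids)
        feats = np.stack([gpu_cache.get(i, host_features[i]) for i in batch_ids])
        return feats, hits / len(batch_ids)

    batch = np.random.choice(num_nodes, size=256, p=degrees / degrees.sum())
    _, hit_rate = gather([int(i) for i in batch])
    print(f"cache hit rate: {hit_rate:.2f}")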
DMSA: Decentralized and Multi-keyword Selective Data Sharing and Acquisition
International Symposium on Parallel and Distributed Processing with Applications (ISPA)
Authors: Moheng Lin, Peichang Shi, Xiang Fu, Feng Jiang, Guodong Yi (National Key Laboratory of Parallel and Distributed Computing, College of Computer Science, National University of Defense Technology, Changsha, China; Xiangjiang Lab, Changsha, China)
Blockchain technology has been extensively utilized in decentralized data-sharing applications, with the immutability of blockchain providing a witness for the circulation of data. However, current blockchain data-sh...
HAF: a hybrid annotation framework based on expert knowledge and learning technique
Science China (Information Sciences), 2022, Vol. 65, Issue 1, pp. 276-278
Authors: Zhixing LI, Yue YU, Tao WANG, Gang YIN, Xinjun MAO, Huaimin WANG (Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology; College of Computer, National University of Defense Technology)
Dear editor, The increasing awareness of the potential value hidden in data has resulted in many data mining studies being conducted. In the domain of software engineering, for example, developers' behavioral data ...