Refine Search Results

Document Type

  • 372 conference papers
  • 166 journal articles
  • 2 books

Collection Scope

  • 540 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 317 Engineering
    • 265 Computer Science and Technology...
    • 205 Software Engineering
    • 51 Information and Communication Engineering
    • 38 Control Science and Engineering
    • 36 Biological Engineering
    • 24 Electronic Science and Technology (...
    • 23 Mechanical Engineering
    • 16 Electrical Engineering
    • 11 Cyberspace Security
    • 10 Chemical Engineering and Technology
    • 8 Instrument Science and Technology
    • 7 Power Engineering and Engineering Ther...
    • 7 Transportation Engineering
    • 6 Materials Science and Engineering (...
    • 6 Agricultural Engineering
    • 5 Optical Engineering
    • 5 Architecture
  • 128 Science
    • 76 Mathematics
    • 38 Biology
    • 17 Statistics (...
    • 15 Physics
    • 15 Chemistry
    • 9 Systems Science
  • 101 Management
    • 57 Management Science and Engineering (...
    • 48 Library, Information and Archival Man...
    • 20 Business Administration
  • 10 Law
    • 8 Sociology
  • 8 Economics
    • 8 Applied Economics
  • 7 Agronomy
    • 6 Crop Science
  • 6 Education
    • 6 Education
  • 5 Medicine
  • 1 Literature

Topics

  • 17 computational mo...
  • 17 feature extracti...
  • 15 training
  • 14 deep neural netw...
  • 14 laboratories
  • 14 semantics
  • 13 cloud computing
  • 12 distributed proc...
  • 11 servers
  • 11 machine learning
  • 11 distributed comp...
  • 10 programming
  • 9 scalability
  • 9 deep learning
  • 9 costs
  • 9 data models
  • 8 optimization
  • 8 topology
  • 8 hardware
  • 8 accuracy

Institutions

  • 38 school of cyber ...
  • 36 college of compu...
  • 32 school of comput...
  • 29 national key lab...
  • 27 national enginee...
  • 26 hubei key labora...
  • 26 hubei engineerin...
  • 24 services computi...
  • 24 cluster and grid...
  • 22 national key lab...
  • 21 national key lab...
  • 16 school of softwa...
  • 16 national laborat...
  • 15 school of inform...
  • 14 national laborat...
  • 14 national key lab...
  • 13 national key lab...
  • 12 national key lab...
  • 11 shanghai key lab...
  • 11 science and tech...

Authors

  • 26 jin hai
  • 25 wang huaimin
  • 23 li dongsheng
  • 21 hai jin
  • 19 wang yijie
  • 18 hu shengshan
  • 18 huaimin wang
  • 16 dongsheng li
  • 15 ding bo
  • 14 zhang leo yu
  • 13 li minghui
  • 11 lai zhiquan
  • 11 zhou ziqi
  • 10 wang tao
  • 10 yijie wang
  • 10 zhiquan lai
  • 10 chen haibo
  • 10 ji wang
  • 10 tao wang
  • 9 gang yin

Language

  • 505 English
  • 26 Chinese
  • 9 other

Search criteria: "Institution = National Key Laboratory of Parallel and Distributed Computing"
540 records; showing 51-60
Funnel: An Efficient Sparse Attention Accelerator with Multi-Dataflow Fusion
22nd IEEE International Symposium on Parallel and Distributed Processing with Applications, ISPA 2024
Authors: Ma, Shenghong; Xu, Jinwei; Jiang, Jingfei; Wang, Yaohua; Li, Dongsheng (National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, College of Computer, Changsha, China)
The self-attention mechanism is the core component of Transformer, which provides a powerful ability to understand the sequence context. However, the self-attention mechanism also suffers from a large amount of redund...
A clustering-based approach for mining dockerfile evolutionary trajectories
Science China (Information Sciences), 2019, Vol. 62, No. 1, pp. 211-213
Authors: Yang Zhang; Huaimin Wang; Vladimir Filkov (Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology; College of Computer, National University of Defense Technology; DECAL Lab, University of California; Computer Science Department, University of California)
Dear editor, Docker, as a de-facto industry standard [1], enables the packaging of an application with all its dependencies and execution environment in a light-weight, self-contained unit, i.e., *** launching the co...
Recovering Performance for Vector-based Machine Learning on Managed Runtime
ACM SIGPLAN Notices, 2017, Vol. 52, No. 8, pp. 457-458
Authors: Wu, Mingyu; Guan, Haibing; Zang, Binyu; Chen, Haibo (Shanghai Key Laboratory of Scalable Computing and Systems, Institute of Parallel and Distributed Systems, Shanghai Jiao Tong University, China)
Efficient Large Models Fine-tuning on Commodity Servers via Memory-balanced Pipeline Parallelism
25th IEEE International Conference on High Performance Computing and Communications, 9th International Conference on Data Science and Systems, 21st IEEE International Conference on Smart City and 9th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC/DSS/SmartCity/DependSys 2023
Authors: Liu, Yujie; Lai, Zhiquan; Liu, Weijie; Wang, Wei; Li, Dongsheng (College of Computer, National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, Changsha, China)
Large models have achieved impressive performance in many downstream tasks. Using pipeline parallelism to fine-tune large models on commodity GPU servers is an important way to make the excellent performance of large ...
Rethinking the Distributed DNN Training Cluster Design from the Cost-effectiveness View
25th IEEE International Conference on High Performance Computing and Communications, 9th International Conference on Data Science and Systems, 21st IEEE International Conference on Smart City and 9th IEEE International Conference on Dependability in Sensor, Cloud and Big Data Systems and Applications, HPCC/DSS/SmartCity/DependSys 2023
Authors: Lai, Zhiquan; Liu, Yujie; Wang, Wei; Hao, Yanqi; Li, Dongsheng (College of Computer, National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, Changsha, China)
As deep learning grows rapidly, model training heavily relies on parallel methods and there exist numerous cluster configurations. However, current preferences for parallel training focus on data centers, overlooking ...
Area-NeRF: Area-based Neural Radiance Fields
2nd International Conference on Image Processing, Computer Vision and Machine Learning, ICICML 2023
Authors: Ye, Zonxin; Li, Wenyu; Qiao, Peng; Dou, Yong (National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, School of Computer, Changsha, China)
Neural Radiance Field (NeRF) has received widespread attention for its photo-realistic novel view synthesis quality. Current methods mainly represent the scene based on point sampling of ray casting, ignoring the infl...
Local-Adaptive Transformer for Multivariate Time Series Anomaly Detection and Diagnosis
2023 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2023
Authors: Zhou, Xiaohui; Wang, Yijie; Xu, Hongzuo; Liu, Mingyu; Zhang, Ruyi (College of Computer, National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, Changsha, China)
Time series data are pervasive in varied real-world applications, and accurately identifying anomalies in time series is of great importance. Many current methods are insufficient to model long-term dependence, wherea...
High Performance Interconnect Network for Tianhe System
Journal of Computer Science & Technology, 2015, Vol. 30, No. 2, pp. 259-272
Authors: 廖湘科; 庞征; 王克非; 卢宇彤; 谢旻; 夏军; 董德尊; 所光 (College of Computer, National University of Defense Technology, Changsha 410073, China; Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha 410073, China; State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China)
In this paper, we present the Tianhe-2 interconnect network and message passing services. We describe the architecture of the router and network interface chips, and highlight a set of hardware and software features e...
Mbapp: Efficient Memory-Balanced Pipeline Parallelism for Large Model Fine-Tuning on Commodity GPU Servers
5th International Conference on Computer Information and Big Data Applications, CIBDA 2024
Authors: Liu, Yujie; Lai, Zhiquan; Li, Dongsheng (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha 410000, China)
Large-scale models have demonstrated outstanding performance across various downstream tasks. Pipeline parallelism is essential for fine-tuning large models on commodity GPU servers, as it plays a crucial role in maki...
Towards estimating expected sizes of probabilistic skylines
Science China (Information Sciences), 2011, Vol. 54, No. 12, pp. 2574-2584
Authors: Yang YongTao; Wang YiJie (National Key Laboratory for Parallel and Distributed Processing, School of Computer, National University of Defense Technology, Changsha 410073, China)
We consider the maximal vector problem on uncertain data, which has been recently posed by the study on processing skyline queries over a probabilistic data stream in the database context. Let D_n be a set of n points...