
Refine Results

Document Type

  • 371 conference papers
  • 157 journal articles

Collection

  • 528 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 308 Engineering
    • 256 Computer Science and Technology...
    • 199 Software Engineering
    • 50 Information and Communication Engineering
    • 36 Control Science and Engineering
    • 36 Bioengineering
    • 23 Electronic Science and Technology (...
    • 22 Mechanical Engineering
    • 14 Electrical Engineering
    • 11 Cyberspace Security
    • 10 Chemical Engineering and Technology
    • 8 Instrument Science and Technology
    • 7 Power Engineering and Engineering Therm...
    • 7 Transportation Engineering
    • 6 Materials Science and Engineering (...
    • 6 Agricultural Engineering
    • 5 Optical Engineering
    • 5 Architecture
  • 126 Science
    • 74 Mathematics
    • 37 Biology
    • 16 Statistics (...
    • 15 Physics
    • 15 Chemistry
    • 9 Systems Science
  • 98 Management
    • 56 Management Science and Engineering (...
    • 46 Library, Information and Archival Manag...
    • 20 Business Administration
  • 10 Law
    • 8 Sociology
  • 8 Economics
    • 8 Applied Economics
  • 7 Agronomy
    • 6 Crop Science
  • 6 Education
    • 6 Education
  • 5 Medicine
  • 1 Literature

Topics

  • 17 computational mo...
  • 16 feature extracti...
  • 14 laboratories
  • 14 training
  • 13 semantics
  • 13 cloud computing
  • 12 deep neural netw...
  • 12 distributed proc...
  • 11 servers
  • 11 machine learning
  • 11 distributed comp...
  • 10 programming
  • 9 scalability
  • 9 deep learning
  • 9 costs
  • 9 data models
  • 8 optimization
  • 8 topology
  • 8 hardware
  • 8 accuracy

Institutions

  • 38 school of cyber ...
  • 36 college of compu...
  • 32 school of comput...
  • 29 national key lab...
  • 27 national enginee...
  • 26 hubei key labora...
  • 26 hubei engineerin...
  • 24 services computi...
  • 24 cluster and grid...
  • 22 national key lab...
  • 21 national key lab...
  • 16 school of softwa...
  • 16 national laborat...
  • 15 school of inform...
  • 14 national laborat...
  • 13 national key lab...
  • 12 national key lab...
  • 11 shanghai key lab...
  • 11 science and tech...
  • 11 national key lab...

Authors

  • 25 jin hai
  • 23 wang huaimin
  • 22 li dongsheng
  • 18 hu shengshan
  • 18 wang yijie
  • 18 huaimin wang
  • 17 hai jin
  • 16 dongsheng li
  • 14 zhang leo yu
  • 13 ding bo
  • 13 li minghui
  • 11 lai zhiquan
  • 11 zhou ziqi
  • 10 yijie wang
  • 10 zhiquan lai
  • 10 chen haibo
  • 10 ji wang
  • 10 tao wang
  • 9 gang yin
  • 9 dou yong

Language

  • 494 English
  • 26 Chinese
  • 8 Other

Search query: Institution = "The National Key Laboratory of Parallel and Distributed Computing"
528 records; showing 51-60
Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Zhiquan Lai, Yanqi Hao, Shengwei Li, Dongsheng Li (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
Multidimensional parallel training has been widely applied to train large-scale deep learning models like GPT-3. The efficiency of parameter communication among training devices/processes is often the performance bott...
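The abstract above centers on parameter-communication cost among training devices. As a rough back-of-envelope illustration (my own sketch, not taken from the paper), the per-worker traffic of one ring all-reduce step in plain data parallelism can be estimated as follows; the function name and the fp32-gradient assumption are mine:

```python
def ring_allreduce_bytes(num_params, workers, bytes_per_param=4):
    # Each worker sends and receives 2*(W-1)/W of the gradient buffer
    # across the reduce-scatter and all-gather phases of a ring all-reduce.
    buffer_bytes = num_params * bytes_per_param
    return 2 * (workers - 1) / workers * buffer_bytes

# Example: a 1.3B-parameter model with fp32 gradients on 8 workers
traffic = ring_allreduce_bytes(1_300_000_000, 8)  # per-worker bytes per step
```

At GPT-3 scale this volume per step is why multidimensional (data/tensor/pipeline) parallelism and careful communication scheduling matter.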
Area-NeRF: Area-based Neural Radiance Fields
International Conference on Image Processing, Computer Vision and Machine Learning (ICICML)
Authors: Zonxin Ye, Wenyu Li, Peng Qiao, Yong Dou (National Key Laboratory of Parallel and Distributed Computing, School of Computer, National University of Defense Technology, Changsha, China)
Neural Radiance Field (NeRF) has received widespread attention for its photo-realistic novel view synthesis quality. Current methods mainly represent the scene based on point sampling of ray casting, ignoring the infl...
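For context on the point sampling of ray casting that the abstract mentions: a minimal sketch of NeRF-style stratified sampling along one camera ray (my own illustration of the baseline, not the paper's area-based method):

```python
import random

def stratified_depths(near, far, n, rng):
    # Split [near, far] into n equal bins and draw one depth uniformly
    # from each bin: samples cover the whole ray yet stay randomized.
    step = (far - near) / n
    return [near + (i + rng.random()) * step for i in range(n)]

# 64 sample depths along a ray with near=2.0, far=6.0
depths = stratified_depths(2.0, 6.0, 64, random.Random(0))
```

Each depth is then used to query the radiance field at a single point, which is exactly the per-point treatment that area-based sampling generalizes.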
Efficient Large Models Fine-tuning on Commodity Servers via Memory-balanced Pipeline Parallelism
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Yujie Liu, Zhiquan Lai, Weijie Liu, Wei Wang, Dongsheng Li (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China)
Large models have achieved impressive performance in many downstream tasks. Using pipeline parallelism to fine-tune large models on commodity GPU servers is an important way to make the excellent performance of large ...
Cooperative Air-Ground Instant Delivery by UAVs and Crowdsourced Taxis
40th IEEE International Conference on Data Engineering, ICDE 2024
Authors: Gao, Junhui; Wang, Qianru; Zhang, Xin; Shi, Juan; Zhao, Xiang; Han, Qingye; Pan, Yan (School of Computer Science, Northwestern Polytechnical University, China; School of Computer Science and Technology, Xidian University, China; Air Force Engineering University, China; Laboratory for Big Data and Decision, National University of Defense Technology, China; School of Management Science and Real Estate, Chongqing University, China; National Key Laboratory of Information Systems Engineering, National University of Defense Technology, China; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China)
Instant delivery has become a fundamental service in people's daily lives. Unlike traditional express service, instant delivery carries a strict shipping-time constraint once an order is placed. However, t...
An Efficient Broadcast Authentication Protocol in Wireless Sensor Networks
Chinese Journal of Electronics, 2023, Vol. 18, No. 2, pp. 368-372
Authors: Xin Zhao, Xiaodong Wang, Wanrong Yu, Xingming Zhou (National Key Laboratory of Parallel and Distributed Processing, National University of Defense Technology, Changsha, China)
Broadcast authentication is a critical security service in wireless sensor networks. A protocol named $\mu\text{TESLA}$ [1] has been proposed to provide efficient authentication service for such networks. However, w...
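The core of μTESLA is a one-way key chain: keys are generated by repeated hashing and disclosed in reverse order, so a receiver can verify any disclosed key against an earlier commitment. A minimal sketch (using SHA-256 for the one-way function, which is my assumption; the function names are hypothetical):

```python
import hashlib

def make_key_chain(seed, length):
    # One-way key chain: each key is the hash of the next one to be
    # disclosed; after reversal, chain[0] is the public commitment.
    chain = [seed]
    for _ in range(length):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()
    return chain

def verify_key(disclosed, commitment, steps):
    # Re-hash a disclosed key `steps` times; it must land on the commitment.
    k = disclosed
    for _ in range(steps):
        k = hashlib.sha256(k).digest()
    return k == commitment
```

Authentication of broadcast packets then hinges on delayed key disclosure plus loose time synchronization, which is where the protocol's real subtleties lie.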
DMSA: Decentralized and Multi-keyword Selective Data Sharing and Acquisition
International Symposium on Parallel and Distributed Processing with Applications (ISPA)
Authors: Moheng Lin, Peichang Shi, Xiang Fu, Feng Jiang, Guodong Yi (National Key Laboratory of Parallel and Distributed Computing, College of Computer Science, National University of Defense Technology, Changsha, China; Xiangjiang Lab, Changsha, China)
Blockchain technology has been extensively utilized in decentralized data-sharing applications, with the immutability of blockchain providing a witness for the circulation of data. However, current blockchain data-sh...
HAF: a hybrid annotation framework based on expert knowledge and learning technique
Science China (Information Sciences), 2022, Vol. 65, No. 1, pp. 276-278
Authors: Zhixing Li, Yue Yu, Tao Wang, Gang Yin, Xinjun Mao, Huaimin Wang (Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology; College of Computer, National University of Defense Technology)
Dear editor, The increasing awareness of the potential value hidden in data has resulted in many data mining studies being conducted. In the domain of software engineering, for example, developers' behavioral data ...
DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation
arXiv
arXiv, 2023
Authors: Lu, Menglong; Huang, Zhen; Zhao, Yunxiang; Tian, Zhiliang; Liu, Yang; Li, Dongsheng (National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China; Beijing Institute of Biotechnology, China)
Self-training emerges as an important research line on domain adaptation. By taking the model's prediction as the pseudo labels of the unlabeled data, self-training bootstraps the model with pseudo instances in the t...
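The pseudo-labeling step the abstract describes is commonly implemented with a confidence filter: only unlabeled examples whose predicted class probability clears a threshold become pseudo instances. A minimal sketch (the function name and the 0.9 threshold are my assumptions, not details from the paper):

```python
def select_pseudo_labels(probs, threshold=0.9):
    # For each unlabeled example, take the argmax class as its pseudo
    # label and keep it only if the model is confident enough.
    selected = []
    for i, dist in enumerate(probs):
        label = max(range(len(dist)), key=dist.__getitem__)
        if dist[label] >= threshold:
            selected.append((i, label))
    return selected  # list of (example_index, pseudo_label)

# Two unlabeled examples: only the confident one becomes a pseudo instance
picked = select_pseudo_labels([[0.95, 0.05], [0.60, 0.40]])
```

Meta self-training and domain-adversarial variants refine which pseudo instances are trusted rather than relying on a fixed threshold alone.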
OLM2: Automatic Optimal Strategy Generating for Large-Scale Model Training with Limited-Memory
IEEE International Conference on Joint Cloud Computing (JCC)
Authors: Zhilin Yang, Yu Tang, Linbo Qiao, Xi Yang, Zhen Huang (National Key Laboratory of Parallel and Distributed Computing, College of Computer Science, National University of Defense Technology, Changsha 410073, China)
The scale of model parameters and the amount of training data are increasing exponentially, and GPU memory demand grows with them. Recomputation and swapping are two main mem...
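To see why recomputation helps, a rough estimate of peak activation memory with and without checkpointing can be sketched as follows (my own simplified model with uniform per-layer activation size, not the paper's cost model):

```python
def activation_bytes(layers, per_layer, segment=None):
    # Without recomputation, every layer's activations stay resident for
    # the backward pass. With checkpointing every `segment` layers, only
    # segment boundaries are stored, and one segment is recomputed at a
    # time during backward.
    if segment is None:
        return layers * per_layer
    boundaries = layers // segment
    return (boundaries + segment) * per_layer

full = activation_bytes(48, 1 << 20)             # all activations kept
ckpt = activation_bytes(48, 1 << 20, segment=8)  # boundaries + one segment
```

Choosing the segment size near sqrt(layers) minimizes this estimate, which is the classic memory/compute trade-off that automatic strategy generators search over (alongside swapping to host memory).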
Don't Half-listen: Capturing key-part Information in Continual Instruction Tuning
arXiv
arXiv, 2024
Authors: He, Yongquan; Huang, Xuancheng; Tang, Minghao; Meng, Lingxun; Li, Xiang; Lin, Wei; Zhang, Wenyuan; Gao, Yifu (Meituan, China; Institute of Information Engineering, Chinese Academy of Sciences, China; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China)
Instruction tuning for large language models (LLMs) can drive them to produce results consistent with human goals in specific downstream tasks. However, the process of continual instruction tuning (CIT) for LLMs may b...