Refine Results

Document Type

  • 237 Conference papers
  • 71 Journal articles

Holdings

  • 308 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 170 Engineering
    • 153 Computer Science and Technology...
    • 122 Software Engineering
    • 20 Information and Communication Engineering
    • 20 Bioengineering
    • 18 Control Science and Engineering
    • 13 Mechanical Engineering
    • 12 Electronic Science and Technology (...
    • 10 Electrical Engineering
    • 7 Power Engineering and Engineering Therm...
    • 7 Chemical Engineering and Technology
    • 7 Biomedical Engineering (...
    • 4 Mechanics (...
    • 3 Materials Science and Engineering (...
    • 3 Transportation Engineering
    • 3 Cyberspace Security
  • 76 Science
    • 43 Mathematics
    • 21 Biology
    • 10 Chemistry
    • 9 Physics
    • 7 Statistics (...
    • 4 Systems Science
  • 53 Management
    • 33 Management Science and Engineering (...
    • 21 Library, Information and Archives Manag...
    • 14 Business Administration
  • 5 Medicine
    • 4 Basic Medicine (...
    • 4 Clinical Medicine
    • 3 Pharmacy (...
  • 4 Economics
    • 4 Applied Economics
  • 4 Law
    • 4 Sociology
  • 3 Education
    • 3 Education
  • 2 Agriculture

Topics

  • 30 distributed comp...
  • 25 concurrent compu...
  • 21 laboratories
  • 13 parallel process...
  • 10 application soft...
  • 10 data mining
  • 10 computational mo...
  • 9 computer science
  • 9 grid computing
  • 9 accuracy
  • 8 routing
  • 8 kernel
  • 8 data models
  • 7 java
  • 6 runtime
  • 6 scheduling algor...
  • 6 computer archite...
  • 6 neural networks
  • 6 contracts
  • 6 algorithm design...

Institutions

  • 21 national key lab...
  • 15 college of compu...
  • 13 national laborat...
  • 12 national laborat...
  • 11 shanghai key lab...
  • 11 national key lab...
  • 10 national key lab...
  • 8 john von neumann...
  • 7 science and tech...
  • 7 laboratory of di...
  • 6 parallel and dis...
  • 6 laboratory of pa...
  • 6 mta sztaki labor...
  • 5 key laboratory o...
  • 5 óbuda university...
  • 5 parallel and dis...
  • 5 department of co...
  • 5 national univers...
  • 5 institute of par...
  • 5 mta sztaki/labor...

Authors

  • 20 li kuan-ching
  • 20 yang chao-tung
  • 15 li dongsheng
  • 11 huaimin wang
  • 10 dongsheng li
  • 10 chen haibo
  • 10 v. chaudhary
  • 9 gang yin
  • 9 dou yong
  • 9 ji wang
  • 9 tao wang
  • 8 wang yijie
  • 8 zang binyu
  • 7 guan haibing
  • 7 lai zhiquan
  • 7 qiao peng
  • 7 huang zhen
  • 7 yue yu
  • 6 yijie wang
  • 6 s. roy

Language

  • 298 English
  • 5 Other
  • 5 Chinese
Search query: Institution = "Parallel Distributed Computing Laboratory"
308 records; showing 41-50
A Physics and Data-Driven Hybrid PINNs Intelligent Computing Method for Nuclear Engineering Simulation
4th International Conference on Electronic Information Engineering and Computer Science, EIECS 2024
Authors: Xie, Yufei; Wang, Wenlin; Wu, Guohua; Yu, Yang; An, Ping; Sun, Zibin; Zhang, Haichuan; Luo, Shengfeng; Li, Yue. Affiliations: School of Automation, Wuhan University of Technology, Wuhan, China; Sino-German College of Intelligent Manufacturing, Shenzhen Technology University, Shenzhen, China; Nuclear Power Institute of China, Chengdu, China; National University of Defense Technology, National Key Laboratory of Parallel and Distributed Computing, Changsha, China
In the field of nuclear energy, the Loss of Coolant Accident (LOCA) is recognized as one of the most severe types of nuclear reactor accidents, characterized by its complex physical processes and potentially catastrop...
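The hybrid physics-and-data idea named in this title can be sketched generically: a loss that sums a data-fitting term and a physics-residual term. The toy ODE du/dt = -k·u, the polynomial surrogate, and the weighting `lam` below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def hybrid_loss(params, t, u_obs, k=1.0, lam=1.0):
    """PINN-style hybrid loss for du/dt = -k*u (illustrative sketch).

    params: polynomial coefficients [a0, a1, a2, a3] standing in for a
    neural network; the derivative is taken analytically via polyder.
    """
    u = np.polyval(params[::-1], t)                    # surrogate u(t)
    du = np.polyval(np.polyder(params[::-1]), t)       # du/dt
    data_loss = np.mean((u - u_obs) ** 2)              # fit observations
    physics_loss = np.mean((du + k * u) ** 2)          # ODE residual
    return data_loss + lam * physics_loss
```

Training would minimize this loss over `params`; the physics term regularizes the fit where observations are sparse or noisy.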
CHIME: A Cache-Efficient and High-Performance Hybrid Index on Disaggregated Memory
30th ACM Symposium on Operating Systems Principles, SOSP 2024
Authors: Luo, Xuchuan; Shen, Jiacheng; Zuo, Pengfei; Wang, Xin; Lyu, Michael R.; Zhou, Yangfan. Affiliations: National Key Laboratory of Parallel and Distributed Computing, Changsha, China; School of Computer Science, Fudan University, Shanghai, China; Duke Kunshan University, Kunshan, China; Huawei Cloud, Shenzhen, China; Shanghai Key Laboratory of Intelligent Information Processing, Shanghai, China; The Chinese University of Hong Kong, Hong Kong
Disaggregated memory (DM) is a widely discussed datacenter architecture in academia and industry. It decouples computing and memory resources from monolithic servers into two network-connected resource pools. Range in...
Efficient Distributed Parallel Aligning Reads and Reference Genome with Many Repetitive Subsequences Using Compact de Bruijn Graph
International Symposium on Parallel Architectures, Algorithms and Programming (PAAP)
Authors: Yao Li; Cheng Zhong; Danyang Chen; Jinxiong Zhang; Mengxiao Yin. Affiliation: Key Laboratory of Parallel Distributed Computing Technology in Guangxi Universities, School of Computer, Electronics and Information, Guangxi University, Nanning, China
A large number of reads generated by the next generation sequencing platform will contain many repetitive subsequences. Effective localizing and identifying genomic regions containing repetitive subsequences will cont...
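As a generic illustration of the data structure named in the title (not the paper's distributed algorithm), a de Bruijn graph links each (k-1)-mer to its successors, and compaction merges unbranching chains into unitigs. The value of `k` and the toy reads below are assumptions.

```python
from collections import defaultdict

def de_bruijn(reads, k):
    """Map each (k-1)-mer node to the set of its successor nodes."""
    graph = defaultdict(set)
    for read in reads:
        for i in range(len(read) - k + 1):
            kmer = read[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph

def compact(graph):
    """Merge maximal unbranching chains into unitig strings."""
    indeg = defaultdict(int)
    for u, vs in graph.items():
        for v in vs:
            indeg[v] += 1

    def simple(v):  # exactly one incoming and one outgoing edge
        return indeg[v] == 1 and len(graph.get(v, ())) == 1

    unitigs = []
    for u in list(graph):
        if simple(u):
            continue  # interior node of a chain; reached from its start
        for v in graph[u]:
            path = u + v[-1]
            while simple(v):          # extend through the chain
                v = next(iter(graph[v]))
                path += v[-1]
            unitigs.append(path)
    return unitigs
```

For reads `["ACGTG", "CGTC"]` with k=3 this yields the unitigs ACGT, GTG, and GTC, splitting at the branch after GT.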
Local-Adaptive Transformer for Multivariate Time Series Anomaly Detection and Diagnosis
IEEE International Conference on Systems, Man and Cybernetics
Authors: Xiaohui Zhou; Yijie Wang; Hongzuo Xu; Mingyu Liu; Ruyi Zhang. Affiliation: National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China
Time series data are pervasive in varied real-world applications, and accurately identifying anomalies in time series is of great importance. Many current methods are insufficient to model long-term dependence, wherea...
Rethinking the Distributed DNN Training Cluster Design from the Cost-effectiveness View
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Zhiquan Lai; Yujie Liu; Wei Wang; Yanqi Hao; Dongsheng Li. Affiliation: National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China
As deep learning grows rapidly, model training heavily relies on parallel methods and there exist numerous cluster configurations. However, current preferences for parallel training focus on data centers, overlooking ...
Communication Analysis for Multidimensional Parallel Training of Large-scale DNN Models
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Zhiquan Lai; Yanqi Hao; Shengwei Li; Dongsheng Li. Affiliation: National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China
Multidimensional parallel training has been widely applied to train large-scale deep learning models like GPT-3. The efficiency of parameter communication among training devices/processes is often the performance bott...
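For a rough sense of the parameter-communication cost this abstract analyzes, the standard ring all-reduce used in the data-parallel dimension moves about 2(p-1)/p times the message size per device per step. The parameter counts and precision below are illustrative assumptions, not figures from the paper.

```python
def ring_allreduce_bytes_per_device(param_count, bytes_per_param=2, world_size=8):
    """Bytes each device sends (and receives) in one gradient all-reduce.

    Ring all-reduce cost: 2 * (p - 1) / p * message_size per device,
    where p is the number of participating devices.
    """
    message = param_count * bytes_per_param
    return 2 * (world_size - 1) / world_size * message
```

For a hypothetical 1B-parameter model in fp16 across 8 devices, each device moves 3.5 GB of gradient traffic per synchronization step, which is why communication often bottlenecks multidimensional parallel training.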
Area-NeRF: Area-based Neural Radiance Fields
Image Processing, Computer Vision and Machine Learning (ICICML), International Conference on
Authors: Zonxin Ye; Wenyu Li; Peng Qiao; Yong Dou. Affiliation: National Key Laboratory of Parallel and Distributed Computing, School of Computer, National University of Defense Technology, Changsha, China
Neural Radiance Field (NeRF) has received widespread attention for its photo-realistic novel view synthesis quality. Current methods mainly represent the scene based on point sampling of ray casting, ignoring the infl...
Efficient Large Models Fine-tuning on Commodity Servers via Memory-balanced Pipeline Parallelism
IEEE International Conference on High Performance Computing and Communications (HPCC)
Authors: Yujie Liu; Zhiquan Lai; Weijie Liu; Wei Wang; Dongsheng Li. Affiliation: National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha, China
Large models have achieved impressive performance in many downstream tasks. Using pipeline parallelism to fine-tune large models on commodity GPU servers is an important way to make the excellent performance of large ...
Highly Parallelized Reinforcement Learning Training with Relaxed Assignment Dependencies
arXiv, 2025
Authors: He, Zhouyu; Qiao, Peng; Li, Rongchun; Dou, Yong; Tan, Yusong. Affiliations: College of Computer Science and Technology, National University of Defense Technology, China; National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China
As the demands for superior agents grow, the training complexity of Deep Reinforcement Learning (DRL) becomes higher. Thus, accelerating training of DRL has become a major research focus. Dividing the DRL training pro...
DaMSTF: Domain Adversarial Learning Enhanced Meta Self-Training for Domain Adaptation
arXiv, 2023
Authors: Lu, Menglong; Huang, Zhen; Zhao, Yunxiang; Tian, Zhiliang; Liu, Yang; Li, Dongsheng. Affiliations: National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China; Beijing Institute of Biotechnology, China
Self-training emerges as an important research line on domain adaptation. By taking the model's prediction as the pseudo labels of the unlabeled data, self-training bootstraps the model with pseudo instances in the t...
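The self-training loop described in the abstract's opening can be sketched minimally: confident predictions on unlabeled data are promoted to pseudo labels and folded into the training set. The nearest-centroid classifier, the softmax-over-distances confidence, and the threshold below are illustrative stand-ins, not DaMSTF's meta self-training or its adversarial component.

```python
import numpy as np

def self_train(X_lab, y_lab, X_unlab, threshold=0.8, rounds=3):
    """Basic self-training with a nearest-centroid pseudo-labeler.

    Assumes binary labels {0, 1}. Each round: fit centroids on the
    current labeled pool, pseudo-label unlabeled points whose softmax
    confidence clears the threshold, and absorb them into the pool.
    """
    X, y = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])
        d = np.linalg.norm(X_unlab[:, None, :] - centroids[None], axis=2)
        prob = np.exp(-d) / np.exp(-d).sum(axis=1, keepdims=True)
        conf, pred = prob.max(axis=1), prob.argmax(axis=1)
        keep = conf >= threshold          # only confident pseudo labels
        if not keep.any():
            break
        X = np.concatenate([X, X_unlab[keep]])
        y = np.concatenate([y, pred[keep]])
        X_unlab = X_unlab[~keep]          # remaining unlabeled pool
    return X, y
```

The risk this simple loop carries, and which the paper's line of work addresses, is that early wrong pseudo labels get reinforced in later rounds.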