
Refine Search Results

Document Type

  • 3,494 conference papers
  • 1,727 journal articles
  • 8 books

Collection Scope

  • 5,229 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 4,015 Engineering
    • 3,299 Computer Science and Technology...
    • 1,382 Software Engineering
    • 917 Electrical Engineering
    • 651 Information and Communication Engineering
    • 277 Control Science and Engineering
    • 206 Electronic Science and Technology (...
    • 147 Mechanical Engineering
    • 140 Bioengineering
    • 87 Mechanics (...
    • 78 Power Engineering and Engineering Therm...
    • 69 Materials Science and Engineering (...
    • 63 Instrument Science and Technology
    • 63 Biomedical Engineering (...
    • 51 Chemical Engineering and Technology
    • 34 Civil Engineering
    • 34 Cyberspace Security
    • 31 Architecture
  • 984 Science
    • 553 Mathematics
    • 229 Physics
    • 184 Biology
    • 95 Systems Science
    • 86 Statistics (...
    • 81 Chemistry
  • 519 Management
    • 373 Management Science and Engineering (...
    • 167 Library, Information and Archival Manag...
    • 119 Business Administration
  • 116 Medicine
    • 84 Clinical Medicine
    • 46 Basic Medicine (...
  • 58 Law
    • 49 Sociology
  • 39 Economics
    • 37 Applied Economics
  • 27 Education
  • 27 Agronomy
  • 16 Literature
  • 7 Military Science
  • 5 Art

Topics

  • 166 distributed comp...
  • 158 computational mo...
  • 131 cloud computing
  • 124 concurrent compu...
  • 96 computer archite...
  • 88 deep learning
  • 87 parallel process...
  • 85 laboratories
  • 76 optimization
  • 75 protocols
  • 74 grid computing
  • 73 scalability
  • 70 servers
  • 70 hardware
  • 69 training
  • 68 fault tolerance
  • 65 neural networks
  • 65 machine learning
  • 64 application soft...
  • 64 distributed proc...

Institutions

  • 259 natl univ def te...
  • 199 natl univ def te...
  • 177 natl univ def te...
  • 169 national laborat...
  • 133 science and tech...
  • 111 college of compu...
  • 101 univ stuttgart i...
  • 97 univ stuttgart i...
  • 88 national laborat...
  • 79 national laborat...
  • 76 shanghai jiao to...
  • 75 natl univ def te...
  • 62 univ stuttgart i...
  • 62 institute of par...
  • 59 natl univ def te...
  • 57 natl lab paralle...
  • 50 institute of par...
  • 49 institute for pa...
  • 45 natl univ def te...
  • 45 institute for pa...

Authors

  • 177 chen haibo
  • 175 dou yong
  • 156 rothermel kurt
  • 150 li dongsheng
  • 135 liu jie
  • 128 wang huaimin
  • 117 wang yijie
  • 107 mitschang bernha...
  • 81 luo zhigang
  • 80 peng yuxing
  • 75 wang ji
  • 73 duerr frank
  • 72 kosec gregor
  • 71 zang binyu
  • 71 wang xiaodong
  • 62 kurt rothermel
  • 61 wang tao
  • 60 zhang xiang
  • 55 wang qinglin
  • 53 huang zhen

Language

  • 5,059 English
  • 91 Other
  • 75 Chinese
  • 3 German
  • 2 French
  • 1 Italian
Search condition: Institution = "Parallel and Distributed"
5,229 records; showing results 11-20
Training large-scale language models with limited GPU memory: a survey
Frontiers of Information Technology & Electronic Engineering, 2025, Vol. 26, No. 3, pp. 309-331
Authors: Yu TANG, Linbo QIAO, Lujia YIN, Peng LIANG, Ao SHEN, Zhilin YANG, Lizhi ZHANG, Dongsheng LI (National Key Laboratory of Parallel and Distributed Computing, College of Computer, National University of Defense Technology, Changsha 410073, China)
Large-scale models have gained significant attention in a wide range of fields, such as computer vision and natural language processing, due to their effectiveness across various ..., a notable hurdle in training these l...
Leaders and Collaborators: Addressing Sparse Reward Challenges in Multi-Agent Reinforcement Learning
IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE, 2025, Vol. 9, No. 2, pp. 1976-1989
Authors: Sun, Shaoqi; Liu, Hui; Xu, Kele; Ding, Bo (Natl Univ Def Technol, Natl Key Lab Parallel & Distributed Proc, Changsha 410003, Peoples R China)
Cooperative multi-agent reinforcement learning (MARL) has emerged as an effective tool for addressing complex control tasks. However, sparse team rewards present significant challenges for MARL, leading to low explora...
A Parallel Radial Basis Probabilistic Neural Network for scalable data mining in distributed memory machines
IEEE 24th International Conference on Tools with Artificial Intelligence (ICTAI)
Authors: Kokkinos, Yiannis; Margaritis, Konstantinos (Univ Macedonia, Parallel & Distributed Proc Lab, Dept Appl Informat, Thessaloniki 54006, Greece)
This work presents scalable algorithms for the basic construction of parallel Radial Basis Probabilistic Neural Networks. The final goal is to build a neural network that can be efficiently implemented in distributed memo...
Memory-efficient tensor parallelism for long-sequence Transformer training
FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2025, pp. 1-18
Authors: Liang, Peng; Qiao, Linbo; Shi, Yanqi; Zheng, Hao; Tang, Yu; Li, Dongsheng (Natl Univ Def Technol, Coll Comp Sci & Technol, Natl Key Lab Parallel & Distributed Comp, Changsha 410073, Peoples R China)
Transformer-based models like large language models (LLMs) have attracted significant attention in recent years due to their superior performance. A long sequence of input tokens is essential for industrial LLMs to pr...
DPFS: A Distributed Parallel File System
30th International Conference on Parallel Processing (ICPP 01)
Authors: Shen, XH; Choudhary, A (Northwestern Univ, Dept Elect & Comp Engn, Ctr Parallel & Distributed Comp, Evanston IL 60208, USA)
One of the challenges brought by large-scale scientific applications is how to avoid remote storage access by collectively using enough local storage resources to hold the huge amount of data generated by the simulation while...
A prediction-based parallel replication algorithm in distributed storage system
4th International Conference on Grid and Cooperative Computing - GCC 2005
Authors: Wang, Yijie; Zhang, Xiaoming (National Laboratory for Parallel and Distributed Processing, Institute of Computer, National University of Defense Technology, Changsha 410073, China)
Data replication can be used to reduce bandwidth consumption and access latency in distributed systems where users require remote access to large data objects. In this paper, according to the intrinsic characterist...
An intelligent mesh-smoothing method with graph neural networks
Frontiers of Information Technology & Electronic Engineering, 2025, Vol. 26, No. 3, pp. 367-384
Authors: Zhichao WANG, Xinhai CHEN, Junjun YAN, Jie LIU (Science and Technology on Parallel and Distributed Processing Laboratory, National University of Defense Technology, Changsha 410073, China; Laboratory of Digitizing Software for Frontier Equipment, National University of Defense Technology, Changsha 410073, China)
In computational fluid dynamics (CFD), mesh-smoothing methods are widely used to refine the mesh quality for achieving high-precision numerical ..., optimization-based smoothing is used for high-quality mesh smoothing, bu...
NSC-YOLOv8: A Small Target Detection Method for UAV-Acquired Images Based on Self-Adaptive Embedding
ELECTRONICS, 2025, Vol. 14, No. 8, pp. 1548-1548
Authors: Chen, Dongmin; Chen, Danyang; Zhong, Cheng; Zhan, Feng (Guangxi Univ, Sch Comp Elect & Informat, Nanning 530004, Peoples R China; Key Lab Parallel Distributed & Intelligent Comp Gu, Nanning 530004, Peoples R China)
Existing drone image processing algorithms for small target detection in Unmanned Aerial Vehicle (UAV) aerial images struggle with challenges like missed detection of small objects, information loss from downsampling,...
Few-Shot Object Detection for Remote Sensing Images via Pseudo-Sample Generation and Feature Enhancement
APPLIED SCIENCES-BASEL, 2025, Vol. 15, No. 8, pp. 4477-4477
Authors: Huang, Zhaoguo; Chen, Danyang; Zhong, Cheng (Guangxi Univ, Sch Comp Elect & Informat, Nanning 530004, Peoples R China; Key Lab Parallel Distributed & Intelligent Comp Gu, Nanning 530004, Peoples R China)
Few-shot object detection (FSOD) based on fine-tuning is essential for analyzing optical remote sensing images. However, existing methods mainly focus on natural images and overlook the scale variations in remote sens...
Optimizing Fine-Tuning in Quantized Language Models: An In-Depth Analysis of Key Variables
Computers, Materials & Continua, 2025, Vol. 82, No. 1, pp. 307-325
Authors: Ao Shen, Zhiquan Lai, Dongsheng Li, Xiaoyu Hu (National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, Changsha 410073, China; Strategic Assessments and Consultation Institute, Academy of Military Science, Beijing 100091, China)
Large-scale Language Models (LLMs) have achieved significant breakthroughs in Natural Language Processing (NLP), driven by the pre-training and fine-tuning ... this approach allows models to specialize in specific tasks w...