
Refine Results

Document Type

  • 625 conference papers
  • 576 journal articles

Collection

  • 1,201 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 1,162 Engineering
    • 929 Computer Science and Technology...
    • 430 Electrical Engineering
    • 206 Software Engineering
    • 179 Information and Communication Engineering
    • 45 Control Science and Engineering
    • 34 Electronic Science and Technology...
    • 25 Materials Science and Engineering...
    • 21 Mechanics...
    • 19 Power Engineering and Engineering Therm...
    • 17 Nuclear Science and Technology
    • 15 Mechanical Engineering
    • 10 Instrument Science and Technology
    • 10 Surveying and Mapping Science and Technology
    • 10 Bioengineering
    • 9 Biomedical Engineering...
    • 8 Petroleum and Natural Gas Engineering
    • 5 Chemical Engineering and Technology
    • 5 Naval Architecture and Ocean Engineering
  • 154 Science
    • 73 Physics
    • 29 Mathematics
    • 25 Biology
    • 18 Chemistry
    • 15 Geophysics
    • 7 Systems Science
  • 51 Medicine
    • 37 Clinical Medicine
    • 13 Basic Medicine...
  • 35 Management
    • 29 Management Science and Engineering...
    • 6 Business Administration
    • 4 Library, Information and Archives Man...
  • 5 Education
    • 4 Education
  • 3 Economics
  • 3 Literature
  • 3 Agriculture
  • 1 Art

Topics

  • 40 deep learning
  • 36 cloud computing
  • 24 training
  • 24 gpu
  • 23 task analysis
  • 18 fpga
  • 17 blockchain
  • 16 machine learning
  • 14 computational mo...
  • 14 feature extracti...
  • 13 anomaly detectio...
  • 12 throughput
  • 12 parallel computi...
  • 11 outlier detectio...
  • 11 scalability
  • 11 image classifica...
  • 11 scheduling
  • 9 reinforcement le...
  • 9 neural networks
  • 9 optimization

Institutions

  • 259 natl univ def te...
  • 178 natl univ def te...
  • 177 natl univ def te...
  • 75 natl univ def te...
  • 57 natl lab paralle...
  • 45 natl univ def te...
  • 44 natl univ def te...
  • 38 natl univ def te...
  • 34 natl univ def te...
  • 33 natl univ def te...
  • 33 natl univ def te...
  • 28 natl univ def te...
  • 28 natl univ def te...
  • 22 natl key lab par...
  • 20 natl univ def te...
  • 18 natl univ def te...
  • 17 natl univ def te...
  • 17 natl univ def te...
  • 16 hunan univ coll ...
  • 15 natl univ def te...

Authors

  • 126 dou yong
  • 95 liu jie
  • 89 li dongsheng
  • 77 wang yijie
  • 72 wang huaimin
  • 64 luo zhigang
  • 56 wang xiaodong
  • 49 zhang xiang
  • 38 zhou xingming
  • 37 lan long
  • 37 wang qinglin
  • 36 xing zuocheng
  • 36 zhang yang
  • 35 su jinshu
  • 35 wang ji
  • 34 lv shaohe
  • 34 xu kele
  • 33 peng yuxing
  • 31 wang tao
  • 29 huang zhen

Language

  • 1,176 English
  • 21 Other
  • 2 Chinese
  • 1 German
  • 1 French
  • 1 Italian

Search criteria: Institution = "Natl Key Lab Parallel & Distributed Proc"
1,201 records; showing results 101-110
LOFS: A Lightweight Online File Storage Strategy for Effective Data Deduplication at Network Edge
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2022, Vol. 33, No. 10, pp. 2263-2276
Authors: Cheng, Geyao; Guo, Deke; Luo, Lailong; Xia, Junxu; Gu, Siyuan. Affiliations: Natl Univ Def Technol Sci & Technol Informat Syst Engn Lab Changsha 410073 Hunan Peoples R China; Natl Univ Def Technol Natl Lab Parallel & Distributed Proc Changsha 410073 Hunan Peoples R China
Edge computing responds to users' requests with low latency by storing the relevant files at the network edge. Various data deduplication technologies are currently employed at edge to eliminate redundant data chu...
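
As a side note on the technique named above: chunk-level deduplication stores each unique chunk once and replaces repeats with references. The following is a minimal Python sketch of that idea, not the LOFS strategy itself; the 4 KB fixed chunk size and the in-memory dict store are assumptions made only for illustration.

```python
# Generic fixed-size chunk deduplication sketch (illustrative, not the LOFS algorithm).
import hashlib

CHUNK_SIZE = 4096  # assumed chunk size for the example

def dedup_store(data: bytes, store: dict[str, bytes]) -> list[str]:
    """Split data into fixed-size chunks, keep only chunks not already in the store,
    and return the list of chunk fingerprints that reconstructs the file."""
    recipe = []
    for off in range(0, len(data), CHUNK_SIZE):
        chunk = data[off:off + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in store:          # new chunk: store it once
            store[fp] = chunk
        recipe.append(fp)            # duplicate chunks only add a reference
    return recipe

def restore(recipe: list[str], store: dict[str, bytes]) -> bytes:
    return b"".join(store[fp] for fp in recipe)

if __name__ == "__main__":
    store: dict[str, bytes] = {}
    blob = b"A" * 10000 + b"B" * 10000 + b"A" * 10000   # deliberately redundant content
    recipe = dedup_store(blob, store)
    assert restore(recipe, store) == blob
    print(f"{len(recipe)} chunk references, {len(store)} unique chunks stored")
```
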
High performance dilated convolutions on multi-core DSPs
CCF TRANSACTIONS ON HIGH PERFORMANCE COMPUTING, 2024, Vol. 6, No. 1, pp. 78-93
Authors: Wang, Yang; Wang, Qinglin; Pei, Xiangdong; Mei, Songzhu; Li, Rongchun; Liu, Jie. Affiliations: Natl Univ Def Technol Natl Key Lab Parallel & Distributed Comp Changsha 410073 Hunan Peoples R China; Natl Univ Def Technol Coll Comp Changsha 410073 Hunan Peoples R China; Beijing Inst Astronaut Syst Engn Beijing 100076 Peoples R China
Dilated convolutions are widely used to accomplish wide receptive fields while keeping the resolution of feature maps in deep learning applications, such as semantic segmentation and object detection. However, the dat...
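
Dilated convolution widens the receptive field by spacing the kernel taps `dilation` samples apart without adding parameters. Below is a minimal NumPy reference sketch of a 1-D dilated convolution; it is illustrative only and says nothing about the multi-core DSP implementation the paper describes.

```python
# Minimal 1-D dilated convolution (illustrative reference implementation).
import numpy as np

def dilated_conv1d(x: np.ndarray, w: np.ndarray, dilation: int = 1) -> np.ndarray:
    """'Valid' 1-D convolution (cross-correlation form) whose kernel taps are spaced
    `dilation` samples apart, giving a receptive field of (len(w) - 1) * dilation + 1."""
    k = len(w)
    span = (k - 1) * dilation + 1          # effective kernel extent
    out_len = len(x) - span + 1
    y = np.zeros(out_len)
    for i in range(out_len):
        # taps read x[i], x[i + d], x[i + 2d], ...
        y[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return y

if __name__ == "__main__":
    x = np.arange(16, dtype=float)
    w = np.array([1.0, 0.0, -1.0])
    print(dilated_conv1d(x, w, dilation=1))  # standard convolution, 3-sample receptive field
    print(dilated_conv1d(x, w, dilation=4))  # same 3 taps, 9-sample receptive field
```
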
EmbRace: Accelerating Sparse Communication for Distributed Training of Deep Neural Networks
51st International Conference on Parallel Processing (ICPP)
Authors: Li, Shengwei; Lai, Zhiquan; Li, Dongsheng; Zhang, Yiming; Ye, Xiangyu; Duan, Yabo. Affiliations: Univ Def Technol Natl Key Lab Parallel & Distributed Proc Comp Coll Changsha Peoples R China; Xiamen Univ Xiamen Peoples R China
Distributed data-parallel training has been widely adopted for deep neural network (DNN) models. Although current deep learning (DL) frameworks scale well for dense models like image classification models, we find tha...
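
EmbRace concerns sparse communication in data-parallel training; its actual scheme is not reproduced here. As a generic illustration of why sparsifying gradients shrinks communication volume, the sketch below keeps only the top-k gradient entries and transmits index/value pairs; the 1% density is an arbitrary assumption.

```python
# Generic top-k gradient sparsification sketch (illustrative; not EmbRace's scheme).
import numpy as np

def sparsify_topk(grad: np.ndarray, density: float = 0.01):
    """Keep only the largest-magnitude entries; return (indices, values) to transmit."""
    flat = grad.ravel()
    k = max(1, int(density * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]    # indices of the k largest magnitudes
    return idx, flat[idx]

def densify(idx: np.ndarray, vals: np.ndarray, shape) -> np.ndarray:
    """Reconstruct a dense gradient from the sparse message on the receiving side."""
    flat = np.zeros(int(np.prod(shape)))
    flat[idx] = vals
    return flat.reshape(shape)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    grad = rng.standard_normal((1024, 256))
    idx, vals = sparsify_topk(grad, density=0.01)
    approx = densify(idx, vals, grad.shape)
    sent = idx.nbytes + vals.nbytes
    print(f"sent {sent} bytes instead of {grad.nbytes} "
          f"({sent / grad.nbytes:.1%} of the dense payload)")
```
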
Deep-to-Bottom Weights Decay: A Systemic Knowledge Review Learning Technique for Transformer Layers in Knowledge Distillation
15th International Conference on Knowledge Science, Engineering, and Management (KSEM)
Authors: Wang, Ankun; Liu, Feng; Huang, Zhen; Hu, Minghao; Li, Dongsheng; Chen, Yifan; Xie, Xinjia. Affiliations: Natl Univ Def Technol Natl Key Lab Parallel & Distributed Proc Changsha Peoples R China; Informat Res Ctr Mil Sci Beijing Peoples R China
There are millions of parameters and huge computational power consumption behind the outstanding performance of pre-trained language models in natural language processing tasks. Knowledge distillation is considered as...
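
Knowledge distillation generally trains a compact student to match a larger teacher's temperature-softened output distribution. The sketch below shows only that standard soft-target loss, not the deep-to-bottom layer-weighting technique proposed in the paper; the temperature T = 4 is an arbitrary choice.

```python
# Temperature-scaled soft-target distillation loss (generic sketch, not the paper's method).
import numpy as np

def softmax(z: np.ndarray, T: float = 1.0) -> np.ndarray:
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)      # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T: float = 4.0) -> float:
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as is conventional so gradients keep a comparable magnitude."""
    p = softmax(teacher_logits, T)              # softened teacher targets
    q = softmax(student_logits, T)              # softened student predictions
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(T * T * kl.mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    teacher = rng.standard_normal((8, 10))       # batch of 8 examples, 10 classes
    student = teacher + 0.5 * rng.standard_normal((8, 10))
    print("distillation loss:", distillation_loss(student, teacher, T=4.0))
```
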
DPSS: Dynamic Parameter Selection for Outlier Detection on Data Streams
IEEE 28th International Conference on Parallel and Distributed Systems (IEEE ICPADS)
Authors: Zhang, Ruyi; Wang, Yijie; Zhou, Haifang; Li, Bin; Xu, Hongzuo. Affiliations: Natl Univ Def Technol Coll Comp Sci & Technol Parallel & Distributed Proc Lab Changsha Peoples R China
Outlier detection on data streams identifies unusual states to sense and alarm potential risks and faults of the target systems in both the cyber and physical world. As different parameter settings of machine learning...
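
The abstract stresses that detection quality on streams depends on parameter settings. To make that concrete, the toy sketch below runs a sliding-window z-score detector at two thresholds on the same stream; the window size and thresholds are arbitrary assumptions, and DPSS's dynamic selection strategy is not reproduced.

```python
# Sliding-window z-score outlier detector (illustration of parameter sensitivity, not DPSS).
from collections import deque
import numpy as np

def detect(stream, window: int = 50, threshold: float = 3.0):
    """Flag points whose z-score w.r.t. the recent window exceeds `threshold`."""
    buf, flagged = deque(maxlen=window), []
    for i, x in enumerate(stream):
        if len(buf) == window:
            mu, sigma = np.mean(buf), np.std(buf) + 1e-9
            if abs(x - mu) / sigma > threshold:
                flagged.append(i)
        buf.append(x)
    return flagged

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    stream = rng.standard_normal(2000)
    stream[500] += 8.0                      # inject an obvious anomaly
    # The same stream yields very different alarm counts under different thresholds,
    # which is exactly why parameter selection matters on evolving streams.
    print("threshold=2.0 ->", len(detect(stream, threshold=2.0)), "alarms")
    print("threshold=3.5 ->", len(detect(stream, threshold=3.5)), "alarms")
```
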
Comprehensive Deadlock Prevention for GPU Collective Communication
20th European Conference on Computer Systems (EuroSys)
Authors: Pan, Lichen; Liu, Juncheng; Fu, Yongquan; Yuan, Jinhui; Zhang, Rongkai; Li, Pengze; Xiao, Zhen. Affiliations: Peking Univ Sch Comp Sci Beijing Peoples R China; OneFlow Res Stockholm Sweden; Natl Univ Def Technol Coll Comp Sci & Technol Natl Key Lab Parallel & Distributed Comp Changsha Peoples R China
Distributed deep neural network training necessitates efficient GPU collective communications, which are inherently susceptible to deadlocks. GPU collective deadlocks arise easily in distributed deep learning applicat...
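
A classic way collective deadlocks arise is when ranks launch the same set of collectives in different orders, so each rank blocks inside a call its peers never enter. The toy thread-based simulation below illustrates the standard remedy of agreeing on a single launch order; it is a conceptual sketch only, not the paper's prevention mechanism and not real NCCL/MPI code.

```python
# Toy simulation: avoiding collective deadlock by agreeing on a launch order (conceptual only).
import threading

NUM_RANKS = 2
# One barrier per collective; a collective "completes" only when every rank enters it.
barriers = {name: threading.Barrier(NUM_RANKS) for name in ("allreduce:A", "allreduce:B")}

def run_rank(rank: int, pending: list[str]) -> None:
    # If each rank launched `pending` in its own local order (e.g. A,B vs. B,A),
    # both would block inside different barriers forever, i.e. a deadlock.
    # Sorting by a globally agreed key gives every rank the same launch order.
    for name in sorted(pending):
        barriers[name].wait(timeout=5)
        print(f"rank {rank} finished {name}")

if __name__ == "__main__":
    # The two ranks receive the work in different orders, as can happen with
    # dynamic scheduling, yet still launch identically thanks to the canonical sort.
    t0 = threading.Thread(target=run_rank, args=(0, ["allreduce:A", "allreduce:B"]))
    t1 = threading.Thread(target=run_rank, args=(1, ["allreduce:B", "allreduce:A"]))
    t0.start(); t1.start(); t0.join(); t1.join()
```
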
ParTransgrid: A scalable parallel preprocessing tool for unstructured-grid cell-centered computational fluid dynamics applications
SOFTWARE-PRACTICE & EXPERIENCE, 2023, Vol. 53, No. 1, pp. 6-26
Authors: Zhang, Jian; Liu, Jie; Zhou, Naichun; Tang, Jing; He, Xie; Chen, Jianqiang. Affiliations: Natl Univ Def Technol Sci & Technol Parallel & Distributed Proc Lab Changsha Peoples R China; China Aerodynam Res & Dev Ctr Computat Aerodynam Inst Mianyang Sichuan Peoples R China
The development of a basic scalable preprocessing tool is the key routine to accelerate the entire computational fluid dynamics (CFD) workflow toward the exascale computing era. In this work, a parallel preprocessing ...
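
One core preprocessing step for parallel unstructured-grid CFD is splitting the cells across processes. As a simplified stand-in for the graph partitioners such tools typically rely on, the sketch below performs recursive coordinate bisection on cell centroids; it ignores connectivity, load weights, and halo construction, all of which a real tool like the one above must handle.

```python
# Recursive coordinate bisection of cell centroids (simplified stand-in for a real
# graph partitioner; ignores cell connectivity and halo/ghost construction).
import numpy as np

def rcb_partition(centroids: np.ndarray, n_parts: int) -> np.ndarray:
    """Assign each cell (row of `centroids`) to one of `n_parts` partitions."""
    part = np.zeros(len(centroids), dtype=int)

    def split(idx: np.ndarray, parts: int, base: int) -> None:
        if parts == 1:
            part[idx] = base
            return
        pts = centroids[idx]
        axis = int(np.argmax(pts.max(axis=0) - pts.min(axis=0)))  # cut along widest extent
        order = idx[np.argsort(pts[:, axis])]
        left = parts // 2
        cut = len(order) * left // parts                           # balanced split point
        split(order[:cut], left, base)
        split(order[cut:], parts - left, base + left)

    split(np.arange(len(centroids)), n_parts, 0)
    return part

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    centroids = rng.random((10_000, 3))          # 10k cells in 3-D
    part = rcb_partition(centroids, n_parts=8)
    print("cells per partition:", np.bincount(part))
```
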
The Doctrine of MEAN: Realizing Deduplication Storage at Unreliable Edge
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2023, Vol. 34, No. 10, pp. 2811-2826
Authors: Xia, Junxu; Cheng, Geyao; Luo, Lailong; Guo, Deke; Lv, Pin; Sun, Bowen. Affiliations: Natl Univ Def Technol Sci & Technol Informat Syst Engn Lab Changsha 410073 Hunan Peoples R China; Natl Univ Def Technol Sci & Technol Informat Syst Engn Lab Natl Lab Parallel & Distributed Proc Changsha 410073 Hunan Peoples R China; Guangxi Univ Sch Comp Elect & Informat Nanning 530004 Guangxi Peoples R China
Placing popular data at the network edge helps reduce the retrieval latency, but it also brings challenges to the limited edge storage space. Currently, using available yet not necessarily reliable edge resources is c...
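
Deduplication at an unreliable edge trades the space saved by keeping one copy of a chunk against the risk of losing that copy when its host fails. The sketch below hash-ranks edge nodes to place each unique chunk on r distinct nodes; the replication factor and the rendezvous-style placement rule are assumptions for illustration, not the MEAN policy.

```python
# Replicated placement of unique chunks across edge nodes (illustrative; not the MEAN policy).
import hashlib

def place_chunk(fingerprint: str, nodes: list[str], replicas: int = 2) -> list[str]:
    """Deterministically pick `replicas` distinct nodes for a chunk by ranking nodes
    on a hash of (node, fingerprint); every peer computes the same placement."""
    ranked = sorted(nodes,
                    key=lambda n: hashlib.sha256(f"{n}:{fingerprint}".encode()).hexdigest())
    return ranked[:replicas]

if __name__ == "__main__":
    nodes = [f"edge-{i}" for i in range(6)]
    fp = hashlib.sha256(b"some unique chunk").hexdigest()
    holders = place_chunk(fp, nodes, replicas=2)
    print("chunk", fp[:12], "stored on", holders)
    # Losing one holder still leaves a surviving copy; with replicas=1 (pure dedup)
    # the same failure would lose the chunk for every file that references it.
```
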
ST-PINN: A Self-Training Physics-Informed Neural Network for Partial Differential Equations
International Joint Conference on Neural Networks (IJCNN)
Authors: Yan, Junjun; Chen, Xinhai; Wang, Zhichao; Zhoui, Enqiang; Liu, Jie. Affiliations: Natl Univ Def Technol Sci & Technol Parallel & Distributed Proc Lab Changsha 410073 Peoples R China; Natl Univ Def Technol Lab Digitizing Software Frontier Equipment Changsha 410073 Peoples R China
Partial differential equations (PDEs) are an essential computational kernel in physics and engineering. With the advance of deep learning, physics-informed neural networks (PINNs), as a mesh-free method, have shown gr...
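
A physics-informed neural network turns the PDE residual, evaluated at collocation points via automatic differentiation, into a training loss. The PyTorch sketch below does this for the 1-D Poisson problem u''(x) = -sin(x) with zero boundary values; it shows only the generic PINN loss, without the self-training component the paper adds, and assumes PyTorch is available.

```python
# Minimal PINN for u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0
# (generic PINN construction; the paper's self-training scheme is not reproduced).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(0, torch.pi, 128).reshape(-1, 1).requires_grad_(True)  # collocation points
xb = torch.tensor([[0.0], [torch.pi]])                                    # boundary points

for step in range(2000):
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    residual = d2u + torch.sin(x)                           # PDE residual u'' + sin(x) = 0
    loss = (residual ** 2).mean() + (net(xb) ** 2).mean()   # physics loss + boundary loss
    opt.zero_grad()
    loss.backward()
    opt.step()

# The learned solution should approximate the analytic answer u(x) = sin(x).
print("max |u - sin(x)| =", (net(x) - torch.sin(x)).abs().max().item())
```
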
Local-to-Global Deep Clustering on Approximate Uniform Manifold
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2023, Vol. 35, No. 5, pp. 5035-5046
Authors: Wang, Tuo; Zhang, Xiang; Lan, Long; Luo, Zhigang. Affiliations: Natl Univ Def Technol Inst Quantum Informat Changsha 410073 Peoples R China; Natl Univ Def Technol State Key Lab High Performance Comp Changsha 410073 Peoples R China; Natl Univ Def Technol Coll Comp Changsha 410073 Peoples R China; Natl Univ Def Technol Sci & Technol Parallel & Distributed Proc Changsha 410073 Peoples R China
Deep clustering usually treats the clustering assignments as supervisory signals to learn a more compact representation with deep neural networks, under the guidance of clustering-oriented losses. Nevertheless, we obs...
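
Deep clustering methods commonly convert embeddings into soft cluster assignments and sharpen them into a target distribution that acts as the supervisory signal the abstract mentions. The NumPy sketch below shows that DEC-style assignment/target pair as a generic illustration; it is not the local-to-global uniform-manifold procedure of the paper.

```python
# DEC-style soft assignment and sharpened target distribution (generic illustration,
# not the paper's local-to-global manifold procedure).
import numpy as np

def soft_assign(z: np.ndarray, centers: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Student's t-kernel similarity between embeddings z (n, d) and cluster centers (k, d)."""
    d2 = ((z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)      # squared distances (n, k)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q: np.ndarray) -> np.ndarray:
    """Sharpen q so confident assignments are emphasised; used as the training target."""
    p = q ** 2 / q.sum(axis=0, keepdims=True)
    return p / p.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    z = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])  # toy embeddings
    centers = np.array([[0.0, 0.0], [3.0, 3.0]])
    q = soft_assign(z, centers)
    p = target_distribution(q)
    # KL(p || q) is the clustering loss the encoder would be trained to minimise.
    kl = np.sum(p * np.log((p + 1e-12) / (q + 1e-12))) / len(z)
    print("KL(p || q) per sample:", kl)
```
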