
Refine Search Results

Document Type

  • 1,740 conference papers
  • 23 journal articles
  • 7 books

Holdings

  • 1,770 electronic documents
  • 0 print holdings

Date Distribution

Subject Classification

  • 1,209 Engineering
    • 1,099 Computer Science and Technology...
    • 696 Software Engineering
    • 221 Information and Communication Engineering
    • 163 Electrical Engineering
    • 122 Electronic Science and Technology (...
    • 97 Control Science and Engineering
    • 82 Bioengineering
    • 61 Power Engineering and Engineering Thermo...
    • 49 Mechanical Engineering
    • 46 Biomedical Engineering (...
    • 31 Transportation Engineering
    • 30 Optical Engineering
    • 29 Instrument Science and Technology
    • 26 Environmental Science and Engineering (...
    • 24 Chemical Engineering and Technology
    • 20 Cyberspace Security
    • 17 Mechanics (...
    • 17 Materials Science and Engineering (...
  • 497 Science
    • 320 Mathematics
    • 96 Biology
    • 83 Systems Science
    • 78 Physics
    • 72 Statistics (...
    • 34 Chemistry
  • 271 Management
    • 184 Management Science and Engineering (...
    • 121 Library, Information and Archives Manag...
    • 117 Business Administration
  • 62 Medicine
    • 55 Clinical Medicine
  • 36 Law
    • 35 Sociology
  • 33 Economics
    • 33 Applied Economics
  • 8 Education
  • 6 Agriculture
  • 3 Literature
  • 2 Military Science

Topics

  • 198 computer archite...
  • 80 hardware
  • 79 computational mo...
  • 66 high performance...
  • 57 cloud computing
  • 50 grid computing
  • 41 distributed comp...
  • 41 field programmab...
  • 40 bandwidth
  • 37 kernel
  • 37 graphics process...
  • 34 resource managem...
  • 34 computer network...
  • 33 performance eval...
  • 32 throughput
  • 31 computer science
  • 31 application soft...
  • 31 analytical model...
  • 30 program processo...
  • 29 supercomputers

Institutions

  • 26 college of compu...
  • 19 university of ch...
  • 14 school of comput...
  • 10 college of compu...
  • 9 school of comput...
  • 9 college of compu...
  • 8 school of comput...
  • 7 school of data a...
  • 7 school of comput...
  • 7 institute of com...
  • 6 computer network...
  • 6 tsinghua univers...
  • 6 school of cyber ...
  • 6 changsha univers...
  • 5 university of sc...
  • 5 skl of computer ...
  • 5 computer science...
  • 5 school of comput...
  • 5 hubei province k...
  • 5 zhongguancun lab...

Authors

  • 11 duan yucong
  • 7 zhang tao
  • 7 li kenli
  • 7 xu xiaolong
  • 6 wang wei
  • 6 gao guang r.
  • 5 li peng
  • 5 yunquan zhang
  • 5 liu qin
  • 5 wang dong
  • 5 bader david a.
  • 5 liu ruicheng
  • 5 zhang rui
  • 5 wan shouhong
  • 5 wu jigang
  • 5 panda dhabaleswa...
  • 5 zhang jie
  • 5 wang xiaoliang
  • 5 chen long
  • 5 li wei

Language

  • 1,717 English
  • 49 Other
  • 3 Chinese
  • 1 Polish
Search criteria: Any field = "21st International Symposium on Computer Architecture and High Performance Computing"
1,770 records; showing results 31-40
MeHyper: Accelerating Hypergraph Neural Networks by Exploring Implicit Dataflows
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Zhao, Wenju; Yao, Pengcheng; Chen, Dan; Zheng, Long; Liao, Xiaofei; Wang, Qinggang; Ma, Shaobo; Li, Yu; Liu, Haifeng; Xiao, Wenjing; Sun, Yufei; Zhu, Bing; Jin, Hai; Xue, Jingling
Affiliations: Huazhong University of Science and Technology, National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Wuhan 430074, China; National University of Singapore, School of Computing, 119077, Singapore; Guangxi University, School of Computer, Electronics and Information, Nanning 530004, China; University of New South Wales, School of Computer Science and Engineering, Sydney NSW 2052, Australia
Hypergraph Neural Networks (HGNNs) are increasingly utilized to analyze complex inter-entity relationships. Traditional HGNN systems, based on a hyperedge-centric dataflow model, independently process aggregation task...
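As a rough illustration of the hyperedge-centric aggregation pattern this abstract refers to, the Python sketch below performs the usual two-stage pass: vertex features are gathered into each hyperedge, then hyperedge features are scattered back to member vertices. It is not taken from the paper; all names and the mean/sum choices are assumptions.

import numpy as np

def hyperedge_centric_aggregate(features, hyperedges):
    """features: (num_vertices, dim); hyperedges: list of vertex-id lists."""
    # Stage 1: gather member-vertex features into each hyperedge (mean).
    edge_feats = [features[list(vs)].mean(axis=0) for vs in hyperedges]
    # Stage 2: scatter each hyperedge feature back to its member vertices (sum).
    out = np.zeros_like(features)
    for e_feat, vs in zip(edge_feats, hyperedges):
        for v in vs:
            out[v] += e_feat
    return out

# Toy example: 4 vertices, 2 hyperedges.
x = np.random.rand(4, 8)
print(hyperedge_centric_aggregate(x, [[0, 1, 2], [2, 3]]).shape)  # (4, 8)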
LSQCA: Resource-Efficient Load/Store Architecture for Limited-Scale Fault-Tolerant Quantum Computing
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Kobori, Takumi; Suzuki, Yasunari; Ueno, Yosuke; Tanimoto, Teruo; Todo, Synge; Tokunaga, Yuuki
Affiliations: The University of Tokyo, Japan; NTT Corporation, Japan; RIKEN, Japan; Kyushu University, Japan
Current fault-tolerant quantum computer (FTQC) architectures utilize several encoding techniques to enable reliable logical operations with restricted qubit connectivity. However, such logical operations demand additi...
Marching Page Walks: Batching and Concurrent Page Table Walks for Enhancing GPU Throughput
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Lee, Jiwon; Ko, Gun; Yoon, Myung Kuk; Jeong, Ipoom; Oh, Yunho; Ro, Won Woo
Affiliations: Ewha Womans University, Department of Computer Science and Engineering, Seoul, Republic of Korea; Yonsei University, Department of System Semiconductor Engineering, Seoul, Republic of Korea; Yonsei University, Department of Electrical and Electronic Engineering, Seoul, Republic of Korea; Korea University, School of Electrical Engineering, Seoul, Republic of Korea
Virtual memory, with the support of address translation hardware, is a key technique in expanding programmability and memory management in GPUs. However, the nature of the GPU execution model heavily pressures its tra...
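For context on what a page table walk involves, here is a minimal Python sketch of a generic 4-level radix walk with one dependent memory access per level, the pointer-chasing pattern that the batching described above targets. The layout constants and the read_entry helper are illustrative assumptions, not the paper's design.

PAGE_SIZE = 4096
LEVELS = 4
BITS_PER_LEVEL = 9  # 512 entries per table

def page_table_walk(root, vaddr, read_entry):
    """read_entry(table, index) returns the next-level table, or a frame number at the last level."""
    node = root
    for level in reversed(range(LEVELS)):
        index = (vaddr >> (12 + BITS_PER_LEVEL * level)) & (2**BITS_PER_LEVEL - 1)
        node = read_entry(node, index)   # one dependent memory access per level
        if node is None:
            raise KeyError("page fault")
    return node * PAGE_SIZE + (vaddr & (PAGE_SIZE - 1))

# Toy table tree mapping virtual page 0 to physical frame 42.
root = {0: {0: {0: {0: 42}}}}
print(hex(page_table_walk(root, 0x0, lambda table, i: table.get(i))))  # 0x2a000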
To Cross, or Not to Cross Pages for Prefetching?
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Vavouliotis, Georgios; Torrents, Marti; Grot, Boris; Kalaitzidis, Kleovoulos; Peled, Leeor; Casas, Marc
Affiliations: Barcelona Supercomputing Center, Spain; Computing Systems Lab, Huawei Zurich Research Center, Switzerland; University of Edinburgh, United Kingdom; Boole Lab, Huawei Tel-Aviv Research Center, Israel; Universitat Politecnica de Catalunya, Spain
Despite processor vendors reporting that cache prefetchers operating with virtual addresses are permitted to cross page boundaries, academia is focused on optimizing cache prefetching for patterns within page boundari...
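The page-boundary decision in the title can be shown in a few lines of Python: a simple stride prefetcher either drops or issues a candidate that falls in the next 4 KiB page. This is a generic sketch under assumed parameters, not the policy studied in the paper.

PAGE_SIZE = 4096

def next_prefetch(last_addr, stride, cross_pages=False):
    candidate = last_addr + stride
    same_page = (candidate // PAGE_SIZE) == (last_addr // PAGE_SIZE)
    if same_page or cross_pages:
        return candidate   # issue the prefetch
    return None            # boundary-respecting prefetchers drop it here

print(next_prefetch(0x1FC0, 0x80))                    # None: crosses into the next page
print(next_prefetch(0x1FC0, 0x80, cross_pages=True))  # 8256 (0x2040)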
InstAttention: In-Storage Attention Offloading for Cost-Effective Long-Context LLM Inference
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Pan, Xiurui; Li, Endian; Li, Qiao; Liang, Shengwen; Shan, Yizhou; Zhou, Ke; Luo, Yingwei; Wang, Xiaolin; Zhang, Jie
Affiliations: Peking University, China; University of Electronic Science and Technology of China, China; Institute of Computing Technology, Chinese Academy of Sciences, China; Huawei Cloud, China; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, China
The widespread adoption of Large Language Models (LLMs) marks a significant milestone in generative AI. Nevertheless, the increasing context length and batch size in offline LLM inference escalate the memory requirement of the...
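To see why long contexts and large batches blow up the memory requirement mentioned above, a back-of-the-envelope KV-cache calculation helps. The model shape below is an assumption (roughly a 7B-parameter decoder in FP16), not a configuration taken from the paper.

def kv_cache_bytes(layers=32, kv_heads=32, head_dim=128,
                   seq_len=32_768, batch=8, bytes_per_elem=2):
    # Keys and values are both cached, hence the factor of 2.
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_elem

print(kv_cache_bytes() / 2**30, "GiB")  # 128.0 GiB: far more than a single GPU's HBM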
HT-NoC: Reconfigurable High Throughput Network-on-Chip for AI Dataflow Accelerators
21st International Symposium on Applied Reconfigurable Computing, ARC 2025
Authors: Zhiri, Mohamed Amine; Krichene, Hana; Sandionigi, Chiara; Pillement, Sébastien
Affiliations: Université Paris-Saclay, CEA, LIST, Palaiseau 91120, France; Université Grenoble Alpes, CEA, LIST, Grenoble 38000, France; Nantes Université, CNRS, IETR UMR 6164, Nantes 44000, France
Fully Connected (FC) layers are a bottleneck for many Deep Neural Network (DNN) algorithms due to their high bandwidth requirements, which makes their hardware acceleration particularly challenging. In this paper, we...
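The bandwidth argument for FC layers can be made concrete with a short arithmetic-intensity estimate: at batch size 1 every weight is fetched for a single multiply-accumulate, so the layer performs roughly one floating-point operation per byte of weight traffic. The numbers below are illustrative assumptions, not figures from the paper.

def fc_arithmetic_intensity(in_features, out_features, bytes_per_weight=2):
    macs = in_features * out_features                 # one MAC per weight at batch size 1
    flops = 2 * macs                                  # multiply + add
    weight_bytes = in_features * out_features * bytes_per_weight
    return flops / weight_bytes                       # FLOPs per byte of weight traffic

print(fc_arithmetic_intensity(4096, 4096))  # 1.0 -> heavily bandwidth-bound on most hardware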
MLPerf Power: Benchmarking the Energy Efficiency of Machine Learning Systems from μWatts to MWatts for Sustainable AI
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Tschand, Arya; Rajan, Arun Tejusve Raghunath; Idgunji, Sachin; Ghosh, Anirban; Holleman, Jeremy; Kiraly, Csaba; Ambalkar, Pawan; Borkar, Ritika; Chukka, Ramesh; Cockrell, Trevor; Curtis, Oliver; Fursin, Grigori; Hodak, Miro; Kassa, Hiwot; Lokhmotov, Anton; Miskovic, Dejan; Pan, Yuechao; Manmathan, Manu Prasad; Raymond, Liz; St. John, Tom; Suresh, Arjun; Taubitz, Rowan; Zhan, Sean; Wasson, Scott; Kanter, David; Reddi, Vijay Janapa
Affiliations: Meta, United States; Harvard University, United States; NVIDIA, United States; UNC Charlotte / Syntiant, United States; Codex; Dell, United States; Intel, United States; SMC, Japan; FlexAI / cTuning; AMD, United States; KRAI; Google, United States; Decompute; GATE Overflow, India; MLCommons
Rapid adoption of machine learning (ML) technologies has led to a surge in power consumption across diverse systems, from tiny IoT devices to massive datacenter clusters. Benchmarking the energy efficiency of these sy...
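The core metric behind energy-efficiency benchmarking of this kind is simply work done per joule, derived from measured throughput and average power. The sketch below shows the arithmetic with made-up placeholder numbers; it does not reproduce MLPerf Power's measurement methodology.

def inferences_per_joule(throughput_inf_per_s, avg_power_watts):
    # 1 watt = 1 joule per second, so the units reduce to inferences per joule.
    return throughput_inf_per_s / avg_power_watts

print(inferences_per_joule(12_000, 350))  # datacenter-class accelerator (placeholder values)
print(inferences_per_joule(25, 0.002))    # tinyML-class device (placeholder values)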
LAD: Efficient Accelerator for Generative Inference of LLM with Locality Aware Decoding
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Wang, Haoran; Li, Yuming; Xu, Haobo; Wang, Ying; Liu, Liqi; Yang, Jun; Han, Yinhe
Affiliations: Institute of Computing Technology, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China
Large Language Models (LLMs) have emerged as the cornerstone of content generation applications due to their ability to capture relations between newly generated tokens and the full preceding context. However, this abi...
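As background for the "full preceding context" point, the following NumPy sketch runs one decode step of attention over a cached context: every cached key/value participates, so per-token cost and data movement grow with context length. It is a generic illustration, not LAD's dataflow.

import numpy as np

def decode_step(q, k_cache, v_cache):
    """q: (dim,); k_cache, v_cache: (context_len, dim)."""
    scores = k_cache @ q / np.sqrt(q.shape[0])   # one score per cached token
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ v_cache                     # weighted sum over the whole context

k_cache = np.random.rand(1024, 64)
v_cache = np.random.rand(1024, 64)
print(decode_step(np.random.rand(64), k_cache, v_cache).shape)  # (64,)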
PROCA: Programmable Probabilistic Processing Unit Architecture with Accept/Reject Prediction & Multicore Pipelining for Causal Inference
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Fu, Yihan; Fan, Anjunyi; Yue, Wenshuo; Zhao, Hongxiao; Shi, Daijing; Wu, Qiuping; Li, Jiayi; Zhang, Xiangyu; Tao, Yaoyu; Yang, Yuchao; Yan, Bonan
Affiliations: Independent Researcher, United States; Peking University, Beijing Advanced Innovation Center for Integrated Circuits, School of Integrated Circuits, Beijing, China; Peking University, School of Electronic and Computer Engineering, Shenzhen, China; Chinese Institute for Brain Research, Beijing, China
Causal inference is an important field in data science and cognitive artificial intelligence. It requires the construction of complex probabilistic models to describe the causal relationships between random variables....
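The accept/reject step named in the title is the standard kernel of Metropolis-Hastings sampling over a probabilistic model. The Python sketch below shows that kernel on a toy target; it illustrates the operation being accelerated, not PROCA's hardware algorithm.

import math, random

def mh_step(x, log_prob, proposal_std=0.5):
    candidate = x + random.gauss(0.0, proposal_std)
    log_alpha = log_prob(candidate) - log_prob(x)       # log acceptance ratio (symmetric proposal)
    if random.random() < math.exp(min(0.0, log_alpha)):
        return candidate                                # accept
    return x                                            # reject: keep the current sample

# Toy target: a standard normal (log-density up to a constant).
samples, x = [], 0.0
for _ in range(10_000):
    x = mh_step(x, lambda v: -0.5 * v * v)
    samples.append(x)
print(sum(samples) / len(samples))  # close to 0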
Piccolo: Large-Scale Graph Processing with Fine-Grained In-Memory Scatter-Gather
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Shin, Changmin; Song, Jaeyong; Jang, Hongsun; Kim, Dogeun; Sung, Jun; Kwon, Taehee; Ju, Jae Hyung; Liu, Frank; Choi, Yeonkyu; Lee, Jinho
Affiliations: Seoul National University, Department of Electrical and Computer Engineering, Republic of Korea; Old Dominion University, School of Data Science, United States; Samsung Electronics, Republic of Korea
Graph processing requires irregular, fine-grained random access patterns incompatible with contemporary off-chip memory architecture, leading to inefficient data access. This inefficiency makes graph processing an ext...
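The scatter-gather access pattern this abstract refers to shows up clearly in a push-style graph kernel: each vertex scatters a contribution that is gathered at irregular, data-dependent offsets. The CSR-based PageRank-style iteration below is a plain software sketch and says nothing about Piccolo's in-memory hardware.

import numpy as np

def scatter_gather_step(indptr, indices, rank, damping=0.85):
    n = len(rank)
    out = np.full(n, (1.0 - damping) / n)
    for u in range(n):
        degree = indptr[u + 1] - indptr[u]
        if degree == 0:
            continue
        contrib = damping * rank[u] / degree          # value scattered by vertex u
        for v in indices[indptr[u]:indptr[u + 1]]:
            out[v] += contrib                         # gathered at an irregular offset
    return out

# Tiny 3-vertex cycle in CSR form: 0 -> 1 -> 2 -> 0.
indptr = np.array([0, 1, 2, 3])
indices = np.array([1, 2, 0])
print(scatter_gather_step(indptr, indices, np.full(3, 1 / 3)))  # ~[0.333, 0.333, 0.333]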