
Refine Search Results

Document Type

  • 32 Conference papers
  • 28 Journal articles

Collection Scope

  • 60 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 49 Engineering
    • 39 Computer Science and Technology
    • 35 Software Engineering
    • 10 Control Science and Engineering
    • 8 Information and Communication Engineering
    • 5 Cyberspace Security
    • 4 Bioengineering
    • 2 Electrical Engineering
    • 2 Civil Engineering
    • 1 Mechanics (Engineering, Sci...)
    • 1 Mechanical Engineering
    • 1 Optical Engineering
    • 1 Materials Science and Engineering (...)
    • 1 Metallurgical Engineering
    • 1 Architecture
    • 1 Transportation Engineering
    • 1 Environmental Science and Engineering (...)
  • 17 Management
    • 11 Library, Information and Archives Management
    • 9 Management Science and Engineering (...)
    • 2 Business Administration
  • 16 Science
    • 12 Mathematics
    • 4 Biology
    • 4 Statistics (Science, ...)
    • 2 Physics
    • 2 Systems Science
    • 1 Marine Science
  • 4 Law
    • 3 Sociology
    • 1 Law
  • 2 Medicine
    • 2 Clinical Medicine
  • 1 Economics
    • 1 Applied Economics
  • 1 Education
    • 1 Education

Topics

  • 4 knowledge graph
  • 3 anomaly detectio...
  • 3 graph neural net...
  • 3 machine learning
  • 2 query processing
  • 2 deep learning
  • 2 sliding mode con...
  • 2 semantics
  • 2 image recognitio...
  • 2 non-negative mat...
  • 1 parallel process...
  • 1 reinforcement le...
  • 1 ensemble learnin...
  • 1 incomplete data
  • 1 data integration
  • 1 probability dist...
  • 1 decision support...
  • 1 graph convolutio...
  • 1 deep neural netw...
  • 1 task analysis

Institutions

  • 14 zhejiang key lab...
  • 11 key lab of intel...
  • 8 zhejiang univers...
  • 6 school of softwa...
  • 6 national univers...
  • 6 college of compu...
  • 6 alibaba group
  • 4 the key lab of b...
  • 4 zju-ant group jo...
  • 4 college of compu...
  • 4 ant group
  • 3 department of co...
  • 3 the state key la...
  • 3 zhejiang lab
  • 2 huazhong univers...
  • 2 institute of blo...
  • 2 washington state...
  • 2 wuhan national l...
  • 2 school of comput...
  • 2 zhejiang police ...

Authors

  • 13 chen gang
  • 11 chen huajun
  • 7 zhang ningyu
  • 7 chen ke
  • 6 wang haobo
  • 6 deng shumin
  • 6 gang chen
  • 4 zhang wen
  • 4 hu tianlei
  • 4 huang fei
  • 4 gao yunjun
  • 3 xie pengjun
  • 3 wu sai
  • 3 jiang dawei
  • 3 ke chen
  • 3 feng lei
  • 3 zhao junbo
  • 3 zhang yichi
  • 3 wu fei
  • 3 chen zhuo

Language

  • 49 English
  • 8 Other
  • 3 Chinese

Search query: Institution = "The Key Lab of Big Data Intelligent Computing of Zhejiang Province"
60 records in total; results 1-10 are shown below.

LeapGNN: Accelerating Distributed GNN Training Leveraging Feature-Centric Model Migration
23rd USENIX Conference on File and Storage Technologies, FAST 2025
Authors: Chen, Weijian; He, Shuibing; Qu, Haoyang; Zhang, Xuechen
Affiliations: The State Key Laboratory of Blockchain and Data Security, Zhejiang University, China; Zhejiang Lab, China; Institute of Blockchain and Data Security, China; Zhejiang Key Laboratory of Big Data Intelligent Computing, China; Washington State University Vancouver, United States
Distributed training of graph neural networks (GNNs) has become a crucial technique for processing large graphs. Prevalent GNN frameworks are model-centric, necessitating the transfer of massive graph vertex features ...

GoPIM: GCN-Oriented Pipeline Optimization for PIM Accelerators
31st IEEE International Symposium on High Performance Computer Architecture, HPCA 2025
Authors: Yang, Siling; He, Shuibing; Wang, Wenjiong; Yin, Yanlong; Wu, Tong; Chen, Weijian; Zhang, Xuechen; Sun, Xian-He; Feng, Dan
Affiliations: The State Key Laboratory of Blockchain and Data Security, Zhejiang University, China; Zhejiang Lab, China; Institute of Blockchain and Data Security, China; Zhejiang Key Laboratory of Big Data Intelligent Computing, China; Washington State University Vancouver, United States; Illinois Institute of Technology, United States; Huazhong University of Science and Technology, China; Wuhan National Laboratory for Optoelectronics, China
Graph convolutional networks (GCNs) are popular for a variety of graph learning tasks. ReRAM-based processing-in-memory (PIM) accelerators are promising to expedite GCN training owing to their in-situ computing capabi...

OmniThink: Expanding Knowledge Boundaries in Machine Writing through Thinking
arXiv, 2025
Authors: Xi, Zekun; Yin, Wenbiao; Fang, Jizhan; Wu, Jialong; Fang, Runnan; Zhang, Ningyu; Jiang, Yong; Xie, Pengjun; Huang, Fei; Chen, Huajun
Affiliations: Zhejiang University, China; Tongyi Lab, Alibaba Group, China; Zhejiang Key Laboratory of Big Data Intelligent Computing, China
Machine writing with large language models often relies on retrieval-augmented generation. However, these approaches remain confined within the boundaries of the model’s predefined scope, limiting the generation of c...

LeapGNN: accelerating distributed GNN training leveraging feature-centric model migration
Proceedings of the 23rd USENIX Conference on File and Storage Technologies
Authors: Weijian Chen; Shuibing He; Haoyang Qu; Xuechen Zhang
Affiliations: The State Key Laboratory of Blockchain and Data Security, Zhejiang University and Zhejiang Lab and Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security and Zhejiang Key Laboratory of Big Data Intelligent Computing; Washington State University Vancouver
Distributed training of graph neural networks (GNNs) has become a crucial technique for processing large graphs. Prevalent GNN frameworks are model-centric, necessitating the transfer of massive graph vertex features ...

How Do LLMs Acquire New Knowledge? A Knowledge Circuits Perspective on Continual Pre-Training
arXiv, 2025
Authors: Ou, Yixin; Yao, Yunzhi; Zhang, Ningyu; Jin, Hui; Sun, Jiacheng; Deng, Shumin; Li, Zhenguo; Chen, Huajun
Affiliations: Zhejiang University, China; Huawei Noah’s Ark Lab, Canada; National University of Singapore, NUS-NCS Joint Lab, Singapore; Zhejiang Key Laboratory of Big Data Intelligent Computing, China
Despite exceptional capabilities in knowledge-intensive tasks, Large Language Models (LLMs) face a critical gap in understanding how they internalize new knowledge, particularly how to structurally embed acquired know...

GoPIM: GCN-Oriented Pipeline Optimization for PIM Accelerators
IEEE Symposium on High-Performance Computer Architecture
Authors: Siling Yang; Shuibing He; Wenjiong Wang; Yanlong Yin; Tong Wu; Weijian Chen; Xuechen Zhang; Xian-He Sun; Dan Feng
Affiliations: The State Key Laboratory of Blockchain and Data Security, Zhejiang University; Zhejiang Lab; Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security; Zhejiang Key Laboratory of Big Data Intelligent Computing; Washington State University Vancouver; Illinois Institute of Technology; Huazhong University of Science and Technology; Wuhan National Laboratory for Optoelectronics
Graph convolutional networks (GCNs) are popular for a variety of graph learning tasks. ReRAM-based processing-in-memory (PIM) accelerators are promising to expedite GCN training owing to their in-situ computing capabi...

K-ON: Stacking Knowledge On the Head Layer of Large Language Model
arXiv, 2025
Authors: Guo, Lingbing; Zhang, Yichi; Bo, Zhongpu; Chen, Zhuo; Sun, Mengshu; Zhang, Zhiqiang; Zhang, Wen; Chen, Huajun
Affiliations: College of Computer Science and Technology, Zhejiang University, China; ZJU-Ant Group Joint Lab of Knowledge Graph, China; Ant Group, China; School of Software Technology, Zhejiang University, China; Zhejiang Key Laboratory of Big Data Intelligent Computing, China
Recent advancements in large language models (LLMs) have significantly improved various natural language processing (NLP) tasks. Typically, LLMs are trained to predict the next token, aligning well with many NLP tasks...

Incomplete data management: a survey
Frontiers of Computer Science, 2018, Vol. 12, Issue 1, pp. 4-25
Authors: Xiaoye MIAO; Yunjun GAO; Su GUO; Wanqi LIU
Affiliations: College of Computer Science, Zhejiang University, Hangzhou 310027, China; The Key Lab of Big Data Intelligent Computing of Zhejiang Province, Zhejiang University, Hangzhou 310027, China
Incomplete data accompanies our life processes and covers almost all fields of scientific studies, as a result of delivery failure, no power of battery, accidental loss, etc. However, how to model, index, and query in...

SMILE: A Cost-Effective System for Serving Massive Pretrained Language Models in the Cloud
2023 ACM/SIGMOD International Conference on Management of Data, SIGMOD 2023
Authors: Wang, Jue; Chen, Ke; Shou, Lidan; Jiang, Dawei; Chen, Gang
Affiliations: Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, Zhejiang University, Hangzhou, China
Deep learning models, particularly pre-trained language models (PLMs), have become increasingly important for a variety of applications that require text/language processing. However, these models are resource-intensi...

Dual Enhancement for Multi-Label Learning with Missing Labels
4th International Conference on Machine Learning and Machine Intelligence, MLMI 2021
Authors: Liu, Shengyuan; Wang, Haobo; Hu, Tianlei; Chen, Ke
Affiliations: Key Lab of Intelligent Computing Based Big Data of Zhejiang Province, College of Computer Science and Technology, Zhejiang University, China
The goal of multi-label learning with missing labels (MLML) is to assign each testing instance multiple labels given training instances that have a partial set of labels. The most challenging issue is to complete the ...