
Refine Results

Document Type

  • 149 Journal articles
  • 79 Conference papers

Collection Scope

  • 228 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 157 Engineering
    • 127 Computer Science and Technology...
    • 111 Software Engineering
    • 28 Control Science and Engineering
    • 21 Optical Engineering
    • 21 Information and Communication Engineering
    • 21 Bioengineering
    • 19 Biomedical Engineering (may be conferred...
    • 8 Chemical Engineering and Technology
    • 7 Electrical Engineering
    • 6 Mechanical Engineering
    • 5 Electronic Science and Technology (may...
    • 5 Safety Science and Engineering
    • 4 Mechanics (may be conferred in engineering, sci...
    • 4 Instrument Science and Technology
    • 4 Materials Science and Engineering (may...
  • 84 Science
    • 55 Mathematics
    • 24 Biology
    • 18 Statistics (may be conferred in science,...
    • 17 Systems Science
    • 14 Physics
    • 8 Chemistry
  • 31 Management
    • 18 Library, Information and Archives Manag...
    • 14 Management Science and Engineering (may...
    • 4 Business Administration
  • 10 Medicine
    • 10 Clinical Medicine
    • 8 Basic Medicine (may be conferred in medicine...
    • 8 Pharmacy (may be conferred in medicine, sci...
  • 4 Law
    • 4 Sociology
  • 4 Education
    • 4 Pedagogy
  • 2 Economics
    • 2 Applied Economics
  • 2 Literature
  • 1 Art

Topics

  • 12 machine learning
  • 8 training
  • 7 contrastive lear...
  • 7 semantics
  • 6 computational li...
  • 5 object detection
  • 5 reinforcement le...
  • 5 task analysis
  • 5 neuroimaging
  • 5 benchmarking
  • 5 stochastic syste...
  • 4 deep learning
  • 4 distillation
  • 4 iterative method...
  • 4 learning algorit...
  • 4 visualization
  • 3 semantic segment...
  • 3 deep neural netw...
  • 3 training data
  • 3 adversarial mach...

Institutions

  • 80 miit key laborat...
  • 66 college of compu...
  • 27 college of compu...
  • 16 miit key laborat...
  • 13 miit key laborat...
  • 10 collaborative in...
  • 8 college of compu...
  • 6 college of compu...
  • 6 nanjing universi...
  • 5 jd ai research
  • 5 department of el...
  • 5 the college of c...
  • 4 chongqing jiaoto...
  • 4 riken center for...
  • 4 nanyang technolo...
  • 4 department of ma...
  • 4 college of compu...
  • 4 school of comput...
  • 4 school of comput...
  • 4 college of compu...

Authors

  • 47 chen songcan
  • 23 li piji
  • 23 huang sheng-jun
  • 18 huang feihu
  • 18 zhang daoqiang
  • 16 sheng-jun huang
  • 16 liang dong
  • 15 songcan chen
  • 11 tan xiaoyang
  • 9 geng chuanxing
  • 9 wang xinrui
  • 9 daoqiang zhang
  • 8 li shao-yuan
  • 7 wei mingqiang
  • 7 li zhongnian
  • 7 ming-kun xie
  • 6 xie ming-kun
  • 6 wang renzhi
  • 6 li weikai
  • 6 tao lue

Language

  • 216 English
  • 10 Other
  • 5 Chinese
Search query: Institution = "MIIT Key Laboratory of Pattern Analysis and Machine Intelligence"
228 records in total; showing 101-110
Semantic are Beacons: A Semantic Perspective for Unveiling Parameter-Efficient Fine-Tuning in Knowledge Learning
arXiv
arXiv, 2024
Authors: Wang, Renzhi; Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of Large Language Models (LLMs) to various downstream applications. However, the effectiveness of PEFT diminishes notably when downstream ...
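
As context for the PEFT entry above, here is a minimal sketch of one widely used PEFT technique, LoRA (low-rank adapters), assuming PyTorch. It illustrates the method family the paper analyzes; it is not the paper's own semantic-perspective approach, and all sizes and names are illustrative.

```python
# Minimal LoRA sketch: a frozen dense layer plus a trainable low-rank update.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Computes W x + scale * (B A) x, where only A and B are trained."""
    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)   # pretrained weight stays frozen
        self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))  # zero init: no change at start
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(768, 768)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # only the low-rank factors are updated
```
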
Knowledge distillation of heterogeneous teacher-student model with intermediate layer loss
23rd Chinese National Conference on Computational Linguistics, CCL 2024
Authors: Zhai, Feiyan; Wang, Renzhi; Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
As a cutting-edge model compression strategy in the era of large language models, knowledge distillation technology significantly reduces the parameter scale and computational cost of models by effectively transferrin...
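
A hedged sketch of distillation with an intermediate-layer term, assuming PyTorch. The weighting scheme, the `proj` adapter (a hypothetical module mapping the student hidden size to the teacher's), and the hyperparameters are illustrative assumptions, not the paper's exact heterogeneous loss.

```python
# Generic knowledge-distillation loss with an intermediate-layer alignment term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      student_hidden, teacher_hidden, proj,
                      labels, T=2.0, alpha=0.5, beta=0.1):
    # hard-label cross entropy on the student
    ce = F.cross_entropy(student_logits, labels)
    # soft-label KL divergence on temperature-scaled logits
    kd = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                  F.softmax(teacher_logits / T, dim=-1),
                  reduction="batchmean") * (T * T)
    # intermediate-layer alignment (MSE after projecting student features)
    mid = F.mse_loss(proj(student_hidden), teacher_hidden)
    return (1 - alpha) * ce + alpha * kd + beta * mid

# toy usage with random tensors and a projection from 128-d student to 256-d teacher
proj = torch.nn.Linear(128, 256)
loss = distillation_loss(torch.randn(8, 10), torch.randn(8, 10),
                         torch.randn(8, 128), torch.randn(8, 256), proj,
                         torch.randint(0, 10, (8,)))
print(loss.item())
```
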
ALiPy: Active Learning in Python
arXiv
arXiv, 2019
Authors: Tang, Ying-Peng; Li, Guo-Xiang; Huang, Sheng-Jun. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
Supervised machine learning methods usually require a large set of labeled examples for model training. However, in many real applications, there are plentiful unlabeled data but limited labeled data; and the acquisiti...
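
ALiPy itself packages query strategies, oracles, and experiment management. As a sketch of the pool-based paradigm it supports, here is a generic uncertainty-sampling loop built on scikit-learn; it deliberately does not use ALiPy's API, whose exact interfaces are not shown in this entry.

```python
# Pool-based active learning with margin-based uncertainty sampling.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
labeled = list(range(10))                      # small seed set of labeled indices
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(20):                            # 20 query rounds
    model.fit(X[labeled], y[labeled])
    probs = np.sort(model.predict_proba(X[pool]), axis=1)
    margins = probs[:, -1] - probs[:, -2]      # top-1 minus top-2 probability
    pick = pool.pop(int(np.argmin(margins)))   # smallest margin = most uncertain
    labeled.append(pick)                       # the "oracle" reveals y[pick]
print("accuracy:", model.score(X, y))
```
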
uChecker: Masked Pretrained Language Models as Unsupervised Chinese Spelling Checkers
arXiv
arXiv, 2022
Authors: Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, Jiangsu, China
The task of Chinese Spelling Check (CSC) aims to detect and correct spelling errors found in text. Manually annotating a high-quality dataset is expensive and time-consuming, thus the scale ...
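
In the spirit of the abstract (a masked pretrained language model used without supervision to flag and fix characters), a toy sketch using Hugging Face's fill-mask pipeline with bert-base-chinese. The detection rule here (original character absent from the top-k predictions) is an illustrative assumption, not uChecker's actual algorithm.

```python
# Toy unsupervised spell checking with a masked language model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-chinese")

def check(sentence: str, top_k: int = 5):
    """Mask each character in turn; flag it if the MLM finds it implausible."""
    suggestions = []
    for i, ch in enumerate(sentence):
        masked = sentence[:i] + fill.tokenizer.mask_token + sentence[i + 1:]
        preds = fill(masked, top_k=top_k)
        tokens = [p["token_str"] for p in preds]
        if ch not in tokens:               # original char not among plausible fills
            suggestions.append((i, ch, tokens[0]))
    return suggestions

print(check("我今天很高心"))  # expect a suggestion near the final character
```
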
Noise-Robust Bidirectional Learning with Dynamic Sample Reweighting
arXiv
arXiv, 2022
Authors: Zong, Chen-Chen; Cao, Zheng-Tao; Guo, Hong-Tao; Du, Yun; Xie, Ming-Kun; Li, Shao-Yuan; Huang, Sheng-Jun. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing 211106, China
Deep neural networks trained with standard cross-entropy loss are prone to memorizing noisy labels, which degrades their performance. Negative learning using complementary labels is more robust when noisy labels in...
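
The generic negative-learning loss the abstract alludes to, assuming PyTorch: given a complementary label (a class the example does not belong to), push down the probability assigned to it. The paper's bidirectional learning and dynamic sample reweighting are not modeled here.

```python
# Negative learning with complementary labels: minimize -log(1 - p(complementary)).
import torch
import torch.nn.functional as F

def negative_learning_loss(logits: torch.Tensor, comp_labels: torch.Tensor) -> torch.Tensor:
    probs = F.softmax(logits, dim=-1)
    p_comp = probs.gather(1, comp_labels.unsqueeze(1)).squeeze(1)
    return -torch.log(1.0 - p_comp + 1e-8).mean()   # drive p(complementary class) toward 0

logits = torch.randn(4, 10, requires_grad=True)
comp = torch.randint(0, 10, (4,))        # "not this class" supervision
loss = negative_learning_loss(logits, comp)
loss.backward()
print(loss.item())
```
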
LEMoE: Advanced Mixture of Experts Adaptor for Lifelong Model Editing of Large Language Models
arXiv
arXiv, 2024
Authors: Wang, Renzhi; Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
Large language models (LLMs) require continual knowledge updates to stay abreast of ever-changing world facts, prompting the formulation of the lifelong model editing task. While recent years have witnessed the develo...
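
To make the "mixture of experts adaptor" building block concrete, here is a minimal MoE adaptor with a learned soft router, assuming PyTorch. LEMoE's anchor-based routing and insertion ordering for lifelong editing are not reproduced, and all sizes are illustrative.

```python
# Minimal mixture-of-experts adaptor: soft routing over small expert MLPs, residual output.
import torch
import torch.nn as nn

class MoEAdaptor(nn.Module):
    def __init__(self, dim: int, n_experts: int = 4, hidden: int = 64):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.softmax(self.router(x), dim=-1)                  # (batch, E)
        expert_out = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, E, dim)
        mix = (gates.unsqueeze(-1) * expert_out).sum(dim=1)            # gate-weighted mix
        return x + mix                                                 # residual adaptor

out = MoEAdaptor(32)(torch.randn(8, 32))
print(out.shape)  # torch.Size([8, 32])
```
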
A Multi-Scale Multi-Hop Graph Convolution Network for Predicting Fluid intelligence via Functional Connectivity
IEEE International Conference on Bioinformatics and Biomedicine (BIBM)
Authors: Xuyun Wen; Qumei Cao; Daoqiang Zhang. College of Computer Science and Technology, MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing University of Aeronautics and Astronautics, Nanjing, Jiangsu, China
Predicting fluid intelligence from neuroimaging data is important for understanding the neural mechanisms underlying diverse complex cognitive tasks in the human brain. Functional connectivity (FC) reflects interactions among brai...
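
To make the "multi-hop" ingredient in the title concrete, a sketch of a multi-hop graph convolution layer in PyTorch that aggregates node features over successive applications of a normalized adjacency matrix. The paper's multi-scale functional-connectivity construction and full architecture are not reproduced; the 90-region example is an illustrative assumption.

```python
# Multi-hop graph convolution: propagate k hops, concatenate, then transform.
import torch
import torch.nn as nn

class MultiHopGCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, hops: int = 3):
        super().__init__()
        self.hops = hops
        self.lin = nn.Linear(in_dim * hops, out_dim)

    def forward(self, x: torch.Tensor, adj_norm: torch.Tensor) -> torch.Tensor:
        feats, h = [], x
        for _ in range(self.hops):
            h = adj_norm @ h              # one more hop of neighborhood aggregation
            feats.append(h)
        return torch.relu(self.lin(torch.cat(feats, dim=-1)))

n = 90                                    # e.g. 90 brain regions as graph nodes
adj = torch.rand(n, n); adj = (adj + adj.T) / 2            # symmetric toy FC matrix
adj_norm = adj / adj.sum(1).clamp(min=1e-8).unsqueeze(1)   # row-normalize
x = torch.randn(n, 16)                    # node features per region
print(MultiHopGCNLayer(16, 32)(x, adj_norm).shape)  # torch.Size([90, 32])
```
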
5W1H Extraction With Large Language Models
arXiv
arXiv, 2024
Authors: Cao, Yang; Lan, Yangsong; Zhai, Feiyan; Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
The extraction of essential news elements through the 5W1H framework (What, When, Where, Why, Who, and How) is critical for event extraction and text summarization. The advent of Large Language Models (LLMs) such as C...
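
A hypothetical prompt template plus reply parser for LLM-based 5W1H extraction. The actual model call is left abstract because the entry does not detail the paper's prompting or fine-tuning setup; every name below is an assumption for illustration.

```python
# Hypothetical prompt-and-parse scaffolding for 5W1H extraction with an LLM.
import json

PROMPT = """Extract the 5W1H elements from the news text below.
Respond with JSON using exactly the keys: what, when, where, why, who, how.
Text: {text}
JSON:"""

def build_prompt(text: str) -> str:
    return PROMPT.format(text=text)

def parse_response(raw: str) -> dict:
    """Parse the model's JSON reply, tolerating surrounding prose."""
    start, end = raw.find("{"), raw.rfind("}") + 1
    return json.loads(raw[start:end])

print(build_prompt("The bridge reopened on Monday after repairs."))
```
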
Tumor Micro-Environment Interactions Guided Graph Learning for Survival Analysis of Human Cancers from Whole-Slide Pathological Images
Conference on Computer Vision and Pattern Recognition (CVPR)
Authors: Wei Shao; YangYang Shi; Daoqiang Zhang; JunJie Zhou; Peng Wan. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics; Key Laboratory of Brain-Machine Intelligence Technology, Ministry of Education, Nanjing, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
The recent advance of deep learning technology brings the possibility of assisting pathologists in predicting patients' survival from whole-slide pathological images (WSIs). However, most of the prevalent meth...
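
Survival models trained on per-patient representations (for example, pooled WSI graph features) commonly use the Cox partial-likelihood loss; here is that standard loss in PyTorch as a reference point. The paper's tumor micro-environment-guided graph construction is not modeled here.

```python
# Standard Cox proportional-hazards partial-likelihood loss (Breslow-style ties).
import torch

def cox_ph_loss(risk: torch.Tensor, time: torch.Tensor, event: torch.Tensor) -> torch.Tensor:
    """risk: predicted log-risk per patient; event: 1 if death observed, 0 if censored."""
    order = torch.argsort(time, descending=True)       # build risk sets by descending time
    risk, event = risk[order], event[order]
    log_cumsum = torch.logcumsumexp(risk, dim=0)       # log-sum-exp over each risk set
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)

risk = torch.randn(16, requires_grad=True)             # e.g. output of a WSI graph model
time = torch.rand(16) * 100                            # survival or censoring times
event = torch.randint(0, 2, (16,)).float()             # event indicators
cox_ph_loss(risk, time, event).backward()
```
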
MEMoE: Enhancing Model Editing with Mixture of Experts Adaptors
arXiv
arXiv, 2024
Authors: Wang, Renzhi; Li, Piji. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence, Nanjing, China
Model editing aims to efficiently alter the behavior of Large Language Models (LLMs) within a desired scope, while ensuring no adverse impact on other inputs. Recent years have witnessed various model editing methods ...