
Refine Search Results

Document Type

  • 2 Conference papers

Collection Scope

  • 2 Electronic documents
  • 0 Print holdings

Date Distribution

Subject Classification

  • 2 Engineering
    • 2 Computer Science and Technology...
    • 2 Software Engineering

Topics

  • 2 ai for se
  • 2 pre-training of ...
  • 1 cross-task trans...
  • 1 few-shot learnin...

Institutions

  • 2 nanjing univ sta...
  • 1 univ texas dalla...
  • 1 univ texas dalla...

Authors

  • 2 li chuanyi
  • 2 luo bin
  • 2 niu changan
  • 2 ng vincent
  • 1 ge jidong
  • 1 chen dongxiao

Language

  • 2 English
Search query: Subject = "Pre-training of source code"
2 records, showing 1-10
Sort:
An Empirical Comparison of Pre-Trained Models of Source Code
45th IEEE/ACM International Conference on Software Engineering (ICSE)
Authors: Niu, Changan; Li, Chuanyi; Ng, Vincent; Chen, Dongxiao; Ge, Jidong; Luo, Bin (Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China; Univ Texas Dallas, Human Language Technol Res Inst, Richardson, TX 75080, USA)
While a large number of pre-trained models of source code have been successfully developed and applied to a variety of software engineering (SE) tasks in recent years, our understanding of these pre-trained models is ...
CrossCodeBench: Benchmarking Cross-Task Generalization of Source Code Models
45th IEEE/ACM International Conference on Software Engineering (ICSE)
Authors: Niu, Changan; Li, Chuanyi; Ng, Vincent; Luo, Bin (Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China; Univ Texas Dallas, Human Language Technol Res Inst, Richardson, TX, USA)
Despite the recent advances showing that a model pre-trained on large-scale source code data is able to gain appreciable generalization capability, it still requires a sizeable amount of data on the target task for fi...