
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer Science...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Manage...
    • 758 Management Science and Engineering (may confer...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 112 Public Health and Preventive Medi...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agronomy
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 741 篇 semantics
  • 680 篇 machine learning
  • 609 篇 deep learning
  • 520 篇 natural language...
  • 347 篇 computational mo...
  • 336 篇 training
  • 333 篇 accuracy
  • 331 篇 sentiment analys...
  • 329 篇 large language m...
  • 320 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 261 篇 speech recogniti...
  • 252 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 language technol...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: Any field = "Conference on empirical methods in natural language processing"
15,258 records; showing 461–470
TUTOR-ICL: Guiding Large Language Models for Improved In-Context Learning Performance
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cho, Ikhyun; Kwon, Gaeul; Hockenmaier, Julia (University of Illinois Urbana-Champaign, United States; Grinnell College, United States)
There has been a growing body of work focusing on the in-context learning (ICL) abilities of large language models (LLMs). However, it is an open question how effective ICL can be. This paper presents TUTOR-ICL, a simple pr...
We Are What We Repeatedly Do: Inducing and Deploying Habitual Schemas in Persona-Based Responses
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kane, Benjamin; Schubert, Lenhart (Univ Rochester, Rochester, NY 14627, USA)
Many practical applications of dialogue technology require the generation of responses according to a particular developer-specified persona. While a variety of personas can be elicited from recent large language mode...
Towards Building More Robust NER Datasets: An Empirical Study on NER Dataset Bias from a Dataset Difficulty View
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ma, Ruotian; Wang, Xiaolei; Zhou, Xin; Zhang, Qi; Huang, Xuanjing (Fudan Univ, Sch Comp Sci, Shanghai, Peoples R China; Int Human Phenome Inst, Shanghai, Peoples R China)
Recently, many studies have illustrated the robustness problem of Named Entity Recognition (NER) systems: the NER models often rely on superficial entity patterns for predictions, without considering evidence from the...
Improving Diversity of Commonsense Generation by Large Language Models via In-Context Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Tianhui; Peng, Bei; Bollegala, Danushka (University of Liverpool, United Kingdom; Amazon, United States)
Generative Commonsense Reasoning (GCR) requires a model to reason about a situation using commonsense knowledge, while generating coherent sentences. While the quality of the generated sentences is crucial, the diversity of the ge...
Let's Ask GNN: Empowering Large Language Model for Graph In-Context Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hu, Zhengyu; Li, Yichuan; Chen, Zhengyu; Wang, Jingang; Liu, Han; Lee, Kyumin; Ding, Kaize (Northwestern University, United States; Worcester Polytechnic Institute, United States; MeiTuan, China)
Textual Attributed Graphs (TAGs) are crucial for modeling complex real-world systems, yet leveraging large language models (LLMs) for TAGs presents unique challenges due to the gap between sequential text processing a...
Multilingual estimation of political-party positioning: From label aggregation to long-input Transformers
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Nikolaev, Dmitry; Ceron, Tanise; Pado, Sebastian (Univ Stuttgart, Inst Nat Language Proc, Stuttgart, Germany)
Scaling analysis is a technique in computational political science that assigns a political actor (e.g. politician or party) a score on a predefined scale based on a (typically long) body of text (e.g. a parliamentary...
SLANG: New Concept Comprehension of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mei, Lingrui; Liu, Shenghua; Wang, Yiwei; Bi, Baolong; Cheng, Xueqi (CAS Key Laboratory of AI Safety, Institute of Computing Technology, CAS, China; University of California, Los Angeles, United States; University of California, Merced, United States; University of Chinese Academy of Sciences, China)
The dynamic nature of language, particularly evident in the realm of slang and memes on the Internet, poses serious challenges to the adaptability of Large Language Models (LLMs). Traditionally anchored to static data...
The Program Testing Ability of Large Language Models for Code
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xiong, Weimin; Guo, Yiwen; Chen, Hao (National Key Laboratory for Multimedia Information Processing, School of Computer Science, Peking University, China; Tencent Security Big Data Lab, China; UC Davis, United States)
Recent development of large language models (LLMs) for code, like CodeX and CodeT5+, shows promise in achieving code intelligence. Their ability to synthesize programs targeting a pre-defined algorithmic coding task ha...
MIRRORSTORIES: Reflecting Diversity through Personalized Narrative Generation with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yunusov, Sarfaroz; Sidat, Hamza; Emami, Ali (Brock University, St. Catharines, Canada)
This study explores the effectiveness of Large Language Models (LLMs) in creating personalized "mirror stories" that reflect and resonate with individual readers' identities, addressing the significant l...
Can Large Language Models Always Solve Easy Problems if They Can Solve Harder Ones?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yang, Zhe; Zhang, Yichang; Liu, Tianyu; Yang, Jian; Lin, Junyang; Zhou, Chang; Sui, Zhifang (State Key Laboratory of Multimedia Information Processing, School of Computer Science, Peking University, China; Alibaba Group, China)
Large language models (LLMs) have demonstrated impressive capabilities, but still suffer from inconsistency issues (e.g. LLMs can react differently to disturbances like rephrasing or inconsequential order change). In ...