
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (degree conferrable...
    • 141 Electronic Science and Technology (degree...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (degree conferrable in Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Manage...
    • 758 Management Science and Engineering (degree...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (degree conferrable in Medicine...
    • 112 Public Health and Preventive Medic...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 741 篇 semantics
  • 680 篇 machine learning
  • 609 篇 deep learning
  • 520 篇 natural language...
  • 347 篇 computational mo...
  • 336 篇 training
  • 333 篇 accuracy
  • 331 篇 sentiment analys...
  • 329 篇 large language m...
  • 320 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 261 篇 speech recogniti...
  • 252 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 language technol...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: Any field = "Conference on empirical methods in natural language processing"
15,258 records; showing 181-190
Background Summarization of Event Timelines
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Pratapa, Adithya; Small, Kevin; Dreyer, Markus (Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213, USA; Amazon, Seattle, WA, USA)
Generating concise summaries of news events is a challenging natural language processing task. While journalists often curate timelines to highlight key sub-events, newcomers to a news event face challenges in catchin...

Failures Pave the Way: Enhancing Large Language Models through Tuning-free Rule Accumulation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Yang, Zeyuan; Li, Peng; Liu, Yang (Tsinghua Univ, Inst AI, Dept Comp Sci & Tech, Beijing, Peoples R China; Tsinghua Univ, Inst AI Ind Res AIR, Beijing, Peoples R China; Shanghai Artificial Intelligence Lab, Shanghai, Peoples R China)
Large Language Models (LLMs) have showcased impressive performance. However, due to their inability to capture relationships among samples, these frozen LLMs inevitably keep repeating similar mistakes. In this work, ...

Exploring the Compositional Deficiency of Large Language Models in Mathematical Reasoning Through Trap Problems
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Jun; Tong, Jingqi; Mou, Yurong; Zhang, Ming; Zhang, Qi; Huang, Xuanjing (School of Computer Science, Fudan University, China; Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China)
Human cognition exhibits systematic compositionality, the algebraic ability to generate infinite novel combinations from finite learned components, which is the key to understanding and reasoning about complex logic. ...

Large Language Models are Complex Table Parsers
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhao, Bowen; Ji, Changkai; Zhang, Yuejie; He, Wen; Wang, Yingwen; Wang, Qing; Feng, Rui; Zhang, Xiaobo (Fudan Univ, Sch Comp Sci, Shanghai Key Lab Intelligent Informat Proc, Shanghai 200433, Peoples R China; Fudan Univ, Acad Engn & Technol, Shanghai, Peoples R China; Fudan Univ, Natl Childrens Med Ctr, Childrens Hosp, Shanghai, Peoples R China)
With the Generative Pre-trained Transformer 3.5 (GPT-3.5) exhibiting remarkable reasoning and comprehension abilities in natural language processing (NLP), most Question Answering (QA) research has primarily centered ...

LLM-enhanced Self-training for Cross-domain Constituency Parsing
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Jianling; Zhang, Meishan; Guo, Peiming; Zhang, Min; Zhang, Yue (Tianjin Univ, Sch New Media & Commun, Tianjin, Peoples R China; Harbin Inst Technol Shenzhen, Inst Comp & Intelligence, Shenzhen, Peoples R China; Westlake Univ, Sch Engn, Hangzhou, Peoples R China)
Self-training has proven to be an effective approach for cross-domain tasks, and in this study, we explore its application to cross-domain constituency parsing. Traditional self-training methods rely on limited and po...

LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Jiang, Huiqiang; Wu, Qianhui; Lin, Chin-Yew; Yang, Yuqing; Qiu, Lili (Microsoft Corp, Redmond, WA 98052, USA)
Large language models (LLMs) have been applied in various applications due to their astonishing capabilities. With advancements in technologies such as chain-of-thought (CoT) prompting and in-context learning (ICL), t...

Do We Need Language-Specific Fact-Checking Models? The Case of Chinese
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Caiqi; Guo, Zhijiang; Vlachos, Andreas (Language Technology Lab, University of Cambridge, United Kingdom; Department of Computer Science and Technology, University of Cambridge, United Kingdom)
This paper investigates the potential benefits of language-specific fact-checking models, focusing on the case of Chinese using the CHEF dataset. To better reflect real-world fact-checking, we first develop a novel Chines...

GDPO: Learning to Directly Align Language Models with Diversity Using GFlowNets
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kwon, Oh Joon; Matsunaga, Daiki E.; Kim, Kee-Eung (KAIST AI, Seoul, Republic of Korea)
A critical component of the current generation of language models is preference alignment, which aims to precisely control the model's behavior to meet human needs and values. The most notable among such methods i...

Are Compressed Language Models Less Subgroup Robust?
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Gee, Leonidas; Zugarini, Andrea; Quadrianto, Novi (Univ Sussex, Predict Analyt Lab, Brighton, E Sussex, England; BCAM Severo Ochoa Strateg Lab Trustworthy Machine, Bilbao, Spain; Monash Univ, Banten, Indonesia; Expert Ai, Siena, Italy)
To reduce the inference cost of large language models, model compression is increasingly used to create smaller scalable models. However, little is known about their robustness to minority subgroups defined by the lab...

An Inversion Attack Against Obfuscated Embedding Matrix in Language Model Inference
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Yu; Zhang, Qizhi; Cai, Quanwei; Hong, Jue; Wu, Ye; Liu, Huiqi; Duan, Bing (Bytedance, China)
With the rapidly growing deployment of large language model (LLM) inference services, privacy concerns have arisen regarding user input data. Recent studies are exploring transforming user inputs to obfuscated ...