Refine Search Results

Document Type

  • 7,582 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Collection Scope

  • 7,703 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,480 Engineering
    • 6,252 Computer Science and Technology...
    • 3,600 Software Engineering
    • 748 Information and Communication Engineering
    • 507 Control Science and Engineering
    • 271 Electrical Engineering
    • 213 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 85 Electronic Science and Technology (may...
    • 76 Biomedical Engineering (may be awar...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,524 Management
    • 1,167 Library, Information and Archival Ma...
    • 467 Management Science and Engineering (may...
    • 134 Business Administration
  • 1,472 Literature
    • 1,465 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,447 Science
    • 775 Mathematics
    • 352 Physics
    • 250 Biology
    • 240 Statistics (may be awarded in Science,...
    • 120 Chemistry
    • 101 Systems Science
  • 165 Law
    • 153 Sociology
  • 130 Medicine
    • 94 Clinical Medicine
    • 76 Basic Medicine (may be awarded in Med...
  • 112 Education
    • 106 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art Studies
  • 1 Military Science

Topics

  • 1,183 natural language...
  • 872 computational li...
  • 621 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 106 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 30 university of ch...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 carnegie mellon ...
  • 25 gaoling school o...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Authors

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 20 wen ji-rong
  • 20 zhang qi

Language

  • 6,985 English
  • 689 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,704 records in total; showing results 421-430

MORL-Prompt: An Empirical Analysis of Multi-Objective Reinforcement Learning for Discrete Prompt Optimization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jafari, Yasaman; Mekala, Dheeraj; Yu, Rose; Berg-Kirkpatrick, Taylor (University of California San Diego, United States)
RL-based techniques can be employed to search for prompts that, when fed into a target language model, maximize a set of user-specified reward functions. However, in many target applications, the natural reward functions are in tens...
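
The snippet names the core setup: candidate prompts are scored against several user-specified reward functions that can conflict. A minimal sketch of weighted-sum scalarization over such rewards, assuming hypothetical `fluency`/`brevity` proxy rewards and a toy candidate set; this illustrates the multi-objective scoring problem, not the paper's RL method:

```python
from typing import Callable, Sequence

def scalarize(rewards: Sequence[float], weights: Sequence[float]) -> float:
    """Weighted-sum scalarization of several, possibly conflicting, rewards."""
    return sum(w * r for w, r in zip(weights, rewards))

def score_prompt(prompt: str,
                 reward_fns: Sequence[Callable[[str], float]],
                 weights: Sequence[float]) -> float:
    # Each reward function would score the target model's output for this
    # prompt; here they score the prompt itself to keep the sketch runnable.
    rewards = [fn(prompt) for fn in reward_fns]
    return scalarize(rewards, weights)

def fluency(p: str) -> float:   # hypothetical proxy reward
    return 1.0 - abs(len(p.split()) - 5) / 5.0

def brevity(p: str) -> float:   # hypothetical proxy reward
    return 1.0 / (1 + len(p.split()))

# Toy usage: pick the candidate that best trades off the two objectives.
candidates = ["Summarize:", "Summarize the following text briefly:"]
best = max(candidates,
           key=lambda p: score_prompt(p, [fluency, brevity], [0.5, 0.5]))
print(best)
```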

DTS-SQL: Decomposed Text-to-SQL with Small Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Pourreza, Mohammadreza; Rafiei, Davood (University of Alberta, Canada)
Leading models for the text-to-SQL task heavily rely on proprietary Large Language Models (LLMs), posing concerns over data privacy. Closing the performance gap between small open-source models and large proprietary models is cruc...
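
The title's "decomposed" suggests a two-stage pipeline, schema linking followed by SQL generation over the pruned schema; the truncated abstract does not confirm the details, so the sketch below is only a guess at that decomposition. `generate` is a placeholder for any small open-source LLM call, and the prompt wording is illustrative:

```python
def generate(prompt: str) -> str:
    """Placeholder for a call to any small open-source LLM."""
    raise NotImplementedError("plug an LLM call in here")

def link_schema(question: str, schema: dict[str, list[str]]) -> list[str]:
    # Stage 1, schema linking: ask the model which tables are relevant.
    reply = generate(f"Question: {question}\n"
                     f"Tables: {', '.join(schema)}\n"
                     "List only the relevant tables, comma-separated:")
    return [t.strip() for t in reply.split(",") if t.strip() in schema]

def question_to_sql(question: str, schema: dict[str, list[str]]) -> str:
    # Stage 2, SQL generation: prompt with the pruned schema only, which
    # keeps the context short enough for a small model.
    relevant = {t: schema[t] for t in link_schema(question, schema)}
    ddl = "\n".join(f"TABLE {t}({', '.join(cols)})"
                    for t, cols in relevant.items())
    return generate(f"{ddl}\n\nQuestion: {question}\nSQL:")
```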

The Lou Dataset - Exploring the Impact of Gender-Fair Language in German Text Classification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Waldis, Andreas; Birrer, Joel; Lauscher, Anne; Gurevych, Iryna (Technical University of Darmstadt, Germany; Information Systems Research Lab, Lucerne University of Applied Sciences and Arts, Switzerland; Data Science Group, University of Hamburg, Germany)
Gender-fair language, an evolving German linguistic variation, fosters inclusion by addressing all genders or using neutral forms. Nevertheless, there is a significant lack of resources to assess the impact of this li...

MetaGPT: Merging Large Language Models Using Model Exclusive Task Arithmetic
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhou, Yuyan; Song, Liang; Wang, Bingning; Chen, Weipeng (Baichuan Inc., China)
The advent of large language models (LLMs) like GPT-4 has catalyzed the exploration of multi-task learning (MTL), in which a single model demonstrates proficiency across diverse tasks. Task arithmetic has emerged as a...
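
Task arithmetic, named in the snippet, merges fine-tuned models by adding weighted "task vectors" (each model's parameter delta from the shared base) onto the base weights. A minimal sketch assuming PyTorch state dicts; the scaling coefficients `lambdas` are illustrative, and how to set them is precisely what merging methods study:

```python
import torch

def merge_by_task_arithmetic(
    base: dict[str, torch.Tensor],
    finetuned: list[dict[str, torch.Tensor]],
    lambdas: list[float],
) -> dict[str, torch.Tensor]:
    """theta_merged = theta_base + sum_i lambda_i * (theta_i - theta_base)."""
    merged = {}
    for name, w0 in base.items():
        # tau_i = theta_i - theta_base is model i's "task vector"
        delta = sum(lam * (ft[name] - w0)
                    for lam, ft in zip(lambdas, finetuned))
        merged[name] = w0 + delta
    return merged
```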

Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Jiahuan; Cao, Yiqing; Huang, Shujian; Chen, Jiajun (National Key Laboratory for Novel Software Technology, Nanjing University, China)
Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting inf...

Learning to Extract Structured Entities Using Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Haolun; Yuan, Ye; Mikaelyan, Liana; Meulemans, Alexander; Liu, Xue; Hensman, James; Mitra, Bhaskar (McGill University, Canada; Mila - Quebec AI Institute, Canada; Microsoft Research, United States; ETH Zürich, Switzerland)
Recent advances in machine learning have significantly impacted the field of information extraction, with Language Models (LMs) playing a pivotal role in extracting structured information from unstructured text. Prior...

Chain-of-Dictionary Prompting Elicits Translation in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Hongyuan; Yang, Haoran; Huang, Haoyang; Zhang, Dongdong; Lam, Wai; Wei, Furu (The Chinese University of Hong Kong, Hong Kong; Microsoft Corporation, United States)
Large language models (LLMs) have shown surprisingly good performance in multilingual neural machine translation (MNMT) even without being trained explicitly for translation. Yet, they still struggle with translating l...
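
The title describes prompting with chained dictionary entries for hard source words prepended to the translation request. A minimal sketch under that reading; the prompt wording and the `chains` format are assumptions, not the paper's template:

```python
def chain_of_dictionary_prompt(sentence: str, src: str, tgt: str,
                               chains: dict[str, list[str]]) -> str:
    """Prepend chained dictionary hints for hard source words, then ask
    for the translation."""
    hints = "\n".join(f'"{word}": {" -> ".join(links)}'
                      for word, links in chains.items())
    return (f"Dictionary chains:\n{hints}\n\n"
            f"Translate this {src} sentence into {tgt}:\n{sentence}")

# Toy usage: one made-up chain through auxiliary languages.
prompt = chain_of_dictionary_prompt(
    "Der Bergsteiger erreichte den Gipfel.", "German", "English",
    {"Gipfel": ["sommet (French)", "cumbre (Spanish)", "summit (English)"]})
print(prompt)
```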

Robust Prompt Optimization for Large Language Models Against Distribution Shifts
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Moxin; Wang, Wenjie; Feng, Fuli; Cao, Yixin; Zhang, Jizhi; Chua, Tat-Seng (National University of Singapore, Singapore; University of Science and Technology of China, Hefei, China; Institute of Dataspace, Hefei, Anhui, China; Singapore Management University, Singapore)
Large Language Models (LLMs) have demonstrated significant ability in various natural language processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, leading to research on...

Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done!
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Patel, Divya; Patel, Pathik; Chander, Ankush; Dasgupta, Sourish; Chakraborty, Tanmoy (KDM Lab, Dhirubhai Ambani Institute of Information & Communication Technology, India; Indian Institute of Technology Delhi, India)
Large Language Models (LLMs) have succeeded considerably in In-Context-Learning (ICL) based summarization. However, saliency is subject to the users' specific preference histories. Hence, we need reliable In-Conte...

ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Akhauri, Yash; AbouElhamayed, Ahmed F.; Dotzel, Jordan; Zhang, Zhiru; Rush, Alexander M.; Huda, Safeen; Abdelfattah, Mohamed S. (Cornell University, United States; Google, United States)
The high power consumption and latency-sensitive deployments of large language models (LLMs) have motivated efficiency techniques like quantization and sparsity. Contextual sparsity, where the sparsity pattern is input-dependent, is c...
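
Contextual sparsity, as defined in the snippet, means the set of active neurons depends on the input, and "predictor-based" suggests a small learned scorer chooses them per token. A minimal sketch of that idea; the dimensions and the top-k gate are illustrative, and the mask only emulates the computation a real sparse kernel would skip:

```python
import torch

class SparseFFN(torch.nn.Module):
    """Feed-forward block whose active neurons are chosen per input."""

    def __init__(self, d_model: int, d_ff: int, keep: int):
        super().__init__()
        self.up = torch.nn.Linear(d_model, d_ff)
        self.down = torch.nn.Linear(d_ff, d_model)
        self.predictor = torch.nn.Linear(d_model, d_ff)  # cheap importance scorer
        self.keep = keep                                 # neurons kept per token

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.predictor(x)                       # (..., d_ff) importance
        idx = scores.topk(self.keep, dim=-1).indices     # input-dependent pattern
        mask = torch.zeros_like(scores).scatter(-1, idx, 1.0)
        # A real kernel would skip the masked neurons entirely; multiplying
        # by the mask here only emulates that saving.
        return self.down(torch.relu(self.up(x)) * mask)
```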