
Refine search results

Document type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Holdings

  • 7,706 electronic documents
  • 1 print holding

Date distribution

Subject classification

  • 6,483 Engineering
    • 6,256 Computer science and technology...
    • 3,577 Software engineering
    • 748 Information and communication engineering
    • 535 Control science and engineering
    • 272 Electrical engineering
    • 212 Bioengineering
    • 121 Chemical engineering and technology
    • 100 Mechanical engineering
    • 86 Electronic science and technology (...
    • 74 Biomedical engineering (...
    • 63 Safety science and engineering
    • 59 Agricultural engineering
    • 57 Transportation engineering
    • 49 Cyberspace security
  • 1,522 Management
    • 1,165 Library, information and archives management...
    • 467 Management science and engineering (...
    • 134 Business administration
  • 1,471 Literature
    • 1,464 Foreign languages and literatures
    • 161 Chinese language and literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical medicine
    • 75 Basic medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military science

Topics

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 Carnegie Mellon ...
  • 36 National Univers...
  • 34 Carnegie Mellon ...
  • 34 Language Technol...
  • 34 Institute for Na...
  • 33 University of Wa...
  • 33 School of Comput...
  • 32 Tsinghua Univers...
  • 31 University of Ch...
  • 30 Nanyang Technolo...
  • 30 Stanford Univers...
  • 29 Zhejiang Univers...
  • 27 Alibaba Grp Peop...
  • 26 Gaoling School o...
  • 26 Carnegie Mellon ...
  • 25 Harbin Institute...
  • 25 Peking Universit...
  • 25 Natl Univ Singap...
  • 24 Allen Inst Artif...
  • 23 The Chinese Univ...

Authors

  • 42 Neubig Graham
  • 39 Zhou Guodong
  • 39 Smith Noah A.
  • 36 Liu Yang
  • 36 Lapata Mirella
  • 34 Sun Maosong
  • 32 Zhang Min
  • 30 Liu Qun
  • 30 Hovy Eduard
  • 29 Zhao Jun
  • 27 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 26 Gurevych Iryna
  • 25 Vulic Ivan
  • 22 Huang Xuanjing
  • 21 Chang Kai-Wei
  • 21 Liu Kang
  • 21 Zhang Yue
  • 21 Zhang Qi
  • 20 Wen Ji-Rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing 271-280
Make Some Noise: Unlocking Language Model Parallel Inference Capability through Noisy Training
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yixuan; Luo, Xianzhen; Wei, Fuxuan; Liu, Yijun; Zhu, Qingfu; Zhang, Xuanyu; Yang, Qing; Xu, Dongliang; Che, Wanxiang (Harbin Institute of Technology, Harbin, China; Science Technology Co. Ltd., China)
Existing speculative decoding methods typically require additional model structure and training processes to assist the model with draft token generation. This makes the migration of acceleration methods to the new mod...
LUQ: Long-text Uncertainty Quantification for LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Caiqi; Liu, Fangyu; Basaldella, Marco; Collier, Nigel (Language Technology Lab, University of Cambridge, United Kingdom; Amazon Alexa, United Kingdom)
Large Language Models (LLMs) have demonstrated remarkable capability in a variety of NLP tasks. However, LLMs are also prone to generating nonfactual content. Uncertainty Quantification (UQ) is pivotal in enhancing our ...
Structured Object Language Modeling (SoLM): Native Structured Objects Generation Conforming to Complex Schemas with Self-Supervised Denoising
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tavanaei, Amir; Koo, Kee Kiat; Ceker, Hayreddin; Jiang, Shaobai; Li, Qi; Han, Julien; Bouyarmane, Karim (Amazon, Seattle, United States)
In this paper, we study the problem of generating structured objects that conform to a complex schema, with intricate dependencies between the different components (facets) of the object. The facets of the object (att...
Consistent Bidirectional Language Modelling: Expressive Power and Representational Conciseness
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shopov, Georgi; Gerdjikov, Stefan (IICT, Bulgarian Academy of Sciences, Bulgaria; FMI, Sofia University, Bulgaria)
The inability to utilise future contexts and the pre-determined left-to-right generation order are major limitations of unidirectional language models. Bidirectionality has been introduced to address those deficiencie...
AdaZeta: Adaptive Zeroth-Order Tensor-Train Adaption for Memory-Efficient Large Language Model Fine-Tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yang, Yifan; Zhen, Kai; Banijamali, Ershad; Mouchtaris, Athanasios; Zhang, Zheng (University of California Santa Barbara, United States; Amazon AGI, United States)
Fine-tuning large language models (LLMs) has achieved remarkable performance across various natural language processing tasks, yet it demands ever more memory as model sizes keep growing. To address this issue, th...
User Inference Attacks on Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kandpal, Nikhil; Pillutla, Krishna; Oprea, Alina; Kairouz, Peter; Choquette-Choo, Christopher A.; Xu, Zheng (University of Toronto, Vector Institute, Canada; Madras, India; Northeastern University, United States; Google, United States)
Text written by humans makes up the vast majority of the data used to pre-train and fine-tune large language models (LLMs). Many sources of this data, like code, forum posts, personal websites, and books, are easily attr...
Contextualized Sequence Likelihood: Enhanced Confidence Scores for Natural Language Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Zhen; Trivedi, Shubhendu; Sun, Jimeng (University of Illinois Urbana-Champaign, United States; Carle's Illinois College of Medicine, University of Illinois Urbana-Champaign, United States)
The advent of large language models (LLMs) has dramatically advanced the state-of-the-art in numerous natural language generation tasks. For LLMs to be applied reliably, it is essential to have an accurate measure of ...
We are Who We Cite: Bridges of Influence Between Natural Language Processing and Other Academic Fields
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wahle, Jan Philip; Ruas, Terry; Abdalla, Mohamed; Gipp, Bela; Mohammad, Saif M. (Natl Res Council Canada, Ottawa, ON, Canada; Univ Gottingen, Gottingen, Germany; Inst Better Hlth, Toronto, ON, Canada)
Natural language processing (NLP) is poised to substantially influence the world. However, significant progress comes hand-in-hand with substantial risks. Addressing them requires broad engagement with various fields ...
Annotator-Centric Active Learning for Subjective NLP Tasks
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: van der Meer, Michiel; Falk, Neele; Murukannaiah, Pradeep K.; Liscio, Enrico (Idiap Research Institute, Switzerland; Leiden Institute of Advanced Computer Science, Leiden University, Netherlands; Institute for Natural Language Processing, University of Stuttgart, Germany; Interactive Intelligence, TU Delft, Netherlands)
Active Learning (AL) addresses the high costs of collecting human annotations by strategically annotating the most informative samples. However, for subjective NLP tasks, incorporating a wide range of perspectives in the annotatio...
DA3: A Distribution-Aware Adversarial Attack against Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yibo; Dong, Xiangjue; Caverlee, James; Yu, Philip S. (University of Illinois Chicago, United States; Texas A&M University, United States)
Language models can be manipulated by adversarial attacks, which introduce subtle perturbations to input data. While recent attack methods can achieve a relatively high attack success rate (ASR), we've observed th...