
Refine Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Holdings

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (...
    • 74 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Man...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literature
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topic

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institution

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 31 university of ch...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 gaoling school o...
  • 26 carnegie mellon ...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Author

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 21 zhang qi
  • 20 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: Any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing 131–140
Categorial Grammar Supertagging via Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Jinman; Penn, Gerald (Department of Computer Science, University of Toronto, Toronto, Canada)
Supertagging is an essential task in Categorial grammar parsing and is crucial for dissecting sentence structures. Our research explores the capacity of Large Language Models (LLMs) in supertagging for both Combinato...
Working Memory Identifies Reasoning Limits in Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Chunhui; Jian, Yiren; Ouyang, Zhongyu; Vosoughi, Soroush (Department of Computer Science, Dartmouth College, United States)
This study explores the inherent limitations of Large Language Models (LLMs) from a scaling perspective, focusing on the upper bounds of their cognitive capabilities. We integrate insights from cognitive science to qu...
GDTB: Genre Diverse Data for English Shallow Discourse Parsing across Modalities, Text Types, and Domains
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Yang Janet; Aoyama, Tatsuya; Scivetti, Wesley; Zhu, Yilun; Behzad, Shabnam; Levine, Lauren Elizabeth; Lin, Jessica; Tiwari, Devika; Zeldes, Amir (Corpling Lab, Georgetown University, United States; MaiNLP, Center for Information and Language Processing, LMU Munich, Germany)
Work on shallow discourse parsing in English has focused on the Wall Street Journal corpus, the only large-scale dataset for the language in the PDTB framework. However, the data is not openly available, is restricted...
CUTE: Measuring LLMs' Understanding of Their Tokens
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Edman, Lukas; Schmid, Helmut; Fraser, Alexander (Center for Information and Language Processing, LMU Munich, Germany; School of Computation, Information and Technology, TU Munich, Germany; Munich Center for Machine Learning, Germany; Munich Data Science Institute, Germany)
Large Language Models (LLMs) show remarkable performance on a wide variety of tasks. Most LLMs split text into multi-character tokens and process them as atomic units without direct access to individual characters. Th...
Just Rewrite It Again: A Post-Processing Method for Enhanced Semantic Similarity and Privacy Preservation of Differentially Private Rewritten Text
19th International Conference on Availability, Reliability, and Security (ARES)
Authors: Meisenbacher, Stephen; Matthes, Florian (Technical University of Munich, School of Computation, Information and Technology, Department of Computer Science, Garching, Germany)
The study of Differential Privacy (DP) in natural language processing often views the task of text privatization as a rewriting task, in which sensitive input texts are rewritten to hide explicit or implicit private i...
Deciphering the Interplay of Parametric and Non-parametric Memory in Retrieval-Augmented Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Farahani, Mehrdad; Johansson, Richard (Chalmers University of Technology; University of Gothenburg, Sweden)
Generative language models often struggle with specialized or less-discussed knowledge. A potential solution is found in Retrieval-Augmented Generation (RAG) models, which retrieve information before generat...
Generative Dictionary: Improving Language Learner Understanding with Contextual Definitions
2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Tuan, Kevin; Tu, Hai-Lun; Chang, Jason S. (Department of Computer Science, National Tsing Hua University, Taiwan; Department of Library and Information Science, Fu Jen Catholic University, Taiwan)
We introduce GenerativeDictionary, a novel dictionary system that generates word sense interpretations based on the given context. Our approach involves transforming context sentences to highlight the meaning of targe...
VerifyMatch: A Semi-Supervised Learning Paradigm for Natural Language Inference with Confidence-Aware MixUp
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Park, Seo Yeon; Caragea, Cornelia (Republic of Korea; University of Illinois Chicago, United States)
While natural language inference (NLI) has emerged as a prominent task for evaluating a model's capability to perform natural language understanding, creating large benchmarks for training deep learning models imp...
TensorOpera Router: A Multi-Model Router for Efficient LLM Inference
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Stripelis, Dimitris; Hu, Zijian; Zhang, Jipeng; Xu, Zhaozhuo; Shah, Alay Dilipbhai; Jin, Han; Yao, Yuhang; Zhang, Tong; Avestimehr, Salman; He, Chaoyang (TensorOpera Inc., Palo Alto, CA, United States)
With the rapid growth of Large Language Models (LLMs) across various domains, numerous new LLMs have emerged, each possessing domain-specific expertise. This proliferation has highlighted the need for quick, high-qual...
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Zhen; Xu, Xiaohan; Shen, Tao; Xu, Can; Gu, Jia-Chen; Lai, Yuxuan; Tao, Chongyang; Ma, Shuai (WICT, Peking University, China; The University of Hong Kong, Hong Kong; UTS, Australia; Microsoft, United States; UCLA, United States; The Open University of China, China; SKLSDE Lab, Beihang University, China)
In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and...