
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 papers: Engineering
    • 10,359 papers: Computer Science and Technology...
    • 5,436 papers: Software Engineering
    • 1,474 papers: Information and Communication Engineering
    • 963 papers: Electrical Engineering
    • 925 papers: Control Science and Engineering
    • 446 papers: Bioengineering
    • 223 papers: Cyberspace Security
    • 220 papers: Chemical Engineering and Technology
    • 187 papers: Mechanical Engineering
    • 175 papers: Biomedical Engineering (may confer...
    • 144 papers: Electronic Science and Technology (may confer...
    • 102 papers: Instrument Science and Technology
    • 99 papers: Safety Science and Engineering
  • 2,494 papers: Natural Sciences
    • 1,163 papers: Mathematics
    • 655 papers: Physics
    • 520 papers: Biology
    • 395 papers: Statistics (may confer Science,...
    • 241 papers: Systems Science
    • 235 papers: Chemistry
  • 2,427 papers: Management
    • 1,755 papers: Library, Information and Archives Manage...
    • 760 papers: Management Science and Engineering (may...
    • 241 papers: Business Administration
    • 106 papers: Public Administration
  • 1,761 papers: Literature
    • 1,709 papers: Foreign Languages and Literature
    • 184 papers: Chinese Language and Literature
  • 514 papers: Medicine
    • 303 papers: Clinical Medicine
    • 284 papers: Basic Medicine (may confer Medicine...
    • 113 papers: Public Health and Preventive Medi...
  • 278 papers: Law
    • 249 papers: Sociology
  • 238 papers: Education
    • 225 papers: Education
  • 100 papers: Agriculture
  • 98 papers: Economics
  • 9 papers: Art
  • 7 papers: Philosophy
  • 4 papers: Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 papers: English
  • 966 papers: Other
  • 113 papers: Chinese
  • 18 papers: French
  • 14 papers: Turkish
  • 2 papers: German
  • 2 papers: Spanish
  • 2 papers: Russian
Search query: Any Field = "Conference on empirical methods in natural language processing"
15,363 records; showing results 991-1000
Correct after Answer: Enhancing Multi-Span Question Answering with Post-processing Method
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lin, Jiayi; Zhang, Chenyang; Tong, Haibo; Zhang, Dongyu; Hong, Qingqing; Hou, Bingxuan; Wang, Junli
Affiliations: Key Laboratory of Embedded System and Service Computing, Tongji University, Ministry of Education, Shanghai 201804, China; Collaborative Innovation Center for Financial Network Security, Tongji University, Shanghai 201804, China
Multi-Span Question Answering (MSQA) requires models to extract one or multiple answer spans from a given context to answer a question. Prior work mainly focuses on designing specific methods or applying heuristic str...
Is Child-Directed Speech Effective Training Data for Language Models?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Feng, Steven Y.; Goodman, Noah D.; Frank, Michael C.
Affiliation: Stanford University, United States
While high-performing language models are typically trained on hundreds of billions of words, human children become fluent language users with a much smaller amount of data. What are the features of the data they rece...
An Empirical Investigation of Implicit and Explicit Knowledge-Enhanced Methods for Ad Hoc Dataset Retrieval
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Luo, Weiqing; Chen, Qiaosheng; Zhang, Zhiyang; Huang, Zixian; Cheng, Gong
Affiliation: Nanjing Univ, State Key Lab Novel Software Technol, Nanjing, Peoples R China
Ad hoc dataset retrieval has become an important way of finding data on the Web, where the underlying problem is how to measure the relevance of a dataset to a query. State-of-the-art solutions for this task are still...
Evaluating Moral Beliefs across LLMs through a Pluralistic Framework
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Xuelin; Zhu, Yanfei; Zhu, Shucheng; Liu, Pengyuan; Liu, Ying; Yu, Dong
Affiliations: School of Information Science, Beijing Language and Culture University, Beijing, China; School of Humanities, Tsinghua University, Beijing, China; National Print Media Language Resources Monitoring & Research Center, Beijing Language and Culture University, Beijing, China
Proper moral beliefs are fundamental for language models, yet assessing these beliefs poses a significant challenge. This study introduces a novel three-module framework to evaluate the moral beliefs of four prominent...
Small Language Models Fine-tuned to Coordinate Larger Language Models Improve Complex Reasoning
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Juneja, Gurusha; Dutta, Subhabrata; Chakrabarti, Soumen; Manchhanda, Sunny; Chakraborty, Tanmoy
Affiliations: IIT Delhi, Delhi, India; Indian Inst Technol Bombay, Maharashtra, India; DYSL AI, Bengaluru, India
Large Language Models (LLMs) prompted to generate chain-of-thought (CoT) exhibit impressive reasoning capabilities. Recent attempts at prompt decomposition toward solving complex, multi-step reasoning problems depend ...
Text encoders bottleneck compositionality in contrastive vision-language models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kamath, Amita; Hessel, Jack; Chang, Kai-Wei
Affiliations: Univ Calif Los Angeles, Los Angeles, CA 90024, USA; Allen Inst AI, Seattle, WA, USA
Performant vision-language (VL) models like CLIP represent captions using a single vector. How much information about language is lost in this bottleneck? We first curate CompPrompts, a set of increasingly composition...
Query Rewriting for Retrieval-Augmented Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ma, Xinbei; Gong, Yeyun; He, Pengcheng; Zhao, Hai; Duan, Nan
Affiliations: Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai, Peoples R China; Shanghai Jiao Tong Univ, Key Lab Shanghai Educ Commiss Intelligent Interac, Shanghai, Peoples R China; Microsoft Res Asia, Beijing, Peoples R China; Microsoft Azure AI, Redmond, WA, USA
Large Language Models (LLMs) play powerful, black-box readers in the retrieve-then-read pipeline, making remarkable progress in knowledge-intensive tasks. This work introduces a new framework, Rewrite-Retrieve-Read ins...
Large Language Models Can Not Perform Well in Understanding and Manipulating Natural Language at Both Character and Word Levels?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Yidan; He, Zhenan
Affiliation: College of Computer Science, Sichuan University, China
Despite their promising performance across various tasks, recent studies reveal that Large Language Models (LLMs) still exhibit significant deficiencies in handling several word-level and character-level tasks, e.g., ...
A Pretrained Language Model for Cyber Threat Intelligence
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Park, Youngja; You, Weiqiu
Affiliations: IBM T. J. Watson Research Center, Yorktown Heights, NY, United States; University of Pennsylvania, Philadelphia, PA, United States
We present a new BERT model for the cybersecurity domain, CTI-BERT, which can improve the accuracy of cyber threat intelligence (CTI) extraction, enabling organizations to better defend against potential cyber threats...
Optimizing Language Models with Fair and Stable Reward Composition in Reinforcement Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Jiahui; Zhang, Hanlin; Zhang, Fengda; Chang, Tai-Wei; Kuang, Kun; Chen, Long; Zhou, Jun
Affiliations: Zhejiang University, China; Ant Group, China; HKUST, Hong Kong
Reinforcement learning from human feedback (RLHF) and AI-generated feedback (RLAIF) have become prominent techniques that significantly enhance the functionality of pre-trained language models (LMs). These methods har...