
Refine Results

Document Type

  • 14,549 conference papers
  • 662 journal articles
  • 101 books
  • 40 theses/dissertations
  • 1 technical report

Collection Scope

  • 15,352 electronic resources
  • 1 print holding

Date Distribution

Subject Classification

  • 11,015 Engineering
    • 10,349 Computer Science and Technology...
    • 5,460 Software Engineering
    • 1,467 Information and Communication Engineering
    • 956 Electrical Engineering
    • 892 Control Science and Engineering
    • 447 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 177 Biomedical Engineering (degrees in...
    • 141 Electronic Science and Technology (...
    • 101 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,486 Science
    • 1,156 Mathematics
    • 654 Physics
    • 520 Biology
    • 394 Statistics (degrees in Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,427 Management
    • 1,756 Library, Information and Archives Manage...
    • 759 Management Science and Engineering (...
    • 241 Business Administration
    • 106 Public Administration
  • 1,762 Literature
    • 1,710 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 515 Medicine
    • 303 Clinical Medicine
    • 286 Basic Medicine (degrees in Medicine...
    • 113 Public Health and Preventive Medi...
  • 279 Law
    • 249 Sociology
  • 239 Education
    • 226 Education
  • 100 Agronomy
  • 96 Economics
  • 10 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,552 natural language...
  • 1,789 natural language...
  • 953 computational li...
  • 741 semantics
  • 683 machine learning
  • 612 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 334 large language m...
  • 334 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 255 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 Carnegie Mellon ...
  • 51 University of Ch...
  • 46 Tsinghua Univers...
  • 45 Carnegie Mellon ...
  • 43 Zhejiang Univers...
  • 43 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Sc...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 Carnegie Mellon ...
  • 33 Stanford Univers...
  • 32 Gaoling School o...
  • 32 Alibaba Grp Peop...
  • 31 School of Artifi...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 27 Peking Universit...
  • 26 Microsoft Resear...
  • 26 Language Technol...

Authors

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 26 Wen Ji-Rong
  • 26 Liu Zhiyuan
  • 26 Lapata Mirella
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,307 English
  • 930 Other
  • 114 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian

Search query: "Any field = Conference on empirical methods in natural language processing"
15,353 records; showing results 1191-1200

Toward Improving Robustness of Coreference Resolution for Thai Language
6th International Conference on Natural Language Processing (ICNLP)
Authors: Suwannapichat, Poomphob; Tarnpradab, Sansiri; Prom-on, Santitham (King Mongkut's University of Technology Thonburi, Dept. of Computer Engineering, Bangkok, Thailand)
Coreference resolution aims to identify expressions in a text that refer to the same entity and establish connections between them. This paper presents an improved method for Thai coreference resolution, extending the...
SciER: An Entity and Relation Extraction Dataset for Datasets, Methods, and Tasks in Scientific Documents
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Qi; Chen, Zhijia; Pan, Huitong; Caragea, Cornelia; Latecki, Longin Jan; Dragut, Eduard (Temple University, United States; University of Illinois Chicago, United States)
Scientific information extraction (SciIE) is critical for converting unstructured knowledge from scholarly articles into structured data (entities and relations). Several datasets have been proposed for training and v...
Control Large Language Models via Divide and Conquer
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Bingxuan; Wang, Yiwei; Meng, Tao; Chang, Kai-Wei; Peng, Nanyun (University of California, Los Angeles, United States; University of California, Merced, United States)
This paper investigates controllable generation for large language models (LLMs) with prompt-based control, focusing on Lexically Constrained Generation (LCG). We systematically evaluate the performance of LLMs on sat...
Don't Just Say "I don't know"! Self-aligning Large Language Models for Responding to Unknown Questions with Explanations
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Deng, Yang; Zhao, Yong; Li, Moxin; Ng, See-Kiong; Chua, Tat-Seng (Singapore Management University, Singapore; National University of Singapore, Singapore)
Despite the remarkable abilities of Large Language Models (LLMs) to answer questions, they often display a considerable level of overconfidence even when the question does not have a definitive answer. To avoid provid...
MuMath-Code: Combining Tool-Use Large Language Models with Multi-perspective Data Augmentation for Mathematical Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yin, Shuo; You, Weihao; Ji, Zhilong; Zhong, Guoqiang; Bai, Jinfeng (Tomorrow Advancing Life, China; College of Computer Science and Technology, Ocean University of China, China)
The tool-use Large Language Models (LLMs) that integrate with external Python interpreters have significantly enhanced mathematical reasoning capabilities for open-source LLMs, while tool-free methods chose another tr...
Repairs in a Block World: A New Benchmark for Handling User Corrections with Multi-Modal Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chiyah-Garcia, Javier; Suglia, Alessandro; Eshghi, Arash (Heriot-Watt University, Edinburgh, United Kingdom)
In dialogue, the addressee may initially misunderstand the speaker and respond erroneously, often prompting the speaker to correct the misunderstanding in the next turn with a Third Position Repair (TPR). The ability ...
Show and Guide: Instructional-Plan Grounded Vision and Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Glória-Silva, Diogo; Semedo, David; Magalhães, João (NOVA LINCS, NOVA School of Science and Technology, Portugal)
Guiding users through complex procedural plans is an inherently multimodal task in which having visually illustrated plan steps is crucial to deliver effective plan guidance. However, existing works on plan-followi...
CELLO: Causal Evaluation of Large Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Meiqi; Peng, Bo; Zhang, Yan; Lu, Chaochao (State Key Laboratory of General Artificial Intelligence, Peking University, Beijing, China; School of Intelligence Science and Technology, Peking University, China; Shanghai Jiao Tong University, China; Shanghai Artificial Intelligence Laboratory, China)
Causal reasoning is fundamental to human intelligence and crucial for effective decision-making in real-world environments. Despite recent advancements in large vision-language models (LVLMs), their ability to compreh...
TransferCVLM: Transferring Cross-Modal Knowledge for Vision-Language Modeling
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Choi, Dongha; Kim, Jung-Jae; Lee, Hyunju (GIST Artificial Intelligence Graduate School, Gwangju, Republic of Korea; A*STAR, Singapore)
Recent large vision-language multimodal models pre-trained with huge amounts of image-text pairs show remarkable performance in downstream tasks. However, the multimodal pre-training has limitations in terms of resour...
MIXTURE-OF-SKILLS: Learning to Optimize Data Usage for Fine-Tuning Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Minghao; Vu, Thuy-Trang; Qu, Lizhen; Haffari, Gholamreza (Monash University, Australia)
Large language models (LLMs) are typically fine-tuned on diverse and extensive datasets sourced from various origins to develop a comprehensive range of skills, such as writing, reasoning, chatting, coding, and more. ...