
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may be awarded...
    • 144 Electronic Science and Technology (may...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may be awarded in Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Management...
    • 760 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may be awarded in Medicine...
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institution

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Author

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,363 records in total; showing results 821-830
Translation Canvas: An Explainable Interface to Pinpoint and Analyze Translation Systems
2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Dandekar, Chinmay; Xu, Wenda; Xu, Xi; Ouyang, Siqi; Li, Lei. Affiliations: University of California Santa Barbara, United States; Carnegie Mellon University, United States
With the rapid advancement of machine translation research, evaluation toolkits have become essential for benchmarking system progress. Tools like COMET and SacreBLEU offer single quality score assessments that are ef...
LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Hu, Zhiqiang; Wang, Lei; Lan, Yihuai; Xu, Wanyu; Lim, Ee-Peng; Bing, Lidong; Xu, Xing; Poria, Soujanya; Lee, Roy Ka-Wei. Affiliations: Singapore Univ Technol & Design, Singapore, Singapore; Singapore Management Univ, Singapore, Singapore; Alibaba Grp DAMO Acad, Singapore, Singapore; Southwest Jiaotong Univ, Chengdu, Peoples R China; Univ Elect Sci & Technol China, Chengdu, Peoples R China
The success of large language models (LLMs), like GPT-4 and ChatGPT, has led to the development of numerous cost-effective and accessible alternatives that are created by finetuning open-access LLMs with task-specific...
README++: Benchmarking Multilingual Language Models for Multi-Domain Readability Assessment
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Naous, Tarek; Ryan, Michael J.; Lavrouk, Anton; Chandra, Mohit; Xu, Wei. Affiliation: College of Computing, Georgia Institute of Technology, United States
We present a comprehensive evaluation of large language models for multilingual readability assessment. Existing evaluation resources lack domain and language diversity, limiting the ability for cross-domain and cross...
Generating Vehicular Icon Descriptions and Indications Using Large Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Fletcher, James; Dehnen, Nicholas; Bathaie, Seyed Nima Tayarani; An, Aijun; Davoudi, Heidar; Di Carlantonio, Ron; Farmaner, Gary. Affiliations: York University, Toronto, Canada; Ontario Tech University, Oshawa, Canada; iNAGO Co, Toronto, Canada
To enhance a question-answering system for automotive drivers, we tackle the problem of automatic generation of icon image descriptions. The descriptions can match the driver’s query about the icon appearing on the d...
Walking in Others' Shoes: How Perspective-Taking Guides Large Language Models in Reducing Toxicity and Bias
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Rongwu; Zhou, Zi'an; Zhang, Tianwei; Qi, Zehan; Yao, Su; Xu, Ke; Xu, Wei; Qiu, Han. Affiliations: Tsinghua University, China; Nanyang Technological University, Singapore
The common toxicity and societal bias in contents generated by large language models (LLMs) necessitate strategies to reduce harm. Present solutions often demand white-box access to the model or substantial training, ...
Can Large Language Models Understand DL-Lite Ontologies? An Empirical Study
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Keyu; Qi, Guilin; Li, Jiaqi; Zhai, Songlin. Affiliations: School of Computer Science and Engineering, Southeast University, Nanjing, China; Ministry of Education, China
Large language models (LLMs) have shown significant achievements in solving a wide range of tasks. Recently, LLMs' capability to store, retrieve and infer with symbolic knowledge has drawn a great deal of attentio...
Consistent Document-Level Relation Extraction via Counterfactuals
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Modarressi, Ali; Köksal, Abdullatif; Schütze, Hinrich. Affiliations: Center for Information and Language Processing, LMU Munich, Germany; Munich Center for Machine Learning, Germany
Many datasets have been developed to train and evaluate document-level relation extraction (RE) models. Most of these are constructed using real-world data. It has been shown that RE models trained on real-world data ...
Transforming EFL Teaching with AI: A Systematic Review of Empirical Studies
INTERNATIONAL JOURNAL OF ARTIFICIAL INTELLIGENCE IN EDUCATION, 2025, pp. 1-34
Authors: Kundu, Arnab; Bej, Tripti. Affiliation: Inst Educ Res & Policy, Bankura, India
This systematic review explores the integration and impact of Artificial Intelligence in English as a Foreign Language teaching in schools, evaluating the effectiveness, challenges, and pedagogical implications of AI-...
Efficient Temporal Extrapolation of Multimodal Large Language Models with Temporal Grounding Bridge
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yuxuan; Wang, Yueqian; Wu, Pengfei; Liang, Jianxin; Zhao, Dongyan; Liu, Yang; Zheng, Zilong. Affiliations: Beijing, China; Wangxuan Institute of Computer Technology, Peking University, Beijing, China; State Key Laboratory of General Artificial Intelligence, Beijing, China
Despite progress in multimodal large language models (MLLMs), the challenge of interpreting long-form videos in response to linguistic queries persists, largely due to the inefficiency in temporal grounding and limite...
TreePiece: Faster Semantic Parsing via Tree Tokenization
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Sid; Shrivastava, Akshat; Livshits, Aleksandr. Affiliation: Meta Inc, Menlo Pk, CA 94025, USA
Autoregressive (AR) encoder-decoder neural networks have proved successful in many NLP problems, including Semantic Parsing - a task that translates natural language to machine-readable parse trees. However, the seque...