
Refine Search Results

Document Type

  • 14,463 conference papers
  • 654 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection

  • 15,258 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,944 Engineering
    • 10,283 Computer Science and Technology...
    • 5,408 Software Engineering
    • 1,463 Information and Communication Engineering
    • 954 Electrical Engineering
    • 880 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (can confer...
    • 142 Electronic Science and Technology (can confer...
    • 101 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (can confer Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,416 Management
    • 1,748 Library, Information and Archives Manage...
    • 757 Management Science and Engineering (can...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 283 Basic Medicine (can confer Medicine...
    • 111 Public Health and Preventive Medi...
  • 276 Law
    • 248 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 96 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,535 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 740 篇 semantics
  • 681 篇 machine learning
  • 609 篇 deep learning
  • 520 篇 natural language...
  • 347 篇 computational mo...
  • 338 篇 training
  • 333 篇 accuracy
  • 331 篇 sentiment analys...
  • 329 篇 large language m...
  • 321 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 252 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 language technol...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,662 English
  • 482 Other
  • 106 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,259 records; showing 711–720
Comparative Study of Explainability Methods for Legal Outcome Prediction
6th Natural Legal Language Processing Workshop 2024, NLLP 2024, co-located with the 2024 Conference on Empirical Methods in Natural Language Processing
Authors: Staliunaite, Ieva Raminta; Valvoda, Josef; Satoh, Ken (University of Cambridge, United Kingdom; University of Copenhagen, Denmark; National Institute of Informatics, Japan)
This paper investigates explainability in natural legal language processing (NLLP). We study the task of legal outcome prediction for European Court of Human Rights cases in a ternary classification setup, where a ...
Knowledge-based Consistency Testing of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Rajan, Sai Sathiesh; Soremekun, Ezekiel; Chattopadhyay, Sudipta (Singapore University of Technology and Design, Singapore; Royal Holloway, University of London, United Kingdom)
In this work, we systematically expose and measure the inconsistency and knowledge gaps of Large Language Models (LLMs). Specifically, we propose an automated testing framework (called KONTEST) which leverages a knowle...
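As a rough illustration of consistency testing in general (this is not the paper's KONTEST framework; the model and prompts below are hypothetical stand-ins), one can query a model with paraphrases of the same question and measure how often the answers agree:

```python
from collections import Counter

def consistency_rate(model, prompt_variants):
    """Share of paraphrased prompts that yield the majority answer.
    `model` is any callable str -> str standing in for a real LLM."""
    answers = [model(p) for p in prompt_variants]
    majority_count = Counter(answers).most_common(1)[0][1]
    return majority_count / len(answers)

# Toy stand-in model: canned answers, inconsistent on one paraphrase.
canned = {
    "What is the capital of France?": "Paris",
    "Name the capital city of France.": "Paris",
    "France's capital is which city?": "Lyon",
}
rate = consistency_rate(canned.get, list(canned))
print(rate)  # 2 of 3 paraphrases agree
```

A rate below 1.0 flags an inconsistency worth investigating; real frameworks generate the paraphrases and expected answers automatically, e.g. from a knowledge base.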
Experimental Contexts Can Facilitate Robust Semantic Property Inference in Language Models, but Inconsistently
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Misra, Kanishka; Ettinger, Allyson; Mahowald, Kyle (The University of Texas at Austin, United States; Allen Institute for Artificial Intelligence, United States)
Recent zero-shot evaluations have highlighted important limitations in the abilities of language models (LMs) to perform meaning extraction. However, it is now well known that LMs can demonstrate radical improvements ...
Don’t Be My Doctor! Recognizing Healthcare Advice in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cheng, Kellen Tan; Gentile, Anna Lisa; Li, Pengyuan; DeLuca, Chad; Ren, Guang-Jie (Princeton University, United States; IBM Research, United States)
Large language models (LLMs) have seen increasing popularity in daily use, with their widespread adoption by many corporations as virtual assistants, chatbots, predictors, and many more. Their growing influence raises...
Large Language Model-based Human-Agent Collaboration for Complex Task Solving
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Feng, Xueyang; Chen, Zhi-Yuan; Qin, Yujia; Lin, Yankai; Chen, Xu; Liu, Zhiyuan; Wen, Ji-Rong (Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, China; Beijing Key Laboratory of Big Data Management and Analysis Methods, Beijing, China; Department of Computer Science and Technology, Tsinghua University, Beijing, China)
In recent developments within the research community, the integration of Large Language Models (LLMs) in creating fully autonomous agents has garnered significant interest. Despite this, LLM-based agents frequently de...
We Need to Talk About Reproducibility in NLP Model Comparison
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Xue, Yan; Cao, Xuefei; Yang, Xingli; Wang, Yu; Wang, Ruibo; Li, Jihong (Shanxi Univ, Sch Comp & Informat Technol; Sch Automat & Software Engn; Sch Math Sci; Sch Modern Educ Technol; Taiyuan 030006, Peoples R China)
NLPers frequently face a reproducibility crisis when comparing various models on a real-world NLP task. Many studies have empirically shown that standard splits tend to produce poorly reproducible and unreliable c...
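The concern above, that a single fixed "standard" split can misrepresent a model's performance, is commonly addressed by scoring over repeated random splits and reporting the spread. A minimal sketch, with an invented `repeated_split_eval` helper and a toy scoring routine (not the paper's method):

```python
import random
import statistics

def repeated_split_eval(n_items, train_eval, n_splits=10, test_frac=0.2, seed=0):
    """Mean and standard deviation of a score over repeated random
    train/test splits. `train_eval(train_idx, test_idx) -> score`
    stands in for a real train-and-evaluate routine."""
    rng = random.Random(seed)
    idx = list(range(n_items))
    scores = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(round((1 - test_frac) * n_items))
        scores.append(train_eval(idx[:cut], idx[cut:]))
    return statistics.mean(scores), statistics.stdev(scores)

# Toy evaluation: the "score" is just the positive rate of the test fold.
labels = [i % 3 == 0 for i in range(100)]
positive_rate = lambda tr, te: sum(labels[i] for i in te) / len(te)
mean, sd = repeated_split_eval(len(labels), positive_rate)
```

A non-trivial standard deviation across splits is exactly the variability that a single standard split hides.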
Targeted Multilingual Adaptation for Low-resource Language Families
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Downey, C.M.; Blevins, Terra; Serai, Dhwani; Parikh, Dwija; Steinert-Threlkeld, Shane (Department of Linguistics, University of Rochester, United States; Goergen Institute for Data Science, University of Rochester, United States; Paul G. Allen School of Computer Science & Engineering, University of Washington, United States; Department of Linguistics, University of Washington, United States)
Massively multilingual models are known to have limited utility in any one language, and to perform particularly poorly on low-resource languages. By contrast, targeted multilinguality has been shown to benefit low-reso...
EchoSight: Advancing Visual-Language Models with Wiki Knowledge
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yan, Yibin; Xie, Weidi (School of Artificial Intelligence, Shanghai Jiao Tong University, China)
Knowledge-based Visual Question Answering (KVQA) tasks require answering questions about images using extensive background knowledge. Despite significant advancements, large generative visual-language models often...
Tokenization Falling Short: On Subword Robustness in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chai, Yekun; Fang, Yewei; Peng, Qiwei; Li, Xuhong (Baidu, China; ModelBest, China; University of Copenhagen, Denmark)
Language models typically tokenize raw text into sequences of subword identifiers from a predefined vocabulary, a process inherently sensitive to typographical errors, length variations, and largely oblivious to the i...
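The abstract's premise, that subword tokenization is brittle under typographical noise, can be illustrated with a minimal greedy longest-match tokenizer (a simplification of BPE/WordPiece-style tokenizers; the toy vocabulary below is invented for illustration). A one-letter spelling variant fragments the token sequence:

```python
def tokenize(text, vocab):
    """Greedy longest-match subword tokenization over a toy vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for j in range(len(text), i, -1):   # try the longest match first
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            tokens.append(text[i])          # unknown-character fallback
            i += 1
    return tokens

vocab = {"token", "ization", "ation", "i", "s"}
print(tokenize("tokenization", vocab))  # ['token', 'ization']
print(tokenize("tokenisation", vocab))  # ['token', 'i', 's', 'ation']
```

The British-spelling variant doubles the token count, so the model sees a very different input sequence for what a human reads as the same word.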
How Susceptible are Large Language Models to Ideological Manipulation?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Kai; He, Zihao; Yan, Jun; Shi, Taiwei; Lerman, Kristina (Department of Computer Science, University of Southern California, United States; Information Sciences Institute, University of Southern California, United States)
Large Language Models (LLMs) possess the potential to exert substantial influence on public perceptions and interactions with information. This raises concerns about the societal impact that could arise if the ideolog...