
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Man...
    • 758 Management Science and Engineering (may...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 112 Public Health and Preventive Med...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 741 semantics
  • 680 machine learning
  • 609 deep learning
  • 520 natural language...
  • 347 computational mo...
  • 336 training
  • 333 accuracy
  • 331 sentiment analys...
  • 329 large language m...
  • 320 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 261 speech recogniti...
  • 252 transformers
  • 235 neural networks
  • 217 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 27 language technol...
  • 27 peking universit...
  • 26 microsoft resear...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 27 lapata mirella
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,258 records; showing 601-610
Sort by:
Zero-Resource Hallucination Prevention for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Luo, Junyu; Xiao, Cao; Ma, Fenglong (The Pennsylvania State University, United States; GE Healthcare, United States)
The prevalent use of large language models (LLMs) in various domains has drawn attention to the issue of "hallucination", which refers to instances where LLMs generate factually inaccurate or ungrounded info...
A Comprehensive Survey of Hallucination in Large Language, Image, Video and Audio Foundation Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Sahoo, Pranab; Meharia, Prabhash; Ghosh, Akash; Saha, Sriparna; Jain, Vinija; Chadha, Aman (Department of Computer Science and Engineering, Indian Institute of Technology Patna, India; Stanford University, United States; Amazon GenAI, United States)
The rapid advancement of foundation models (FMs) across language, image, audio, and video domains has shown remarkable capabilities in diverse tasks. However, the proliferation of FMs brings forth a critical challenge...
TAIL: A Toolkit for Automatic and Realistic Long-Context Large Language Model Evaluation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gu, Gefei; Zhao, Yilun; Ning, Ruoxi; Zheng, Yanan; Cohan, Arman (Yale University, United States; University of Waterloo, Canada; Allen Institute for AI, India)
As long-context large language models (LLMs) gain increasing attention for their ability to handle extensive inputs, the demand for effective evaluation methods has become critical. Existing evaluation methods, howeve...
The Generation Gap: Exploring Age Bias in the Value Systems of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Siyang; Maturi, Trisha; Yi, Bowen; Shen, Siqi; Mihalcea, Rada (The LIT Group, Department of Computer Science and Engineering, University of Michigan, Ann Arbor, United States)
We explore the alignment of values in Large Language Models (LLMs) with specific age groups, leveraging data from the World Value Survey across thirteen *** a diverse set of prompts tailored to ensure response robustn...
DA3: A Distribution-Aware Adversarial Attack against Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Yibo; Dong, Xiangjue; Caverlee, James; Yu, Philip S. (University of Illinois Chicago, United States; Texas A&M University, United States)
Language models can be manipulated by adversarial attacks, which introduce subtle perturbations to input data. While recent attack methods can achieve a relatively high attack success rate (ASR), we've observed th...
YesBut: A High-Quality Annotated Multimodal Dataset for Evaluating Satire Comprehension Capability of Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Nandy, Abhilash; Agarwal, Yash; Patwa, Ashish; Das, Millon Madhur; Bansal, Aman; Raj, Ankit; Goyal, Pawan; Ganguly, Niloy (Indian Institute of Technology Kharagpur, India; University of Massachusetts Amherst, United States; Haldia Institute of Technology, India)
Understanding satire and humor is a challenging task for even current Vision-Language models. In this paper, we propose the challenging tasks of Satirical Image Detection (detecting whether an image is satirical), Und...
Structural Priming Demonstrates Abstract Grammatical Representations in Multilingual Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Michaelov, James A.; Arnett, Catherine; Chang, Tyler A.; Bergen, Benjamin K. (Univ Calif San Diego, Dept Cognit Sci, La Jolla, CA 92093, USA; Univ Calif San Diego, Dept Linguist, La Jolla, CA 92093, USA)
Abstract grammatical knowledge, of parts of speech and grammatical patterns, is key to the capacity for linguistic generalization in humans. But how abstract is grammatical knowledge in large language models? In the...
Target-Aware Language Modeling via Granular Data Sampling
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chang, Ernie; Lin, Pin-Jie; Li, Yang; Zhao, Changsheng; Kim, Daeil; Rabatin, Rastislav; Liu, Zechun; Shi, Yangyang; Chandra, Vikas (AI at Meta, United States; Virginia Tech, United States; Iowa State University, United States)
Language model pretraining generally targets a broad range of use cases and incorporates data from diverse sources. However, there are instances where we desire a model that excels in specific areas without markedly c...
Temporally Consistent Factuality Probing for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bajpai, Ashutosh; Goyal, Aaryan; Anwer, Atif; Chakraborty, Tanmoy (Indian Institute of Technology Delhi, India; Wipro Research, India)
The prolific use of Large Language Models (LLMs) as an alternate knowledge base requires them to be factually consistent, necessitating both correctness and consistency traits for paraphrased queries. Recently, signif...
GPT-RE: In-context Learning for Relation Extraction using Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wan, Zhen; Cheng, Fei; Mao, Zhuoyuan; Liu, Qianying; Song, Haiyue; Li, Jiwei; Kurohashi, Sadao (Kyoto Univ, Kyoto, Japan; Zhejiang Univ, Hangzhou, Peoples R China)
In spite of the potential for ground-breaking achievements offered by large language models (LLMs) (e.g., GPT-3) via in-context learning (ICL), they still lag significantly behind fully-supervised baselines (e.g., fin...