
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may be awa...
    • 144 Electronic Science and Technology (may...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may be awarded in Science, ...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manag...
    • 760 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may be awarded in Medic...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institution

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: Any field = "Conference on empirical methods in natural language processing"
15,363 records; showing 781-790
POSTMARK: A Robust Blackbox Watermark for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chang, Yapei; Krishna, Kalpesh; Houmansadr, Amir; Wieting, John; Iyyer, Mohit (University of Massachusetts Amherst, United States; Google, United States)
The most effective techniques to detect LLM-generated text rely on inserting a detectable signature, or watermark, during the model's decoding process. Most existing watermarking methods require access to the underl...
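The decoding-time watermark family this abstract refers to (the approach POSTMARK contrasts with, not POSTMARK itself) can be sketched as a "green-list" scheme: a hash of the previous token deterministically marks part of the vocabulary green, generation prefers green tokens, and detection counts green bigrams. The vocabulary, `GAMMA`, and hashing choice below are illustrative assumptions, not the paper's method.

```python
import hashlib
import math

GAMMA = 0.5  # assumed fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically hash (previous token, candidate); roughly a GAMMA
    # fraction of candidates hash below the threshold and count as "green".
    h = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return h[0] < int(256 * GAMMA)

def watermarked_choice(prev_token: str, candidates: list[str]) -> str:
    # Decoding-time rule (hard variant): prefer a green token when one exists.
    greens = [t for t in candidates if is_green(prev_token, t)]
    return greens[0] if greens else candidates[0]

def detection_z_score(tokens: list[str]) -> float:
    # Under the null (unwatermarked text) each bigram is green with
    # probability GAMMA; a large z-score flags watermarked text.
    n = len(tokens) - 1
    hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
    return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)
```

With the hard rule nearly every generated bigram is green, so the detector's z-score grows roughly like the square root of the text length, while unwatermarked text hovers near zero.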
MQUAKE: Assessing Knowledge Editing in Language Models via Multi-Hop Questions
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhong, Zexuan; Wu, Zhengxuan; Manning, Christopher D.; Potts, Christopher; Chen, Danqi (Princeton Univ, Princeton, NJ 08544, USA; Stanford Univ, Stanford, CA 94305, USA)
The information stored in large language models (LLMs) falls out of date quickly, and retraining from scratch is often not an option. This has recently given rise to a range of techniques for injecting new facts throu...
ZGUL: Zero-shot Generalization to Unseen Languages using Multi-source Ensembling of Language Adapters
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Rathore, Vipul; Dhingra, Rajdeep; Singla, Parag; Mausam (Indian Inst Technol, New Delhi, India)
We tackle the problem of zero-shot cross-lingual transfer in NLP tasks via the use of language adapters (LAs). Most earlier works have explored training with adapters from a single source (often English), and testi...
PaCoST: Paired Confidence Significance Testing for Benchmark Contamination Detection in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Huixuan; Lin, Yun; Wan, Xiaojun (Wangxuan Institute of Computer Technology, Peking University, China; School of Foreign Languages, Peking University, China)
Large language models (LLMs) are known to be trained on vast amounts of data, which may unintentionally or intentionally include data from commonly used benchmarks. This inclusion can lead to deceptively high scores on...
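PaCoST's exact statistic is not shown in this snippet; a generic paired t-test over per-question confidence scores, comparing original benchmark items against rephrased counterparts, conveys the underlying idea of paired significance testing for contamination. The function name and the numbers in the usage example are made up for illustration.

```python
import math

def paired_t_statistic(conf_original: list[float],
                       conf_counterpart: list[float]) -> float:
    """t-statistic of the paired differences. A large positive value means the
    model is systematically more confident on the original benchmark wording
    than on equivalent rephrasings, the asymmetry a contamination test probes."""
    diffs = [a - b for a, b in zip(conf_original, conf_counterpart)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

For example, confidences of 0.92, 0.88, 0.95, 0.90, 0.91 on original items versus 0.85, 0.80, 0.89, 0.84, 0.83 on rephrasings give t ≈ 15.65, a strong signal that the gap is systematic rather than noise.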
Updating Knowledge in Large Language Models: An Empirical Evaluation
IEEE Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS)
Authors: Marinelli, Alberto Roberto; Carta, Antonio; Passaro, Lucia C. (Univ Pisa, Dept Comp Sci, Pisa, Italy)
Natural language processing (NLP) has witnessed a paradigm shift with Large Language Models (LLMs), yet the static knowledge from pre-training can lead to knowledge obsolescence. This study focuses on the dynamic rela...
Mitigating the Language Mismatch and Repetition Issues in LLM-based Machine Translation via Model Editing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Weichuan; Li, Zhaoyi; Lian, Defu; Ma, Chen; Song, Linqi; Wei, Ying (City University of Hong Kong, Hong Kong; University of Science and Technology of China, China; City University of Hong Kong Shenzhen Research Institute, Hong Kong; Zhejiang University, China)
Large Language Models (LLMs) have recently revolutionized the NLP field, while they still fall short in some specific downstream tasks. In this work, we focus on utilizing LLMs to perform machine translation, where we...
FastAdaSP: Multitask-Adapted Efficient Inference for Large Speech Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Yichen; Song, Jiaqi; Yang, Chao-Han Huck; Watanabe, Shinji (Carnegie Mellon University, United States; NVIDIA Research, United States)
In this study, we aim to explore efficient inference for Multitask Speech Language Models (SpeechLMs) via token reduction. Unlike other modalities such as vision or text, speech has unique temporal dependencies, making prev...
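FastAdaSP's actual reduction strategy is not detailed in this snippet; a generic sketch of temporal token reduction, collapsing runs of near-duplicate adjacent speech frames into one token, shows what exploiting speech's temporal redundancy for efficient inference can look like. The function names and the similarity threshold are assumptions for illustration.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity between two feature vectors (assumed non-zero).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def merge_adjacent_frames(frames: list[list[float]],
                          threshold: float = 0.98) -> list[list[float]]:
    """Collapse runs of adjacent, near-duplicate frames into their running
    mean, shrinking the token sequence the language model attends over."""
    merged = [list(frames[0])]
    counts = [1]
    for f in frames[1:]:
        if cosine(merged[-1], f) >= threshold:
            c = counts[-1]  # fold this frame into the running mean
            merged[-1] = [(m * c + x) / (c + 1) for m, x in zip(merged[-1], f)]
            counts[-1] = c + 1
        else:
            merged.append(list(f))
            counts.append(1)
    return merged
```

Because adjacent speech frames within a phoneme are highly similar, such merging can shorten the sequence substantially, while the same heuristic applied to text or vision tokens would discard far more information.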
Struct-XLM: A Structure Discovery Multilingual Language Model for Enhancing Cross-lingual Transfer through Reinforcement Learning
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wu, Linjuan; Lu, Weiming (Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China; Alibaba-Zhejiang Univ Joint Res Inst Frontier Technol, Hangzhou, Peoples R China)
Cross-lingual transfer learning heavily relies on well-aligned cross-lingual representations. Syntactic structure is recognized as beneficial for cross-lingual transfer, but little research has utilized it for align...
Revealing the Parallel Multilingual Learning within Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mu, Yongyu; Feng, Peinan; Cao, Zhiquan; Wu, Yuzhang; Li, Bei; Wang, Chenglong; Xiao, Tong; Song, Kai; Liu, Tongran; Zhang, Chunliang; Zhu, Jingbo (NLP Lab, School of Computer Science and Engineering, Northeastern University, Shenyang, China; NiuTrans Research, Shenyang, China; Bytedance, Seattle, United States; CAS Key Laboratory of Behavioral Science, Institute of Psychology, CAS, Beijing, China)
Large language models (LLMs) can handle multilingual and cross-lingual text within a single input; however, previous works leveraging multilingualism in LLMs primarily focus on using English as the pivot language to en...
Do Language Models Have a Common Sense regarding Time? Revisiting Temporal Commonsense Reasoning in the Era of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Jain, Raghav; Sojitra, Daivik; Acharya, Arkadeep; Saha, Sriparna; Jatowt, Adam; Dandapat, Sandipan (Indian Inst Technol Patna, Dept Comp Sci & Engn, Patna, Bihar, India; Univ Innsbruck, Innsbruck, Austria; Microsoft, Chennai, Tamil Nadu, India)
Temporal reasoning represents a vital component of human communication and understanding, yet it remains an underexplored area within the context of Large Language Models (LLMs). Despite LLMs demonstrating significant pr...