咨询与建议

限定检索结果

文献类型

  • 7,582 篇 会议
  • 71 册 图书
  • 49 篇 期刊文献
  • 2 篇 学位论文

馆藏范围

  • 7,703 篇 电子文献
  • 1 种 纸本馆藏

日期分布

学科分类号

  • 6,480 篇 工学
    • 6,252 篇 计算机科学与技术...
    • 3,600 篇 软件工程
    • 748 篇 信息与通信工程
    • 507 篇 控制科学与工程
    • 271 篇 电气工程
    • 213 篇 生物工程
    • 121 篇 化学工程与技术
    • 100 篇 机械工程
    • 85 篇 电子科学与技术(可...
    • 76 篇 生物医学工程(可授...
    • 63 篇 安全科学与工程
    • 59 篇 农业工程
    • 57 篇 交通运输工程
    • 49 篇 网络空间安全
  • 1,524 篇 管理学
    • 1,167 篇 图书情报与档案管...
    • 467 篇 管理科学与工程(可...
    • 134 篇 工商管理
  • 1,472 篇 文学
    • 1,465 篇 外国语言文学
    • 161 篇 中国语言文学
  • 1,447 篇 理学
    • 775 篇 数学
    • 352 篇 物理学
    • 250 篇 生物学
    • 240 篇 统计学(可授理学、...
    • 120 篇 化学
    • 101 篇 系统科学
  • 165 篇 法学
    • 153 篇 社会学
  • 130 篇 医学
    • 94 篇 临床医学
    • 76 篇 基础医学(可授医学...
  • 112 篇 教育学
    • 106 篇 教育学
  • 68 篇 农学
    • 68 篇 作物学
  • 42 篇 经济学
  • 6 篇 哲学
  • 3 篇 艺术学
  • 1 篇 军事学

主题

  • 1,183 篇 natural language...
  • 872 篇 computational li...
  • 621 篇 natural language...
  • 283 篇 semantics
  • 165 篇 natural language...
  • 128 篇 machine learning
  • 127 篇 graphic methods
  • 123 篇 iterative method...
  • 111 篇 sentiment analys...
  • 110 篇 speech recogniti...
  • 106 篇 deep learning
  • 94 篇 syntactics
  • 90 篇 text processing
  • 86 篇 speech processin...
  • 81 篇 embeddings
  • 72 篇 information retr...
  • 69 篇 modeling languag...
  • 69 篇 artificial intel...
  • 66 篇 contrastive lear...
  • 63 篇 zero-shot learni...

机构

  • 74 篇 carnegie mellon ...
  • 36 篇 national univers...
  • 34 篇 carnegie mellon ...
  • 34 篇 language technol...
  • 34 篇 institute for na...
  • 33 篇 university of wa...
  • 33 篇 school of comput...
  • 32 篇 tsinghua univers...
  • 30 篇 nanyang technolo...
  • 30 篇 stanford univers...
  • 30 篇 university of ch...
  • 29 篇 zhejiang univers...
  • 27 篇 alibaba grp peop...
  • 26 篇 carnegie mellon ...
  • 25 篇 gaoling school o...
  • 25 篇 harbin institute...
  • 25 篇 peking universit...
  • 25 篇 natl univ singap...
  • 24 篇 allen inst artif...
  • 23 篇 the chinese univ...

作者

  • 42 篇 neubig graham
  • 39 篇 zhou guodong
  • 39 篇 smith noah a.
  • 36 篇 liu yang
  • 36 篇 lapata mirella
  • 34 篇 sun maosong
  • 32 篇 zhang min
  • 30 篇 liu qun
  • 30 篇 hovy eduard
  • 29 篇 zhao jun
  • 27 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 gurevych iryna
  • 25 篇 vulic ivan
  • 22 篇 huang xuanjing
  • 21 篇 chang kai-wei
  • 21 篇 liu kang
  • 21 篇 zhang yue
  • 20 篇 wen ji-rong
  • 20 篇 zhang qi

语言

  • 6,985 篇 英文
  • 689 篇 其他
  • 23 篇 中文
  • 8 篇 法文
  • 4 篇 土耳其文
  • 2 篇 德文
  • 2 篇 俄文
检索条件"任意字段=Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7704 条 记 录,以下是331-340 订阅
排序:
A Probability-Quality Trade-off in Aligned language Models and its Relation to Sampling Adaptors
A Probability-Quality Trade-off in Aligned Language Models a...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Tan, Naaman Valvoda, Josef Liu, Tianyu Svete, Anej Qin, Yanxia Min-Yen, Kan Cotterell, Ryan National University of Singapore Singapore University of Copenhagen Denmark ETH Zürich Switzerland
The relationship between the quality of a string, as judged by a human reader, and its probability, p(y) under a language model undergirds the development of better language models. For example, many popular algorithm... 详细信息
来源: 评论
Universal Vulnerabilities in Large language Models: Backdoor Attacks for In-context Learning
Universal Vulnerabilities in Large Language Models: Backdoor...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Zhao, Shuai Jia, Meihuizi Tuan, Luu Anh Pan, Fengjun Wen, Jinming Nanyang Technological University Singapore Guangzhou University Guangzhou China Beijing Institute of Technology Beijing China
In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks, especially in few-shot settings. Despite being widely applied, in-context lea... 详细信息
来源: 评论
Investigating Efficiently Extending Transformers for Long Input Summarization
Investigating Efficiently Extending Transformers for Long In...
收藏 引用
conference on empirical methods in natural language processing (EMNLP)
作者: Phang, Jason Zhao, Yao Liu, Peter J. NYU New York NY 10003 USA Google Res Brain Team Mountain View CA USA Google Mountain View CA USA
While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs still poses a significant challenge. One such task is long input summarization, wh... 详细信息
来源: 评论
Images Speak Louder than Words: Understanding and Mitigating Bias in Vision-language Model from a Causal Mediation Perspective
Images Speak Louder than Words: Understanding and Mitigating...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Weng, Zhaotian Gao, Zijun Andrews, Jerone Zhao, Jieyu University of Southern California United States Sony AI
Vision-language models (VLMs) pre-trained on extensive datasets can inadvertently learn biases by correlating gender information with specific objects or scenarios. Current methods, which focus on modifying inputs and... 详细信息
来源: 评论
language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in language Models
Language Models as Compilers: Simulating Pseudocode Executio...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Chae, Hyungjoo Kim, Yeonghyeon Kim, Seungone Ong, Kai Tzu-Iunn Kwak, Beong-Woo Kim, Moohyeon Kim, Seonghwan Kwon, Taeyoon Moon, Seungjun Chung, Jiwan Yu, Youngjae Yeo, Jinyoung Yonsei University Korea Republic of Carnegie Mellon University United States
Algorithmic reasoning tasks that involve complex logical patterns, such as completing Dyck language, pose challenges for large language models (LLMs), despite their recent success. Prior work has used LLMs to generate... 详细信息
来源: 评论
Unlocking Memorization in Large language Models with Dynamic Soft Prompting
Unlocking Memorization in Large Language Models with Dynamic...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Wang, Zhepeng Bao, Runxue Wu, Yawen Taylor, Jackson Xiao, Cao Zheng, Feng Jiang, Weiwen Gao, Shangqian Zhang, Yanfu George Mason University United States GE Healthcare United States University of Pittsburgh United States William and Mary United States Southern University of Science and Technology China Florida State University United States
Pretrained large language models (LLMs) have excelled in a variety of natural language processing (NLP) tasks, including summarization, question answering, and translation. However, LLMs pose significant security risk... 详细信息
来源: 评论
Scaling Parameter-Constrained language Models with Quality Data
Scaling Parameter-Constrained Language Models with Quality D...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Chang, Ernie Paltenghi, Matteo Li, Yang Lin, Pin-Jie Zhao, Changsheng Huber, Patrick Liu, Zechun Rabatin, Rastislav Shi, Yangyang Chandra, Vikas AI at Meta United States Iowa State University United States Virginia Tech United States
Scaling laws in language modeling traditionally quantify training loss as a function of dataset size and model parameters, providing compute-optimal estimates but often neglecting the impact of data quality on model g... 详细信息
来源: 评论
On the In-context Generation of language Models
On the In-context Generation of Language Models
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Jiang, Zhongtao Zhang, Yuanzhe Luo, Kun Yuan, Xiaowei Zhao, Jun Liu, Kang The Key Laboratory of Cognition and Decision Intelligence for Complex Systems Institute of Automation Chinese Academy of Sciences China School of Artificial Intelligence University of Chinese Academy of Sciences China Beijing Academy of Artificial Intelligence China Shanghai Artificial Intelligence Laboratory China
Large language models (LLMs) are found to have the ability of in-context generation (ICG): when they are fed with an in-context prompt concatenating a few somehow similar examples, they can implicitly recognize the pa... 详细信息
来源: 评论
Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large language Models
Whispers that Shake Foundations: Analyzing and Mitigating Fa...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Yuan, Hongbang Cao, Pengfei Jin, Zhuoran Chen, Yubo Zeng, Daojian Liu, Kang Zhao, Jun The Key Laboratory of Cognition and Decision Intelligence for Complex Systems Institute of Automation Chinese Academy of Sciences Beijing China School of Artificial Intelligence University of Chinese Academy of Sciences Beijing China Hunan Normal University Changsha China
Large language Models (LLMs) have shown impressive capabilities but still suffer from the issue of hallucinations. A significant type of this issue is the false premise hallucination, which we define as the phenomenon...
来源: 评论
LONGAGENT: Achieving Question Answering for 128k-Token-Long Documents through Multi-Agent Collaboration
LONGAGENT: Achieving Question Answering for 128k-Token-Long ...
收藏 引用
2024 conference on empirical methods in natural language processing, EMNLP 2024
作者: Zhao, Jun Zu, Can Xu, Hao Lu, Yi He, Wei Ding, Yiwen Gui, Tao Zhang, Qi Huang, Xuanjing School of Computer Science Fudan University China Shanghai Key Laboratory of Intelligent Information Processing Fudan University China Institute of Modern Languages and Linguistics Fudan University China
Large language models (LLMs) have achieved tremendous success in understanding language and processing text. However, question-answering (QA) on lengthy documents faces challenges of resource constraints and a high pr... 详细信息
来源: 评论