
Refine Search Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Collection

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Bioengineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (...
    • 74 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Man...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literature
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Arts
  • 1 Military Science

Topic

  • 1,181 篇 natural language...
  • 872 篇 computational li...
  • 619 篇 natural language...
  • 283 篇 semantics
  • 165 篇 natural language...
  • 128 篇 machine learning
  • 127 篇 graphic methods
  • 123 篇 iterative method...
  • 111 篇 sentiment analys...
  • 110 篇 speech recogniti...
  • 105 篇 deep learning
  • 94 篇 syntactics
  • 90 篇 text processing
  • 86 篇 speech processin...
  • 81 篇 embeddings
  • 72 篇 information retr...
  • 69 篇 modeling languag...
  • 69 篇 artificial intel...
  • 66 篇 contrastive lear...
  • 63 篇 zero-shot learni...

Institution

  • 74 篇 carnegie mellon ...
  • 36 篇 national univers...
  • 34 篇 carnegie mellon ...
  • 34 篇 language technol...
  • 34 篇 institute for na...
  • 33 篇 university of wa...
  • 33 篇 school of comput...
  • 32 篇 tsinghua univers...
  • 31 篇 university of ch...
  • 30 篇 nanyang technolo...
  • 30 篇 stanford univers...
  • 29 篇 zhejiang univers...
  • 27 篇 alibaba grp peop...
  • 26 篇 gaoling school o...
  • 26 篇 carnegie mellon ...
  • 25 篇 harbin institute...
  • 25 篇 peking universit...
  • 25 篇 natl univ singap...
  • 24 篇 allen inst artif...
  • 23 篇 the chinese univ...

Author

  • 42 篇 neubig graham
  • 39 篇 zhou guodong
  • 39 篇 smith noah a.
  • 36 篇 liu yang
  • 36 篇 lapata mirella
  • 34 篇 sun maosong
  • 32 篇 zhang min
  • 30 篇 liu qun
  • 30 篇 hovy eduard
  • 29 篇 zhao jun
  • 27 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 gurevych iryna
  • 25 篇 vulic ivan
  • 22 篇 huang xuanjing
  • 21 篇 chang kai-wei
  • 21 篇 liu kang
  • 21 篇 zhang yue
  • 21 篇 zhang qi
  • 20 篇 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: Any Field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records in total; showing results 191-200
RevMUX: Data Multiplexing with Reversible Adapters for Efficient LLM Batch Inference
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Yige; Guo, Xu; Zeng, Zhiwei; Miao, Chunyan (Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly, Singapore; College of Computing and Data Science, Nanyang Technological University, Singapore)
Large language models (LLMs) have brought a great breakthrough to the natural language processing (NLP) community, while leading to the challenge of handling concurrent customer queries due to their high throughput deman...
Scaling Laws Across Model Architectures: A Comparative Analysis of Dense and MoE Models in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Siqi; Chen, Zhengyu; Li, Bei; He, Keqing; Zhang, Min; Wang, Jingang (Meituan Inc., China; The University of Hong Kong, Hong Kong)
The scaling of large language models (LLMs) is a critical research area for the efficiency and effectiveness of model training and deployment. Our work investigates the transferability and discrepancies of scaling law...
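As general background for the scaling-law topic of the record above (this is not a result from the paper, whose abstract is truncated here): empirical scaling laws are often modeled as a power law, L(N) = a · N^(-α), and α is typically estimated by a linear fit in log-log space. A minimal sketch with made-up constants:

```python
import numpy as np

def fit_power_law(params, losses):
    """Fit log L = log a - alpha * log N by least squares; return (a, alpha)."""
    slope, intercept = np.polyfit(np.log(params), np.log(losses), 1)
    return np.exp(intercept), -slope

# Synthetic loss curve following L(N) = 400 * N^(-0.076).
# Both constants are invented for illustration only.
N = np.array([1e7, 1e8, 1e9, 1e10, 1e11])
L = 400.0 * N ** -0.076

a, alpha = fit_power_law(N, L)
print(f"{a:.0f} {alpha:.3f}")  # prints: 400 0.076
```

Comparing dense and MoE models in this framework amounts to fitting such a curve per architecture family and comparing the recovered exponents.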
Do Large Language Models Know How Much They Know?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Prato, Gabriele; Huang, Jerry; Parthasarathi, Prasanna; Sodhani, Shagun; Chandar, Sarath (Chandar Research Lab, Canada; Mila - Quebec AI Institute, Canada; Université de Montréal, Canada; Meta FAIR, United States; Polytechnique Montréal, Canada)
Large language models (LLMs) have emerged as highly capable systems and are increasingly being integrated into various uses. Nevertheless, the rapid advancement in their deployment trails a comprehensive understanding...
CoBa: Convergence Balancer for Multitask Finetuning of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gong, Zi; Yu, Hang; Liao, Cong; Liu, Bingchang; Chen, Chaoyu; Li, Jianguo (Ant Group, China)
Multi-task learning (MTL) benefits the finetuning of large language models (LLMs) by providing a single model with improved performance and generalization ability across tasks, presenting a resource-efficient alternat...
Predicting Rewards Alongside Tokens: Non-disruptive Parameter Insertion for Efficient Inference Intervention in Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yuan, Chenhan; Huang, Fei; Peng, Ru; Lu, Keming; Yu, Bowen; Zhou, Chang; Zhou, Jingren (The University of Manchester, Manchester, United Kingdom; Alibaba Group, Hangzhou, China)
Transformer-based large language models (LLMs) exhibit limitations such as generating unsafe responses, unreliable reasoning, etc. Existing inference intervention approaches attempt to mitigate these issues by finetun...
Null-Shot Prompting: Rethinking Prompting Large Language Models With Hallucination
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Taveekitworachai, Pittawat; Abdullah, Febri; Thawonmas, Ruck (Graduate School, Ritsumeikan University, Ibaraki, Osaka, Japan; College of Information Science and Engineering, Ritsumeikan University, Ibaraki, Osaka, Japan)
This paper investigates an interesting phenomenon where we observe performance increases in large language models (LLMs) when providing a prompt that causes and exploits hallucination. We propose null-shot prompting, ...
CHIQ: Contextual History Enhancement for Improving Query Rewriting in Conversational Search
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mo, Fengran; Ghaddar, Abbas; Mao, Kelong; Rezagholizadeh, Mehdi; Chen, Boxing; Liu, Qun; Nie, Jian-Yun (DIRO, Université de Montréal, QC, Canada; Huawei Noah's Ark Lab, Montreal Research Center, Canada; Renmin University of China, China)
In this paper, we study how open-source large language models (LLMs) can be effectively deployed for improving query rewriting in conversational search, especially for ambiguous queries. We introduce CHIQ, a two-step ...
What Are the Odds? Language Models Are Capable of Probabilistic Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Paruchuri, Akshay; Garrison, Jake; Liao, Shun; Hernandez, John; Sunshine, Jacob; Althoff, Tim; Liu, Xin; McDuff, Daniel (Google, United States)
Language models (LMs) are capable of remarkably complex linguistic tasks; however, numerical reasoning is an area in which they frequently struggle. An important but rarely evaluated form of reasoning is understanding p...
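For readers skimming the record above, the title's "odds" refers to the standard odds/probability relationship; this is textbook background, not a method from the paper:

```python
# Background: converting between probability p and odds o.
# odds = p / (1 - p); probability = o / (1 + o).
def probability_to_odds(p: float) -> float:
    """Odds in favor of an event with probability p (0 <= p < 1)."""
    return p / (1 - p)

def odds_to_probability(o: float) -> float:
    """Probability corresponding to odds o (o >= 0)."""
    return o / (1 + o)

print(probability_to_odds(0.75))   # prints: 3.0  (i.e. odds of 3 to 1)
print(odds_to_probability(3.0))    # prints: 0.75
```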
Puzzle Solving using Reasoning of Large Language Models: A Survey
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Giadikiaroglou, Panagiotis; Lymperaiou, Maria; Filandrianos, Giorgos; Stamou, Giorgos (Artificial Intelligence and Learning Systems Laboratory, School of Electrical and Computer Engineering, National Technical University of Athens, Greece)
Exploring the capabilities of Large Language Models (LLMs) in puzzle solving unveils critical insights into their potential and challenges in AI, marking a significant step towards understanding their applicability in...
Mathador-LM: A Dynamic Benchmark for Mathematical Reasoning on Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kurtic, Eldar; Moeini, Amir; Alistarh, Dan (ISTA & Neural Magic Inc., Austria; ISTA, Austria)
We introduce Mathador-LM, a new benchmark for evaluating mathematical reasoning in large language models (LLMs), combining ruleset interpretation, planning, and problem-solving. This benchmark is inspired by the M...
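The benchmark above is named after the Mathador arithmetic game, where a set of base numbers must be combined with the four basic operations to reach a target. As a hedged illustration of that game concept only (the numbers, scoring, and rules of the actual benchmark differ and are not shown in the truncated abstract), a brute-force solver might look like:

```python
# Illustrative Mathador-style puzzle solver: combine base numbers with
# +, -, *, / (exact integer division only) to reach a target value.
# The example numbers are invented; they are not from Mathador-LM.
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a // b if b != 0 and a % b == 0 else None,
}

def solve(numbers, target):
    """Return one expression string reaching `target`, or None."""
    def search(items):  # items: list of (value, expression) pairs
        for value, expr in items:
            if value == target:
                return expr
        for i in range(len(items)):
            for j in range(len(items)):
                if i == j:
                    continue
                (a, ea), (b, eb) = items[i], items[j]
                rest = [items[k] for k in range(len(items)) if k not in (i, j)]
                for sym, op in OPS.items():
                    r = op(a, b)
                    if r is None or r < 0:  # keep only non-negative integers
                        continue
                    found = search(rest + [(r, f"({ea} {sym} {eb})")])
                    if found:
                        return found
        return None
    return search([(n, str(n)) for n in numbers])

print(solve([3, 5, 2], 16))   # prints: (2 * (3 + 5))
print(solve([2, 3], 100))     # prints: None
```

A "dynamic" benchmark in this spirit can regenerate fresh number/target instances on demand, which mitigates test-set contamination for LLM evaluation.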