Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (conferrable in...
    • 144 Electronic Science and Technology (conferrable in...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (conferrable in Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Management...
    • 760 Management Science and Engineering (conferrable in...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (conferrable in Medicine...
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Languages

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search condition: "Any field = Conference on empirical methods in natural language processing"
15,363 records in total; showing results 901-910
Are Large Language Models In-Context Personalized Summarizers? Get an iCOPERNICUS Test Done!
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Patel, Divya; Patel, Pathik; Chander, Ankush; Dasgupta, Sourish; Chakraborty, Tanmoy (KDM Lab, Dhirubhai Ambani Institute of Information & Communication Technology, India; Indian Institute of Technology Delhi, India)
Large Language Models (LLMs) have succeeded considerably in In-Context-Learning (ICL) based summarization. However, saliency is subject to the users' specific preference histories. Hence, we need reliable In-Conte...
On Diversified Preferences of Large Language Model Alignment
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zeng, Dun; Dai, Yong; Cheng, Pengyu; Wang, Longyue; Hu, Tianhao; Chen, Wanshun; Du, Nan; Xu, Zenglin (Tencent AI Lab, China; HiThink Research, Singapore; Alibaba Group, China; Peng Cheng Lab, China)
Aligning large language models (LLMs) with human preferences has been recognized as the key to improving LLMs' interaction ***, in this pluralistic world, human preferences can be diversified due to annotators'...
Evaluating the Instruction-Following Robustness of Large Language Models to Prompt Injection
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Zekun; Peng, Baolin; He, Pengcheng; Yan, Xifeng (University of California, Santa Barbara, United States; Microsoft Research, Redmond, United States; Zoom, United States)
Large Language Models (LLMs) have demonstrated exceptional proficiency in instruction-following, making them increasingly integral to various applications. However, this capability introduces the risk of prompt inject...
In-Context Compositional Generalization for Large Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Chuanhao; Jing, Chenchen; Li, Zhen; Zhai, Mingliang; Wu, Yuwei; Jia, Yunde (Beijing Key Laboratory of Intelligent Information Technology, School of Computer Science & Technology, Beijing Institute of Technology, China; Guangdong Laboratory of Machine Perception and Intelligent Computing, Shenzhen MSU-BIT University, China; School of Computer Science, Zhejiang University, Hangzhou, China)
Recent work has revealed that in-context learning for large language models exhibits compositional generalization capacity, which can be enhanced by selecting in-context demonstrations similar to test cases to provide...
Symbolic Working Memory Enhances Language Models for Complex Rule Application
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Siyuan; Wei, Zhongyu; Choi, Yejin; Ren, Xiang (University of Southern California, United States; Fudan University, China; University of Washington, United States; Allen Institute for Artificial Intelligence, United States)
Large Language Models (LLMs) have shown remarkable reasoning performance but struggle with multi-step deductive reasoning involving a series of rule application steps, especially when rules are presented non-sequentia...
Formality is Favored: Unraveling the Learning Preferences of Large Language Models on Data with Conflicting Knowledge
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Jiahuan; Cao, Yiqing; Huang, Shujian; Chen, Jiajun (National Key Laboratory for Novel Software Technology, Nanjing University, China)
Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting inf...
How do Large Language Models Learn In-Context? Query and Key Matrices of In-Context Heads are Two Towers for Metric Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yu, Zeping; Ananiadou, Sophia (Department of Computer Science, National Centre for Text Mining, The University of Manchester, United Kingdom)
We investigate the mechanism of in-context learning (ICL) on sentence classification tasks with semantically-unrelated labels ("foo"/"bar"). We find intervening in only 1% heads (named "in-con...
Relation labeling in product knowledge graphs with large language models for e-commerce
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, Vol. 15, Issue 12, pp. 5725-5743
Authors: Chen, Jiao; Ma, Luyi; Li, Xiaohan; Xu, Jianpeng; Cho, Jason H. D.; Nag, Kaushiki; Korpeoglu, Evren; Kumar, Sushant; Achan, Kannan (Walmart Global Tech, Personalizat Team, Sunnyvale, CA 94086, USA)
Product Knowledge Graphs (PKGs) play a crucial role in enhancing e-commerce system performance by providing structured information about entities and their relationships, such as complementary or substitutable relatio...
CompoundPiece: Evaluating and Improving Decompounding Performance of Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Minixhofer, Benjamin; Pfeiffer, Jonas; Vulic, Ivan (Univ Cambridge, Cambridge, England; Google DeepMind, London, England)
While many languages possess processes of joining two or more words to create compound words, previous studies have been typically limited only to languages with excessively productive compound formation (e.g., German...
ShadowLLM: Predictor-based Contextual Sparsity for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Akhauri, Yash; AbouElhamayed, Ahmed F.; Dotzel, Jordan; Zhang, Zhiru; Rush, Alexander M.; Huda, Safeen; Abdelfattah, Mohamed S. (Cornell University, United States; Google, United States)
The high power consumption and latency-sensitive deployments of large language models (LLMs) have motivated efficiency techniques like quantization and *** sparsity, where the sparsity pattern is input-dependent, is c...