Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer Science,...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Man...
    • 758 Management Science and Engineering (may...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 112 Public Health and Preventive Medi...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 741 篇 semantics
  • 680 篇 machine learning
  • 609 篇 deep learning
  • 520 篇 natural language...
  • 347 篇 computational mo...
  • 336 篇 training
  • 333 篇 accuracy
  • 331 篇 sentiment analys...
  • 329 篇 large language m...
  • 320 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 261 篇 speech recogniti...
  • 252 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 27 篇 language technol...
  • 27 篇 peking universit...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,258 records; showing results 581–590
Tending Towards Stability: Convergence Challenges in Small Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Martinez, Richard Diehl; Lesci, Pietro; Buttery, Paula (University of Cambridge, United Kingdom)
Increasing the number of parameters in language models is a common strategy to enhance their performance. However, smaller language models remain valuable due to their lower operational costs. Despite their advantages...

Learning the Visualness of Text Using Large Vision-Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Verma, Gaurav; Rossi, Ryan A.; Tensmeyer, Christopher; Gu, Jiuxiang; Nenkova, Ani (Georgia Inst Technol, Atlanta, GA 30332, USA; Adobe Res, San Jose, CA 95110, USA)
Visual text evokes an image in a person's mind, while non-visual text fails to do so. A method to automatically detect visualness in text will enable text-to-image retrieval and generation models to augment text w...

Do LLMs Overcome Shortcut Learning? An Evaluation of Shortcut Challenges in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yuan, Yu; Zhao, Lili; Zhang, Kai; Zheng, Guangting; Liu, Qi (State Key Lab of Cognitive Intelligence, University of Science and Technology of China, China; School of Computer Science and Technology, University of Science and Technology of China, China)
Large Language Models (LLMs) have shown remarkable capabilities in various natural language processing tasks. However, LLMs may rely on dataset biases as shortcuts for prediction, which can significantly impair their ...

Consistent Bidirectional Language Modelling: Expressive Power and Representational Conciseness
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shopov, Georgi; Gerdjikov, Stefan (IICT, Bulgarian Academy of Sciences, Bulgaria; FMI, Sofia University, Bulgaria)
The inability to utilise future contexts and the pre-determined left-to-right generation order are major limitations of unidirectional language models. Bidirectionality has been introduced to address those deficiencie...

Enhancing Tool Retrieval with Iterative Feedback from Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xu, Qiancheng; Li, Yongqi; Xia, Heming; Li, Wenjie (Department of Computing, The Hong Kong Polytechnic University, China)
Tool learning aims to enhance and expand large language models' (LLMs) capabilities with external tools, which has gained significant attention ... methods have shown that LLMs can effectively handle a certain amo...

METAREFLECTION: Learning Instructions for Language Agents using Past Reflections
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gupta, Priyanshu; Kirtania, Shashank; Singha, Ananya; Gulwani, Sumit; Radhakrishna, Arjun; Shi, Sherry; Soares, Gustavo (Microsoft, United States)
The popularity of Large Language Models (LLMs) has unleashed a new age of Language Agents for solving a diverse range of tasks. While contemporary frontier LLMs are capable enough to power reasonably good language ag...

Solving for X and Beyond: Can Large Language Models Solve Complex Math Problems with More-Than-Two Unknowns?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kao, Kuei-Chun; Wang, Ruochen; Hsieh, Cho-Jui (Department of Computer Science, University of California, Los Angeles, United States)
Large Language Models (LLMs) have demonstrated remarkable performance in solving math problems, a hallmark of human intelligence. Despite high success rates on current benchmarks, however, these often feature simple pr...

Dynamic Rewarding with Prompt Optimization Enables Tuning-free Self-Alignment of Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Singla, Somanshu; Wang, Zhen; Liu, Tianyang; Ashfaq, Abdullah; Hu, Zhiting; Xing, Eric P. (UC San Diego, United States; MBZUAI, United Arab Emirates; CMU, United States)
Aligning Large Language Models (LLMs) traditionally relies on costly training and human preference annotations. Self-alignment aims to reduce these expenses by aligning models by themselves. To further minimize the co...

Improving Zero-shot LLM Re-Ranker with Risk Minimization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yuan, Xiaowei; Yang, Zhao; Wang, Yequan; Zhao, Jun; Liu, Kang (The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Beijing Academy of Artificial Intelligence, Beijing, China)
In the Retrieval-Augmented Generation (RAG) system, advanced Large Language Models (LLMs) have emerged as effective Query Likelihood Models (QLMs) in an unsupervised way, which re-rank documents based on the probabili...

From Local Concepts to Universals: Evaluating the Multicultural Understanding of Vision-Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bhatia, Mehar; Ravi, Sahithya; Chinchure, Aditya; Hwang, Eunjeong; Shwartz, Vered (University of British Columbia; Vector Institute for AI, Canada)
Despite recent advancements in vision-language models, their performance remains suboptimal on images from non-western cultures, due to underrepresentation in training datasets. Various benchmarks have been proposed t...