
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Holdings

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Biological Engineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (conferrable...
    • 144 Electronic Science and Technology (conferrable...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (conferrable in Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manag...
    • 760 Management Science and Engineering (conf...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (conferrable in Medicine...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,363 records found; showing 871-880
SELF-EXPLORE: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hwang, Hyeonbin; Kim, Doyoung; Kim, Seungone; Ye, Seonghyeon; Seo, Minjoon (KAIST AI, Republic of Korea; Carnegie Mellon University, United States)
Training on large amounts of rationales (i.e., CoT Fine-tuning) has been found effective for improving mathematical reasoning of large language models (LLMs). However, acquiring human-authored solutions or augmenting ...
SCIAGENT: Tool-augmented Language Models for Scientific Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ma, Yubo; Gou, Zhibin; Hao, Junheng; Xu, Ruochen; Wang, Shuohang; Pan, Liangming; Yang, Yujiu; Cao, Yixin; Sun, Aixin (Nanyang Technological University, Singapore; Tsinghua University, China; Microsoft, United States; University of Arizona, United States; Fudan University, China)
Scientific reasoning poses an excessive challenge for even the most advanced Large Language Models (LLMs). To make this task more practical and solvable for LLMs, we introduce a new task setting named tool-augmented s...
Collaborative Performance Prediction for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Qiyuan; Lyu, Fuyuan; Liu, Xue; Ma, Chen (City University of Hong Kong, Hong Kong; MILA, Canada; McGill University, Canada)
Comprehensively understanding and accurately predicting the performance of large language models across diverse downstream tasks has emerged as a pivotal challenge in NLP research. The pioneering scaling law on downst...
On the Reliability of Psychological Scales on Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Jen-Tse; Jiao, Wenxiang; Lam, Man Ho; Li, Eric John; Wang, Wenxuan; Lyu, Michael R. (The Chinese University of Hong Kong, Hong Kong; Tencent AI Lab, China)
Recent research has focused on examining Large Language Models' (LLMs) characteristics from a psychological standpoint, acknowledging the necessity of understanding their behavioral characteristics. The administra...
Learn Beyond The Answer: Training Language Models with Reflection for Mathematical Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Zhihan; Ge, Tao; Liang, Zhenwen; Yu, Wenhao; Yu, Dian; Jia, Mengzhao; Yu, Dong; Jiang, Meng (University of Notre Dame, United States; Tencent AI Lab, Seattle, United States)
Supervised fine-tuning enhances the problem-solving abilities of language models across various mathematical reasoning tasks. To maximize such benefits, existing research focuses on broadening the training set with va...
Kiss up, Kick down: Exploring Behavioral Changes in Multi-modal Large Language Models with Assigned Visual Personas
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Sun, Seungjong; Lee, Eungu; Baek, Seo Yeon; Hwang, Seunghyun; Lee, Wonbyung; Nan, Dongyan; Jansen, Bernard J.; Kim, Jang Hyun (Department of Human-Artificial Intelligence Interaction, Sungkyunkwan University, Republic of Korea; College of Computing and Informatics, Sungkyunkwan University, Republic of Korea; Qatar Computing Research Institute, Hamad Bin Khalifa University, Qatar)
This study is the first to explore whether multi-modal large language models (LLMs) can align their behaviors with visual personas, addressing a significant gap in the literature that predominantly focuses on text-bas...
Extrinsic Evaluation of Cultural Competence in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bhatt, Shaily; Diaz, Fernando (Carnegie Mellon University, United States)
Productive interactions between diverse users and language technologies require outputs from the latter to be culturally relevant and sensitive. Prior works have evaluated models' knowledge of cultural norms, valu...
Is GPT-4V (ision) All You Need for Automating Academic Data Visualization? Exploring Vision-Language Models' Capability in Reproducing Academic Charts
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Zhehao; Ma, Weicheng; Vosoughi, Soroush (Department of Computer Science, Dartmouth College, United States)
While effective data visualization is crucial to present complex information in academic research, its creation demands significant expertise in both data management and graphic design. We explore the potential of using Visi...
A Multi-Perspective Analysis of Memorization in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Bowen; Han, Namgi; Miyao, Yusuke (Department of Computer Science, The University of Tokyo, Japan)
Large Language Models (LLMs) can generate the same sequences contained in the pre-training corpora, known as memorization. Previous research studied it at a macro level, leaving micro yet important questions under-explor...
Democratizing Large Language Models via Personalized Parameter-Efficient Fine-tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tan, Zhaoxuan; Zeng, Qingkai; Tian, Yijun; Liu, Zheyuan; Yin, Bing; Jiang, Meng (University of Notre Dame, United States; *** Inc., United States)
Personalization in large language models (LLMs) is increasingly important, aiming to align the LLMs' interactions, content, and recommendations with individual user preferences. Recent advances have highlighted ef...