
Refine Search Results

Document Type

  • 14,463 conference papers
  • 654 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,258 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,944 papers: Engineering
    • 10,283 papers: Computer Science and Technology...
    • 5,408 papers: Software Engineering
    • 1,463 papers: Information and Communication Engineering
    • 954 papers: Electrical Engineering
    • 880 papers: Control Science and Engineering
    • 446 papers: Biological Engineering
    • 221 papers: Cyberspace Security
    • 220 papers: Chemical Engineering and Technology
    • 186 papers: Mechanical Engineering
    • 174 papers: Biomedical Engineering (may confer...
    • 142 papers: Electronic Science and Technology (may...
    • 101 papers: Instrument Science and Technology
    • 99 papers: Safety Science and Engineering
  • 2,473 papers: Science
    • 1,150 papers: Mathematics
    • 649 papers: Physics
    • 518 papers: Biology
    • 391 papers: Statistics (may confer Science,...
    • 241 papers: Systems Science
    • 232 papers: Chemistry
  • 2,416 papers: Management
    • 1,748 papers: Library, Information and Archives Manag...
    • 757 papers: Management Science and Engineering (may...
    • 239 papers: Business Administration
    • 104 papers: Public Administration
  • 1,761 papers: Literature
    • 1,709 papers: Foreign Languages and Literature
    • 184 papers: Chinese Language and Literature
  • 510 papers: Medicine
    • 299 papers: Clinical Medicine
    • 283 papers: Basic Medicine (may confer Medicine...
    • 111 papers: Public Health and Preventive Med...
  • 276 papers: Law
    • 248 papers: Sociology
  • 237 papers: Education
    • 224 papers: Education
  • 100 papers: Agriculture
  • 96 papers: Economics
  • 9 papers: Art
  • 7 papers: Philosophy
  • 4 papers: Military Science

Topics

  • 3,535 papers: natural language...
  • 1,768 papers: natural language...
  • 952 papers: computational li...
  • 740 papers: semantics
  • 681 papers: machine learning
  • 609 papers: deep learning
  • 520 papers: natural language...
  • 347 papers: computational mo...
  • 338 papers: training
  • 333 papers: accuracy
  • 331 papers: sentiment analys...
  • 329 papers: large language m...
  • 321 papers: feature extracti...
  • 311 papers: data mining
  • 290 papers: speech processin...
  • 260 papers: speech recogniti...
  • 252 papers: transformers
  • 235 papers: neural networks
  • 217 papers: iterative method...
  • 212 papers: support vector m...

Institutions

  • 85 papers: carnegie mellon ...
  • 51 papers: university of ch...
  • 45 papers: tsinghua univers...
  • 45 papers: carnegie mellon ...
  • 43 papers: zhejiang univers...
  • 43 papers: national univers...
  • 38 papers: nanyang technolo...
  • 36 papers: university of wa...
  • 35 papers: univ chinese aca...
  • 34 papers: university of sc...
  • 34 papers: carnegie mellon ...
  • 33 papers: stanford univers...
  • 32 papers: gaoling school o...
  • 32 papers: school of artifi...
  • 32 papers: alibaba grp peop...
  • 29 papers: tsinghua univ de...
  • 28 papers: harbin institute...
  • 27 papers: language technol...
  • 27 papers: peking universit...
  • 26 papers: microsoft resear...

Authors

  • 55 papers: zhou guodong
  • 50 papers: neubig graham
  • 46 papers: liu yang
  • 39 papers: sun maosong
  • 36 papers: zhang min
  • 34 papers: liu qun
  • 33 papers: smith noah a.
  • 28 papers: schütze hinrich
  • 27 papers: liu zhiyuan
  • 27 papers: lapata mirella
  • 26 papers: wen ji-rong
  • 24 papers: chang kai-wei
  • 23 papers: zhou jie
  • 23 papers: yang diyi
  • 23 papers: zhao hai
  • 23 papers: zhao wayne xin
  • 21 papers: chua tat-seng
  • 20 papers: dredze mark
  • 18 papers: biemann chris
  • 18 papers: fung pascale

Language

  • 14,662 papers: English
  • 482 papers: Other
  • 106 papers: Chinese
  • 18 papers: French
  • 15 papers: Turkish
  • 2 papers: Spanish
  • 2 papers: Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,259 records. Showing 721-730.
Efficient Sequential Decision Making with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Dingyang; Zhang, Qi; Zhu, Yinglun (University of South Carolina, United States; University of California, Riverside, United States)
This paper focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or finetune LLMs for decision making, or (ii) design prompts for pretrain...
First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Aoki, Yoichi; Kudo, Keito; Kuribayashi, Tatsuki; Sone, Shusaku; Taniguchi, Masaya; Sakaguchi, Keisuke; Inui, Kentaro (Tohoku University, Japan; RIKEN, Japan; MBZUAI, United Arab Emirates)
Multi-step reasoning instruction, such as chain-of-thought prompting, is widely adopted to explore better language model (LM) performance. We report on the systematic strategy that LMs employ in such a multi-step re...
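The chain-of-thought prompting this abstract refers to can be sketched as a simple prompt-construction step. The question text and helper name below are illustrative assumptions, not taken from the paper:

```python
# Minimal sketch of chain-of-thought (CoT) prompting: append an instruction
# that elicits intermediate reasoning steps before the final answer.
# The wording and example question are illustrative, not from the paper.

def cot_prompt(question: str) -> str:
    return f"Q: {question}\nA: Let's think step by step."

prompt = cot_prompt("A shop sells pens at 3 dollars each. How much do 4 pens cost?")
print(prompt)
```

Sent to an instruction-tuned LM, a prompt like this typically elicits intermediate steps before the final answer, which is the multi-step behavior the paper analyzes.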
BioT5: Enriching Cross-modal Integration in Biology with Chemical Knowledge and Natural Language Associations
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Pei, Qizhi; Zhang, Wei; Zhu, Jinhua; Wu, Kehan; Gao, Kaiyuan; Wu, Lijun; Xia, Yingce; Yan, Rui (Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China; Univ Sci & Technol China, Hefei, Peoples R China; Huazhong Univ Sci & Technol, Wuhan, Peoples R China; Microsoft Res, Redmond, WA 98052, USA; Minist Educ Engn Res Ctr Next Generat Intelligent Search & Re, Beijing, Peoples R China; Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China)
Recent advancements in biological research leverage the integration of molecules, proteins, and natural language to enhance drug discovery. However, current models exhibit several limitations, such as the generation o...
Can Large Language Models Enhance Predictions of Disease Progression? Investigating Through Disease Network Link Prediction
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Haohui; Naseem, Usman (Faculty of Engineering, The University of Sydney, Sydney, Australia; School of Computing, Macquarie University, Sydney, Australia)
Large language models (LLMs) have made significant strides in various tasks, yet their effectiveness in predicting disease progression remains relatively unexplored. To fill this gap, we use LLMs and employ advanced g...
EconLogicQA: A Question-Answering Benchmark for Evaluating Large Language Models in Economic Sequential Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Quan, Yinzhu; Liu, Zefang (Georgia Institute of Technology, Atlanta, GA 30332, United States)
In this paper, we introduce EconLogicQA, a rigorous benchmark designed to assess the sequential reasoning capabilities of large language models (LLMs) within the intricate realms of economics, business, and supply cha...
STAR: SocioTechnical Approach to Red Teaming Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Weidinger, Laura; Mellor, John; Pegueroles, Bernat Guillén; Marchal, Nahema; Kumar, Ravin; Lum, Kristian; Akbulut, Canfer; Diaz, Mark; Bergman, Stevie; Rodriguez, Mikel; Rieser, Verena; Isaac, William (Google DeepMind, United Kingdom; Google, United States; Google Labs, United States)
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming safety of large language models. STAR makes two key contributions: it enhances steerability by generati...
CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ouyang, Zetian; Qiu, Yishuai; Wang, Linlin; de Melo, Gerard; Zhang, Ya; Wang, Yanfeng; He, Liang (East China Normal University, China; Hasso Plattner Institute, Germany; Shanghai Jiao Tong University, China)
With the proliferation of large language models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in Chinese clinical medical scenarios, where models need to be examined very thoro...
BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zeng, Yi; Sun, Weiyu; Huynh, Tran Ngoc; Song, Dawn; Li, Bo; Jia, Ruoxi (Virginia Tech, United States; Georgia Tech, United States; University of California, Berkeley, United States; University of Chicago, United States)
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions. The high dimensionality of potential triggers in the token...
Encoding Spreadsheets for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dong, Haoyu; Zhao, Jianbo; Tian, Yuzhang; Xiong, Junyu; Zhou, Mengyu; Lin, Yun; Cambronero, José; He, Yeye; Han, Shi; Zhang, Dongmei (Microsoft Corporation, United States)
Spreadsheets are characterized by their extensive two-dimensional grids, flexible layouts, and varied formatting options, which pose significant challenges for large language models (LLMs). In response, we introduce S...
FanLoRA: Fantastic LoRAs and Where to Find Them in Large Language Model Fine-tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tian, Aaron Xuxiang; Zhao, Yi; Yin, Congrui; Zhu, Wei; Tian, Xing; Ge, Yi (Carnegie Mellon University, Pittsburgh, United States; University of Pennsylvania, United States; University of Minnesota, United States; University of Hong Kong, Hong Kong)
Full-parameter fine-tuning is computationally prohibitive for large language models (LLMs), making parameter-efficient fine-tuning (PEFT) methods like low-rank adaptation (LoRA) increasingly popular. However, LoRA and...
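The low-rank adaptation (LoRA) named in this abstract can be sketched as follows. This is a generic illustration of LoRA's parameter saving under assumed dimensions (d_in, d_out, rank r, scale alpha), not the FanLoRA method itself:

```python
import numpy as np

# Generic LoRA sketch: instead of updating a full weight matrix W
# (d_out x d_in), train two small factors B (d_out x r) and A (r x d_in),
# so the adapted layer computes W x + (alpha / r) * B A x.
# Dimensions and names are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 16, 8, 4, 8

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-init

def lora_forward(x):
    # Base output plus low-rank update; B is zero at init, so the
    # adapted layer initially matches the pretrained one exactly.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
assert np.allclose(lora_forward(x), W @ x)  # unchanged before training

# Trainable parameters: r * (d_in + d_out) instead of d_in * d_out.
print(r * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

Only A and B are updated during fine-tuning, which is what makes LoRA parameter-efficient relative to full-parameter fine-tuning.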