Refine Search Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Holdings

  • 7,706 electronic documents
  • 1 print holding

Subject Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology...
    • 74 Biomedical Engineering...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Management...
    • 467 Management Science and Engineering...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art Studies
  • 1 Military Science

Topics

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 31 university of ch...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 gaoling school o...
  • 26 carnegie mellon ...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Authors

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 21 zhang qi
  • 20 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: Any Field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing results 301-310
Efficient Sequential Decision Making with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Dingyang; Zhang, Qi; Zhu, Yinglun (University of South Carolina, United States; University of California, Riverside, United States)
This paper focuses on extending the success of large language models (LLMs) to sequential decision making. Existing efforts either (i) re-train or finetune LLMs for decision making, or (ii) design prompts for pretrain...

Can Large Language Models Enhance Predictions of Disease Progression? Investigating Through Disease Network Link Prediction
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Haohui; Naseem, Usman (Faculty of Engineering, The University of Sydney, Sydney, Australia; School of Computing, Macquarie University, Sydney, Australia)
Large Language Models (LLMs) have made significant strides in various tasks, yet their effectiveness in predicting disease progression remains relatively unexplored. To fill this gap, we use LLMs and employ advanced g...

First Heuristic Then Rational: Dynamic Use of Heuristics in Language Model Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Aoki, Yoichi; Kudo, Keito; Kuribayashi, Tatsuki; Sone, Shusaku; Taniguchi, Masaya; Sakaguchi, Keisuke; Inui, Kentaro (Tohoku University, Japan; RIKEN, Japan; MBZUAI, United Arab Emirates)
Multi-step reasoning instruction, such as chain-of-thought prompting, is widely adopted to elicit better language model (LM) performance. We report on the systematic strategy that LMs employ in such a multi-step re...

Enhancing Training Data Attribution for Large Language Models with Fitting Error Consideration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, Kangxi; Pang, Liang; Shen, Huawei; Cheng, Xueqi (Key Laboratory of AI Safety, Institute of Computing Technology, Chinese Academy of Sciences (CAS), China; University of Chinese Academy of Sciences, China)
The black-box nature of large language models (LLMs) poses challenges in interpreting results, impacting issues such as data intellectual property protection and hallucination tracing. Training data attribution (TDA)...

FanLoRA: Fantastic LoRAs and Where to Find Them in Large Language Model Fine-tuning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tian, Aaron Xuxiang; Zhao, Yi; Yin, Congrui; Zhu, Wei; Tian, Xing; Ge, Yi (Carnegie Mellon University, Pittsburgh, United States; University of Pennsylvania, United States; University of Minnesota, United States; University of Hong Kong, Hong Kong)
Full-parameter fine-tuning is computationally prohibitive for large language models (LLMs), making parameter-efficient fine-tuning (PEFT) methods like low-rank adaptation (LoRA) increasingly popular. However, LoRA and...

Improving Spoken Language Modeling with Phoneme Classification: A Simple Fine-tuning Approach
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Poli, Maxime; Chemla, Emmanuel; Dupoux, Emmanuel (ENS, PSL, EHESS, CNRS, France; Meta FAIR, United States)
Recent progress in Spoken Language Modeling has shown that learning language directly from speech is feasible. Generating speech through a pipeline that operates at the text level typically loses nuances, intonations,...

CliMedBench: A Large-Scale Chinese Benchmark for Evaluating Medical Large Language Models in Clinical Scenarios
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ouyang, Zetian; Qiu, Yishuai; Wang, Linlin; de Melo, Gerard; Zhang, Ya; Wang, Yanfeng; He, Liang (East China Normal University, China; Hasso Plattner Institute, Germany; Shanghai Jiao Tong University, China)
With the proliferation of Large Language Models (LLMs) in diverse domains, there is a particular need for unified evaluation standards in Chinese clinical medical scenarios, where models need to be examined very thoro...

BEEAR: Embedding-based Adversarial Removal of Safety Backdoors in Instruction-tuned Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zeng, Yi; Sun, Weiyu; Huynh, Tran Ngoc; Song, Dawn; Li, Bo; Jia, Ruoxi (Virginia Tech, United States; Georgia Tech, United States; University of California, Berkeley, United States; University of Chicago, United States)
Safety backdoor attacks in large language models (LLMs) enable the stealthy triggering of unsafe behaviors while evading detection during normal interactions. The high dimensionality of potential triggers in the token...

STAR: SocioTechnical Approach to Red Teaming Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Weidinger, Laura; Mellor, John; Pegueroles, Bernat Guillén; Marchal, Nahema; Kumar, Ravin; Lum, Kristian; Akbulut, Canfer; Diaz, Mark; Bergman, Stevie; Rodriguez, Mikel; Rieser, Verena; Isaac, William (Google DeepMind, United Kingdom; Google, United States; Google Labs, United States)
This research introduces STAR, a sociotechnical framework that improves on current best practices for red teaming safety of large language models. STAR makes two key contributions: it enhances steerability by generati...

Scaling Laws for Linear Complexity Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shen, Xuyang; Li, Dong; Leng, Ruitao; Qin, Zhen; Sun, Weigao; Zhong, Yiran (OpenNLPLab; Australian National University, Australia; TapTap)
The interest in linear complexity models for large language models is on the rise, although their scaling capacity remains uncertain. In this study, we present the scaling laws for linear complexity language models to...