
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (degree conferrable...
    • 144 Electronic Science and Technology (degree conferrable...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (degree conferrable in Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manage...
    • 760 Management Science and Engineering (degree conferrable...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (degree conferrable in Medicine...
    • 113 Public Health and Preventive Medici...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agronomy
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: any field = "Conference on empirical methods in natural language processing"
15,363 records; showing results 1031-1040
LongAlign: A Recipe for Long Context Alignment of Large Language Models

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bai, Yushi; Lv, Xin; Zhang, Jiajie; He, Yuze; Qi, Ji; Hou, Lei; Tang, Jie; Dong, Yuxiao; Li, Juanzi (Tsinghua University, China; Zhipu.AI, China)
Extending large language models to effectively handle long contexts requires instruction fine-tuning on input sequences of similar length. To address this, we present LongAlign, a recipe of the instruction data, traini...
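The abstract is truncated before the recipe itself, so as a hedged illustration only: one generic ingredient of fine-tuning on "input sequences of similar length" is length-sorted batching, which keeps per-batch padding small. The sketch below is a minimal, hypothetical version of that idea, not LongAlign's actual method; all names are invented.

```python
def length_sorted_batches(samples, batch_size):
    """Group variable-length token sequences into batches of
    similar length so padding waste per batch stays small."""
    ordered = sorted(samples, key=len)
    return [ordered[i:i + batch_size]
            for i in range(0, len(ordered), batch_size)]

# Toy sequences of very different lengths (token IDs are dummies).
data = [[0] * n for n in (512, 8, 2048, 16, 1024, 4)]
batches = length_sorted_batches(data, batch_size=2)

# Within each batch, lengths are close, so padding to the batch
# maximum wastes little compute.
for batch in batches:
    longest = max(len(s) for s in batch)
    padded = [s + [0] * (longest - len(s)) for s in batch]
    assert all(len(p) == longest for p in padded)
```

Without the sort, a 4-token sample could land next to the 2048-token one and be padded 500x over.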
PALS: Personalized Active Learning for Subjective Tasks in NLP

Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kanclerz, Kamil; Karanowski, Konrad; Bielaniewicz, Julita; Gruza, Marcin; Milkowski, Piotr; Kocon, Jan; Kazienko, Przemyslaw (Wroclaw Univ Sci & Technol, Wroclaw, Poland)
For subjective NLP problems, such as classification of hate speech, aggression, or emotions, personalized solutions can be exploited. Then, the learned models infer about the perception of the content independently fo...
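The PALS algorithm itself is not described in this truncated abstract; as a hedged sketch of the generic building block behind active learning for annotation, the snippet below does plain uncertainty sampling, picking the unlabeled items a (hypothetical) per-annotator model is least sure about. It is an assumption-laden stand-in, not the paper's selection strategy.

```python
def select_for_annotation(probs, k):
    """Generic uncertainty sampling: pick the k items whose predicted
    probability is closest to 0.5 and route them to an annotator.

    probs: list of (item_index, predicted_probability) pairs.
    """
    ranked = sorted(probs, key=lambda item: abs(item[1] - 0.5))
    return [idx for idx, _ in ranked[:k]]

# Hypothetical scores from one annotator's personalized model
# on four unlabeled texts.
scores = [(0, 0.95), (1, 0.52), (2, 0.10), (3, 0.47)]
print(select_for_annotation(scores, k=2))  # -> [1, 3]
```

Items 1 and 3 sit nearest the decision boundary, so labeling them is expected to teach that annotator's model the most.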
EHRAgent: Code Empowers Large Language Models for Few-shot Complex Tabular Reasoning on Electronic Health Records

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shi, Wenqi; Xu, Ran; Zhuang, Yuchen; Yu, Yue; Zhang, Jieyu; Wu, Hang; Zhu, Yuanda; Ho, Joyce; Yang, Carl; Wang, May D. (Georgia Institute of Technology, United States; Emory University, United States; University of Washington, United States)
Clinicians often rely on data engineers to retrieve complex patient information from electronic health record (EHR) systems, a process that is both inefficient and time-consuming. We propose EHRAgent, a large language...
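The core idea named in the title, letting an LLM write code that runs against EHR tables, can be sketched without any model at all. Below, a hand-written string stands in for code an LLM might emit, executed over a toy in-memory table; the table schema, question, and snippet are all hypothetical, not from the paper.

```python
# Toy EHR table: one row per lab measurement (hypothetical schema).
labs = [
    {"patient": "p1", "test": "glucose", "value": 9.1},
    {"patient": "p1", "test": "glucose", "value": 7.4},
    {"patient": "p2", "test": "glucose", "value": 5.0},
]

# Stand-in for code an LLM might generate for the question
# "What is patient p1's highest glucose value?"
generated_code = (
    "answer = max(r['value'] for r in labs "
    "if r['patient'] == 'p1' and r['test'] == 'glucose')"
)

scope = {"labs": labs}
exec(generated_code, scope)   # run the generated snippet over the table
print(scope["answer"])        # -> 9.1
```

Executing generated code lets the model compose filters and aggregations that would be brittle to express as a single text answer; a real system would sandbox this step.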
By My Eyes: Grounding Multimodal Large Language Models with Sensor Data via Visual Prompting

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yoon, Hyungjun; Tolera, Biniyam Aschalew; Gong, Taesik; Lee, Kimin; Lee, Sung-Ju (KAIST, Republic of Korea; UNIST, Republic of Korea)
Large language models (LLMs) have demonstrated exceptional abilities across various domains. However, utilizing LLMs for ubiquitous sensing applications remains challenging as existing text-prompt methods show signifi...
Evolutionary Contrastive Distillation for Language Model Alignment

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Katz-Samuels, Julian; Li, Zheng; Yun, Hyokun; Nigam, Priyanka; Xu, Yi; Petricek, Vaclav; Yin, Bing; Chilimbi, Trishul (Amazon, United States)
The ability of large language models (LLMs) to execute complex instructions is essential for their real-world applications. However, several recent studies indicate that LLMs struggle with challenging instructions (Zh...
Advancing Vision-Language Models with Adapter Ensemble Strategies

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Bai, Yue; Zhao, Handong; Lin, Zhe; Kale, Ajinkya; Gu, Jiuxiang; Yu, Tong; Kim, Sungchul; Fu, Yun (Northeastern University, United States; Adobe, United States)
CLIP (Radford et al., 2021) revolutionized vision-language pretraining by using contrastive learning on paired web data. However, the sheer size of these pretrained models makes full-model fine-tuning exceedingly costly. On...
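The ensemble strategy itself is cut off in this abstract, but the building block it ensembles, an adapter, is a standard parameter-efficient tuning module: a small bottleneck network added beside a frozen backbone, with a residual connection. The plain-Python sketch below shows that standard form under stated assumptions; it is not the paper's specific design.

```python
def relu(v):
    return [max(0.0, x) for x in v]

def matvec(W, v):
    """Multiply matrix W (list of rows) by vector v."""
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def adapter(x, W_down, W_up):
    """Bottleneck adapter: project down, apply a nonlinearity,
    project back up, then add the residual so the frozen
    backbone's feature passes through unchanged at init."""
    h = relu(matvec(W_down, x))
    return [xi + ui for xi, ui in zip(x, matvec(W_up, h))]

# 4-dim feature, 2-dim bottleneck; zero weights give the identity map,
# the usual initialization so training starts from the frozen model.
x = [1.0, -2.0, 0.5, 3.0]
W_down = [[0.0] * 4 for _ in range(2)]
W_up = [[0.0] * 2 for _ in range(4)]
assert adapter(x, W_down, W_up) == x
```

Because only W_down and W_up are trained, several such adapters can be swapped or averaged cheaply, which is what makes ensembling them attractive.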
Free your mouse! Command Large Language Models to Generate Code to Format Word Documents

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Rao, Shihao; Li, Liang; Liu, Jiapeng; Guan, Weinxin; Gao, Xiyan; Li, Bing; Ma, Can (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China)
Recently, LLMs have significantly improved code generation, making it increasingly accessible to users. As a result, LLM-powered code generation applications have sprung up, vastly boosting user productivity. This pap...
A New Pipeline for Knowledge Graph Reasoning Enhanced by Large Language Models Without Fine-Tuning

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Zhongwu; Bai, Long; Li, Zixuan; Huang, Zhen; Jin, Xiaolong; Dou, Yong (National Key Laboratory of Parallel and Distributed Computing, National University of Defense Technology, China; CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, China)
Conventional Knowledge Graph Reasoning (KGR) models learn the embeddings of KG components over the structure of KGs, but their performances are limited when the KGs are severely incomplete. Recent LLM-enhanced KGR mod...
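The failure mode the abstract names, reasoning breaking down when the KG is incomplete, is easy to show concretely. The toy sketch below answers (head, relation, ?) queries from explicit triples only and returns nothing when the fact is missing, which is exactly the gap an LLM-enhanced pipeline would try to fill; the triples and helper are hypothetical, not the paper's pipeline.

```python
# A deliberately incomplete toy knowledge graph.
triples = {
    ("Paris", "capital_of", "France"),
    ("France", "in_continent", "Europe"),
}

def kg_answer(head, relation):
    """Answer a (head, relation, ?) query from explicit triples only."""
    for h, r, t in triples:
        if h == head and r == relation:
            return t
    return None  # KG is incomplete here; an LLM could be consulted

assert kg_answer("Paris", "capital_of") == "France"
assert kg_answer("Berlin", "capital_of") is None  # missing fact
```

An embedding model trained on these two triples would have nothing to generalize from for Berlin either, which is the motivation for consulting an LLM at exactly the `None` branch.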
ChatGPT Doesn't Trust Chargers Fans: Guardrail Sensitivity in Context

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Victoria R.; Chen, Yida; Saphra, Naomi (Harvard University, United States; Kempner Institute for the Study of Natural and Artificial Intelligence, Harvard University, United States)
While the biases of language models in production are extensively documented, the biases of their guardrails have been neglected. This paper studies how contextual information about the user influences the likelihood...
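Measuring how user context shifts refusal likelihood comes down to simple bookkeeping: group logged responses by the persona stated in the prompt and compare refusal rates. The sketch below shows that bookkeeping on invented data; the log entries are hypothetical, not the paper's measurements.

```python
from collections import defaultdict

def refusal_rates(logged):
    """logged: iterable of (persona, refused) pairs.
    Returns the refusal rate observed for each persona."""
    counts = defaultdict(lambda: [0, 0])   # persona -> [refusals, total]
    for persona, refused in logged:
        counts[persona][0] += int(refused)
        counts[persona][1] += 1
    return {p: r / n for p, (r, n) in counts.items()}

# Hypothetical log: same request, different stated personas.
log = [("A", True), ("A", False), ("B", False), ("B", False)]
print(refusal_rates(log))  # -> {'A': 0.5, 'B': 0.0}
```

A gap between the two rates on identical requests is the guardrail sensitivity the paper investigates.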
Large Language Models Know What is Key Visual Entity: An LLM-assisted Multimodal Retrieval for VQA

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jian, Pu; Yu, Donglei; Zhang, Jiajun (Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Wuhan AI Research, China; Shanghai Artificial Intelligence Laboratory, China)
Visual question answering (VQA) tasks, often performed by a visual language model (VLM), face challenges with long-tail knowledge. Recent retrieval-augmented VQA (RA-VQA) systems address this by retrieving and integrati...