
Refine Results

Document Type

  • 7,582 conference papers
  • 71 books
  • 49 journal articles
  • 2 dissertations

Holdings

  • 7,703 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,480 Engineering
    • 6,252 Computer Science and Technology...
    • 3,600 Software Engineering
    • 748 Information and Communication Engineering
    • 507 Control Science and Engineering
    • 271 Electrical Engineering
    • 213 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 85 Electronic Science and Technology (...
    • 76 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,524 Management
    • 1,167 Library, Information and Archives Manag...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,472 Literature
    • 1,465 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,447 Natural Sciences
    • 775 Mathematics
    • 352 Physics
    • 250 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 165 Law
    • 153 Sociology
  • 130 Medicine
    • 94 Clinical Medicine
    • 76 Basic Medicine (...
  • 112 Education
    • 106 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topics

  • 1,183 篇 natural language...
  • 872 篇 computational li...
  • 621 篇 natural language...
  • 283 篇 semantics
  • 165 篇 natural language...
  • 128 篇 machine learning
  • 127 篇 graphic methods
  • 123 篇 iterative method...
  • 111 篇 sentiment analys...
  • 110 篇 speech recogniti...
  • 106 篇 deep learning
  • 94 篇 syntactics
  • 90 篇 text processing
  • 86 篇 speech processin...
  • 81 篇 embeddings
  • 72 篇 information retr...
  • 69 篇 modeling languag...
  • 69 篇 artificial intel...
  • 66 篇 contrastive lear...
  • 63 篇 zero-shot learni...

Institutions

  • 74 篇 carnegie mellon ...
  • 36 篇 national univers...
  • 34 篇 carnegie mellon ...
  • 34 篇 language technol...
  • 34 篇 institute for na...
  • 33 篇 university of wa...
  • 33 篇 school of comput...
  • 32 篇 tsinghua univers...
  • 30 篇 nanyang technolo...
  • 30 篇 stanford univers...
  • 30 篇 university of ch...
  • 29 篇 zhejiang univers...
  • 27 篇 alibaba grp peop...
  • 26 篇 carnegie mellon ...
  • 25 篇 gaoling school o...
  • 25 篇 harbin institute...
  • 25 篇 peking universit...
  • 25 篇 natl univ singap...
  • 24 篇 allen inst artif...
  • 23 篇 the chinese univ...

Authors

  • 42 篇 neubig graham
  • 39 篇 zhou guodong
  • 39 篇 smith noah a.
  • 36 篇 liu yang
  • 36 篇 lapata mirella
  • 34 篇 sun maosong
  • 32 篇 zhang min
  • 30 篇 liu qun
  • 30 篇 hovy eduard
  • 29 篇 zhao jun
  • 27 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 gurevych iryna
  • 25 篇 vulic ivan
  • 22 篇 huang xuanjing
  • 21 篇 chang kai-wei
  • 21 篇 liu kang
  • 21 篇 zhang yue
  • 20 篇 wen ji-rong
  • 20 篇 zhang qi

Language

  • 6,985 English
  • 689 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: Any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,704 records; showing 261-270
COOL, a Context Outlooker, and Its Application to Question Answering and Other Natural Language Processing Tasks
32nd International Joint Conference on Artificial Intelligence (IJCAI)
Authors: Zhu, Fangyi; Ng, See-Kiong; Bressan, Stephane (Natl Univ Singapore, Singapore, Singapore)
Vision outlooker improves the performance of vision transformers; it implements a self-attention mechanism by adding an outlook attention, a form of local attention. In natural language processing, as has been the ...
Dynamic Rewarding with Prompt Optimization Enables Tuning-free Self-Alignment of Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Singla, Somanshu; Wang, Zhen; Liu, Tianyang; Ashfaq, Abdullah; Hu, Zhiting; Xing, Eric P. (UC San Diego, United States; MBZUAI, United Arab Emirates; CMU, United States)
Aligning Large Language Models (LLMs) traditionally relies on costly training and human preference annotations. Self-alignment aims to reduce these expenses by aligning models by themselves. To further minimize the co...
Improving Zero-shot LLM Re-Ranker with Risk Minimization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yuan, Xiaowei; Yang, Zhao; Wang, Yequan; Zhao, Jun; Liu, Kang (Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Beijing Academy of Artificial Intelligence, Beijing, China)
In the Retrieval-Augmented Generation (RAG) system, advanced Large Language Models (LLMs) have emerged as effective Query Likelihood Models (QLMs) in an unsupervised way, which re-rank documents based on the probabili...
Fine-Tuning Large Language Models for Stock Return Prediction Using Newsflow
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Guo, Tian; Hauptmann, Emmanuel (Systematic Equities Team, RAM Active Investments, Geneva, Switzerland)
Large language models (LLMs) and their fine-tuning techniques have demonstrated superior performance in various language understanding and generation tasks. This paper explores fine-tuning LLMs for predicting stock re...
Hierarchical Pretraining on Multimodal Electronic Health Records
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Xiaochen; Luo, Junyu; Wang, Jiaqi; Yin, Ziyi; Cui, Suhan; Zhong, Yuan; Wang, Yaqing; Ma, Fenglong (Penn State Univ, University Park, PA 16802, USA; Google Res, Mountain View, CA 94043, USA)
Pretraining has proven to be a powerful technique in natural language processing (NLP), exhibiting remarkable success in various NLP downstream tasks. However, in the medical domain, existing pretrained models on elec...
Uncertainty in Language Models: Assessment through Rank-Calibration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Xinmeng; Li, Shuo; Yu, Mengxin; Sesia, Matteo; Hassani, Hamed; Lee, Insup; Bastani, Osbert; Dobriban, Edgar (University of Pennsylvania, Philadelphia, PA, United States; University of Southern California, Los Angeles, CA, United States)
Language Models (LMs) have shown promising performance in natural language generation. However, as LMs often generate incorrect or hallucinated responses, it is crucial to correctly quantify their uncertainty in respo...
TAIL: A Toolkit for Automatic and Realistic Long-Context Large Language Model Evaluation
2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Gu, Gefei; Zhao, Yilun; Ning, Ruoxi; Zheng, Yanan; Cohan, Arman (Yale University, United States; University of Waterloo, Canada; Allen Institute for AI, India)
As long-context large language models (LLMs) gain increasing attention for their ability to handle extensive inputs, the demand for effective evaluation methods has become critical. Existing evaluation methods, howeve...
An Effective Deployment of Diffusion LM for Data Augmentation in Low-Resource Sentiment Classification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Zhuowei; Wang, Lianxi; Wu, Yuben; Liao, Xinfeng; Tian, Yujia; Zhong, Junyang (Guangdong University of Foreign Studies, Guangzhou, China; Guangzhou Key Laboratory of Multilingual Intelligent Processing, Guangzhou, China)
Sentiment classification (SC) often suffers from low-resource challenges such as domain-specific contexts, imbalanced label distributions, and few-shot scenarios. The potential of the diffusion language model (LM) for...
Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hu, Xinyu; Lin, Li; Gao, Mingqi; Yin, Xunjian; Wan, Xiaojun (Wangxuan Institute of Computer Technology, Peking University, China)
The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area. With the recent emergence of powerful large language models (LLMs), some studies have turned to LLM-based auto...
Discovering Biases in Information Retrieval Models Using Relevance Thesaurus as Global Explanation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Youngwoo; Rahimi, Razieh; Allan, James (University of Massachusetts Amherst, United States)
Most efforts in interpreting neural relevance models have focused on local explanations, which explain the relevance of a document to a query but are not useful in predicting the model's behavior on unseen query-d...