
Refine Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Collection Scope

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Bioengineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (...
    • 74 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Manag...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literature
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art Studies
  • 1 Military Science

Topic

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institution

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 31 university of ch...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 gaoling school o...
  • 26 carnegie mellon ...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Author

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 21 zhang qi
  • 20 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian
Search query: "Any field = Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing 101-110
Conversing with databases: Practical Natural Language Querying
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Kochedykov, Denis; Yin, Fenglin; Khatravath, Sreevidya (JPMorgan ML CoE)
In this work, we designed, developed, and released in production DataQue - a hybrid NLQ (Natural Language Querying) system for conversational DB querying. We address multiple practical problems that are not accounted f...
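NLQ systems map a user's question onto a database query. As a toy illustration of the task setting only (a hypothetical sketch, unrelated to DataQue's actual hybrid architecture, which the abstract does not detail), a template-based mapper:

```python
import re

def toy_nlq(question):
    """Toy template-based natural-language-to-SQL mapper. Illustrates the NLQ
    task; production systems combine parsers, retrieval, and learned models
    rather than regex templates. Table names here are taken verbatim from the
    question, which a real system would validate against the schema."""
    q = question.lower()
    m = re.match(r"how many (\w+) are there", q)
    if m:
        return f"SELECT COUNT(*) FROM {m.group(1)};"
    m = re.match(r"list all (\w+)", q)
    if m:
        return f"SELECT * FROM {m.group(1)};"
    return None  # question shape not covered by any template

sql = toy_nlq("How many orders are there?")  # "SELECT COUNT(*) FROM orders;"
```

The gap between such templates and open-ended user phrasing is exactly the kind of practical problem hybrid NLQ systems aim to close.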
ATAP: Automatic Template-Augmented Commonsense Knowledge Graph Completion via Pre-Trained Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Fu; Ding, Yifan; Cheng, Jingwei (School of Computer Science and Engineering, Northeastern University, China)
The mission of commonsense knowledge graph completion (CKGC) is to infer missing facts from known commonsense knowledge. CKGC methods can be roughly divided into two categories: triple-based methods and text-based met...
The Accuracy Paradox in RLHF: When Better Reward Models Don't Yield Better Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Yanjun; Zhu, Dawei; Sun, Yirong; Chen, Xinghao; Zhang, Wei; Shen, Xiaoyu. Affiliations: Department of Computing, The Hong Kong Polytechnic University, Hong Kong; Saarland Informatics, Saarland University, Germany; Digital Twin Institute, Eastern Institute of Technology, Ningbo, China
Reinforcement Learning from Human Feedback significantly enhances natural language processing by aligning language models with human expectations. A critical factor in this alignment is the strength of reward models u...
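The "accuracy" at issue here is typically a reward model's pairwise preference accuracy. As a minimal sketch of that metric (an illustration of the standard evaluation, not code from the paper):

```python
def pairwise_accuracy(score_pairs):
    """Fraction of human-preference pairs where the reward model scores the
    chosen response strictly above the rejected one. This pairwise accuracy
    is the usual headline number for reward-model quality in RLHF pipelines.
    `score_pairs` is a list of (chosen_score, rejected_score) tuples."""
    correct = sum(1 for chosen, rejected in score_pairs if chosen > rejected)
    return correct / len(score_pairs)

# Hypothetical reward scores for four (chosen, rejected) response pairs
pairs = [(1.2, 0.7), (0.3, 0.9), (2.1, 1.5), (0.0, -0.4)]
accuracy = pairwise_accuracy(pairs)  # 3 of 4 pairs ranked correctly -> 0.75
```

The paper's point is that improving this number does not necessarily improve the policy trained against the reward model.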
Exploring Intrinsic Language-specific Subspaces in Fine-tuning Multilingual Neural Machine Translation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cao, Zhe; Qu, Zhi; Kamigaito, Hidetaka; Watanabe, Taro (Nara Institute of Science and Technology, Japan)
Multilingual neural machine translation models support fine-tuning hundreds of languages simultaneously. However, fine-tuning on full parameters alone is inefficient, potentially leading to negative interactions among...
Methods for Automatic Matrix Language Determination of Code-Switched Speech
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Iakovenko, Olga; Hain, Thomas (The University of Sheffield, United Kingdom)
Code-switching (CS) is the process of speakers alternating between two or more languages, which is becoming increasingly common in the modern world. To better describe CS speech, the Matrix Language Frame (MLF) t...
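In MLF theory, the matrix language supplies the grammatical frame of a code-switched utterance. As a deliberately naive baseline (a hypothetical sketch, not one of the paper's methods), the crudest determination just counts which language contributes more tokens:

```python
def majority_matrix_language(tagged_tokens):
    """Naive matrix-language guess for a code-switched utterance: return the
    language tag contributing the most tokens. Proper MLF criteria rely on
    morpheme order and system morphemes rather than raw token counts, so this
    is only an illustrative baseline.
    `tagged_tokens` is a list of (token, language_tag) pairs."""
    counts = {}
    for _, lang in tagged_tokens:
        counts[lang] = counts.get(lang, 0) + 1
    return max(counts, key=counts.get)

utterance = [("I", "en"), ("want", "en"), ("to", "en"), ("comer", "es"), ("tacos", "es")]
matrix = majority_matrix_language(utterance)  # "en"
```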
Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yona, Gal; Aharoni, Roee; Geva, Mor. Affiliations: Google Research, United States; Tel Aviv University, Israel
We posit that large language models (LLMs) should be capable of expressing their intrinsic uncertainty in natural language. For example, if the LLM is equally likely to output two contradicting answers to the same que...
SURf: Teaching Large Vision-Language Models to Selectively Utilize Retrieved Information
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Sun, Jiashuo; Zhang, Jihai; Zhou, Yucheng; Su, Zhaochen; Qu, Xiaoye; Cheng, Yu. Affiliations: Xiamen University, China; The Chinese University of Hong Kong, Hong Kong; SKL-IOTSC, CIS, University of Macau, China; Soochow University, China; Shanghai AI Laboratory, China
Large Vision-Language Models (LVLMs) have become pivotal at the intersection of computer vision and natural language processing. However, the full potential of LVLMs' Retrieval-Augmented Generation (RAG) capabilit...
TelBench: A Benchmark for Evaluating Telco-Specific Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Sunwoo; Arya, Dhammiko; Cho, Seung-Mo; Han, Gyoung-Eun; Hong, Seokyoung; Jang, Wonbeom; Lee, Seojin; Park, Sohee; Sek, Sereimony; Song, Injee; Yoon, Sungbin; Davis, Eric (SK Telecom, Republic of Korea)
The telecommunications industry, characterized by its vast customer base and complex service offerings, necessitates a high level of domain expertise and proficiency in customer service center operations. Consequently...
Direct Multi-Turn Preference Optimization for Language Agents
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shi, Wentao; Yuan, Mengqi; Wu, Junkang; Wang, Qifan; Feng, Fuli. Affiliations: University of Science and Technology of China, China; Meta AI, United States
Adapting Large Language Models (LLMs) for agent tasks is critical in developing language agents. Direct Preference Optimization (DPO) is a promising technique for this adaptation with the alleviation of compounding er...
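For context on the single-turn technique the paper extends: DPO trains the policy directly on preference pairs with the loss -log σ(β[(log πθ(y_w|x) − log πref(y_w|x)) − (log πθ(y_l|x) − log πref(y_l|x))]). A minimal numeric sketch of that standard loss (not the paper's multi-turn variant):

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Standard single-turn DPO loss for one preference pair.

    logp_w, logp_l         : policy log-probs of the chosen / rejected response
    ref_logp_w, ref_logp_l : the same log-probs under the frozen reference model
    beta                   : controls how far the policy may drift from the reference
    """
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    # Negative log-sigmoid of the margin: the loss shrinks as the policy
    # prefers the chosen response more strongly than the reference does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Example: the policy has shifted probability mass toward the preferred response
loss = dpo_loss(logp_w=-10.0, logp_l=-12.0, ref_logp_w=-11.0, ref_logp_l=-11.0)
```

Extending this objective to multi-turn agent trajectories, where compounding errors matter, is the paper's contribution.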
InfiniPot: Infinite Context Processing on Memory-Constrained LLMs
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Minsoo; Shim, Kyuhong; Choi, Jungwook; Chang, Simyung. Affiliations: Hanyang University, Republic of Korea; Qualcomm AI Research, Qualcomm Korea YH, Republic of Korea
Handling long input contexts remains a significant challenge for Large Language Models (LLMs), particularly in resource-constrained environments such as mobile devices. Our work aims to address this limitation by intr...
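The general problem setting is keeping a growing context inside a fixed memory budget. As a simplified stand-in for that kind of management (a generic importance-based eviction sketch, not InfiniPot's actual algorithm, which the abstract does not describe):

```python
def evict_to_budget(tokens, scores, budget):
    """Generic importance-based context eviction: when the cached context
    exceeds a fixed token budget, keep only the `budget` highest-scoring
    tokens, preserving their original order. `scores` is a per-token
    importance estimate (e.g. accumulated attention weight, hypothetically)."""
    if len(tokens) <= budget:
        return list(tokens)  # already within budget, nothing to evict
    ranked = sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:budget])  # restore original token order
    return [tokens[i] for i in keep]

context = ["sys", "a", "b", "c", "query"]
importance = [9.0, 0.1, 0.5, 0.2, 8.0]
compact = evict_to_budget(context, importance, budget=3)  # ["sys", "b", "query"]
```

Running eviction continually lets arbitrarily long inputs be processed under a constant memory cap, the setting the paper targets on mobile devices.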