
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Collection Scope

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (can confer...
    • 144 Electronic Science and Technology (can...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (can confer science...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manage...
    • 760 Management Science and Engineering (can...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (can confer medicine...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Subject Terms

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 851-860
Sort by:
A Framework of Knowledge Graph-Enhanced Large Language Model Based on Question Decomposition and Atomic Retrieval
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Yading; Song, Dandan; Zhou, Changzhi; Tian, Yuhang; Wang, Hao; Yang, Ziyi; Zhang, Shuhao. Affiliations: School of Computer Science and Technology, Beijing Institute of Technology, China; School of Cyberspace Science and Technology, Beijing Institute of Technology, China; College of Computing and Data Science, Nanyang Technological University, Singapore
Knowledge graphs (KGs) can provide explainable reasoning for large language models (LLMs), alleviating their hallucination problem. Knowledge graph question answering (KGQA) is a typical benchmark to evaluate the meth...
Reparameterization-Based Parameter-Efficient Fine-Tuning Methods for Large Language Models: A Systematic Survey
13th International Conference on Natural Language Processing and Chinese Computing
Authors: Chen, Zezhou; Liu, Zhaoxiang; Wang, Kai; Lian, Shiguo. Affiliations: China Unicom AI Innovat Ctr, Beijing 100013, Peoples R China; China Unicom, Unicom Digital Technol, Beijing 100013, Peoples R China
The rapid advancement of Large Language Models (LLMs) has revolutionized both academia and industry, leveraging Transformer architectures and pre-training objectives to achieve unprecedented performance. To fully expl...
Chain-of-Dictionary Prompting Elicits Translation in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Hongyuan; Yang, Haoran; Huang, Haoyang; Zhang, Dongdong; Lam, Wai; Wei, Furu. Affiliations: The Chinese University of Hong Kong, Hong Kong; Microsoft Corporation, United States
Large language models (LLMs) have shown surprisingly good performance in multilingual neural machine translation (MNMT) even without being trained explicitly for translation. Yet, they still struggle with translating l...
API Is Enough: Conformal Prediction for Large Language Models Without Logit-Access
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Su, Jiayuan; Luo, Jing; Wang, Hongwei; Cheng, Lu. Affiliations: ZJU-UIUC Institute, Zhejiang University, China; School of Mathematics and Statistics, Shandong University, China; Department of Computer Science, University of Illinois Chicago, United States
This study aims to address the pervasive challenge of quantifying uncertainty in large language models (LLMs) without logit-access. Conformal Prediction (CP), known for its model-agnostic and distribution-free feature...
Mitigating Language Bias of LMMs in Social Intelligence Understanding with Virtual Counterfactual Calibration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Peng; Guo, Xiao-Yu; Li, Yuan-Fang; Zhang, Xiaowang; Feng, Zhiyong. Affiliations: College of Intelligence and Computing, Tianjin University, China; AIML, University of Adelaide, Australia; Monash University, Australia
Social intelligence is essential for understanding complex human expressions and social interactions. While large multimodal models (LMMs) have demonstrated remarkable performance in social intelligence question answe...
Interpreting Arithmetic Mechanism in Large Language Models through Comparative Neuron Analysis
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Yu, Zeping; Ananiadou, Sophia. Affiliations: Department of Computer Science, National Centre for Text Mining, The University of Manchester, United Kingdom
We find arithmetic ability resides within a limited number of attention heads, with each head specializing in distinct operations. To delve into the reason, we introduce the Comparative Neuron Analysis (CNA) method, w...
NLLP 2024 - Natural Legal Language Processing Workshop 2024, Proceedings of the Workshop
6th Natural Legal Language Processing Workshop 2024, NLLP 2024, co-located with the 2024 Conference on Empirical Methods in Natural Language Processing
The proceedings contain 33 papers. The topics discussed include: LeGen: complex information extraction from legal sentences using generative models; summarizing long regulatory documents with a multi-step pipeline; enha...
SCA: Selective Compression Attention for Efficiently Extending the Context Window of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zheng, Huanran; Zhu, Wei; Wang, Xiaoling. Affiliation: East China Normal University, Shanghai, China
Large language models (LLMs) have achieved impressive performance across various domains, but the limited context window and the expensive computational cost of processing long texts restrict their more comprehensive ...
Linguistic Bias in ChatGPT: Language Models Reinforce Dialect Discrimination
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Fleisig, Eve; Smith, Genevieve; Bossi, Madeline; Rustagi, Ishita; Yin, Xavier; Klein, Dan. Affiliation: University of California, Berkeley, United States
We present a large-scale study of linguistic bias exhibited by ChatGPT covering ten dialects of English (Standard American English, Standard British English, and eight widely spoken non-"standard" varieties f...
Breaking the Script Barrier in Multilingual Pre-Trained Language Models with Transliteration-Based Post-Training Alignment
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xhelili, Orgest; Liu, Yihong; Schütze, Hinrich. Affiliations: Technical University of Munich, Germany; Center for Information and Language Processing, LMU Munich, Germany
Multilingual pre-trained models (mPLMs) have shown impressive performance on cross-lingual transfer tasks. However, the transfer performance is often hindered when a low-resource target language is written in a differ...