
Refine Results

Document Type

  • 14,413 conference papers
  • 650 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection

  • 15,204 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,937 Engineering
    • 10,278 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Management...
    • 754 Management Science and Engineering (may confer...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer Medicine...
    • 112 Public Health and Preventive Medicine...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,523 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 736 semantics
  • 680 machine learning
  • 606 deep learning
  • 520 natural language...
  • 345 computational mo...
  • 334 training
  • 331 sentiment analys...
  • 330 accuracy
  • 325 large language m...
  • 320 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 263 speech recogniti...
  • 250 transformers
  • 235 neural networks
  • 217 iterative method...
  • 211 support vector m...

Institutions

  • 85 Carnegie Mellon ...
  • 51 University of Ch...
  • 45 Carnegie Mellon ...
  • 44 Tsinghua Univers...
  • 42 Zhejiang Univers...
  • 42 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 University of Sc...
  • 34 Carnegie Mellon ...
  • 33 Stanford Univers...
  • 32 Gaoling School o...
  • 32 School of Artifi...
  • 32 Alibaba Grp Peop...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 28 Peking Universit...
  • 27 Language Technol...
  • 26 Microsoft Resear...

Authors

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 27 Lapata Mirella
  • 26 Wen Ji-Rong
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,611 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian

Search query: "Any field = Conference on empirical methods in natural language processing"
15,205 records; showing 291–300
LinguAlchemy: Fusing Typological and Geographical Elements for Unseen Language Generalization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Adilazuarda, Muhammad Farid; Cahyawijaya, Samuel; Winata, Genta Indra; Purwarianti, Ayu; Aji, Alham Fikri (MBZUAI, United Arab Emirates; Institut Teknologi Bandung, Indonesia; The Hong Kong University of Science and Technology, Hong Kong; Capital One, United States)
Pretrained language models (PLMs) have become remarkably adept at task and language generalization. Nonetheless, they often fail when faced with unseen languages. In this work, we present LINGUALCHEMY, a regularizatio...
Let Me Speak Freely? A Study on the Impact of Format Restrictions on Performance of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tam, Zhi Rui; Wu, Cheng-Kuang; Tsai, Yi-Lin; Lin, Chieh-Yen; Lee, Hung-Yi; Chen, Yun-Nung (Appier AI Research; National Taiwan University, Taiwan)
Structured generation, the process of producing content in standardized formats like JSON and XML, is widely utilized in real-world applications to extract key output information from large language models (LLMs). Thi...
Backward Lens: Projecting Language Model Gradients into the Vocabulary Space
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Katz, Shahar; Belinkov, Yonatan; Geva, Mor; Wolf, Lior (Faculty of Computer Science, Technion - Israel Institute of Technology, Israel; Blavatnik School of Computer Science, Tel Aviv University, Israel)
Understanding how Transformer-based Language Models (LMs) learn and recall information is a key goal of the deep learning community. Recent interpretability methods project weights and hidden states obtained from the ...
Bridging Information-Theoretic and Geometric Compression in Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Cheng, Emily; Kervadec, Corentin; Baroni, Marco (Univ Pompeu Fabra, Barcelona, Spain; ICREA, Barcelona, Spain)
For a language model (LM) to faithfully model human language, it must compress vast, potentially infinite information into relatively few dimensions. We propose analyzing compression in (pre-trained) LMs from two poin...
Initialization of Large Language Models via Reparameterization to Mitigate Loss Spikes
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Nishida, Kosuke; Nishida, Kyosuke; Saito, Kuniko (NTT Human Informatics Laboratories, NTT Corporation, Japan)
Loss spikes, a phenomenon in which the loss value suddenly diverges, are a fundamental issue in the pre-training of large language models. This paper hypothesizes that the non-uniformity of the norm of the parameters is on...
1+1>2: Can Large Language Models Serve as Cross-Lingual Knowledge Aggregators?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Yue; Fan, Chenrui; Li, Yuan; Wu, Siyuan; Zhou, Tianyi; Zhang, Xiangliang; Sun, Lichao (University of Notre Dame, United States; University of Maryland, College Park, United States; University of Cambridge, United Kingdom; Huazhong University of Science and Technology, China; Lehigh University, United States)
Large Language Models (LLMs) have garnered significant attention due to their remarkable ability to process information across various languages. Despite their capabilities, they exhibit inconsistencies in handling id...
Leveraging Large Language Models for NLG Evaluation: Advances and Challenges
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Zhen; Xu, Xiaohan; Shen, Tao; Xu, Can; Gu, Jia-Chen; Lai, Yuxuan; Tao, Chongyang; Ma, Shuai (WICT, Peking University, China; The University of Hong Kong, Hong Kong; UTS, Australia; Microsoft, United States; UCLA, United States; The Open University of China, China; SKLSDE Lab, Beihang University, China)
In the rapidly evolving domain of Natural Language Generation (NLG) evaluation, introducing Large Language Models (LLMs) has opened new avenues for assessing generated content quality, e.g., coherence, creativity, and...
Can Language Models Recognize Convincing Arguments?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Rescala, Paula Dolores; Ribeiro, Manoel Horta; Hu, Tiancheng; West, Robert (EPFL, Switzerland; University of Cambridge, United Kingdom)
The capabilities of large language models (LLMs) have raised concerns about their potential to create and propagate convincing ***, we study their performance in detecting convincing arguments to gain insights into LL...
STOP! Benchmarking Large Language Models with Sensitivity Testing on Offensive Progressions
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Morabito, Robert; Madhusudan, Sangmitra; McDonald, Tyler; Emami, Ali (Brock University, Saint Catharines, Canada)
Mitigating explicit and implicit biases in Large Language Models (LLMs) has become a critical focus in the field of natural language processing. However, many current methodologies evaluate scenarios in isolation, wit...
Review of Research on Mongolian Fixed Phrases Recognition
28th International Conference on Asian Language Processing (IALP)
Authors: Sui, Xiaolong; Zhang, Zhonghao; Liu, Na; Liu, Guiping; Ji, Yatu; Ren, Qing-Dao-Er-Ji; Wu, Nier; Lu, Min (Inner Mongolia Univ Technol, Sch Informat Engn, Hohhot 010051, Peoples R China)
Mongolian fixed phrase recognition is one of the most fundamental tasks in Mongolian natural language processing, and its main purpose is to identify the boundaries and types of fixed phrases with specific meanings in Mon...