
Refine Results

Document Type

  • 14,413 Conference papers
  • 646 Journal articles
  • 39 Theses and dissertations
  • 36 Books
  • 1 Technical report

Holdings

  • 15,134 Electronic documents
  • 1 Print holding

Date Distribution

Subject Classification

  • 10,934 Engineering
    • 10,275 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (may confer...
    • 141 Electronic Science and Technology (may confer...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (may confer...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Manag...
    • 754 Management Science and Engineering (may...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (may confer...
    • 112 Public Health and Preventive Med...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,523 natural language...
  • 1,768 natural language...
  • 945 computational li...
  • 736 semantics
  • 676 machine learning
  • 606 deep learning
  • 520 natural language...
  • 346 computational mo...
  • 334 training
  • 333 sentiment analys...
  • 330 accuracy
  • 327 large language m...
  • 322 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 263 speech recogniti...
  • 250 transformers
  • 235 neural networks
  • 217 iterative method...
  • 211 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 carnegie mellon ...
  • 44 tsinghua univers...
  • 42 zhejiang univers...
  • 41 national univers...
  • 37 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 28 peking universit...
  • 27 language technol...
  • 26 microsoft resear...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 28 lapata mirella
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,541 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian

Search query: "Any field = Conference on empirical methods in natural language processing"
15,135 records; showing results 351-360

Select, Prompt, Filter: Distilling Large Language Models for Summarizing Conversations
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Pham, Minh-Quang; Indurthi, Sathish Reddy; Chollampatt, Shamil; Turchi, Marco (Zoom Video Commun, San Jose, CA 95113, USA)
Large language models (LLMs) like ChatGPT can be expensive to train, deploy, and use for specific natural language generation tasks such as text summarization and for certain domains. A promising alternative is to fin...
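The title names a three-stage distillation recipe. As a rough, hedged illustration of that pattern (the function names, thresholds, and the injected `teacher_generate` callable below are assumptions for the sketch, not the paper's released code), a select/prompt/filter pipeline for building a student training set might look like:

```python
# Hypothetical select/prompt/filter distillation skeleton: a large teacher
# LLM writes pseudo-summaries for selected dialogues, weak outputs are
# filtered out, and the surviving pairs fine-tune a small student model.

def select(dialogues, k=100):
    """Select: keep the k longest dialogues as distillation candidates."""
    return sorted(dialogues, key=len, reverse=True)[:k]

def make_prompt(dialogue):
    """Prompt: wrap each dialogue in a summarization instruction."""
    return f"Summarize the following conversation:\n{dialogue}\nSummary:"

def filter_pairs(pairs, min_words=5, max_ratio=0.5):
    """Filter: drop empty or disproportionately long pseudo-summaries."""
    return [(d, s) for d, s in pairs
            if len(s.split()) >= min_words and len(s) <= max_ratio * len(d)]

def build_student_data(dialogues, teacher_generate):
    candidates = select(dialogues)
    pairs = [(d, teacher_generate(make_prompt(d))) for d in candidates]
    return filter_pairs(pairs)  # fine-tune the small student on these pairs
```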
IBADR: An Iterative Bias-Aware Dataset Refinement Framework for Debiasing NLU Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Xiaoyue; Liu, Xin; Wang, Lijie; Wang, Yaoxiang; Su, Jinsong; Wu, Hua (Xiamen Univ, Sch Informat, Xiamen 361005, Peoples R China; Baidu Inc, Beijing 100085, Peoples R China; Xiamen Univ, Minist Culture & Tourism, Key Lab Digital Protect & Intelligent Proc Intang, Xiamen, Peoples R China)
As commonly used methods for debiasing natural language understanding (NLU) models, dataset refinement approaches heavily rely on manual data analysis, and thus may be unable to cover all the potential biased features...
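As a hedged sketch of what an iterative, bias-aware refinement loop can look like in general (the injected callables, round count, and keep ratio are illustrative assumptions, not IBADR's actual components):

```python
# Illustrative iterative dataset-refinement loop: each round, a generator
# proposes candidate samples, a bias scorer ranks them, and the least-biased
# candidates are added to the pool used to retrain the NLU model.

def refine_dataset(pool, generate_sample, bias_score,
                   rounds=3, per_round=100, keep_ratio=0.5):
    for _ in range(rounds):
        candidates = [generate_sample(pool) for _ in range(per_round)]
        candidates.sort(key=bias_score)          # least biased first
        pool.extend(candidates[:int(keep_ratio * per_round)])
    return pool  # retrain the NLU model on the refined pool
```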
GEM: Gestalt Enhanced Markup Language Model for Web Understanding via Render Tree
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Shao, Zirui; Gao, Feiyu; Qi, Zhongda; Xing, Hangdi; Bu, Jiajun; Yu, Zhi; Zheng, Qi; Liu, Xiaozhong (Zhejiang Univ, Zhejiang Prov Key Lab Serv Robot, Hangzhou, Zhejiang, Peoples R China; Alibaba Grp, Hangzhou, Peoples R China; Worcester Polytech Inst, Worcester, MA, USA)
Inexhaustible web content carries abundant perceptible information beyond text. Unfortunately, most prior efforts in pre-trained Language Models (LMs) ignore such cyber-richness, while few of them only employ plain HT...
People Make Better Edits: Measuring the Efficacy of LLM-Generated Counterfactually Augmented Data for Harmful Language Detection
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Sen, Indira; Assenmacher, Dennis; Samory, Mattia; Augenstein, Isabelle; van der Aalst, Wil; Wagner, Claudia (Rhein Westfal TH Aachen, Aachen, Germany; Univ Konstanz, Constance, Germany; GESIS Leibniz Inst Social Sci, Mannheim, Germany; Sapienza Univ Rome, Rome, Italy; Univ Copenhagen, Copenhagen, Denmark)
NLP models are used in a variety of critical social computing tasks, such as detecting sexist, racist, or otherwise hateful content. Therefore, it is imperative that these models are robust to spurious features. Past ...
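A minimal sketch of the machine side of this comparison, prompting an LLM for a counterfactual edit (the prompt wording and the generic `llm` completion callable are assumptions, not the paper's exact setup):

```python
# Ask an LLM for a counterfactual edit: minimally change a post so the target
# label no longer applies, leaving everything else intact. The edited post,
# paired with the flipped label, augments the training data.

def counterfactual_edit(post: str, label: str, llm) -> str:
    prompt = (
        f"Minimally edit the following post so it is no longer {label}, "
        "changing as few words as possible.\n"
        f"Post: {post}\n"
        "Edited post:"
    )
    return llm(prompt).strip()
```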
Beyond Label Attention: Transparency in Language Models for Automated Medical Coding via Dictionary Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wu, John; Wu, David; Sun, Jimeng (University of Illinois Urbana-Champaign, United States; Vanderbilt University, United States)
Medical coding, the translation of unstructured clinical text into standardized medical codes, is a crucial but time-consuming healthcare practice. Though large language models (LLMs) could automate the coding process ...
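For a sense of the generic technique the title names, the toy sketch below fits a sparse dictionary over stand-in token activations with scikit-learn; the sizes and random data are illustrative, not the paper's pipeline:

```python
# Toy dictionary learning over token hidden states: each activation vector is
# decomposed into a sparse combination of learned atoms, which can then be
# inspected for interpretable structure.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
hidden_states = rng.standard_normal((200, 64))   # (tokens, hidden_dim)

dl = DictionaryLearning(n_components=32, alpha=1.0,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(hidden_states)          # sparse codes per token
print(codes.shape, f"{(codes != 0).mean():.1%} of entries nonzero")
```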
I Learn Better If You Speak My Language: Understanding the Superior Performance of Fine-Tuning Large Language Models with LLM-Generated Responses
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ren, Xuan; Wu, Biao; Liu, Lingqiao (University of Adelaide, Australia; University of Technology Sydney, Australia)
This paper explores an intriguing observation: fine-tuning a large language model (LLM) with responses generated by an LLM often yields better results than using responses generated by humans, particularly in reasoning...
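The comparison itself is straightforward to set up; a minimal sketch of the two fine-tuning conditions, assuming a generic `llm` completion callable (illustrative, not the paper's code):

```python
# Build two supervised fine-tuning sets over the same prompts: one pairs each
# prompt with a human-written answer, the other with an LLM-sampled answer.
# Fine-tune the same base model on each set and compare downstream results.

def build_sft_conditions(prompts, human_answers, llm):
    human_set = list(zip(prompts, human_answers))
    llm_set = [(p, llm(p)) for p in prompts]     # LLM-generated targets
    return human_set, llm_set
```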
Scaling Sentence Embeddings with Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Ting; Huang, Shaohan; Luan, Zhongzhi; Wang, Deqing; Zhuang, Fuzhen (SKLSDE Lab, School of Computer, Beihang University, Beijing, China; Sino-German Joint Software Institute, Beihang University, Beijing, China; Institute of Artificial Intelligence, Beihang University, Beijing, China; Zhongguancun Laboratory, Beijing, China)
Large Language Models (LLMs) have recently gained significant interest due to their impressive results in various natural language tasks. However, their application to sentence embeddings is still under active researc...
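One common recipe for getting sentence embeddings out of a decoder-only LM is to run the sentence through a summarizing prompt and keep the final token's hidden state; the sketch below assumes that recipe with a small stand-in model, and is not necessarily the paper's exact setup:

```python
# Sentence embeddings from a causal LM: embed via the last token's hidden
# state under a prompt that asks the model to compress the sentence.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")      # small stand-in model
lm = AutoModel.from_pretrained("gpt2").eval()

def embed(sentence: str) -> torch.Tensor:
    prompt = f'This sentence: "{sentence}" means in one word:'
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = lm(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden[0, -1]                         # last-token state

a = embed("A cat sat on the mat.")
b = embed("A feline rested on the rug.")
print(torch.cosine_similarity(a, b, dim=0).item())
```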
Scalable Efficient Training of Large Language Models with Low-dimensional Projected Attention
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lv, Xingtai; Ding, Ning; Zhang, Kaiyan; Hua, Ermo; Cui, Ganqu; Zhou, Bowen (Department of Electronic Engineering, Tsinghua University, China; Shanghai AI Laboratory, China; Department of Computer Science and Technology, Tsinghua University, China)
Improving the effectiveness and efficiency of large language models (LLMs) simultaneously is a critical yet challenging research goal. In this paper, we find that low-rank pre-training, normally considered as efficien...
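To make "low-dimensional projected attention" concrete, here is a generic sketch in which the query/key/value projections are factored through a rank-r bottleneck, cutting their parameters from about 3·d² to 6·d·r; the dimensions and wiring are illustrative assumptions, not the paper's exact architecture:

```python
# Attention with low-rank Q/K/V projections: each d x d projection matrix is
# replaced by a pair of thin matrices (d x r and r x d), so the projection
# passes through a low-dimensional subspace.
import torch
import torch.nn as nn

def low_rank(d_model: int, rank: int) -> nn.Module:
    return nn.Sequential(nn.Linear(d_model, rank, bias=False),
                         nn.Linear(rank, d_model, bias=False))

class LowRankAttention(nn.Module):
    def __init__(self, d_model=512, n_heads=8, rank=64):
        super().__init__()
        self.q, self.k, self.v = (low_rank(d_model, rank) for _ in range(3))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        out, _ = self.attn(self.q(x), self.k(x), self.v(x))
        return out

x = torch.randn(2, 16, 512)         # (batch, seq, d_model)
print(LowRankAttention()(x).shape)  # torch.Size([2, 16, 512])
```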
Stochastic Fine-Tuning of Language Models Using Masked Gradients
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Akbar-Tajari, Mohammad; Pilehvar, Mohammad Taher (Sharif University of Technology, Iran; Cardiff University, United Kingdom; Tehran Institute for Advanced Studies, Iran)
Large Language Models (LLMs) have emerged as the dominant paradigm in natural language processing owing to their remarkable performance across various target tasks. However, naively fine-tuning them for specific downs...
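A minimal sketch of fine-tuning with stochastically masked gradients, where a random binary mask zeroes most gradient entries each step so only a sparse subset of parameters is updated; the tiny stand-in model, keep probability, and optimizer are illustrative assumptions:

```python
# Masked-gradient fine-tuning: after each backward pass, keep ~10% of the
# gradient entries at random and zero the rest, so each step updates only a
# sparse, stochastic subset of the weights.
import torch
import torch.nn as nn

model = nn.Linear(10, 2)             # stand-in for a pretrained LM
opt = torch.optim.SGD(model.parameters(), lr=0.1)
keep_prob = 0.1

x, y = torch.randn(32, 10), torch.randint(0, 2, (32,))
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is not None:
                mask = (torch.rand_like(p.grad) < keep_prob).float()
                p.grad.mul_(mask)    # drop gradients outside the subset
    opt.step()
    print(loss.item())
```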
ZEROTOP: Zero-Shot Task-Oriented Semantic Parsing using Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Mekala, Dheeraj; Wolfe, Jason; Roy, Subhro (Univ Calif San Diego, La Jolla, CA 92093, USA; OpenAI, San Francisco, CA, USA; Microsoft Semant Machines, Newton, MA, USA)
We explore the use of large language models (LLMs) for zero-shot semantic parsing. Semantic parsing involves mapping natural language utterances to task-specific meaning representations. LLMs are generally trained on ...
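As a generic illustration of zero-shot semantic parsing by prompting (the schema, the prompt wording, and the `llm` completion callable are assumptions; the paper's actual method is more involved):

```python
# Zero-shot semantic parsing via prompting: describe the target schema in the
# prompt and ask the LLM to emit a meaning representation directly, with no
# in-domain training examples.

SCHEMA = "intents: SetAlarm, PlayMusic; slots: time, artist"

def zero_shot_parse(utterance: str, llm) -> str:
    prompt = (
        f"Task schema: {SCHEMA}\n"
        f"Utterance: {utterance}\n"
        "Meaning representation, as intent(slot=value, ...):"
    )
    return llm(prompt).strip()

# e.g. zero_shot_parse("wake me up at 7 am", llm) -> "SetAlarm(time=7 am)"
```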