Refine Search Results

Document Type

  • 7,585 conference papers
  • 71 books
  • 49 journal articles
  • 2 theses

Collection Scope

  • 7,706 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 6,483 Engineering
    • 6,256 Computer Science and Technology...
    • 3,577 Software Engineering
    • 748 Information and Communication Engineering
    • 535 Control Science and Engineering
    • 272 Electrical Engineering
    • 212 Biological Engineering
    • 121 Chemical Engineering and Technology
    • 100 Mechanical Engineering
    • 86 Electronic Science and Technology (...
    • 74 Biomedical Engineering (...
    • 63 Safety Science and Engineering
    • 59 Agricultural Engineering
    • 57 Transportation Engineering
    • 49 Cyberspace Security
  • 1,522 Management
    • 1,165 Library, Information and Archives Manage...
    • 467 Management Science and Engineering (...
    • 134 Business Administration
  • 1,471 Literature
    • 1,464 Foreign Languages and Literatures
    • 161 Chinese Language and Literature
  • 1,446 Science
    • 776 Mathematics
    • 352 Physics
    • 249 Biology
    • 240 Statistics (...
    • 120 Chemistry
    • 101 Systems Science
  • 164 Law
    • 153 Sociology
  • 129 Medicine
    • 93 Clinical Medicine
    • 75 Basic Medicine (...
  • 111 Education
    • 105 Education
  • 68 Agriculture
    • 68 Crop Science
  • 42 Economics
  • 6 Philosophy
  • 3 Art
  • 1 Military Science

Topics

  • 1,181 natural language...
  • 872 computational li...
  • 619 natural language...
  • 283 semantics
  • 165 natural language...
  • 128 machine learning
  • 127 graphic methods
  • 123 iterative method...
  • 111 sentiment analys...
  • 110 speech recogniti...
  • 105 deep learning
  • 94 syntactics
  • 90 text processing
  • 86 speech processin...
  • 81 embeddings
  • 72 information retr...
  • 69 modeling languag...
  • 69 artificial intel...
  • 66 contrastive lear...
  • 63 zero-shot learni...

Institutions

  • 74 carnegie mellon ...
  • 36 national univers...
  • 34 carnegie mellon ...
  • 34 language technol...
  • 34 institute for na...
  • 33 university of wa...
  • 33 school of comput...
  • 32 tsinghua univers...
  • 31 university of ch...
  • 30 nanyang technolo...
  • 30 stanford univers...
  • 29 zhejiang univers...
  • 27 alibaba grp peop...
  • 26 gaoling school o...
  • 26 carnegie mellon ...
  • 25 harbin institute...
  • 25 peking universit...
  • 25 natl univ singap...
  • 24 allen inst artif...
  • 23 the chinese univ...

Authors

  • 42 neubig graham
  • 39 zhou guodong
  • 39 smith noah a.
  • 36 liu yang
  • 36 lapata mirella
  • 34 sun maosong
  • 32 zhang min
  • 30 liu qun
  • 30 hovy eduard
  • 29 zhao jun
  • 27 schütze hinrich
  • 27 liu zhiyuan
  • 26 gurevych iryna
  • 25 vulic ivan
  • 22 huang xuanjing
  • 21 chang kai-wei
  • 21 liu kang
  • 21 zhang yue
  • 21 zhang qi
  • 20 wen ji-rong

Language

  • 6,955 English
  • 722 Other
  • 23 Chinese
  • 8 French
  • 4 Turkish
  • 2 German
  • 2 Russian

Search query: Any field = "Proceedings of the Conference on Empirical Methods in Natural Language Processing"
7,707 records; showing 221-230
GraphQL Query Generation: A Large Training and Benchmarking Dataset
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kesarwani, Manish; Ghosh, Sambit; Gupta, Nitin; Chakraborty, Shramona; Sindhgatta, Renuka; Mehta, Sameep; Eberhardt, Carlos; Debrunner, Dan (IBM Research India; IBM StepZen, United States)
GraphQL is a powerful query language for APIs that allows clients to fetch precise data efficiently and flexibly, querying multiple resources with a single request. However, crafting complex GraphQL query operations c...
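The abstract notes that GraphQL can query multiple resources with a single request; a minimal sketch of such an operation (the `user` and `posts` fields are hypothetical, not taken from the paper's dataset):

```python
import json

# One GraphQL operation fetching two resources at once.
# Field names are illustrative, not from any real schema.
query = """
query Combined($id: ID!) {
  user(id: $id) { name email }
  posts(authorId: $id, limit: 3) { title }
}
"""

# The JSON body a client would POST to a GraphQL endpoint
# in a single HTTP request.
payload = json.dumps({"query": query, "variables": {"id": "42"}})
print(json.loads(payload)["variables"]["id"])  # -> 42
```

The paper's task is generating such operations automatically; the sketch only shows what a target operation looks like.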
StablePrompt: Automatic Prompt Tuning using Reinforcement Learning for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kwon, Minchan; Kim, Gaeun; Kim, Jongsuk; Lee, Haeil; Kim, Junmo (KAIST, Republic of Korea)
Finding appropriate prompts for a specific task has become an important issue as the use of Large Language Models (LLMs) has expanded. Reinforcement Learning (RL) is widely used for prompt tuning, but its inherent ...
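As a toy illustration of automatic prompt tuning (a greedy stand-in, not StablePrompt's RL method, whose details sit behind the truncated abstract), one can search over candidate prompt edits under a task reward; the reward here is a placeholder:

```python
def score(prompt: str) -> float:
    # Stand-in reward: a real tuner would score the prompt by running
    # an LLM on the task; this toy reward just prefers 5-word prompts.
    return -abs(len(prompt.split()) - 5)

def tune(base: str, edits: list[str]) -> str:
    # Greedy search over candidate suffixes -- far simpler than the
    # reinforcement-learning search the abstract describes.
    best, best_score = base, score(base)
    for edit in edits:
        candidate = base + " " + edit
        if score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best

print(tune("Answer the question", ["briefly", "step by step"]))
# -> Answer the question briefly
```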
OpenSep: Leveraging Large Language Models with Textual Inversion for Open World Audio Separation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Mahmud, Tanvir; Marculescu, Diana (Chandra Family Department of Electrical and Computer Engineering, The University of Texas at Austin, United States)
Audio separation in real-world scenarios, where mixtures contain a variable number of sources, presents significant challenges due to limitations of existing models, such as over-separation, under-separation, and depe...
The Distributional Hypothesis Does Not Fully Explain the Benefits of Masked Language Model Pretraining
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Chiang, Ting-Rui; Yogatama, Dani (Univ Southern Calif, Los Angeles, CA 90007, USA)
We analyze the masked language modeling pretraining objective function from the perspective of the distributional hypothesis. We investigate whether better sample efficiency and the better generalization capability of...
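For context on the objective this paper analyzes: masked language model pretraining hides a subset of tokens and trains the model to predict them from the surrounding context. A toy sketch of the masking step (positions are chosen by hand here; real pretraining samples them randomly, typically around 15%):

```python
def mask_tokens(tokens, positions, mask="[MASK]"):
    # Replace the chosen positions with a mask symbol; the hidden
    # originals become the model's prediction targets.
    targets = {i: tokens[i] for i in positions}
    masked = [mask if i in positions else t for i, t in enumerate(tokens)]
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split(), {1, 5})
print(masked)   # ['the', '[MASK]', 'sat', 'on', 'the', '[MASK]']
print(targets)  # {1: 'cat', 5: 'mat'}
```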
DynaThink: Fast or Slow? A Dynamic Decision-Making Framework for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Pan, Jiabao; Zhang, Yan; Zhang, Chen; Liu, Zuozhu; Wang, Hongwei; Li, Haizhou (Cambridge University; Zhejiang University, China; National University of Singapore, Singapore; Zhejiang University, China; The Chinese University of Hong Kong, Hong Kong)
Large language models (LLMs) have demonstrated emergent capabilities across diverse reasoning tasks via popular Chain-of-Thought (CoT) prompting. However, such a simple and fast CoT approach often encounters limitati...
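The fast-vs-slow idea can be sketched as a router: sample several quick answers and accept the majority only if they agree strongly, otherwise escalate to a slower, more careful procedure. This consistency criterion is an assumption for illustration, not necessarily the paper's exact rule:

```python
from collections import Counter

def route(fast_answers, agreement=0.8):
    # If the sampled fast-path answers mostly agree, return the
    # majority answer; otherwise signal that the slow path is needed.
    top, count = Counter(fast_answers).most_common(1)[0]
    if count / len(fast_answers) >= agreement:
        return "fast", top
    return "slow", None

print(route(["7", "7", "7", "7", "8"]))  # ('fast', '7')
print(route(["7", "8", "9", "7", "6"]))  # ('slow', None)
```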
Holistic Automated Red Teaming for Large Language Models through Top-Down Test Case Generation and Multi-turn Interaction
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Jinchuan; Zhou, Yan; Liu, Yaxin; Li, Ziming; Hu, Songlin (Institute of Information Engineering, Chinese Academy of Sciences, China; School of Cyber Security, University of Chinese Academy of Sciences, China)
Automated red teaming is an effective method for identifying misaligned behaviors in large language models (LLMs). Existing approaches, however, often focus primarily on improving attack success rates while overlookin...
Story Morals: Surfacing value-driven narrative schemas using large language models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hobson, David G.; Zhou, Haiqi; Ruths, Derek; Piper, Andrew (School of Computer Science, McGill University, Canada; Department of Languages, Literatures and Cultures, McGill University, Canada)
Stories are not only designed to entertain but to encode lessons reflecting their authors' beliefs about the world. In this paper, we propose a new task of narrative schema labelling based on the concept of "...
Neuron Specialization: Leveraging Intrinsic Task Modularity for Multilingual Machine Translation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tan, Shaomu; Wu, Di; Monz, Christof (Language Technology Lab, University of Amsterdam, Netherlands)
Training a unified multilingual model promotes knowledge transfer but inevitably introduces negative interference. Language-specific modeling methods show promise in reducing interference. However, they often rely on ...
How Far Can We Extract Diverse Perspectives from Large Language Models?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hayati, Shirley Anugrah; Lee, Minhwa; Rajagopal, Dheeraj; Kang, Dongyeop (University of Minnesota, United States; Google DeepMind, United Kingdom)
Collecting diverse human opinions is costly and challenging. This has led to a recent trend of exploiting large language models (LLMs) to generate diverse data as a potentially scalable and efficient solution. However,...
DATA ADVISOR: Dynamic Data Curation for Safety Alignment of Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Wang, Fei; Mehrabi, Ninareh; Goyal, Palash; Gupta, Rahul; Chang, Kai-Wei; Galstyan, Aram (University of Southern California, United States; Amazon AGI Foundations, United States)
Data is a crucial element in large language model (LLM) alignment. Recent studies have explored using LLMs for efficient data collection. However, LLM-generated data often suffers from quality issues, with underrepres...