
Refine Search Results

Document Type

  • 14,413 conference papers
  • 650 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,204 electronic resources
  • 1 print holding

Date Distribution

Subject Classification

  • 10,937 Engineering
    • 10,278 Computer Science and Technology...
    • 5,404 Software Engineering
    • 1,460 Information and Communication Engineering
    • 953 Electrical Engineering
    • 875 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (...
    • 141 Electronic Science and Technology (...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Natural Sciences
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (...
    • 241 Systems Science
    • 232 Chemistry
  • 2,413 Management
    • 1,747 Library, Information and Archives Management...
    • 754 Management Science and Engineering (...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (...
    • 112 Public Health and Preventive Medicine...
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 97 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,523 篇 natural language...
  • 1,768 篇 natural language...
  • 952 篇 computational li...
  • 736 篇 semantics
  • 680 篇 machine learning
  • 606 篇 deep learning
  • 520 篇 natural language...
  • 345 篇 computational mo...
  • 334 篇 training
  • 331 篇 sentiment analys...
  • 330 篇 accuracy
  • 325 篇 large language m...
  • 320 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 263 篇 speech recogniti...
  • 250 篇 transformers
  • 235 篇 neural networks
  • 217 篇 iterative method...
  • 211 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 51 篇 university of ch...
  • 45 篇 carnegie mellon ...
  • 44 篇 tsinghua univers...
  • 42 篇 zhejiang univers...
  • 42 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 university of sc...
  • 34 篇 carnegie mellon ...
  • 33 篇 stanford univers...
  • 32 篇 gaoling school o...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 28 篇 peking universit...
  • 27 篇 language technol...
  • 26 篇 microsoft resear...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 27 篇 lapata mirella
  • 26 篇 wen ji-rong
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,611 English
  • 481 Other
  • 104 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,205 records; results 451-460 are shown below

RECOVERING FROM PRIVACY-PRESERVING MASKING WITH LARGE LANGUAGE MODELS
49th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
Authors: Vats, Arpita; Liu, Zhe; Sue, Peng; Paul, Debjyoti; Ma, Yingyi; Pang, Yutong; Ahmed, Zeeshan; Kalinli, Ozlem (Santa Clara Univ, Santa Clara, CA 95053, USA; Meta, Menlo Pk, CA, USA)
Model adaptation is crucial to handle the discrepancy between proxy training data and actual users' data received. To effectively perform adaptation, textual data of users is typically stored on servers or their l...

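The abstract above is cut off, but the core idea named in the title, masking privacy-sensitive spans in user text before it is used for adaptation, can be sketched in a few lines. The regex patterns and the <mask> token below are illustrative assumptions, not the paper's pipeline, and the LLM-based recovery step is not shown.

```python
import re

# Toy illustration only: replace privacy-sensitive spans with a mask token
# before text is used for server-side adaptation. Patterns and the "<mask>"
# token are assumptions, not the paper's actual masking scheme.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str, mask_token: str = "<mask>") -> str:
    for _, pattern in PATTERNS.items():
        text = pattern.sub(mask_token, text)
    return text

if __name__ == "__main__":
    sample = "Call me at +1 650-555-0100 or write to jane.doe@example.com."
    print(mask_pii(sample))
    # -> "Call me at <mask> or write to <mask>."
```
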
MalayMMLU: A Multitask Benchmark for the Low-Resource Malay Language
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Poh, Soon Chang; Yang, Sze Jue; Tan, Jeraelyn Ming Li; Chieng, Lawrence Leroy Tze Yao; Tan, Jia Xuan; Yu, Zhenyu; Foong, Chee Mun; Chan, Chee Seng (Universiti Malaya, Malaysia; YTL AI Labs, Malaysia)
Large Language Models (LLMs) and Large Vision Language Models (LVLMs) exhibit advanced proficiency in language reasoning and comprehension across a wide array of languages. While their performance is notably robust in...

Grounding Language in Multi-Perspective Referential Communication
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Tang, Zineng; Mao, Lingjun; Suhr, Alane (University of California, Berkeley, United States)
We introduce a task and dataset for referring expression generation and comprehension in multi-agent embodied *** this task, two agents in a shared scene must take into account one another's visual perspective, wh...

GraphQL Query Generation: A Large Training and Benchmarking Dataset
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kesarwani, Manish; Ghosh, Sambit; Gupta, Nitin; Chakraborty, Shramona; Sindhgatta, Renuka; Mehta, Sameep; Eberhardt, Carlos; Debrunner, Dan (IBM Research, India; IBM StepZen, United States)
GraphQL is a powerful query language for APIs that allows clients to fetch precise data efficiently and flexibly, querying multiple resources with a single request. However, crafting complex GraphQL query operations c...

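As background for the abstract above, the sketch below issues one GraphQL query that retrieves two related resources in a single request. The endpoint URL, field names, and schema are hypothetical, assumed only for illustration; they are not taken from the paper or any real API.

```python
import requests  # third-party: pip install requests

# Hypothetical endpoint and schema, assumed purely for illustration;
# a real GraphQL API defines its own types and fields.
GRAPHQL_ENDPOINT = "https://api.example.com/graphql"

QUERY = """
query ($login: String!) {
  user(login: $login) {
    name
    repositories(first: 3) {
      name
      stars
    }
  }
}
"""

def fetch(login: str) -> dict:
    # One POST retrieves the user and their repositories together, which is
    # the "multiple resources with a single request" property the abstract
    # mentions.
    response = requests.post(
        GRAPHQL_ENDPOINT,
        json={"query": QUERY, "variables": {"login": login}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["data"]

if __name__ == "__main__":
    print(fetch("octocat"))
```
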
PSST: A Benchmark for Evaluation-driven Text Public-Speaking Style Transfer
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Sun, Huashan; Wu, Yixiao; Ye, Yuhao; Yang, Yizhe; Li, Yinghao; Li, Jiawei; Gao, Yang (School of Computer Science and Technology, Beijing Institute of Technology; Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications, China)
Language style is necessary for AI systems to understand and generate diverse human language ***, previous text style transfer primarily focused on sentence-level data-driven approaches, limiting exploration of potent...

Beyond Shared Vocabulary: Increasing Representational Word Similarities across Languages for Multilingual Machine Translation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wu, Di; Monz, Christof (Univ Amsterdam, Language Technol Lab, Amsterdam, Netherlands)
Using a vocabulary that is shared across languages is common practice in Multilingual Neural Machine Translation (MNMT). In addition to its simple design, shared tokens play an important role in positive knowledge tra...

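To make the "shared tokens" notion above concrete, here is a toy sketch in which a common character-bigram inventory stands in for a real shared subword vocabulary (e.g., BPE). The two mini-corpora are invented for illustration; the printed overlap is only meant to show that related languages reuse entries of a shared vocabulary.

```python
# Toy illustration of a shared vocabulary across languages: character bigrams
# stand in for real subword units. The mini-corpora are invented examples.
def bigram_vocab(corpus: list[str]) -> set[str]:
    vocab = set()
    for sentence in corpus:
        for token in sentence.lower().split():
            vocab.update(token[i:i + 2] for i in range(len(token) - 1))
    return vocab

english = ["the translation system shares tokens", "shared tokens transfer knowledge"]
german = ["das system teilt token", "geteilte token transferieren wissen"]

en_vocab, de_vocab = bigram_vocab(english), bigram_vocab(german)
shared = en_vocab & de_vocab
print(f"shared entries: {len(shared)} of {len(en_vocab | de_vocab)}")
print(sorted(shared))
```
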
SecureSQL: Evaluating Data Leakage of Large Language Models as Natural Language Interfaces to Databases
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Song, Yanqi; Liu, Ruiheng; Chen, Shu; Ren, Qianhao; Zhang, Yu; Yu, Yongqi (Harbin Institute of Technology, China; Xi'an Research Institute of High-Tech, China)
With the widespread application of Large Language Models (LLMs) in Natural Language Interfaces to Databases (NLIDBs), concerns about security issues in NLIDBs have been increasing gradually. However, research on sensi...

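The leakage concern described above can be illustrated with a naive guard that refuses to run model-generated SQL touching columns marked as sensitive. The schema, column names, and query below are assumptions for illustration, not the paper's benchmark or evaluation protocol, and real leakage can be far subtler than a direct column reference.

```python
import re

# Naive guard illustrating the concern: before executing SQL produced by an
# NL-to-SQL model, reject queries that mention columns flagged as sensitive.
# The column names and the example query are assumptions for illustration.
SENSITIVE_COLUMNS = {"salary", "ssn", "password_hash"}

def references_sensitive(sql: str) -> bool:
    identifiers = set(re.findall(r"[A-Za-z_][A-Za-z0-9_]*", sql.lower()))
    return bool(identifiers & SENSITIVE_COLUMNS)

generated = "SELECT name, salary FROM employees WHERE department = 'NLP'"
if references_sensitive(generated):
    print("blocked: query references a sensitive column")
else:
    print("ok to execute")
```
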
PEFTDebias: Capturing debiasing information using PEFTs
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Agarwal, Sumit; Veerubhotla, Aditya Srikanth; Bansal, Srijan (Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213, USA)
The increasing use of foundation models highlights the urgent need to address and eliminate implicit biases present in them that arise during pre-training. In this paper, we introduce PEFTDebias, a novel approach that...

DLoRA: Distributed Parameter-Efficient Fine-Tuning Solution for Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gao, Chao; Zhang, Sai Qian (University of California, Riverside, United States; New York University, United States)
To enhance the performance of large language models (LLM) on downstream tasks, one solution is to fine-tune certain LLM parameters and make it better align with the characteristics of the training dataset. This proces...

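The abstract describes fine-tuning only a small subset of parameters. The sketch below shows the generic low-rank adapter idea that such parameter-efficient methods build on (a frozen weight plus a trainable low-rank update); it is not the paper's distributed DLoRA scheme, and the layer sizes are arbitrary.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update (generic LoRA idea;
    not the paper's distributed DLoRA scheme). Dimensions are illustrative."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)  # frozen pretrained weight
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Output of the frozen layer plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)

layer = LoRALinear(16, 16)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable parameters: {trainable} / {total}")
```
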
WALLEDEVAL: A Comprehensive Safety Evaluation Toolkit for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gupta, Prannaya; Yau, Le Qi; Low, Hao Han; Lee, I-Shiang; Lim, Hugo M.; Teoh, Yu Xin; Koh, Jia Hng; Liew, Dar Win; Bhardwaj, Rishabh; Bhardwaj, Rajat; Poria, Soujanya (Walled AI Labs)
WALLEDEVAL is a comprehensive AI safety testing toolkit designed to evaluate large language models (LLMs). It accommodates a diverse range of models, including both open-weight and API-based ones, and features over 35...