
Refine Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic resources
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may be awarded...
    • 144 Electronic Science and Technology (may...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may be awarded in Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Management...
    • 760 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may be awarded in Medicine...
    • 113 Public Health and Preventive Medicine...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agronomy
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: Any field = "Conference on empirical methods in natural language processing"
15,363 records; showing results 631-640
On the In-context Generation of Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Zhongtao; Zhang, Yuanzhe; Luo, Kun; Yuan, Xiaowei; Zhao, Jun; Liu, Kang (The Key Laboratory of Cognition and Decision Intelligence for Complex Systems, Institute of Automation, Chinese Academy of Sciences, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, China; Beijing Academy of Artificial Intelligence, China; Shanghai Artificial Intelligence Laboratory, China)
Large language models (LLMs) are found to have the ability of in-context generation (ICG): when they are fed with an in-context prompt concatenating a few somehow similar examples, they can implicitly recognize the pa...
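
To make the ICG setup above concrete, here is a minimal sketch of the kind of prompt the abstract describes: a few similar demonstrations concatenated so the model can pick up and continue the shared pattern. The task, formatting, and helper name are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of an in-context generation (ICG) prompt: a few similar
# examples are concatenated, and the model is expected to implicitly
# recognize the shared pattern and continue it. The task and formatting
# here are illustrative assumptions, not the paper's setup.

def build_icg_prompt(demos: list[str], query: str) -> str:
    """Concatenate demonstration examples followed by an unfinished query."""
    return "\n".join(demos + [query])

demos = [
    "Country: France -> Capital: Paris",
    "Country: Japan -> Capital: Tokyo",
]
# A model continuing the pattern should produce "Nairobi".
print(build_icg_prompt(demos, "Country: Kenya -> Capital:"))
```
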
Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Zhu, Miaoxi; Zhong, Qihuang; Shen, Li; Ding, Liang; Liu, Juhua; Du, Bo; Tao, Dacheng (Wuhan Univ, Sch Comp Sci, Natl Engn Res Ctr Multimedia Software, Inst Artificial Intelligence, Wuhan, Peoples R China; Wuhan Univ, Hubei Key Lab Multimedia & Network Commun Engn, Wuhan, Peoples R China; JD Explore Acad, Beijing, Peoples R China; Univ Sydney, Sydney, NSW, Australia)
Quantization is a promising approach for reducing memory overhead and accelerating inference, especially in large pre-trained language model (PLM) scenarios. While having no access to original training data due to sec...
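
For background on the operation this abstract builds on, the sketch below shows plain symmetric round-to-nearest weight quantization: float weights are mapped to low-bit integers plus a single scale. It is only the baseline idea, not the paper's zero-shot sharpness-aware procedure.

```python
# Plain symmetric round-to-nearest weight quantization: background for the
# abstract above, not the paper's zero-shot sharpness-aware method.
import numpy as np

def quantize_symmetric(w: np.ndarray, bits: int = 8) -> tuple[np.ndarray, float]:
    """Map float weights to signed integers with one per-tensor scale."""
    qmax = 2 ** (bits - 1) - 1
    scale = float(np.abs(w).max()) / qmax
    q = np.clip(np.round(w / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the integer representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)
q, scale = quantize_symmetric(w)
print("max reconstruction error:", float(np.abs(w - dequantize(q, scale)).max()))
```

The memory saving comes from storing `q` at 8 bits per weight instead of 32, at the cost of the reconstruction error printed above.
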
Please note that I'm just an AI: Analysis of Behavior Patterns of LLMs in (Non-)offensive Speech Identification
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Dönmez, Esra; Vu, Thang; Falenska, Agnieszka (Institute for Natural Language Processing, University of Stuttgart, Germany; Interchange Forum for Reflecting on Intelligent Systems, University of Stuttgart, Germany)
Offensive speech is highly prevalent on online platforms. Trained on online data, Large Language Models (LLMs) display undesirable behaviors, such as generating harmful text or failing to recognize it. Despite these shortcomings, th...

UNICORN: A Unified Causal Video-Oriented Language-Modeling Framework for Temporal Video-Language Tasks
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xiong, Yuanhao; Nie, Yixin; Liu, Haotian; Wang, Boxin; Chen, Jun; Jin, Rong; Hsieh, Cho-Jui; Torresani, Lorenzo; Lei, Jie (UCLA, United States; Meta, United States; University of Wisconsin-Madison, United States; UIUC, United States)
The great success of large language models has encouraged the development of large multimodal models, with a focus on image-language interaction. Despite promising results in various image-language downstream tasks, i...
LONGAGENT: Achieving Question Answering for 128k-Token-Long Documents through Multi-Agent Collaboration
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Jun; Zu, Can; Xu, Hao; Lu, Yi; He, Wei; Ding, Yiwen; Gui, Tao; Zhang, Qi; Huang, Xuanjing (School of Computer Science, Fudan University, China; Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China; Institute of Modern Languages and Linguistics, Fudan University, China)
Large language models (LLMs) have achieved tremendous success in understanding language and processing text. However, question-answering (QA) on lengthy documents faces challenges of resource constraints and a high pr...

PromptReps: Prompting Large Language Models to Generate Dense and Sparse Representations for Zero-Shot Document Retrieval
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhuang, Shengyao; Ma, Xueguang; Koopman, Bevan; Lin, Jimmy; Zuccon, Guido (CSIRO, Australia; The University of Queensland, Australia; University of Waterloo, Canada)
Utilizing large language models (LLMs) for zero-shot document ranking is done in one of two ways: (1) prompt-based re-ranking methods, which require no further training but are only feasible for re-ranking a handful o...
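
To illustrate option (1) named in the abstract, here is a toy sketch of prompt-based zero-shot re-ranking. `ask_llm_for_relevance` is a hypothetical stand-in: a real system would prompt an LLM for a relevance judgment, while this self-contained version approximates it with term overlap.

```python
# Toy sketch of prompt-based zero-shot re-ranking (option (1) in the
# abstract): each candidate document is scored against the query.
# `ask_llm_for_relevance` is a hypothetical stand-in for an LLM call.

def ask_llm_for_relevance(query: str, doc: str) -> float:
    # A real system would send a prompt such as
    # f"Query: {query}\nDocument: {doc}\nIs the document relevant?"
    # to an LLM; here we approximate the judgment with term overlap.
    q_terms = set(query.lower().split())
    d_terms = set(doc.lower().split())
    return len(q_terms & d_terms) / max(len(q_terms), 1)

def rerank(query: str, docs: list[str]) -> list[str]:
    """Sort candidate documents by the (stand-in) LLM relevance score."""
    return sorted(docs, key=lambda d: ask_llm_for_relevance(query, d), reverse=True)

docs = ["dense passage retrieval", "sparse retrieval with BM25", "a cooking recipe"]
print(rerank("dense retrieval methods", docs))
```
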
Universal Vulnerabilities in Large Language Models: Backdoor Attacks for In-context Learning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Shuai; Jia, Meihuizi; Tuan, Luu Anh; Pan, Fengjun; Wen, Jinming (Nanyang Technological University, Singapore; Guangzhou University, Guangzhou, China; Beijing Institute of Technology, Beijing, China)
In-context learning, a paradigm bridging the gap between pre-training and fine-tuning, has demonstrated high efficacy in several NLP tasks, especially in few-shot settings. Despite being widely applied, in-context lea...

DEPN: Detecting and Editing Privacy Neurons in Pretrained Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wu, Xinwei; Li, Junzhuo; Xu, Minghui; Dong, Weilong; Wu, Shuangzhi; Bian, Chao; Xiong, Deyi (Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China; Tianjin Univ, Sch New Media & Commun, Tianjin, Peoples R China; Tsinghua Univ, Dept Comp Sci & Technol, Beijing, Peoples R China; ByteDance, Lark AI, Beijing, Peoples R China)
Large language models pretrained on a huge amount of data capture rich knowledge and information in the training data. The ability of data memorization and regurgitation in pre-trained language models, revealed in pre...

Compressing Context to Enhance Inference Efficiency of Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Yucheng; Dong, Bo; Guerin, Frank; Lin, Chenghua (Univ Surrey, Dept Comp Sci, Guildford, Surrey, England; Univ Manchester, Dept Comp Sci, Manchester, Lancs, England; Univ Sheffield, Dept Comp Sci, Sheffield, S Yorkshire, England)
Large language models (LLMs) achieved remarkable performance across various tasks. However, they face challenges in managing long documents and extended conversations, due to significantly increased computational requ...
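
As a toy illustration of the idea in the abstract, the sketch below compresses a context by keeping only the sentences a cheap score deems most informative. The inverse-word-frequency score is a stand-in assumption so the example runs on its own; it is not the paper's actual scoring method.

```python
# Toy context compression: keep only the most "informative" sentences
# before passing the context to an LLM. The inverse-word-frequency score
# is a stand-in assumption, not the paper's method.
from collections import Counter

def compress_context(text: str, keep_ratio: float = 0.7) -> str:
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    freq = Counter(w.lower() for s in sentences for w in s.split())

    def informativeness(s: str) -> float:
        words = s.split()
        # Rare words contribute more, mimicking an information score.
        return sum(1.0 / freq[w.lower()] for w in words) / max(len(words), 1)

    k = max(1, int(len(sentences) * keep_ratio))
    ranked = sorted(range(len(sentences)),
                    key=lambda i: informativeness(sentences[i]), reverse=True)
    keep = sorted(ranked[:k])  # preserve the original sentence order
    return ". ".join(sentences[i] for i in keep) + "."

ctx = "LLMs are costly on long inputs. The sky is blue today. Compression drops filler."
print(compress_context(ctx, keep_ratio=0.67))
```
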
Discovering Biases in Information Retrieval Models Using Relevance Thesaurus as Global Explanation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kim, Youngwoo; Rahimi, Razieh; Allan, James (University of Massachusetts Amherst, United States)
Most efforts in interpreting neural relevance models have focused on local explanations, which explain the relevance of a document to a query but are not useful in predicting the model's behavior on unseen query-d...