
Refine Search Results

Document Type

  • 14,463 conference papers
  • 654 journal articles
  • 101 books
  • 40 dissertations
  • 1 technical report

Collection Scope

  • 15,258 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,944 Engineering
    • 10,283 Computer Science and Technology...
    • 5,408 Software Engineering
    • 1,463 Information and Communication Engineering
    • 954 Electrical Engineering
    • 880 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (can confer...
    • 142 Electronic Science and Technology (can...
    • 101 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (can confer Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,416 Management
    • 1,748 Library, Information and Archives Manag...
    • 757 Management Science and Engineering (can...
    • 239 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literature
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 283 Basic Medicine (can confer Medicine...
    • 111 Public Health and Preventive Medi...
  • 276 Law
    • 248 Sociology
  • 237 Education
    • 224 Education
  • 100 Agriculture
  • 96 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,535 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 740 semantics
  • 681 machine learning
  • 609 deep learning
  • 520 natural language...
  • 347 computational mo...
  • 338 training
  • 333 accuracy
  • 331 sentiment analys...
  • 329 large language m...
  • 321 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 252 transformers
  • 235 neural networks
  • 217 iterative method...
  • 212 support vector m...

Institutions

  • 85 Carnegie Mellon ...
  • 51 University of Ch...
  • 45 Tsinghua Univers...
  • 45 Carnegie Mellon ...
  • 43 Zhejiang Univers...
  • 43 National Univers...
  • 38 Nanyang Technolo...
  • 36 University of Wa...
  • 35 Univ Chinese Aca...
  • 34 University of Sc...
  • 34 Carnegie Mellon ...
  • 33 Stanford Univers...
  • 32 Gaoling School o...
  • 32 School of Artifi...
  • 32 Alibaba Grp Peop...
  • 29 Tsinghua Univ De...
  • 28 Harbin Institute...
  • 27 Language Technol...
  • 27 Peking Universit...
  • 26 Microsoft Resear...

Authors

  • 55 Zhou Guodong
  • 50 Neubig Graham
  • 46 Liu Yang
  • 39 Sun Maosong
  • 36 Zhang Min
  • 34 Liu Qun
  • 33 Smith Noah A.
  • 28 Schütze Hinrich
  • 27 Liu Zhiyuan
  • 27 Lapata Mirella
  • 26 Wen Ji-Rong
  • 24 Chang Kai-Wei
  • 23 Zhou Jie
  • 23 Yang Diyi
  • 23 Zhao Hai
  • 23 Zhao Wayne Xin
  • 21 Chua Tat-Seng
  • 20 Dredze Mark
  • 18 Biemann Chris
  • 18 Fung Pascale

Language

  • 14,662 English
  • 482 Other
  • 106 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian

Search query: "Any field = Conference on empirical methods in natural language processing"
15,259 records; showing 741-750
Large Language Models Can Self-Improve
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Huang, Jiaxin; Gu, Shixiang Shane; Hou, Le; Wu, Yuexin; Wang, Xuezhi; Yu, Hongkun; Han, Jiawei (Univ Illinois, Champaign, IL, USA; Google, Mountain View, CA 94043, USA)
Large Language Models (LLMs) have achieved excellent performance on various tasks. However, fine-tuning an LLM requires extensive supervision. Humans, on the other hand, may improve their reasoning abilities by self-t...

Lost in Translation: Chemical Language Models and the Misunderstanding of Molecule Structures
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ganeeva, Veronika; Sakhovskiy, Andrey; Khrabrov, Kuzma; Savchenko, Andrey; Kadurin, Artur; Tutubalina, Elena (AIRI; Sber AI; Skoltech, Russia; HSE University, Russia; Sber AI Lab, Russia; ISP RAS Research Center for Trusted Artificial Intelligence, Russia)
The recent integration of chemistry with natural language processing (NLP) has advanced drug discovery. Molecule representation in language models (LMs) is crucial to enhancing chemical understanding. We propose Augme...

NLEBench+NorGLM: A Comprehensive Empirical Analysis and Benchmark Dataset for Generative Language Models in Norwegian
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Liu, Peng; Zhang, Lemei; Farup, Terje; Lauvrak, Even W.; Ingvaldsen, Jon Espen; Eide, Simen; Gulla, Jon Atle; Yang, Zhirong (Department of Computer Science, Norwegian University of Science and Technology, Norway; Schibsted Media, Norway; Jinhua Institute of Zhejiang University, China)
Norwegian, spoken by only 5 million people, is under-represented in the most impressive breakthroughs in NLP tasks. To the best of our knowledge, there has not yet been a comprehensive evaluation of the exi...

Privacy Implications of Retrieval-Based Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Huang, Yangsibo; Gupta, Samyak; Zhong, Zexuan; Li, Kai; Chen, Danqi (Princeton Univ, Princeton, NJ 08544, USA)
Retrieval-based language models (LMs) have demonstrated improved interpretability, factuality, and adaptability compared to their parametric counterparts by incorporating retrieved text from external datastores. While...

STEREOMAP: Quantifying the Awareness of Human-like Stereotypes in Large Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Jeoung, Sullam; Ge, Yubin; Diesner, Jana (Univ Illinois, Urbana, IL 61801, USA)
Large Language Models (LLMs) have been observed to encode and perpetuate harmful associations present in the training data. We propose a theoretically grounded framework called STEREOMAP to gain insights into their pe...

Teaching Small Language Models Reasoning through Counterfactual Distillation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Feng, Tao; Li, Yicheng; Li, Chenglin; Chen, Hao; Yu, Fei; Zhang, Yin (Zhejiang University, Hangzhou, China; Ant Group, Hangzhou, China)
With the rise of large language models (LLMs), many studies are interested in transferring the reasoning capabilities of LLMs to small language models (SLMs). Previous distillation methods usually utilize the capabili...

Self-training Language Models for Arithmetic Reasoning
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Kadlčík, Marek; Štefánik, Michal (Faculty of Informatics, Masaryk University, Czech Republic)
Recent language models achieve impressive results in tasks involving complex multistep reasoning, but scaling these capabilities further traditionally requires expensive collection of more annotated data. In this work...

Why do LLaVA Vision-Language Models Reply to Images in English?
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Hinck, Musashi; Holtermann, Carolin; Olson, Matthew Lyle; Schneider, Florian; Yu, Sungduk; Bhiwandiwalla, Anahita; Lauscher, Anne; Tseng, Shaoyen; Lal, Vasudev (Intel Labs, United States; University of Hamburg, Germany)
We uncover a surprising multilingual bias occurring in a popular class of multimodal vision-language models (VLMs). Including an image in the query to a LLaVA-style VLM significantly increases the likelihood of the mo...

Text Rendering Strategies for Pixel Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Lotz, Jonas F.; Salesky, Elizabeth; Rust, Phillip; Elliott, Desmond (Univ Copenhagen, Dept Comp Sci, Copenhagen, Denmark; ROCKWOOL Fdn Res Unit, Copenhagen, Denmark; Johns Hopkins Univ, Baltimore, MD, USA)
Pixel-based language models process text rendered as images, which allows them to handle any script, making them a promising approach to open-vocabulary language modelling. However, recent approaches use text renderer...

Hit the Nail on the Head: Parameter-Efficient Multi-task Tuning via Human Language Intervention
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lu, Wenxuan; Jiang, Songhao; Wang, Yijing; Zang, Tianning (Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China)
Parameter-Efficient Fine-Tuning (PEFT) on small Pre-trained Language Models (PLMs) has emerged as a promising approach to enhance their multi-tasking capabilities. Existing methods simultaneously train additional modules (i.e., one task...