Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses
  • 1 technical report

Holdings

  • 15,362 electronic documents
  • 1 print holding

Date Distribution

Discipline Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may confer...
    • 144 Electronic Science and Technology (may...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may confer Science,...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Mana...
    • 760 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may confer Medical...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 natural language...
  • 1,786 natural language...
  • 953 computational li...
  • 740 semantics
  • 682 machine learning
  • 613 deep learning
  • 520 natural language...
  • 352 computational mo...
  • 343 accuracy
  • 339 training
  • 335 large language m...
  • 335 sentiment analys...
  • 325 feature extracti...
  • 312 data mining
  • 290 speech processin...
  • 260 speech recogniti...
  • 256 transformers
  • 236 neural networks
  • 218 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 52 university of ch...
  • 46 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of sc...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 carnegie mellon ...
  • 33 gaoling school o...
  • 33 stanford univers...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 26 microsoft resear...
  • 26 language technol...
  • 26 peking universit...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 26 wen ji-rong
  • 26 lapata mirella
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian

Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 961-970

Breaking Language Barriers: Cross-Lingual Continual Pre-Training at Scale
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zheng, Wenzhen; Pan, Wenbo; Xu, Xu; Qin, Libo; Yue, Li; Zhou, Ming. Chinese Academy of Sciences, China; Harbin Institute of Technology, China; Peking University, China; School of Computer Science and Engineering, Central South University, China; Langboat Inc., China
In recent years, Large Language Models (LLMs) have made significant strides towards Artificial General Intelligence. However, training these models from scratch requires substantial computational resources and vast am...

Scaling Behavior for Large Language Models regarding Numeral Systems: An Example using Pythia
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhou, Zhejian; Wang, Jiayu; Lin, Dahua; Chen, Kai. University of Southern California, United States; Shanghai AI Laboratory, China; The Chinese University of Hong Kong, Hong Kong
Though Large Language Models (LLMs) have shown remarkable abilities in mathematical reasoning, they still struggle to perform numeric operations accurately, such as addition and multiplication. Numbers can b...

Exploring Quantization for Efficient Pre-Training of Transformer Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chitsaz, Kamran; Fournier, Quentin; Mordido, Gonçalo; Chandar, Sarath. Chandar Research Lab, Mila - Quebec AI Institute, Canada; Polytechnique Montréal, Canada; Canada CIFAR AI, Canada
The increasing scale of Transformer models has led to an increase in their pre-training computational requirements. While quantization has proven to be effective after pre-training and during fine-tuning, applying qua...

LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gong, Ruihao; Yong, Yang; Gu, Shiqiao; Huang, Yushi; Lv, Chengtao; Zhang, Yunchen; Tao, Dacheng; Liu, Xianglong. Beihang University, China; SenseTime Research, China; Nanyang Technological University, Singapore
Recent advancements in large language models (LLMs) are propelling us toward artificial general intelligence with their remarkable emergent abilities and reasoning capabilities. However, the substantial computational ...

MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Sileo, Damien; Lernould, Antoine. Univ Lille, INRIA, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France; Univ Lille, CRIStAL, F-59000 Lille, France
Theory of Mind (ToM) is a critical component of intelligence, but its assessment remains the subject of heated debate. Prior research applied human ToM assessments to natural language processing models using either hu...

IPL: Leveraging Multimodal Large Language Models for Intelligent Product Listing
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chen, Kang; Zhang, Qingheng; Lian, Chengbao; Ji, Yixin; Liu, Xuwei; Han, Shuguang; Wu, Guoqiang; Huang, Fei; Chen, Jufeng. Alibaba Group, China; Fudan University, China
Unlike professional Business-to-Consumer (B2C) e-commerce platforms (e.g., Amazon), Consumer-to-Consumer (C2C) platforms (e.g., Facebook Marketplace) mainly target individual sellers, who usually lack sufficient...

MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huo, Jiahao; Yan, Yibo; Hu, Boren; Yue, Yutao; Hu, Xuming. China; The Hong Kong University of Science and Technology, Hong Kong; Tongji University, China
Projecting visual features into word embedding space has become a significant fusion strategy adopted by Multimodal Large Language Models (MLLMs). However, its internal mechanisms have yet to be explored. Inspired by ...

ModSCAN: Measuring Stereotypical Bias in Large Vision-Language Models from Vision and Language Modalities
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Jiang, Yukun; Li, Zheng; Shen, Xinyue; Liu, Yugeng; Backes, Michael; Zhang, Yang. CISPA Helmholtz Center for Information Security, Germany
Large vision-language models (LVLMs) have been rapidly developed and widely used in various fields, but the (potential) stereotypical bias in these models remains largely unexplored. In this study, we present a pioneering mea...
Make Large Language Model a Better Ranker
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Chao, Wen-Shuo; Zheng, Zhi; Zhu, Hengshu; Liu, Hao. The Hong Kong University of Science and Technology (Guangzhou), China; School of Data Science, University of Science and Technology of China, China; Computer Network Information Center, Chinese Academy of Sciences, China
Large Language Models (LLMs) demonstrate robust capabilities across various fields, leading to a paradigm shift in LLM-enhanced Recommender Systems (RS). Research to date focuses on point-wise and pair-wise recommendat...

Unnatural language processing: How do language models handle machine-generated prompts?
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Kervadec, Corentin; Franzon, Francesca; Baroni, Marco. Univ Pompeu Fabra, UPF, Barcelona, Spain; UPF, Barcelona, Spain; ICREA, Barcelona, Spain
Language model prompt optimization research has shown that semantically and grammatically well-formed manually crafted prompts are routinely outperformed by automatically generated token sequences with no apparent mea...