
Refine Search Results

Document Type

  • 14,702 conference papers
  • 666 journal articles
  • 101 books
  • 37 theses and dissertations

Collection Scope

  • 15,505 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 11,112 Engineering
    • 10,444 Computer Science and Technology...
    • 5,444 Software Engineering
    • 1,501 Information and Communication Engineering
    • 983 Electrical Engineering
    • 945 Control Science and Engineering
    • 448 Bioengineering
    • 245 Cyberspace Security
    • 223 Chemical Engineering and Technology
    • 197 Mechanical Engineering
    • 177 Biomedical Engineering (may confer...
    • 150 Electronic Science and Technology (may confer...
    • 122 Safety Science and Engineering
    • 117 Transportation Engineering
  • 2,492 Science
    • 1,158 Mathematics
    • 662 Physics
    • 518 Biology
    • 400 Statistics (may confer Science,...
    • 244 Chemistry
    • 240 Systems Science
  • 2,429 Management
    • 1,748 Library, Information and Archives Man...
    • 779 Management Science and Engineering (may...
    • 236 Business Administration
    • 119 Public Administration
  • 1,832 Literature
    • 1,776 Foreign Languages and Literatures
    • 169 Chinese Language and Literature
  • 535 Medicine
    • 310 Clinical Medicine
    • 285 Basic Medicine (may confer Medicine...
    • 125 Public Health and Preventive Med...
  • 281 Law
    • 251 Sociology
  • 242 Education
    • 229 Education
  • 100 Agriculture
  • 93 Economics
  • 10 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topic

  • 3,639 篇 natural language...
  • 1,790 篇 natural language...
  • 952 篇 computational li...
  • 754 篇 semantics
  • 700 篇 machine learning
  • 634 篇 deep learning
  • 521 篇 natural language...
  • 378 篇 accuracy
  • 375 篇 computational mo...
  • 356 篇 training
  • 354 篇 large language m...
  • 342 篇 sentiment analys...
  • 329 篇 feature extracti...
  • 311 篇 data mining
  • 290 篇 speech processin...
  • 264 篇 transformers
  • 258 篇 speech recogniti...
  • 238 篇 neural networks
  • 219 篇 support vector m...
  • 217 篇 iterative method...

Institution

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 45 篇 tsinghua univers...
  • 44 篇 carnegie mellon ...
  • 42 篇 zhejiang univers...
  • 41 篇 national univers...
  • 35 篇 univ chinese aca...
  • 35 篇 nanyang technolo...
  • 35 篇 carnegie mellon ...
  • 34 篇 university of sc...
  • 34 篇 university of wa...
  • 33 篇 alibaba grp peop...
  • 32 篇 gaoling school o...
  • 32 篇 stanford univers...
  • 30 篇 tsinghua univ de...
  • 30 篇 school of artifi...
  • 28 篇 peking universit...
  • 27 篇 harbin institute...
  • 26 篇 univ sci & techn...
  • 26 篇 microsoft resear...

Author

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 25 篇 liu zhiyuan
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 13,014 English
  • 2,377 Other
  • 131 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search criteria: "Any field = Conference on empirical methods in natural language processing"
15,506 records; showing results 1381-1390
Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding
2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2023
Authors: Zhang, Hang; Li, Xin; Bing, Lidong (DAMO Academy, Alibaba Group, China; Hupan Lab, Hangzhou 310023, China)
We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal trainin...
NEWTON: Are Large Language Models Capable of Physical Reasoning?
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Wang, Yi Ru; Du, Jiafei; Fox, Dieter; Srinivasa, Siddhartha (Univ Washington, Seattle, WA 98195, USA; NVIDIA, Seattle, WA, USA)
Large Language Models (LLMs), through their contextualized representations, have been empirically proven to encapsulate syntactic, semantic, word sense, and common-sense knowledge. However, there has been limited expl...
Efficient Continue Training of Temporal Language Model with Structural Information
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Su, Zhaochen; Li, Juntao; Zhang, Zikang; Zhou, Zihan; Zhang, Min (Soochow Univ, Inst Comp Sci & Technol, Suzhou, Peoples R China; Peking Univ, Dept Chinese Language & Literature, Beijing, Peoples R China)
Current language models are mainly trained on snapshots of data gathered at a particular time, which decreases their capability to generalize over time and model language change. To model the time variable, existing ...
Unveiling the Flaws: Exploring Imperfections in Synthetic Data and Mitigation Strategies for Large Language Models
2024 Findings of the Association for Computational Linguistics, EMNLP 2024
Authors: Chen, Jie; Zhang, Yupeng; Wang, Bingning; Zhao, Wayne Xin; Wen, Ji-Rong; Chen, Weipeng (Baichuan Inc., China; Gaoling School of Artificial Intelligence, Renmin University of China, China)
Synthetic data has been proposed as a solution to address the issue of high-quality data scarcity in the training of large language models (LLMs). Studies have shown that synthetic data can effectively improve the per...
Leveraging Large Language Models Knowledge Enhancement Dual-Stage Fine-Tuning Framework for Recommendation
13th International Conference on Natural Language Processing and Chinese Computing
Authors: Zeng, Biqing; Shi, Hao; Li, Yangyu; Li, Ruizhe; Deng, Huimin (South China Normal Univ, Sch Software, Foshan, Peoples R China; South China Normal Univ, Aberdeen Inst Data Sci & Artificial Intelligence, Foshan, Peoples R China; Univ Aberdeen, Dept Comp Sci, Aberdeen, Scotland; Guangdong AIB Polytech, Sch Comp Sci, Guangzhou, Peoples R China)
Large language models (LLMs) have exhibited notable general-purpose task-solving abilities in language understanding and generation, including processing recommendation tasks. The majority of existing research relies o...
Generative Spoken Language Model based on continuous word-sized audio tokens
2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023
Authors: Algayres, Robin; Adi, Yossi; Nguyen, Tu Anh; Copet, Jade; Synnaeve, Gabriel; Sagot, Benoit; Dupoux, Emmanuel (ENS, INRIA, INSERM, UPEC, PSL Research University, France; The Hebrew University of Jerusalem, Israel; Meta AI, United States)
In NLP, text language models based on words or subwords are known to outperform their character-based counterparts. Yet, in the speech community, the standard input of spoken LMs are 20ms or 40ms-long discrete units (...
Leveraging Grammar Induction for Language Understanding and Generation
2024 Findings of the Association for Computational Linguistics, EMNLP 2024
Authors: Kai, Jushi; Hou, Shengyuan; Huang, Yusheng; Lin, Zhouhan (Shanghai Jiao Tong University, China)
Grammar induction has made significant progress in recent years. However, it is not clear how the application of induced grammar could enhance practical performance in downstream tasks. In this work, we introduce an u...
UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ye, Jiabo; Hu, Anwen; Xu, Haiyang; Ye, Qinghao; Yan, Ming; Xu, Guohai; Li, Chenliang; Tian, Junfeng; Qian, Qi; Zhang, Ji; Jin, Qin; He, Liang; Lin, Xin; Huang, Fei (East China Normal Univ, Shanghai, Peoples R China; Alibaba Grp, DAMO Acad, Hangzhou, Peoples R China; Renmin Univ China, Beijing, Peoples R China)
Text is ubiquitous in our visual world, conveying crucial information, such as in documents, websites, and everyday photographs. In this work, we propose UReader, a first exploration of universal OCR-free visually-sit...
MalayMMLU: A Multitask Benchmark for the Low-Resource Malay Language
2024 Findings of the Association for Computational Linguistics, EMNLP 2024
Authors: Poh, Soon Chang; Yang, Sze Jue; Tan, Jeraelyn Ming Li; Chieng, Lawrence Leroy Tze Yao; Tan, Jia Xuan; Yu, Zhenyu; Foong, Chee Mun; Chan, Chee Seng (Universiti Malaya, Malaysia; YTL AI Labs, Malaysia)
Large Language Models (LLMs) and Large Vision Language Models (LVLMs) exhibit advanced proficiency in language reasoning and comprehension across a wide array of languages. While their performance is notably robust in...
An Empirical Study on Multiple Knowledge from ChatGPT for Emotion Recognition in Conversations
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Tu, Geng; Liang, Bin; Qin, Bing; Wong, Kam-Fai; Xu, Ruifeng (Harbin Inst Technol, Shenzhen, Peoples R China; Guangdong Prov Key Lab Novel Secur Intelligence T, Guangzhou, Peoples R China; Chinese Univ Hong Kong, Hong Kong, Peoples R China; Harbin Inst Technol, Harbin, Peoples R China; Peng Cheng Lab, Shenzhen, Peoples R China)
Multiple knowledge (e.g., co-reference, topics, emotional causes, etc.) has been demonstrated to be effective for emotion detection. However, exploring this knowledge in Emotion Recognition in Conversations (ERC) is currentl...