
Refine Search Results

Document Type

  • 14,558 conference papers
  • 663 journal articles
  • 101 books
  • 40 theses and dissertations
  • 1 technical report

Collection Scope

  • 15,362 electronic resources
  • 1 print holding

Date Distribution

Subject Classification

  • 11,025 Engineering
    • 10,359 Computer Science and Technology...
    • 5,436 Software Engineering
    • 1,474 Information and Communication Engineering
    • 963 Electrical Engineering
    • 925 Control Science and Engineering
    • 446 Bioengineering
    • 223 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 187 Mechanical Engineering
    • 175 Biomedical Engineering (may confer...
    • 144 Electronic Science and Technology (may...
    • 102 Instrument Science and Technology
    • 99 Safety Science and Engineering
  • 2,494 Science
    • 1,163 Mathematics
    • 655 Physics
    • 520 Biology
    • 395 Statistics (may confer...
    • 241 Systems Science
    • 235 Chemistry
  • 2,427 Management
    • 1,755 Library, Information and Archives Manag...
    • 760 Management Science and Engineering (may...
    • 241 Business Administration
    • 106 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 514 Medicine
    • 303 Clinical Medicine
    • 284 Basic Medicine (may confer...
    • 113 Public Health and Preventive Medi...
  • 278 Law
    • 249 Sociology
  • 238 Education
    • 225 Education
  • 100 Agriculture
  • 98 Economics
  • 9 Art
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,557 篇 natural language...
  • 1,786 篇 natural language...
  • 953 篇 computational li...
  • 740 篇 semantics
  • 682 篇 machine learning
  • 613 篇 deep learning
  • 520 篇 natural language...
  • 352 篇 computational mo...
  • 343 篇 accuracy
  • 339 篇 training
  • 335 篇 large language m...
  • 335 篇 sentiment analys...
  • 325 篇 feature extracti...
  • 312 篇 data mining
  • 290 篇 speech processin...
  • 260 篇 speech recogniti...
  • 256 篇 transformers
  • 236 篇 neural networks
  • 218 篇 iterative method...
  • 212 篇 support vector m...

Institutions

  • 85 篇 carnegie mellon ...
  • 52 篇 university of ch...
  • 46 篇 tsinghua univers...
  • 45 篇 carnegie mellon ...
  • 43 篇 zhejiang univers...
  • 43 篇 national univers...
  • 38 篇 nanyang technolo...
  • 36 篇 university of sc...
  • 36 篇 university of wa...
  • 35 篇 univ chinese aca...
  • 34 篇 carnegie mellon ...
  • 33 篇 gaoling school o...
  • 33 篇 stanford univers...
  • 32 篇 school of artifi...
  • 32 篇 alibaba grp peop...
  • 29 篇 tsinghua univ de...
  • 28 篇 harbin institute...
  • 26 篇 microsoft resear...
  • 26 篇 language technol...
  • 26 篇 peking universit...

Authors

  • 55 篇 zhou guodong
  • 50 篇 neubig graham
  • 46 篇 liu yang
  • 39 篇 sun maosong
  • 36 篇 zhang min
  • 34 篇 liu qun
  • 33 篇 smith noah a.
  • 28 篇 schütze hinrich
  • 27 篇 liu zhiyuan
  • 26 篇 wen ji-rong
  • 26 篇 lapata mirella
  • 24 篇 chang kai-wei
  • 23 篇 zhou jie
  • 23 篇 yang diyi
  • 23 篇 zhao hai
  • 23 篇 zhao wayne xin
  • 21 篇 chua tat-seng
  • 20 篇 dredze mark
  • 18 篇 biemann chris
  • 18 篇 fung pascale

Language

  • 14,282 English
  • 966 Other
  • 113 Chinese
  • 18 French
  • 14 Turkish
  • 2 German
  • 2 Spanish
  • 2 Russian
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 881-890
Can Language Models Laugh at YouTube Short-form Videos?

Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Ko, Dayoon; Lee, Sangho; Kim, Gunhee (Seoul Natl Univ, Seoul, South Korea; Allen Inst Artificial Intelligence, Seattle, WA, USA)
As short-form funny videos gain popularity on social networks, AI models increasingly need to understand them to communicate better with humans. Unfortunately, previous video humor datasets target spe...
Empowering Large Language Model for Continual Video Question Answering with Collaborative Prompting

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cai, Chen; Wang, Zheng; Gao, Jianjun; Liu, Wenyang; Lu, Ye; Zhang, Runzhong; Yap, Kim-Hui (Nanyang Technological University, Singapore)
In recent years, the rapid increase in online video content has underscored the limitations of static Video Question Answering (VideoQA) models trained on fixed datasets, as they struggle to adapt to new questions or ...
Pretraining Language Models Using Translationese

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Doshi, Meet; Dabre, Raj; Bhattacharyya, Pushpak (CFILT, Indian Institute of Technology Bombay, Mumbai, India; National Institute of Information and Communications Technology, Kyoto, Japan; IIT Madras, Chennai, India)
In this paper, we explore the utility of Translationese as synthetic data created using machine translation for pre-training language models (LMs) for low-resource languages (LRLs). Our simple methodology consists of ...
TroL: Traversal of Layers for Large Language and Vision Models

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Lee, Byung-Kwan; Chung, Sangyun; Kim, Chae Won; Park, Beomchan; Ro, Yong Man (KAIST, Republic of Korea)
Large language and vision models (LLVMs) have been driven by the generalization power of large language models (LLMs) and the advent of visual instruction tuning. Along with scaling them up directly, these models enab...
***: Real-Time Multilingual Sign Language Translation Application

2024 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, EMNLP 2024
Authors: Moryossef, Amit
This paper presents ***, an open-source application for real-time multilingual bidirectional translation between spoken and signed languages. Harnessing state-of-the-art open-source models, this tool aims to address t...
BASES: Large-scale Web Search User Simulation with Large Language Model based Agents

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Ren, Ruiyang; Qiu, Peng; Qu, Yingqi; Liu, Jing; Zhao, Wayne Xin; Wu, Hua; Wen, Ji-Rong; Wang, Haifeng (Gaoling School of Artificial Intelligence, Renmin University of China, China; Baidu Inc., China; Beijing Key Laboratory of Big Data Management and Analysis Methods, China)
Thanks to the strong capabilities of large language models (LLMs), it has become feasible to develop LLM-based agents for reliable user simulation. Given the scarcity and limitations (e.g., privacy issues) of real user dat...
Retrospex: Language Agent Meets Offline Reinforcement Learning Critic

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xiang, Yufei; Shen, Yiqun; Zhang, Yeqin; Nguyen, Cam-Tu (State Key Laboratory for Novel Software Technology, Nanjing University; School of Artificial Intelligence, Nanjing University, Nanjing, China)
Large Language Models (LLMs) possess extensive knowledge and commonsense reasoning capabilities, making them valuable for creating powerful agents. However, existing LLM agent frameworks have not fully utilized past e...
GeoGPT4V: Towards Geometric Multi-modal Large Language Models with Geometric Image Generation

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cai, Shihao; Bao, Keqin; Guo, Hangyu; Zhang, Jizhi; Song, Jun; Zheng, Bo (University of Science and Technology of China, China; Alibaba Group, China)
Large language models have seen widespread adoption in math problem-solving. However, in geometry problems that usually require visual aids for better understanding, even the most advanced multi-modal models currently... 
Error Analysis of Multilingual Language Models in Machine Translation: A Case Study of English-Amharic Translation

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Alemyehu, Hizkiel Mitiku; Zahera, Hamada M.; Ngomo, Axel-Cyrille Ngonga (Department of Computer Science, Paderborn University, Germany)
Multilingual large language models (mLLMs) have significantly advanced machine translation, yet challenges remain for low-resource languages like *** study evaluates the performance of state-of-the-art mLLMs, specific...
A Simple yet Effective Training-free Prompt-free Approach to Chinese Spelling Correction Based on Large Language Models

2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhou, Houquan; Li, Zhenghua; Zhang, Bo; Li, Chen; Lai, Shaopeng; Zhang, Ji; Huang, Fei; Zhang, Min (School of Computer Science and Technology, Soochow University, China; DAMO Academy, Alibaba Group, China)
This work proposes a simple, training-free, prompt-free approach that leverages large language models (LLMs) for the Chinese spelling correction (CSC) task, which differs entirely from all previous CSC approaches. The ...