
Refine Results

Document Type

  • Conference papers (14,558)
  • Journal articles (663)
  • Books (101)
  • Theses (40)
  • Technical reports (1)

Collection

  • Electronic documents (15,362)
  • Print holdings (1)

Date Distribution

Subject Classification

  • Engineering (11,025)
    • Computer Science and Technology... (10,359)
    • Software Engineering (5,436)
    • Information and Communication Engineering (1,474)
    • Electrical Engineering (963)
    • Control Science and Engineering (925)
    • Bioengineering (446)
    • Cyberspace Security (223)
    • Chemical Engineering and Technology (220)
    • Mechanical Engineering (187)
    • Biomedical Engineering (may confer... (175)
    • Electronic Science and Technology (may... (144)
    • Instrument Science and Technology (102)
    • Safety Science and Engineering (99)
  • Science (2,494)
    • Mathematics (1,163)
    • Physics (655)
    • Biology (520)
    • Statistics (may confer Science, ... (395)
    • Systems Science (241)
    • Chemistry (235)
  • Management (2,427)
    • Library, Information and Archives Management... (1,755)
    • Management Science and Engineering (may... (760)
    • Business Administration (241)
    • Public Administration (106)
  • Literature (1,761)
    • Foreign Languages and Literatures (1,709)
    • Chinese Language and Literature (184)
  • Medicine (514)
    • Clinical Medicine (303)
    • Basic Medicine (may confer Medicine... (284)
    • Public Health and Preventive Medicine... (113)
  • Law (278)
    • Sociology (249)
  • Education (238)
    • Education (225)
  • Agriculture (100)
  • Economics (98)
  • Art (9)
  • Philosophy (7)
  • Military Science (4)

Topics

  • natural language... (3,557)
  • natural language... (1,786)
  • computational li... (953)
  • semantics (740)
  • machine learning (682)
  • deep learning (613)
  • natural language... (520)
  • computational mo... (352)
  • accuracy (343)
  • training (339)
  • large language m... (335)
  • sentiment analys... (335)
  • feature extracti... (325)
  • data mining (312)
  • speech processin... (290)
  • speech recogniti... (260)
  • transformers (256)
  • neural networks (236)
  • iterative method... (218)
  • support vector m... (212)

Institutions

  • carnegie mellon ... (85)
  • university of ch... (52)
  • tsinghua univers... (46)
  • carnegie mellon ... (45)
  • zhejiang univers... (43)
  • national univers... (43)
  • nanyang technolo... (38)
  • university of sc... (36)
  • university of wa... (36)
  • univ chinese aca... (35)
  • carnegie mellon ... (34)
  • gaoling school o... (33)
  • stanford univers... (33)
  • school of artifi... (32)
  • alibaba grp peop... (32)
  • tsinghua univ de... (29)
  • harbin institute... (28)
  • microsoft resear... (26)
  • language technol... (26)
  • peking universit... (26)

Authors

  • zhou guodong (55)
  • neubig graham (50)
  • liu yang (46)
  • sun maosong (39)
  • zhang min (36)
  • liu qun (34)
  • smith noah a. (33)
  • schütze hinrich (28)
  • liu zhiyuan (27)
  • wen ji-rong (26)
  • lapata mirella (26)
  • chang kai-wei (24)
  • zhou jie (23)
  • yang diyi (23)
  • zhao hai (23)
  • zhao wayne xin (23)
  • chua tat-seng (21)
  • dredze mark (20)
  • biemann chris (18)
  • fung pascale (18)

Language

  • English (14,282)
  • Other (966)
  • Chinese (113)
  • French (18)
  • Turkish (14)
  • German (2)
  • Spanish (2)
  • Russian (2)
Search query: "Any field = Conference on empirical methods in natural language processing"
15,363 records; showing 951-960
Dual-Space Knowledge Distillation for Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Songming; Zhang, Xue; Sun, Zengkui; Chen, Yufeng; Xu, Jinan (Beijing Key Lab of Traffic Data Analysis and Mining, Beijing Jiaotong University, Beijing, China)
Knowledge distillation (KD) is known as a promising solution to compress large language models (LLMs) by transferring their knowledge to smaller models. In this process, white-box KD methods usually minimize the distance be...

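The abstract above mentions that white-box KD minimizes a distance between teacher and student distributions. A minimal sketch of one common choice, the temperature-scaled forward KL divergence over next-token distributions, is below; the function names and the temperature default are illustrative assumptions, not this paper's dual-space formulation.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def white_box_kd_loss(teacher_logits, student_logits, temperature=2.0):
    """Forward KL(teacher || student) between softened next-token
    distributions -- the kind of distance white-box KD minimizes."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * (math.log(pi) - math.log(qi)) for pi, qi in zip(p, q))
    return kl * temperature ** 2  # conventional T^2 scaling for KD gradients
```

The loss is zero when student and teacher logits match and grows as the student distribution drifts from the teacher's.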
The Mystery of the Pathological Path-star Task for Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Frydenlund, Arvid (University of Toronto; Vector Institute, Canada)
The recently introduced path-star task is a minimal task designed to exemplify limitations to the abilities of language models (Bachmann and Nagarajan, 2024). It involves a path-star graph where multiple arms radiate ...

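The path-star structure described in the abstract can be sketched as a small graph constructor: a central node with several disjoint paths (arms) radiating from it. This is an illustrative reading of the task description, not the authors' code.

```python
def path_star(n_arms, arm_len):
    """Build a path-star graph as an adjacency list: central node 0
    with n_arms disjoint directed paths of arm_len nodes each
    radiating outward from it."""
    edges = {0: []}
    node = 1
    for _ in range(n_arms):
        prev = 0
        for _ in range(arm_len):
            edges.setdefault(prev, []).append(node)
            edges[node] = []  # leaf until (possibly) extended next step
            prev = node
            node += 1
    return edges
```

For example, `path_star(3, 4)` yields 13 nodes, with the center pointing at the first node of each of the three arms.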
UniFashion: A Unified Vision-Language Model for Multimodal Fashion Retrieval and Generation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhao, Xiangyu; Zhang, Yuehan; Zhang, Wenlong; Wu, Xiao-Ming (Department of Computing, The Hong Kong Polytechnic University, Hong Kong; Wuhan University, China; Shanghai AI Laboratory, China)
The fashion domain includes a range of real-world multimodal tasks, such as multimodal retrieval and generation. Recent advancements in AI-generated content, particularly large language models for text and diffusion m...

Estimating Knowledge in Large Language Models Without Generating a Single Token
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Gottesman, Daniela; Geva, Mor (Blavatnik School of Computer Science, Tel Aviv University, Israel)
To evaluate knowledge in large language models (LLMs), current methods query the model and then evaluate its generated responses. In this work, we ask whether evaluation can be done before the model has generated any ...

FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Shum, Kashun; Xu, Minrui; Zhang, Jianshu; Chen, Zixin; Diao, Shizhe; Dong, Hanze; Zhang, Jipeng; Raza, Muhammad Omer (The Hong Kong University of Science and Technology, Hong Kong; Wuhan University, China; NVIDIA, United States; Purdue University, United States)
Large language models (LLMs) have become increasingly prevalent in our daily lives, leading to an expectation for LLMs to be trustworthy - both accurate and well-calibrated (the prediction confidence should align with...

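The calibration notion in the abstract (prediction confidence aligning with actual correctness) is commonly quantified by expected calibration error (ECE). A minimal sketch follows; the binning scheme and function name are standard conventions assumed here, not details from this paper.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Weighted average of |mean confidence - accuracy| over
    equal-width confidence bins: 0 means perfectly calibrated."""
    n = len(confidences)
    ece = 0.0
    for i in range(n_bins):
        lo, hi = i / n_bins, (i + 1) / n_bins
        idx = [j for j, c in enumerate(confidences) if lo < c <= hi]
        if idx:
            avg_conf = sum(confidences[j] for j in idx) / len(idx)
            accuracy = sum(correct[j] for j in idx) / len(idx)
            ece += (len(idx) / n) * abs(avg_conf - accuracy)
    return ece
```

A model that answers with 95% confidence but is right only half the time gets an ECE near 0.45, while matched confidence and accuracy give an ECE of 0.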
Hopping Too Late: Exploring the Limitations of Large Language Models on Multi-Hop Queries
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Biran, Eden; Gottesman, Daniela; Yang, Sohee; Geva, Mor; Globerson, Amir (Tel Aviv University, Israel; UCL, United Kingdom; Google Research, United States)
Large language models (LLMs) can solve complex multi-step problems, but little is known about how these computations are implemented internally. Motivated by this, we study how LLMs answer multi-hop queries such as ...

Learn to Refuse: Making Large Language Models More Controllable and Reliable through Knowledge Scope Limitation and Refusal Mechanism
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Cao, Lang (Department of Computer Science, University of Illinois Urbana-Champaign, United States)
Large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, enabling them to answer a wide range of questions across various domains. However, these models are not fla...

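The knowledge-scope-plus-refusal idea in the abstract can be reduced to a toy sketch: answer only when a question falls inside an explicit knowledge store, and otherwise refuse instead of guessing. The helper names and exact-match lookup below are hypothetical simplifications, not the paper's mechanism.

```python
def build_scope(facts):
    """Index (question, answer) pairs as the model's explicit
    knowledge scope."""
    return {q.strip().lower(): a for q, a in facts}

def answer_or_refuse(question, scope, refusal="I don't know."):
    """Answer only when the question is inside the knowledge scope;
    otherwise refuse rather than fabricate an answer."""
    return scope.get(question.strip().lower(), refusal)
```

In practice the lookup would be a retrieval step rather than exact string matching, but the control flow (in scope: answer; out of scope: refuse) is the point.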
VLFeedback: A Large-Scale AI Feedback Dataset for Large Vision-Language Models Alignment
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Li, Lei; Xie, Zhihui; Li, Mukai; Chen, Shunian; Wang, Peiyi; Chen, Liang; Yang, Yazheng; Wang, Benyou; Kong, Lingpeng; Liu, Qi (The University of Hong Kong, Hong Kong; Peking University, China; The Chinese University of Hong Kong, Shenzhen, China)
As large vision-language models (LVLMs) evolve rapidly, the demand for high-quality and diverse data to align these models becomes increasingly crucial. However, the creation of such data with human supervision proves...

If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Esfandiarpoor, Reza; Menghini, Cristina; Bach, Stephen H. (Department of Computer Science, Brown University, United States; Data Science Institute, Brown University, United States)
Recent works often assume that Vision-Language Model (VLM) representations are based on visual attributes like shape. However, it is unclear to what extent VLMs prioritize this information to represent concepts. We pr...

ActPlan-1K: Benchmarking the Procedural Planning Ability of Visual Language Models in Household Activities
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Su, Ying; Ling, Zhan; Shi, Haochen; Cheng, Jiayang; Yim, Yauwai; Song, Yangqiu (HKUST, Hong Kong; University of California San Diego, United States)
Large language models (LLMs) have been adopted to process textual task descriptions and accomplish procedural planning in embodied AI tasks because of their powerful reasoning ability. However, there is still a lack of s...
