
Refine Search Results

Document Type

  • 14,463 conference papers
  • 653 journal articles
  • 101 books
  • 40 dissertations
  • 1 technical report

Collection Scope

  • 15,257 electronic documents
  • 1 print holding

Date Distribution

Subject Classification

  • 10,943 Engineering
    • 10,283 Computer Science and Technology...
    • 5,409 Software Engineering
    • 1,461 Information and Communication Engineering
    • 953 Electrical Engineering
    • 879 Control Science and Engineering
    • 446 Bioengineering
    • 221 Cyberspace Security
    • 220 Chemical Engineering and Technology
    • 186 Mechanical Engineering
    • 174 Biomedical Engineering (degrees in...
    • 141 Electronic Science and Technology (degrees...
    • 100 Instrument Science and Technology
    • 100 Safety Science and Engineering
  • 2,473 Science
    • 1,150 Mathematics
    • 649 Physics
    • 518 Biology
    • 391 Statistics (degrees in Science, ...
    • 241 Systems Science
    • 232 Chemistry
  • 2,417 Management
    • 1,748 Library, Information and Archives Management
    • 758 Management Science and Engineering (degrees...
    • 240 Business Administration
    • 104 Public Administration
  • 1,761 Literature
    • 1,709 Foreign Languages and Literatures
    • 184 Chinese Language and Literature
  • 510 Medicine
    • 299 Clinical Medicine
    • 282 Basic Medicine (degrees in Medicine...
    • 112 Public Health and Preventive Medicine
  • 277 Law
    • 249 Sociology
  • 237 Education
    • 224 Education
  • 100 Agronomy
  • 97 Economics
  • 9 Art Studies
  • 7 Philosophy
  • 4 Military Science

Topics

  • 3,534 natural language...
  • 1,768 natural language...
  • 952 computational li...
  • 741 semantics
  • 680 machine learning
  • 609 deep learning
  • 520 natural language...
  • 347 computational mo...
  • 336 training
  • 333 accuracy
  • 331 sentiment analys...
  • 329 large language m...
  • 320 feature extracti...
  • 311 data mining
  • 290 speech processin...
  • 261 speech recogniti...
  • 252 transformers
  • 235 neural networks
  • 217 iterative method...
  • 212 support vector m...

Institutions

  • 85 carnegie mellon ...
  • 51 university of ch...
  • 45 tsinghua univers...
  • 45 carnegie mellon ...
  • 43 zhejiang univers...
  • 43 national univers...
  • 38 nanyang technolo...
  • 36 university of wa...
  • 35 univ chinese aca...
  • 34 university of sc...
  • 34 carnegie mellon ...
  • 33 stanford univers...
  • 32 gaoling school o...
  • 32 school of artifi...
  • 32 alibaba grp peop...
  • 29 tsinghua univ de...
  • 28 harbin institute...
  • 27 language technol...
  • 27 peking universit...
  • 26 microsoft resear...

Authors

  • 55 zhou guodong
  • 50 neubig graham
  • 46 liu yang
  • 39 sun maosong
  • 36 zhang min
  • 34 liu qun
  • 33 smith noah a.
  • 28 schütze hinrich
  • 27 liu zhiyuan
  • 27 lapata mirella
  • 26 wen ji-rong
  • 24 chang kai-wei
  • 23 zhou jie
  • 23 yang diyi
  • 23 zhao hai
  • 23 zhao wayne xin
  • 21 chua tat-seng
  • 20 dredze mark
  • 18 biemann chris
  • 18 fung pascale

Languages

  • 14,663 English
  • 481 Other
  • 105 Chinese
  • 18 French
  • 15 Turkish
  • 2 Spanish
  • 2 Russian
Search criteria: Any field = "Conference on empirical methods in natural language processing"
15,258 records in total; items 521-530 shown below
Getting More from Less: Large Language Models are Good Spontaneous Multilingual Learners
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Shimao; Gao, Changjiang; Zhu, Wenhao; Chen, Jiajun; Huang, Xin; Han, Xue; Feng, Junlan; Deng, Chao; Huang, Shujian. National Key Laboratory for Novel Software Technology, Nanjing University, China; China Mobile Research, Beijing, China
Recently, Large Language Models (LLMs) have shown impressive language capabilities, yet most of them show very unbalanced performance across different languages. Multilingual alignment based on the translation paral...
When are Lemons Purple? The Concept Association Bias of Vision-Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Tang, Yingtian; Yamada, Yutaro; Zhang, Yoyo; Yildirim, Ilker. Ecole Polytech Fed Lausanne, Lausanne, Switzerland; Yale Univ, New Haven, CT, USA
Large-scale vision-language models such as CLIP have shown impressive performance on zero-shot image classification and image-to-text retrieval. However, such performance does not carry over to tasks that require a finer...
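The zero-shot classification setup this abstract refers to can be made concrete with a short sketch. Below is a minimal illustration using the Hugging Face CLIP API; the checkpoint name, image file, and label set are illustrative assumptions, not details from the paper.

    # Minimal zero-shot classification sketch with CLIP (assumed checkpoint).
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("lemon.jpg")  # hypothetical input image
    labels = ["a photo of a yellow lemon", "a photo of a purple lemon"]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    probs = model(**inputs).logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))  # label scores for the image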
Evaluating Object Hallucination in Large Vision-Language Models
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Li, Yifan; Du, Yifan; Zhou, Kun; Wang, Jinpeng; Zhao, Wayne Xin; Wen, Ji-Rong. Renmin Univ China, Gaoling Sch Artificial Intelligence, Beijing, Peoples R China; Renmin Univ China, Sch Informat, Beijing, Peoples R China; Beijing Key Lab Big Data Management & Anal Method, Beijing, Peoples R China; Meituan Grp, Beijing, Peoples R China
Inspired by the superior language abilities of large language models (LLMs), large vision-language models (LVLMs) have recently been proposed, integrating powerful LLMs to improve performance on complex multimo...
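As a rough illustration of what "object hallucination" means operationally, the sketch below computes a CHAIR-style ratio: objects mentioned in a generated caption but absent from the image's annotations count as hallucinated. The string-matching extraction and the object vocabulary are simplifying assumptions, not the paper's exact protocol.

    # Fraction of mentioned objects that do not appear in the ground truth.
    def hallucination_rate(caption: str, true_objects: set, vocabulary: set) -> float:
        mentioned = {obj for obj in vocabulary if obj in caption.lower()}
        if not mentioned:
            return 0.0
        return len(mentioned - true_objects) / len(mentioned)

    rate = hallucination_rate(
        "A dog and a frisbee on the grass next to a bench.",
        true_objects={"dog", "grass"},
        vocabulary={"dog", "frisbee", "grass", "bench", "cat"},
    )
    print(f"hallucinated object ratio: {rate:.2f}")  # 2 of 4 mentions -> 0.50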
Defining Knowledge: Bridging Epistemology and Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Fierro, Constanza; Dhar, Ruchira; Stamatiou, Filippos; Garneau, Nicolas; Søgaard, Anders. Department of Computer Science, University of Copenhagen, Denmark; Center for Philosophy in Artificial Intelligence, University of Copenhagen, Denmark
Knowledge claims are abundant in the literature on large language models (LLMs); but can we say that GPT-4 truly "knows" the Earth is round? To address this question, we review standard definitions of knowled...
V-DPO: Mitigating Hallucination in Large Vision Language Models via Vision-Guided Direct Preference Optimization
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Xie, Yuxi; Li, Guanzhen; Xu, Xiao; Kan, Min-Yen. National University of Singapore, Singapore
Large vision-language models (LVLMs) suffer from hallucination, resulting in misalignment between the output textual response and the input visual content. Recent research indicates that over-reliance on the Large...
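For context, the standard DPO objective that V-DPO builds on looks roughly like the PyTorch sketch below. The vision-guided conditioning described in the abstract is not reproduced here; the inputs are assumed to be sequence-level log-probabilities under the policy and a frozen reference model.

    import torch
    import torch.nn.functional as F

    def dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected, beta=0.1):
        # Implicit reward = beta * log(pi_theta / pi_ref); maximize the margin
        # between the preferred and dispreferred responses.
        margin = (policy_chosen - ref_chosen) - (policy_rejected - ref_rejected)
        return -F.logsigmoid(beta * margin).mean()

    loss = dpo_loss(torch.tensor([-12.3]), torch.tensor([-15.1]),
                    torch.tensor([-13.0]), torch.tensor([-14.2]))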
Self-Evaluation of Large Language Model based on Glass-box Features
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Huang, Hui; Qu, Yingqi; Liu, Jing; Yang, Muyun; Xu, Bing; Zhao, Tiejun; Lu, Wenpeng. Faculty of Computing, Harbin Institute of Technology, Harbin, China; Baidu Inc., Beijing, China; Key Laboratory of Computing Power Network and Information Security, Ministry of Education, Shandong Computer Science Center, Qilu University of Technology, Jinan, China
The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods. Existing works primarily rely on external evaluators, focusing on training and prompting strategies. ...
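Two glass-box features of the kind this abstract alludes to, token-level entropy and perplexity, can be read directly off a model's own logits, as in the sketch below. The gpt2 checkpoint and the probe sentence are placeholders; the paper's exact feature set may differ.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The capital of France is Paris.", return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[:, :-1]          # position t predicts token t+1
    log_probs = logits.log_softmax(-1)
    token_logps = log_probs.gather(-1, ids[:, 1:, None]).squeeze(-1)
    perplexity = torch.exp(-token_logps.mean())     # confidence in its own output
    entropy = -(log_probs.exp() * log_probs).sum(-1).mean()
    print(f"perplexity={perplexity:.2f}, mean entropy={entropy:.2f}")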
Fishing for Magikarp: Automatically Detecting Under-trained Tokens in Large Language Models
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Land, Sander; Bartolo, Max. Cohere
The disconnect between tokenizer creation and model training in language models allows for specific inputs, such as the infamous _SolidGoldMagikarp token, to induce unwanted model behaviour. Although such 'glitch t...
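One simple indicator for under-trained tokens, in the spirit of this abstract, is an unusually small input-embedding norm: tokens that rarely or never occurred in training keep near-initialization embeddings. The sketch below flags such outliers; gpt2 and the two-sigma cutoff are assumptions, and the paper's detectors may combine further signals.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    emb = model.get_input_embeddings().weight       # [vocab_size, hidden_dim]
    norms = emb.norm(dim=-1)
    cutoff = norms.mean() - 2 * norms.std()         # assumed outlier threshold
    suspects = (norms < cutoff).nonzero().flatten()
    for token_id in suspects[:10].tolist():         # inspect the flagged tokens
        print(token_id, repr(tok.decode([token_id])))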
Introducing Compiler Semantics into Large Language Models as Programming Language Translators: A Case Study of C to x86 Assembly
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhang, Shuoming; Zhao, Jiacheng; Xia, Chunwei; Wang, Zheng; Chen, Yunji; Cui, Huimin. SKLP, Institute of Computing Technology, CAS, China; University of Leeds, United Kingdom; University of Chinese Academy of Sciences, Beijing, China
Compilers are complex software, containing millions of lines of code and taking years to develop. This paper investigates to what extent Large Language Models (LLMs) can replace hand-crafted compilers in translating high-...
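One plausible way to score such a translator, sketched below under stated assumptions, is differential testing: build the reference binary from the C source with gcc, assemble the model's x86 output, run both, and compare. The file names are placeholders, gcc is assumed to be available, and this checks equivalence only on the inputs actually exercised.

    import subprocess

    def run(binary):
        # Run a binary and capture its stdout for comparison.
        return subprocess.run([binary], capture_output=True, text=True).stdout

    subprocess.run(["gcc", "program.c", "-o", "ref"], check=True)  # reference build
    subprocess.run(["gcc", "program.s", "-o", "llm"], check=True)  # assemble LLM output
    print("outputs match:", run("./ref") == run("./llm"))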
Active Retrieval Augmented Generation
Conference on Empirical Methods in Natural Language Processing (EMNLP)
Authors: Jiang, Zhengbao; Xu, Frank F.; Gao, Luyu; Sun, Zhiqing; Liu, Qian; Dwivedi-Yu, Jane; Yang, Yiming; Callan, Jamie; Neubig, Graham. Carnegie Mellon Univ, Language Technol Inst, Pittsburgh, PA 15213, USA; Sea AI Lab, Singapore, Singapore; Meta FAIR, Menlo Pk, CA, USA
Despite the remarkable ability of large language models (LMs) to comprehend and generate language, they have a tendency to hallucinate and produce factually inaccurate output. Augmenting LMs by retrieving information f...
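The active retrieval loop this abstract describes can be summarized as: draft the next sentence, and if the model's token confidence dips below a threshold, retrieve with the draft as the query and regenerate it grounded in the results. In the sketch below, generate_with_probs and retrieve are hypothetical stand-ins for an LM API and a search index, not names from the paper.

    def active_rag(question, max_sentences=8, threshold=0.4):
        # generate_with_probs / retrieve: hypothetical helpers (LM API, search index).
        answer = ""
        for _ in range(max_sentences):
            draft, token_probs = generate_with_probs(question, answer)
            if draft and min(token_probs) < threshold:
                # Low confidence: ground the regeneration in retrieved passages.
                docs = retrieve(draft)
                draft, token_probs = generate_with_probs(question, answer, context=docs)
            if not draft:
                break
            answer += draft
        return answer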
Inference-Time Decontamination: Reusing Leaked Benchmarks for Large Language Model Evaluation
2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024
Authors: Zhu, Qin; Cheng, Qinyuan; Peng, Runyu; Li, Xiaonan; Liu, Tengxiao; Peng, Ru; Qiu, Xipeng; Huang, Xuanjing. School of Computer Science, Fudan University, China; Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, China; College of Computer Science and Technology, Zhejiang University, China
The training process of large language models (LLMs) often involves varying degrees of test data contamination (Yang et al., 2023b). Although current LLMs are achieving increasingly better performance on various benchm...
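A minimal version of the check-then-rewrite idea in this abstract is sketched below: flag benchmark items whose n-grams overlap heavily with training documents, then rewrite only the flagged items before evaluation. The 8-gram size, the Jaccard cutoff, and rewrite_with_llm are all assumptions for illustration.

    def ngrams(text, n=8):
        words = text.split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def is_contaminated(item, training_docs, cutoff=0.3):
        # Flag an item if its n-gram Jaccard overlap with any doc exceeds cutoff.
        g = ngrams(item)
        return any(
            g and len(g & ngrams(doc)) / len(g | ngrams(doc)) > cutoff
            for doc in training_docs
        )

    def decontaminate(benchmark, training_docs):
        # rewrite_with_llm is a hypothetical helper that paraphrases an item
        # while preserving its difficulty and answer.
        return [rewrite_with_llm(q) if is_contaminated(q, training_docs) else q
                for q in benchmark]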