ScratchEval: Are GPT-4o Smarter than My Child? Evaluating Large Multimodal Models with Visual Programming Challenges

Authors: Fu, Rao; Luo, Ziyang; Lin, Hongzhan; Ye, Zhen; Ma, Jing

Affiliations: Hong Kong Baptist University, Hong Kong; Hong Kong University of Science and Technology, Hong Kong

Publication: arXiv

Year: 2024

Subject: Logic programming

Abstract: Recent advancements in large multimodal models (LMMs) have showcased impressive code generation capabilities, primarily evaluated through image-to-code benchmarks. However, these benchmarks are limited to specific visual programming scenarios in which logical reasoning and multimodal understanding are assessed separately. To fill this gap, we propose ScratchEval, a novel benchmark designed to evaluate the visual programming reasoning ability of LMMs. ScratchEval is based on Scratch, a block-based visual programming language widely used in children's programming education. By integrating visual elements and embedded programming logic, ScratchEval requires the model to process both visual information and code structure, thereby comprehensively evaluating its ability to understand programming intent. Our evaluation approach goes beyond traditional image-to-code mapping and focuses on unified logical thinking and problem-solving abilities, providing a more comprehensive and challenging framework for evaluating the visual programming ability of LMMs. ScratchEval not only fills the gap in existing evaluation methods, but also provides new insights for the future development of LMMs in the field of visual programming. Our benchmark can be accessed at https://***/HKBUNLP/ScratchEval. Copyright © 2024, The Authors. All rights reserved.
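
The following is a minimal, hypothetical sketch of how a ScratchEval-style item (a Scratch screenshot plus a multiple-choice question about the program's behavior) could be scored against a multimodal model. The item fields (image_path, question, options, answer), the data file name scratcheval_items.json, and the query_lmm callable are illustrative assumptions, not the authors' released evaluation harness.

```python
# Hypothetical sketch of a ScratchEval-style evaluation loop (not the authors' code).
# Each item is assumed to bundle a Scratch screenshot, a question, lettered options,
# and a gold answer; `query_lmm(image_b64, prompt) -> str` stands in for any
# multimodal model API.
import base64
import json


def encode_image(path: str) -> str:
    """Read a Scratch screenshot and base64-encode it for a multimodal prompt."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("utf-8")


def build_prompt(question: str, options: dict) -> str:
    """Combine the question and lettered options into a single text prompt."""
    listed = "\n".join(f"{k}. {v}" for k, v in options.items())
    return (
        "The image shows a Scratch program.\n"
        f"{question}\n{listed}\n"
        "Answer with the letter of the correct option only."
    )


def evaluate(items: list, query_lmm) -> float:
    """Return accuracy of the model over the benchmark items."""
    correct = 0
    for item in items:
        prompt = build_prompt(item["question"], item["options"])
        reply = query_lmm(encode_image(item["image_path"]), prompt)
        # Take the first character of the reply as the predicted option letter.
        predicted = reply.strip()[:1].upper()
        correct += predicted == item["answer"]
    return correct / len(items)


if __name__ == "__main__":
    # Assumed JSON file with a list of benchmark items.
    with open("scratcheval_items.json", "r", encoding="utf-8") as f:
        items = json.load(f)
    # `my_model` would wrap a real LMM call; omitted here.
    # print(evaluate(items, my_model))
```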
