
TableGPT: a novel table understanding method based on table recognition and large language model collaborative enhancement

Authors: Ren, Yi; Yu, Chenglong; Li, Weibin; Li, Wei; Zhu, Zixuan; Zhang, Tianyi; Qin, Chenhao; Ji, Wenbo; Zhang, Jianjun

Affiliations: Xidian Univ, Lab AI, Hangzhou Inst Technol, Hangzhou 311231, Peoples R China; Xidian Univ, Sch Artif Intelligence, Xian 710071, Peoples R China; Xi Xian Rail Transit Investment & Construct Co Ltd, 2 Taibai South Rd, Xian 710086, Shaanxi, Peoples R China

Published in: APPLIED INTELLIGENCE (Appl Intell)

Year/Volume/Issue: 2025, Vol. 55, No. 4

Pages: 1-25


Subject classification: 08 [Engineering]; 0812 [Engineering - Computer Science and Technology (degrees awardable in Engineering or Science)]

Funding: Research Project of Shaanxi Coal Geology Group Co., Ltd.; National Natural Science Foundation of China; Shaanxi Provincial Water Conservancy Development Fund Project (SMDZ-2023CX-14)

Keywords: Table recognition; Large language model; Image super-resolution; Agent; Prompt words

Abstract: In today's information age, table images play a crucial role in storing structured information, making table image recognition an essential technology in many fields. However, accurately recognizing the structure and text content of complex table images remains a challenge. Recently, large language models (LLMs) have demonstrated exceptional capabilities across natural language processing tasks, so applying LLMs to correct the structure and text content produced by table image recognition offers a novel solution. This paper introduces TableGPT, a method that combines table recognition with LLMs and develops a specialized multimodal agent to enhance the effectiveness of table image recognition. The approach has four stages. In the first stage, TableGPT_agent evaluates whether the input is a table image and, upon confirmation, performs preliminary recognition with transformer-based algorithms. In the second stage, the agent converts the recognition result into HTML format and autonomously assesses whether corrections are needed; if so, the data are passed to a trained LLM to achieve more accurate table recognition and optimization. In the third stage, the agent evaluates user satisfaction through feedback and applies super-resolution algorithms to low-quality images, as low image quality is often the main cause of dissatisfaction. Finally, the agent inputs both the enhanced and the original image into the trained model, integrating the information to obtain the optimal table text representation. Our research shows that trained LLMs can effectively interpret table images, improving the Tree Edit Distance Similarity (TEDS) score by an average of 4% even when based on the best current table recognition methods, across both public and private datasets. They also demonstrate better performance in correcting structural and textual errors. We also explore the impact of image superresolution t
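The four-stage workflow described in the abstract can be sketched as a simple control loop. This is an illustrative mock only: every name (`tablegpt_agent`, `recognize_table`, `llm_correct`, and so on) is a hypothetical stand-in, not the authors' actual API, and the recognizer, LLM, and super-resolution steps are replaced with trivial transforms so the flow is runnable.

```python
def is_table_image(image):
    # Stage 1 gate: the agent first checks that the input really is a table image.
    return image.get("kind") == "table"

def recognize_table(image):
    # Stage 1: preliminary transformer-based recognition, mocked here
    # as dumping the cells into one-column HTML rows.
    rows = "".join(f"<tr><td>{c}</td></tr>" for c in image["cells"])
    return f"<table>{rows}</table>"

def needs_correction(html):
    # Stage 2: the agent autonomously judges whether the HTML needs fixing;
    # here we simply flag a placeholder left by a misread cell.
    return "???" in html

def llm_correct(html):
    # Stage 2: a trained LLM repairs structural/textual errors (mocked).
    return html.replace("???", "corrected")

def super_resolve(image):
    # Stage 3: super-resolution applied to a low-quality image (mocked).
    return {**image, "quality": "high"}

def fuse(original_html, enhanced_html):
    # Stage 4: integrate recognitions of the original and enhanced images;
    # mocked as preferring the enhanced result.
    return enhanced_html

def tablegpt_agent(image, user_satisfied=True):
    if not is_table_image(image):
        return None
    html = recognize_table(image)                      # stage 1
    if needs_correction(html):
        html = llm_correct(html)                       # stage 2
    if not user_satisfied and image["quality"] == "low":
        enhanced = super_resolve(image)                # stage 3
        html = fuse(html, llm_correct(recognize_table(enhanced)))  # stage 4
    return html
```

In the paper the correction and fusion steps are performed by a trained LLM and the output quality is scored with TEDS; the mock above only preserves the agent's decision structure.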
