Author affiliations: Department of Information Technology, Pranveer Singh Institute of Technology, Kanpur, Uttar Pradesh, India; Department of Computer Science and Engineering, Amity School of Engineering and Technology, Amity University, Lucknow, Uttar Pradesh, India; Department of Computer Science and Engineering, Babu Banarasi Das University, Lucknow, Uttar Pradesh, India; Department of Computer Science and Engineering, Pranveer Singh Institute of Technology, Kanpur, Uttar Pradesh, India
Publication: Multimedia Tools and Applications (Multimedia Tools Appl)
Year/Volume/Issue: 2025
Pages: 1-24
Core indexing:
Subject classification: 0831 [Engineering - Biomedical Engineering (engineering, science, or medicine degrees)]; 0711 [Science - Systems Science]; 070207 [Science - Optics]; 08 [Engineering]; 0835 [Engineering - Software Engineering]; 0803 [Engineering - Optical Engineering]; 0702 [Science - Physics]; 0812 [Engineering - Computer Science and Technology (engineering or science degrees)]
Abstract: Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the balance it strikes between portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution, however, is a local operation that attends only to a small neighborhood within an image. Vision Transformers (ViT) use self-attention, which is a global operation that gathers information from the entire image; as a result, a ViT can capture long-range semantic relationships in an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. We trained and tested the ViT model on two datasets: the IEEE fundus image dataset, with 1750 healthy and glaucoma images, and the LAG fundus image dataset, with 4800 healthy and glaucoma images. During preprocessing, the datasets underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization. The results demonstrated that preprocessing the datasets and varying the optimizer improved accuracy and other performance metrics. In particular, the Nadam optimizer raised accuracy to 97.8% on the adaptively equalized IEEE dataset and to 92% on the adaptively equalized LAG dataset, in both cases following the auto-rotation and image-resizing steps. In addition to integrating our Vision Transformer model with the shift tokenization model, we also combined the ViT with a hybrid model consisting of six classifiers, namely SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest, selected according to which optimizer was most successful for each dataset. Empirical results show that the SVM model performed well, improving accuracy by up to 93% with precision of up to 94% under the adaptive equalization preprocessing.
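The abstract describes a three-step preprocessing pipeline (adaptive-equalization contrast adjustment, auto-rotation, and image scaling). The minimal numpy-only sketch below illustrates that pipeline under stated assumptions: it substitutes global histogram equalization for the paper's adaptive equalization, restricts rotation to 90-degree steps, uses nearest-neighbour resizing, and assumes a 224x224 target size; a synthetic array stands in for a fundus photograph. None of these specifics come from the paper itself.

```python
import numpy as np

def equalize(img):
    # Global histogram equalization, a simplified stand-in for the
    # adaptive equalization the paper uses; assumes values in [0, 255].
    hist, _ = np.histogram(img.flatten(), bins=256, range=(0, 255))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())  # normalize to [0, 1]
    return cdf[img.astype(np.uint8)]                   # remap each pixel

def resize_nn(img, size):
    # Nearest-neighbour resize via integer index mapping.
    h, w = img.shape
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    return img[np.ix_(rows, cols)]

def preprocess(img, k=0, size=(224, 224)):
    # Contrast adjustment -> rotation -> scaling, mirroring the order
    # described in the abstract (k counts 90-degree rotations).
    img = equalize(img)
    img = np.rot90(img, k)
    return resize_nn(img, size)

# Synthetic grayscale "fundus" image as a placeholder input.
rng = np.random.default_rng(0)
fundus = rng.integers(0, 256, (512, 512))
out = preprocess(fundus, k=1)
print(out.shape)  # (224, 224)
```

In the paper's actual pipeline the equalization is adaptive (locally windowed), which preserves detail in dark retinal regions better than the global remapping shown here.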