Glaucoma is currently one of the most significant causes of permanent blindness. Fundus imaging is the most popular glaucoma screening method because of the favorable trade-offs it offers in portability, size, and cost. In recent years, convolutional neural networks (CNNs) have revolutionized computer vision. Convolution is a "local" operation that attends only to a small region of an image, whereas Vision Transformers (ViT) use self-attention, a "global" operation that collects information from the entire image; as a result, a ViT can capture long-range semantic relationships within an image. This study examined several optimizers, including Adamax, SGD, RMSprop, Adadelta, Adafactor, Nadam, and Adagrad. We trained and tested the ViT model on the IEEE fundus image dataset (1750 healthy and glaucoma images) and the LAG fundus image dataset (4800 healthy and glaucoma images). During preprocessing, the datasets underwent image scaling, auto-rotation, and auto-contrast adjustment via adaptive equalization. The results demonstrated that preprocessing the datasets and varying the optimizer improved accuracy and other performance metrics. In particular, the Nadam optimizer raised accuracy to 97.8% on the adaptive-equalized IEEE dataset and to 92% on the adaptive-equalized LAG dataset, in both cases following auto-rotation and image resizing. In addition to integrating our Vision Transformer with the shift tokenization model, we also combined ViT with a hybrid model consisting of six classifiers (SVM, Gaussian NB, Bernoulli NB, Decision Tree, KNN, and Random Forest), selected according to which optimizer was most successful for each dataset. Empirical results show that the SVM model worked well, improving accuracy up to 93% with precision up to 94% under adaptive-equalization preprocessing.
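As a rough illustration of the optimizer-comparison setup described above, the sketch below fine-tunes a torchvision ViT for two-class (healthy vs. glaucoma) fundus classification and selects the optimizer by name. It is a minimal sketch, not the authors' code: the data directory, hyperparameters, and the use of RandomRotation as a stand-in for auto-rotation are assumptions, and the adaptive-equalization step (e.g., skimage.exposure.equalize_adapthist) is omitted.

```python
# Illustrative sketch only: ViT fine-tuning with a swappable optimizer.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

def build_optimizer(name, params, lr=1e-4):
    # A few of the optimizers compared in the abstract (all available in torch.optim).
    table = {
        "nadam": torch.optim.NAdam,
        "adamax": torch.optim.Adamax,
        "sgd": torch.optim.SGD,
        "rmsprop": torch.optim.RMSprop,
        "adagrad": torch.optim.Adagrad,
        "adadelta": torch.optim.Adadelta,
    }
    return table[name](params, lr=lr)

def train(data_dir, optimizer_name="nadam", epochs=5, device="cpu"):
    # data_dir is a hypothetical folder with one subfolder per class (healthy/glaucoma).
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),      # image scaling to the ViT input size
        transforms.RandomRotation(15),      # rough stand-in for the auto-rotation step
        transforms.ToTensor(),
    ])
    loader = DataLoader(datasets.ImageFolder(data_dir, transform=tfm),
                        batch_size=16, shuffle=True)

    model = models.vit_b_16(weights="IMAGENET1K_V1")
    model.heads.head = nn.Linear(model.heads.head.in_features, 2)  # 2-class head
    model = model.to(device)

    opt = build_optimizer(optimizer_name, model.parameters())
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```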
ISBN (Print): 9798350318609
Automatic timetable generation is a complex optimization problem with practical applications in various domains such as education, healthcare, and event management. The challenge lies in efficiently scheduling activities while satisfying numerous constraints and objectives. In this study, we propose an OptiSchedule algorithm for automatic timetable generation. The algorithm employs a combination of heuristic search techniques and metaheuristic optimization methods to iteratively improve timetable solutions. It starts with initializing a timetable grid and iteratively refines the solution by generating neighbouring solutions and selecting the most promising ones based on an evaluation function. Through extensive testing and validation, our OptiSchedule algorithm demonstrates significant improvements in timetable quality and efficiency compared to existing approaches. The algorithm effectively minimizes conflicts, optimizes resource utilization, and balances workload distribution. Furthermore, it provides flexibility for users to input constraints and preferences, allowing customization to specific scheduling requirements. The OptiSchedule algorithm represents a significant advancement in the field of automatic timetable generation. Its ability to produce high-quality schedules while considering complex constraints makes it a valuable tool for educational institutions, healthcare facilities, and businesses alike. By streamlining scheduling processes and optimizing resource allocation, OptiSchedule contributes to improved operational efficiency and overall organizational performance. Through rigorous experimentation and evaluation, our study demonstrates the effectiveness of the OptiSchedule algorithm in improving timetable quality and reducing scheduling overhead. Compared to traditional methods, OptiSchedule generates timetables with fewer conflicts and better resource utilization, leading to enhanced productivity and satisfaction among stakeholders. Moreover, its fl
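Since the abstract outlines the search loop only at a high level, here is a minimal local-search sketch of the initialize-then-refine idea (generate neighbouring solutions, keep the most promising one). The event/slot representation and the conflict-counting evaluation function are assumptions for illustration, not the published OptiSchedule implementation.

```python
# Illustrative local-search sketch of an iterate-and-refine timetable generator.
import random

def evaluate(timetable, conflicts_fn):
    # Lower is better: count hard-constraint violations (e.g., room or staff clashes).
    return conflicts_fn(timetable)

def neighbours(timetable, slots):
    # Generate neighbouring solutions by moving one event to a different slot.
    for event, slot in timetable.items():
        for new_slot in slots:
            if new_slot != slot:
                candidate = dict(timetable)
                candidate[event] = new_slot
                yield candidate

def optischedule_like(events, slots, conflicts_fn, iterations=1000):
    # Start from a random assignment ("initializing a timetable grid").
    current = {e: random.choice(slots) for e in events}
    best_score = evaluate(current, conflicts_fn)
    for _ in range(iterations):
        improved = False
        for cand in neighbours(current, slots):
            score = evaluate(cand, conflicts_fn)
            if score < best_score:          # keep the most promising neighbour
                current, best_score = cand, score
                improved = True
                break
        if not improved or best_score == 0:
            break
    return current, best_score
```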
This study aims to evaluate how SiO2 fiber additions affect the mechanical properties of hybrid nanocomposites composed of polypropylene (PP), sisal, and Kevlar fibers. Different concentrations of SiO2 are integrated ...
Chatbots use artificial intelligence (AI) and natural language processing (NLP) algorithms to construct a clever system. By copying human connections in the most helpful way possible, chatbots emulate individuals and...
This paper investigates the performance of a solar PV system by implementing an MPPT algorithm. The I-V and P-V characteristics of the solar photovoltaic system have been analyzed for various parameters of temperature and irradi...
Capturing a distributed platform of remotely controlled, compromised machines through a botnet has been extensively analyzed by various researchers; however, certain limitations still need to be addressed. Provisioning a detection mechanism with learning approaches provides a broader solution by handling multiple objectives. The bots' patterns or features over the network have to be analyzed in both linear and non-linear ways, where the linear and non-linear features are composed of high-level and low-level features. The collected features are maintained in a Bag of Features (BoF), where the most influential features are gathered and provided to the classifier. Here, the linearity and non-linearity of the threat are evaluated with a Support Vector Machine (SVM). Next, with the collected BoF, redundant features are eliminated, as they introduce overhead for the predictor. Hence, a novel Incoming data Redundancy Elimination-based learning model (RedE-L) is built to classify the network features and provide robustness against botnets. The simulation is carried out in the MATLAB environment, and the proposed RedE-L model is evaluated on various openly accessible network traffic datasets (benchmark datasets). The proposed model shows a better tradeoff compared to existing approaches such as conventional SVM, C4.5, RepTree, and so on. Finally, various metrics such as accuracy, detection rate, Matthews Correlation Coefficient (MCC), and other statistical analyses are reported to demonstrate the proposed RedE-L model's performance. The F1-measure is 99.98%, precision is 99.93%, accuracy is 99.84%, TPR is 99.92%, TNR is 99.94%, FNR is 0.06, and FPR is 0.06.
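The pipeline sketched below mirrors the general idea in the abstract: prune redundant features from the collected feature bag, then classify traffic with an SVM. The correlation-based redundancy criterion and the scikit-learn stack are assumptions for illustration; this is not the RedE-L model itself.

```python
# Illustrative sketch: redundancy elimination followed by SVM classification.
import numpy as np
import pandas as pd
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import accuracy_score, matthews_corrcoef

def drop_redundant(X: pd.DataFrame, threshold=0.95):
    # Drop one column of every feature pair whose absolute correlation exceeds
    # the threshold (a simple stand-in for redundancy elimination).
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    redundant = [c for c in upper.columns if (upper[c] > threshold).any()]
    return X.drop(columns=redundant)

def train_botnet_classifier(features: pd.DataFrame, labels: pd.Series):
    X = drop_redundant(features)
    X_train, X_test, y_train, y_test = train_test_split(
        X, labels, test_size=0.3, stratify=labels, random_state=0)
    scaler = StandardScaler().fit(X_train)
    clf = SVC(kernel="rbf")                 # handles non-linear feature patterns
    clf.fit(scaler.transform(X_train), y_train)
    pred = clf.predict(scaler.transform(X_test))
    return clf, accuracy_score(y_test, pred), matthews_corrcoef(y_test, pred)
```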
This study delves into the dynamics of the millets industry, with a particular focus on sales projection and customer segmentation as strategic levers for growth. The research commences with an in-depth analysis of the millets market, encompassing production patterns, consumption trends, and emerging market opportunities. It explores the diverse range of millet varieties, their nutritional profiles, and the factors driving consumer preference. By understanding the market landscape, the study identifies key trends and challenges shaping the industry. A core component of this research is the development of a robust sales projection model. Employing advanced statistical and data-driven techniques, the model forecasts future sales based on historical data, market trends, and relevant economic indicators. The model incorporates factors such as consumer demographics, purchasing behavior, and the competitive landscape to provide accurate and actionable insights. Customer segmentation is another critical aspect of the study. By applying clustering and profiling methodologies, the research identifies distinct customer segments based on factors such as age, income, dietary preferences, and purchasing habits. This segmentation enables a deeper understanding of customer needs and preferences, facilitating targeted marketing strategies and product development. The integration of sales projection and customer segmentation empowers businesses to make informed decisions, optimize resource allocation, and enhance overall market performance. By aligning product offerings and marketing efforts with customer segments, companies can achieve higher customer satisfaction, increased market share, and improved profitability. This research contributes to the growing body of knowledge on the millets industry by providing valuable insights into market dynamics, sales forecasting, and customer segmentation. The findings offer practical guidance for industry stakeholders, including farmers, processors...
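A minimal sketch of the two analytical components described above, customer segmentation via clustering and a simple regression-based sales projection, is given below. The column names, the number of segments, and the linear trend model are assumptions for illustration only, not the study's actual model.

```python
# Illustrative sketch: K-Means customer segmentation and a linear sales projection.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

def segment_customers(df: pd.DataFrame, n_segments=4):
    cols = ["age", "income", "monthly_millet_spend"]   # hypothetical feature columns
    X = StandardScaler().fit_transform(df[cols])
    out = df.copy()
    out["segment"] = KMeans(n_clusters=n_segments, n_init=10,
                            random_state=0).fit_predict(X)
    return out

def project_sales(history: pd.DataFrame, horizon=6):
    # history: one row per month with columns "month_index" and "sales".
    model = LinearRegression().fit(history[["month_index"]], history["sales"])
    start = int(history["month_index"].max()) + 1
    future = pd.DataFrame({"month_index": range(start, start + horizon)})
    future["projected_sales"] = model.predict(future[["month_index"]])
    return future
```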
Swarm robotics describes the coordination among multiple robots assigned to perform a single task collectively and work as a system. The system is usually used in search-and-rescue missions in adverse natural environm...
Large models have recently played a dominant role in natural language processing and multimodal vision-language learning. However, their effectiveness in text-related visual tasks remains relatively unexplored. In this paper, we conducted a comprehensive evaluation of large multimodal models, such as GPT4V and Gemini, on various text-related visual tasks including text recognition, scene text-centric visual question answering (VQA), document-oriented VQA, key information extraction (KIE), and handwritten mathematical expression recognition (HMER). To facilitate the assessment of optical character recognition (OCR) capabilities in large multimodal models, we propose OCRBench, a comprehensive evaluation benchmark. OCRBench contains 29 datasets, making it the most comprehensive OCR evaluation benchmark available. Furthermore, our study reveals both the strengths and weaknesses of these models, particularly in handling multilingual text, handwritten text, non-semantic text, and mathematical expression recognition. More importantly, the baseline results presented in this study could provide a foundational framework for the conception and assessment of innovative strategies targeted at enhancing zero-shot multimodal techniques. The evaluation pipeline and benchmark are available at https://github.com/Yuliang-Liu/MultimodalOCR.
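For flavour, a generic per-task evaluation loop of the kind a benchmark like OCRBench implies might look like the sketch below. The sample layout and the query_model helper (standing in for a GPT4V/Gemini API call) are hypothetical; the released evaluation pipeline defines the actual protocol and scoring rules.

```python
# Illustrative sketch: per-task exact-match accuracy over a benchmark's samples.
from collections import defaultdict

def query_model(image_path: str, question: str) -> str:
    # Hypothetical placeholder: replace with a real multimodal-model API call.
    raise NotImplementedError

def evaluate_benchmark(samples):
    # samples: iterable of dicts such as
    # {"task": "text_recognition", "image": "...", "question": "...", "answer": "..."}
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        pred = query_model(s["image"], s["question"])
        total[s["task"]] += 1
        if pred.strip().lower() == s["answer"].strip().lower():
            correct[s["task"]] += 1
    return {task: correct[task] / total[task] for task in total}
```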
The Internet of Things (IoT), which enables seamless connectivity and effective data exchange between physical items and digital systems, has completely changed the way we interact with our surroundings. This study ev...