Author affiliations: Department of Computer Science (SF), Sourasthra College, Madurai, Tamil Nadu, India; Department of Computer Applications, CMR Institute of Technology, Bangalore, Karnataka, India; Business Analytics, School of Business and Management, Christ University, Bangalore, Karnataka, India; Department of Computer Science and Engineering, KPR Institute of Engineering and Technology, Uthupalayam 641407, Tamil Nadu, India
Publication: SN Computer Science (SN COMPUT. SCI.)
Year/Volume/Issue: 2025, Vol. 6, No. 2
Pages: 1-12
Keywords: Computer vision; Loss operator; Object tracking; Residual network
Abstract: An effective object tracker reports the activity of the object of interest as soon as tracking completes. Whether or not hardware support is available, a robust tracking protocol is required for a precise object tracking application. Existing methods demand considerable computational complexity to track an object accurately within a fixed processing-time window, and a variety of quality-degrading factors, including occlusion, changing illumination, and shadows, further impair tracking. To address these shortcomings, this work proposes a novel residual network based on a loss operator and anchor generation. Because object detection relies on feature extraction to achieve adequate quality, a ResNet model comprising thirty layers, hence named ResNet-thirty, is used. Such networks are Convolutional Neural Networks (CNNs) with residual connections between layers; these connections allow the network to learn global, local, and intermediate features in parallel, making the system robust to lighting changes, which matters when tracking objects against a changing background. The proposed work is evaluated on the MOT benchmark datasets MOT15, MOT16, MOT17, and MOT20, and on these datasets it outperforms existing methods in precision, recall, MOTA, IDF, MOTP, SAIDF, and F1 measure for object tracking. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025.
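The residual connections the abstract credits for robustness can be illustrated with a minimal sketch. This is not the paper's ResNet-thirty; the layer sizes, weights, and ReLU activation are illustrative assumptions, showing only the skip-connection pattern itself.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)), where F is a small two-layer transform.

    The identity shortcut (the `+ x` term) lets the input bypass the
    transform, so later layers see both raw and transformed features;
    this is the mechanism behind learning local, intermediate, and
    global features in parallel."""
    f = relu(x @ w1) @ w2   # the residual transform F(x)
    return relu(x + f)      # skip connection adds the input back

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w1 = rng.standard_normal((8, 8)) * 0.1
w2 = rng.standard_normal((8, 8)) * 0.1
y = residual_block(x, w1, w2)
print(y.shape)  # output keeps the input shape, as the skip requires
```

Note that the shortcut requires F(x) to match the shape of x; real ResNets insert a projection on the shortcut when the shapes differ.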
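Among the evaluation metrics listed, MOTA has a standard closed form from the CLEAR-MOT framework. The sketch below computes it from aggregate counts; the counts themselves are made-up illustrative numbers, not results from the paper.

```python
def mota(false_negatives, false_positives, id_switches, ground_truth):
    """Multiple Object Tracking Accuracy:
    MOTA = 1 - (FN + FP + IDSW) / GT, with counts summed over all frames."""
    return 1.0 - (false_negatives + false_positives + id_switches) / ground_truth

# Illustrative counts: 500 ground-truth boxes, 20 misses, 10 false
# alarms, 5 identity switches across the sequence.
score = mota(false_negatives=20, false_positives=10, id_switches=5,
             ground_truth=500)
print(round(score, 3))  # 0.93
```

MOTA penalizes all three error types equally, which is why it is usually reported alongside MOTP (localization precision) and identity-based scores such as IDF1.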