Speeded-Up Robust Feature (SURF) and Scale Invariant Feature Transform (SIFT) are two well-known methods for extracting features. This paper presents and analyzes a performance comparison of the SURF approach and the SIFT technique for content-based image retrieval (CBIR). In particular, we are interested in comparing the accuracy and the response time of the two methods. For testing purposes, we make use of sample images obtained from the Pennsylvania State College of Information Science and Technology database. We demonstrate that, in terms of both accuracy and speed, SURF shows superior performance compared to SIFT.
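As a rough illustration of the descriptor matching that underlies such a CBIR comparison, here is a minimal NumPy sketch of nearest-neighbour matching with Lowe's ratio test, which is commonly applied to both SIFT (128-D) and SURF (64-D) descriptors. The toy descriptors, array sizes, and the 0.75 ratio are assumptions for illustration, not details from the paper.

```python
import numpy as np

def match_descriptors(query, gallery, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    query, gallery: (n, d) arrays of local descriptors (e.g. 64-D
    SURF-like vectors). Returns index pairs (i, j) whose best match
    is clearly closer than the second-best match.
    """
    matches = []
    for i, q in enumerate(query):
        d = np.linalg.norm(gallery - q, axis=1)  # L2 distance to every gallery descriptor
        j1, j2 = np.argsort(d)[:2]               # two closest gallery descriptors
        if d[j1] < ratio * d[j2]:                # keep only unambiguous matches
            matches.append((i, j1))
    return matches

# Toy example: queries are small perturbations of the first 10 gallery descriptors.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(50, 64))              # 64-D, SURF-like
query = gallery[:10] + 0.01 * rng.normal(size=(10, 64))
good = match_descriptors(query, gallery)
```

In a full CBIR pipeline, the count of ratio-test survivors per database image would then be used to rank retrieval candidates.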
Validation metrics are key for tracking scientific progress and bridging the current chasm between artificial intelligence (AI) research and its translation into practice. However, increasing evidence shows that parti...
Automatically verifying the identity of a person by means of biometrics (e.g., face and fingerprint) is an important application in our day-to-day activities such as accessing banking services and security control in airports. To increase the system reliability, several biometric devices are often used. Such a combined system is known as a multimodal biometric system. This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign organized by the University of Surrey, involving face, fingerprint, and iris biometrics for person authentication, targeting the application of physical access control in a medium-size establishment with some 500 persons. While multimodal biometrics is a well-investigated subject in the literature, there exists no benchmark for a fusion algorithm comparison. Working towards this goal, we designed two sets of experiments: quality-dependent and cost-sensitive evaluation. The quality-dependent evaluation aims at assessing how well fusion algorithms can perform under changing quality of raw biometric images principally due to change of devices. The cost-sensitive evaluation, on the other hand, investigates how well a fusion algorithm can perform given restricted computation and in the presence of software and hardware failures, resulting in errors such as failure-to-acquire and failure-to-match. Since multiple capturing devices are available, a fusion algorithm should be able to handle this nonideal but nevertheless realistic scenario. In both evaluations, each fusion algorithm is provided with scores from each biometric comparison subsystem as well as the quality measures of both the template and the query data. The response to the call of the evaluation campaign proved very encouraging, with the submission of 22 fusion systems. To the best of our knowledge, this campaign is the first attempt to benchmark quality-based multimodal fusion algorithms. In the presence of changing
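The abstract describes fusion algorithms that receive per-modality match scores together with quality measures of the template and query data, and that must tolerate failure-to-acquire and failure-to-match errors. A minimal sketch of one such scheme, quality-weighted sum fusion, is below; the specific weighting rule and the convention of encoding a failure as quality 0 are illustrative assumptions, not the benchmarked algorithms themselves.

```python
def fuse_scores(scores, qualities, weights=None):
    """Quality-weighted sum fusion of per-modality match scores.

    scores    : comparison scores from each biometric subsystem, in [0, 1]
    qualities : quality measure per modality, in [0, 1]; a failure-to-acquire
                or failure-to-match is passed as quality 0, so that modality
                simply drops out of the fused score
    weights   : optional static per-modality weights (default: equal)
    """
    if weights is None:
        weights = [1.0] * len(scores)
    num = sum(w * q * s for w, q, s in zip(weights, qualities, scores))
    den = sum(w * q for w, q in zip(weights, qualities))
    return num / den if den > 0 else 0.0

# Face score 0.9, fingerprint failed to acquire (quality 0): face decides alone.
face_only = fuse_scores([0.9, 0.2], [1.0, 0.0])
```

Normalising by the sum of effective weights keeps the fused score comparable across probes with different numbers of usable modalities, which is one simple way to handle the nonideal multi-device scenario the campaign targets.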
Increasing evidence shows that flaws in machine learning (ML) algorithm validation are an underestimated global problem. Particularly in automatic biomedical image analysis, chosen performance metrics often do not ref...
International benchmarking competitions have become fundamental for the comparative performance assessment of image analysis methods. However, little attention has been given to investigating what can be learnt from these competitions. Do they really generate scientific progress? What are common and successful participation strategies? What makes a solution superior to a competing method? To address this gap in the literature, we performed a multicenter study with all 80 competitions that were conducted in the scope of IEEE ISBI 2021 and MICCAI 2021. Statistical analyses performed based on comprehensive descriptions of the submitted algorithms linked to their rank as well as the underlying participation strategies revealed common characteristics of winning solutions. These typically include the use of multi-task learning (63%) and/or multi-stage pipelines (61%), and a focus on augmentation (100%), image preprocessing (97%), data curation (79%), and post-processing (66%). The “typical” lead of a winning team is a computer scientist with a doctoral degree, five years of experience in biomedical image analysis, and four years of experience in deep learning. Two core general development strategies stood out for highly-ranked teams: the reflection of the metrics in the method design and the focus on analyzing and handling failure cases. According to the organizers, 43% of the winning algorithms exceeded the state of the art but only 11% completely solved the respective domain problem. The insights of our study could help researchers (1) improve algorithm development strategies when approaching new problems, and (2) focus on open research questions revealed by this work.
An algorithm that computes the robot position by evaluating measurements of frame-to-frame intensity differences was extended to detect outliers among the measurements and exclude them from the position estimate, with the aim of improving robustness in irregular terrain, such as scenes consisting of flat surfaces with stones on them. The images are taken by a camera firmly attached to the robot, tilted downwards towards the planetary surface. A measurement is flagged as an outlier only if its intensity difference and linear intensity gradients cannot be explained by motion compensation. According to the experimental results, this modification reduced the positioning error in difficult terrain to about one third of its previous value, keeping the positioning error at an average of 1.8% of distance traveled (within a range of 0.15% to 2.5%), similar to that achieved by state-of-the-art algorithms successfully used in robots on Earth and on Mars.
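The idea of rejecting measurements whose intensity differences cannot be explained by motion compensation can be sketched generically as a refit after residual thresholding on the brightness-constancy equation. The least-squares formulation, the median-based threshold, and all numbers below are assumptions for illustration; the paper's actual outlier test is more specific.

```python
import numpy as np

def estimate_motion(Ix, Iy, It, thresh=3.0):
    """Least-squares image motion (u, v) from brightness constancy,
    Ix*u + Iy*v + It ≈ 0, with one outlier-rejection pass.

    Ix, Iy : spatial intensity gradients per measurement
    It     : frame-to-frame intensity differences
    A measurement is treated as an outlier when its residual exceeds
    `thresh` times the median absolute residual of the first fit.
    """
    A = np.column_stack([Ix, Iy])
    uv, *_ = np.linalg.lstsq(A, -It, rcond=None)   # initial fit, all measurements
    r = np.abs(A @ uv + It)                        # per-measurement residuals
    inliers = r <= thresh * np.median(r)           # e.g. stones violate the model
    uv, *_ = np.linalg.lstsq(A[inliers], -It[inliers], rcond=None)
    return uv, inliers

# Synthetic data: true motion (1, 2), three corrupted measurements.
rng = np.random.default_rng(1)
n = 100
Ix, Iy = rng.normal(size=n), rng.normal(size=n)
It = -(Ix * 1.0 + Iy * 2.0) + 0.01 * rng.normal(size=n)
It[:3] += 50.0                                     # gross outliers
uv, inliers = estimate_motion(Ix, Iy, It)
```

The refit on inliers only is what keeps the estimate stable when a few measurements (the stones in the scene) break the flat-surface motion model.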
This paper reviews the AIM 2019 challenge on real world super-resolution. It focuses on the participating methods and final results. The challenge addresses the real world setting, where paired true high and low-resol...
Accurate segmentation of lung cancer in pathology slides is a critical step in improving patient care. We proposed the ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) cha...
While the importance of automatic image analysis is continuously increasing, recent meta-research revealed major flaws with respect to algorithm validation. Performance metrics are particularly key for meaningful, obj...
Automatic aorta segmentation and quantification in thoracic computed tomography (CT) images is important for detection and prevention of aortic diseases. This paper proposes an automatic aorta segmentation algorithm i...