The important task of correcting label noise is addressed infrequently in the literature, largely because developing a robust label correction algorithm is difficult. To address this gap, we propose two algorithms for correcting label noise. The first, Self-Training Correction (STC), uses self-training to re-label noisy instances. The second, Cluster-based Correction (CC), groups instances together to infer their ground-truth labels. We also adapt an algorithm from previous work, a consensus-based method called Polishing, which consults an ensemble of classifiers to change the values of attributes and labels; we simplify Polishing so that it alters only instance labels, and call the result Polishing Labels (PL). We experimentally compare our novel methods with Polishing Labels by examining their improvements in label quality, model quality, and AUC on binary and multi-class data sets under different noise levels. Our experimental results show that CC consistently and significantly improves label quality, model quality, and AUC. We further investigate how these three noise correction algorithms improve data quality, in terms of label accuracy, in the context of image labeling in crowdsourcing. First, we examine three consensus methods for inferring a ground-truth label from the multiple noisy labels obtained through crowdsourcing: Majority Voting (MV), Dawid-Skene (DS), and KOS. We then apply the three noise correction methods to correct the labels inferred by these consensus methods. Our experimental results show that the noise correction methods improve labeling quality significantly, and that CC performs best overall. Our research illustrates the viability of noise correction as another line of defense against labeling error, especially in a crowdsourcing setting. Furthermore, it presents the feasibil
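As a minimal, hypothetical sketch (not the authors' implementation), the Majority Voting consensus step over crowdsourced labels can be expressed as follows; the instance names and labels are illustrative:

```python
from collections import Counter

def majority_vote(worker_labels):
    """Infer a consensus label from a list of noisy worker labels.

    Ties are broken in favor of the label encountered first,
    mirroring Counter.most_common's insertion-order behavior.
    """
    return Counter(worker_labels).most_common(1)[0][0]

# Each instance was labeled by several crowd workers.
instances = {
    "img_1": ["cat", "cat", "dog"],
    "img_2": ["dog", "dog", "dog"],
    "img_3": ["cat", "dog", "cat", "cat"],
}

consensus = {name: majority_vote(votes) for name, votes in instances.items()}
print(consensus)  # {'img_1': 'cat', 'img_2': 'dog', 'img_3': 'cat'}
```

A correction method such as CC would then take these consensus labels as its noisy input and attempt to repair the remaining errors.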
ISBN (print): 9781509013531
In this paper, an image segmentation method is presented to analyze the clusters of a Computed Tomography (CT) image. The target image is divided into small parts called observation screens. Principal Component Analysis (PCA) is used to better represent the features of each observation screen, and the optimal number of components per observation screen is determined by Horn's Parallel Analysis (PA). In addition, Local Standard Deviation (LSD), a method for extracting meaningful sub-features, is applied to the whole image for successful segmentation. The effect of the selected features on the segmentation success rate is analyzed. Consequently, a novel algorithm is proposed that significantly reduces total computation time and dimension-reduction error. The results of the algorithm are approximately the same as those of conventional segmentation algorithms.
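A minimal sketch of the Local Standard Deviation (LSD) idea on a plain nested-list grayscale image; the 3x3 window and border clipping are my assumptions, not necessarily the paper's choices:

```python
import math

def local_std(image, radius=1):
    """Standard deviation of each pixel's (2r+1)x(2r+1) neighborhood,
    with the window clipped at the image borders."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [image[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            mean = sum(vals) / len(vals)
            out[y][x] = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return out

# A flat region has zero local deviation; a region containing an edge does not.
flat = [[5] * 4 for _ in range(4)]
edge = [[0, 0, 9, 9] for _ in range(4)]
print(local_std(flat)[1][1], local_std(edge)[1][1] > 0)  # 0.0 True
```

The resulting map responds strongly near tissue boundaries, which is why it is useful as a sub-feature for segmentation.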
ISBN (print): 9781510601154
In this paper, an integral design that combines the optical system with image processing is introduced to obtain high-resolution images, and its performance is evaluated and demonstrated. Traditional imaging methods often separate the two technical procedures of optical system design and image processing, preventing efficient cooperation between the optical and digital elements. Therefore, an innovative approach is presented that combines the merit function used during optical design with the constraint conditions of the image processing algorithms. Specifically, an optical imaging system with low resolution is designed to collect the image signals that are indispensable for image processing, while the ultimate goal is to obtain high-resolution images from the final system. To optimize the global performance, the optimization function of the ZEMAX software is utilized and the number of optimization cycles is controlled. A Wiener filter is then applied in the image simulation, with mean squared error (MSE) taken as the evaluation criterion. The results show that, although the optical figures of merit for the designed imaging system are not the best, the system provides image signals that are better suited to image processing. In conclusion, the integral design of the optical system and image processing can find the overall optimal solution that is missed by traditional design methods. Especially when designing complex optical systems, this integral design strategy has obvious advantages: it simplifies the structure and reduces cost while simultaneously yielding high-resolution images, which gives it a promising perspective for industrial application.
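A minimal 1-D sketch of Wiener-filter deconvolution with MSE as the evaluation criterion (pure Python with a naive DFT; the signal, the 2-tap blur kernel, and the noise-to-signal constant K are illustrative assumptions, not the paper's system):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def wiener_deconvolve(blurred, kernel, K=1e-4):
    """Frequency-domain Wiener filter: G = H* / (|H|^2 + K)."""
    B, H = dft(blurred), dft(kernel)
    G = [Hk.conjugate() / (abs(Hk) ** 2 + K) for Hk in H]
    return idft([b * g for b, g in zip(B, G)])

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

N = 7
signal = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0]
kernel = [0.5, 0.5] + [0.0] * (N - 2)                     # 2-tap circular blur
blurred = idft([s * h for s, h in zip(dft(signal), dft(kernel))])

restored = wiener_deconvolve(blurred, kernel)
print(mse(blurred, signal) > mse(restored, signal))  # True: restoration helps
```

With a known kernel and little noise, the restored MSE drops far below the blurred MSE, which is exactly the trade-off the integral design exploits: a cheaper optic whose blur the digital stage can undo.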
ISBN (print): 9789897581861
Facebook is one of the most used social networking sites. It is more than a simple website; it is a popular communication tool. Social networking users communicate with each other by exchanging several kinds of content, including free text, images, and video. Today, social media users have a special way of expressing themselves: they have created a new language known as "Internet slang," which conveys the same meaning using different lexical units. This unstructured text has specific characteristics (it is massive, noisy, and dynamic) and requires novel preprocessing methods adapted to those characteristics in order to make the classification algorithms effective. Most previous work on social media text classification eliminates stopwords and classifies posts by topic (e.g., politics, sport, art). In this paper, we propose to classify posts at a lower level into diverse pre-chosen classes using three machine learning algorithms: SVM, Naive Bayes, and K-NN. To improve the classification, we propose a new preprocessing approach based on stopwords, Internet slang, and other specific lexical units. Finally, we compare all results for each classifier, and then compare the classifiers against each other.
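A toy sketch of the kind of preprocessing described: slang normalization plus optional stopword removal. The slang dictionary and stopword list here are illustrative assumptions, not the paper's resources:

```python
# Hypothetical resources; a real system would use much larger lists.
SLANG = {"u": "you", "gr8": "great", "2day": "today", "thx": "thanks"}
STOPWORDS = {"the", "is", "a", "and", "to"}

def preprocess(post, keep_stopwords=False):
    """Lowercase, expand Internet slang, optionally drop stopwords."""
    tokens = post.lower().split()
    tokens = [SLANG.get(t, t) for t in tokens]      # slang -> standard form
    if not keep_stopwords:
        tokens = [t for t in tokens if t not in STOPWORDS]
    return tokens

print(preprocess("U r gr8 2day and the weather is nice"))
# ['you', 'r', 'great', 'today', 'weather', 'nice']
```

The resulting token lists can then be fed to any of the three classifiers (SVM, Naive Bayes, K-NN) via a bag-of-words representation.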
This study concerns the effectiveness of several signal processing and data interpretation techniques for the diagnosis of aerospace structure defects. This is done by applying different known feature extraction methods, in addition to a new CBIR-based one, together with several soft computing techniques, including a recent HPC parallel implementation of the U-BRAIN learning algorithm, on Non-Destructive Testing data. The performance of the resulting detection systems is measured in terms of Accuracy, Sensitivity, Specificity, and Precision, and their effectiveness is evaluated by the Matthews correlation coefficient, the Area Under the Curve (AUC), and the F-Measure. Several experiments are performed on a standard dataset of eddy current signal samples from aircraft structures. Our experimental results show that the key to a successful defect classifier is the feature extraction method (the novel CBIR-based one outperforms all competitors), and they illustrate the greater effectiveness of the U-BRAIN algorithm and the MLP neural network among the soft computing methods in this kind of application. (c) 2016 Elsevier Ltd. All rights reserved.
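The evaluation metrics named above all follow directly from the binary confusion matrix; a minimal sketch (the counts below are illustrative, not the study's results):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Standard detection metrics from confusion-matrix counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)                 # recall / true positive rate
    specificity = tn / (tn + fp)
    precision   = tp / (tp + fp)
    f_measure   = 2 * precision * sensitivity / (precision + sensitivity)
    mcc = ((tp * tn - fp * fn)
           / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "precision": precision,
            "f_measure": f_measure, "mcc": mcc}

m = binary_metrics(tp=40, fp=10, tn=45, fn=5)
print(round(m["accuracy"], 3), round(m["mcc"], 3))  # 0.85 0.704
```

MCC and F-Measure are the more informative summaries here because, unlike plain accuracy, they remain meaningful when defective samples are rare.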
ISBN (digital): 9783662498316
ISBN (print): 9783662498316; 9783662498293
The tetrolet transform has better structural directionality and can precisely express the texture features of an image when dealing with high-dimensional signals. This paper introduces the tetrolet transform into infrared and visible image fusion to obtain a greater amount of information. First, the tetrolet transform is performed on the images to be fused, yielding high-pass and low-pass subbands at different scales. Then, a method based on local region gradient information is applied to the low-pass subbands to obtain the low-pass fusion coefficients. Finally, the inverse tetrolet transform is applied to obtain the fused image. Fusion experiments on a variety of images show that the fused image has more abundant features and a greater amount of information when the tetrolet transform is used. Compared with traditional fusion algorithms, the algorithm presented in this paper provides a better subjective visual effect, and the standard deviation and entropy values are somewhat increased.
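A toy sketch of a "choose the coefficient with larger local gradient activity" fusion rule for low-pass coefficients; the 3x3 window and the squared-difference gradient measure are my assumptions, and the paper's exact rule may differ:

```python
def grad_energy(img, y, x):
    """Sum of squared horizontal/vertical differences in a 3x3 window."""
    h, w = len(img), len(img[0])
    e = 0.0
    for j in range(max(0, y - 1), min(h, y + 2)):
        for i in range(max(0, x - 1), min(w, x + 2)):
            if i + 1 < w:
                e += (img[j][i + 1] - img[j][i]) ** 2
            if j + 1 < h:
                e += (img[j + 1][i] - img[j][i]) ** 2
    return e

def fuse_lowpass(a, b):
    """Pick, per coefficient, the source with larger local gradient energy."""
    return [[a[y][x] if grad_energy(a, y, x) >= grad_energy(b, y, x) else b[y][x]
             for x in range(len(a[0]))] for y in range(len(a))]

smooth = [[5, 5, 5], [5, 5, 5], [5, 5, 5]]
edgy   = [[0, 9, 0], [0, 9, 0], [0, 9, 0]]
print(fuse_lowpass(smooth, edgy))  # keeps the high-activity (edgy) coefficients
```

The same idea carries over to the actual tetrolet low-pass subbands: wherever one source image carries more local structure, its coefficient wins.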
This paper proposes a novel approach to person re-identification, a fundamental task in distributed multi-camera surveillance systems. Although a variety of powerful algorithms have been presented in the past few years, most of them usually focus on designing hand-crafted features and learning metrics either individually or sequentially. Different from previous works, we formulate a unified deep ranking framework that jointly tackles both of these key components to maximize their strengths. We start from the principle that the correct match of the probe image should be positioned in the top rank within the whole gallery set. An effective learning-to-rank algorithm is proposed to minimize the cost corresponding to the ranking disorders of the gallery. The ranking model is solved with a deep convolutional neural network (CNN) that builds the relation between input image pairs and their similarity scores through joint representation learning directly from raw image pixels. The proposed framework allows us to get rid of feature engineering and does not rely on any assumption. An extensive comparative evaluation is given, demonstrating that our approach significantly outperforms all the state-of-the-art approaches, including both traditional and CNN-based methods on the challenging VIPeR, CUHK-01, and CAVIAR4REID datasets. In addition, our approach has better ability to generalize across datasets without fine-tuning.
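The ranking objective can be illustrated with a simple pairwise hinge loss on similarity scores; this is a generic learning-to-rank formulation, not the paper's exact cost, and the scores and margin below are illustrative:

```python
def pairwise_ranking_loss(pos_score, neg_scores, margin=1.0):
    """Penalize gallery negatives scored within `margin` of the true match,
    pushing the correct match toward the top rank."""
    return sum(max(0.0, margin - pos_score + s) for s in neg_scores)

# The probe's correct match should out-score every other gallery image.
pos = 0.9                  # similarity(probe, correct gallery match)
negs = [0.2, 0.85, -0.3]   # similarities to wrong gallery entries
print(pairwise_ranking_loss(pos, negs))
```

In the paper's setting, the similarity scores themselves come from a CNN applied to raw image pairs, so minimizing such a loss trains the representation and the metric jointly.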
ISBN (digital): 9781522510260
ISBN (print): 9781522510253
Video is one of the most important forms of multimedia available, as it is utilized for security purposes, to transmit information, to promote safety, and to provide entertainment. As motion is the most integral element in videos, it is important that motion detection systems and algorithms meet specific requirements to achieve accurate detection of real-time events. Feature Detectors and Motion Detection in Video Processing explores innovative methods and approaches to analyzing and retrieving video images. Featuring empirical research and significant frameworks regarding feature detectors and descriptor algorithms, the book is a critical reference source for professionals, researchers, advanced-level students, technology developers, and academicians.