ISBN (print): 9781538631089; 9781538631072
As a kind of traditional Chinese medicine, plaster has been favored by more and more patients because of its unique advantages in treatment. For plaster, the quality of the coating is very important, so coating surface defect detection is essential. The current mainstream detection method is manual inspection, which has many disadvantages, such as extremely low efficiency. In light of this situation, an automatic plaster coating quality detection system based on machine vision is proposed in this paper. Following a detailed analysis of the coating defects, a set of image detection algorithms has been designed. The experimental results show that the algorithm can identify the type of defects and locate their positions; the error detection rate is low, and the robustness is good.
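The abstract does not describe the detection algorithms in detail. As a minimal hedged sketch of a classical machine-vision defect pipeline (not the authors' method; the function name, threshold settings and OpenCV usage below are assumptions), one might threshold the coating image and report anomalous blobs:

```python
# Hypothetical sketch of a coating-defect check, NOT the paper's algorithm.
# Assumes OpenCV (cv2) and a grayscale image of the coated plaster surface.
import cv2
import numpy as np

def find_coating_defects(gray, min_area=50):
    """Return bounding boxes of blobs that deviate from the coating background."""
    # Smooth to suppress sensor noise before thresholding.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Adaptive thresholding highlights local deviations (holes, smears, voids).
    mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY_INV, blockSize=31, C=10)
    # Morphological opening removes isolated speckle responses.
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep only blobs large enough to be plausible defects and report their positions.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]

# Usage: boxes = find_coating_defects(cv2.imread("plaster.png", cv2.IMREAD_GRAYSCALE))
```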
Efficient solutions must be considered in order to address the computationally intensive nature of image processing applications and to achieve high real-time performance. The graphics processing unit (GPU) is an eff...
ISBN (digital): 9783319590509
ISBN (print): 9783319590509; 9783319590493
We present an algorithm for creating high resolution anatomically plausible images consistent with acquired clinical brain MRI scans with large inter-slice spacing. Although large databases of clinical images contain a wealth of information, medical acquisition constraints result in sparse scans that miss much of the anatomy. These characteristics often render computational analysis impractical as standard processing algorithms tend to fail when applied to such images. Highly specialized or application-specific algorithms that explicitly handle sparse slice spacing do not generalize well across problem domains. In contrast, our goal is to enable application of existing algorithms that were originally developed for high resolution research scans to significantly undersampled scans. We introduce a model that captures fine-scale anatomical similarity across subjects in clinical image collections and use it to fill in the missing data in scans with large slice spacing. Our experimental results demonstrate that the proposed method outperforms current upsampling methods and promises to facilitate subsequent analysis not previously possible with scans of this quality.
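The abstract does not specify the model itself; purely as a hedged point of reference, the kind of baseline upsampling such a method is typically compared against is simple interpolation along the slice axis (the function name, array shapes and spacing values below are illustrative assumptions, not the paper's approach):

```python
# Illustrative baseline only: linear interpolation across the slice axis of a
# sparsely sampled MRI volume. This is NOT the paper's anatomically informed model.
import numpy as np
from scipy.interpolate import interp1d

def upsample_slices(volume, spacing_mm, target_mm=1.0):
    """volume: (H, W, S) array with S widely spaced axial slices."""
    n_slices = volume.shape[2]
    z_in = np.arange(n_slices) * spacing_mm            # acquired slice positions
    z_out = np.arange(0.0, z_in[-1] + 1e-6, target_mm) # dense target positions
    f = interp1d(z_in, volume, axis=2, kind="linear")  # interpolate along z
    return f(z_out)

# Example: a 256x256 volume with 6 mm slice spacing upsampled to 1 mm.
dense = upsample_slices(np.random.rand(256, 256, 20), spacing_mm=6.0)
```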
ISBN (print): 9781538607336
Presentation attack on face recognition systems is well studied in the biometrics community, resulting in various techniques for detecting the attacks. Low-cost presentation attacks (e.g., print attacks) on face recognition systems have been demonstrated for systems operating in the visible, multispectral (visible and near-infrared spectrum) and extended multispectral (more than two spectral bands spanning from the visible to the near-infrared range, commonly 500-1000 nm) domains. In this paper, we propose a novel method to detect presentation attacks on extended multispectral face recognition systems. The proposed method is based on characterising the reflectance properties of the captured image through its spectral signature. The spectral signature is then classified using a linear Support Vector Machine (SVM) to decide whether the presented sample is an artefact or bona fide. Since the reflectance properties of human skin and of artefact materials differ, the proposed method can efficiently detect presentation attacks on an extended multispectral system. Extensive experiments are carried out on a publicly available extended multispectral face database (EMSPAD) comprising 50 subjects, with two different Presentation Attack Instruments (PAI) generated using two different printers. A comparative analysis is presented against contemporary PAD schemes based on image fusion and score-level fusion. Based on the obtained results, the proposed method shows the best performance in detecting both known and unknown attacks.
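As a hedged sketch of the classification stage only (the EMSPAD protocol and feature extraction are simplified, and the variables `cubes` and `labels` below are assumed to be loaded elsewhere; this is not the paper's implementation), a mean spectral signature per sample can be classified with a linear SVM in scikit-learn:

```python
# Minimal sketch: each sample is reduced to a mean spectral signature
# (one reflectance value per band) and classified as bona fide vs artefact.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def mean_spectral_signature(cube):
    """cube: (H, W, B) extended multispectral face crop -> (B,) signature."""
    return cube.reshape(-1, cube.shape[-1]).mean(axis=0)

# X: (N, B) signatures, y: 1 = bona fide, 0 = artefact (e.g., printed attack).
X = np.vstack([mean_spectral_signature(c) for c in cubes])  # 'cubes' assumed loaded
y = labels                                                   # 'labels' assumed loaded

clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0))
clf.fit(X, y)
pad_score = clf.decision_function(X[:1])  # signed distance usable as a PAD score
```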
Smart vision systems on a chip are promising for embedded applications. Currently, flexibility in the choice of integrated pre-processing tools is obtained at the expense of total silicon area and fill factor, which a...
ISBN (print): 9781509060344
Brain tumors constitute one of the deadliest forms of cancer, with a high mortality rate. Of these, glioblastoma multiforme (GBM) remains the most common and lethal primary brain tumor in adults. Since tumor biopsy is challenging for brain tumor patients, noninvasive techniques like imaging, particularly Magnetic Resonance Imaging (MRI), play an important role in the detection, diagnosis and prognosis of brain cancer. Therefore, the development of advanced extraction and selection strategies for quantitative MRI features becomes necessary for noninvasively predicting and grading tumors. In this paper we extract 56 three-dimensional quantitative MRI features, related to tumor image intensities, shape and texture, from 254 brain tumor patients. An adaptive neuro-fuzzy classifier based on linguistic hedges (ANFC-LH) is developed to simultaneously select significant features and predict the tumor grade. ANFC-LH achieves a significantly higher testing accuracy (85.83%) than existing standard classifiers.
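As a hedged illustration of the kind of 3D intensity and shape features involved (these are not the paper's exact 56 features, the ANFC-LH classifier is not reproduced, and the function and feature names are assumptions), one might compute:

```python
# Hypothetical examples of 3D quantitative MRI features; illustrative only.
import numpy as np

def tumor_features(image, mask, voxel_volume_mm3=1.0):
    """image, mask: 3D arrays of equal shape; mask is 1 inside the segmented tumor."""
    vox = image[mask > 0].astype(float)
    mean, std = vox.mean(), vox.std()
    # Intensity-distribution feature (skewness as a simple texture proxy).
    skewness = ((vox - mean) ** 3).mean() / (std ** 3 + 1e-12)
    # Shape features from the binary mask.
    volume_mm3 = mask.sum() * voxel_volume_mm3
    coords = np.argwhere(mask > 0)
    extent_vox = coords.max(axis=0) - coords.min(axis=0) + 1
    elongation = extent_vox.max() / max(extent_vox.min(), 1)
    return {"mean_intensity": mean, "std_intensity": std, "skewness": skewness,
            "volume_mm3": float(volume_mm3), "elongation": float(elongation)}
```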
ISBN (print): 9781538619599
The proceedings contain 220 papers. The topics discussed include: intelligent disaster warning and response system with dynamic route selection for evacuation; leveraging cell phones for surveillance; using natural language processing for analyzing Arabic poetry rhyme; artificial intelligence and sensors based assistive system for the visually impaired people; single image super resolution: an efficient approach using auto-learning and filter pooling; essential pre-processing tasks involved in data preparation for social network user behavior analysis; energy performance of optimally inclined free standing photovoltaic system; micro controller based automatic aquaphonic system using solar; performance comparison of DTN multicasting routing algorithms - opportunities and challenges; smooth video streaming in bandwidth fluctuating environment; survivable fiber optic networks design by using digital signal levels approach; and automation of home appliances using visible light communication.
ISBN (print): 9781538637906
Huffman encoding provides a simple approach for lossless compression of sequential data. The length of encoded symbols varies and these symbols are tightly packed in the compressed data; thus, Huffman decoding is not easily parallelisable. This is unfortunate, since it is desirable to have a parallel algorithm which scales with the increased core count of modern systems. This paper presents a parallel approach for decoding Huffman codes which works by decoding from every location in the bit sequence and then concurrently combining the results into the uncompressed sequence. Although it requires more operations than serial approaches, the presented approach is able to produce results marginally faster, on sufficiently large data sets, than a simple serial implementation. This is achieved by using the large number of threads available on modern GPUs. A variety of implementations, primarily in OpenCL, are presented to demonstrate the scaling of this algorithm on CPU and GPU hardware in response to the cores available. As devices with more cores become available, the importance of such an algorithm will increase.
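The decode-from-every-offset idea can be illustrated with a small serial sketch; the paper's OpenCL/GPU kernels and its concurrent combination step are not reproduced here, and the toy code table below is an assumption:

```python
# Serial illustration of the decode-from-every-offset idea; not the paper's GPU code.
# 'code' maps codewords (bit strings) to symbols, e.g. {"0": "a", "10": "b", "11": "c"}.

def decode_one_symbol(bits, start, code, max_len):
    """Decode a single symbol beginning at bit offset 'start'.
    Returns (symbol, bits_consumed), or (None, 1) if no codeword matches."""
    for length in range(1, max_len + 1):
        word = bits[start:start + length]
        if word in code:
            return code[word], length
    return None, 1

def parallel_style_decode(bits, code):
    max_len = max(len(w) for w in code)
    # Step 1 (embarrassingly parallel): decode independently from EVERY bit offset.
    per_offset = [decode_one_symbol(bits, i, code, max_len) for i in range(len(bits))]
    # Step 2 (combination): follow the chain of offsets starting at 0; only the
    # decodes lying on this chain belong to the true symbol stream.
    out, i = [], 0
    while i < len(bits):
        symbol, consumed = per_offset[i]
        out.append(symbol)
        i += consumed
    return out

code = {"0": "a", "10": "b", "11": "c"}   # toy prefix code (assumption)
print(parallel_style_decode("0100110", code))   # -> ['a', 'b', 'a', 'c', 'a']
```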
This article addresses the anomaly and fraud detection problem in data from social services. The problem of detecting anomalies is extremely relevant for data-driven processes in the digital economy. In this paper, we propose a two-step approach for the detection of anomalies using auto-encoders and a conjugacy indicator. An experimental study of the efficiency of the proposed algorithms was conducted using an open-access data set.
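As a hedged sketch of the auto-encoder step only (the paper's network architecture and the conjugacy-indicator step are not reproduced; scikit-learn's MLPRegressor is used here as a stand-in auto-encoder, and the bottleneck size and quantile threshold are assumptions):

```python
# Illustrative first step only: flag records whose auto-encoder reconstruction
# error is unusually high. The conjugacy-indicator second step is omitted.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

def autoencoder_anomaly_scores(X, bottleneck=4, threshold_quantile=0.99):
    """X: (N, d) numeric feature matrix of service records (assumed preprocessed)."""
    X_scaled = StandardScaler().fit_transform(X)
    # An MLP trained to reproduce its own input acts as a simple auto-encoder.
    ae = MLPRegressor(hidden_layer_sizes=(16, bottleneck, 16),
                      max_iter=2000, random_state=0)
    ae.fit(X_scaled, X_scaled)
    errors = np.mean((ae.predict(X_scaled) - X_scaled) ** 2, axis=1)
    return errors, errors > np.quantile(errors, threshold_quantile)

# Usage: scores, is_anomaly = autoencoder_anomaly_scores(records)  # 'records' assumed (N, d)
```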
ISBN (print): 9781510609457; 9781510609464
Lensless imaging systems have the potential to provide new capabilities at lower size and weight than traditional imaging systems. Lensless imagers frequently utilize computational imaging techniques, which move the complexity of the system away from optical subcomponents and into a calibration process whereby the measurement matrix is estimated. We report on the design, simulation, and prototyping of a lensless imaging system that utilizes a 3D-printed, optically transparent, random scattering element. The development of end-to-end system simulations is presented, including simulation of the calibration process as well as the data-processing algorithm used to generate an image from the raw data. These simulations utilize GPU-based raytracing software and parallelized minimization algorithms to bring complete system simulation times down to the order of seconds. Hardware prototype results are presented, and practical lessons, such as the effect of sensor noise on reconstructed image quality, are discussed. System performance metrics are proposed and evaluated to discuss image quality in a manner that is relatable to traditional image quality metrics. Various hardware instantiations are discussed.
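The abstract does not give the reconstruction algorithm; as a hedged sketch under the common lensless-imaging assumption that the raw data satisfy y = A x with a calibrated measurement matrix A, a Tikhonov-regularised least-squares inversion illustrates the data-processing step (the matrix sizes and regularisation weight below are assumptions):

```python
# Minimal sketch: recover an image x from lensless sensor data y = A @ x + noise,
# where A is the calibrated measurement matrix. Regularised least squares is used
# here purely to illustrate the inversion; it is not the paper's specific minimiser.
import numpy as np

def reconstruct(y, A, lam=1e-2):
    """y: (M,) raw sensor readings, A: (M, N) measurement matrix -> (N,) image."""
    # Solve (A^T A + lam I) x = A^T y.
    AtA = A.T @ A
    return np.linalg.solve(AtA + lam * np.eye(AtA.shape[0]), A.T @ y)

# Toy usage with a random scattering matrix standing in for the calibrated system.
rng = np.random.default_rng(0)
A = rng.standard_normal((400, 256))              # 400 measurements, 16x16 scene
x_true = rng.random(256)
y = A @ x_true + 0.01 * rng.standard_normal(400)
x_hat = reconstruct(y, A).reshape(16, 16)
```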