Digital watermarking has evolved as one of the latest technologies for digital media copyright protection. Watermarking of images can be done in many ways, and one of the proposed algorithms for image watermarking uses fuzzy logic. In a fuzzy set, each element is defined by an ordered pair in which one component is the value and the other is its membership function value. Fuzzy logic systems can handle imprecise information and explain their decisions. A fuzzy inference system is the simplest way of performing fuzzy logic. In the proposed method, three fuzzy inference models are used to generate the weighting factor for embedding the watermark, and the input to the fuzzy inference system is taken from a Human Visual System (HVS) model. The performance measures used in the process are Peak Signal to Noise Ratio (PSNR) and Normalized Cross Correlation (NCC). The proposed algorithm is robust to various image processing attacks.
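The abstract gives no implementation, so the following minimal Python sketch (an assumption, not the authors' code) only illustrates the idea: HVS-derived block features such as normalized luminance and texture drive a small fuzzy inference step that yields an embedding weighting factor; `tri`, `weighting_factor` and `embed_block` are hypothetical helpers.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    return float(np.clip(min((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0))

def weighting_factor(luminance, texture):
    """Toy fuzzy inference: HVS features in [0, 1] -> embedding strength.
    Rule strengths and output levels are illustrative values only."""
    rules = [
        (min(tri(luminance, -0.5, 0.0, 0.5), tri(texture, 0.5, 1.0, 1.5)), 0.08),  # dark & textured -> strong
        (tri(luminance, 0.0, 0.5, 1.0), 0.05),                                     # mid luminance -> medium
        (min(tri(luminance, 0.5, 1.0, 1.5), tri(texture, -0.5, 0.0, 0.5)), 0.02),  # bright & flat -> weak
    ]
    num = sum(w * s for w, s in rules)
    den = sum(w for w, _ in rules) + 1e-9
    return num / den  # weighted-average defuzzification

def embed_block(block, wm_bit, alpha):
    """Illustrative additive embedding of one watermark bit into an image block."""
    return block + alpha * (1.0 if wm_bit else -1.0) * np.ones_like(block)

block = np.random.rand(8, 8)
alpha = weighting_factor(luminance=block.mean(), texture=block.std())
watermarked = embed_block(block, wm_bit=1, alpha=alpha)
```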
With the advent of high-performance embedded computing (HPEC) systems, many digital processing algorithms are now implemented by special-purpose massively parallel processors. In this paper, a low-power ARM/GPU co-design architecture is addressed using OpenCL-based parallel programming for implementing complex reconstructive signal processing operations. Such operations are accelerated using data-parallel functions on the GPU and the ARM processor, in a HW/SW co-design scheme via OpenCL API calls. Experimental results show the achieved computational performance and the effectiveness of the OpenCL standard when the framework is compared against traditional parallel embedded implementations.
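As an illustration only, the sketch below uses Python with pyopencl rather than the paper's C-level OpenCL API (an assumption) to show the general offload pattern the abstract refers to: build a kernel, move buffers to the device, launch a data-parallel function, and read back the result.

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()          # picks an available device (GPU or CPU/ARM)
queue = cl.CommandQueue(ctx)

a = np.random.rand(1 << 20).astype(np.float32)
mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

# Placeholder kernel: a per-element operation standing in for a signal processing step.
prg = cl.Program(ctx, """
__kernel void scale(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = 2.0f * a[gid];
}
""").build()

prg.scale(queue, a.shape, None, a_buf, out_buf)   # one work-item per sample
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
```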
ISBN:
(Print) 9781479988518
As medical imaging facilities move towards film-less imaging technology, robust image compression systems are starting to play a key role. Conventional storage and transmission of large-scale raw medical image datasets can be very expensive and time-consuming. Recently, we proposed a memory-assisted lossless image compression algorithm based on Principal Component Analysis (PCA). In this paper, we further improve the performance of the algorithm in two directions. Firstly, we replace PCA with Non-negative Matrix Factorization (NMF). NMF has several advantages: it represents images with an image-like basis, results in sparse factors, and provides better user control over the iterations. Secondly, we expand the single-level model into a new multi-level decomposition/projection framework to further reduce the entropy of the residual images. Our experimental results on X-ray images confirm that both modifications provide significant improvements over the single-level PCA-based algorithm as well as over existing non-memory-based techniques.
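A minimal sketch of the single-level idea, assuming scikit-learn's NMF and a toy "memory" of previously seen images; the component count, image size and data below are placeholders, and the multi-level extension would repeat the projection step on the residuals.

```python
import numpy as np
from sklearn.decomposition import NMF

def residual_entropy(residual, bins=511):
    """Shannon entropy (bits/pixel) of an integer-valued residual image."""
    hist, _ = np.histogram(residual, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

# "Memory": previously seen images, one flattened image per row (toy data here).
memory = np.random.randint(0, 256, size=(50, 64 * 64)).astype(np.float64)
model = NMF(n_components=16, init="nndsvda", max_iter=500)
model.fit(memory)

# New image to compress: project onto the learned basis, keep the integer residual.
x = np.random.randint(0, 256, size=(1, 64 * 64)).astype(np.float64)
w = model.transform(x)
prediction = np.rint(w @ model.components_)
residual = x - prediction            # residual + coefficients are what get entropy-coded
print("residual entropy:", residual_entropy(residual))
# A multi-level variant would fit a second factorization to a batch of residuals
# and encode the residual-of-residual, repeating while the entropy keeps decreasing.
```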
Machine vision systems are used in many areas for monitoring of technological processes. Among these processes, welding takes an important place, and infrared cameras are often used for it. Besides reliable hardware, successful application of vision systems requires suitable software based on proper algorithms. One of the most important groups of image processing algorithms is connected with image segmentation. Obtaining the exact boundary of an object that changes shape over time, such as the welding arc, represented on a thermogram is not a trivial task. In the paper, a segmentation method using a supervised approach based on cellular neural networks is presented. Simulated annealing and a genetic algorithm were used for training of the network (template optimization). The proposed method was compared with a well-established segmentation method based on a region-growing approach. The obtained results prove that the cellular neural network can be a valuable tool for segmentation of infrared welding pool images. (C) 2014 Elsevier B.V. All rights reserved.
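The paper's trained templates are not given, so the sketch below uses illustrative 3x3 templates; it only shows the discrete-time cellular neural network iteration whose feedback template A, control template B and bias z are exactly the parameters that simulated annealing or a genetic algorithm would optimize against labelled thermograms.

```python
import numpy as np
from scipy.signal import convolve2d

def cnn_segment(u, A, B, z, steps=60, dt=0.1):
    """Discrete-time cellular neural network on an input image u scaled to [-1, 1]."""
    x = np.zeros_like(u)                                   # cell state
    bu = convolve2d(u, B, mode="same", boundary="symm")    # control term is fixed
    for _ in range(steps):
        y = np.clip(x, -1.0, 1.0)                          # piecewise-linear output
        x = x + dt * (-x + convolve2d(y, A, mode="same", boundary="symm") + bu + z)
    return (np.clip(x, -1.0, 1.0) > 0).astype(np.uint8)    # binary segmentation mask

# Example with an edge-like template pair on a toy thermogram (values are illustrative).
A = np.array([[0, 0, 0], [0, 2.0, 0], [0, 0, 0]])
B = np.array([[-1, -1, -1], [-1, 8.0, -1], [-1, -1, -1]])
thermogram = np.random.rand(120, 160) * 2 - 1
mask = cnn_segment(thermogram, A, B, z=-0.5)
```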
In recent years there have been changes in the way cars are designed. Car manufacturers put a lot of effort into safety and into systems that provide information to the driver, with the long-term objective of achieving a completely self-driving car. Nowadays the most effective approach relies on fusion of information coming from a plethora of different sensors such as radars and video cameras. While some of these sensors are already available on commercial cars, others will be introduced step by step. Therefore, data fusion algorithms should address the possibility of managing duplicated information and use this redundancy to validate the received data. In this work we describe two near-real-time algorithms which exploit the video stream acquired by an on-board camera. One allows for the identification of traffic light status, and the second addresses vehicle tracking and plate recognition.
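The abstract does not publish the pipeline, so the following sketch is an assumption: a rough traffic-light state estimate from an already localized region of interest, using OpenCV HSV thresholding. The thresholds, the ROI and the camera index are placeholders, not values from the paper.

```python
import cv2
import numpy as np

def traffic_light_status(roi_bgr):
    """Rough traffic-light state from a cropped light ROI via HSV colour thresholds."""
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    masks = {
        "red":    cv2.inRange(hsv, (0, 120, 120), (10, 255, 255))
                | cv2.inRange(hsv, (170, 120, 120), (180, 255, 255)),
        "yellow": cv2.inRange(hsv, (18, 120, 120), (35, 255, 255)),
        "green":  cv2.inRange(hsv, (45, 80, 120), (90, 255, 255)),
    }
    counts = {colour: int(cv2.countNonZero(m)) for colour, m in masks.items()}
    return max(counts, key=counts.get) if max(counts.values()) > 50 else "unknown"

cap = cv2.VideoCapture(0)          # on-board camera stream (device index is illustrative)
ok, frame = cap.read()
if ok:
    roi = frame[0:80, 0:40]        # placeholder ROI; a detector would localize the light
    print(traffic_light_status(roi))
cap.release()
```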
Digital images are corrupted by impulse noise mainly due to sensor faults of image acquisition devices and adverse channel environments, which in turn degrade image quality. A decision based switching median filter (DBSMF) to restore images corrupted with high-density impulse noise is proposed in this paper. The global use of standard median filters for impulse noise removal from corrupted images provides good results, but the filtering operation may affect fine pixels in addition to noisy pixels, which leaves a blurring effect on the filtered image. In order to address this issue, the proposed algorithm makes use of an efficient detection scheme to identify the noisy pixels and noise-free pixels. The detection algorithm clusters the pixels in the corrupted image into three categories that state whether a pixel is corrupted or uncorrupted. The proposed switching median filter processes only those pixels that are classified as corrupted and replaces the processed pixel by the median value. Under high noise densities the filtering window contains a larger number of corrupted pixels; for such cases, the proposed algorithm imposes conditions on the expansion of the filtering window size to effectively choose the median value. The performance of this decision based algorithm is tested against four noise models for different levels of noise density and is evaluated in terms of performance metrics that include Peak Signal to Noise Ratio (PSNR) and Image Enhancement Factor (IEF). It gives better results for images that are extremely corrupted, up to 90% noise density, and outperforms classic filters in terms of handling image corruption.
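The exact detection and clustering rules of the paper are not reproduced here; the sketch below assumes the common salt-and-pepper model (extreme intensities mark corrupted pixels) and shows the two ingredients the abstract describes, switching (only detected pixels are filtered) and conditional window expansion.

```python
import numpy as np

def dbsm_filter(img, max_win=7):
    """Sketch of a decision based switching median filter for salt-and-pepper noise.
    Assumption: pixels at the extremes (0 or 255) are treated as corrupted; only those
    are replaced, using the median of noise-free neighbours in a window that grows
    up to max_win when too few clean neighbours are available."""
    img = img.astype(np.uint8)
    out = img.copy()
    noisy = (img == 0) | (img == 255)
    h, w = img.shape
    for r, c in zip(*np.nonzero(noisy)):
        for k in range(1, max_win // 2 + 1):          # expand the window only as needed
            r0, r1 = max(r - k, 0), min(r + k + 1, h)
            c0, c1 = max(c - k, 0), min(c + k + 1, w)
            window = img[r0:r1, c0:c1]
            clean = window[(window != 0) & (window != 255)]
            if clean.size:                            # enough noise-free pixels found
                out[r, c] = np.median(clean)
                break
        else:
            out[r, c] = np.median(img[r0:r1, c0:c1])  # fall back to a plain median
    return out

noisy_img = np.random.randint(0, 256, (128, 128)).astype(np.uint8)
restored = dbsm_filter(noisy_img)
```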
In this paper, a real-time multi-view human activity recognition model using an RGB-D (Red Green Blue-Depth) sensor is proposed. The method receives RGB-D data streams in real time from a Kinect for Windows v2 sensor. Initially, a skeleton-tracking algorithm is applied which provides 3D information for 25 unique joints. The presented approach uses a weighted version of Fast Dynamic Time Warping that weights the importance of each skeleton joint in the Dynamic Time Warping (DTW) similarity cost. To recognize multi-view human activities, the weighted Dynamic Time Warping warps a time sequence of joint positions to reference time sequences and produces a similarity value. Experimental results demonstrate that the proposed method is robust, flexible and efficient with respect to multi-view activity recognition and to scale and phase variations of activities in different realistic scenes.
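A minimal weighted DTW sketch, assuming sequences of 25 three-dimensional joints; the joint weights shown are illustrative, not the weights used in the proposed method, and no FastDTW pruning is applied.

```python
import numpy as np

def weighted_dtw(seq_a, seq_b, joint_weights):
    """Weighted DTW cost between two skeleton sequences of shape (T, J, 3).
    joint_weights (length J) scales each joint's contribution to the frame distance."""
    def frame_dist(fa, fb):
        return float(np.sum(joint_weights * np.linalg.norm(fa - fb, axis=1)))

    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = frame_dist(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = d + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Example: compare a 40-frame query against a 50-frame reference, 25 joints each.
rng = np.random.default_rng(0)
query, reference = rng.random((40, 25, 3)), rng.random((50, 25, 3))
weights = np.ones(25); weights[[7, 11]] = 2.0   # e.g. emphasise the hands (assumption)
print(weighted_dtw(query, reference, weights))
```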
ISBN:
(Print) 9781479976782
Search engines are based on bivalent logic and probability theory and lack real-world conceptual knowledge, higher precision and reasoning ability. Ranked links or snippets are less effective for big data analysis and the web of information. The state-of-the-art search engine Google endeavours to retrieve good-quality documents, while the MSN engine, i.e. Bing, endeavours decision boosting. Advanced algorithms take into account user intent, semantics and societal patterns on the Web, with recommendation systems as a product (recommendation engines). Search engines have advanced from text based to voice based (dialogue based) to image based (multimedia as the input question to search). As a major advancement, search engines have progressed from traditional database retrieval machines to web-based machines, and from horizontal engines to vertical engines (***, ***) with dedicated crawler technology at their heart. Information on the web is both structured and unstructured and poses a huge challenge to big data analytics. Although retrieval results are promising, they lack the ability to interpret the user's question. Precise answering from relevant information is the goal of search engine enhancement. Time complexity and memory are algorithmic and machine design parameters that support optimized search results. A Question Answering search engine (QA engine) is a machine with deductive reasoning capability and the ability to amalgamate information from various knowledge datasets. The QA engine is a frontier area in advanced information retrieval techniques and a state-of-the-art technology for the future of search engines and expert systems. QA search engines have advanced from shallow (keyword) techniques to template-based structured knowledge processing engines, profile-based engines, and context-based machines, on to cross-language machines and multimedia QA engines, and from community-based QA such as Yahoo Answers and Stack Overflow to specialized engines like ***, *** and True Knowledge. Question answering systems have been embedded...
High resolution remote sensing images contain abundant information, and the amount of data expands rapidly, which is both a challenge and an advantage for corn information extraction. For high resolution images, the spectral c...
The underwater image processing area has been considered an important topic within the last decades, with important achievements. This kind of image is essentially characterized by poor visibility, because light is exponentially attenuated as it travels through the water and the scenes turn out poorly contrasted and hazy. Image restoration, on the other hand, takes into account the influence of the environment on the image in order to achieve an image of improved quality. This technique consists of inverting the physical model of image formation. That model contains parameters representing variables such as the coefficients of absorption and scattering, among others. In this case, the quality of the restored image depends on the correct estimation of these parameters. In this work, an approach based on evolutionary optimization algorithms is proposed for restoring underwater images by estimating the model parameters, using two metrics for quality assessment. The degradation in the images has been simulated using an image formation model. Results show that image restoration based on a Multi-Objective Differential Evolution (MODE) algorithm achieves images with good contrast and sharpness, even better than the original image.
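As a sketch under simplifying assumptions, the snippet below estimates attenuation and veiling-light parameters of a toy image formation model with SciPy's single-objective differential evolution; the paper's MODE algorithm handles the two quality metrics as separate objectives, whereas here contrast and sharpness proxies are simply summed, and the constant depth, bounds and data are placeholders.

```python
import numpy as np
from scipy.optimize import differential_evolution

def degrade(clean, beta, airlight, depth):
    """Simplified underwater formation model: exponential attenuation plus veiling light."""
    t = np.exp(-beta * depth)
    return clean * t + airlight * (1.0 - t)

def restore(observed, beta, airlight, depth):
    """Invert the model for a given parameter guess."""
    t = np.exp(-beta * depth)
    return (observed - airlight * (1.0 - t)) / np.maximum(t, 1e-3)

def cost(params, observed, depth):
    """Scalarized stand-in objective: higher contrast and sharpness are better."""
    beta, airlight = params
    restored = np.clip(restore(observed, beta, airlight, depth), 0.0, 1.0)
    contrast = restored.std()
    sharpness = np.abs(np.diff(restored, axis=0)).mean()
    return -(contrast + sharpness)

depth = np.full((80, 80), 2.0)                    # assumed constant scene depth
clean = np.random.rand(80, 80)
observed = degrade(clean, beta=0.6, airlight=0.8, depth=depth)  # simulated degradation
result = differential_evolution(cost, bounds=[(0.01, 2.0), (0.0, 1.0)],
                                args=(observed, depth), seed=0, maxiter=50)
print("estimated beta, airlight:", result.x)
```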