ISBN (print): 9781450334419
Image enhancement is the processing of an image to make it more suitable for a given application. The main problem addressed by this paper is the enhancement of medical images using efficient algorithms based on histogram equalization (HE) techniques. The paper analyzes and formulates different HE image enhancement techniques suitable for various medical applications. More precisely, the proposed research focuses on the enhancement of medical images captured under poor illumination, foggy conditions, speckle noise, etc., and on developing algorithms that would assist doctors in diagnosing disease at an early stage, for example by removing speckle noise and artifacts while segmenting the kidney from images in which the kidney boundary is not clear. Medical images are enhanced using efficient Histogram Equalization (HE) techniques such as iterative dynamic HE, dualistic sub-image HE, background brightness preserving HE, and gray-level and gradient magnitude HE. Implementing image processing techniques in hardware is a crucial step toward improving the performance of image processing systems. Therefore, to improve performance, allow flexible design development, and obtain more compact, low-power, high-speed systems at reduced cost and design time, the paper presents the implementation of efficient histogram algorithms on a Field Programmable Gate Array (FPGA).
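As a point of reference for the HE variants listed above, the classical global histogram equalization mapping they all build on can be sketched in a few lines of NumPy (a generic textbook version, not any of the paper's specific variants; the test image is synthetic):

```python
import numpy as np

def histogram_equalize(img, levels=256):
    """Classical global histogram equalization for an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0][0]                     # first nonzero CDF value
    # Map each gray level through the normalized CDF (the HE lookup table)
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1)),
                  0, levels - 1).astype(np.uint8)
    return lut[img]

# A synthetic low-contrast image: intensities squeezed into [100, 120]
img = np.linspace(100, 120, 64 * 64).astype(np.uint8).reshape(64, 64)
out = histogram_equalize(img)                     # contrast stretched to [0, 255]
```

The HE variants named in the abstract differ mainly in how they partition the histogram or constrain the lookup table; the CDF remapping step itself is common to all of them.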
ISBN (print): 9781424492701
Ultrasound elastography provides a non-invasive measurement of tissue elasticity properties. The shear wave imaging (SWI) technique is a quantitative method for tissue stiffness assessment. However, traditional SWI implementations cannot acquire 2D quantitative images of the tissue elasticity distribution. In this study, a new shear wave imaging system is proposed and evaluated. A detailed delineation of the hardware and image processing algorithms is presented. Programmable devices are selected to support flexible control of the system and of the image processing algorithms. An analytic-signal-based cross-correlation method and a Radon-transform-based shear wave speed determination method are proposed, both with parallel computation ability. Tissue-mimicking phantom imaging and in vitro imaging measurements are conducted to demonstrate the performance of the proposed system. The system offers a new option for quantitative mapping of tissue elasticity and has good potential to be implemented in a commercial ultrasound scanner.
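The paper's analytic-signal cross-correlation and Radon-transform methods are not specified in the abstract, but the underlying time-of-flight idea behind shear wave speed estimation can be illustrated with a simplified NumPy sketch (the pulse shape, sampling rate, and lateral spacing are assumed values, not the system's parameters):

```python
import numpy as np

fs = 10_000.0                      # tracking pulse repetition frequency, Hz (assumed)
dx = 0.004                         # lateral spacing between tracking lines, m (assumed)
t = np.arange(0, 0.02, 1 / fs)

def pulse(t, t0):
    """A synthetic shear-wave displacement pulse (Gaussian bump)."""
    return np.exp(-((t - t0) / 0.001) ** 2)

true_speed = 2.0                   # m/s, a typical soft-tissue shear wave speed
s1 = pulse(t, 0.005)               # displacement at the first tracking line
s2 = pulse(t, 0.005 + dx / true_speed)   # same pulse arriving dx later

# Time-of-flight estimate: the lag of the cross-correlation peak
xc = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = (np.argmax(xc) - (len(s1) - 1)) / fs
est_speed = dx / lag
```

A Radon-transform formulation, as in the paper, instead finds the slope of the wavefront line in the full space-time displacement map, which is more robust than a single pairwise correlation.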
ISBN (print): 9781467383530
Automatic extraction of retinal blood vessels is an important task for computer-aided diagnosis from retinal images. Without extraction of the blood vessels, structures with pathological findings such as microaneurysms, haemorrhages, or neovascularisations could be erroneously confused with them. We developed two independent methods; each method is a combination of different morphological operations with different structuring elements (of different types and sizes). Images from a standard database, with blood vessels marked by an ophthalmologist, were used for evaluation. Sensitivity, specificity, and accuracy were used as measures of method efficiency. Both approaches show promising results and could be used as part of image preprocessing before algorithms for the detection of pathological retinal findings.
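The exact morphological pipelines are not given in the abstract; a minimal illustration of the general idea (a black top-hat, i.e. grayscale closing minus the original, which highlights thin dark structures such as vessels) might look like this, using SciPy and a synthetic patch:

```python
import numpy as np
from scipy import ndimage

def vessel_response(img, size=7):
    """Black top-hat: grayscale closing minus the original.

    Thin dark structures (vessels) narrower than the structuring element
    are removed by the closing, so they appear as large positive values
    in the response.
    """
    closed = ndimage.grey_closing(img, size=(size, size))
    return closed - img

# Synthetic fundus patch: bright background crossed by a thin dark "vessel"
img = np.full((32, 32), 200.0)
img[16, :] = 50.0
resp = vessel_response(img)
mask = resp > 75          # crude segmentation of the enhanced vessel
```

The paper's methods combine several such operations with structuring elements of different types and sizes; this sketch shows only one of the standard building blocks.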
Motion detection plays a crucial role in most video-based applications. A particular background subtraction technique called ViBe (Visual Background Extractor) is commonly used to separate foreground objects from the background because of its high detection rate and low computational complexity. However, its performance is not fully satisfactory. Therefore, this paper presents an improved ViBe algorithm to increase the accuracy and robustness of motion detection. Specifically, a foreground feature map is created by optimizing the result of the ViBe algorithm. Then, edge detection of the original video frames is performed after pre-sharpening, using an improved Sobel operator and the Otsu algorithm. Finally, through feature fusion (of the foreground and background feature maps) and contour filling, the motion detection results are obtained. Experiments demonstrate the improvements of the proposed modifications at a limited additional cost.
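The paper's improved Sobel operator and pre-sharpening step are not detailed in the abstract; a plain-vanilla version of the Sobel-plus-Otsu edge detection stage, implemented from scratch in NumPy on a synthetic frame, can be sketched as follows:

```python
import numpy as np

def sobel_mag(img):
    """Gradient magnitude using the standard 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img.astype(float), 1, mode="edge")
    gx = np.zeros(img.shape, dtype=float)
    gy = np.zeros(img.shape, dtype=float)
    for i in range(3):                          # explicit 3x3 correlation
        for j in range(3):
            win = pad[i:i + img.shape[0], j:j + img.shape[1]]
            gx += kx[i, j] * win
            gy += ky[i, j] * win
    return np.hypot(gx, gy)

def otsu_threshold(values, bins=256):
    """Otsu's method: the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    w0 = np.cumsum(p)                           # class-0 probability
    mu = np.cumsum(p * edges[:-1])              # class-0 cumulative mean
    with np.errstate(invalid="ignore", divide="ignore"):
        sigma_b = (mu[-1] * w0 - mu) ** 2 / (w0 * (1 - w0))
    return edges[np.nanargmax(sigma_b)]

# A synthetic frame containing one bright square "object"
frame = np.zeros((40, 40))
frame[10:30, 10:30] = 200.0
sm = sobel_mag(frame)
edge_map = sm > otsu_threshold(sm)
```

In the paper, the resulting edge map is then fused with the ViBe foreground map and contour-filled; those steps are specific to the proposed method and are not reproduced here.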
ISBN (print): 9788132220084
As automated systems are efficient and decrease malfunctions and errors, industries are increasingly adopting such systems. Image-processing-based systems play a major role in enhancing this capability. However, designing an embedded system with these facilities is a difficult and challenging task. Another issue is that the user interface of such systems should be simple, so that the system is user-friendly. This piece of work attempts to develop such a system with various facilities. Hence, in this paper the authors showcase an attempt to realize image processing algorithms for different applications. The work is developed on a Raspberry Pi development board with Python 2.7.3 and OpenCV 2.3.1. The results show fully automated operation without any user intervention. The prime focus is on the Python Imaging Library (PIL), which, together with NumPy, SciPy, and Matplotlib, forms a powerful platform for scientific computing.
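As a minimal illustration of the kind of NumPy-based image processing step such a system might run, the sketch below localizes a bright object in a frame; the synthetic frame stands in for a Raspberry Pi camera capture, and the threshold is an assumed value:

```python
import numpy as np

# A synthetic RGB frame standing in for a camera capture
rgb = np.zeros((48, 64, 3), dtype=np.uint8)
rgb[10:40, 20:50] = (255, 128, 0)          # an orange "object" to locate

# Luminance conversion with ITU-R BT.601 weights, then a fixed threshold
gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
mask = gray > 64                           # threshold chosen for this example
ys, xs = np.nonzero(mask)
bbox = (xs.min(), ys.min(), xs.max(), ys.max())   # simple object localization
```

On the actual board, the frame would come from PIL or OpenCV capture calls and the bounding box would drive the downstream automation logic.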
ISBN (print): 9781509017423
PDR (Pedestrian Dead Reckoning) is a very promising technology for indoor positioning. We held a technical challenge, entitled the UbiComp/ISWC 2015 PDR Challenge, consisting of the following three categories: a PDR algorithm category, a PDR evaluation method category, and an exhibition. In this paper, we focus in particular on several systems from the PDR algorithm category. A PDR skeleton application was prepared for the participants. Using this Android skeleton, participants could concentrate on implementing the PDR algorithm itself, because the skeleton provides various supporting functions such as sensor data acquisition, trajectory visualization, and sensor data upload. The evaluation server automatically evaluates the accuracy of each PDR algorithm whenever sensor data is uploaded to the server, and provides a trajectory image file so that participants can compare their PDR algorithms in real time.
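The PDR algorithms themselves were up to the participants; a toy step-counting dead-reckoning loop of the kind such an algorithm would implement could be sketched as follows (the sampling rate, step threshold, stride length, and heading are all assumed values, and the accelerometer trace is synthetic):

```python
import numpy as np

fs = 50.0                              # IMU sampling rate, Hz (assumed)
t = np.arange(0, 10, 1 / fs)
# Synthetic accelerometer magnitude: gravity plus a 2 Hz walking bounce
acc = 9.81 + 1.5 * np.sin(2 * np.pi * 2.0 * t)

def count_steps(acc, thresh=10.5):
    """Count upward threshold crossings as steps (naive peak detection)."""
    above = acc > thresh
    return int(np.count_nonzero(above[1:] & ~above[:-1]))

steps = count_steps(acc)
stride = 0.7                           # assumed stride length, m
heading = np.pi / 2                    # assumed constant heading from gyro/compass
# Dead reckoning: accumulate stride vectors along the heading
pos = steps * stride * np.array([np.cos(heading), np.sin(heading)])
```

A real challenge entry would additionally estimate heading per step from gyroscope and magnetometer data and adapt the stride length, which is where most of the accuracy differences between participants arise.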
Approximate Computing is frequently mentioned as a new computing paradigm that enables improving energy efficiency at the expense of quality. But what is Approximate Computing? How can it be used in a truly innovative way that goes beyond reducing precision or approximating complex operations or algorithms at the expense of accuracy, as is already done regularly in the VLSI signal processing community for implementing complex video, audio, or communication systems? In this talk, we focus on Approximate Computing as a new paradigm to deal specifically with one of the most important problems of the semiconductor industry today: the reliability issues and uncertainties in modern process technologies that appear especially at low voltages. We show how Approximate Computing can and should be interpreted as a systematic idea for dealing with these reliability issues, which are statistical in nature and appear only at run-time. In this sense, the interpretation of the term is significantly different from the static, design-time interpretation used in the VLSI signal processing community. Approximations and corresponding circuits serve as a means to ensure graceful performance degradation at run-time in the presence of uncertainties or errors, rather than simply reducing complexity once at design time. This ability then allows not only for circuits with reduced area and better energy efficiency. It also enables better overall performance metrics, since each chip delivers at every moment of its life the best possible (adjustable) energy and quality trade-off, with energy-proportional behavior adjusted to its operating conditions and user demands.
ISBN (print): 9781467383530
Low-poly has lately become a trending style in flat design, Web design, illustration, and so on, in which only a small number of triangles is used to create an abstract and artistic effect. However, creating low-poly designs manually is obviously a tedious job. In this paper, we propose a real-time triangulation method to automatically synthesize images and videos into a low-poly style. In order to preserve edge and color information with a limited number of triangles, vertices on the edges detected in the image are given a greater probability of being selected to compose triangles. For video low-poly stylization, an anti-jittering method is proposed to eliminate abrupt changes in the position and color of triangles between adjacent frames. We use the OpenGL Shading Language (GLSL) to speed up the computation on the GPU. We compare images generated with our method with those of other algorithms and with images drawn manually by artists. The results show that our method produces an elegant and artistic low-poly effect in real time.
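The paper's probabilistic, edge-aware vertex selection is not reproducible from the abstract alone, but the core triangulate-then-flat-shade step of low-poly stylization can be sketched with SciPy's Delaunay triangulation (uniform random vertex sampling and a synthetic gradient image stand in for the real method):

```python
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(0)
h, w = 60, 80
img = np.tile(np.linspace(0, 255, w), (h, 1))   # a horizontal gradient "photo"

# Vertex sampling: the four image corners plus random interior points
pts = np.vstack([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1],
                 rng.uniform([0, 0], [h - 1, w - 1], size=(40, 2))])
tri = Delaunay(pts)

# Flat shading: fill each triangle with the color sampled at its centroid
ys, xs = np.mgrid[0:h, 0:w]
simplex = tri.find_simplex(np.column_stack([ys.ravel(), xs.ravel()]))
centroids = pts[tri.simplices].mean(axis=1)
colors = img[centroids[:, 0].astype(int), centroids[:, 1].astype(int)]
lowpoly = colors[simplex].reshape(h, w)
```

In the paper this rasterization runs as a GLSL shader on the GPU, and vertices are drawn preferentially from detected edges rather than uniformly, which is what preserves the salient structure of the input.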
ISBN (print): 9781467383479
Brain-Computer Interfaces (BCIs) are systems capable of capturing and interpreting intentional changes in brain activity (e.g. the intention of limb movement, or attention focused on a specific frequency or symbol) and translating them into sets of instructions that can be used to control a computer. The most popular hardware solutions in BCI are based on signals recorded by electroencephalography (EEG). Such signals can be used to record and monitor the bioelectrical activity of the brain. However, raw EEG scalp potentials are characterized by weak spatial resolution. For that reason, multichannel EEG recordings tend to provide an unclear image of brain activity, and special signal processing and analysis methods are needed. A typical approach to modern BCIs requires extensive use of machine learning methods. It is generally accepted that the performance of such systems is highly sensitive to the feature extraction step. One of the most effective and widely used descriptors of EEG data is the power of the signal calculated in a specific frequency range. To improve the performance of the chosen classification algorithm, the distribution of the extracted bandpower features is often normalized using the natural logarithm function. In this study, the normalization of the feature distribution was given careful consideration. The commonly used logarithm function is not always the best choice for this process. Therefore, the influence of different settings of the Box-Cox transformation on the skewness of the features, as well as on the overall classification accuracy, is tested in this article and compared with the classical approach that employs the natural logarithm function. For a better evaluation of the proposed approach, its effectiveness is tested on the task of classifying the benchmark data provided for "BCI Competition III" (dataset "IVa"), organized by the Berlin Brain-Computer Interface group.
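The Box-Cox alternative to the logarithm can be tried directly with SciPy; the sketch below compares the skewness of synthetic, lognormally distributed bandpower-like features before and after a fitted Box-Cox transform (the data are simulated, not EEG recordings, and the lognormal shape is an assumption for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Bandpower-like features: strictly positive and heavily right-skewed
power = rng.lognormal(mean=0.0, sigma=1.0, size=2000)

log_feat = np.log(power)                  # the classical normalization
bc_feat, lam = stats.boxcox(power)        # Box-Cox with lambda fitted by MLE

skew_raw = stats.skew(power)              # strongly positive before the transform
skew_bc = stats.skew(bc_feat)             # close to 0 after the transform
```

For lognormal data the fitted lambda lands near 0, where Box-Cox reduces to the logarithm; on differently skewed feature distributions the fitted lambda, and hence the transform, would deviate from the plain log, which is exactly the flexibility the paper investigates.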
ISBN (print): 9781467383530
This paper presents an approximated Robust Principal Component Analysis (ARPCA) framework for the recovery of a set of linearly correlated images. Our algorithm seeks an optimal solution for decomposing a batch of realistic, unaligned, and corrupted images as the sum of a low-rank matrix and a sparse corruption matrix, while simultaneously aligning the images according to the optimal image transformations. This extremely challenging optimization problem is reduced to solving a number of convex programs that minimize the sum of the Frobenius norm and the l1-norm of the aforementioned matrices, with guaranteed faster convergence than state-of-the-art algorithms. The efficacy of the proposed method is verified by extensive experiments on real and synthetic data.
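The paper's ARPCA with image alignment is not reproducible from the abstract, but the underlying robust low-rank-plus-sparse decomposition (classical principal component pursuit with nuclear-norm and l1 proximal steps, solved by a basic ADMM loop) can be sketched as follows; the parameter choices are common defaults from the RPCA literature, not the paper's:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0)[:, None] * Vt)

def shrink(X, tau):
    """Soft thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)

def rpca(D, iters=500):
    """Principal component pursuit via a basic ADMM loop (no alignment step)."""
    m, n = D.shape
    lam = 1 / np.sqrt(max(m, n))               # standard regularization weight
    mu = 0.25 * m * n / np.abs(D).sum()        # common step-size heuristic
    L = np.zeros_like(D); S = np.zeros_like(D); Y = np.zeros_like(D)
    for _ in range(iters):
        L = svt(D - S + Y / mu, 1 / mu)        # low-rank update
        S = shrink(D - L + Y / mu, lam / mu)   # sparse update
        Y = Y + mu * (D - L - S)               # dual ascent on the residual
    return L, S

rng = np.random.default_rng(2)
L0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))  # rank-2 part
S0 = np.zeros((30, 30))
spikes = rng.random((30, 30)) < 0.05
S0[spikes] = 10 * rng.standard_normal(spikes.sum())               # sparse errors
L, S = rpca(L0 + S0)
```

The paper's contribution lies in approximating this decomposition (replacing one term with a Frobenius norm) and interleaving it with the image-alignment updates, which this plain sketch omits.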