ISBN:
(Print) 9780819490476
There are many methods to acquire multispectral images, but a dynamic band-selective, area-scan multispectral camera has not yet been developed. This research focused on the development of a filter-exchangeable 3CCD camera modified from a conventional 3CCD camera. The camera consists of an F-mount lens, an image splitter without dichroic coating, three bandpass filters, three image sensors, a filter-exchangeable frame, and an electric circuit for parallel image signal processing. In addition, firmware and application software were developed. Remarkable improvements over a conventional 3CCD camera are the redesigned image splitter and the filter-exchangeable frame. Computer simulation was required to visualize the ray path inside the prism when redesigning the image splitter. The dimensions of the splitter were then determined by computer simulation with options for BK7 glass and a non-dichroic coating; these properties were considered so that full-wavelength rays reach all film planes. The image splitter was verified with two narrow-waveband line lasers. The filter-exchangeable frame is designed so that bandpass filters can be swapped without displacing the image sensors on the film plane. The developed 3CCD camera was evaluated in an application detecting scab and bruise on Fuji apples. As a result, the filter-exchangeable 3CCD camera could provide meaningful functionality for various multispectral applications that require exchanging bandpass filters.
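The image-splitter redesign above hinges on tracing how rays refract at each glass interface. As an illustrative sketch only (not the paper's actual simulation, whose prism geometry is not given here), Snell's law at an air–BK7 interface (n ≈ 1.5168 at 587 nm) can be computed as follows:

```python
import math

def refract_angle(theta_in_deg, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).
    Returns the refraction angle in degrees, or None on total internal reflection."""
    s = n1 / n2 * math.sin(math.radians(theta_in_deg))
    if abs(s) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.degrees(math.asin(s))

# Illustrative values: a ray entering BK7 glass (n ~ 1.5168 at 587 nm) from air at 30 deg
theta_glass = refract_angle(30.0, 1.0, 1.5168)

# A steep ray trying to exit the glass exceeds the critical angle (~41.2 deg)
tir = refract_angle(45.0, 1.5168, 1.0)
```

Repeating this calculation at every surface of the splitter prism is what such a ray-path simulation does at its core.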
This dissertation is a study on reliability and performance improvement of distributed systems for large-scale data processing, conducted from 1999 to 2012 by the author, who is affiliated with Hitachi Solutions, Ltd. and the Graduate School of Information Science and Technology, Osaka University. Recently, as the growth rate of data increases, technologies for storing and processing data at low cost have become essential in enterprises and organizations. Because the prices of high-performance commodity servers are falling, it has become popular to construct a cluster or a distributed system of such machines for parallel and distributed processing of large-scale data sets. In those distributed systems, security, performance, scalability, and availability are the principal measures of the system. From the security perspective, the thin client system is one of the effective security methods for distributed systems dealing with a great number of files on central file servers. However, it has not become widespread because of its high introduction and operational costs. For this reason, a methodology for constructing a low-cost thin client system is required. From the high-performance perspective, technologies for batch and real-time processing on clusters of commodity machines are significant. As for batch processing, one of the important applications is the high-performance parallel generation of segmented indexes for a distributed search system. Resource savings and performance improvement in index generation and reconfiguration are strongly required. Furthermore, as a representative example of real-time processing, parallel rendering on a PC cluster is a significant application. As image data has high resolution and fine-grained quality with a great number of polygons, it is essential to realize high-performance rendering of large-scale three-dimensional images. Based on this background, we address the following three issues.
ISBN:
(Print) 9783642311277; 9783642311284
Parallel to new developments in the fields of computer networks and high-performance computing, effective distributed systems have emerged to answer the growing demand to process huge amounts of data. Compared to traditional network systems, aimed mostly at transmitting data, distributed computing systems also focus on data processing, which introduces additional requirements on system performance and operation. In this paper we assume that the distributed system works in an overlay mode, which enables fast, cost-effective, and flexible deployment compared to the traditional network model. The objective of the design problem is to optimize task scheduling and network capacity in order to minimize the operational cost and to realize all computational projects assigned to the system. The optimization problem is formulated as an ILP (Integer Linear Programming) model. Due to the problem's complexity, four heuristics are proposed, including evolutionary algorithms and a Tabu Search algorithm. All methods are evaluated against the optimal results yielded by the CPLEX solver. The best performance is obtained by the Tabu Search method, which provides average results only 0.38% worse than the optimal ones. Moreover, for larger problem instances with a 20-minute execution-time limit, the Tabu Search algorithm outperforms CPLEX in some cases.
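Tabu Search explores the neighborhood of the current solution while forbidding recently visited solutions, which lets it escape local optima. A minimal sketch on a toy scheduling instance (the paper's ILP model, cost structure, and neighborhood are not reproduced; all names and numbers below are illustrative):

```python
def tabu_search(cost, neighbors, start, iters=200, tabu_len=10):
    """Generic Tabu Search: move to the best non-tabu neighbor each step,
    remember the best solution seen, and keep a short-term memory of visited states."""
    current = best = start
    best_cost = cost(best)
    tabu = []
    for _ in range(iters):
        cands = [s for s in neighbors(current) if s not in tabu]
        if not cands:
            break
        current = min(cands, key=cost)
        tabu.append(current)
        if len(tabu) > tabu_len:
            tabu.pop(0)  # expire the oldest tabu entry
        if cost(current) < best_cost:
            best, best_cost = current, cost(current)
    return best, best_cost

# Toy instance: assign 4 tasks (with given loads) to 2 nodes; cost = load imbalance
tasks = [3, 1, 4, 2]

def cost(assign):
    loads = [0, 0]
    for t, node in zip(tasks, assign):
        loads[node] += t
    return abs(loads[0] - loads[1])

def neighbors(assign):
    # Move one task to the other node
    out = []
    for i in range(len(assign)):
        flipped = list(assign)
        flipped[i] = 1 - flipped[i]
        out.append(tuple(flipped))
    return out

best, best_cost = tabu_search(cost, neighbors, (0, 0, 0, 0))
```

On this tiny instance the search quickly reaches a perfectly balanced assignment (cost 0); the paper applies the same idea to the much larger joint scheduling and capacity problem.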
ISBN:
(Print) 9780819484093
Advances in the image processing field have brought new methods which are able to perform complex tasks robustly. However, in order to meet constraints on functionality and reliability, imaging application developers often design complex algorithms with many parameters which must be finely tuned for each particular environment. The best approach for tuning these algorithms is to use an automatic training method, but the computational cost of this kind of training is prohibitive, making it unviable even on powerful machines. The same problem arises when designing testing procedures. This work presents methods to train and test complex image processing algorithms in parallel execution environments. The approach proposed in this work is to use existing resources in offices or laboratories rather than expensive clusters. These resources are typically non-dedicated, heterogeneous, and unreliable, and the proposed methods have been designed to deal with all of these issues. Two methods are proposed: intelligent training based on genetic algorithms and PVM, and a full factorial design based on grid computing which can be used for training or testing. These methods are capable of harnessing the available computational resources, giving more work to more powerful machines while taking their unreliable nature into account. Both methods have been tested using real applications.
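The full factorial design mentioned above simply enumerates every combination of parameter levels, so each combination becomes an independent job that can be farmed out to whatever machines are available. A minimal sketch of the enumeration, with a hypothetical parameter grid:

```python
from itertools import product

def full_factorial(param_grid):
    """Yield one dict per combination of parameter levels (full factorial design)."""
    names = sorted(param_grid)  # fixed order so runs are reproducible
    for values in product(*(param_grid[n] for n in names)):
        yield dict(zip(names, values))

# Hypothetical tuning grid for an image-processing algorithm (names are illustrative)
grid = {"threshold": [0.3, 0.5, 0.7], "kernel": [3, 5]}
runs = list(full_factorial(grid))  # 3 x 2 = 6 independent runs
```

Because each run is independent, a scheduler can hand them out to heterogeneous machines and simply reissue any run lost to a node failure, which matches the unreliable-resource setting described above.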
Most small-animal X-ray computed tomography (CT) scanners are based on cone-beam geometry with a flat-panel detector orbiting in a circular trajectory. Image reconstruction in these systems is usually performed by approximate methods based on the algorithm proposed by Feldkamp et al. Currently there is a strong need to speed up the reconstruction of X-ray CT data in order to extend its clinical applications. We present an efficient modular implementation of an FDK-based reconstruction algorithm that takes advantage of the parallel computing capabilities and the efficient bilinear interpolation provided by general-purpose graphics processing units (GPGPUs). The proposed implementation of the algorithm is evaluated on a high-resolution micro-CT and achieves a speed-up of 46 while preserving reconstructed image quality.
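The bilinear interpolation that GPU texture units provide in hardware is the inner sampling step of FDK backprojection: each voxel is projected onto the detector and the projection value is sampled at a fractional detector coordinate. A simplified CPU sketch of that sampling (for illustration only; not the paper's GPGPU implementation):

```python
import math

def bilinear(img, x, y):
    """Sample a 2-D array at fractional coordinates (x, y) by bilinear interpolation,
    the operation GPU texture units perform in hardware during backprojection."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0][x0]
            + dx * (1 - dy) * img[y0][x0 + 1]
            + (1 - dx) * dy * img[y0 + 1][x0]
            + dx * dy * img[y0 + 1][x0 + 1])

# Tiny 2x2 "detector" patch; sampling at the exact center averages the four corners
patch = [[0.0, 1.0],
         [2.0, 3.0]]
val = bilinear(patch, 0.5, 0.5)
```

In the full FDK algorithm this sampling is performed once per voxel per projection, after the projections have been cosine-weighted and ramp-filtered, which is why offloading it to the GPU yields such a large speed-up.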
The processing of microscopic tissue images, and especially the detection of cell nuclei, is nowadays increasingly done using digital imagery and special immunodiagnostic software products. Since several methods (and a...
Systems intended for the execution of long-running parallel applications require fault tolerant capabilities, since the probability of failure increases with the execution time and the number of nodes. Checkpointing and rollback recovery is one of the most popular techniques to provide fault tolerance support. However, in order to be useful for large scale systems, current checkpoint-recovery techniques should tackle the problem of reducing checkpointing cost. This paper addresses this issue through the reduction of the checkpoint file sizes. Different solutions to reduce the size of the checkpoints generated at application level are proposed and implemented in a checkpointing tool. Detailed experimental results on two multicore clusters show the effectiveness of the proposed methods.
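One generic way to shrink application-level checkpoint files is to compress the serialized state before writing it out; highly regular solver state compresses very well. The in-memory sketch below illustrates the idea only (the specific size-reduction techniques of the paper's checkpointing tool are not reproduced here, and the state layout is hypothetical):

```python
import pickle
import zlib

def make_checkpoint(state):
    """Serialize application state and compress it to reduce checkpoint size."""
    return zlib.compress(pickle.dumps(state), level=6)

def restore_checkpoint(blob):
    """Inverse of make_checkpoint: decompress, then deserialize."""
    return pickle.loads(zlib.decompress(blob))

# Hypothetical solver state: a large, highly compressible zero-filled grid
state = {"iteration": 42, "grid": [0.0] * 10000}
raw_size = len(pickle.dumps(state))   # uncompressed checkpoint size
blob = make_checkpoint(state)         # compressed checkpoint
restored = restore_checkpoint(blob)
```

The trade-off is CPU time spent compressing versus I/O time and storage saved, which is exactly the kind of checkpointing cost the experimental evaluation on multicore clusters would need to measure.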
An object-based image processing technique is applied to detect inundated areas using Landsat images. An interoperable Web-based system was developed to conduct the analyses so that redundant steps in Landsat image processing can be effectively eliminated. A review process is used to discover and develop suitable algorithms to automatically detect inundated areas and immediately transfer the results to the Web-based interface. This is a significant improvement over currently available inundation detection systems. The tool is expected to provide a practical inundated-area detection function and to be applicable across wide-reaching areas.
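Spectral water indices such as NDWI (Normalized Difference Water Index) are a common building block for flagging water in Landsat imagery. The sketch below is illustrative only — the paper uses an object-based technique whose details are not given here, and the reflectance values are made up:

```python
def ndwi(green, nir):
    """NDWI = (Green - NIR) / (Green + NIR).
    Water reflects more green than near-infrared light, so NDWI > 0 suggests water."""
    total = green + nir
    return (green - nir) / total if total else 0.0

# Hypothetical (green, NIR) surface reflectances: first a land pixel, then a water pixel
pixels = [(0.10, 0.30), (0.25, 0.05)]
flooded = [ndwi(g, n) > 0 for g, n in pixels]
```

An object-based method would go further by grouping pixels into segments and classifying each segment using shape and context as well as spectra, rather than thresholding pixels independently.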
In this work, we introduce a new approach to the signal deconvolution problem, which is useful for the enhancement of neutron radiography projections. We attempt to restore original signals and remove noise introduced during acquisition or processing due to gamma radiation or a randomly distributed neutron flux. Signal deconvolution is an ill-posed inverse problem, so regularization techniques are used to smooth solutions by imposing constraints in the objective function. Various popular algorithms have been developed to solve such problems. This paper proposes a new approach to the restoration of nonlinearly degraded signals, useful in many signal enhancement applications, based on a synergy of two swarm intelligence algorithms: particle swarm optimization (PSO) and bacterial foraging optimization (BFO), applied to total variation (TV) minimization instead of the standard Tikhonov regularization method. We attempt to reconstruct or recover signals using a priori knowledge of the degradation phenomenon. The truncated singular value decomposition and wavelet filtering methods are also considered in this paper. A comparison between several powerful techniques is conducted.
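TV regularization penalizes the sum of absolute differences between neighboring samples, which suppresses noise while preserving edges, and a swarm optimizer can minimize the resulting non-smooth objective without needing gradients. A minimal PSO sketch on a toy 1-D signal (the paper's PSO–BFO hybrid and its actual deconvolution model are not reproduced; all parameters below are illustrative):

```python
import random

def total_variation(x):
    """TV penalty: sum of absolute jumps between adjacent samples."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def pso(objective, dim, n_particles=20, iters=150, seed=0):
    """Minimal global-best particle swarm optimization."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and attraction weights (common defaults)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = objective(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy TV-regularized objective: fidelity to a noisy step signal y plus a TV penalty
y = [0.0, 0.0, 1.0, 1.0]
lam = 0.1
objective = lambda x: sum((a - b) ** 2 for a, b in zip(x, y)) + lam * total_variation(x)
sol, val = pso(objective, dim=len(y))
```

In a real deconvolution setting the fidelity term would compare the blurred candidate (candidate convolved with the point spread function) against the measured projection, but the optimizer itself is unchanged.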