Image-based rendering (IBR) consists of several steps: (i) calibration (or ego-motion computation) of all input images, (ii) determination of the regions in the input images used to synthesize the new view, and (iii) interpolation of the new view from the selected areas of the input images. We propose a unified representation for all these aspects of IBR using the space-time (x-y-t) volume. The presented approach is very robust and allows IBR to be used in general conditions, even with a hand-held camera. To take care of (i), the space-time volume is constructed by placing frames at locations along the time axis so that image features create straight lines in the epipolar plane images (EPIs). Different slices of the space-time volume are then used to produce new views, taking care of (ii). Step (iii) is done by interpolating between image samples using the feature lines in the EPIs. IBR examples are shown for various cases: sequences taken from a driving car, from a hand-held camera, and with a tripod.
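The space-time volume construction and slicing above can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the `interpolate_view` blend is a simple stand-in for the paper's feature-line interpolation, and frame placement along the time axis is assumed uniform.

```python
import numpy as np

def build_space_time_volume(frames):
    # Stack frames along a new time axis: shape (T, H, W).
    return np.stack(frames, axis=0)

def epi_slice(volume, row):
    # An EPI is a fixed-row slice of the volume:
    # time runs along one axis, image x along the other.
    return volume[:, row, :]

def interpolate_view(volume, t):
    # Synthesize a view at fractional time t by blending the two
    # nearest frames (a stand-in for feature-line interpolation).
    t0 = int(np.floor(t))
    t1 = min(t0 + 1, volume.shape[0] - 1)
    a = t - t0
    return (1 - a) * volume[t0] + a * volume[t1]

# Three constant-intensity frames of a 4x5 "video".
frames = [np.full((4, 5), float(i)) for i in range(3)]
vol = build_space_time_volume(frames)
view = interpolate_view(vol, 0.5)
```

A new view between frames 0 and 1 is thus a slice of the volume at a non-integer time coordinate.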
This work deals with the problem of dense matching from stereo images under wide-baseline conditions. By considering the characteristic properties of widely separated views, we propose an extension to a recently published algorithm for detailed reconstruction of continuous surfaces. The algorithm compensates for the affine distortion that occurs between the views to allow the low-level intensity-based comparison necessary for dense matching. The matching itself is performed by an enhanced region-growing-based affine propagation method that takes surface distortion into account to handle complex piecewise-smooth surfaces. It is experimentally shown that this new method can achieve smooth and accurate reconstruction from wide-baseline images of both indoor and outdoor scenes. To quantify the reconstruction results, we have created realistic synthetic datasets with ground truth. These datasets form the core of a future testbed for comparing different wide-baseline surface reconstruction techniques. In the current study, results on the synthetic images are compared to the ground truth to measure the accuracy of our method.
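The region-growing propagation idea can be illustrated with a much-simplified sketch. The `sad` and `propagate` helpers are hypothetical names; the paper's method additionally estimates and propagates a local affine distortion per match, which is omitted here in favour of a single constant disparity inherited from the seed.

```python
import heapq
import numpy as np

def sad(left, right, p, q, w=1):
    # Sum of absolute differences over a (2w+1)x(2w+1) window.
    lp = left[p[0]-w:p[0]+w+1, p[1]-w:p[1]+w+1]
    rp = right[q[0]-w:q[0]+w+1, q[1]-w:q[1]+w+1]
    return float(np.abs(lp - rp).sum())

def propagate(left, right, seed_l, seed_r, max_cost=1.0, w=1):
    # Best-first region growing from one seed match: neighbours of
    # matched pixels inherit the seed's disparity and are accepted
    # only while their window cost stays below max_cost.
    H, W = left.shape
    d = (seed_r[0] - seed_l[0], seed_r[1] - seed_l[1])
    heap = [(sad(left, right, seed_l, seed_r, w), seed_l)]
    matched = {}
    while heap:
        cost, p = heapq.heappop(heap)
        if p in matched or cost > max_cost:
            continue
        matched[p] = (p[0] + d[0], p[1] + d[1])
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (p[0] + dy, p[1] + dx)
            q = (n[0] + d[0], n[1] + d[1])
            if (w <= n[0] < H - w and w <= n[1] < W - w and
                    w <= q[0] < H - w and w <= q[1] < W - w and
                    n not in matched):
                heapq.heappush(heap, (sad(left, right, n, q, w), n))
    return matched

rng = np.random.default_rng(1)
left = rng.random((8, 8))
right = np.zeros((8, 8))
right[:, 2:] = left[:, :6]          # right is left shifted by 2 columns
matched = propagate(left, right, (4, 3), (4, 5))
```

In the real algorithm, the fixed disparity would be replaced by a locally re-estimated affine mapping so the growth can follow piecewise-smooth surfaces.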
Experimental structure analysis of biological molecules (e.g., proteins) or macromolecular complexes (e.g., viruses) can be used to generate three-dimensional density maps of these entities. Such a density map can be viewed as a three-dimensional gray-scale image in which space is subdivided into voxels of a given size. The focus of this paper is the analysis of virus density maps. The hull of a virus consists of many copies of one or several different proteins. An important tool for the study of viruses is cryo-electron microscopy (cryo-EM), a technique whose resolution is insufficient to directly determine the arrangement of the proteins in the virus. We therefore created a tool that locates proteins in the three-dimensional density map of a virus. The goal is to fully determine the locations and orientations of the protein(s) in the virus, given the virus's three-dimensional density map and a database of density maps of one or more protein candidates.
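As a toy illustration of locating a template in a density map, a brute-force translational correlation search might look like the sketch below. This is not the paper's tool: `best_translation` is a hypothetical helper, the search covers translations only (the actual problem also requires searching over orientations), and real implementations would use FFT-based correlation rather than nested loops.

```python
import numpy as np

def best_translation(density, template):
    # Brute-force search for the translation maximizing the
    # (unnormalized) correlation of the template with the map.
    dz, dy, dx = template.shape
    Z, Y, X = density.shape
    best, best_pos = -np.inf, None
    for z in range(Z - dz + 1):
        for y in range(Y - dy + 1):
            for x in range(X - dx + 1):
                score = np.sum(density[z:z+dz, y:y+dy, x:x+dx] * template)
                if score > best:
                    best, best_pos = score, (z, y, x)
    return best_pos

rng = np.random.default_rng(0)
density = np.zeros((8, 8, 8))
template = rng.random((3, 3, 3))
density[2:5, 3:6, 1:4] = template   # embed the "protein" at (2, 3, 1)
```

The embedded copy gives the highest correlation score, so the search recovers the true offset.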
Modern automatic digitizers can sample huge amounts of 3D data points on an object's surface in a short time. Point-based graphics is becoming a popular framework for reducing the cardinality of these data sets and filtering measurement noise, without having to store in memory and process mesh connectivity. The main contribution of this paper is the introduction of soft clustering techniques to the field of point cloud processing. In this approach, data points are not assigned to a single cluster; instead, each point contributes to the determination of the positions of several cluster centres. As a result, a better representation of the data is achieved. In soft clustering techniques, a data set is represented by a reduced number of points called reference vectors (RVs), which minimize an adequate error measure. As the positions of the RVs are determined by "learning", which can be viewed as an iterative optimization procedure, these techniques are inherently slow. We show here how, by partitioning the data domain into disjoint regions called hyperboxes (HBs), the computation can be localized and the computational time reduced to linear in the number of data points (O(N)), saving more than 75% in real applications with respect to classical soft-VQ solutions and therefore making VQ suitable for the task. The procedure is suitable for a parallel hardware implementation, which would lead to a complexity sublinear in N. An automatic procedure for setting the voxel side and the other parameters can be derived from analysis of the data set. Results obtained in the reconstruction of faces of both humans and puppets, as well as on models from point clouds made available on the Web, are reported and discussed in comparison with other available methods.
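A minimal sketch of plain (global) soft vector quantization is shown below; the paper's hyperbox partitioning, which localizes this update and yields O(N) time, is omitted. The `soft_vq` helper, its softness parameter `beta`, and the deterministic initialization are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def soft_vq(points, k, beta=5.0, iters=50):
    # Soft VQ: every point contributes to every reference vector
    # with a softmax weight based on squared distance, and each RV
    # is updated to the weighted mean of all points.
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    rvs = points[idx].astype(float)          # deterministic init
    for _ in range(iters):
        d2 = ((points[:, None, :] - rvs[None, :, :]) ** 2).sum(-1)
        w = np.exp(-beta * d2)
        w /= w.sum(axis=1, keepdims=True)    # soft assignments
        rvs = (w[:, :, None] * points[:, None, :]).sum(0) / w.sum(0)[:, None]
    return rvs

# Two tight clusters at the origin and at (1, 1, 1).
pts = np.concatenate([np.zeros((20, 3)), np.ones((20, 3))])
rvs = soft_vq(pts, k=2)
```

Because every point touches every RV, each iteration costs O(N·k); restricting the update to points inside an RV's hyperbox is what removes the factor k in the paper's method.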
This paper gives a summary of automatic passive reconstruction of 3D scenes from images and video. Features are tracked between the images, and relative camera poses are then estimated from the feature tracks. Both the feature tracking and the robust method used to estimate the camera poses have recently been shown to provide robust real-time performance. Given the camera poses, a more elaborate method is used to derive dense textured surfaces. The dense reconstruction method was originally part of the author's thesis [D. Nistér (2001)]. It first computes depth maps using graph cuts, optimizing a Bayesian formulation. The graph cut operation is used to let depth map hypotheses from various sources compete in order to optimize the cost function. The hypothesis generators used are feature-based surface triangulation, plane fitting, and multi-hypothesis multi-scale patch-based stereo. The requirements for a cost function to fit into the graph cut framework were already given in [D. Nistér (2001)], along with a procedure for constructing the graph weights corresponding to any cost function that satisfies the requirements. Depth maps from many different viewpoints are then robustly fused into a consistent global result using a process called median fusion. The robustness of the median fusion is important: it departs from the all-too-common practice of fusing results by using either the union of free-space constraints as the resulting free space or the union of all predicted surfaces as the resulting surfaces, which is tremendously error-prone, since it essentially corresponds to taking either the minimum or the maximum of the individual results.
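The robustness argument at the end of the abstract can be illustrated with a toy per-pixel sketch. The paper's median fusion involves visibility reasoning across many viewpoints; this simplified `median_fuse` only shows why a per-pixel median resists a single outlying depth map, where a min (union of free space) or max (union of surfaces) would not.

```python
import numpy as np

def median_fuse(depth_maps):
    # Per-pixel median across candidate depth maps: a single
    # outlier cannot corrupt the fused result, unlike the min or
    # max, which are dragged arbitrarily far by one bad estimate.
    stack = np.stack(depth_maps, axis=0)
    return np.median(stack, axis=0)

good = [np.full((2, 2), 5.0), np.full((2, 2), 5.1), np.full((2, 2), 4.9)]
outlier = np.full((2, 2), 100.0)      # one grossly wrong depth map
fused = median_fuse(good + [outlier])
```

Here the fused depth stays near 5 despite the outlier, while `np.max` over the same stack would report 100 everywhere.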
Polygonal models are ubiquitous in computer graphics, but their real-time manipulation and rendering, especially in interactive application environments, have become a challenge because of their huge size and complexity. A whole family of automatic algorithms emerged during the last decade to address this problem, but they reduce the complexity of the models uniformly, without regard to the semantic importance of localized parts of a mesh. Only a few algorithms deal with adaptive simplification of polygonal models. We propose a new model for user-driven simplification that exploits the simplification hierarchy and the hypertriangulation model [P. Cignoni et al.], offers the user most of the functionality of existing adaptive simplification algorithms in one place, and is quite simple to implement. The proposed underlying data structures are compact and support real-time navigation across continuous LODs of a model; any desired LOD can be extracted efficiently and further fine-tuned. The proposed model for adaptive simplification supports two key operations, selective refinement and selective simplification, whose effects have been demonstrated on various polygonal models. Comparison with related work shows that the proposed model provides a combined environment with reduced memory overhead and faster running times.
Image motion analysis is a significant subject in image processing and is divided into different categories according to its characteristics. This paper focuses on camera jitter: fast translations in an image sequence due to an unstable platform. Our studies have led us to perform correlation matching [C-S. Fuh et al. (1991)] between well-featured templates [M.A.Z. Pisheh et al. (2003), Y-P. Tan et al. (1996)] of successive frames to avoid aperture problems and thus obtain good estimates of the translations. But camera vibration may also involve rotation and scaling as nonlinear motions [G.J. Foster et al. (2000)] that can degrade the matching process. Therefore, the mapping of 3D scene points onto the camera image plane has been investigated to trace the effects of these nonlinear parameters in the 2D motion equations [F.V. Heijden (1996)]. By approximating these equations for small rotations and bounded scaling between frames i and i+1, the set of motion equations with four unknowns (rotation, scaling, and horizontal/vertical translations) can be solved by the least-squares estimation (LSE) method [M.S. Grewal (2001)] for a unique, optimal solution. Finally, applying the estimated displacements to each frame for compensation yields a stabilized sequence.
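Under the small-rotation, bounded-scaling approximation, the four-parameter model is linear in a = s·cos(θ) and b = s·sin(θ), so it can be solved directly by least squares. The sketch below illustrates this; the point correspondences and the `estimate_similarity` helper are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def estimate_similarity(src, dst):
    # Linear least squares for [a, b, tx, ty] in the model
    #   x' = a*x - b*y + tx,   y' = b*x + a*y + ty,
    # where a = s*cos(theta) and b = s*sin(theta).
    A, rhs = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, -y, 1, 0]); rhs.append(xp)
        A.append([y,  x, 0, 1]); rhs.append(yp)
    p, *_ = np.linalg.lstsq(np.array(A, float), np.array(rhs, float),
                            rcond=None)
    a, b, tx, ty = p
    return np.hypot(a, b), np.arctan2(b, a), tx, ty

# Template positions in frame i, moved by a known jitter motion:
# scale 1.02, rotation 0.01 rad, shift (2, -1).
src = [(0, 0), (10, 0), (0, 10), (10, 10)]
s, th = 1.02, 0.01
dst = [(s*np.cos(th)*x - s*np.sin(th)*y + 2,
        s*np.sin(th)*x + s*np.cos(th)*y - 1) for x, y in src]
est = estimate_similarity(src, dst)
```

With four or more template matches, the overdetermined system yields a unique least-squares solution for scale, rotation, and the two translations.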
"Policy-Maker" is an implementation of our concept for the security management of heterogeneous networks. It is entirely based on the Common Information Model (CIM) and the Web-Based Enterprise Management (W...
详细信息
"Policy-Maker" is an implementation of our concept for the security management of heterogeneous networks. It is entirely based on the Common Information Model (CIM) and the Web-Based Enterprise Management (WBEM) architecture, which is an industry standard of the distributed Management Task Force (dMTF). In our concept an administrator can specify security policies uniformly anddirectly within the CIM data model via a comfortable graphical user interface (GUI) provided by our "Policy-Editor". The policies are processed and executed within the WBEM architecture, in which component specific "providers" map the policies to the mechanisms of the target network devices. A policy can represent a hierarchy of rules, which are handled and solved in our concept by using several hierarchic provider-calls within the WBEM framework. Furthermore, CIM-based policy models for some concrete security mechanisms (e.g. IP-Firewalls) have been designed and implemented for testing the "Policy-Maker". The "Policy-Maker" Toolkit includes a 3d network visualization tool, which provides information on the current network topology and the available security components for the administrator. Finally "Policy-Maker" provides simulation components for testing the configuration of the network and its security mechanisms before a new configuration is applied to the real system.
This work presents a method for the high-speed calculation of crude depth maps. Performance and applicability are illustrated for view interpolation based on two input video streams, but the algorithm is perfectly amenable to multi-camera environments. First, a fast plane sweep algorithm generates the crude depth map. Its speed results from the hardware-accelerated transformations and parallel processing available on the GPU. All computations on the graphics board are performed pixel-wise, and a single pass of the sweep processes only one input resolution. A second step uses a min-cut/max-flow algorithm to improve the previous result. The depth map, a noisy interpolated image, and correlation measures are available on the GPU; they are reused and combined with spatial connectivity information and temporal continuity considerations in a graph formulation. Position-dependent sampling densities allow the system to use multiple image resolutions. The min-cut separation of this graph yields the global minimum of the associated energy function. Limiting the search range according to the initialisation provided by the plane sweep further speeds up the process. The only required hardware is two cameras and a regular PC.
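A toy plane sweep on rectified images can be sketched as follows: each candidate plane reduces to one integer disparity, and per pixel the disparity with the lowest matching cost wins. The `plane_sweep_depth` helper is a CPU illustration only; the GPU warping, multi-resolution handling, and the min-cut refinement step described above are all omitted.

```python
import numpy as np

def plane_sweep_depth(left, right, disparities):
    # For each candidate disparity (one "plane"), shift the right
    # image and record the per-pixel absolute intensity difference;
    # then take the per-pixel winner-take-all disparity.
    H, W = left.shape
    cost = np.full((len(disparities), H, W), np.inf)
    for i, d in enumerate(disparities):
        if d == 0:
            shifted = right
        else:
            shifted = np.full_like(right, np.inf)  # invalid border
            shifted[:, d:] = right[:, :W - d]
        cost[i] = np.abs(left - shifted)
    return np.array(disparities)[np.argmin(cost, axis=0)]

# Synthetic rectified pair: left is right shifted by disparity 2.
right = np.tile(np.arange(8.0), (4, 1))
left = np.zeros_like(right)
left[:, 2:] = right[:, :6]
depth = plane_sweep_depth(left, right, [0, 1, 2, 3])
```

In the full method, this crude winner-take-all map (and the per-plane costs) would feed the graph formulation whose min-cut yields the globally optimal labeling.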
Looking around our everyday environment, many of the objects we encounter are specular to some degree. Actively using this fact when reconstructing objects from image sequences is the scope of the shape-from-specularities problem. One reason this problem is important is that standard structure-from-motion techniques fail when the object surfaces are specular. Here, the problem is addressed by estimating surface shape using information from the specular reflections. A specular reflection gives constraints on the surface normal. The approach differs significantly from many earlier shape-from-specularities methods in that the normal data used is sparse. The main contribution is a solid foundation for shape-from-specularities problems. Estimation of surface shape using reflections is formulated as a variational problem, and the surface is represented implicitly using a level set formulation. A functional incorporating all surface constraints is proposed, and the corresponding level set motion PDE is explicitly derived. This motion is then proven to minimize the functional. As part of this functional, a variational approach to normal alignment is proposed and analyzed. Novel methods for implicit surface interpolation of sparse point sets are also presented together with a variational analysis. Experiments on both real and synthetic data support the proposed method.