A commonly encountered problem when creating 3D models of large real scenes is unnatural color texture fusion. Due to variations in lighting and camera settings (both manual and automatic), captured color texture maps...
This paper presents a new unsupervised technique aimed at generating stereoscopic views by estimating depth information from a single input image. Using a single input image, vanishing lines/points are extracted using a fe...
ISBN: (Print) 0769522238
We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the depth map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first-order approximation to piecewise continuity. We highlight the fact that a horizontally slanted surface (i.e. one having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. Thus, when corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image, which has consequences with regard to sampling and occlusion detection. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of new constraints provides correct results.
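The stretching effect described in this abstract is easy to see numerically. Below is a minimal Python sketch, with a hypothetical focal length, baseline, and slant that do not come from the paper, showing that N pixels on a left scanline covering a horizontally slanted plane occupy a different horizontal extent in the right image.

import numpy as np

f, B = 500.0, 0.1          # focal length (px) and baseline (m), assumed toy values
Z0, s = 2.0, 0.01          # depth at x = 0 and horizontal slant (m per pixel), assumed

x_left = np.arange(0, 50)              # N = 50 pixels on a left-image scanline
Z = Z0 + s * x_left                    # depth of the horizontally slanted surface
disparity = f * B / Z                  # standard pinhole stereo disparity
x_right = x_left - disparity           # corresponding right-image positions

N = x_left.size
M = x_right.max() - x_right.min() + 1  # horizontal extent covered in the right image
print(f"N = {N} left-image pixels span roughly {M:.1f} pixel widths in the right image")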
ISBN: (Print) 0769522238
It is widely known that, for the affine camera model, both shape and motion data can be factorized directly from the measurement matrix constructed from 2D image point coordinates. However, classical algorithms for Structure from Motion (SfM) are not robust: measurement outliers, that is, incorrectly detected or matched feature points, can destroy the result. A few methods to robustify SfM have already been proposed, using different outlier detection schemes. We examine an efficient algorithm by Trajkovic et al. [13], who use the affine camera model and the Least Median of Squares (LMedS) method to separate inliers from outliers. LMedS is only applicable when the ratio of inliers exceeds 50%. We show that the Least Trimmed Squares (LTS) method is more efficient in robust SfM than LMedS. In particular, we demonstrate that LTS can handle inlier ratios below 50%. We also show that using the real (Euclidean) motion data results in a more precise SfM algorithm than using the affine camera model. Based on these observations, we propose a novel robust SfM algorithm and discuss its advantages and limits. The proposed method and the Trajkovic procedure are quantitatively compared on synthetic data in different simulated situations. The methods are also tested on synthesized and real video sequences.
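As context for the LTS idea referenced in this abstract, here is a minimal Python sketch, my own illustrative setup rather than the authors' algorithm, of rank-3 affine factorization of a measurement matrix combined with Least Trimmed Squares style trimming: the fit is repeated on the h points with the smallest residuals, so the estimate can tolerate inlier ratios below 50% provided h is chosen accordingly. The function names and parameters are assumptions.

import numpy as np

def affine_factorize(W):
    """Rank-3 factorization of a centred 2F x P measurement matrix."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :3] * S[:3]          # motion (2F x 3)
    X = Vt[:3]                    # shape  (3 x P)
    return M, X

def lts_factorize(W, h, n_iter=20):
    """Least Trimmed Squares sketch: iterate fits on the h points with the
    smallest reprojection residuals (h may be below 50% of all points)."""
    P = W.shape[1]
    keep = np.arange(P)           # start from all points
    for _ in range(n_iter):
        t = W[:, keep].mean(axis=1, keepdims=True)   # translation from current inliers
        M, X = affine_factorize(W[:, keep] - t)
        # residual of every point against the current motion estimate
        W_all = W - t
        X_all = np.linalg.lstsq(M, W_all, rcond=None)[0]
        res = np.linalg.norm(W_all - M @ X_all, axis=0)
        keep = np.argsort(res)[:h]                   # trim to the h best points
    return M, X, keep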
Similarity measurement is a key problem for 3D model retrieval. In this paper, we propose a novel shape descriptor, the "Thickness Histogram" (TH), obtained by uniformly estimating the thickness of a model using statistical meth...
ISBN: (Print) 0769522238
Modern automatic digitizers can sample huge amounts of 3D data points on the object surface in a short time. Point-based graphics is becoming a popular framework to reduce the cardinality of these data sets and to filter measurement noise, without having to store in memory and process mesh connectivity. The main contribution of this paper is the introduction of soft clustering techniques to the field of point cloud processing. In this approach, data points are not assigned to a single cluster, but contribute to the determination of the position of several cluster centres. As a result, a better representation of the data is achieved. In soft clustering techniques, a data set is represented with a reduced number of points called Reference Vectors (RV), which minimize an adequate error measure. As the position of the RVs is determined by "learning", which can be viewed as an iterative optimization procedure, these methods are inherently slow. We show here how, by partitioning the data domain into disjoint regions called hyperboxes (HB), the computation can be localized and the computational time reduced to linear in the number of data points (O(N)), saving more than 75% on real applications with respect to classical soft-VQ solutions and therefore making VQ suitable for the task. The procedure is suitable for a parallel HW implementation, which would lead to a complexity sub-linear in N. An automatic procedure for setting the voxel side and the other parameters can be derived from analysis of the data set. Results obtained in the reconstruction of faces of both humans and puppets, as well as on models from clouds of points made available on the Web, are reported and discussed in comparison with other available methods.
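For illustration only, the following Python sketch shows the basic soft vector quantisation update in which every point pulls several nearby reference vectors toward it, with a voxel grid standing in for the hyperbox partition that keeps each update local; the parameters (n_rv, sigma, lr, voxel) and the structure are my own assumptions, not the paper's implementation, and a real implementation would index reference vectors by cell rather than scan them.

import numpy as np

def soft_vq(points, n_rv=64, n_epochs=10, sigma=0.05, lr=0.1, voxel=0.2):
    """Reduce a point cloud (N x d array) to n_rv reference vectors by soft VQ."""
    rng = np.random.default_rng(0)
    rv = points[rng.choice(len(points), n_rv, replace=False)].copy()
    for _ in range(n_epochs):
        rv_cells = np.floor(rv / voxel).astype(int)      # voxel (hyperbox) of each RV
        for p in points:
            cell = np.floor(p / voxel).astype(int)
            local = np.all(np.abs(rv_cells - cell) <= 1, axis=1)   # 3x3x3 neighbourhood
            idx = np.flatnonzero(local)
            if idx.size == 0:
                continue
            d2 = np.sum((rv[idx] - p) ** 2, axis=1)
            w = np.exp(-d2 / (2 * sigma ** 2))            # soft assignment weights
            w /= w.sum() + 1e-12
            rv[idx] += lr * w[:, None] * (p - rv[idx])    # pull local RVs toward the point
    return rv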
This paper develops a digital watermarking methodology for 3D graphical objects defined by polygonal meshes. In watermarking or fingerprinting, the aim is to embed a code in a given medium without producing identifiabl...
This paper proposes a novel method for global registration based on matching 3D medial structures of unorganized point clouds or triangulated meshes. Most known practical methods are based on the Iterative Closest Poi...
We are developing a system for interactive modeling of real-world scenes. The acquisition device consists of a video camera enhanced with an attached laser system. As the operator sweeps the scene, the device acquires...
ISBN: (Print) 0769522238
Looking around in our every day environment, many of the encountered objects are specular to some degree. Actively using this fact when reconstructing objects front image sequences is the scope of the shape from specularities problem. One reason why this problem is important is that standard structure from motion techniques fail when the object surfaces are specular Here this problem is addressed by estimating surface shape using information from the specular reflections. A specular reflection gives constraints on the surface normal. The approach taken in this paper differs significantly from many earlier shape from specularities methods since the normal data used is sparse. The main contribution of this paper is to give a solid foundation for shape from specularities problems. Estimation of surface shape using reflections is formulated as a variational problem and the surface is represented implicitly using a level set formulation. A functional incorporating all surface constraints is proposed and the corresponding level set motion PdE is explicitly derived. This motion is then proven to minimize the functional. As a part of this functional a variational approach to normal alignment is proposed and analyzed. Also novel methods for implicit surface interpolation to sparse point sets are presented together with a variational analysis. Experiments on both real and synthetic data support the proposed method.
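As an illustration of the kind of formulation this abstract describes, and with notation and a functional that are my own assumptions rather than the paper's, a surface represented as the zero level set of \(\phi\) has normal \(\mathbf{n} = \nabla\phi / \lvert\nabla\phi\rvert\), and it can be fitted to sparse observed normals \(\mathbf{n}_i\) at points \(\mathbf{x}_i\) by minimizing a normal-alignment term plus an area regularizer:

\[
E(\phi) \;=\; \sum_i \bigl(1 - \mathbf{n}(\mathbf{x}_i)\cdot \mathbf{n}_i\bigr)
\;+\; \lambda \int_\Omega \delta(\phi)\,\lvert\nabla\phi\rvert \, d\mathbf{x},
\qquad
\mathbf{n} = \frac{\nabla\phi}{\lvert\nabla\phi\rvert}.
\]

The first term rewards alignment of the implicit surface normal with each sparse observed normal, the second measures the area of the zero level set, and gradient descent on such a functional yields a level set motion PDE of the general type the paper derives.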