We present a fast and robust method for computing an injective parameterization with low isometric distortion for disk-like triangular meshes. Harmonic function-based methods, with their rich mathematical foundation, are widely used. Harmonic maps are particularly valuable for ensuring injectivity under certain boundary conditions. In addition, they offer computational efficiency by forming a linear subspace [FW22]. However, this restricted subspace often leads to significant isometric distortion, especially for highly curved surfaces. Conversely, methods that operate in the full space of piecewise linear maps [SPSH*17] achieve lower isometric distortion, but at a higher computational cost. Aigerman et al. [AGK*22] pioneered a parameterization method that uses deep neural networks to predict the Jacobians of the map at mesh triangles, and integrates them into an explicit map by solving a Poisson equation. However, this approach often results in significant Poisson reconstruction errors due to the inability to ensure the integrability of the predicted neural Jacobian field, leading to unbounded distortion and lack of local injectivity. We propose a hybrid method that combines the speed and robustness of harmonic maps with the generality of deep neural networks to produce injective maps with low isometric distortion much faster than state-of-the-art methods. The core concept is simple but powerful. Instead of learning Jacobian fields, we learn metric tensor fields over the input mesh, resulting in a customized Laplacian matrix that defines a harmonic map in a modified metric [WGS23]. Our approach ensures injectivity, offers great computational efficiency, and produces significantly lower isometric distortion compared to straightforward harmonic maps.
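To make the harmonic-map building block concrete, below is a minimal sketch (not the authors' implementation; NumPy/SciPy assumed, all function and variable names hypothetical) of a cotangent-weight harmonic disk parameterization with the boundary pinned to a circle. The cotangent weights are computed purely from per-triangle edge lengths, so a learned metric tensor field of the kind described in the abstract would enter simply by passing in edge lengths measured in the modified metric; the customized Laplacian the paper constructs would replace the plain cotangent weights used here.

```python
# Minimal sketch of a harmonic disk parameterization (assumed helper, not the paper's method):
# cotangent Laplacian weights derived from per-triangle edge lengths, boundary vertices
# pinned to the unit circle, interior vertices obtained from one sparse linear solve.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def harmonic_disk_param(V, F, boundary_loop, edge_lengths=None):
    """V: (n,3) vertex positions, F: (m,3) triangle indices,
    boundary_loop: ordered boundary vertex indices,
    edge_lengths: optional (m,3) per-triangle lengths (entry k is the edge
    opposite corner k); defaults to Euclidean lengths. A modified metric
    would be injected here by supplying metric-dependent edge lengths."""
    n = V.shape[0]
    if edge_lengths is None:
        e = np.stack([np.linalg.norm(V[F[:, (k + 1) % 3]] - V[F[:, (k + 2) % 3]], axis=1)
                      for k in range(3)], axis=1)
    else:
        e = np.asarray(edge_lengths)

    # cotangent weights from edge lengths via the law of cosines
    rows, cols, vals = [], [], []
    for k in range(3):
        a, b, c = e[:, k], e[:, (k + 1) % 3], e[:, (k + 2) % 3]   # a is opposite corner k
        cos_k = (b**2 + c**2 - a**2) / (2.0 * b * c)
        sin_k = np.sqrt(np.clip(1.0 - cos_k**2, 1e-12, None))
        w = 0.5 * cos_k / sin_k                                    # 0.5 * cot(angle at corner k)
        i, j = F[:, (k + 1) % 3], F[:, (k + 2) % 3]
        rows += [i, j]; cols += [j, i]; vals += [w, w]
    W = sp.coo_matrix((np.concatenate(vals), (np.concatenate(rows), np.concatenate(cols))),
                      shape=(n, n)).tocsr()
    L = sp.diags(np.asarray(W.sum(axis=1)).ravel()) - W            # cotangent Laplacian

    # pin the boundary loop to the unit circle (uniform spacing for simplicity)
    t = np.linspace(0.0, 2.0 * np.pi, len(boundary_loop), endpoint=False)
    uv = np.zeros((n, 2))
    uv[boundary_loop] = np.column_stack([np.cos(t), np.sin(t)])

    # harmonic condition on interior vertices: L_II u_I = -L_IB u_B
    interior = np.setdiff1d(np.arange(n), boundary_loop)
    A = L[interior][:, interior].tocsc()
    rhs = np.asarray(-L[interior][:, boundary_loop] @ uv[boundary_loop])
    solve = spla.factorized(A)
    uv[interior, 0] = solve(rhs[:, 0])
    uv[interior, 1] = solve(rhs[:, 1])
    return uv
```

The interior coordinates come from a single sparse factorization and back-substitution, which is the source of the computational efficiency the abstract attributes to harmonic maps. Note that with plain cotangent weights some weights can be negative, so Tutte-style injectivity is not automatic in this simple sketch; choosing the metric so that the resulting Laplacian guarantees injectivity is exactly the role of the learned metric tensor field in the paper.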
Monte Carlo integration is a technique for numerically estimating a definite integral by stochastically sampling its integrand. These samples can be averaged to make an improved estimate, and the progressive estimates form a sequence that converges to the integral value in the limit. Unfortunately, the sequence of Monte Carlo estimates converges at a rate of O(1/√n), where n denotes the sample count, effectively slowing down as more samples are drawn. To overcome this, we can apply sequence transformation, which transforms one convergent sequence into another with the goal of accelerating the rate of convergence. However, analytically finding such a transformation for Monte Carlo estimates can be challenging, due to both the stochastic nature of the sequence and the complexity of the integrand. In this paper, we propose to leverage neural networks to learn sequence transformations that improve the convergence of the progressive estimates of Monte Carlo integration. We demonstrate the effectiveness of our method on several canonical 1D integration problems as well as applications in light transport simulation.
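To illustrate the setting, here is a small self-contained sketch (NumPy assumed; the integrand and all names are illustrative, not from the paper). It forms the progressive Monte Carlo estimates of a toy 1D integral and applies Aitken's delta-squared rule, a classical hand-crafted sequence transformation, standing in for the learned neural transformation the paper proposes.

```python
# Illustration only: progressive Monte Carlo estimates of a 1D integral and a
# classical sequence transformation (Aitken's delta-squared), used here as a
# stand-in for the learned transformation described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # toy integrand on [0, 1]; the exact integral of sin^2(pi x) is 0.5
    return np.sin(np.pi * x) ** 2

# progressive Monte Carlo estimates: running mean of f at uniform samples
samples = f(rng.uniform(0.0, 1.0, size=4096))
estimates = np.cumsum(samples) / np.arange(1, samples.size + 1)

def aitken(seq):
    """Aitken delta-squared transformation of a convergent sequence."""
    s0, s1, s2 = seq[:-2], seq[1:-1], seq[2:]
    denom = s2 - 2.0 * s1 + s0
    # guard against division by ~0 where the second difference vanishes
    safe = np.where(np.abs(denom) > 1e-12, denom, np.nan)
    return s2 - (s2 - s1) ** 2 / safe

transformed = aitken(estimates)
print("plain MC error    :", abs(estimates[-1] - 0.5))
print("transformed error :", abs(transformed[-1] - 0.5))
```

On a stochastic sequence such a fixed rule is brittle: the finite differences in its denominator are dominated by sampling noise, so it can amplify rather than reduce the error. This is exactly the difficulty the abstract points to, and the motivation for learning the transformation from data instead of deriving it analytically.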