Human motion generation is a key task in computer graphics. While various conditioning signals such as text, action class, or audio have been used to guide the generation process, most existing methods neglect the case where a specific body is desired to perform the motion. Additionally, they rely on skeleton-based pose representations, necessitating additional steps to produce renderable meshes of the intended body shape. Given that human motion involves a complex interplay of bones, joints, and muscles, focusing solely on the skeleton during generation neglects the rich information carried by muscles and soft tissues, as well as their influence on movement, ultimately limiting the variability and precision of the generated motions. In this paper, we introduce the Shape-conditioned Motion Diffusion model (SMD), which enables the generation of human motion directly in the form of a mesh sequence, conditioned on both a text prompt and a body mesh. To fully exploit the mesh representation while minimizing resource costs, we employ a spectral representation based on the graph Laplacian to encode body meshes into the learning process. Unlike retargeting methods, our model does not require source motion data and generates a variety of desired semantic motions that are inherently tailored to the given identity shape. Extensive experimental evaluations show that SMD not only keeps the body shape consistent with the conditioning input across motion frames but also achieves competitive performance in text-to-motion and action-to-motion tasks compared to state-of-the-art methods.
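To make the spectral encoding mentioned in the abstract concrete, the sketch below projects mesh vertex coordinates onto the low-frequency eigenvectors of a graph Laplacian. The toy tetrahedron, the combinatorial (uniform) Laplacian, and the number of retained basis vectors k are illustrative assumptions, not the exact formulation used by SMD.

```python
# Minimal sketch: spectral (graph-Laplacian) encoding of a mesh.
# Vertex positions are projected onto the k smoothest Laplacian eigenvectors,
# yielding a compact, low-frequency shape code.
import numpy as np

def graph_laplacian(n_vertices, edges):
    """Combinatorial Laplacian L = D - A of an undirected mesh graph."""
    A = np.zeros((n_vertices, n_vertices))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    return np.diag(A.sum(axis=1)) - A

def spectral_encode(vertices, edges, k=3):
    """Project vertex positions onto the k lowest-frequency eigenvectors."""
    L = graph_laplacian(len(vertices), edges)
    _, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    basis = eigvecs[:, :k]                # smooth, low-frequency basis (n x k)
    coeffs = basis.T @ vertices           # spectral shape code (k x 3)
    return coeffs, basis

def spectral_decode(coeffs, basis):
    """Reconstruct a low-frequency approximation of the vertex positions."""
    return basis @ coeffs

# Toy example: a tetrahedron encoded with k=3 basis vectors.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
code, basis = spectral_encode(verts, edges, k=3)
print(spectral_decode(code, basis).round(3))
```

Keeping only the first k eigenvectors is what keeps the resource cost low: the shape code has k rows regardless of how many vertices the body mesh contains.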
We analyze the joint efforts made by the geometry processing and numerical analysis communities over the last decades to define and measure the concept of "mesh quality". Researchers have been striving to determine how, and how much, the accuracy of a numerical simulation or a scientific computation (e.g., rendering, printing, modeling operations) depends on the particular mesh adopted to model the problem, and which geometrical features of the mesh most influence the result. The goal was to produce a mesh with good geometrical properties and the lowest possible number of elements, able to produce results within a target range of accuracy. We overview the most common quality indicators, measures, or metrics currently used to evaluate the goodness of a discretization and to drive mesh generation or mesh coarsening/refinement processes. We analyze a number of local and global indicators, defined over two- and three-dimensional meshes with any type of elements, distinguishing between simplicial, quadrangular/hexahedral, and generic polytopal elements. We also discuss mesh optimization algorithms based on the above indicators and report common libraries for mesh analysis and quality-driven mesh optimization.
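As an illustration of the kind of local quality indicator such surveys cover, the snippet below computes one widely used metric for simplicial (triangle) elements: the normalized ratio of area to the sum of squared edge lengths, which equals 1 for an equilateral triangle and approaches 0 as the element degenerates. The choice of this particular metric and the toy triangles are illustrative assumptions, not a prescription from the survey.

```python
# One common triangle quality metric: q = 4*sqrt(3)*area / (l0^2 + l1^2 + l2^2),
# normalized so an equilateral triangle scores exactly 1 and slivers score ~0.
import numpy as np

def triangle_quality(p0, p1, p2):
    """Return the area / squared-edge-length quality, in [0, 1]."""
    e0, e1, e2 = p1 - p0, p2 - p1, p0 - p2
    area = 0.5 * np.linalg.norm(np.cross(e0, -e2))     # |(p1-p0) x (p2-p0)| / 2
    denom = np.dot(e0, e0) + np.dot(e1, e1) + np.dot(e2, e2)
    return 0.0 if denom == 0.0 else 4.0 * np.sqrt(3.0) * area / denom

# An equilateral triangle scores 1, a near-degenerate sliver scores close to 0.
equilateral = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0.5, np.sqrt(3) / 2, 0)]]
sliver      = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0.5, 1e-3, 0)]]
print(triangle_quality(*equilateral))   # 1.0
print(triangle_quality(*sliver))        # small, on the order of 2e-3
```

A mesh optimizer of the kind discussed above would evaluate such a score per element and relocate vertices or flip edges to raise the worst (or mean) value.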
Steering and navigation are important components of character animation systems, enabling characters to move autonomously in their environment. In this work, we propose a synthetic vision model that uses visual features to steer agents through dynamic environments. Our agents perceive the optical flow resulting from their motion relative to the objects of the environment. The optical flow is then segmented and processed to extract visual features such as the focus of expansion and the time-to-collision. We then establish the relations between these visual features and the agent's motion, and use them to design a set of control functions that allow characters to perform object-dependent tasks, such as following, avoiding, and reaching. Control functions are then combined to let characters perform more complex navigation tasks in dynamic environments, such as reaching a goal while avoiding multiple obstacles; the agent's motion is obtained by locally minimizing these functions. We demonstrate the efficiency of our approach through a number of scenarios. Our work lays the basis for building a character animation system that imitates human sensorimotor actions, and it opens new perspectives for realistic simulation of human characters that takes perceptual factors into account, such as the lighting conditions of the environment.
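The sketch below illustrates the two visual features named in the abstract, the focus of expansion (FOE) and the time-to-collision (TTC), estimated from a sampled optical-flow field. It assumes a purely translating observer, so every flow vector points radially away from the FOE; the synthetic flow field and the least-squares FOE estimator are illustrative stand-ins for the segmented flow and processing described in the paper.

```python
# Estimate the focus of expansion and per-point time-to-collision from a
# sparse optical-flow field, assuming pure translation (radially expanding flow).
import numpy as np

def estimate_foe(points, flow):
    """Least-squares FOE: each flow vector must be parallel to (point - FOE)."""
    # Constraint (x - foe) x v = 0  =>  vy*fx - vx*fy = x*vy - y*vx
    A = np.stack([flow[:, 1], -flow[:, 0]], axis=1)
    b = points[:, 0] * flow[:, 1] - points[:, 1] * flow[:, 0]
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

def time_to_collision(points, flow, foe):
    """Per-point TTC = distance to the FOE / flow magnitude (in frames)."""
    dist = np.linalg.norm(points - foe, axis=1)
    speed = np.linalg.norm(flow, axis=1)
    return dist / np.maximum(speed, 1e-9)

# Toy expanding flow field around a true FOE at (320, 240) with TTC = 50 frames.
true_foe, true_ttc = np.array([320.0, 240.0]), 50.0
pts = np.random.default_rng(0).uniform([0, 0], [640, 480], size=(200, 2))
flo = (pts - true_foe) / true_ttc
print(estimate_foe(pts, flo))                         # ~ [320, 240]
print(time_to_collision(pts, flo, true_foe).mean())   # ~ 50
```

Control functions of the kind described above can then penalize, for instance, small TTC values toward obstacles and large angular distance between the FOE and the goal direction, and the agent's motion follows from locally minimizing that combined cost.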