Researchers analyzed and presented volume data from the Visible Human Project (VHP) and data from high-resolution 3D ion-abrasion scanning electron microscopy (IA-SEM). They acquired the VHP data using cryosectioning, a destructive approach to 3D human anatomical imaging resulting in whole-body images with a field of view approaching 2 meters and a minimum resolvable feature size of 300 microns. IA-SEM is a type of block-face imaging microscopy, a destructive approach to microscopic 3D imaging of cells. The field of view of IA-SEM data is on the order of 10 microns (whole cell) with a minimum resolvable feature size of 15 nanometers (single-slice thickness). Despite the difference in subject and scale, the analysis and modeling methods were remarkably similar. They are derived from image processing, computer vision, and computer graphics techniques. Moreover, together we are employing medical illustration, visualization, and rapid prototyping to inform and inspire biomedical science. By combining graphics and biology, we are imaging across nine orders of magnitude of space to better promote public health through research. [ABSTRACT FROM AUTHOR]
ISBN:
(print) 9798350374025; 9798350374032
Guidance for assemblable parts is a promising field for augmented reality. Augmented reality assembly guidance requires 6D object poses of target objects in real time. Especially in time-critical medical or industrial settings, continuous and markerless tracking of individual parts is essential to visualize instructions superimposed on or next to the target object parts. In this regard, occlusions by the user's hand or other objects and the complexity of different assembly states complicate robust and real-time markerless multi-object tracking. To address this problem, we present Graph-based Object Tracking (GBOT), a novel graph-based single-view RGB-D tracking approach. The real-time markerless multi-object tracking is initialized via 6D pose estimation and updates the graph-based assembly poses. Tracking through various assembly states is achieved by our novel multi-state assembly graph, which we update using the relative poses of the individual assembly parts. Linking the individual objects in this graph enables more robust object tracking during the assembly process. For evaluation, we introduce a synthetic dataset of publicly available and 3D-printable assembly assets as a benchmark for future work. Quantitative experiments on synthetic data and a further qualitative study on real test data show that GBOT can outperform existing work towards enabling context-aware augmented reality assembly guidance. Dataset and code will be made publicly available.
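The core idea of the multi-state assembly graph described above can be sketched as a small data structure: parts are nodes, and each completed assembly step adds an edge holding the child's pose relative to its parent, so a tracked parent pose propagates to every attached part. This is a minimal illustrative sketch, not the paper's implementation: it simplifies 6D poses to planar (x, y, theta) rigid transforms for brevity, and all class, method, and part names are assumptions.

```python
import math

def compose(a, b):
    """Compose two planar rigid poses (x, y, theta): apply b in a's frame."""
    x, y, t = a
    bx, by, bt = b
    return (x + bx * math.cos(t) - by * math.sin(t),
            y + bx * math.sin(t) + by * math.cos(t),
            t + bt)

class AssemblyGraph:
    """Parts are nodes; an edge records the child's pose relative to its
    parent once the two parts are assembled (one edge per assembly step)."""

    def __init__(self):
        self.pose = {}      # part -> absolute pose, written by the tracker
        self.children = {}  # parent -> list of (child, relative_pose)

    def add_part(self, name, pose=(0.0, 0.0, 0.0)):
        self.pose[name] = pose
        self.children.setdefault(name, [])

    def attach(self, parent, child, relative_pose):
        """Record a new assembly state: child is now rigidly linked to parent."""
        self.children[parent].append((child, relative_pose))

    def update(self, part, tracked_pose):
        """Tracker observed `part`; propagate the pose to all attached parts."""
        self.pose[part] = tracked_pose
        for child, rel in self.children[part]:
            self.update(child, compose(tracked_pose, rel))

g = AssemblyGraph()
g.add_part("base")
g.add_part("bolt")
g.attach("base", "bolt", (1.0, 0.0, 0.0))  # bolt sits 1 unit along base's x-axis
g.update("base", (2.0, 0.0, 0.0))          # tracker sees the base at x = 2
print(g.pose["bolt"])                      # bolt's pose follows: (3.0, 0.0, 0.0)
```

Linking parts this way is what lets an occluded part inherit its pose from a still-visible neighbor, which is the robustness benefit the abstract claims.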