Reproducibly achieving proper implant alignment is a critical step in total hip arthroplasty (THA) procedures that has been shown to substantially affect patient outcome. In current practice, correct alignment of the ...
Orthopaedic surgeons still follow the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g., the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking to create a mixed reality environment supporting screw placement in orthopaedic surgery. A red-green-blue-depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point (ICP) algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds with synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows tracking of surgical tools even when partially occluded by the surgeon's hand. The proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring the target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
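A minimal sketch of the calibration step described above, assuming Open3D as the point-cloud library (the Letter does not name one) and assuming the CBCT surface has already been extracted as a point cloud; function and parameter names are illustrative.

import numpy as np
import open3d as o3d

def calibrate_rgbd_to_cbct(rgbd_points, cbct_surface_points,
                           init=np.eye(4), max_corr_dist=5.0):
    """Estimate the rigid transform mapping RGB-D camera coordinates into
    CBCT imaging space with point-to-plane ICP (a sketch, not the authors'
    exact pipeline)."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(rgbd_points))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(cbct_surface_points))
    tgt.estimate_normals()  # point-to-plane ICP needs target normals
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist, init,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return result.transformation, result.inlier_rmse

# Once calibrated, every live RGB-D cloud can be mapped into CBCT space
# and overlaid on the synthetic fluoroscopy, e.g.:
#   T, rmse = calibrate_rgbd_to_cbct(depth_cloud, cbct_surface)
#   live_pcd.transform(T)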
Medical robotic ultrasound offers potential to assist interventions, ease long-term monitoring and reduce operator dependency. Various techniques for remote control of ultrasound probes through telemanipulation systems have been presented in the past; however, they do not exploit the potential of fully autonomous acquisitions performed directly by robotic systems. In this paper, a trajectory planning algorithm for automatic robotic ultrasound acquisition under expert supervision is introduced. The objective is to compute a suitable path for covering a volume of interest selected in diagnostic images, for example by prior segmentation. A 3D patient surface point cloud is acquired using a depth camera, which is the sole prerequisite besides the volume delineation. An easily parameterizable path function generates single or multiple parallel scan trajectories capable of dealing with large target volumes. A spline is fitted through the preliminary path points and transferred to a lightweight robot, which performs the ultrasound scan using an impedance control mode. The proposed approach is validated in simulation as well as on phantoms and animal viscera.
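A minimal sketch of the parallel-line coverage idea, assuming the volume of interest projects to an axis-aligned rectangle over the patient surface cloud and that waypoint heights are taken from the nearest surface point; the function name, raster parameters and use of SciPy splines are assumptions, not details from the paper.

import numpy as np
from scipy.interpolate import splprep, splev

def plan_scan_path(surface_points, x_range, y_range, line_spacing, step):
    """Generate serpentine (parallel-line) waypoints over the surface patch
    covering the selected volume, then smooth them with a spline."""
    xs = np.arange(x_range[0], x_range[1] + 1e-9, line_spacing)
    waypoints = []
    for i, x in enumerate(xs):
        ys = np.arange(y_range[0], y_range[1] + 1e-9, step)
        if i % 2:            # reverse every other line for serpentine coverage
            ys = ys[::-1]
        for y in ys:
            # take the surface point closest to (x, y) to get the z height
            d = np.hypot(surface_points[:, 0] - x, surface_points[:, 1] - y)
            waypoints.append(surface_points[np.argmin(d)])
    waypoints = np.asarray(waypoints).T            # shape (3, N) for splprep
    tck, _ = splprep(waypoints, s=1.0)             # smoothing spline through waypoints
    u = np.linspace(0.0, 1.0, 10 * waypoints.shape[1])
    return np.stack(splev(u, tck), axis=1)         # dense (M, 3) Cartesian trajectory

The resulting Cartesian trajectory would then be streamed to the robot controller; impedance control along the path lets the probe comply with the anatomy rather than track positions rigidly.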
ISBN (print): 9781509037636
We present an architecture for online, incremental scene modeling which combines a SLAM-based scene understanding framework with semantic segmentation and object pose estimation. The core of this approach is a probabilistic inference scheme that predicts semantic labels for object hypotheses at each new frame. From these hypotheses, recognized scene structures are incrementally constructed and tracked. Semantic labels are inferred using a multi-domain convolutional architecture which operates on the image time series and enables efficient propagation of features as well as robust model registration. To evaluate this architecture, we introduce a large-scale RGB-D dataset, JHUSEQ-25, as a new benchmark for sequence-based scene understanding in complex and densely cluttered scenes. This dataset contains 25 RGB-D video sequences with 100,000 labeled frames in total. We validate our method on this dataset and demonstrate improved performance in semantic segmentation and 6-DoF object pose estimation compared with single-view methods.
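A toy sketch of the per-frame probabilistic label fusion described above, assuming each frame yields a per-class score vector (e.g., a softmax output) for a tracked object hypothesis; the recursive Bayesian update shown here is a simplification of the paper's inference scheme, and all names and data are illustrative.

import numpy as np

def update_label_belief(prior, frame_likelihood, eps=1e-8):
    """Fuse one frame's per-class evidence into the running label belief."""
    posterior = prior * (frame_likelihood + eps)
    return posterior / posterior.sum()

# Example: accumulate evidence for one hypothesis over a short sequence
per_frame_scores = np.array([
    [0.35, 0.25, 0.20, 0.10, 0.10],   # frame 1 softmax over 5 classes
    [0.40, 0.20, 0.20, 0.10, 0.10],   # frame 2
    [0.50, 0.15, 0.15, 0.10, 0.10],   # frame 3
])
belief = np.full(5, 1.0 / 5)          # uniform prior over classes
for scores in per_frame_scores:
    belief = update_label_belief(belief, scores)
fused_label = int(np.argmax(belief))  # label after multi-frame fusion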