A new imaging modality (viz., Long-Film [LF]) for acquiring long-length tomosynthesis images of the spine was recently enabled on the O-arm™ system and used in an IRB-approved clinical study at our institution. The work presented here implements and evaluates a combined image synthesis and registration approach to address multi-modality registration of MR and LF images. The approach is well-suited for pediatric cases that use MR for preoperative diagnosis and aim for lower levels of intraoperative radiation exposure. A patch-based conditional GAN was used to synthesize 3D CT images from MR. The network was trained on deformably co-registered MR and CT image pairs. Synthesized images were registered to LF images using a model-based 3D-2D registration algorithm. Images from our clinical study were manually labeled, and the intra-user variability in anatomical landmark definition was measured in a simulation study. Geometric accuracy of registrations was evaluated on anatomical landmarks in separate test cases from the clinical study. The synthesis process generated CT images with clear bone structures. Analysis of manual labeling revealed a projection distance error of 3.1 ± 2.2 mm between 3D and 2D anatomical landmarks. Anatomical MR landmarks projected onto lateral LF images demonstrated a median projection distance error of 3.6 mm after registration. This work constitutes the first reported approach for MR-to-LF registration based on deep image synthesis. Preliminary results demonstrated the feasibility of globally rigid registration in aligning preoperative MR and intraoperative LF images. Work currently underway extends this approach to vertebra-level, locally rigid / globally deformable registration, with initialization based on automatically labeled vertebral levels.
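To make the accuracy metric concrete, the following is a minimal sketch of a projection distance error (PDE) computation between 3D anatomical landmarks and their manually labeled 2D counterparts. It is not the authors' implementation; the 3×4 projection matrix, pixel-coordinate convention, and `pixel_size_mm` scaling are assumptions for illustration only.

```python
# Minimal sketch (assumed geometry, not the authors' code): projection distance
# error between 3D landmarks projected through a 3x4 perspective projection
# matrix P and manually labeled 2D landmarks on an LF image.
import numpy as np

def projection_distance_error(P, points_3d, points_2d, pixel_size_mm=1.0):
    """P: (3, 4) projection matrix mapping 3D mm coordinates to detector pixels;
    points_3d: (N, 3) landmark positions in mm; points_2d: (N, 2) labeled
    detector pixel coordinates; returns per-landmark PDE in mm."""
    homo = np.hstack([points_3d, np.ones((len(points_3d), 1))])  # (N, 4) homogeneous
    proj = (P @ homo.T).T                                        # (N, 3) projected points
    uv = proj[:, :2] / proj[:, 2:3]                              # perspective divide -> pixels
    return np.linalg.norm(uv - points_2d, axis=1) * pixel_size_mm

# Example use: median PDE over the labeled landmarks of one test case.
# pde = projection_distance_error(P_registered, mr_landmarks_3d, lf_landmarks_2d)
# print(np.median(pde))
```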
Purpose: A recent imaging method (viz., Long-Film) for capturing long-length images of the spine was enabled on the O-arm™ system. The proposed work uses a custom, multi-perspective, region-based convolutional neural network (R-CNN) for labeling vertebrae in Long-Film images and evaluates approaches for incorporating long-range contextual information to take advantage of the extended field of view and improve labeling accuracy.
Methods: Two methods for incorporating contextual information were evaluated: (1) a recurrent network module with long short-term memory (LSTM) added after R-CNN classification; and (2) a post-processing, sequence-sorting step based on the label confidence scores. The models were trained and validated on 11,805 Long-Film images simulated from projections of 370 CT images and tested on 50 Long-Film images of 14 cadaveric specimens.
Results: The multi-perspective R-CNN with the LSTM module achieved a vertebral level identification rate of 91.7%, compared to 72.4% without the LSTM module, demonstrating the benefit of incorporating contextual information. While sequence sorting achieved 89.4% labeling accuracy, it could not recover from detection errors and provided no additional improvement when applied after the LSTM module.
Conclusions: The proposed LSTM module significantly improved labeling accuracy over the base model by effectively incorporating contextual information and training in an end-to-end fashion. Compared to sequence sorting, it was more robust to false positives and false negatives in vertebra detection. The proposed model offers a valuable check for target localization and forms the basis for automatic measurement of spinal curvature changes in interventional settings.
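The contextual module described above could be realized in several ways; below is a minimal, hypothetical sketch (assumed feature dimension, hidden size, and 24 output levels; not the authors' network) of a bidirectional LSTM that re-classifies per-detection R-CNN features ordered cranio-caudally along the long field of view.

```python
# Minimal sketch (assumed shapes and hyperparameters, not the authors' model):
# a bidirectional LSTM over per-detection R-CNN features, sorted by
# superior-inferior position, that assigns each detected vertebra one of
# 24 level labels (C1..L5) using context from neighboring detections.
import torch
import torch.nn as nn

class VertebraLabelLSTM(nn.Module):
    def __init__(self, feat_dim=256, hidden_dim=128, num_levels=24):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True,
                            bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_levels)

    def forward(self, detection_feats):
        # detection_feats: (batch, num_detections, feat_dim), ordered along
        # the craniocaudal axis of the Long-Film image.
        context, _ = self.lstm(detection_feats)
        return self.classifier(context)  # per-detection level logits

# Usage: logits = VertebraLabelLSTM()(feats)  # feats from the R-CNN head
```

Training such a module jointly with the detector lets contextual information influence classification directly, which is the advantage the abstract attributes to the LSTM over post-hoc sequence sorting by confidence scores.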