During medical interventions, cone-beam computed tomography (CBCT) imaging is a powerful tool for guidance and for assessing the intervention's success. In many applications, such as cardiac imaging or lung interventions, automatic segmentation could improve the user's interaction with the system. Automatic segmentation with deep learning-based methods, however, requires large amounts of labeled data, which makes these methods challenging to apply to CBCT, for which labeled data are typically not publicly available. In this paper, we make use of publicly available computed tomography (CT) data sets to perform a domain adaptation from CT to CBCT via forward projection and reconstruction. Through this geometric domain adaptation, artificial CBCT volumes are produced, with the key advantage that the segmentations of the original CT data can be reused as labels. We train a neural network based on the U-Net on this data to evaluate the impact of the domain adaptation on lung segmentation quality. Our experiments show that using the artificial CBCT volumes as training data instead of the original CT data improves the Dice score of the predicted segmentation on real CBCT volumes from 0.88 to 0.95. The presented method can be extended to model further artifacts typical of CBCT data, such as metal and motion artifacts.
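The pipeline described above, forward-projecting labeled CT volumes and reconstructing them so that the CT labels can be reused for CBCT-like training data, can be sketched as follows. This is only a minimal illustrative sketch: it uses scikit-image's 2D parallel-beam radon/iradon as a stand-in for the cone-beam forward projection and reconstruction used in the paper, and the function names (ct_slice_to_pseudo_cbct, build_training_pairs), angle sampling, and noise level are assumptions, not the authors' implementation.

```python
# Minimal 2D stand-in for the CT -> artificial-CBCT training-data pipeline.
# Assumption: scikit-image's parallel-beam radon/iradon replace the cone-beam
# forward projector and reconstruction from the paper; angle count and noise
# level are illustrative choices.
import numpy as np
from skimage.transform import radon, iradon


def ct_slice_to_pseudo_cbct(ct_slice, n_angles=180, noise_std=0.01):
    """Forward-project a CT slice and reconstruct it to mimic CBCT image quality."""
    angles = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    # Forward projection (sinogram), followed by additive noise on the projections.
    sinogram = radon(ct_slice, theta=angles, circle=False)
    sinogram += noise_std * sinogram.max() * np.random.randn(*sinogram.shape)
    # Filtered back-projection brings the data back to image space.
    return iradon(sinogram, theta=angles, circle=False,
                  output_size=ct_slice.shape[0])


def build_training_pairs(ct_volume, lung_mask):
    """Create (pseudo-CBCT slice, label slice) pairs.

    The geometry of each slice is unchanged by project-and-reconstruct,
    so the original CT segmentation masks are reused as labels.
    """
    return [(ct_slice_to_pseudo_cbct(ct_slice), mask_slice)
            for ct_slice, mask_slice in zip(ct_volume, lung_mask)]
```

Because the projection and reconstruction do not move the anatomy, the CT label masks remain valid for the artificial CBCT images, which is what makes this form of domain adaptation attractive when no labeled CBCT data are available.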