C-arm fluoroscopy units provide continuously updating X-ray video images during surgical procedures. The
modality is widely adopted for its low cost, real-time imaging capabilities, and its ability to display radiopaque
tools in the anatomy. It is, however, important to correct for fluoroscopic image distortion and estimate camera
parameters, such as focal length and camera center, for registration with 3D CT scans in fluoroscopic image-guided
procedures. This paper describes a method for C-arm calibration and evaluates its accuracy in multiple
C-arm units and in different viewing orientations. The proposed calibration method employs a commercially available
unit to track the C-arm and a calibration plate. The method estimates both the internal calibration
parameters and the transformation between the coordinate systems of tracker and C-arm. The method was
successfully tested on two C-arm units (GE OEC 9800 and GE OEC 9800 Plus) of different image intensifier
sizes and verified with a rigid airway phantom model. The mean distortion-model error was found to be 0.14
mm and 0.17 mm for the respective C-arms. The mean overall system reprojection error (which measures the
accuracy of predicting an image using tracker coordinates) was found to be 0.63 mm for the GE OEC 9800.
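The abstract reports distortion-model errors without specifying the model's form. As a minimal sketch of one common approach to image-intensifier distortion (an assumption, not a detail from the paper), the Python below fits per-axis 2D polynomial warps from detected calibration-plate points to their known positions by least squares; the function names and the cubic degree are illustrative.

```python
import numpy as np

def fit_polynomial_warp(observed, reference, degree=3):
    """Fit per-axis 2D polynomial warps mapping distorted image points
    (observed) to known calibration-plate positions (reference), both
    given as (N, 2) arrays, via linear least squares."""
    x, y = observed[:, 0], observed[:, 1]
    # Monomial basis x^i * y^j for all i + j <= degree.
    A = np.stack([x**i * y**j
                  for i in range(degree + 1)
                  for j in range(degree + 1 - i)], axis=1)
    cx, *_ = np.linalg.lstsq(A, reference[:, 0], rcond=None)
    cy, *_ = np.linalg.lstsq(A, reference[:, 1], rcond=None)
    return cx, cy

def undistort(points, cx, cy, degree=3):
    """Apply the fitted warp to (N, 2) image points."""
    x, y = points[:, 0], points[:, 1]
    A = np.stack([x**i * y**j
                  for i in range(degree + 1)
                  for j in range(degree + 1 - i)], axis=1)
    return np.stack([A @ cx, A @ cy], axis=1)
```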
Reliable transbronchial access to peripheral lung lesions is desirable for the diagnosis and potential treatment
of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps)
cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system
for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many
bronchoscopists, has a fundamental shortcoming: many lung lesions are invisible in its images. Our IGI
system aligns a digitally reconstructed radiograph (DRR) defined from a pre-operative computed tomography
(CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video,
while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed in the scene. The
IGI system processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT
anatomy with its depiction in the fluoroscopic scene; and (3) optically tracking the fluoroscope to continually
update the DRR and target positions as it is moved about the patient. The end result is a continuous correlation of the
DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets
and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is
straightforward. The system tracks in real time with no computational lag. We have measured a mean projected
tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
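The DRR computation itself is not detailed in the abstract. As a toy illustration only, the sketch below forms a parallel-projection pseudo-radiograph by summing CT attenuation along one axis; a real C-arm DRR would instead cast perspective rays through the volume using the calibrated camera model from step (1). All names here are hypothetical.

```python
import numpy as np

def simple_drr(ct_hu, axis=1):
    """Toy digitally reconstructed radiograph: sum attenuation along one
    volume axis (parallel projection). A real C-arm DRR casts perspective
    rays through the CT using the calibrated camera model instead."""
    mu = np.clip(ct_hu + 1000.0, 0.0, None)  # HU -> nonnegative attenuation
    drr = np.log1p(mu.sum(axis=axis))        # log-compress, film-like response
    return (drr - drr.min()) / (np.ptp(drr) + 1e-9)  # normalize to [0, 1]
```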
Stereotactic body-radiation therapy (SBRT) has gained acceptance in treating lung cancer. Localization of a
thoracic lesion is challenging as tumors can move significantly with breathing. Some SBRT systems compensate
for tumor motion with the intrafraction tracking of targets by two stereo fluoroscopy cameras. However, many
lung tumors lack a fluoroscopic signature and cannot be directly tracked. Small radiopaque fiducial markers,
acting as fluoroscopically visible surrogates, are instead implanted nearby. The spacing and configuration of
the fiducial markers is important to the success of the therapy as SBRT systems impose constraints on the
geometry of a fiducial-marker constellation. It is difficult even for experienced physicians to mentally assess the
validity of a constellation a priori. To address this challenge, we present the first automated planning system
for bronchoscopic fiducial-marker placement. Fiducial-marker planning is posed as a constrained combinatorial
optimization problem. Constraints include access from a navigable airway, sufficient separation in the
fluoroscopic imaging planes to resolve each individual marker, and avoidance of major blood vessels.
Automated fiducial-marker planning takes approximately fifteen seconds, fitting within the clinical workflow.
The resulting locations are integrated into a virtual bronchoscopic planning system, which provides guidance to
each location during the implantation procedure. To date, we have retrospectively planned over 50 targets for
treatment, and have implanted markers according to the automated plan in one patient who then underwent
SBRT treatment. To our knowledge, this approach is the first to address automated bronchoscopic fiducial-marker
planning for SBRT.
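A minimal sketch of the combinatorial search idea follows. The spacing thresholds, four-marker default, and brute-force enumeration are assumptions for illustration; the paper's optimizer and exact constraints are not given in the abstract, and airway reachability and vessel avoidance are assumed to be pre-screened.

```python
import numpy as np
from itertools import combinations

def plan_constellation(candidates, n_markers=4, min_sep=10.0, max_sep=60.0):
    """Brute-force constellation search over pre-screened candidate sites.

    candidates: (M, 3) array of airway-reachable, vessel-safe sites (mm).
    Keeps only marker sets whose pairwise spacings fall in [min_sep,
    max_sep]; among those, maximizes the minimum spacing so every marker
    resolves as a distinct blob in the fluoroscopic imaging planes."""
    best, best_score = None, -np.inf
    for combo in combinations(range(len(candidates)), n_markers):
        pts = candidates[list(combo)]
        seps = [np.linalg.norm(a - b) for a, b in combinations(pts, 2)]
        if min(seps) < min_sep or max(seps) > max_sep:
            continue  # violates the assumed geometric constraints
        if min(seps) > best_score:
            best, best_score = list(combo), min(seps)
    return best
```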
Bronchoscopic biopsy of lymph nodes is an important step in staging lung cancer. Lymph nodes, however, lie
behind the airway walls and near large vascular structures; all of these structures are hidden from the
bronchoscope's field of view. Previously, we presented a computer-based virtual bronchoscopic navigation
system that provides reliable guidance for bronchoscopic sampling. While this system offers a major improvement
over standard practice, bronchoscopists told us that target localization (lining up the bronchoscope before
deploying a needle into the target) can still be challenging. We therefore address target localization in two
distinct ways: (1) automatic computation of an optimal diagnostic sampling pose for safe, effective biopsies, and
(2) a novel visualization of the target and surrounding major vasculature. The planning determines the final
pose for the bronchoscope such that the needle, when extended from the tip, maximizes the tissue extracted.
This automatically calculated local pose orientation is conveyed in endoluminal renderings by a 3D arrow. Additional
visual cues convey obstacle locations and target depths-of-sample from arbitrary instantaneous viewing
orientations. With the system, a physician can freely navigate in the virtual bronchoscopic world, perceiving the
depth-of-sample and possible obstacle locations at any endoluminal pose, not just one pre-determined optimal
pose. We validated the system using mediastinal lymph nodes in eleven patients. The system successfully planned
for 20 separate targets in human MDCT scans. In particular, given the patient and bronchoscope constraints,
our method found that safe, effective biopsies were feasible for 16 of the 20 targets; the four remaining targets
required more aggressive safety margins than a "typical" target. In all cases, planning computation took only a
few seconds, while the visualizations updated in real time during bronchoscopic navigation.
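The abstract does not give the pose-scoring details. As a hypothetical sketch of the underlying objective (maximize tissue extracted along the needle throw), the function below samples points along a candidate needle line and measures the length falling inside the segmented target; all names and parameters are assumptions.

```python
import numpy as np

def needle_yield(target_mask, tip, direction, throw_mm, spacing, step=0.5):
    """Score a candidate biopsy pose by the needle-path length inside the
    target (a proxy for tissue extracted).

    target_mask: boolean CT-aligned volume of the segmented lymph node.
    tip, direction: needle origin (mm) and unit direction in CT space.
    throw_mm: maximum needle extension; spacing: per-axis voxel size (mm)."""
    samples = tip + np.outer(np.arange(0.0, throw_mm, step), direction)
    idx = np.round(samples / spacing).astype(int)
    inbounds = np.all((idx >= 0) & (idx < target_mask.shape), axis=1)
    hits = target_mask[tuple(idx[inbounds].T)]
    return float(hits.sum()) * step  # approximate in-target length (mm)
```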
Bronchoscopy is often performed for diagnosing lung cancer. The recent development of multidetector CT (MDCT) scanners and ultrathin bronchoscopes now enables the bronchoscopic biopsy and treatment of peripheral regions of interest (ROIs). Because the peripheral ROIs are often located several generations within the airway tree, careful planning is required prior to a procedure. The current practice for planning peripheral bronchoscopic procedures, however, is difficult, error-prone, and time-consuming. We propose a system for planning peripheral bronchoscopic procedures using patient-specific MDCT chest scans. The planning process begins with a semi-automatic segmentation of ROIs. The remaining system components are completely automatic, beginning with a new strategy for tracheobronchial airway-tree segmentation. The system then uses a new locally-adaptive approach for finding the interior airway-wall surfaces. From the polygonal airway-tree surfaces, a centerline-analysis method extracts the central axes of the airway tree. The system's route-planning component then analyzes the data generated in the previous stages to determine an appropriate path through the airway tree to the ROI. Finally, an automated report generator gives quantitative data about the route and both static and dynamic previews of the procedure. These previews consist of virtual bronchoscopic endoluminal renderings at bifurcations encountered along the route and renderings of the airway tree and ROI at the suggested biopsy location. The system is currently in use for a human lung-cancer patient pilot study involving the planning and subsequent live image-based guidance of suspect peripheral cancer nodules.
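One plausible core of the route-planning stage, sketched under assumptions (the paper's actual analysis is not given in the abstract), is selecting the centerline path passing nearest the ROI and truncating it at the closest point:

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_route(paths, roi_points):
    """Choose the centerline path passing nearest the ROI and truncate it
    at its closest point, giving a candidate route terminus.

    paths: list of (K_i, 3) root-to-terminal centerline point arrays (mm).
    roi_points: (M, 3) ROI voxel or surface coordinates (mm)."""
    tree = cKDTree(roi_points)
    best_path, best_d = None, np.inf
    for path in paths:
        d, _ = tree.query(path)       # nearest-ROI distance per path point
        k = int(np.argmin(d))
        if d[k] < best_d:
            best_path, best_d = path[: k + 1], float(d[k])
    return best_path, best_d
```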
Robust and accurate segmentation of the human airway tree from multi-detector computed-tomography (MDCT) chest scans is vital for many pulmonary-imaging applications. As modern MDCT scanners can detect hundreds of airway tree branches, manual segmentation and semi-automatic segmentation requiring significant user intervention are impractical for producing a full global segmentation. Fully-automated methods, however, may fail to extract small peripheral airways. We propose an automatic algorithm that searches the entire lung volume for airway branches and poses segmentation as a global graph-theoretic optimization problem. The algorithm has shown strong performance on 23 human MDCT chest scans acquired by a variety of scanners and reconstruction kernels. Visual comparisons with adaptive region-growing results and quantitative comparisons with manually-defined trees indicate a high sensitivity to peripheral airways and a low false-positive rate. In addition, we propose a suite of interactive segmentation tools for cleaning and extending critical areas of the automatically segmented result. These interactive tools have potential application for image-based guidance of bronchoscopy to the periphery, where small, terminal branches can be important visual landmarks. Together, the automatic segmentation algorithm and interactive tool suite comprise a robust system for human airway-tree segmentation.
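The abstract poses segmentation as a global graph-theoretic optimization but does not state the algorithm. The sketch below substitutes a greedy, Prim-style tree growth over scored branch candidates purely to illustrate the graph formulation; it is not the paper's method, and all names are hypothetical.

```python
import heapq

def grow_airway_tree(scores, links, root):
    """Greedy, Prim-style growth over candidate branch segments: starting
    at the trachea candidate, repeatedly accept the neighbor with the
    highest net score (connection strength plus branch likelihood).

    scores: dict branch_id -> airway-likelihood score.
    links:  dict branch_id -> iterable of (neighbor_id, connection_score)."""
    accepted = {root}
    heap = [(-(w + scores[n]), n) for n, w in links.get(root, ())]
    heapq.heapify(heap)
    while heap:
        neg, n = heapq.heappop(heap)
        if -neg <= 0:
            break  # best remaining candidate is not worth keeping
        if n in accepted:
            continue
        accepted.add(n)
        for m, w in links.get(n, ()):
            if m not in accepted:
                heapq.heappush(heap, (-(w + scores[m]), m))
    return accepted
```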
Previous research has indicated that the use of guidance systems during endoscopy can improve the performance
and decrease the skill variation of physicians. Current guidance systems, however, rely on
computationally intensive registration techniques or costly and error-prone electromagnetic (E/M)
registration techniques, neither of which fits seamlessly into the clinical workflow. We have previously
proposed a real-time image-based registration technique that addresses both of these problems. We
now propose a system-level approach that incorporates this technique into a complete paradigm for
real-time image-based guidance in order to provide a physician with continuously updated navigational
and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional
elements such as global surface rendering, local cross-sectional views, and pertinent distances
are also incorporated into the system to provide additional utility to the physician. Phantom results
were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial
airway tree. The system has also been tested in ongoing live human tests. Thus far, ten such tests,
focused on bronchoscopic interventions in pulmonary patients, have been run successfully.
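As a rough stand-in for the image-based registration at the system's core (the actual technique is described in the authors' earlier work, not here), the sketch below scores candidate virtual-camera poses by normalized cross-correlation between the live frame and endoluminal renderings:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equal-size grayscale images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def register_frame(video_frame, render, candidate_poses):
    """Pick the virtual-camera pose whose endoluminal rendering best
    matches the live bronchoscopic frame.

    render: callable mapping a pose to a grayscale rendering."""
    scores = [ncc(video_frame, render(p)) for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```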
KEYWORDS: 3D image processing, Image segmentation, Medical imaging, Endoscopy, 3D acquisition, Chest, Diagnostics, Visualization, Bronchoscopy, Lung cancer
Physicians use endoscopic procedures to diagnose and treat a variety of medical conditions. For example,
bronchoscopy is often performed to diagnose lung cancer. The current practice for planning endoscopic procedures
requires the physician to manually scroll through the slices of a three-dimensional (3D) medical image. During
this scrolling, the physician must mentally reconstruct the 3D endoscopic route needed to reach a
specific diagnostic region of interest (ROI). Unfortunately, in the case of complex branching structures such as
the airway tree, ROIs are often situated several generations away from the organ's origin. Existing image-analysis
methods can help define possible endoscopic navigation paths, but they do not provide specific routes for reaching
a given ROI. We have developed an automated method to find a specific route to reach an ROI. Given a 3D
medical image, our method takes as inputs: (1) pre-defined ROIs; (2) a segmentation of the branching organ
through which the endoscopic device will navigate; and (3) centerlines (paths) through the segmented organ. We
use existing methods for branching-organ segmentation and centerline extraction. Our method then (1) identifies
the closest paths (routes) to the ROI; and (2) if necessary, performs a directed search for the organ of interest,
extending the existing paths to complete a route. Results from human 3D computed tomography chest images
illustrate the efficacy of the method.
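The directed search that extends a path toward the ROI is only named in the abstract. As a naive, hypothetical stand-in, the sketch below extends the closest path's endpoint in a straight line toward the ROI while samples remain inside the organ segmentation:

```python
import numpy as np

def extend_route(path_end, roi_centroid, organ_mask, spacing, step=0.5):
    """Naive stand-in for the directed search: walk in a straight line from
    the existing path's endpoint toward the ROI, keeping samples only while
    they stay inside the segmented organ.

    organ_mask: boolean segmentation volume; spacing: voxel size (mm)."""
    vec = roi_centroid - path_end
    length = np.linalg.norm(vec)
    direction = vec / length
    extension = []
    for t in np.arange(step, length, step):
        p = path_end + t * direction
        idx = np.round(p / spacing).astype(int)
        if np.any(idx < 0) or np.any(idx >= organ_mask.shape):
            break
        if not organ_mask[tuple(idx)]:
            break  # left the segmented organ; stop extending
        extension.append(p)
    return np.array(extension)
```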