Laparoscopic liver resection is increasingly being performed with results comparable to open cases while incurring less
trauma and reducing recovery time. The tradeoff is increased difficulty due to limited visibility and restricted freedom of
movement. Image-guided surgical navigation systems have the potential to help localize anatomical features to improve
procedural safety and achieve better surgical resection outcomes. Previous research has demonstrated that intraoperative
surface data can be used to drive a finite element tissue mechanics organ model such that high resolution preoperative
scans are registered and visualized in the context of the current surgical pose. In this paper we present an investigation of
using sparse data, as imposed by laparoscopic access limitations, to drive a registration model. Non-contact, laparoscopically acquired
surface swabbing and mock-ultrasound subsurface data were used within the context of a nonrigid registration
methodology to align mock deformed intraoperative surface data to the corresponding preoperative liver model as
derived from pre-operative image segmentations. The mock testing setup to validate the potential of this approach used a
tissue-mimicking liver phantom with a realistic abdomen-port patient configuration. Experimental results demonstrate that target registration errors (TRE) on the order of 5 mm were achieved using only surface swab data, while use of only subsurface data yielded errors on the order of 6 mm. Registrations using a combination of both datasets achieved TRE on the order of 2.5 mm, a sizeable improvement over either dataset alone.
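To make the role of the two sparse data sources concrete, the sketch below (an illustrative assumption, not the authors' finite element pipeline) shows a combined closest-point residual over surface-swab and mock-ultrasound subsurface points of the kind such a registration would seek to minimize; the weights and function names are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def combined_rms_error(model_surface_pts, model_subsurface_pts,
                       swab_pts, subsurface_pts,
                       w_surface=1.0, w_subsurface=1.0):
    """Weighted RMS of closest-point distances between the deformed model
    and the two sparse intraoperative data sets (Nx3 arrays, mm)."""
    d_surf = cKDTree(model_surface_pts).query(swab_pts)[0]
    d_sub = cKDTree(model_subsurface_pts).query(subsurface_pts)[0]
    sq = np.concatenate([w_surface * d_surf**2, w_subsurface * d_sub**2])
    return np.sqrt(sq.mean())
```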
Breast conservation therapy (BCT) is a desirable option for many women diagnosed with early stage breast cancer and involves a lumpectomy followed by radiotherapy. However, approximately 50% of eligible women will elect for mastectomy over BCT despite equal survival benefit (provided margins of excised tissue are cancer free) due to uncertainty in outcome with regard to complete excision of cancerous cells, risk of local recurrence, and cosmesis. Determining surgical margins intraoperatively is difficult, and achieving negative margins is not as robust as it needs to be, resulting in high re-operation rates and often mastectomy. Magnetic resonance images (MRI) can provide detailed information about tumor margin extents; however, diagnostic images are acquired in a fundamentally different patient presentation than that used in surgery. Therefore, the high quality diagnostic MRIs taken in the prone position with pendant breast are not optimal for use in surgical planning/guidance due to the drastic shape change between preoperative images and the common supine surgical position. This work proposes to investigate the value of supine MRI in an effort to localize tumors intraoperatively using image-guidance. Mock intraoperative setups (realistic patient positioning in a non-sterile environment) and preoperative imaging data were collected from a patient scheduled for a lumpectomy. The mock intraoperative data included a tracked laser range scan of the patient's breast surface, tracked center points of MR-visible fiducials on the patient's breast, and tracked B-mode ultrasound and strain images. The preoperative data included a supine MRI with visible fiducial markers. Fiducial markers localized in the MRI were rigidly registered to their mock intraoperative counterparts using an optically tracked stylus. The root mean square (RMS) fiducial registration error using the tracked markers was 3.4 mm. Following registration, the average closest point distance between the MR-generated surface nodes and the LRS point cloud was 1.76 ± 0.502 mm.
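For reference, the rigid fiducial registration step described here can be computed with the standard SVD-based least-squares method; the sketch below is a generic implementation (not the clinical software used in the study) that also reports the RMS fiducial registration error.

```python
import numpy as np

def rigid_register(fixed, moving):
    """Least-squares rigid transform (R, t) mapping moving fiducials onto
    fixed fiducials (both Nx3), via the SVD method with reflection check."""
    fc, mc = fixed.mean(0), moving.mean(0)
    H = (moving - mc).T @ (fixed - fc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = fc - R @ mc
    return R, t

def fre_rms(fixed, moving, R, t):
    """Root-mean-square fiducial registration error after alignment."""
    residual = fixed - (moving @ R.T + t)
    return np.sqrt((residual**2).sum(axis=1).mean())
```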
The accurate identification of tumor margins during neurosurgery is a primary concern for the surgeon in order to maximize resection of malignant tissue while preserving normal function. The use of preoperative imaging for guidance is standard of care, but tumor margins are not always clear even when contrast agents are used, and so margins are often determined intraoperatively by visual and tactile feedback. Ultrasound strain imaging creates a quantitative representation of tissue stiffness which can be used in real time. The information offered by strain imaging can be placed within a conventional image-guidance workflow by tracking the ultrasound probe and calibrating the image plane, which facilitates interpretation of the data by placing it within a common coordinate space with preoperative imaging. Tumor geometry in strain imaging is then directly comparable to the geometry in preoperative imaging. This paper presents a tracked ultrasound strain imaging system capable of co-registering with preoperative tomograms and of reconstructing a 3D surface using the border of the strain lesion. In a preliminary study using four phantoms with subsurface tumors, tracked strain imaging was registered to preoperative image volumes and tumor surfaces were reconstructed using contours extracted from strain image slices. The volumes of the phantom tumors reconstructed from tracked strain imaging ranged from approximately 1.5 to 2.4 cm3, similar to the CT volumes of 1.0 to 2.3 cm3. Future work will robustly characterize the reconstruction accuracy of the system.
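The transform chain that places a tracked, calibrated strain image into a common coordinate space with preoperative imaging can be summarized as below; the matrix names and pixel-spacing parameterization are illustrative assumptions rather than the specific calibration used in this system.

```python
import numpy as np

def us_pixel_to_world(px, py, pixel_spacing, T_image_to_probe, T_probe_to_world):
    """Map a pixel (px, py) in a calibrated, tracked B-mode/strain image to
    physical (tracker) coordinates: scale pixels to mm in the image plane,
    then apply the calibration and tracking transforms (4x4 homogeneous)."""
    p_image = np.array([px * pixel_spacing[0], py * pixel_spacing[1], 0.0, 1.0])
    return (T_probe_to_world @ T_image_to_probe @ p_image)[:3]
```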
Using computational models, images acquired pre-operatively can be updated to account for intraoperative brain shift in image-guided surgical (IGS) systems. An optically tracked textured laser range scanner (tLRS) furnishes the 3D
coordinates of cortical surface points (3D point clouds) over the surgical field of view and provides a correspondence
between these and the pre-operative MR image. However, integration of the acquired tLRS data into a clinically
acceptable system compatible throughout the clinical workflow of tumor resection has been challenging. This is because acquiring the tLRS data requires moving the scanner in and out of the surgical field, thus limiting the number of acquisitions. Large differences between acquisitions, caused by tumor resection and tissue manipulation, make it difficult to establish correspondence and estimate brain motion. An alternative to the tLRS is to use the temporally dense, feature-rich stereo surgical video data provided by the operating microscope. This allows for quick digitization of the cortical surface in 3D and can help continuously update the IGS system. In this paper, in order to understand the tradeoffs between these approaches as input to an IGS system, we compare the accuracy of the 3D point clouds extracted from the stereo video system of the surgical microscope and from the tLRS on phantom objects. We show that the stereovision system of the surgical microscope achieves accuracy in the 0.46-1.5 mm range on our phantom objects and is a viable alternative to the tLRS for neurosurgical applications.
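As a point of reference for how a calibrated stereo microscope pair yields 3D cortical points, the snippet below shows generic stereo triangulation with OpenCV; it is a simplification and not the specific reconstruction pipeline evaluated in the paper.

```python
import numpy as np
import cv2

def triangulate_cortical_points(P_left, P_right, pts_left, pts_right):
    """Triangulate matched 2D features from a calibrated stereo pair into a
    3D point cloud. P_left/P_right are 3x4 projection matrices; pts_left and
    pts_right are 2xN arrays of corresponding pixel coordinates."""
    X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)  # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T  # Nx3 Euclidean points
```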
In the context of open abdominal image-guided liver surgery, the efficacy of an image-guidance system relies on its ability to (1) accurately depict tool locations with respect to the anatomy, and (2) maintain the work flow of the surgical team. Laser-range scanned (LRS) partial surface measurements can be taken intraoperatively with relatively little impact on the surgical work flow, as opposed to other intraoperative imaging modalities. Previous research has demonstrated that this kind of partial surface data may be (1) used to drive a rigid registration of the preoperative CT image volume to intraoperative patient space, and (2) extrapolated and combined with a tissue-mechanics-based organ model to drive a non-rigid registration, thus compensating for organ deformations. In this paper we present a novel approach for intraoperative nonrigid liver registration which iteratively reconstructs a displacement field on the posterior side of the organ in order to minimize the error between the deformed model and the intraoperative surface data. Experimental results with a phantom liver undergoing large deformations demonstrate that this method achieves target registration errors (TRE) with a mean of 4.0 mm in the prediction of a set of 58 locations inside the phantom, which represents a 50% improvement over rigid registration alone, and a 44% improvement over the prior non-iterative single-solve method of extrapolating boundary conditions via a surface Laplacian.
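A high-level skeleton of such an iterative posterior-displacement scheme is sketched below under loose assumptions: the finite element solve is abstracted behind a caller-supplied function, and the step size, convergence test, and update rule are illustrative placeholders rather than the actual algorithm details.

```python
import numpy as np
from scipy.spatial import cKDTree

def iterative_posterior_reconstruction(solve_model, posterior_disp0, intraop_pts,
                                       step=0.5, tol=1e-3, max_iter=50):
    """Repeatedly solve the tissue-mechanics model with the current posterior
    displacement boundary conditions, measure the residual between the deformed
    anterior surface and the intraoperative surface data, and update the
    posterior displacements until the residual stops improving.

    solve_model(posterior_disp) -> (anterior_pts, update_direction) is a
    caller-supplied wrapper around the finite element solve (hypothetical)."""
    disp = posterior_disp0.copy()
    prev_residual = np.inf
    residual = prev_residual
    for _ in range(max_iter):
        anterior_pts, update_direction = solve_model(disp)
        residual = cKDTree(intraop_pts).query(anterior_pts)[0].mean()
        if prev_residual - residual < tol:
            break
        prev_residual = residual
        disp += step * update_direction
    return disp, residual
```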
Brain shift compromises the accuracy of neurosurgical image-guided interventions if not corrected by either intraoperative
imaging or computational modeling. The latter requires intraoperative sparse measurements for constraining and driving
model-based compensation strategies. Conoscopic holography, an interferometric technique that measures the distance
of a laser light illuminated surface point from a fixed laser source, was recently proposed for non-contact surface data
acquisition in image-guided surgery and is used here for validation of our modeling strategies. In this contribution, we
use this inexpensive, hand-held conoscopic holography device for intraoperative validation of our computational modeling
approach to correcting for brain shift. Laser range scan, instrument swabbing, and conoscopic holography data sets were
collected from two patients undergoing brain tumor resection therapy at Vanderbilt University Medical Center. The results
of our study indicate that conoscopic holography is a promising method for surface acquisition since it requires no contact
with delicate tissues and can characterize the extents of structures within confined spaces. We demonstrate that for two
clinical cases, the acquired conoprobe points align with our model-updated images better than with the uncorrected images, lending
further evidence that computational modeling approaches improve the accuracy of image-guided surgical interventions
in the presence of soft tissue deformations.
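The geometry underlying each conoprobe measurement is simple: a single range value along a calibrated laser axis, placed into tracker space with the probe's tracked pose. The sketch below illustrates this conversion; the laser origin and direction shown are placeholder calibration values, not the device's actual calibration.

```python
import numpy as np

def conoprobe_point_in_world(distance_mm, T_probe_to_world,
                             laser_origin=np.array([0.0, 0.0, 0.0]),
                             laser_direction=np.array([0.0, 0.0, 1.0])):
    """Convert one conoscopic holography range measurement into a 3D surface
    point in tracker (world) coordinates. The laser origin/direction in the
    probe frame come from probe calibration; T_probe_to_world is the tracked
    4x4 pose of the hand-held probe."""
    p_probe = laser_origin + distance_mm * laser_direction
    return (T_probe_to_world @ np.append(p_probe, 1.0))[:3]
```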
Intraoperative ultrasound imaging is a commonly used modality for image guided surgery and can be used to monitor
changes from pre-operative data in real time. Often a mapping of the liver surface is required to achieve image-to-physical
alignment for image guided liver surgery. Laser range scans and tracked optical stylus instruments have both
been utilized in the past to create an intraoperative representation of the organ surface. This paper proposes a method to
digitize the organ surface utilizing tracked ultrasound and to evaluate a relatively simple correction technique. Surfaces
are generated from point clouds obtained from the US transducer face itself during tracked movement. In addition, a
surface generated from a laser range scan (LRS) was used as the gold standard for evaluating the accuracy of the US
transducer swab surfaces. Two liver phantoms of differing stiffness were tested. The results showed that the average
deformation observed during a 60-second swab of the liver phantom was 3.7 ± 0.9 mm for the more rigid phantom and 4.6 ±
1.2 mm for the less rigid phantom. With respect to tissue targets below the surface, the average error in position due to
ultrasound surface digitization was 3.5 ± 0.5 mm and 5.9 ± 0.9 mm for the stiffer and softer phantoms respectively. With
the simple correction scheme, the surface error was reduced to 1.1 ± 0.8 mm and 1.7 ± 1.0 mm, respectively; and the
subsurface target error was reduced to 2.0 ± 0.9 mm and 4.5 ± 1.8 mm, respectively. These results are encouraging and
suggest that the ultrasound probe itself and the acquired images could serve as a comprehensive digitization approach for
image guided liver surgery.
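Conceptually, the swab surface is built by mapping a calibrated set of transducer-face points through each tracked probe pose, as in the illustrative sketch below (variable names and the face-point parameterization are assumptions, not the exact implementation).

```python
import numpy as np

def digitize_transducer_face(poses, face_pts_probe):
    """Build a surface point cloud from a tracked ultrasound swab: for every
    tracked 4x4 probe pose, map the calibrated transducer-face points (Nx3,
    defined once in the probe frame) into physical space."""
    face_h = np.hstack([face_pts_probe, np.ones((len(face_pts_probe), 1))])
    return np.vstack([(T @ face_h.T).T[:, :3] for T in poses])
```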
Image-guided surgery may reduce the re-excision rate in breast-conserving tumor-resection surgery, but
image guidance is difficult since the breast undergoes significant deformation during the procedure. In
addition, any imaging performed preoperatively is usually conducted in a very different patient presentation from that used in
surgery. Biomechanical models combined with low-cost ultrasound imaging and laser range scanning may
provide an inexpensive way to provide intraoperative guidance information while also compensating for soft
tissue deformations that occur during breast-conserving surgery. One major cause of deformation occurs after
an incision into the tissue is made and the skin flap is pulled back with the use of retractors. Since the next
step in the surgery would be to start building a surgical plane around the tumor to remove cancerous tissue, in
an image-guidance environment, it would be necessary to have a model that corrects for the deformation
caused by the surgeon to properly guide the application of resection tools. In this preliminary study, two
anthropomorphic breast phantoms were made, and retractions were performed on both with improvised
retractors. One phantom underwent a deeper retraction than the other. A laser range scanner (LRS) was used to
monitor phantom tissue change before and after retraction. The surface data acquired with the LRS and
retractors were then used to drive the solution of a finite element model. The results indicate an encouraging
level of agreement between model predictions and data. The surface target error for the phantom with the
deep retraction was 2.2 ± 1.2 mm (n=47 targets), with an average surface target deformation of 4.2 ± 1.6 mm. For the phantom with the shallow retraction, the surface target error was 2.1 ± 1.0 mm (n=70 targets), with an average surface target deformation of 4.0 ± 2.0 mm.
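The surface target error reported here is, in essence, the per-target Euclidean distance between model-predicted and LRS-measured positions; a minimal sketch of that summary computation follows (names are illustrative).

```python
import numpy as np

def surface_target_error(predicted_targets, measured_targets):
    """Per-target Euclidean error between model-predicted and measured
    surface target positions (Nx3 arrays), returned as mean and std."""
    err = np.linalg.norm(predicted_targets - measured_targets, axis=1)
    return err.mean(), err.std()
```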
The objective of our work is a system that enables both mechanically and electronically shapable thermal energy
deposition in soft tissue ablation. The overall goal is a system that can percutaneously (and through a single
organ surface puncture) treat tumors that are large, multiple, geometrically complex, or located too close to
vital structures for traditional resection. This paper focuses on mechanical steering and image guidance aspects
of the project. Mechanical steering is accomplished using an active cannula that enables repositioning of the
ablator tip without complete retraction. We describe experiments designed to evaluate targeting accuracy of the
active cannula (also known as a concentric tube robot) in soft tissues under tracked 3D ultrasound guidance.
Laser range scanning an organ surface intraoperatively provides a cost effective and accurate means of measuring
geometric changes in tissue. A novel laser range scanner with integrated tracking was designed, developed, and analyzed
with the goal of providing intraoperative surface data during neurosurgery. The scanner is fitted with passive spheres to
be optically tracked in the operating room. The design notably includes a single-lens system capable of acquiring the
geometric information (as a Cartesian point cloud) via laser illumination and charge-coupled device (CCD) collection, as
well as the color information via visible light collection on the same CCD. The geometric accuracy was assessed by
scanning a machined phantom of known dimensions and comparing relative distances of landmarks from the point cloud
to the known distances. The ability of the scanner to be tracked was first evaluated by perturbing its orientation in front
of the optical tracking camera and recording the number of spheres visible to the camera at each orientation, and then by
observing the variance in point cloud locations of a fixed object when the tracking camera is moved around the scanner.
The scanning accuracy test resulted in an RMS error of 0.47 mm with standard deviation of 0.40 mm. The sphere
visibility test showed that four spheres were visible in most of the probable operating orientations, and the overall
tracking standard deviation was observed to be 1.49 mm. Intraoperative collection of cortical surface scans using the new
scanner is currently underway.
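The geometric accuracy test amounts to comparing pairwise landmark distances in the scanned point cloud against the machined ground truth; a minimal sketch of that computation is given below (function and variable names are illustrative).

```python
import numpy as np

def relative_distance_rms(landmarks_scan, landmarks_known):
    """Compare all pairwise distances between landmarks localized in the
    scanned point cloud (Nx3) to the known machined distances (Nx3, same
    ordering) and report the RMS and standard deviation of the differences."""
    def pairwise(p):
        diff = p[:, None, :] - p[None, :, :]
        d = np.linalg.norm(diff, axis=-1)
        return d[np.triu_indices(len(p), k=1)]
    err = pairwise(landmarks_scan) - pairwise(landmarks_known)
    return np.sqrt((err**2).mean()), err.std()
```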
Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue
using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical
input to the algorithm, and are often determined by time-consuming point correspondence methods requiring manual
user input. Unfortunately, generation of accurate boundary conditions for the biomechanical model is often difficult due
to the challenge of accurately matching points between the source and target surfaces and consequently necessitates the
use of large numbers of fiducial markers. This study presents a novel method of automatically generating boundary
conditions by non-rigidly registering two image sets with a Demons diffusion-based registration algorithm. The method was successfully applied in silico using magnetic resonance and X-ray computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with accuracy of up to
80% compared to the known conditions. Finally, these boundary conditions were utilized within a 3D MIE
reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Preliminary results show a
reasonable characterization of the material properties on this first attempt and a significant improvement in the
automation level and viability of the method.
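A minimal sketch of the boundary-condition generation idea follows, assuming SimpleITK's Demons filter as the diffusion-based registration and assuming the two image sets have been resampled to a common grid; parameter values, names, and the sampling convention are illustrative, and the sign convention of the sampled displacements depends on which image is treated as fixed.

```python
import SimpleITK as sitk

def demons_boundary_conditions(source_img, target_img, surface_nodes_mm,
                               iterations=100, smoothing_sigma=2.0):
    """Register the two image sets with a Demons diffusion-based algorithm and
    sample the displacement field at surface node locations (physical mm
    coordinates) to obtain candidate boundary conditions."""
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(iterations)
    demons.SetStandardDeviations(smoothing_sigma)  # Gaussian regularization of the field
    field = demons.Execute(sitk.Cast(target_img, sitk.sitkFloat32),
                           sitk.Cast(source_img, sitk.sitkFloat32))
    tx = sitk.DisplacementFieldTransform(sitk.Cast(field, sitk.sitkVectorFloat64))
    # Displacement at a node = mapped position - original position.
    return [tuple(m - p for m, p in zip(tx.TransformPoint(pt), pt))
            for pt in surface_nodes_mm]
```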