Existing methods to improve the accuracy of tibiofibular joint reduction present workflow challenges, high radiation exposure, and a lack of accuracy and precision, leading to poor surgical outcomes. To address these limitations, we propose a method to perform robot-assisted joint reduction using intraoperative imaging to align the dislocated fibula to a target pose relative to the tibia. The approach (1) localizes the robot via 3D-2D registration of a custom plate adapter attached to its end effector, (2) localizes the tibia and fibula using multi-body 3D-2D registration, and (3) drives the robot to reduce the dislocated fibula according to the target plan. The custom robot adapter was designed to interface directly with the fibular plate while presenting radiographic features to aid registration. Registration accuracy was evaluated on a cadaveric ankle specimen, and the feasibility of robotic guidance was assessed by manipulating a dislocated fibula in a cadaver ankle. Using standard AP and mortise radiographic views, registration errors were measured to be less than 1 mm and 1° for the robot adapter and the ankle bones. Experiments in a cadaveric specimen revealed deviations of up to 4 mm from the intended path, which were reduced to ≲2 mm using corrective actions guided by intraoperative imaging and 3D-2D registration. Preclinical studies suggest that significant robot flex and tibial motion occur during fibula manipulation, motivating the use of the proposed method to dynamically correct the robot trajectory. Accurate robot registration was achieved via the use of fiducials embedded within the custom design. Future work will evaluate the approach on a custom radiolucent robot design currently under construction and verify the solution on additional cadaveric specimens.
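To make the correction step concrete, a minimal numpy sketch follows: given the planned target pose of the fibula relative to the tibia and the pose actually measured by 3D-2D registration, it computes the residual rigid motion the robot should apply. The transform names and 4x4 homogeneous convention are illustrative assumptions, not the system's interfaces.

```python
import numpy as np

def corrective_motion(T_target, T_measured):
    """Residual 4x4 motion to apply so the measured pose reaches the target."""
    return T_target @ np.linalg.inv(T_measured)

# toy example: measured fibula pose lags the target by 4 mm along x
T_target = np.eye(4)
T_measured = np.eye(4)
T_measured[0, 3] = -4.0                       # mm
T_corr = corrective_motion(T_target, T_measured)
print(T_corr[0, 3])                           # 4.0 -> command a +4 mm correction
```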
Purpose. Mobile C-arms capable of 2D fluoroscopy and 3D cone-beam CT (CBCT) are finding application in guidance of transbronchial lung biopsy, but unresolved deformable motion presents challenges to accurate target localization and guidance. We report the initial implementation of a method to resolve deformations via locally rigid / globally deformable 3D-2D registration for motion-compensated overlay of planning data in fluoroscopically-guided pulmonary interventions. Methods. The algorithm proceeds in 3 steps: (1) initialization by 3D-2D rigid registration of CBCT to fluoroscopy (driven by bone gradients); (2) local rigid 3D-2D registration of lung-thresholded CBCT to fluoroscopy within a region of interest (ROI) about each target location; and (3) aggregation of local rigid registrations to estimate global deformation. Several objective functions and optimizers were evaluated for soft-tissue target registration. Phantom studies were performed to determine operating parameters and assess performance with simulated lung deformation. Results. Soft-tissue thresholding and contrast enhancement improved target registration error (TRE) from 10.3 mm for conventional 3D-2D registration (driven primarily by rib gradients) to 3.8 mm using locally rigid 3D-2D registration in regions of interest about each target. The soft-tissue gradient orientation (GO) objective function was found to be superior to alternative similarity measures by de-emphasizing gradient magnitude (in favor of gradient orientation), permitting the algorithm to be better driven by soft-tissue edges. Conclusions. Registration driven by soft-tissue targets is achievable via a novel processing framework to de-emphasize non-target gradients. The proposed method could improve the accuracy of guidance in pulmonary interventions by updating target overlay in fluoroscopy.
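As an illustration of the GO similarity idea, a minimal numpy sketch of a magnitude-normalized gradient-orientation measure follows; this is one common formulation and not necessarily the exact objective function used above.

```python
import numpy as np

def gradient_orientation(fixed, moving, eps=1e-6):
    """Mean squared cosine between image gradients (magnitude-normalized)."""
    gx_f, gy_f = np.gradient(fixed.astype(float))
    gx_m, gy_m = np.gradient(moving.astype(float))
    mag_f = np.hypot(gx_f, gy_f) + eps
    mag_m = np.hypot(gx_m, gy_m) + eps
    cos_theta = (gx_f * gx_m + gy_f * gy_m) / (mag_f * mag_m)
    # squaring makes opposite-polarity edges count as agreement and
    # de-emphasizes gradient magnitude in favor of orientation
    return np.mean(cos_theta ** 2)

# toy check: a brightness-rescaled copy scores ~1, i.e., magnitude-invariant
img = np.random.rand(64, 64)
print(gradient_orientation(img, 2.0 * img + 5.0))   # ~1.0
```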
The quality of neurosurgical planning can become compromised by soft tissue deformations that occur during surgery (i.e., brain shift). Conventional image guidance systems do not consider these intraoperative changes. In recent efforts, model-based strategies have been developed to estimate displacements induced by surgical loads and modify the patient’s data intraoperatively to account for brain shift. While the efficacy of the model has been previously established, there is also an opportunity to further assist surgical planning with this pipeline. To address this, a mobile application designed for an Android tablet was developed to display simulated brain shift that would occur during brain tumor surgery. The application has two primary functions to facilitate planning: a patient positioning mode and a simulation mode. The patient positioning mode allows the neurosurgeon to load the patient’s preoperative MR data and create a surgical plan (i.e., head orientation and craniotomy location) for the procedure. The simulation mode then displays both the preoperative data and the model-predicted brain shift as a function of the orientation specified in the patient positioning mode. Additionally, to account for positional variations between planning and procedural implementation, the simulation mode also displays solutions with additional perturbations to the planned positioning to estimate shift possibilities. To assess the simulation mode prototype, practicing neurosurgeons were provided a prototype demonstration, and interviews were performed to evaluate efficacy and design. Due to computational rendering and 3D rotation shortcomings identified from clinical feedback, the prototype was redesigned into a full mixed reality simulation app on the Microsoft HoloLens. Preliminary survey responses show that the prototype could be an impactful surgical planning tool, especially among neurosurgeons with less experience.
Purpose: A method for fluoroscopic guidance of a robotic assistant is presented for instrument placement in pelvic trauma surgery. The solution uses fluoroscopic images acquired in standard clinical workflow and helps avoid repeat fluoroscopy commonly performed during implant guidance.
Approach: Images acquired from a mobile C-arm are used to perform 3D–2D registration of both the patient (via patient CT) and the robot (via a CAD model of a surgical instrument attached to its end effector, e.g., a drill guide), guiding the robot to target trajectories defined in the patient CT. The proposed approach avoids C-arm gantry motion, instead manipulating the robot to acquire disparate views of the instrument. Phantom and cadaver studies were performed to determine operating parameters and assess the accuracy of the proposed approach in aligning a standard drill guide instrument.
Results: The proposed approach achieved average drill guide tip placement accuracy of 1.57 ± 0.47 mm and angular alignment of 0.35 ± 0.32 deg in phantom studies. The errors remained within 2 mm and 1 deg in cadaver experiments, comparable to the margins of error provided by surgical trackers (but operating without the need for external tracking).
Conclusions: By operating at a fixed fluoroscopic perspective and eliminating the need for encoded C-arm gantry movement, the proposed approach simplifies and expedites the registration of image-guided robotic assistants and can be used with simple, non-calibrated, non-encoded, and non-isocentric C-arm systems to accurately guide a robotic device in a manner that is compatible with the surgical workflow.
Purpose. Surgical placement of pelvic instrumentation is challenged by complex anatomy and narrow bone corridors, and despite heavy reliance on intraoperative fluoroscopy, trauma surgery lacks a reliable solution for 3D surgical navigation that is compatible with steep workflow requirements. We report a method that uses routinely acquired fluoroscopic images in standard workflow to automatically detect and localize orthopaedic instruments for 3D guidance. Methods. The proposed method detects, establishes correspondence of, and localizes orthopaedic devices from a pair of radiographs. Instrument detection uses Mask R-CNN for segmentation and keypoint detection, trained on 4000 cadaveric pelvic radiographs with simulated guidewires. Keypoints on individual images are corresponded using prior calibration of the imaging system to backproject, identify, and rank-order ray intersections. Estimation of 3D instrument tip and direction was evaluated on a cadaveric specimen and patient images from an IRB-approved clinical study. Results. The detection network successfully generalized to cadaver and clinical images, achieving 87% recall and 98% precision. Mean geometric accuracy in estimating instrument tip and direction was (1.9 ± 1.6) mm and (1.8 ± 1.3)°, respectively. Simulation studies demonstrated 1.1 mm median error in 3D tip and 2.3° in 3D direction estimation. Preliminary tests in cadaver and clinical images show the basic feasibility of the overall approach. Conclusions. Experimental studies demonstrate the feasibility and highlight the potential of deep learning for 3D-2D registration of orthopaedic instruments as applied in fixation of pelvic fractures. The approach is compatible with routine orthopaedic workflow, does not require additional equipment (such as surgical trackers), uses common imaging equipment (mobile C-arm fluoroscopy), and does not require vendor-specific device models.
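The correspondence and localization steps can be illustrated with a small triangulation sketch: each detected keypoint is backprojected to a ray using the calibrated projection matrix, and the 3D tip is taken near the closest approach of the rays from two views. The toy geometry below is an assumption for illustration, not the trained pipeline.

```python
import numpy as np

def backproject_ray(P, uv):
    """Origin and unit direction of the ray through pixel uv for a 3x4 P."""
    M, p4 = P[:, :3], P[:, 3]
    origin = -np.linalg.solve(M, p4)                  # x-ray source position
    d = np.linalg.solve(M, np.array([uv[0], uv[1], 1.0]))
    return origin, d / np.linalg.norm(d)

def triangulate(P1, uv1, P2, uv2):
    """Midpoint of closest approach between the two backprojected rays."""
    o1, d1 = backproject_ray(P1, uv1)
    o2, d2 = backproject_ray(P2, uv2)
    A = np.stack([d1, -d2], axis=1)                   # t1*d1 - t2*d2 = o2 - o1
    t = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + t[0] * d1) + (o2 + t[1] * d2))

# toy check with two synthetic views 90 degrees apart
P1 = np.hstack([np.eye(3), [[0.], [0.], [10.]]])
R2 = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
P2 = np.hstack([R2, [[0.], [0.], [10.]]])
proj = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
X = np.array([1., 2., 3.])
print(triangulate(P1, proj(P1, X), P2, proj(P2, X)))  # ~[1, 2, 3]
```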
Purpose: A method and prototype for a fluoroscopically-guided surgical robot is reported for assisting pelvic fracture fixation. The approach extends the compatibility of existing guidance methods with C-arms that are in mainstream use (without prior geometric calibration) using an online calibration of the C-arm geometry automated via registration to patient anatomy. We report the first preclinical studies of this method in cadaver for evaluation of geometric accuracy. Methods: The robot is placed over the patient within the imaging field-of-view, and radiographs are acquired as the robot rotates an attached instrument. The radiographs are then used to perform an online geometric calibration via 3D-2D image registration, which solves for the intrinsic and extrinsic parameters of the C-arm imaging system with respect to the patient. The solved projective geometry is then used to register the robot to the patient and drive the robot to planned trajectories. This method is applied to a robotic system consisting of a drill guide instrument for guidewire placement and evaluated in experiments using a cadaver specimen. Results: Robotic drill guide alignment to trajectories defined in the cadaver pelvis was accurate within 2 mm and 1° (on average) using the calibration-free approach. Conformance of trajectories within bone corridors was confirmed in cadaver by extrapolating the aligned drill guide trajectory into the cadaver pelvis. Conclusion: This study demonstrates the accuracy of image-guided robotic positioning without prior calibration of the C-arm gantry, facilitating the use of surgical robots with simpler imaging devices that cannot establish or maintain an offline calibration. Future work includes testing of the system in a clinical setting with trained orthopaedic surgeons and residents.
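For reference, once the online calibration has solved a 3x4 projection matrix, the intrinsic and extrinsic parameters can be recovered with a standard computer-vision decomposition, sketched below (a textbook RQ factorization, not the authors' code; a full implementation would also enforce det(R) = +1).

```python
import numpy as np
from scipy.linalg import rq

def decompose_projection(P):
    """Split P ~ K [R | t] into intrinsics K, rotation R, translation t."""
    K, R = rq(P[:, :3])                   # upper-triangular x orthogonal
    signs = np.sign(np.diag(K))
    K, R = K * signs, (R.T * signs).T     # force positive focal lengths
    P = P / K[2, 2]                       # fix the overall projective scale
    K = K / K[2, 2]
    t = np.linalg.solve(K, P[:, 3])
    return K, R, t

# toy check
K0 = np.array([[1000., 0., 256.], [0., 1000., 256.], [0., 0., 1.]])
t0 = np.array([0., 0., 500.])
P = K0 @ np.hstack([np.eye(3), t0[:, None]])
K, R, t = decompose_projection(P)
print(np.allclose(K, K0), np.allclose(R, np.eye(3)), np.allclose(t, t0))
```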
Purpose: Measurement of global spinal alignment (GSA) is an important aspect of diagnosis and treatment evaluation for spinal deformity but is subject to a high level of inter-reader variability.
Approach: Two methods for automatic GSA measurement are proposed to mitigate such variability and reduce the burden of manual measurements. Both approaches use vertebral labels in spine computed tomography (CT) as input: the first (EndSeg) segments vertebral endplates using input labels as seed points; and the second (SpNorm) computes a two-dimensional curvilinear fit to the input labels. Studies were performed to characterize the performance of EndSeg and SpNorm in comparison to manual GSA measurement by five clinicians, including measurements of proximal thoracic kyphosis, main thoracic kyphosis, and lumbar lordosis.
Results: For the automatic methods, 93.8% of endplate angle estimates were within the inter-reader 95% confidence interval (CI95). All GSA measurements for the automatic methods were within the inter-reader CI95, and there was no statistically significant difference between automatic and manual methods. The SpNorm method appears particularly robust as it operates without segmentation.
Conclusions: Such methods could improve the reproducibility and reliability of GSA measurements and are potentially suitable to applications in large datasets—e.g., for outcome assessment in surgical data science.
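The SpNorm idea can be sketched in a few lines: fit a smooth curve through the vertebral centroid labels in the sagittal plane and read angles off the curve tangents, with no endplate segmentation required. The polynomial order, coordinates, and level choices below are illustrative assumptions, not the published implementation.

```python
import numpy as np

def tangent_angle(coeffs, z):
    """Tangent angle (deg) of the fitted curve x = f(z) at height z."""
    return np.degrees(np.arctan(np.polyval(np.polyder(coeffs), z)))

# centroids: (anterior-posterior, cranio-caudal) positions of labeled vertebrae
centroids = np.array([[5., 0.], [12., 35.], [16., 70.],
                      [14., 105.], [6., 140.], [-4., 175.]])
coeffs = np.polyfit(centroids[:, 1], centroids[:, 0], deg=4)
# a lordosis-like metric from the curve tangents at two levels
angle = tangent_angle(coeffs, 0.0) - tangent_angle(coeffs, 140.0)
print(f"curvilinear angle estimate: {angle:.1f} deg")
```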
Purpose. We report the initial development of an image-based solution for robotic assistance of pelvic fracture fixation. The approach uses intraoperative radiographs, preoperative CT, and an end effector of known design to align the robot with target trajectories in CT. The method extends previous work to solve the robot-to-patient registration from a single radiographic view (without C-arm rotation) and addresses the workflow challenges associated with integrating robotic assistance in orthopaedic trauma surgery in a form that could be broadly applicable to isocentric or non-isocentric C-arms. Methods. The proposed method uses 3D-2D known-component registration to localize a robot end effector with respect to the patient by: (1) exploiting the extended size and complex features of pelvic anatomy to register the patient; and (2) capturing multiple end effector poses using precise robotic manipulation. These transformations, along with an offline hand-eye calibration of the end effector, are used to calculate target robot poses that align the end effector with planned trajectories in the patient CT. Geometric accuracy of the registrations was independently evaluated for the patient and the robot in phantom studies. Results. The resulting translational difference between the ground truth and patient registrations of a pelvis phantom using a single (AP) view was 1.3 mm, compared to 0.4 mm using dual (AP+Lat) views. Registration of the robot in air (i.e., no background anatomy) with five unique end effector poses achieved mean translational difference ~1.4 mm for K-wire placement in the pelvis, comparable to tracker-based margins of error (commonly ~2 mm). Conclusions. The proposed approach is feasible based on the accuracy of the patient and robot registrations and is a preliminary step in developing an image-guided robotic guidance system that more naturally fits the workflow of fluoroscopically guided orthopaedic trauma surgery. Future work will involve end-to-end development of the proposed guidance system and assessment of the system with delivery of K-wires in cadaver studies.
Purpose. Fracture reduction is a challenging part of orthopaedic pelvic trauma procedures, resulting in poor long-term prognosis if reduction does not accurately restore natural morphology. Manual preoperative planning is performed to obtain target transformations of the fractured bones – a process that is challenging and time-consuming even for experts within the rapid workflow of emergent care and fluoroscopically guided surgery. We report a method for fracture reduction planning using a novel image-based registration framework. Methods. An objective function is designed to simultaneously register multi-body bone fragments, preoperatively segmented via a graph-cut method, to a pelvic statistical shape model (SSM) with inter-body collision constraints. An alternating optimization strategy switches between fragment alignment and SSM adaptation to solve for the fragment transformations for fracture reduction planning. The method was examined in a leave-one-out study performed over a pelvic atlas with 40 members, with two-body and three-body fractures simulated in the left innominate bone with displacements ranging 0–20 mm and 0°–15°. Results. Experiments showed the feasibility of the registration method in both two-body and three-body fracture cases. The segmentations achieved a Dice coefficient of median 0.94 (0.01 interquartile range [IQR]) and root mean square error (RMSE) of 2.93 mm (0.56 mm IQR). In two-body fracture cases, fracture reduction planning yielded 3.8 mm (1.6 mm IQR) translational and 2.9° (1.8° IQR) rotational error. Conclusion. The method demonstrated accurate fracture reduction planning within 5 mm and shows promise for future generalization to more complicated fracture cases. The algorithm provides a novel means of planning from preoperative CT images that are already acquired in standard workflow.
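A toy sketch can convey the structure of the alternating optimization: step 1 rigidly aligns each fragment to the current SSM instance, and step 2 re-fits the shape coefficient to the aligned fragments. The 2D point-set "SSM" below (mean shape plus one mode) is a deliberately simplified stand-in, without the collision constraints of the full method.

```python
import numpy as np

def procrustes(src, dst):
    """Best rigid (R, t) mapping src points onto dst (2D Kabsch)."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - mu_s).T @ (dst - mu_d))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

# toy SSM: 8 points along x, one zigzag shape mode, split into two fragments
mean_shape = np.stack([np.arange(8.0), np.zeros(8)], axis=1)
mode = np.stack([np.zeros(8), np.tile([0.2, -0.2], 4)], axis=1)
frag_idx = [np.arange(4), np.arange(4, 8)]

rng = np.random.default_rng(0)
true_b = 0.8
shape = mean_shape + true_b * mode            # the "intact" anatomy
fragments = [shape[i] + rng.normal(0, 0.01, (4, 2)) + rng.normal(0, 0.5, 2)
             for i in frag_idx]               # displaced (fractured) fragments

b = 0.0                                       # SSM shape coefficient
for _ in range(10):
    target = mean_shape + b * mode
    # step 1: fragment alignment to the current SSM instance
    aligned = []
    for f, i in zip(fragments, frag_idx):
        R, t = procrustes(f, target[i])
        aligned.append(f @ R.T + t)
    # step 2: SSM adaptation to the aligned fragments (least squares)
    resid = (np.concatenate(aligned) - mean_shape).ravel()
    b = float(mode.ravel() @ resid / (mode.ravel() @ mode.ravel()))
print(f"recovered shape coefficient b = {b:.2f} (truth {true_b})")
```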
Spinal degeneration and deformity present an enormous healthcare burden, with spine surgery among the main treatment modalities. Unfortunately, spine surgery (e.g., lumbar fusion) exhibits broad variability in the quality of outcome, with ~20-40% of patients gaining no benefit in pain or function (“failed back surgery”) and earning criticism that is difficult to reconcile with its rapid growth in frequency and cost over the last decade. Vital to advancing the quality of care in spine surgery are improved clinical decision support (CDS) tools that are accurate, explainable, and actionable: accurate in prediction of outcomes; explainable in terms of the physical / physiological factors underlying the prediction; and actionable within the shared decision process between a surgeon and patient in identifying steps that could improve outcome. This technical note presents an overview of a novel outcome prediction framework for spine surgery (dubbed SpineCloud) that leverages innovative image analytics in combination with explainable prediction models to achieve accurate outcome prediction. Key to the SpineCloud framework are image analysis methods for extraction of high-level quantitative features from multi-modality peri-operative images (CT, MR, and radiography) related to spinal morphology (including bone and soft-tissue features), the surgical construct (including deviation from an ideal reference), and longitudinal change in such features. The inclusion of such image-based features is hypothesized to boost the predictive power of models that conventionally rely on demographic / clinical data alone (e.g., age, gender, BMI, etc.). Preliminary results using gradient boosted decision trees demonstrate that such prediction models are explainable (i.e., why a particular prediction is made), actionable (identifying features that may be addressed by the surgeon and/or patient), and boost predictive accuracy compared to analysis based on demographics alone (e.g., AUC improved by ~25% in preliminary studies). Incorporation of such CDS tools in spine surgery could fundamentally alter and improve the shared decision-making process between surgeons and patients by highlighting actionable features to improve selection of therapeutic and rehabilitative pathways.
Purpose: Metal artifacts remain a challenge for CBCT systems in diagnostic imaging and image-guided surgery, obscuring visualization of metal instruments and surrounding anatomy. We present a method to predict C-arm CBCT orbits that will avoid metal artifacts by acquiring projection data that are least affected by polyenergetic bias. Methods: The metal artifact avoidance (MAA) method operates with a minimum of prior information, is compatible with simple mobile C-arms that are increasingly prevalent in routine use, and is consistent with 3D filtered backprojection (FBP), more advanced (polyenergetic) model-based image reconstruction (MBIR), and/or metal artifact reduction (MAR) post-processing methods. MAA consists of the following steps: (i) coarse localization of metal objects in the field of view (FOV) via two or more low-dose scout views, coarse backprojection, and segmentation (e.g., with a U-Net); (ii) a simple model-based prediction of metal-induced x-ray spectral shift for all source-detector vertices (gantry rotation and tilt angles) accessible by the imaging system; and (iii) definition of a source-detector orbit that minimizes the view-to-view inconsistency in spectral shift. The method was evaluated in an anthropomorphic phantom study emulating pedicle screw placement in spine surgery. Results: Phantom studies confirmed that the MAA method could accurately predict tilt angles that minimize metal artifacts. The proposed U-Net segmentation method was able to localize complex distributions of metal instrumentation (over 70% Dice coefficient) with 6 low-dose scout projections acquired during the routine pre-scan collision check. CBCT images acquired at MAA-prescribed tilt angles demonstrated ~50% reduction in “blooming” artifacts (measured as FWHM of the screw shaft). Geometric calibration for tilted orbits at prescribed angular increments with interpolation for intermediate values demonstrated accuracy comparable to non-tilted circular trajectories in terms of the modulation transfer function. Conclusion: The preliminary results demonstrate the ability to predict C-arm orbits that provide projection data with minimal spectral bias from metal instrumentation. Such orbits exhibit strongly reduced metal artifacts, and the projection data are compatible with additional post-processing (metal artifact reduction, MAR) methods to further reduce artifacts and/or reduce noise. Ongoing studies aim to improve the robustness of metal object localization from scout views and investigate additional benefits of non-circular C-arm trajectories.
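Steps (ii)-(iii) can be illustrated with a didactic stand-in for the spectral-shift model: score each candidate source-detector direction by a simple metal path-length proxy (a screw attenuates most when viewed nearly end-on) and choose the tilt whose orbit minimizes the view-to-view inconsistency of that proxy. The screw geometry and scoring below are assumptions for illustration only.

```python
import numpy as np

screw_axes = np.array([[0.0, 0.3, 1.0], [0.0, -0.3, 1.0]])   # two pedicle screws
screw_axes /= np.linalg.norm(screw_axes, axis=1, keepdims=True)

thetas = np.radians(np.arange(0, 196, 2))     # gantry rotation angles in orbit
tilts = np.radians(np.arange(-30, 31, 2))     # candidate gantry tilt angles

def view_dir(theta, tilt):
    return np.array([np.cos(tilt) * np.cos(theta),
                     np.cos(tilt) * np.sin(theta),
                     np.sin(tilt)])

def inconsistency(tilt):
    # proxy for metal-induced spectral shift in each view of the orbit
    shift = [np.sum(np.abs(screw_axes @ view_dir(th, tilt))) for th in thetas]
    return np.std(shift)                      # view-to-view inconsistency

best = tilts[int(np.argmin([inconsistency(t) for t in tilts]))]
print(f"prescribed tilt: {np.degrees(best):.0f} deg")
```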
Purpose: Data-intensive modeling could provide insight on the broad variability in outcomes in spine surgery. Previous studies were limited to analysis of demographic and clinical characteristics. We report an analytic framework called “SpineCloud” that incorporates quantitative features extracted from perioperative images to predict spine surgery outcome.
Approach: A retrospective study was conducted in which patient demographics, imaging, and outcome data were collected. Image features were automatically computed from perioperative CT. Postoperative 3- and 12-month functional and pain outcomes were analyzed in terms of improvement relative to the preoperative state. A boosted decision tree classifier was trained to predict outcome using demographic and image features as predictor variables. Predictions were computed based on SpineCloud and conventional demographic models, and features associated with poor outcome were identified from weighting terms evident in the boosted tree.
Results: Neither approach was predictive of 3- or 12-month outcomes based on preoperative data alone in the current, preliminary study. However, SpineCloud predictions incorporating image features obtained during and immediately following surgery (i.e., intraoperative and immediate postoperative images) exhibited significant improvement in area under the receiver operating characteristic (AUC): AUC = 0.72 (CI95 = 0.59 to 0.83) at 3 months and AUC = 0.69 (CI95 = 0.55 to 0.82) at 12 months.
Conclusions: Predictive modeling of lumbar spine surgery outcomes was improved by incorporation of image-based features compared to analysis based on conventional demographic data. The SpineCloud framework could improve understanding of factors underlying outcome variability and warrants further investigation and validation in a larger patient cohort.
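A minimal sketch of the modeling setup follows, using synthetic stand-ins for the demographic and image features (real SpineCloud features derive from perioperative image analytics): a boosted decision tree is trained and evaluated by ROC AUC, with feature importances offering the kind of explainability noted above.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
demo = rng.normal(size=(n, 3))   # stand-ins for age, sex, BMI (standardized)
img = rng.normal(size=(n, 5))    # stand-ins for image-based features
y = (img[:, 0] + 0.5 * demo[:, 0] + rng.normal(0, 1, n) > 0).astype(int)

X = np.hstack([demo, img])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
print("feature importances:", clf.feature_importances_.round(2))
```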
Purpose: A method for automatic computation of global spinal alignment (GSA) metrics is presented to mitigate the high variability of manual definitions in radiographic images. The proposed algorithm segments vertebral endplates in CT as a basis for automatic computation of metrics of global spinal morphology. The method is developed as a potential tool for intraoperative guidance in deformity correction surgery, and/or automatic definition of GSA in large datasets for analysis of surgical outcome. Methods: The proposed approach segments vertebral endplates in spine CT images using vertebral labels as input. The segmentation algorithm extracts vertebral boundaries using a continuous max-flow algorithm and segments the vertebral endplate surface by region-growing. The point cloud of the segmented endplate is forward-projected as a digitally reconstructed radiograph (DRR), and a linear fit is computed to extract the endplate angle in the radiographic plane. Two GSA metrics (lumbar lordosis and thoracic kyphosis) were calculated using these automatically measured endplate angles. Experiments were performed in seven patient CT images acquired from Spineweb, and accuracy was quantified by comparing automatically computed endplate angles and GSA metrics to manual definitions. Results: Endplate angles were automatically computed with median accuracy = 2.7°, upper quartile (UQ) = 4.8°, and lower quartile (LQ) = 1.0° with respect to manual ground-truth definitions. This was within the measured intra-observer variability = 3.1° (RMS) of manual definitions. GSA metrics had median accuracy = 1.1° (UQ = 3.1°) for lumbar lordosis and median accuracy = 0.4° (UQ = 3.0°) for thoracic kyphosis. The performance of GSA measurements was also within the variability of the manual approach. Conclusions: The method offers a potential alternative to time-consuming, manual definition of endplate angles for GSA computation. Such automatic methods could provide a means of intraoperative decision support in correction of spinal deformity and facilitate data-intensive analysis in identifying metrics correlating with surgical outcomes.
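The angle-extraction step reduces to a projection and a line fit, sketched below; the orthographic sagittal projection is a simplifying assumption standing in for the DRR forward projection described above.

```python
import numpy as np

def endplate_angle(points_3d):
    """points_3d: (N, 3) endplate points as (lateral, ant-post, cranio-caudal)."""
    y, z = points_3d[:, 1], points_3d[:, 2]   # drop lateral coord (sagittal view)
    slope = np.polyfit(y, z, deg=1)[0]        # linear fit z = m*y + b
    return np.degrees(np.arctan(slope))

# toy endplate point cloud inclined at 8 degrees in the sagittal plane
pts = np.column_stack([np.random.rand(50),
                       np.linspace(0, 30, 50),
                       np.linspace(0, 30, 50) * np.tan(np.radians(8.0))])
print(f"endplate angle: {endplate_angle(pts):.1f} deg")   # ~8.0
```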
Motivation/Purpose: This work reports the development and validation of an algorithm to automatically detect and localize vertebrae in CT images of patients undergoing spine surgery. Slice-by-slice detections using state-of-the-art 2D convolutional neural network (CNN) architectures were combined to estimate vertebra centroid location in 3D, including a method that combined detections in sagittal and coronal slices. The solution facilitates applications in image-guided surgery and automatic computation of image analytics for surgical data science. Methods: CNN-based object detection models in 3D (volume) and 2D (slice) images were implemented and evaluated for the task of vertebrae detection. Slice-by-slice detections in 2D architectures were combined to estimate the 3D centroid location, including a model that simultaneously evaluated 2D detections in orthogonal directions (i.e., sagittal and coronal slices) to improve the robustness against spurious false detections – called Ortho-2D. Performance was evaluated in a data set consisting of 85 patients undergoing spine surgery at our institution, including images presenting spinal instrumentation/implants, spinal deformity, and anatomical abnormalities that are realistic exemplars of pathology in the patient population. Accuracy was quantified in terms of precision, recall, F1 score, and the 3D geometric error in vertebral centroid annotation compared to ground truth (expert manual) annotation. Results: Three CNN object detection models were able to successfully localize vertebrae, with the Ortho-2D model that combined 2D detections in orthogonal directions achieving the best performance: precision = 0.95, recall = 0.99, and F1 score = 0.97. Overall centroid localization accuracy was 3.4 mm (median) [interquartile range (IQR) = 2.7 mm], and ~97% of detections (154/159 lumbar cases) yielded acceptable centroid localization error <15 mm (considering average vertebral size ~25 mm). Conclusions: State-of-the-art CNN architectures were adapted for vertebral centroid annotation, yielding accurate and robust localization even in the presence of anatomical abnormalities, image artifacts, and dense instrumentation. The methods are employed as a basis for streamlined image guidance (automatic initialization of 3D-2D and 3D-3D registration methods in image-guided surgery) and as an automatic spine labeling tool to generate image analytics.
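The fusion of orthogonal 2D detections can be sketched simply: a sagittal-slice detection supplies (y, z) at its slice's x, a coronal-slice detection supplies (x, z) at its slice's y, and detections agreeing in the shared z coordinate are matched and averaged. The matching tolerance and averaging below are illustrative assumptions, not the published Ortho-2D implementation.

```python
import numpy as np

def fuse_ortho_detections(sagittal, coronal, z_tol=10.0):
    """sagittal: list of (x_slice, y, z); coronal: list of (x, y_slice, z)."""
    centroids = []
    for xs, ys, zs in sagittal:
        for xc, yc, zc in coronal:
            if abs(zs - zc) < z_tol:          # same vertebra seen in both views
                centroids.append([0.5 * (xs + xc),
                                  0.5 * (ys + yc),
                                  0.5 * (zs + zc)])
    return np.array(centroids)

sag = [(28.0, 110.0, 52.0), (29.0, 112.0, 83.0)]   # two vertebra detections
cor = [(27.0, 111.0, 50.0), (30.0, 111.0, 85.0)]
print(fuse_ortho_detections(sag, cor))             # two fused 3D centroids
```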
Purpose. We report the initial implementation of an algorithm that automatically plans screw trajectories for spinal pedicle screw placement procedures to improve the workflow, accuracy, and reproducibility of screw placement in freehand navigated and robot-assisted spinal pedicle screw surgery. In this work, we evaluate the sensitivity of the algorithm to the settings of key parameters in simulation studies.
Methods. Statistical shape models (SSMs) of the lumbar spine were constructed with segmentations of L1-L5 and bilateral screw trajectories of N=40 patients. Active-shape model (ASM) registration was devised to map the SSMs to the patient CT, initialized simply by alignment of (automatically annotated) single-point vertebral centroids. The atlas was augmented by definition of “ideal / reference” trajectories for each spinal pedicle, and the trajectories are deformably mapped to the patient CT. A parameter sensitivity analysis for the ASM method was performed on 3 parameters to determine robust operating points for ASM registration. The ASM method was evaluated by calculating the root-mean-square error between the registered SSM and the ground-truth segmentation for the L1 vertebra, and the trajectory planning method was evaluated by performing a leave-one-out analysis and determining the entry point, end point, and angular differences between the automatically planned trajectories and the neurosurgeon-defined reference trajectories.
Results. The parameter sensitivity analysis showed that the ASM registration algorithm was relatively insensitive to initial profile length (PLinitial) less than ~4 mm, above which runtime and registration error increased. Similarly stable performance was observed for a maximum number of principal components (PCmax) of at least 8. Registration error of ~2 mm was evident, with diminishing returns beyond ~2000 iterations (Niter). With these parameter settings, ASM registration of L1 achieved (2.0 ± 0.5) mm RMSE. Transpedicle trajectories for L1 agreed with the reference definition by (2.6 ± 1.3) mm at the entry point, by (3.4 ± 1.8) mm at the end point, and within (4.9° ± 2.8°) in angle.
Conclusions. Initial results suggest that the algorithm yields accurate definition of pedicle trajectories in unsegmented CT images of the spine. The studies identified stable operating points for key algorithm parameters and support ongoing development and translation to clinical studies in free-hand navigated and robot-assisted spine surgery, where fast, accurate trajectory definition is essential to workflow.
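A compact toy of the ASM loop is sketched below, mirroring the parameters studied above: a profile search (analogous to PLinitial) proposes a boundary position per model point, and the suggestion is projected onto the leading principal components (PCmax) with coefficients clamped to ±3 standard deviations. The circle-fitting example is purely illustrative, not the spine implementation.

```python
import numpy as np

def asm_iterate(shape, mean_shape, pcs, evals, find_boundary, n_iter=100):
    for _ in range(n_iter):
        # step 1: profile search proposes a target position per model point
        suggested = np.array([find_boundary(p) for p in shape])
        # step 2: constrain the suggestion to the statistical shape space
        b = pcs.T @ (suggested - mean_shape).ravel()
        b = np.clip(b, -3 * np.sqrt(evals), 3 * np.sqrt(evals))
        shape = mean_shape + (pcs @ b).reshape(mean_shape.shape)
    return shape

# toy usage: mean shape is a unit circle, one radial mode scales it, and the
# "image boundary" pulls every point to radius 1.2
t = np.linspace(0, 2 * np.pi, 16, endpoint=False)
mean_shape = np.stack([np.cos(t), np.sin(t)], axis=1)
pcs = mean_shape.ravel()[:, None] / np.linalg.norm(mean_shape)  # radial mode
evals = np.array([0.1])
to_boundary = lambda p: 1.2 * p / np.linalg.norm(p)
fitted = asm_iterate(mean_shape.copy(), mean_shape, pcs, evals, to_boundary)
print(np.linalg.norm(fitted[0]))   # ~1.2 (within the +-3 sigma clamp)
```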
Conventional optical tracking systems use near-infrared (NIR) cameras and passively or actively NIR-illuminated markers to localize instrumentation and the patient in the physical space of the operating room (OR). This technology is widely used within the neurosurgical theatre and is a staple in the standard of care for craniotomy planning. To accomplish this, planning is largely conducted at the time of the procedure with the patient's head fixed in its OR presentation orientation. In the work presented herein, we propose a framework to achieve this in the OR that is free of conventional tracking technology, i.e., a trackerless approach. Briefly, we are investigating a collaborative extension of 3D Slicer that combines surgical planning and craniotomy designation in a novel manner. While taking advantage of the well-developed 3D Slicer platform, we implement advanced features to aid the neurosurgeon in planning the location of the anticipated craniotomy relative to the preoperatively imaged tumor in a physical-to-virtual setup, and then subsequently aid the actual physical procedure by correlating that physical-to-virtual plan with a novel intraoperative MR-to-physical registered field-of-view display. These steps are performed such that the craniotomy can be designated without use of conventional optical tracking technology. To test this novel approach, an experienced neurosurgeon performed experiments on four different mock surgical cases using our module as well as the conventional procedure for comparison. The results suggest that our planning system provides a simple, cost-efficient, and reliable solution for surgical planning and delivery without the use of conventional tracking technologies. We hypothesize that the combination of this early-stage craniotomy planning and delivery approach with our past developments in cortical surface registration and deformation tracking using stereo-pair data from the surgical microscope may provide a fundamentally new realization of an integrated trackerless surgical guidance platform.
The fidelity of image-guided neurosurgical procedures is often compromised due to the mechanical deformations that occur during surgery. In recent work, a framework was developed to predict the extent of this brain shift in brain-tumor resection procedures. The approach uses preoperatively determined surgical variables to predict brain shift and then subsequently corrects the patient’s preoperative image volume to more closely match the intraoperative state of the patient’s brain. However, a clinical workflow difficulty with the execution of this framework is the preoperative acquisition of surgical variables. To simplify and expedite this process, an Android, Java-based application was developed for tablets to provide neurosurgeons with the ability to manipulate three-dimensional models of the patient’s neuroanatomy and determine an expected head orientation, craniotomy size and location, and trajectory to be taken into the tumor. These variables can then be exported for use as inputs to the biomechanical model associated with the correction framework. A multisurgeon, multicase mock trial was conducted to compare the accuracy of the virtual plan to that of a mock physical surgery. It was concluded that the Android application was an accurate, efficient, and timely method for planning surgical variables.
Brain shift describes the deformation that the brain undergoes from mechanical and physiological effects, typically during a neurosurgical or neurointerventional procedure. With respect to image guidance techniques, brain shift has been shown to compromise the fidelity of these approaches. In recent work, a computational pipeline has been developed to predict brain shift based on preoperatively determined surgical variables (such as head orientation) and subsequently correct preoperative images to more closely match the intraoperative state of the brain. However, a clinical workflow difficulty in the execution of this pipeline has been the acquisition of the surgical variables by the neurosurgeon prior to surgery. To simplify and expedite this process, an Android, Java-based application designed for tablets was developed to provide the neurosurgeon with the ability to orient 3D computer graphic models of the patient's head, determine the expected location and size of the craniotomy, and specify the trajectory into the tumor. These variables are exported for use as inputs to the biomechanical models of the preoperative computing phase of the brain shift correction pipeline. The accuracy of the application's exported data was determined by comparing it to data acquired from the physical execution of the surgeon's plan on a phantom head. Results indicated good overlap of craniotomy predictions, craniotomy centroid locations, and estimates of the patient's head orientation with respect to gravity. However, improvements in the app interface and mock surgical setup are needed to minimize error.