Purpose: To provide a simulation framework for routine neuroimaging test data that allows for "stress testing" of deep segmentation networks against acquisition shifts commonly occurring in clinical practice for T2-weighted (T2w) fluid-attenuated inversion recovery (FLAIR) magnetic resonance imaging protocols. Approach: The approach simulates "acquisition shift derivatives" of MR images based on MR signal equations. The experiments comprise a validation of the simulated images against real MR scans and example stress tests of state-of-the-art multiple sclerosis (MS) lesion segmentation networks, exploring a generic model function that describes the F1 score as a function of the contrast-affecting sequence parameters echo time (TE) and inversion time (TI). Results: The differences between real and simulated images reach up to 19% in gray and white matter for extreme parameter settings. For the segmentation networks under test, the dependence of the F1 score on TE and TI is well described by quadratic model functions (R2 > 0.9). The coefficients of the model functions indicate that changes in TE influence model performance more strongly than changes in TI. Conclusions: We show that these deviations lie in the range of values that may be caused by erroneous or individual differences in relaxation times as described in the literature. The coefficients of the F1 model function allow for a quantitative comparison of the influences of TE and TI. Limitations arise mainly from tissues with a low baseline signal (such as cerebrospinal fluid) and from protocols containing contrast-affecting measures that cannot be modeled due to missing information in the DICOM header.
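The simulation above rests on the FLAIR signal equation. A minimal sketch, assuming the standard long-TR inversion-recovery FLAIR form S ∝ PD * |1 - 2*exp(-TI/T1) + exp(-TR/T1)| * exp(-TE/T2) and illustrative 3 T relaxation times (both are textbook-style assumptions, not values taken from the paper):

```python
import numpy as np

def flair_signal(pd, t1, t2, te, ti, tr):
    """Inversion-recovery FLAIR signal magnitude (standard long-TR form)."""
    inversion = np.abs(1.0 - 2.0 * np.exp(-ti / t1) + np.exp(-tr / t1))
    return pd * inversion * np.exp(-te / t2)

# Illustrative relaxation times in ms at 3 T (assumed, not from the paper):
# tissue -> (relative proton density, T1, T2)
tissues = {
    "WM":  (0.65,  830.0,   80.0),
    "GM":  (0.80, 1300.0,  110.0),
    "CSF": (1.00, 4000.0, 2000.0),
}

te, ti, tr = 90.0, 2500.0, 9000.0  # a typical FLAIR protocol, ms
for name, (pd, t1, t2) in tissues.items():
    # CSF comes out strongly suppressed relative to WM/GM at this TI
    print(name, round(flair_signal(pd, t1, t2, te, ti, tr), 4))
```

Shifting TE or TI in this equation yields the "acquisition shift derivatives" of an image: each tissue's intensity is rescaled by the ratio of the signal at the shifted parameters to the signal at the acquired ones.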
Recently published guidelines recommend validating models for medical image processing against variations of image acquisition parameters. This work presents a novel concept for systematic tests that investigate a model's performance under MRI acquisition shifts in brain scans of multiple sclerosis (MS) patients. Contrast changes of T2-weighted FLAIR images related to changes of two scan parameters (TE, TI) were simulated using artificial data. These images were created from 114 brain MRI scans, normal-tissue segmentation masks, and expert MS lesion masks (NAMIC, OpenMS, MSSEG2, ISBI15, LC08). Contrast changes were simulated using the FLAIR signal equation. The experiments evaluate the F1 scores of models trained on images based on different uniform and non-uniform acquisition parameters when tested on typical acquisition parameter shifts. The acquisition parameter variations of the experiments are based on guidelines for quality assurance and on the analysis of protocols and images from different institutions and open datasets. The dependence of the F1 score on both parameters could be approximated by linear functions with R2 scores of 0.94 to 0.98. Thus, linear functions can be used to model the dependence of the F1 score on the influencing factors, thereby allowing the derivation of a "safe" range of image acquisition parameters that meets a desired performance metric. The segmentation model was more sensitive to changes in TE than to changes in TI. The maximum lesion F1 loss when applying the models to out-of-distribution data ranged from 0.04 to 0.36 and was significant in all experiments (p < 0.01), even when the model was trained on data representing scans of different contrast. This underlines the need to test segmentation models against acquisition shifts.
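Fitting a linear model F1 ≈ a*TE + b*TI + c and reading a "safe" parameter range off it can be sketched with ordinary least squares. The F1 values below are synthetic (an assumption for illustration, not the paper's measurements):

```python
import numpy as np

# Synthetic F1 scores over a TE/TI grid (illustrative toy data, not the paper's)
te = np.array([80.0, 90.0, 100.0, 110.0, 120.0])   # ms
ti = np.array([2200.0, 2400.0, 2600.0, 2800.0])    # ms
TE, TI = np.meshgrid(te, ti)
f1 = 0.80 - 0.002 * (TE - 90.0) - 2e-5 * (TI - 2500.0)

# Ordinary least squares fit of F1 ~ a*TE + b*TI + c
X = np.column_stack([TE.ravel(), TI.ravel(), np.ones(TE.size)])
coef, *_ = np.linalg.lstsq(X, f1.ravel(), rcond=None)

# Coefficient of determination R^2
pred = X @ coef
ss_res = np.sum((f1.ravel() - pred) ** 2)
ss_tot = np.sum((f1.ravel() - f1.ravel().mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(coef, r2)
```

Given such a fit, the "safe" TE range at a fixed TI for a minimum acceptable F1 is the solution of a*TE + b*TI + c >= F1_min, i.e. a half-line in TE whose endpoint shifts with TI.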
Over the past decades, digital pathology has emerged as an alternative way of examining tissue at the subcellular level. It enables multiplexed analysis of different cell types at the micron scale. Information about cell types can be extracted by staining sections of a tissue block with different markers. However, robust fusion of structural and functional information from different stains is necessary for reproducible multiplexed analysis. Such a fusion can be obtained via image co-registration, by establishing spatial correspondences between tissue sections. These spatial correspondences can then be used to transfer various statistics about cell types between sections. However, the multi-modal nature of the images and the sparse distribution of the cell types of interest pose several challenges for the registration of differently stained tissue sections. In this work, we propose a co-registration framework that efficiently addresses these challenges. We present a hierarchical patch-based registration of intensity-normalized tissue sections. Preliminary experiments demonstrate the potential of the proposed technique for the fusion of multi-modal information from differently stained digital histopathology sections.
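One building block of such patch-based registration can be sketched as an exhaustive normalized-cross-correlation (NCC) search for an intensity-normalized patch. This is a deliberately simplified, single-level, translation-only sketch (the function name and search strategy are illustrative, not the paper's hierarchical multi-modal pipeline):

```python
import numpy as np

def znorm(p):
    """Zero-mean, unit-variance intensity normalization of a patch."""
    return (p - p.mean()) / (p.std() + 1e-8)

def match_patch(patch, section, top, left, radius):
    """Search a (2*radius+1)^2 window of integer shifts around the initial
    guess (top, left) in `section`, maximizing NCC with `patch`."""
    ph, pw = patch.shape
    ref = znorm(patch)
    best, score = (0, 0), -np.inf
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = section[top + dy: top + dy + ph, left + dx: left + dx + pw]
            s = float((ref * znorm(cand)).mean())  # NCC in [-1, 1]
            if s > score:
                best, score = (dy, dx), s
    return best, score

# Usage: recover a known offset between a patch and its source section
rng = np.random.default_rng(1)
section = rng.standard_normal((100, 100))
patch = section[30:46, 40:56].copy()
offset, score = match_patch(patch, section, top=32, left=41, radius=5)
print(offset)  # → (-2, -1): the true patch origin is 2 up, 1 left of the guess
```

A hierarchical variant would run this search on downsampled sections first and use the coarse offsets to initialize finer levels, which is what keeps the search window small at full resolution.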
Visual tracking of surgical instruments is an essential part of eye surgery: it plays an important role for the surgeon and is a key component of robotic assistance during the operation. The difficulty of detecting and tracking medical instruments in in-vivo images stems from their deformable shape, changes in brightness, and the presence of instrument shadows. This paper introduces a new approach to detecting the tip of a surgical tool and its width, regardless of the shape of its head and the presence of shadows or vessels. The approach relies on integrating structural information about the strong edges from the RGB color model with tool-location information from the L*a*b color model. The probabilistic Hough transform is applied to obtain the strongest straight lines in the RGB images and, based on information from the L* and a* channels, one of these candidate lines is selected as the edge of the tool shaft. From that line, the tool slope, the tool centerline, and the tool tip can be detected. Tracking is performed by keeping track of the last detected tool tip and tool slope and filtering the Hough lines within a box around the last detected tool tip based on the slope differences. Experimental results demonstrate high accuracy in terms of detecting the tool tip position, the tool joint position, and the tool centerline. The approach also meets real-time requirements.
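The line-candidate step above can be illustrated with plain Hough voting. The sketch below implements the standard (deterministic) accumulator rather than the probabilistic variant the paper uses, and the synthetic "tool edge" is an assumption for illustration:

```python
import numpy as np

def hough_peak(edge_points, shape, n_theta=180):
    """Accumulate (rho, theta) votes for edge pixels and return the strongest
    line in normal form x*cos(theta) + y*sin(theta) = rho. The probabilistic
    Hough transform works the same way but votes with a random point subset."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))              # max |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int32)
    for y, x in edge_points:
        rhos = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1           # one vote per theta bin
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]

# Synthetic vertical "tool shaft edge" at x = 20 in a 64x64 image
pts = [(y, 20) for y in range(10, 60)]
rho, theta = hough_peak(pts, (64, 64))
print(rho, theta)  # → 20 0.0, i.e. the line x = 20
```

In the paper's pipeline, several such peaks would be kept as shaft-edge candidates, and the L* and a* channel statistics along each candidate would decide which one is the tool edge.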
KEYWORDS: Magnetic resonance imaging, Liver, Data acquisition, Signal detection, Image registration, Radiotherapy, Image restoration, 3D image processing, Ultrasonography, Radon
Respiratory motion is a challenging factor for image-guided procedures in the abdominal region. Target localization, an important issue in applications such as radiation therapy, becomes difficult due to this motion. It is therefore necessary to detect the respiratory signal to achieve higher accuracy in planning and treatment. We propose a novel image-based breathing-gating method that recovers the breathing signal directly from the image data. For the gating we use Laplacian eigenmaps, a manifold learning technique, to determine the low-dimensional manifold embedded in the high-dimensional space. Since Laplacian eigenmaps assign each 2D MR slice a coordinate in a low-dimensional space while respecting the neighborhood relationships, they are well suited for analyzing respiratory motion. We perform the manifold learning on MR slices acquired at a fixed location. We then use the resulting respiratory signal to derive a similarity criterion for use in applications such as 4D MRI reconstruction. We perform experiments on liver data using one and three dimensions for the manifold and compare the results. The results for the first case show that a single dimension is not enough to represent the complex respiratory motion of the liver; using three dimensions, we successfully recover the changes due to respiratory motion. The proposed method has the potential to reduce the processing time for 4D reconstruction significantly by defining a search window for a subsequent registration approach. It is fully automatic and does not require any prior information or training data.
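The core of this method, embedding each slice via Laplacian eigenmaps, can be sketched directly in NumPy. The kNN graph with binary weights and the unnormalized Laplacian are common defaults assumed here (the paper does not fix the graph construction in this abstract), and the synthetic "slices" stand in for real 2D MR data:

```python
import numpy as np

def laplacian_eigenmaps(X, n_components=1, k=10):
    """Minimal Laplacian eigenmaps: symmetrized kNN graph with binary weights,
    unnormalized Laplacian L = D - W, embedding from the eigenvectors of the
    smallest non-trivial eigenvalues."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:   # k nearest neighbors, skip self
            W[i, j] = W[j, i] = 1.0
    L = np.diag(W.sum(1)) - W
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:1 + n_components]          # drop the constant eigenvector

# Synthetic "slices": a 1D breathing phase rendered into a 256-dim pixel space
t = np.linspace(0.0, 4.0 * np.pi, 120)
phase = np.sin(t)
rng = np.random.default_rng(0)
basis = rng.standard_normal((1, 256))
X = phase[:, None] * basis + 0.01 * rng.standard_normal((120, 256))

y = laplacian_eigenmaps(X, n_components=1)[:, 0]
# The 1D embedding should track the breathing phase up to sign and scale
print(abs(np.corrcoef(phase, y)[0, 1]))
```

Slices with similar embedding coordinates can then be grouped into the same respiratory state, which is exactly the similarity criterion used to restrict the search window of the subsequent 4D reconstruction step.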