Managing patients with hydrocephalus and cerebrospinal fluid disorders requires repeated head imaging. In adults, this is typically done with computed tomography (CT) or, less commonly, magnetic resonance imaging (MRI). However, CT poses cumulative radiation risks and MRI is costly. Transcranial ultrasound is a radiation-free, relatively inexpensive, and optionally point-of-care alternative. The initial use of this modality has involved measuring gross brain ventricle size by manual annotation. In this work, we explore the use of deep learning to automate the segmentation of the brain's right ventricle from transcranial ultrasound images. We found that the vanilla U-Net architecture had difficulty accurately identifying the right ventricle, which can be attributed to the limited resolution, artifacts, and noise inherent in ultrasound images. We further explore the use of coordinate convolution to augment the U-Net model, which allows us to take advantage of the established acquisition protocol. This enhancement yielded a statistically significant improvement in performance, as measured by the Dice similarity coefficient. This study presents, for the first time, the potential of deep learning to automate hydrocephalus assessment from ultrasound imaging.
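For readers unfamiliar with coordinate convolution, the sketch below shows the core idea in PyTorch: appending normalized coordinate channels before a standard convolution so the network can learn position-dependent features, which is useful when an acquisition protocol fixes the anatomy's rough location in the image. The exact layer placement and normalization are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Convolution that appends normalized (y, x) coordinate channels
    to its input, letting the network exploit the roughly fixed
    position of the ventricle under a standardized probe placement."""

    def __init__(self, in_channels, out_channels, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        ys = torch.linspace(-1.0, 1.0, h, device=x.device)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device)
        yy, xx = torch.meshgrid(ys, xs, indexing="ij")
        coords = torch.stack([yy, xx]).unsqueeze(0).expand(b, -1, -1, -1)
        return self.conv(torch.cat([x, coords], dim=1))

# Drop-in replacement for a U-Net block's first convolution, e.g.:
# layer = CoordConv2d(1, 64, kernel_size=3, padding=1)
```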
Magnetic resonance imaging with tagging (tMRI) has long been used to quantify tissue motion and strain during deformation. However, a phenomenon known as tag fading, a gradual decrease in tag visibility over time, often complicates post-processing. The first contribution of this study is to model tag fading by considering the interplay between T1 relaxation and the repeated application of radio frequency (RF) pulses during serial imaging sequences, a factor that has been overlooked in prior research on tMRI post-processing. Further, we have observed an emerging trend of using raw tagged MRI within deep learning-based (DL) registration frameworks for motion estimation. In this work, we evaluate and analyze the impact of commonly used image similarity objectives when training DL registration models on raw tMRI. We then compare these with the harmonic phase (HARP) based approach, a traditional method claimed to be robust to tag fading. Our findings, derived from both simulated images and an actual phantom scan, reveal the limitations of various similarity losses on raw tMRI and emphasize caution in registration tasks where image intensity changes over time.
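A back-of-the-envelope version of the fading mechanism described above can be written in a few lines of Python. The multiplicative form and the parameter values below are illustrative assumptions, not the study's full model: the spatially modulated (tagged) part of the longitudinal magnetization shrinks by exp(-TR/T1) each frame (relaxation toward equilibrium only restores the untagged, uniform component) and by cos(alpha) at each imaging RF pulse.

```python
import numpy as np

def tag_amplitude(n_frames, tr, t1, flip_deg):
    """Relative tag modulation amplitude over a serial imaging
    sequence: (cos(alpha) * exp(-TR/T1)) ** n after n frames."""
    e1 = np.exp(-tr / t1)
    alpha = np.deg2rad(flip_deg)
    return (np.cos(alpha) * e1) ** np.arange(n_frames)

# Illustrative values (assumptions): myocardial T1 ~ 1.2 s at 3 T,
# TR = 30 ms, 10-degree imaging flips over 25 cine frames.
fading = tag_amplitude(n_frames=25, tr=0.030, t1=1.2, flip_deg=10.0)
```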
Magnetic resonance images are often acquired as several 2D slices and stacked into a 3D volume, yielding a lower through-plane resolution than in-plane resolution. Many super-resolution (SR) methods have been proposed to address this, including those that use the inherent high-resolution (HR) in-plane signal as HR training data for deep neural networks. Techniques with this approach are generally both self-supervised and internally trained, so no external training data are required. However, in such a training paradigm, limited data are available for training, and the frequency content of the in-plane data may be insufficient to capture the true HR image. In particular, the recovery of high-frequency information is usually lacking. In this work, we demonstrate this shortcoming with Fourier analysis; we then propose and compare several approaches to address the recovery of high-frequency information. We test a particular internally trained self-supervised method named SMORE on ten subjects at three common clinical resolutions with three types of modification: frequency-type losses (Fourier and wavelet), feature-type losses, and low-resolution re-gridding strategies for estimating the residual. We find that one particular combination balances signal recovery in both the spatial and frequency domains, qualitatively and quantitatively, yet none of the modifications, alone or in tandem, yields a vastly superior result. We postulate that there may be limits on internally trained techniques that such modifications cannot address, or limits on modeling SR as finding a map from low resolution to HR, or both.
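As a concrete example of a frequency-type loss, a minimal PyTorch sketch follows; the exact weighting and the wavelet variant used in the study may differ.

```python
import torch

def fourier_l1_loss(pred, target):
    """L1 distance between 2D FFT magnitudes: penalizes missing
    high-frequency content directly in the Fourier domain."""
    return (torch.fft.rfft2(pred).abs()
            - torch.fft.rfft2(target).abs()).abs().mean()

# Typically combined with a spatial-domain term, e.g.:
# loss = torch.nn.functional.l1_loss(pred, target) \
#        + 0.1 * fourier_l1_loss(pred, target)
```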
Real-time fringe projection profilometry (FPP) is developed as a 3D vision system to plan and guide autonomous robotic intestinal suturing. Conventional FPP requires sinusoidal patterns with multiple frequencies and phase shifts to generate tissue point clouds, resulting in a slow frame rate. Therefore, although FPP can reconstruct dense and accurate tissue point clouds, it is often too slow for dynamic measurements. To address this problem, we propose a deep learning-based single-shot FPP algorithm that reconstructs tissue point clouds from a single sinusoidal pattern using a Swin-Unet. With this approach, we achieved an FPP imaging frame rate of 50 Hz while maintaining high point cloud measurement accuracy. The system was trained and evaluated on both synthesized and experimental datasets, achieving an overall relative error of 1% to 3%.
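For context, the multi-pattern baseline that the single-shot network replaces is the standard N-step phase-shifting calculation, sketched below in NumPy; the sign convention varies by setup.

```python
import numpy as np

def wrapped_phase(images):
    """Standard N-step (N >= 3) phase-shifting retrieval: given fringe
    images I_n = A + B*cos(phi + 2*pi*n/N), recover the wrapped phase
    map that is later unwrapped and triangulated into a point cloud."""
    n = len(images)
    deltas = 2.0 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    return -np.arctan2(num, den)
```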
KEYWORDS: Signal attenuation, Optical coherence tomography, Backscatter, Tissues, Signal intensity, Monte Carlo methods, Biological samples, Scattering, Optical properties, Attenuation
Significance: Extracting optical properties of tissue [e.g., the attenuation coefficient (μ) and the backscattering fraction] from optical coherence tomography (OCT) images is a valuable tool for parametric imaging and related diagnostic applications. Previous attenuation estimation models depend on the assumption that the backscattering fraction (R) is uniform within layers or whole samples, which does not accurately represent real-world conditions.
Aim: Our aim is to develop a robust and accurate model that calculates depth-wise values of the attenuation and backscattering fractions simultaneously from OCT signals. Furthermore, we aim to develop an attenuation compensation model for OCT images that utilizes the obtained optical properties to improve the visual representation of tissues.
Approach: Using the stationary iteration method under suitable constraint conditions, we derived approximate solutions for μ and R under a single-scattering model. During the iteration, the estimated value of μ is corrected for large variations of R, whereas small variations are automatically ignored. Based on the calculated structure information, the attenuation-compensated OCT intensity was deduced and compared with the original OCT profiles.
Results: Preliminary validation was performed with OCT A-line simulations and Monte Carlo modeling, and subsequent experiments were conducted on multi-layer silicone-dye-TiO2 phantoms and ex vivo cow eyes. Our method achieved robust and precise estimation of μ and R for both simulated and experimental data. Moreover, the corresponding attenuation-compensated OCT images provided improved resolution over the entire imaging range.
Conclusions: Our proposed method corrects the estimation bias induced by variations of R and provides accurate depth-resolved measurements of both μ and R simultaneously. The method requires no prior knowledge of tissue morphology and better represents real-life tissues. Thus, it has the potential to aid OCT-based diagnosis of disease in complex, multi-layer biological tissue.
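As a reference point for the Approach above, the classic depth-resolved attenuation estimator under a uniform backscattering fraction can be written as follows. This is the baseline that the iterative scheme refines by jointly estimating depth-wise R; it is a sketch, not the paper's code.

```python
import numpy as np

def depth_resolved_mu(a_line, dz):
    """Depth-resolved attenuation under the single-scattering model
    with uniform backscattering fraction (Vermeer et al. style):
    mu[i] ~ I[i] / (2 * dz * sum_{j > i} I[j])."""
    tail = np.cumsum(a_line[::-1])[::-1] - a_line  # sum of I[j], j > i
    return a_line / (2.0 * dz * np.maximum(tail, 1e-12))
```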
Fringe projection profilometry (FPP) is being developed as a 3D vision system to assist robotic surgery and autonomous suturing. Conventionally, fluorescent markers are placed on the target tissue to indicate suturing landmarks, which not only increases system complexity but also raises safety concerns. To address these problems, we propose a numerical landmark detection algorithm based on deep learning. A landmark heatmap is regressed by an adapted U-Net from the four-channel data generated by the FPP. A Markov random field leveraging a structural prior is developed to search for the correct set of landmarks in the heatmap. The accuracy of the proposed method is verified through ex vivo porcine intestine landmark detection experiments.
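To make the decoding step concrete, here is a toy sketch of structured landmark selection: take the top-k candidate peaks from each landmark's heatmap, then choose the joint configuration maximizing a unary heatmap score minus a pairwise penalty for deviating from expected inter-landmark distances. The energy terms and the brute-force search are illustrative assumptions, not the paper's exact MRF.

```python
import itertools
import numpy as np

def pick_landmarks(heatmaps, expected_dist, k=3, lam=0.1):
    """heatmaps: list of 2D arrays, one per landmark.
    expected_dist[i][j]: structural prior on landmark spacing (pixels)."""
    cands = []
    for hm in heatmaps:
        idx = np.argsort(hm.ravel())[-k:]           # top-k peaks
        pts = np.column_stack(np.unravel_index(idx, hm.shape))
        cands.append([(p, hm[tuple(p)]) for p in pts])
    best, best_score = None, -np.inf
    for combo in itertools.product(*cands):          # joint configurations
        pts = np.array([p for p, _ in combo], dtype=float)
        unary = sum(s for _, s in combo)
        pair = sum(
            abs(np.linalg.norm(pts[i] - pts[j]) - expected_dist[i][j])
            for i in range(len(pts)) for j in range(i + 1, len(pts)))
        if unary - lam * pair > best_score:
            best, best_score = pts, unary - lam * pair
    return best
```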
Optical coherence tomography (OCT) with a robust depth-resolved attenuation compensation method for a wide range of imaging applications is proposed and demonstrated. We derive a model that deduces the attenuation coefficients and the signal compensation values from depth-dependent backscattering profiles, mitigating under- and overestimation in tissue imaging. We validated the method using numerical simulations and phantoms, achieving stable and robust compensation over the entire depth of the samples. We also compare our proposed model against other attenuation characterization models.
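A minimal sketch of the compensation step, assuming the single-scattering two-way decay model; the paper's model additionally folds in the depth-dependent backscattering profile.

```python
import numpy as np

def attenuation_compensate(a_line, mu, dz):
    """Undo the two-way exponential decay exp(-2 * integral of mu dz)
    along one A-line, given a depth-wise attenuation estimate mu."""
    two_way = 2.0 * dz * np.cumsum(mu)
    return a_line * np.exp(two_way)
```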
We developed a fully automated abdominal tissue classification algorithm for swept-source OCT imaging using a hybrid multilayer perceptron (MLP) and convolutional neural network (CNN) classifier. For the MLP, we incorporated an extensive set of features and chose a subset to improve network efficiency. For the CNN, we designed a three-channel model combining intensity information with depth-dependent optical properties of tissues. A rule-based decision fusion approach was applied to select the more convincing prediction between these two branches. Our model was trained on ex vivo porcine samples (~200 B-mode images, ~200,000 A-line signals) and evaluated on a hold-out dataset. Compared to other algorithms, our classifier achieves the highest accuracy (0.9114) and precision (0.9106). These promising results show the feasibility of real-time abdominal tissue sensing during OCT-guided robotic-assisted laparoscopic surgery.
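A rule-based fusion between two classifier branches could look something like the toy Python below; the actual rules and thresholds used in the study are not specified here and are assumptions.

```python
def fuse_predictions(mlp_probs, cnn_probs, threshold=0.8):
    """Toy rule-based decision fusion: accept whichever branch is
    confident enough, preferring the more confident one; otherwise
    fall back to averaging the class probabilities."""
    mlp_conf, cnn_conf = max(mlp_probs), max(cnn_probs)
    if max(mlp_conf, cnn_conf) >= threshold:
        winner = mlp_probs if mlp_conf >= cnn_conf else cnn_probs
        return winner.index(max(winner))
    avg = [(m + c) / 2.0 for m, c in zip(mlp_probs, cnn_probs)]
    return avg.index(max(avg))
```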
KEYWORDS: Signal attenuation, Optical coherence tomography, Tissues, Calibration, Monte Carlo methods, Image segmentation, Visualization, Speckle, Signal to noise ratio, Point spread functions
Optical coherence tomography (OCT) with a robust depth-resolved attenuation compensation method for a wide range of imaging applications is proposed and demonstrated. The proposed OCT attenuation compensation algorithm introduces an optimized axial point spread function (PSF) to modify existing depth-resolved methods, mitigating under- and overestimation in biological tissues and providing uniform resolution over the entire imaging range. A preliminary study using A-mode numerical simulation achieved stable and robust compensation over the entire depth of the samples. Experimental results on phantoms and corneal imaging agree with the simulations, as evaluated using signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) metrics.
Traditional methods for quantifying micro-fluid velocity using optical coherence tomography (OCT) can be divided into phase-based and amplitude-based methods. Phase-based methods are less sensitive to transverse velocity components and suffer from phase wrapping and phase instability, while amplitude-based methods focus on segmenting flow areas rather than quantifying velocities. To address these problems, we propose a method we term optical flow optical coherence tomography (OFOCT), which accurately computes spatially resolved velocity fields. An iterative solution is implemented on a graphics processing unit (GPU) for real-time data processing. The accuracy of the proposed method is verified through phantom flow experiments.
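As an illustration of iterative optical flow estimation, the classic Horn-Schunck scheme is sketched below in NumPy/SciPy; the OFOCT formulation and its GPU implementation differ.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=100):
    """Classic Horn-Schunck optical flow between two frames.
    Returns dense velocity fields (u, v)."""
    im1, im2 = im1.astype(float), im2.astype(float)
    kx = 0.25 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    ky = 0.25 * np.array([[-1.0, -1.0], [1.0, 1.0]])
    kt = 0.25 * np.ones((2, 2))
    ix = convolve(im1, kx) + convolve(im2, kx)   # spatial gradients
    iy = convolve(im1, ky) + convolve(im2, ky)
    it = convolve(im2, kt) - convolve(im1, kt)   # temporal gradient
    avg = np.array([[1.0, 2.0, 1.0],
                    [2.0, 0.0, 2.0],
                    [1.0, 2.0, 1.0]]) / 12.0
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):  # Jacobi-style smoothness iterations
        u_bar, v_bar = convolve(u, avg), convolve(v, avg)
        d = (ix * u_bar + iy * v_bar + it) / (alpha**2 + ix**2 + iy**2)
        u, v = u_bar - ix * d, v_bar - iy * d
    return u, v
```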
Partial cornea transplant surgery uses a procedure known as "Big Bubble" that requires precise needle detection and tracking. To accomplish this, we used traditional image segmentation methods and trained a convolutional neural network (CNN) model to track the needle during cornea transplant surgery guided by OCT B-scan imaging. The dataset was generated from our laboratory OCT system and classified into three categories. The network architecture is based on U-Net, modified to avoid overfitting. Based on these results, we are able to track the needle and measure the distance between the needle tip and the bottom corneal layer.
Significance: Selective retina therapy (SRT) selectively targets the retinal pigment epithelium (RPE) and reduces negative side effects by avoiding thermal damage to the adjacent photoreceptors, neural retina, and choroid. However, selecting the proper laser energy for SRT is challenging because lesions in the RPE are ophthalmoscopically invisible and melanin concentration differs among patients, and even among regions within a single eye.
Aim: We propose and demonstrate SRT monitoring based on speckle variance optical coherence tomography (svOCT) for dosimetry control.
Approach: M-scans (time-resolved sequences of A-scans) of ex vivo bovine retina irradiated by 1.7-μs duration laser pulses were obtained with a swept-source OCT system. SvOCT images were calculated as the interframe intensity variance of the sequence (a minimal computation is sketched after this abstract). Spatial and temporal temperature distributions in the retina were numerically calculated in a 2-D retinal model using COMSOL Multiphysics. Microscopic images of treated spots were obtained before and after removing the upper neural retinal layer to assess damage in both the RPE and neural layers.
Results: SvOCT images show abrupt speckle variance changes when the retina is irradiated by laser pulses. The svOCT intensities averaged in RPE and photoreceptor layers along the axial direction show sharp peaks corresponding to each laser pulse, and the peak values were proportional to the laser pulse energy. The calculated temperatures in the neural retina layer and RPE were linearly fitted to the svOCT peak values, and the temperature of each lesion was estimated based on the fitting. The estimated temperatures matched well with previously reported results.
Conclusion: We found a reliable correlation between the svOCT peak values and the degree of retinal lesion formation, which can be used for selecting proper laser energy during SRT.
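A minimal NumPy sketch of the interframe variance computation referenced in the Approach above; the sliding window length is an assumption.

```python
import numpy as np

def speckle_variance(m_scan, window=8):
    """Speckle variance over an M-scan of shape (frames, depth):
    per-depth intensity variance within a sliding window of frames."""
    n_frames = m_scan.shape[0]
    return np.stack([m_scan[i:i + window].var(axis=0)
                     for i in range(n_frames - window + 1)])
```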