Photoacoustic ophthalmoscopy in rodents is gaining research momentum, owing to advances in transducer design and technology. Needle transducers have emerged as a particularly valuable tool for photoacoustic retinal imaging and have proven sensitive enough to resolve retinal vasculature in vivo. Nevertheless, placement of the eye and screening of the retina remain challenging: needle transducers must remain static during image acquisition, while the optical field of view is limited. This restriction requires moving the mouse to rotate the eye and thereby the imaged area on the retina. The needle transducer must be temporarily detached during this process to avoid damage to the eye or the transducer. Re-attachment requires additional ultrasound gel and does not guarantee ideal placement for optimal imaging performance. Additive manufacturing can help tackle these challenges and enables the design of novel rotational rodent holders for imaging. Hence, we present a fully 3D-printable rotatable tip/tilt mouse platform with the eye at the center of rotation, combined with a printable needle transducer holder. This system guarantees optimal placement of the needle transducer during imaging and rotation of the mouse eye, eliminating the need to detach the transducer and enabling effortless screening of the retina. The capabilities for retinal screening are demonstrated with a multimodal optical coherence photoacoustic ophthalmoscopy system employing two separate wavelengths: 1310 nm for optical coherence tomography and 570 nm for photoacoustic ophthalmoscopy.
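The geometric rationale of the holder design can be illustrated in a few lines of code. The sketch below is a minimal illustration only; the pivot at the origin, the 10°/5° tip/tilt angles, and the ~1.7 mm pupil-to-retina distance are our own assumptions, not values from the abstract. It shows that a point placed at the center of rotation is invariant under tip/tilt, so an eye positioned there keeps its contact with the statically mounted transducer, while off-pivot retinal locations sweep through the fixed optical axis.

```python
import numpy as np

def rotation_matrix(tip_deg: float, tilt_deg: float) -> np.ndarray:
    """Tip (rotation about x) followed by tilt (rotation about y), angles in degrees."""
    a, b = np.radians([tip_deg, tilt_deg])
    rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a), np.cos(a)]])
    ry = np.array([[np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    return ry @ rx

def rotate_about_pivot(p: np.ndarray, pivot: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Rotate point p about an arbitrary pivot."""
    return pivot + r @ (p - pivot)

pivot = np.zeros(3)                   # platform's center of rotation
eye = pivot.copy()                    # eye placed exactly at the pivot
retina = np.array([0.0, 0.0, -1.7])   # hypothetical retinal point ~1.7 mm behind the pupil

r = rotation_matrix(tip_deg=10.0, tilt_deg=5.0)
print(rotate_about_pivot(eye, pivot, r))     # [0. 0. 0.] -- the eye does not translate
print(rotate_about_pivot(retina, pivot, r))  # the retinal point moves, changing the imaged area
```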
Polarization-sensitive OCT (PS-OCT) derives image contrast from tissue birefringence. Here, we introduce triple-input polarization-sensitive optical coherence tomography (TRIPS-OCT), a new polarimetric modulation and reconstruction strategy for depth-resolved tomographic birefringence imaging in vivo. We modulated the input polarization state between three repeated frames, enabling reconstruction of the Mueller matrix at each location within the triple-measured frames. We demonstrated a 2-fold reduction of the birefringence noise floor compared with the conventional dual-input reconstruction method, and a 3-fold reduction of the measurement error of optic axis orientation in retinal imaging with compensation of corneal retardance and diattenuation.
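For a non-depolarizing sample, the benefit of a third input state can be sketched with Jones calculus (our notation and simplification, not necessarily the paper's formalism, which works with the full Mueller matrix and therefore also captures depolarization): if the sample maps each known input state \(\mathbf{e}_i\) to a measured output \(\mathbf{m}_i = \mathbf{J}\,\mathbf{e}_i\), the three measurements stack into an over-determined system with the least-squares solution

\[
\mathbf{M} = \begin{bmatrix} \mathbf{m}_1 & \mathbf{m}_2 & \mathbf{m}_3 \end{bmatrix} = \mathbf{J}\,\mathbf{E},
\qquad
\mathbf{E} = \begin{bmatrix} \mathbf{e}_1 & \mathbf{e}_2 & \mathbf{e}_3 \end{bmatrix},
\qquad
\hat{\mathbf{J}} = \mathbf{M}\,\mathbf{E}^{\dagger}\left(\mathbf{E}\,\mathbf{E}^{\dagger}\right)^{-1}.
\]

Because \(\hat{\mathbf{J}}\) averages over the redundant third measurement instead of fitting two inputs exactly, the estimate is less sensitive to noise, consistent with the reported reduction of the birefringence noise floor.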
In frequency-domain optical coherence tomography (OCT), only half of the available depth range is used. This is due to the complex conjugate (CC) ambiguity, an artifact arising from the conjugate symmetry of the Fourier transform of a real-valued spectrum: positive and negative delays map to mirror images, which prevents use of the highest-sensitivity window around zero delay. Current approaches to resolving this ambiguity require additional active or passive components, increasing system complexity and cost. We present a novel deep-learning method for CC removal (CCR) based on a generative adversarial network (GAN). The model was trained to translate OCT scans with CC artifacts into full-range images without the need for additional equipment or measurements. The data were collected from a phantom sample and human skin in vivo using a swept-source OCT prototype. The adopted GAN architecture is based on the Pix2Pix model, where the discriminator is a PatchGAN and the generator is a U-Net with skip connections, adapted for high-resolution images of 864 × 1024 pixels. CCR-GAN receives as input the complete OCT signal, consisting of intensity and phase images. The findings and evaluation metrics show that our model effectively suppresses the CC artifact in OCT scans, thereby doubling the imaging range. We demonstrated that our model is superior to prior approaches with respect to design complexity, imaging speed, and cost. CCR-GAN can thus be used to suppress the CC mirror terms and provide the full depth range in clinical imaging applications that require a large ranging depth and high sensitivity.
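The abstract specifies the architecture but no implementation; the following is a minimal, hypothetical PyTorch sketch of the described Pix2Pix-style setup, with a U-Net generator (skip connections), a PatchGAN discriminator, and a two-channel input holding the intensity and phase images. Channel widths, network depth, and losses are our assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class UNetGenerator(nn.Module):
    """Minimal U-Net: 2-channel input (intensity, phase) -> 1-channel full-range image."""
    def __init__(self, in_ch: int = 2, out_ch: int = 1, base: int = 64):
        super().__init__()
        self.d1 = nn.Sequential(nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2))
        self.d2 = nn.Sequential(nn.Conv2d(base, base * 2, 4, 2, 1),
                                nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2))
        self.d3 = nn.Sequential(nn.Conv2d(base * 2, base * 4, 4, 2, 1),
                                nn.BatchNorm2d(base * 4), nn.LeakyReLU(0.2))
        self.u1 = nn.Sequential(nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1),
                                nn.BatchNorm2d(base * 2), nn.ReLU())
        self.u2 = nn.Sequential(nn.ConvTranspose2d(base * 4, base, 4, 2, 1),
                                nn.BatchNorm2d(base), nn.ReLU())
        self.u3 = nn.Sequential(nn.ConvTranspose2d(base * 2, out_ch, 4, 2, 1), nn.Tanh())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.d1(x)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        y = self.u1(e3)
        y = self.u2(torch.cat([y, e2], dim=1))     # skip connection
        return self.u3(torch.cat([y, e1], dim=1))  # skip connection

class PatchDiscriminator(nn.Module):
    """PatchGAN: one real/fake logit per local patch of the (condition, target) pair."""
    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.BatchNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, 1, 4, 1, 1),
        )

    def forward(self, cond: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([cond, target], dim=1))

# Shape check at the stated image size (864 x 1024 is divisible by 8, so the 3-level U-Net fits):
g, d = UNetGenerator(), PatchDiscriminator()
x = torch.randn(1, 2, 864, 1024)   # intensity + phase channels
y = g(x)
print(y.shape)                     # torch.Size([1, 1, 864, 1024])
print(d(x, y).shape)               # torch.Size([1, 1, 215, 255]) -- per-patch logits
```

A PatchGAN outputs one real/fake logit per local patch rather than one per image, concentrating the adversarial penalty on local texture; in standard Pix2Pix this is typically combined with an L1 reconstruction loss, though the abstract does not state the loss terms used.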