Needle insertion is a vital procedure in both clinical diagnosis and therapeutic treatment. To ensure accurate needle placement, ultrasound (US) imaging is generally used to guide the insertion. However, due to depth-dependent attenuation and angular dependency, US imaging often fails to visualize the needle consistently and precisely, necessitating the development of reliable methods to track it. Deep learning, an advanced tool that has proven effective and efficient in addressing imaging challenges, has shown promise in enhancing needle visibility in US images. However, existing approaches often rely on manual annotation or simulated data as ground truth, leading to a heavy human workload and bias, or to difficulties in generalizing to real US images. Recently, photoacoustic (PA) imaging has demonstrated the capability of high-contrast needle visualization. In this study, we explore the potential of PA imaging as reliable ground truth for training deep learning networks, eliminating the need for expert annotation. Our network, trained on ex vivo image datasets, demonstrated precise needle localization in US images. This research represents a significant advancement in the application of deep learning and PA imaging in clinical settings, with the potential to enhance the accuracy and safety of needle-based procedures.
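As an illustration of how a high-contrast PA frame could serve as a label without manual annotation, here is a minimal sketch assuming co-registered PA/US frame pairs; the thresholding rule and the `pct` parameter are hypothetical stand-ins, not the authors' actual labeling pipeline:

```python
import numpy as np

def pa_needle_mask(pa_img, pct=99.0):
    """Hypothetical label generation: threshold a high-contrast PA frame to
    obtain a binary needle mask, used as ground truth for the co-registered
    US frame. Assumes the needle dominates the brightest PA pixels."""
    thr = np.percentile(pa_img, pct)
    return (pa_img >= thr).astype(np.uint8)

# Training pairs would then be (us_img, pa_needle_mask(pa_img)),
# with no expert annotation in the loop.
```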
KEYWORDS: Image restoration, Image quality, Image processing, Acquisition tracking and pointing, Reconstruction algorithms, Signal to noise ratio, Photoacoustic tomography, In vivo imaging, Spatial filtering, Brain
Significance: In photoacoustic tomography (PAT), numerous reconstruction algorithms have been used to recover the initial pressure rise distribution from the acquired pressure waves. In practice, most of these reconstructions are carried out on a desktop/workstation, and mobile-based reconstructions are rare. Mobile phones have become ubiquitous in recent years, and most offer considerable computing power. Hence, realizing PAT image reconstruction on a mobile platform is a natural step, and it will enhance the suitability of PAT systems for point-of-care applications.
Aim: To implement PAT image reconstruction on Android-based mobile platforms.
Approach: We developed an Android application, written in Python, that performs the beamforming process on Android phones.
Results: The performance of the developed application was analyzed on different mobile platforms using both simulated and experimental datasets. The results demonstrate that the developed algorithm can accomplish image reconstruction of an in vivo small-animal brain dataset in 2.4 s. Furthermore, the application reconstructs PAT images with comparable speed and no loss of image quality relative to a laptop. A two-fold downsampling procedure can serve as a viable solution for reducing the time needed for beamforming while preserving image quality with minimal degradation.
Conclusions: We proposed an Android-based application that achieves image reconstruction on cheap, small, and universally available phones instead of relatively bulky and expensive desktop computers/laptops/workstations. A beamforming speed of 2.4 s is achieved without hampering the quality of the reconstructed image.
Photoacoustic ophthalmoscopy in rodents is gaining research momentum due to advances in transducer shape and technology. Needle transducers have emerged as a most valuable tool for photoacoustic retinal imaging and have proven sensitive enough to resolve retinal vasculature in vivo. Nevertheless, placement of the eye and screening of the retina remain challenging, since needle transducers must remain static during image acquisition while the optical field of view is limited. This restriction mandates moving the mouse to rotate the eye and therefore the imaging area on the retina. The needle transducer must be temporarily detached during this process to avoid damage to the eye or the transducer. Re-attachment involves additional application of ultrasound gel and does not guarantee ideal placement for optimal imaging performance. Additive manufacturing can help tackle these challenges and allows the design of novel rotational rodent holders for imaging. Hence, we present a fully 3D-printable, rotatable tip/tilt mouse platform with the eye at the center of rotation, combined with a printable needle transducer holder. This system guarantees optimal placement of the needle transducer during imaging and rotation of the mouse eye, avoiding detachment of the transducer and enabling effortless screening of the retina. The capabilities for retinal screening are demonstrated with a multimodal optical coherence photoacoustic ophthalmoscopy system employing two separate wavelengths: 1310 nm for optical coherence and 570 nm for photoacoustic ophthalmoscopy.
Photoacoustic tomography (PAT) is a non-invasive imaging modality showing great potential in medical diagnosis and research due to its high optical contrast and high-resolution deep imaging. After laser irradiation of the tissue surface, energy absorption leads to the generation of acoustic waves (also known as PA waves), which can be collected by ultrasound detectors such as single-element ultrasound transducers (SUTs). A variety of image reconstruction algorithms can be employed to obtain the initial pressure distribution map. Previously, desktops or workstations were widely used for image formation owing to their high computational power. With successive upgrades, however, mobile phones now possess increasingly powerful CPUs and GPUs, sometimes comparable to those of desktop computers, and the capability of PAT can be further enhanced by using the mobile platform. In this work, we explored the use of mobile platforms to reconstruct PAT images without sacrificing image quality. A mobile application was developed in Python, implementing a simple delay-and-sum (DAS) beamformer for generating PAT images. A HUAWEI P20 was employed to test the application's performance; it took less than 30 s to form a well-reconstructed PAT image with an SNR above 40 dB. Downsampling can further shorten the reconstruction time while still reconstructing the photoacoustic target structure properly, especially with a two-fold downsampling operation. These results indicate that mobile platforms can support fast PAT image reconstruction while maintaining good image quality.
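For context, here is a minimal NumPy sketch of a delay-and-sum beamformer for a circular scanning geometry; the array shapes, variable names, and the omission of bandpass filtering and envelope detection are simplifying assumptions, not the exact implementation shipped in the app:

```python
import numpy as np

def das_reconstruct(sinogram, angles, scan_radius, fs, c, grid):
    """Simple delay-and-sum (DAS) beamformer for circular-scanning PAT.

    sinogram    : (n_positions, n_samples) array of recorded A-lines
    angles      : (n_positions,) transducer angular positions in radians
    scan_radius : scanning radius in metres
    fs          : sampling frequency in Hz
    c           : speed of sound in m/s
    grid        : 1-D array of pixel coordinates in metres (square image)
    """
    xs = scan_radius * np.cos(angles)
    ys = scan_radius * np.sin(angles)
    X, Y = np.meshgrid(grid, grid)
    img = np.zeros_like(X, dtype=float)
    n_samples = sinogram.shape[1]
    for k in range(len(angles)):
        # time of flight from every pixel to the k-th detector position
        dist = np.hypot(X - xs[k], Y - ys[k])
        idx = np.clip(np.round(dist / c * fs).astype(int), 0, n_samples - 1)
        img += sinogram[k, idx]  # pick the delayed sample and accumulate
    return img / len(angles)
```

Assuming the downsampling studied above is over scan positions, two-fold downsampling would amount to calling `das_reconstruct(sinogram[::2], angles[::2], ...)`, halving the work per pixel.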
Generating an image of acceptable quality takes several minutes in circular-scanning-geometry-based photoacoustic tomography (PAT) imaging systems. The imaging speed can be improved by employing multiple single-element ultrasound transducers (USTs) and faster scanning; however, the low signal-to-noise ratio at higher scanning speeds and the artifacts arising from sparse signal acquisition limit this approach. Thus, there is a need to improve the speed of the PAT imaging system without compromising image quality. To improve the frame rate of the PAT system, we propose a convolutional neural network (CNN)-based deep learning architecture for reconstructing artifact-free PAT images from fast-scanning data. The proposed model was trained on a simulated dataset, and its performance was evaluated using experimental phantom and in vivo imaging. The ability to improve the frame rate was evaluated on both single-UST and multi-UST PAT systems. Our results suggest that the proposed deep learning architecture improves the frame rate six-fold in a single-UST PAT system and two-fold in a multi-UST PAT system. The fastest frame rate of ~3 Hz was achieved without compromising the quality of the PAT image.
Significance: In circular scanning photoacoustic tomography (PAT), it takes several minutes to generate an image of acceptable quality, especially with a single-element ultrasound transducer (UST). The imaging speed can be enhanced by faster scanning (with high-repetition-rate light sources) and by using multiple USTs. However, artifacts arising from sparse signal acquisition and the low signal-to-noise ratio at higher scanning speeds limit the imaging speed. Thus, there is a need to improve the imaging speed of PAT systems without hampering the quality of the PAT image.
Aim: To improve the frame rate (or imaging speed) of the PAT system by using deep learning (DL).
Approach: For improving the frame rate (or imaging speed) of the PAT system, we propose a novel U-Net-based DL framework to reconstruct PAT images from fast scanning data.
Results: The efficiency of the network was evaluated on both single- and multiple-UST-based PAT systems. Both phantom and in vivo imaging demonstrate that the network can improve the imaging frame rate by approximately sixfold in single-UST-based PAT systems and by approximately twofold in multi-UST-based PAT systems.
Conclusions: We proposed an innovative method to improve the frame rate (or imaging speed) using DL; with this method, a frame rate of ∼3 Hz is achieved without hampering the quality of the reconstructed image.
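A minimal PyTorch sketch of the kind of U-Net described above is given below; the depth, channel widths, and loss are placeholders rather than the authors' exact architecture, and the network maps a sparse/fast-scan reconstruction to an artifact-free image:

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level encoder-decoder with skip connections; input H and W are
    assumed divisible by 4. Maps an artifact-laden fast-scan reconstruction
    to an artifact-free image."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1, self.enc2 = block(1, ch), block(ch, 2 * ch)
        self.pool = nn.MaxPool2d(2)
        self.bott = block(2 * ch, 4 * ch)
        self.up2 = nn.ConvTranspose2d(4 * ch, 2 * ch, 2, stride=2)
        self.dec2 = block(4 * ch, 2 * ch)
        self.up1 = nn.ConvTranspose2d(2 * ch, ch, 2, stride=2)
        self.dec1 = block(2 * ch, ch)
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.out(d1)

# Training pairs: (fast-scan DAS reconstruction, dense slow-scan reconstruction)
# net = TinyUNet(); loss = nn.MSELoss()(net(sparse_img), dense_img)
```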
Pulsed laser diodes (PLDs) are preferred as excitation sources in photoacoustic tomography (PAT) due to their low cost, compact size, and high pulse repetition rate. When a PLD is used in conjunction with multiple single-element ultrasound transducers (SUTs), the imaging speed can be improved. However, accurate PAT image reconstruction requires the exact scanning radius of each SUT. Herein, we propose a novel deep learning approach that alleviates the need for radius calibration. We developed a convolutional neural network (fully dense U-Net) with a convolutional long short-term memory (LSTM) block as the bridge to reconstruct the PAT images. In vivo imaging was used to verify the performance of the network. Our results and analysis demonstrate that the proposed network eliminates the need for radius calibration without sacrificing reconstructed PAT image quality.
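The ConvLSTM bridge is the distinctive element here. Below is a minimal PyTorch sketch of a ConvLSTM cell of the sort that could sit at the U-Net bottleneck; the gate ordering and kernel size are assumptions, not the authors' exact design:

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Convolutional LSTM cell: standard LSTM gating with 2-D convolutions,
    so the recurrent state keeps its spatial layout (useful for bridging a
    U-Net encoder and decoder)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # one convolution produces all four gate pre-activations at once
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)  # update cell state
        h = o * torch.tanh(c)          # emit new hidden state
        return h, c
```

At the bottleneck, the encoder feature maps can be fed through the cell (with states initialized to zeros), letting the network aggregate spatial context recurrently before decoding.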
In circular-scanning-geometry-based photoacoustic tomography (PAT) imaging systems, the axial resolution is spatially invariant and is limited by the bandwidth of the detector. The tangential resolution, however, is spatially varying and depends on the aperture size of the detector. Conventionally, large-aperture detectors are the preferred detection elements in circular-view PAT imaging systems, but their size hampers the tangential resolution. Although several techniques have been proposed to improve the tangential resolution, they are hindered by their own inherent limitations. Herein, we propose a deep learning architecture to counter the spatially variant tangential resolution in circular-scanning PAT. We used a U-Net-based CNN architecture to improve the tangential resolution of the acquired PAT images. Our results suggest the proposed deep learning architecture improves the tangential resolution of the acquired images five-fold without compromising image quality.
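For reference, a commonly used first-order geometric estimate of this effect (a sketch, not necessarily the expression used in the paper): for a flat detector of aperture $d$ scanning at radius $R_0$, the tangential resolution at distance $r$ from the scanning centre degrades roughly as

$$R_T(r) \approx \sqrt{R_{\mathrm{BW}}^2 + \left(\frac{r}{R_0}\, d\right)^2},$$

where $R_{\mathrm{BW}}$ is the bandwidth-limited resolution that also sets the spatially invariant axial resolution. The aperture term $r d / R_0$ grows linearly away from the centre, which is why large-aperture detectors hamper the tangential resolution off-centre.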
Assessment of morphological changes in the cerebral venous sinus of small-animal models is important for gaining insight into various disease conditions such as intracranial hypotension, idiopathic intracranial hypertension (IIH), cerebral venous sinus thrombosis, and subdural hematoma. Photoacoustic tomography (PAT), a fast-growing non-invasive hybrid imaging modality combining high optical contrast with high resolution in deep-tissue imaging, offers a novel, rapid, and cost-effective way to analyze morphological changes in the venous sinus compared with conventional imaging modalities. In this study, we examined the morphological changes of the sagittal sinus in the rat brain due to intracranial pressure changes induced by cerebrospinal fluid (CSF) extraction, using a low-cost pulsed-laser-diode (PLD)-based desktop PAT system. Our results indicate that the desktop PLD-PAT system can be employed to evaluate changes in the cerebral venous sinus in preclinical models. We observed a ~30% average increase in the area of the sagittal venous sinus from baseline when CSF was extracted.
Interferometers are widely used in industry for surface profiling of microsystems. They can inspect both smooth (reflective) and rough (scattering) surfaces over a wide range of sizes. If the object surface is smooth, the interference between the reference and object beams produces visible fringes; if the object surface is optically rough, it produces speckles. Typical microsystems such as MEMS contain both smooth and rough surfaces on a single platform, and recovering the surface profile of such samples with a single wavelength is not straightforward. In this paper, we discuss a dual-wavelength approach to measure the surface profile of both smooth and rough surfaces simultaneously. The interference fringe pattern generated on a combined surface is acquired at two different wavelengths. The wrapped phases at each wavelength are calculated and subtracted to generate a contour phase map. This subtraction reveals the contour fringes of rough and smooth surfaces simultaneously. The dual-wavelength contour measurement procedure and experimental results will be presented.
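The underlying relation is standard two-wavelength contouring, stated here for reference: subtracting the wrapped phases measured at $\lambda_1$ and $\lambda_2$ yields a phase map equivalent to one measured at the much longer synthetic wavelength,

$$\Lambda = \frac{\lambda_1 \lambda_2}{|\lambda_1 - \lambda_2|}, \qquad \Delta\phi(x,y) = \phi_{\lambda_1}(x,y) - \phi_{\lambda_2}(x,y) = \frac{4\pi}{\Lambda}\, h(x,y),$$

for normal illumination and observation, where $h(x,y)$ is the surface height. Because $\Lambda \gg \lambda_{1,2}$, the contour fringes are coarse enough to survive on speckled (rough) surfaces.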
Digital speckle pattern interferometry (DSPI) has been widely used for surface metrology of optically rough surfaces. A single visible wavelength provides high measurement accuracy but limits the deformation measurement range of the interferometer. It is also difficult to reveal the shape of a rough surface with one wavelength in a normal illumination and observation geometry. Using more than one visible wavelength in DSPI, one can measure large deformations as well as shape via the synthetic-wavelength approach. In this work, we discuss multi-wavelength speckle pattern interferometry using a Bayer RGB sensor. The colour sensor allows simultaneous acquisition of speckle patterns at different wavelengths. The colour images acquired with the RGB sensor are split into their individual components, and the corresponding interference phase maps are recovered using an error-compensating phase-shifting algorithm. The wrapped phase is unwrapped to quantify the deformation or shape of the sample under inspection. The theoretical background of RGB interferometry for deformation and shape measurements, along with experimental results, will be presented.
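A minimal NumPy sketch of the channel-splitting and phase-recovery step follows; the Hariharan five-step formula is one common error-compensating phase-shifting algorithm, used here as a stand-in since the abstract does not name the exact algorithm employed:

```python
import numpy as np

def wrapped_phase_hariharan(I1, I2, I3, I4, I5):
    """Hariharan five-step algorithm (nominal pi/2 phase shifts); its
    symmetric form compensates linear phase-shifter miscalibration."""
    return np.arctan2(2.0 * (I2 - I4), 2.0 * I3 - I1 - I5)

def per_channel_phase(rgb_frames):
    """rgb_frames: (5, H, W, 3) stack of colour speckle interferograms.
    Splits each frame into R, G, B components and returns one wrapped
    phase map per wavelength."""
    return [wrapped_phase_hariharan(*rgb_frames[:, :, :, ch]) for ch in range(3)]
```

The three wrapped phase maps would then be unwrapped (or pairwise subtracted for synthetic-wavelength contouring, as in the dual-wavelength work above) to recover deformation or shape.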