Constructing a physics-augmented digital twin of the skull is imperative for a wide range of transcranial ultrasound applications, including ultrasound computed tomography and focused ultrasound therapy. The high impedance contrast and the acoustic-elastic coupling between soft tissue and bone considerably increase the complexity of the ultrasound wavefield, emphasizing the need for waveform-based inversion approaches. This work applies reverse time migration in conjunction with the spectral-element method to an in vitro human skull to obtain a starting model for full-waveform inversion and adjoint-based shape optimization. Two distinct brain phantoms are considered, in which the cranial cavity of the in vitro human skull was filled with (1) homogeneous water and (2) gelatin with two cylindrical inclusions. A 2D slice through the posterior of the skull was acquired using a ring-like aperture of 1024 ultrasound transducers with a bandwidth of approximately 1 MHz to 3 MHz. Waveform-based reverse time migration was then used to resolve the inner and outer contours of the skull, from which a conforming hexahedral finite-element mesh was constructed. Synthetic measurements obtained by solving the coupled acoustic-viscoelastic wave equation are in good agreement with the observed laboratory measurements. Recomputing the reverse time migration reconstructions with this revised wave speed model is demonstrated to improve localization of the gelatin inclusions within the cranial cavity.
Stroke is a significant cause of mortality and disability in America. Because ischemic and hemorrhagic stroke are treated differently, imaging must be performed before therapeutic medication is administered. Unfortunately, the current standard imaging methods, namely CT and MRI, require specialized facilities and staff, which can delay triage and, therefore, treatment. Recent work suggests that ultrasound tomography (UST) is capable of imaging in vivo tissue properties and may have potential as a stroke diagnostic that could be deployed at the point of injury rather than at a local hospital. In this work, we investigate the feasibility of using UST to image the brain via in silico, in vitro, and ex vivo studies. The results of this work indicate some of the challenges that must be overcome to effectively image in vivo stroke patients.
When using a ring array to perform ultrasound tomography, the most computationally intensive component of frequency-domain full waveform inversion (FWI) is the Helmholtz equation solver. The Helmholtz equation is an elliptic partial differential equation (PDE) whose discretization leads to a large system of equations; in many cases, solving this large system is itself computationally intensive and requires an iterative method. Our current solution discretizes the 2D Helmholtz equation with a 9-point stencil and uses the resulting block tridiagonal structure to efficiently compute a block LU factorization. Conceptually, the L and U systems are equivalent to forward and backward wave propagation along one of the spatial dimensions of the grid, yielding a direct, non-iterative solution to the Helmholtz equation from a single forward and backward sweep. Based on this observation, the PDE representation of the Helmholtz equation is split into two one-way wave equations prior to discretization. The numerical implementations of these one-way wave equations are highly parallelizable and lend themselves well to accelerated GPU implementations. We consider the Fourier split-step and phase-shift-plus-interpolation (PSPI) methods from seismic imaging as numerical solutions to the one-way wave equations. We examine how each scheme affects the numerical accuracy of the final Helmholtz solution and present its impact on FWI with breast imaging examples.
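The one-way sweep described above can be illustrated with a minimal Fourier split-step propagator. This is a sketch of the general technique, not the paper's implementation: the grid layout, the per-row reference speed, and the phase-screen correction for lateral heterogeneity are assumptions made for the illustration.

```python
import numpy as np

def split_step_sweep(p0, c, f, dx, dy):
    """One-way (forward) sweep of a Fourier split-step propagator.

    p0 : complex pressure on the first grid row, shape [nx]
    c  : sound speed map, shape [ny, nx]
    f  : temporal frequency (Hz); dx, dy : grid spacings (m)
    Returns the complex pressure field on the full [ny, nx] grid.
    """
    ny, nx = c.shape
    omega = 2 * np.pi * f
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)   # lateral spatial frequencies
    p = np.zeros((ny, nx), dtype=complex)
    p[0] = p0
    for j in range(1, ny):
        c0 = c[j].mean()                        # row-wise reference speed (assumption)
        # angular-spectrum axial wavenumber; +0j makes evanescent roots imaginary
        kz = np.sqrt((omega / c0) ** 2 - kx ** 2 + 0j)
        # diffraction step: propagate one row in the spatial-frequency domain
        P = np.fft.fft(p[j - 1]) * np.exp(1j * kz * dy)
        p_row = np.fft.ifft(P)
        # phase screen: correct for lateral deviation from the reference speed
        p[j] = p_row * np.exp(1j * omega * (1.0 / c[j] - 1.0 / c0) * dy)
    return p
```

In a homogeneous medium the phase screen vanishes and the sweep reduces to pure angular-spectrum propagation; the backward sweep is the same operation run in the opposite direction.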
Most ultrasound transducer designs are driven toward high frequencies by the need to maximize the resolution achievable with B-mode reflectivity imaging. Because full-waveform inversion (FWI) requires low frequencies to overcome cycle skipping, it is often difficult to apply FWI to the same high-frequency transducer hardware. Recent work on time-domain adaptive waveform inversion (AWI) addresses this with a deconvolution approach that overcomes cycle skipping. However, most clinical implementations of ultrasound tomography rely on frequency-domain FWI to quickly reconstruct images on clinically relevant time scales. Although the broadband nature of AWI makes it difficult to translate to the frequency domain, we can approximate the properties of AWI with a frequency-differencing approach. We develop and describe both a model-domain and a data-domain method for frequency differencing. Simulations and phantom experiments show that each frequency-differencing approach helps overcome cycle skipping by providing a sufficiently accurate starting model for FWI.
KEYWORDS: Breast, Tomography, Ultrasound tomography, Simulations, Medical image reconstruction, Image restoration, Data modeling, Acoustic tomography, Ultrasonography, In vivo imaging
The convergence of waveform inversion in ultrasound tomography is heavily influenced by the choice of starting model. Ray tomography is often used as the starting model for waveform inversion; however, artifacts from ray tomography can persist through waveform inversion. On the other hand, a homogeneous starting model may produce cycle-skipping artifacts if the frequency of the transmitted waveform is too high or the error between the starting model and the ground truth is too large. Clinical in vivo breast data suggest that waveform inversion from a homogeneous starting model yields an accurate speed-of-sound reconstruction, provided the starting model is close enough to the average sound speed of the medium and the starting frequency is sufficiently low to avoid cycle skipping. Comparing waveform inversion results initialized with ray tomography and with a homogeneous sound speed, the homogeneous starting model avoids the oscillatory artifacts that ray tomography produces at the edges of the breast. Although the RMS error between the two waveform inversion results is 29.6 m/s, most of that error comes from reconstruction artifacts at the edges of the breast. When the RMS error is measured inside the breast, away from its boundaries, it drops to 11.5 m/s.
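The interior-versus-edge comparison above amounts to an RMS error evaluated over a restricted mask. A minimal sketch, where the mask construction (e.g., an eroded breast support) is an assumption, not the paper's procedure:

```python
import numpy as np

def rms_error(recon_a, recon_b, mask=None):
    """RMS difference (m/s) between two sound speed reconstructions.

    mask : optional boolean array selecting the pixels over which the error
           is evaluated (e.g., the breast interior, away from its boundary).
    """
    diff = recon_a - recon_b
    if mask is not None:
        diff = diff[mask]
    return np.sqrt(np.mean(diff ** 2))
```

Evaluating the metric over the full field versus an interior mask is what separates boundary-artifact error from disagreement inside the breast.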
Phase aberration is one of the key sources of image degradation in handheld B-mode ultrasound imaging. Sound speed heterogeneities aberrate the image by inducing additional tissue-dependent delays and diffractive effects that conventional beamforming does not account for. For this reason, the Fourier split-step angular spectrum method is used to simulate pressure fields in a medium with heterogeneous sound speed and to form B-mode images from the cross-correlation of transmitted and received wavefields. Because the strongest aberrations are caused by a laterally varying sound speed profile, this work presents a new sound speed estimator that can correct for aberrations in laterally varying media. Phantom experiments show a 58% to 76% improvement in point-target resolution and a 2.5x improvement in contrast-to-noise ratio from the proposed sound speed estimation and phase aberration correction scheme.
Purpose: Isolating the mainlobe and sidelobe contributions to the ultrasound image can improve imaging contrast by removing off-axis clutter. Previous work achieves this separation of mainlobe and sidelobe contributions based on the covariance of received signals. However, the formation of a covariance matrix at each imaging point can be computationally burdensome and memory intensive for real-time applications. Our work demonstrates that the mainlobe and sidelobe contributions to the ultrasound image can be isolated based on the receive aperture spectrum, greatly reducing computational and memory requirements. Approach: The separation of mainlobe and sidelobe contributions to the ultrasound image is shown in simulation, in vitro, and in vivo using the aperture spectrum method and multicovariate imaging of subresolution targets (MIST). Contrast, contrast-to-noise ratio (CNR), and speckle signal-to-noise ratio are used to compare the aperture spectrum approach with MIST and conventional delay-and-sum (DAS) beamforming. Results: The aperture spectrum approach improves contrast by 1.9 to 6.4 dB beyond MIST and 8.9 to 13.5 dB beyond conventional DAS B-mode imaging. However, the aperture spectrum approach yields speckle texture similar to DAS. As a result, the aperture spectrum-based approach has lower CNR than MIST but greater CNR than conventional DAS. The CPU implementation of the aperture spectrum-based approach reduces computation time by a factor of 9 and memory consumption by a factor of 128 for a 128-element transducer. Conclusions: The mainlobe contribution to the ultrasound image can be isolated based on the receive aperture spectrum, which greatly reduces the computational cost and memory requirement of this approach compared with MIST.
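The core idea can be sketched as follows: after focusing delays are applied, on-axis (mainlobe) energy concentrates at low spatial frequencies of the receive aperture, so a single FFT across the elements can split mainlobe from sidelobe energy. This is an illustrative sketch, not the published method; the cutoff fraction `frac` is a hypothetical tuning parameter.

```python
import numpy as np

def mainlobe_from_aperture_spectrum(delayed_channels, frac=0.25):
    """Split mainlobe and sidelobe energy at one imaging point.

    delayed_channels : complex I/Q samples after focusing delays, shape [n_elements]
    frac : fraction of the aperture spectrum around DC attributed to the
           mainlobe (hypothetical parameter for this sketch).
    Returns (mainlobe energy, sidelobe energy).
    """
    n = delayed_channels.size
    spectrum = np.fft.fft(delayed_channels)      # receive aperture spectrum
    k = np.fft.fftfreq(n)                        # normalized spatial frequency
    in_mainlobe = np.abs(k) <= frac / 2          # low spatial frequencies ~ on-axis
    # split the spectrum, then measure energy via Parseval's relation
    main_energy = np.sum(np.abs(spectrum[in_mainlobe]) ** 2) / n
    side_energy = np.sum(np.abs(spectrum[~in_mainlobe]) ** 2) / n
    return main_energy, side_energy
```

One FFT per pixel replaces the per-pixel covariance matrix, which is the source of the computational and memory savings claimed above.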
KEYWORDS: Ultrasonography, Scanners, Point spread functions, Scattering, Near field, Data modeling, Backscatter, Transducers, Speckle, Real time imaging
Isolating the mainlobe and sidelobe contributions to an ultrasound image can improve imaging contrast by removing sidelobe clutter. Previous work achieves the separation of mainlobe and sidelobe contributions based on the covariance of received signals. However, forming a covariance matrix of receive signals at each imaging point can be computationally burdensome and memory intensive for real-time applications. This work demonstrates that the mainlobe and sidelobe contributions to the ultrasound image can be isolated based on the receive aperture spectrum, which greatly reduces computational and memory requirements. This aperture spectrum-based approach is shown to improve lesion contrast by 16.5 to 41.2 dB beyond conventional delay-and-sum B-mode imaging, while the prior method based on the covariance model achieves a 6.1 to 21.9 dB contrast improvement beyond conventional delay-and-sum.
We present full-waveform ultrasound computed tomography (USCT) for sound speed reconstruction based on the angular spectrum method using linear transducer arrays. We first present a transmission scenario in which plane waves are emitted by a transmitting array and received by an array on the opposite side of the object of interest. These arrays are rotated around the object to interrogate the medium from different view angles. Waveform inversion reconstruction is demonstrated on a numerical breast phantom in which the sound speed varies from 1486 to 1584 m/s. This example is used to isolate and examine the impact of each view angle and frequency used in the reconstruction process. We also examine cycle-skipping artifacts as well as optimization schemes that can be used to overcome them. The goal of this work is to provide an open-source example and implementation of the waveform inversion reconstruction algorithm on GitHub: https://github.com/rehmanali1994/FullWaveformInversionUSCT (DOI: 10.5281/zenodo.4774394). Next, we extend the waveform inversion framework to perform sound speed tomography for pulse-echo ultrasound imaging with a single linear array that transmits pulsed waves and receives signals backscattered from the medium. We first demonstrate that B-mode image reconstructions can be achieved using the angular spectrum method; we then derive an optimization framework for estimating the sound speed in the medium by optimizing B-mode images with respect to slowness via the angular spectrum method. We present an initial proof of concept with point targets in a homogeneous medium to demonstrate the fundamental principles of this new technique.
We present a refraction-corrected sound speed reconstruction technique for layered media based on the angular coherence of plane waves. Previous work has shown that sound speed estimation and refraction-corrected image reconstruction can be achieved using the coherence of full-synthetic-aperture channel data. However, acquiring the full-synthetic-aperture dataset requires a large number of transmissions, which can confound sound speed estimation due to scatterer motion between transmit events, especially in vivo. Furthermore, sound speed estimation requires producing full-synthetic-aperture coherence images for each trial sound speed, which makes the overall computational cost quite burdensome. The angular coherence beamformer, initially devised as a faster alternative to the more conventional spatial coherence beamformer, measures coherence between fully beamformed I/Q data for each plane wave rather than between receive channel data prior to receive beamforming. As a result, angular coherence beamforming can significantly reduce the computation time needed to reconstruct a coherence image by taking advantage of receive beamforming. By replacing spatial coherence with angular coherence, we apply the coherence-maximization methodology of prior full-synthetic-aperture work to plane-wave channel data, significantly reducing the computational cost of sound speed estimation. This methodology is validated with both simulated and experimental plane-wave channel data.
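The angular coherence computation can be sketched as correlating fully beamformed plane-wave images across transmit-angle lags; the lag range and normalization below are assumptions for the sketch, not the paper's exact estimator.

```python
import numpy as np

def angular_coherence(imgs, max_lag=5):
    """Mean correlation coefficient between beamformed plane-wave images
    at transmit-angle lags 1..max_lag.

    imgs : complex I/Q images, shape [n_angles, ny, nx], one per plane wave,
           all beamformed with the same trial sound speed.
    """
    coeffs = []
    for lag in range(1, max_lag + 1):
        a, b = imgs[:-lag], imgs[lag:]
        num = np.real(np.sum(a * np.conj(b)))
        den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
        coeffs.append(num / den)
    return np.mean(coeffs)
```

Sound speed estimation then reduces to beamforming the same plane-wave data at a sweep of trial speeds and keeping the speed that maximizes this coherence; the expensive per-speed step is ordinary receive beamforming rather than full-synthetic-aperture image formation.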
This work presents refraction-corrected sound speed reconstruction techniques for transmission-based ultrasound computed tomography using a circular transducer array. Pulse travel times between element pairs can be calculated from slowness (the reciprocal of sound speed) using the eikonal equation. Slowness reconstruction is posed as a nonlinear least-squares problem whose objective is to minimize the error between measured and forward-modeled pulse travel times. The Gauss-Newton method converts this problem into a sequence of linear least-squares problems, each of which can be efficiently solved using conjugate gradients. However, the sparsity of ray-pixel intersections leads to ill-conditioned linear systems and hinders stable convergence of the reconstruction. This work considers three approaches for resolving the ill-conditioning in this sequence of linear inverse problems: (1) Laplacian regularization, (2) a Bayesian formulation, and (3) resolution-filling gradients. The goal of this work is to provide an open-source example and implementation of the sound speed reconstruction algorithms, which is currently maintained on GitHub: https://github.com/rehmanali1994/refractionCorrectedUSCT.github.io
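The Gauss-Newton linearization with Laplacian regularization (approach 1 above) can be sketched as a single update step. This is an illustrative sketch: the dense normal-equations solve stands in for the conjugate-gradient solver mentioned in the abstract, and the ray-path Jacobian `J` is assumed given (e.g., from tracing rays through the current slowness model via the eikonal equation).

```python
import numpy as np

def gauss_newton_step(J, r, s, lam, L):
    """One regularized Gauss-Newton update of the slowness model s.

    J   : ray-path Jacobian d(travel time)/d(slowness), shape [n_rays, n_pixels]
    r   : travel-time residuals, t_measured - t_modeled, shape [n_rays]
    s   : current slowness estimate, shape [n_pixels]
    lam : regularization weight (assumption: chosen by trial or an L-curve)
    L   : discrete Laplacian on the pixel grid (the smoothness regularizer)

    Solves  min_ds ||J ds - r||^2 + lam^2 ||L (s + ds)||^2
    via the normal equations and returns the updated slowness s + ds.
    """
    A = J.T @ J + lam ** 2 * (L.T @ L)
    b = J.T @ r - lam ** 2 * (L.T @ (L @ s))
    ds = np.linalg.solve(A, b)
    return s + ds
```

In practice `J` is extremely sparse (each ray touches few pixels), so conjugate gradients on the normal equations avoids ever forming `J.T @ J` explicitly; the Laplacian term is what fills in pixels that no ray intersects.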