Applications utilizing free-space beam propagation over long atmospheric paths require active sensing and beam shaping. New wavefront sensor designs promise improved performance in deep turbulence, but comprehensive comparisons of modern wavefront sensors within adaptive optics (AO) loops have yet to identify a winning system-level design for correcting deep turbulence. Here, we attempt to shed light on the problem using a comprehensive wave-optics model to evaluate least-squares-based and interferometric wavefront sensing techniques, namely the Shack-Hartmann wavefront sensor and pupil-plane off-axis digital holography, in combination with optimal and adaptive predictive control. The Shack-Hartmann wavefront sensor is an established sensor that estimates the wavefront through fast measurements of the wavefront gradient followed by least-squares reconstruction. Interferometric techniques such as digital holography provide higher-resolution wavefront reconstruction and improved performance in strong turbulence, but with stricter laser requirements and greater computation time. For an optimal AO design in a given application, there is therefore a trade-off between reconstructed wavefront resolution and speed. In this paper, we use wave-optics simulation to qualitatively discuss the upper bounds of AO performance in deep turbulence and the spatial resolution limitations of Shack-Hartmann and digital holography wavefront sensors. We show preliminary results of closed-loop AO performance in dynamic deep turbulence, inclusive of wind and limited spatial resolution. Additionally, we present a preliminary analysis of using predictive control to improve the temporal performance of an AO loop and compensate for hardware-induced latencies.
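To illustrate the least-squares reconstruction step that the Shack-Hartmann sensor relies on, the following is a minimal sketch: slopes are modeled as finite differences of the phase, and the phase is recovered by solving the resulting linear system. The 8x8 grid, the Hudgin-style forward-difference geometry, and the synthetic test wavefront are illustrative assumptions, not parameters from the paper.

```python
import numpy as np

def gradient_matrix(n):
    """Forward-difference operator mapping an n*n phase vector to
    stacked x- and y-gradient (slope) samples."""
    idx = lambda i, j: i * n + j
    rows = []
    for i in range(n):                  # x-gradients
        for j in range(n - 1):
            r = np.zeros(n * n)
            r[idx(i, j + 1)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
    for i in range(n - 1):              # y-gradients
        for j in range(n):
            r = np.zeros(n * n)
            r[idx(i + 1, j)], r[idx(i, j)] = 1.0, -1.0
            rows.append(r)
    return np.array(rows)

n = 8
G = gradient_matrix(n)

# Synthetic "true" wavefront: tilt plus a defocus-like term.
y, x = np.mgrid[0:n, 0:n]
phi_true = 0.3 * x + 0.1 * ((x - n / 2) ** 2 + (y - n / 2) ** 2)
phi_true -= phi_true.mean()             # piston is unobservable

slopes = G @ phi_true.ravel()           # simulated slope measurements

# Least-squares reconstruction; remove the (unobservable) piston mode.
phi_hat, *_ = np.linalg.lstsq(G, slopes, rcond=None)
phi_hat -= phi_hat.mean()
residual = np.max(np.abs(phi_hat - phi_true.ravel()))
```

In a real AO loop the reconstructor matrix (the pseudoinverse of G) would be precomputed once, so each frame reduces to a single matrix-vector product, which is what makes the Shack-Hartmann approach fast relative to holographic reconstruction.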
The beam control system of a high-energy laser (HEL) application can experience error amplification due to disturbance measurements associated with the non-common path of the optical train. To address this error, conventional schemes require offline identification or a calibration process to determine the non-common-path portion of a measured sequence that contains both common and non-common-path disturbances. However, not only is it challenging to model the properties of the non-common-path disturbance alone, but a stationary model may not guarantee consistent jitter control performance, and repeated calibration may be necessary. This paper first classifies the non-common-path error problem into two categories, depending on whether the designer has one or two measurements available for real-time processing. For the latter case, an adaptive correlated pre-filter is introduced to provide in-situ determination of the non-common-path disturbance through an adaptive correlation procedure. Contrasting features and advantages of this algorithm are demonstrated alongside a baseline approach that uses notch filters to bypass the non-common-path portion of the combined sequence.
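The two-measurement case above can be illustrated with a standard LMS adaptive noise-cancellation sketch: a filter adapts on a second, correlated measurement to predict the common-path component of the primary measurement, and the residual serves as the in-situ estimate of the non-common-path disturbance. This is a generic stand-in for the paper's adaptive correlated pre-filter; the sinusoidal disturbance models, filter length, and step size are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20000
t = np.arange(N)

common = np.sin(2 * np.pi * 0.010 * t)            # shared (common-path) jitter
non_common = 0.5 * np.sin(2 * np.pi * 0.037 * t)  # path-specific error

primary = common + non_common                      # sensor seeing both
reference = common + 0.01 * rng.standard_normal(N)  # correlated second sensor

# LMS: adapt w so that w filtered over `reference` predicts the common
# part of `primary`; the residual e is then the non-common-path estimate.
L, mu = 32, 0.01
w = np.zeros(L)
e = np.zeros(N)
for k in range(L, N):
    x = reference[k - L:k][::-1]   # most recent L reference samples
    e[k] = primary[k] - w @ x      # cancellation residual
    w += mu * e[k] * x             # LMS weight update

# After convergence, e should track the non-common disturbance.
err = np.mean((e[-2000:] - non_common[-2000:]) ** 2)
```

Because the adaptation runs continuously, this kind of scheme avoids the repeated offline calibration noted above, at the cost of a convergence transient and sensitivity to the step size mu.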