MCAO Assisted Visible Imager and Spectrograph (MAVIS) is a new instrument for ESO's VLT Adaptive Optics Facility (AOF). MAVIS incorporates an Adaptive Optics (AO) system to cancel the image blurring induced by atmospheric turbulence. The latency and computational load implied by the system dimensioning led us to design a new software and hardware architecture for the Real Time Controller (RTC). Notably, the COSMIC framework harnesses GPUs for accelerated computation and scales across multiple processes with minimal overhead by using shared memory. Employing a graph-based architecture in which operations are intuitively represented as nodes, it aims to simplify design, implementation, testing and integration by relying on robust concepts and useful tools. Recent updates have further enhanced its versatility, cementing its potential as a future-proof, extensible framework for AO advancements and their development process.
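To illustrate the graph-based architecture mentioned above, the following hypothetical C++ sketch (not the COSMIC API; all class and buffer names are invented for illustration) shows a tiny pipeline expressed as nodes that consume and produce named arrays held in a shared store, executed once per frame.

// Hypothetical sketch (not the COSMIC API): a pipeline expressed as a graph of
// nodes, each consuming and producing named buffers held in a shared store.
// All class and buffer names are illustrative assumptions.
#include <cstdio>
#include <map>
#include <memory>
#include <string>
#include <vector>

using Buffer = std::vector<float>;             // stand-in for a shared-memory array
using Store  = std::map<std::string, Buffer>;  // stand-in for the shared-memory store

struct Node {                                  // one operation in the graph
    virtual ~Node() = default;
    virtual void execute(Store &s) = 0;        // read inputs, write outputs
};

struct Calibrate : Node {                      // raw pixels -> calibrated pixels
    void execute(Store &s) override {
        const Buffer &raw = s["raw"], &dark = s["dark"];
        Buffer out(raw.size());
        for (size_t i = 0; i < raw.size(); ++i) out[i] = raw[i] - dark[i];
        s["pixels"] = out;
    }
};

struct Mvm : Node {                            // calibrated pixels -> DM commands
    void execute(Store &s) override { s["commands"] = s["pixels"]; }  // identity placeholder
};

int main() {
    Store shm{{"raw", {1.f, 2.f, 3.f}}, {"dark", {0.5f, 0.5f, 0.5f}}};
    std::vector<std::unique_ptr<Node>> graph;  // execution order stands in for the graph topology
    graph.emplace_back(std::make_unique<Calibrate>());
    graph.emplace_back(std::make_unique<Mvm>());
    for (auto &n : graph) n->execute(shm);     // one traversal = one AO frame
    printf("command[0] = %f\n", shm["commands"][0]);
    return 0;
}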
KEYWORDS: Real-time computing, Control systems, Data acquisition, Control systems design, Telecommunications, Field programmable gate arrays, Software development, Data communications, Turbulence, Tomography
To provide data sharper than JWST and deeper than HST, MAVIS (the MCAO Assisted Visible Imager and Spectrograph) will be driven by a state-of-the-art real-time control (RTC) system leveraging cutting-edge hardware and software technologies. As an implementation of the COSMIC platform, the MAVIS RTC will host a hard RTC module, fed in quasi real time with optimized parameters from its companion soft RTC. In order to meet the AO performance requirement in the visible, the overall real-time pipeline latency must be in the range of a few hundred microseconds; and, considering the several high-order wavefront sensors (WFS) of the current optical design, the specifications of the hard RTC module are very close to those contemplated for ELT first-light SCAO systems, making it an at-scale pathfinder for these future facilities. In this paper, we review the hardware and software design and prototyping activities carried out during phase A of the project.
With the upcoming giant class of telescopes, Adaptive Optics (AO) has become more essential than ever to access the full potential offered by these telescopes. The complexity of such AO systems is reaching unprecedented levels, and disruptive developments will be needed to build them. One of the critical components of an AO system is the Real Time Controller (RTC), which must compute the slopes and the Deformable Mirror (DM) commands at high frequency, in a range of 0.5 to several kHz. Since the complexity of the computations involved in the RTC increases with the size of the telescope, fulfilling RTC requirements for the Extremely Large Telescope (ELT) class is a challenge. As an example, the MICADO SCAO (Single Conjugate Adaptive Optics) system requires around 1 TMAC/s from the RTC to reach sufficient performance. This complexity calls for High Performance Computing (HPC) techniques and standards, such as the use of hardware accelerators like GPUs. On top of that, building an RTC is often project-dependent, as the components and the interfaces change from one instrument to another. The COSMIC platform aims at providing a common AO RTC platform that is powerful, modular and available to the AO community. This development is a joint effort between Observatoire de Paris and the Australian National University (ANU), in collaboration with the Subaru Telescope. We focus here on the current status of the core hard real-time component of this platform. The H-RTC pipeline is composed of Business Units (BU): each BU is an independent process in charge of one particular operation, such as a Matrix Vector Multiply (MVM) or centroid computation, that can run on CPU or GPU. BUs read and write data in Shared Memory (SHM) handled by the CACAO framework. Synchronization between BUs can then be achieved either with semaphores or by busy waiting on the GPU to ensure very low jitter. The RTC pipeline can then be controlled through a Python interface. One of the key points of this architecture is that the interfaces of a BU with the various SHM segments are abstracted, so adding a new BU to the collection of available ones is straightforward. This approach yields a high-performance, scalable, modular and configurable RTC pipeline that can fit the needs of any AO system configuration. Performance has been measured on a MICADO SCAO scale RTC pipeline, with around 25,000 slopes by 5,000 actuators, on a DGX-1 system equipped with 8 Tesla V100 GPUs. The considered pipeline is composed of two BUs: the first one takes as input the raw pyramid WFS image (produced by a simulator), applies dark and flat references to it, and then extracts the useful pixels from the image; the second BU performs the MVM and the integration of the commands following a classical integrator control law. Synchronization between the BUs is performed by GPU busy waiting on the BU inputs. The measured performance shows a mean latency of up to 235 μs using 4 GPUs, with a jitter of 4.4 μs RMS and a maximum jitter of 30 μs.
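As a rough illustration of the busy-waiting synchronization and MVM step described above, the following CUDA sketch (a toy under assumed sizes and names, not the COSMIC BU implementation) spins on a frame-ready flag placed in mapped host memory, then computes the command vector as a matrix-vector product. In the real pipeline the flag would be raised by the upstream BU or the frame grabber rather than by the host thread.

// Minimal CUDA sketch (not the COSMIC BU implementation; sizes and names are
// assumptions): a "business unit" that busy-waits on a frame-ready flag in
// mapped host memory, then computes commands = M * slopes.
#include <cstdio>
#include <cuda_runtime.h>

#define N_SLOPES 1024   // assumed number of slope measurements
#define N_ACTS    512   // assumed number of actuators

// Every thread spins on the flag (volatile forces a fresh read each poll),
// then computes one row of the matrix-vector multiply.
__global__ void mvm_bu(const volatile int *frame_ready, const float *M,
                       const float *slopes, float *cmd)
{
    while (*frame_ready == 0) { /* busy wait: no interrupt, minimal jitter */ }
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < N_ACTS) {
        float acc = 0.f;
        for (int k = 0; k < N_SLOPES; ++k)
            acc += M[row * N_SLOPES + k] * slopes[k];
        cmd[row] = acc;
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);
    // The flag lives in mapped (zero-copy) host memory so that the producer --
    // here the host thread, in a real system the upstream BU or frame grabber --
    // can raise it while the kernel is already running.
    volatile int *flag_h; int *flag_d;
    cudaHostAlloc((void **)&flag_h, sizeof(int), cudaHostAllocMapped);
    *flag_h = 0;
    cudaHostGetDevicePointer((void **)&flag_d, (void *)flag_h, 0);

    float *M, *slopes, *cmd;
    cudaMallocManaged((void **)&M, N_ACTS * N_SLOPES * sizeof(float));
    cudaMallocManaged((void **)&slopes, N_SLOPES * sizeof(float));
    cudaMallocManaged((void **)&cmd, N_ACTS * sizeof(float));
    for (int i = 0; i < N_ACTS * N_SLOPES; ++i) M[i] = 1e-3f;
    for (int i = 0; i < N_SLOPES; ++i) slopes[i] = 1.f;

    mvm_bu<<<(N_ACTS + 255) / 256, 256>>>(flag_d, M, slopes, cmd);
    *flag_h = 1;                       // "frame arrived": release the kernel
    cudaDeviceSynchronize();
    printf("cmd[0] = %f (expected %f)\n", cmd[0], N_SLOPES * 1e-3f);
    return 0;
}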
In the context of the Green Flash project we have assembled a full-scale demonstrator for an E-ELT first-light AO RTC based on GPU technology. Such a facility, designed to drive the AO system in real time, is composed of a real-time core, processing streaming data from sensors and controlling deformable optics, and a supervisor module, optimizing the control loop by providing updated versions of the control matrix at a regular rate depending on the evolution of the observing conditions. This RTC prototype is designed to assess system performance in various configurations, from single conjugate AO for the E-ELT, i.e. about 10 Gb/s of streaming data from a single sensor and a required performance of about 100 GMAC/s, to the dimensioning of an MCAO system with 100 Gb/s of streaming data and a 1.5 TMAC/s performance. Both concepts rely on the same architecture, the latter being a scaled version of the former. We chose a very low-level approach using a persistent-kernel strategy on the GPUs to handle all the computation steps, including pixel calibration, slope and command vector computation. This approach simplifies latency management by reducing communication, but led us to re-implement some standard GPU features: communication mechanisms (guard, peer-to-peer), algorithms (generalized matrix-vector multiplication, reduce/all-reduce) and new synchronization mechanisms on a multi-node, multi-GPU system. In order to assess the performance of the full AO RTC prototype under realistic conditions, we have concurrently implemented a real-time simulator able to feed the real-time core with data by emulating the sensor data transfer protocols and interacting with simulated deformable optics. The real-time simulator is able to deliver high-precision simulated data and to simulate the whole retro-action loop for the SCAO case, enabling a full-scale, full-feature test of the prototype. In this paper, we report on the design and characterization of the AO RTC prototype performance in SCAO mode and discuss a strategy for its integration in tomographic AO mode.
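The persistent-kernel strategy can be sketched as follows (a minimal CUDA illustration under assumed sizes and control words, not the Green Flash code): the kernel is launched once and then loops over frames, polling a frame-ready word, performing pixel calibration and signaling completion, so that no per-frame kernel launch or interrupt is needed.

// Minimal persistent-kernel sketch (illustration of the strategy only; all
// names, sizes and control words are assumptions). A real implementation needs
// stronger host/device memory-ordering guarantees than shown here.
#include <atomic>
#include <cstdio>
#include <cuda_runtime.h>

#define N_PIX 4096                  // assumed number of useful pixels per frame

// ctrl[0]: frame-ready flag, ctrl[1]: result-ready flag, ctrl[2]: stop flag.
__global__ void persistent_calib(volatile int *ctrl, volatile float *raw,
                                 const float *dark, const float *flat,
                                 volatile float *calib)
{
    int tid = threadIdx.x;          // single-block kernel for simplicity
    while (true) {
        if (tid == 0)               // thread 0 polls the control words
            while (ctrl[0] == 0 && ctrl[2] == 0) { /* spin */ }
        __syncthreads();
        if (ctrl[2]) return;        // stop requested by the host
        for (int i = tid; i < N_PIX; i += blockDim.x)
            calib[i] = (raw[i] - dark[i]) * flat[i];   // pixel calibration
        __threadfence_system();     // make results visible to the host
        __syncthreads();
        if (tid == 0) { ctrl[0] = 0; ctrl[1] = 1; }    // acknowledge the frame
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost);
    // Frames and control words live in mapped (zero-copy) host memory so the
    // host can feed data while the kernel keeps running.
    volatile int *ctrl;  volatile float *raw, *calib;  float *dark, *flat;
    cudaHostAlloc((void **)&ctrl,  3 * sizeof(int),       cudaHostAllocMapped);
    cudaHostAlloc((void **)&raw,   N_PIX * sizeof(float), cudaHostAllocMapped);
    cudaHostAlloc((void **)&calib, N_PIX * sizeof(float), cudaHostAllocMapped);
    cudaMallocManaged((void **)&dark, N_PIX * sizeof(float));
    cudaMallocManaged((void **)&flat, N_PIX * sizeof(float));
    for (int i = 0; i < N_PIX; ++i) { dark[i] = 10.f; flat[i] = 0.5f; }
    ctrl[0] = ctrl[1] = ctrl[2] = 0;

    volatile int *ctrl_d; volatile float *raw_d, *calib_d;
    cudaHostGetDevicePointer((void **)&ctrl_d,  (void *)ctrl,  0);
    cudaHostGetDevicePointer((void **)&raw_d,   (void *)raw,   0);
    cudaHostGetDevicePointer((void **)&calib_d, (void *)calib, 0);

    persistent_calib<<<1, 256>>>(ctrl_d, raw_d, dark, flat, calib_d);

    for (int frame = 0; frame < 3; ++frame) {    // emulate three incoming frames
        for (int i = 0; i < N_PIX; ++i) raw[i] = 100.f + frame;
        std::atomic_thread_fence(std::memory_order_seq_cst);
        ctrl[0] = 1;                             // announce the frame
        while (ctrl[1] == 0) { /* host spin-waits for the result */ }
        ctrl[1] = 0;
        printf("frame %d: calib[0] = %f\n", frame, calib[0]);
    }
    ctrl[2] = 1;                                 // ask the kernel to exit
    cudaDeviceSynchronize();
    return 0;
}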
The compute and control for adaptive optics (cacao) package is an open-source modular software environment for real-time control of modern adaptive optics systems. By leveraging many-core CPU and GPU hardware, it can scale up to meet the demanding computing requirements of current and future high frame rate, high actuator count adaptive optics (AO) systems. cacao's modular design enables both simple, barebones operation and complex, full-featured AO control systems. cacao's design is centered on data streams that hold real-time data in shared memory along with a synchronization mechanism for computing processes. Users and programmers can add features by coding modules that interact with cacao's data stream format. We describe cacao's architecture and its design approach. We show that accurate timing knowledge is key to many of cacao's advanced operation modes. We discuss current and future development priorities, including support for machine learning to provide real-time optimization of complex AO systems.
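The data-stream idea can be illustrated with plain POSIX primitives (a minimal sketch, not cacao's actual ImageStreamIO format or API; the segment and semaphore names are invented): a frame lives in a shared-memory segment, and a semaphore is posted by the writer and waited on by readers.

// Minimal sketch of a shared-memory data stream with semaphore notification
// (not cacao's actual format). Link with -lrt -pthread on Linux.
#include <cstdio>
#include <fcntl.h>
#include <semaphore.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

constexpr int N_PIX = 1024;            // assumed frame size

struct Stream {                        // layout of the shared segment
    long  frame_counter;               // incremented on every write
    float pixels[N_PIX];               // the frame itself
};

int main()
{
    // Create (or attach to) the shared segment and its notification semaphore.
    int fd = shm_open("/wfs_frame", O_CREAT | O_RDWR, 0666);
    ftruncate(fd, sizeof(Stream));
    auto *s = static_cast<Stream *>(mmap(nullptr, sizeof(Stream),
                                         PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    sem_t *sem = sem_open("/wfs_frame_sem", O_CREAT, 0666, 0);

    if (fork() == 0) {                 // consumer: a downstream computing process
        sem_wait(sem);                 // block until a frame is posted
        printf("consumer: frame %ld, pixel[0] = %f\n", s->frame_counter, s->pixels[0]);
        return 0;
    }

    // Producer: write one frame, then post the semaphore to wake readers.
    for (int i = 0; i < N_PIX; ++i) s->pixels[i] = 42.f;
    s->frame_counter = 1;
    sem_post(sem);

    wait(nullptr);                     // reap the consumer
    sem_unlink("/wfs_frame_sem");
    shm_unlink("/wfs_frame");
    return 0;
}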
The Green Flash initiative responds to a critical challenge in the astronomical community: scaling up the real-time control solutions of AO instruments in operation to the specifications of the AO modules at the core of the next generation of extremely large telescopes is not a viable option. The main goal of this project is to design and build a prototype for an AO RTC targeting the E-ELT first-light AO instrumentation. We have proposed innovative technical solutions based on emerging technologies in High Performance Computing, assessed these enabling technologies through prototyping, and are now assembling a full-scale demonstrator to be validated with a simulator and eventually tested on sky. In this paper, we report on the down-selection process that led us to the final prototype architecture and on the performance of our full-scale prototype obtained with a real-time simulator.
Our team has developed a common environment for high-performance simulation and real-time control of AO systems based on the use of Graphics Processing Units (GPUs) in the context of the COMPASS project. Such a solution, relying on the ability of the simulation's real-time core to provide adequate computing performance, limits the cost of developing AO RTC systems and makes them more scalable. Code developed and validated in simulation may be injected directly into the system and tested on sky. Furthermore, the use of relatively low-cost components also offers significant advantages for the system hardware platform. However, the use of GPUs in an AO loop comes with drawbacks: the traditional way of offloading computation from the CPU to GPUs, involving multiple copies and unacceptable kernel-launch overhead, is not well suited to a real-time context. This application requires a solution enabling direct memory access (DMA) to the GPU memory from a third-party device, bypassing the operating system. This allows the device to communicate directly with the real-time core of the simulation, feeding it with the WFS camera pixel stream. We show that DMA between a custom FPGA-based frame grabber and a computation unit (GPU, FPGA, or coprocessor such as the Xeon Phi) across PCIe allows us to reach latencies compatible with what will be needed on ELTs. As a fine-grained synchronization mechanism is not yet made available by GPU vendors, we propose the use of memory polling to avoid interrupt handling and CPU involvement. Network and vision protocols are handled by the FPGA-based Network Interface Card (NIC). We present the results we obtained on a complete AO loop using camera and deformable mirror simulators.
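The polling approach can be sketched in CUDA as follows (a simplified stand-in, not the actual FPGA frame-grabber path): an asynchronous copy on a second stream plays the role of the DMA engine, the frame's end-of-transfer word is written last by issuing it as a separate, stream-ordered copy (real frame grabbers guarantee this ordering in hardware), and a kernel on the compute stream polls that word instead of waiting on an interrupt.

// Simplified stand-in for DMA-into-GPU with memory polling (illustration only;
// assumes a device with concurrent copy and compute engines).
#include <cstdio>
#include <cuda_runtime.h>

#define N_PIX 2048                  // assumed frame size; last word is the sentinel

__global__ void poll_and_sum(volatile unsigned *frame, float *out)
{
    // Poll the sentinel written last by the "DMA"; volatile forces fresh reads.
    while (frame[N_PIX - 1] == 0) { /* spin: no interrupt, no OS involvement */ }
    unsigned long long sum = 0;     // trivial stand-in for the pixel pipeline
    for (int i = 0; i < N_PIX - 1; ++i) sum += frame[i];
    *out = (float)sum;
}

int main()
{
    unsigned *frame_d;  float *out;
    cudaMalloc((void **)&frame_d, N_PIX * sizeof(unsigned));
    cudaMemset(frame_d, 0, N_PIX * sizeof(unsigned));
    cudaMallocManaged((void **)&out, sizeof(float));

    unsigned *frame_h;              // pinned host buffer = the "camera" frame
    cudaMallocHost((void **)&frame_h, N_PIX * sizeof(unsigned));
    for (int i = 0; i < N_PIX - 1; ++i) frame_h[i] = 1;
    frame_h[N_PIX - 1] = 0xFFFFFFFF;   // non-zero sentinel marks end of frame

    cudaStream_t compute, dma;
    cudaStreamCreate(&compute);
    cudaStreamCreate(&dma);

    // Launch the polling kernel first; it waits until the frame lands in GPU memory.
    poll_and_sum<<<1, 1, 0, compute>>>(frame_d, out);
    // Two stream-ordered copies stand in for the frame-grabber DMA over PCIe:
    // the pixel payload first, then the sentinel word.
    cudaMemcpyAsync(frame_d, frame_h, (N_PIX - 1) * sizeof(unsigned),
                    cudaMemcpyHostToDevice, dma);
    cudaMemcpyAsync(frame_d + N_PIX - 1, frame_h + N_PIX - 1, sizeof(unsigned),
                    cudaMemcpyHostToDevice, dma);

    cudaDeviceSynchronize();
    printf("sum of pixels = %f (expected %d)\n", *out, N_PIX - 1);
    return 0;
}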
The main goal of Green Flash is to design and build a prototype for a Real-Time Controller (RTC) targeting the European Extremely Large Telescope (E-ELT) Adaptive Optics (AO) instrumentation. The E-ELT is a 39 m diameter telescope due to see first light in the early 2020s. To build this critical component of the telescope operations, the astronomical community is facing technical challenges arising from the combination of high data-transfer bandwidth, low-latency and high-throughput requirements, similar to the critical barriers identified on the road to exascale computing. With Green Flash, we will propose technical solutions, assess these enabling technologies through prototyping and assemble a full-scale demonstrator to be validated with a simulator and tested on sky. With this R&D program, we aim to feed the preliminary design studies of the E-ELT AO systems, led by the selected first-light instrument consortia, with technological validations supporting the designs of their RTC modules. Our strategy is based on a strong interaction between academic and industrial partners. Component specifications and system requirements are derived from the AO application. Industrial partners lead the development of enabling technologies, aiming at innovative tailored solutions with a potentially wide application range. The academic partners provide the missing links in the ecosystem, targeting their application with mainstream solutions. This increases both the value and the market opportunities of the developed products. A prototype harboring all these features is used to assess the performance. It also provides the proof of concept for a resilient, modular solution to equip a large-scale European scientific facility, while containing the development cost by providing opportunities for return on investment.